text_clean: string (lengths 3 to 2.28M)
label: int64 (values 0 to 1)
This is the instructor this week, reporting this issue: in Sakai, some of the assignment scores are not making it into the Gradebook tool. Or "some don't make it to the Gradebook" is perhaps a better way to put it, as the majority do get sent. There is no consistent pattern that we can see as to why some assignment scores are pushed to the Gradebook and not others for some users. I have no way of reproducing the problem, as I don't know what causes it; we have lots of random examples. This is a difficult situation, as students rely on the Gradebook for their scores, and the scores are not always there.
1
An issue was reported that analyzing a LOAD DATA statement fails in checking access to the source file, while a Ranger HDFS policy actually exists to allow the access. Impala only loads the permissions from HDFS and checks access by itself (see the related code). When Ranger authorization is enabled, this can be wrong if the HDFS permissions are more restrictive than the Ranger policies. According to the Ranger documentation: "When the NameNode receives a user request, the Ranger plugin checks for policies set through the Ranger Policy Manager. If there are no policies authorizing the request, the Ranger plugin checks for permissions set in HDFS." We currently don't have an embedded Ranger HDFS plugin to check this locally. For a quick fix, I think that when Ranger authz is enabled we can check the access using FileSystem.access(Path path, FsAction mode) to invoke a NameNode RPC, which respects Ranger HDFS policies.
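A minimal, self-contained sketch (plain Python, not Impala or Ranger code; all names here are illustrative) of the two-layer decision the quoted Ranger documentation describes: Ranger policies are consulted first, and plain HDFS permissions are only a fallback. Checking HDFS permissions alone, as the report says Impala does, produces the wrong answer in exactly this case.

```python
def can_access(path, user, action, ranger_policies, hdfs_perms):
    """ranger_policies: set of (path, user, action) triples explicitly allowed.
    hdfs_perms: dict mapping (path, user) to the set of actions HDFS allows."""
    if (path, user, action) in ranger_policies:
        return True  # a Ranger policy authorizes the request; HDFS perms are irrelevant
    # Fallback: only consult HDFS permissions when no Ranger policy matched.
    return action in hdfs_perms.get((path, user), set())

# A Ranger policy allows "etl" to read, but the HDFS bits alone would deny it.
ranger = {("/data/load", "etl", "read")}
hdfs = {("/data/load", "etl"): set()}

assert can_access("/data/load", "etl", "read", ranger, hdfs) is True
assert can_access("/data/load", "other", "read", ranger, hdfs) is False
```

This is why delegating the check to the NameNode (e.g. via a FileSystem.access() RPC) fixes the report: the NameNode-side Ranger plugin applies the policy layer that a local HDFS-permission check never sees.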
1
When the platform integration gets created, or its keys are queried, in QPlatformIntegrationFactory::create() and keys(), QCoreApplication::addLibraryPath() gets called. This call will invoke QFactoryLoader::update(), pulling in all the other available plugins too. This is obviously undesirable: the application may not need any of the plugins, or a plugin might get loaded too early, when the application is not fully constructed yet. The problem is either on the platform integration side, as it calls addLibraryPath(), or in QFactoryLoader itself, because it probably shouldn't start loading the plugins when you add a library path to it.
1
In the affected version, the UserLister plugin does not highlight which users are actually logged in. It does list the correct user list for the group, but fails to highlight the users currently logged in.
0
Mix-up on the WS-Security policy namespace in generated stubs. Workaround: perform a find-and-replace on the incorrect namespaces, replacing them with the correct ones respectively.
1
Unable to access.
1
We should somehow deploy the newest docs to the "latest" directory, or do some redirect. I think the easiest way to handle this would be to have another manual step in the docs build that would additionally push the docs pushed in the previous step to the "latest" directory.
0
Goal: as an OSD/ROSA dedicated-admin, I want to enable my team to use Tekton Pipelines by installing the OpenShift Pipelines operator through the OperatorHub, and to configure the operator based on my own requirements. The access model on OSD and ROSA entails that customers (dedicated-admin) do not have read or write access to layered products' namespaces, e.g. the openshift-pipelines namespace. The consequence is that while dedicated-admin can install the operator on their own from OperatorHub, they are not able to make any modifications in the layered products' namespaces (openshift-pipelines). Acceptance criteria: OSD and ROSA customers can install and configure the OpenShift Pipelines operator without having access to the openshift-pipelines namespace.
1
Updated Oracle PDF dumps: exam dumps by professionals. For anyone who is an Oracle Utilities Customer Cloud aspirant and thinks that the Oracle certification exam is one of the toughest tasks to attain, you might not have the right information on how to prepare for the Oracle exam questions. You need to get the most updated and valid Oracle PDF dumps for the preparation of the Oracle new questions. A valid exam dump in your hands for the preparation of the Oracle new questions will most certainly set you on the path to achievement in the Oracle new questions. Valid Oracle dumps PDF from OracleDumpsFree: however, the true query is how you can get valid Oracle exam dumps for the preparation of the new questions. The Oracle PDF questions provided by OracleDumpsFree are among the finest and most verified modes of preparation for the Oracle Utilities Customer Cloud Service Implementation Essentials exam. Among the best advantages of acquiring these Oracle exam questions is that you can have the Oracle braindumps in PDF format; not only this, you can also download the demo version of the Oracle dumps PDF. Three months of totally free updates on the Oracle exam questions: the Oracle exam dumps provided by OracleDumpsFree can also be obtained with the chance of keeping yourself updated with all of the new updates on the Oracle exam, because the team of Oracle professionals is active and keeps a keen eye on all the alterations to the Oracle PDF questions. You can have these updates for free for the stated number of days, which makes it even easier for you to get on track for success in the Oracle new questions. Clear all of your doubts with the Oracle PDF questions: you can also get the Oracle exam questions with the opportunity of customer support; in case you feel any kind of stress with the Oracle PDF dumps, you can clear all of your doubts by getting in touch with the team of Oracle specialists, who are there to assist you with any kind of problem with the braindumps. In brief, the Oracle PDF dumps from OracleDumpsFree are a great selection for the preparation of the Oracle exam new questions.
0
Hi, I noticed that there is no activity record logged for documents excluded by the Document Filter transformation connector in the Web Crawler connector. To reproduce the issue on MCF out of the box: Null output connector, Web repository connector, a job with a Document Filter added which only accepts application/msword (doc/docx) documents. The Simple History does not mention the documents excluded, except for HTML documents: they have a "fetch" activity and that's all (see simplehistoryweb.jpeg). We can only see the documents excluded via the MCF log with debug verbosity activated on connectors:

{code:java}
Removing url because it had the wrong content type ('image/png')
{code}

(see the manifoldcf local files log). The related code is in WebcrawlerConnector.java:

{code:java}
fetchStatus.contextMessage = "it had the wrong content type ('" + contentType + "')";
fetchStatus.resultSignal = RESULT_NO_DOCUMENT;
activityResultCode = null;
{code}

The activityResultCode is null. If we configure the same job but for a local File System connector with the same Document Filter transformation connector, the Simple History mentions all the documents excluded (see simplehistoryfiles.jpeg), and the code uses a specific error code, with an activity record logged (class FileConnector):

{code:java}
if (!activities.checkMimeTypeIndexable(mimeType))
{
  errorCode = activities.EXCLUDED_MIMETYPE;
  errorDesc = "Excluded because mime type ('" + mimeType + "')";
  Logging.connectors.debug("Skipping file '" + documentIdentifier + "' because mime type '" + mimeType + "' was excluded by output connector.");
  activities.noDocument(documentIdentifier, versionString);
  continue;
}
{code}

So the Web Crawler connector should have the same behaviour as FileConnector and explicitly mention all the documents excluded by the user, I think. Best regards, Olivier
0
Using Array.prototype.forEach on a Qt list does not give a stable result. The following test passes most of the time on Windows, but fails randomly; if I replace forEach with a for loop, it always passes:

{code}
import QtQuick
import QtTest

Column {
    id: column
    Text {} Text {} Text {} Text {} Text {}
    TestCase {
        name: "ArrayPrototypeForEachOnList"
        function test_forEach() {
            var count = 0
            Array.prototype.forEach.call(column.children, function(child) { count++ })
            compare(count, /* expected child count */ column.children.length)
        }
    }
}
{code}

In addition, there also seem to be platform differences: while the test above runs fine on Mac OS, the following program always logs the wrong count to the console when I click into its window on Mac OS, whereas it correctly logs the count on Windows:

{code}
import QtQuick

Item {
    Column {
        id: item
        Text { text: "hello world" }
        Text { text: "hello world" }
        Text { text: "hello world" }
    }
    MouseArea {
        anchors.fill: parent
        onClicked: {
            var count = 0
            Array.prototype.forEach.call(item.children, function(child) { count++ })
            console.log("count", count)
        }
    }
}
{code}
1
For JBIDE, please perform the following. If nothing has changed in your component since the previous release (e.g. xulrunner, gwt, freemarker, birt): {color:red}reject this{color}. Make sure your component has no remaining unresolved JIRAs set for this fixVersion. Ensure your component's features/plugins have been properly upversioned. Note: if you already did this for the previous milestone, you do not need to do so again:
{code}
mvn -Dtycho.mode=maven ...
{code}
Update your root pom to use the new parent pom version (org.jboss.tools parent). Ensure you've built and run your plugin tests using the latest target platform version: mvn clean verify if the TP is already released, or mvn clean verify against the staged TP if it is still being finalized. Branch from your existing master branch into a new branch:
{code}
git checkout master
git pull origin master
git checkout -b <new-branch>
git push origin <new-branch>
{code}
Close (do not resolve) this JIRA when done. Search for all task JIRAs, or search for the base task JIRA.
1
The adapter name for EAP is wrong: it should contain the major.minor version of the product.
1
Update the library by building against FreeType.
0
Currently the logic to archive edit logs is file-specific, which presents some issues for Ivan's work, since it relies on inspecting storage directories using NNStorage.inspectStorageDirs. It also misses directories that the image layer considers failed, which results in edit logs piling up in those kinds of directories. This JIRA is similar to an existing one, but only deals with archival for now.
0
RPMs: see version details. Cluster: servers and clients on wolf nodes. While trying to run the standard soak IOR command line with oclass, some jobs would hang after they appeared to be completed in the IOR logs:

{noformat}
job name: ior_harasser (MPI coordinated test of parallel I/O)
command line: /usr/bin/ior -a DFS -b ... -v -w -W -r -R -t ... -i ... -s ... -o testfile
  --dfs.chunk_size ... --dfs.cont ... --dfs.dir_oclass ... --dfs.group daos_server
  --dfs.oclass ... --dfs.pool ... --dfs.svcl ...
machine: Linux
DFS container namespace uuid / pool uuid / svcl ...
participating tasks / options: api DFS, access single-shared-file, type independent,
  ordering in a file sequential, ordering inter file no, repetitions, xfer size, block size,
  aggregate file size, verbose
access  bw(MiB/s)  IOPS  latency(s)  block(KiB)  xfer(KiB)  opens  wr/rds  closes  totals  iter
Commencing write performance test ... write
Verifying contents of the files just written
Commencing read performance test ... read ... remove
(the write/verify/read/remove cycle repeats for each iteration)
Max Write: ... MiB/sec; Max Read: ... MiB/sec
{noformat}

It would hang at the above. I did a bt on the hung IOR processes. One MPI rank:

{noformat}
(gdb) bt
in clock_gettime
in ofi_gettime_ns
in ofi_gettime_ms
in sock_cq_sreadfrom
in na_ofi_progress
in na_progress
in hg_core_progress_na
in hg_core_progress
in hg_core_progress
in hg_progress
in crt_hg_progress
in crt_progress_cond
in daos_event_priv_wait
in dc_task_schedule
in daos_cont_close
in dfs_finalize
in ior_main
in __libc_start_main
in _start
{noformat}

Another MPI rank:

{noformat}
(gdb) bt
in poll
in MPID_nem_tcp_connpoll
in MPIR_Wait_state
in MPIR_Wait_impl
in MPIC_Wait
in MPIC_Sendrecv
in MPIR_Barrier_intra_dissemination
in MPIR_Barrier_impl
in PMPI_Barrier
in dfs_finalize
in ior_main
in __libc_start_main
in _start
{noformat}

I reverted to the version of RPMs that I used over the weekend, and the jobs no longer hung.
1
Sqoop's test suite is heavily biased toward testing with the HSQLDB embedded database. This issue proposes to refactor some tests into abstract classes which can be used as a basis for testing a variety of ConnectionManager implementations, ensuring better cross-database compatibility coverage.
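A hedged sketch of the refactoring pattern proposed above, in Python terms (the class and method names here are illustrative, not Sqoop's actual classes): an abstract base encodes the database-agnostic scenarios once, and each connection-manager implementation supplies only its own connection factory.

```python
import abc
import sqlite3
import unittest

class BaseManagerTest(abc.ABC):
    """Database-agnostic test scenarios; subclasses bind a concrete database."""

    @abc.abstractmethod
    def connect(self):
        """Return a DB-API connection to the database under test."""

    def test_round_trip(self):
        # The same scenario runs unchanged against every bound database.
        conn = self.connect()
        conn.execute("CREATE TABLE t (id INTEGER)")
        conn.execute("INSERT INTO t VALUES (1)")
        rows = list(conn.execute("SELECT id FROM t"))
        assert rows == [(1,)]

class SqliteManagerTest(BaseManagerTest, unittest.TestCase):
    """One concrete binding; an HSQLDB/MySQL/Oracle variant would subclass too."""
    def connect(self):
        return sqlite3.connect(":memory:")

# Run the shared scenario against the SQLite binding.
SqliteManagerTest("test_round_trip").test_round_trip()
```

The base class itself is never collected as a test, so only concrete bindings execute; adding coverage for a new database means writing one small subclass rather than duplicating every scenario.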
0
Gliffy for Confluence is ready for deployment to OnDemand (see here; OBR available here). Notable bugfixes and improvements:
- high-resolution image export
- shapes as links
- Android shapes
- improved network shapes
- diagram version pinning: specify the published version of a diagram using this new feature; when editing a Confluence page, select the Gliffy diagram macro and click the version button to specify the published (pinned) version of a diagram
- fullscreen viewer improvements
- shape search improvements
- fix for a Firefox JS IndexSizeError
- fix for link color rendering on legacy diagrams
- numerous other bug fixes and stability improvements
- some resources served from a CDN when in OnDemand (Cloud)
- diagram import improvements
- fullscreen viewer security vulnerability fix
- Confluence attachment macro button fix
1
After updating to nb-javac, when I try to open a class in a binary jar file, it often fails with an exception like this: java.lang.ClassCastException: com.sun.tools.javac.code.Type$ClassType cannot be cast to com.sun.tools.javac.code.Type$ErrorType. This happens whenever the class references a class in a different jar. For example: create a Maven project, add org.apache.httpcomponents:httpclient (but don't download the sources), and open the HttpClient class; it fails because of the various classes referenced from the httpcore dependency. It looks like this started once a certain change was merged into nb-javac: TreeFactory's assumption that getKind() == ERROR implies ErrorType is no longer true.
0
I've verified that they are broken in my development environment too:
{noformat}
git checkout ...
mvn -Dtest=MetaSplitTest,ShellServerTest package -DfailIfNoTests=false
{noformat}
1
Seam CVS.
0
ResponseCachingPolicy currently uses integers for interpreting the size of Content-Length, as well as internally. This causes issues when attempting to use the module for caching entities whose size does not fit in an int: the module does not fail gracefully, but throws a NumberFormatException. I have a patch that fixes this by promoting the int to a long, which should allow larger entities to be cached. It also updates the public-facing API where possible; I don't think that the promotion should break compatibility massively. The changes can also be seen here.
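A small Python stand-in for the Java types involved (the function name is hypothetical), illustrating why parsing a large Content-Length into a 32-bit int fails while a 64-bit long handles it, which is exactly the int-to-long promotion the patch makes:

```python
INT_MAX = 2**31 - 1    # upper bound of a Java int
LONG_MAX = 2**63 - 1   # upper bound of a Java long

def parse_len(value, max_value):
    """Parse a Content-Length header, rejecting values the target type cannot hold
    (Java's Integer.parseInt throws NumberFormatException for out-of-range input)."""
    n = int(value)
    if n > max_value:
        raise ValueError(f"{n} does not fit in the target type")
    return n

big = "3000000000"  # an entity of roughly 3 GB, larger than Integer.MAX_VALUE
try:
    parse_len(big, INT_MAX)   # int-based parsing: fails, as the report describes
    int_ok = True
except ValueError:
    int_ok = False

assert int_ok is False
assert parse_len(big, LONG_MAX) == 3_000_000_000  # long-based parsing succeeds
```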
0
Just for curiosity, I tried TomEE while upgrading XBean. I found out that the java.security.acl package is removed, and java.security.acl.Group is referenced in AbstractSecurityService. I don't know whether it is used somewhere else. I've tried to patch it, making AbstractSecurityService.Group implement Principal instead of java.security.acl.Group, and everything works, at least in my environment, but I don't know about other setups.
1
This bug is caused by the improvements which fix an issue with stream-stream left/outer joins. The issue only occurs when a stream-stream left/outer join is used with the new JoinWindows.ofTimeDifferenceAndGrace() API, which specifies the window time plus the grace period. This new API was added recently in AK; no previous users are affected. The issue causes the internal changelog topic used by the new outer-shared window store to keep growing unbounded as new records come in. The topic is never cleaned up nor compacted, even if tombstones are written to delete the joined and/or expired records from the window store. The problem is caused by a parameter required in the window store to retain duplicates. This config causes tombstone records to have a new sequence ID as part of the key ID in the changelog, making those keys unique and thus preventing the cleanup policy from working. We deprecated JoinWindows.of(size) in favor of JoinWindows.ofTimeDifferenceAndGrace(). The old API uses the old semantics and is thus not affected, while the new API enables the new semantics. The problem is that we deprecated the old API, and thus tell users that they should switch to the new, broken API. We have two ways forward: fix the bug (non-trivial), or un-deprecate the old JoinWindows.of() API and tell users not to use the new but broken API.
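A minimal sketch (plain Python, not Kafka code) of why the retained-duplicate sequence IDs defeat compaction: compaction keeps only the latest value per key, but a sequence-suffixed key never repeats, so a tombstone written under a fresh sequence number cannot shadow the original record.

```python
def compact(changelog):
    """Simulate topic compaction: the last value per key wins; None (tombstone) deletes."""
    latest = {}
    for key, value in changelog:
        latest[key] = value
    return {k: v for k, v in latest.items() if v is not None}

# Plain keys: the tombstone shares the record's key, so compaction removes it.
plain = [("k1", "join-result"), ("k1", None)]
assert compact(plain) == {}

# retainDuplicates-style keys carry a unique sequence number, so the tombstone
# lands under a different key and the original record is never cleaned up.
seq = [(("k1", 0), "join-result"), (("k1", 1), None)]
assert compact(seq) == {("k1", 0): "join-result"}
```

This is the mechanism behind the unbounded changelog growth the report describes: every record, including its own deletion, occupies a distinct key.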
1
Disallow descending=true for the continuous changes feed: you don't get updates past the seq of the DB at the point when you started. Either ignore the option or throw, with a stated preference for one of the two.
0
I'm using awesomeWM, and whenever I restart it, only tray icons which use Qt disappear; tray icons using GTK don't. A bug report was filed on awesomeWM's side as well. I've found a similar bug report related to org.kde.StatusNotifierItem, but it seems not relevant, because awesomeWM implements the XEmbed protocol.
1
The API appears to behave improperly when either the until or since param can't be resolved. It should probably return an error; instead it returns the entire set of commits in the repo's history.
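A hedged sketch of the behaviour the report asks for (the function and parameter names are hypothetical, not the actual API's code): when `since` or `until` cannot be resolved, raise an error that maps to an HTTP 4xx instead of silently falling back to the full history.

```python
def list_commits(history, refs, since=None, until=None):
    """history: oldest-to-newest list of commit ids; refs: name -> commit id."""
    def resolve(name):
        if name is None:
            return None
        if name not in refs:
            # The buggy behaviour ignored this case and returned all of `history`.
            raise ValueError(f"unresolvable ref: {name}")  # maps to an HTTP 4xx
        return history.index(refs[name])

    lo = resolve(since)
    hi = resolve(until)
    start = lo + 1 if lo is not None else 0
    end = hi + 1 if hi is not None else len(history)
    return history[start:end]

history = ["a", "b", "c", "d"]
refs = {"v1": "b", "v2": "d"}

assert list_commits(history, refs, since="v1", until="v2") == ["c", "d"]
try:
    list_commits(history, refs, since="nope")
except ValueError:
    pass  # unresolvable ref rejected instead of returning the whole history
```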
0
SourceTree crashes immediately after I press the Open button in the file-selection dialog for the Apply Patch action. The log and the patch files are attached to this issue report. The repository is attached as well.
1
{code}
Failed to execute goal (default-cli) on project test: Unable to resolve artifacts: java.util.NoSuchElementException
  role: org.apache.maven.shared.transfer.dependencies.collect.DependencyCollector
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal (default-cli) on project test: Unable to resolve artifacts
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute
    at org.apache.maven.DefaultMaven.doExecute
    at org.apache.maven.DefaultMaven.execute
    at org.apache.maven.cli.MavenCli.execute
    at org.apache.maven.cli.MavenCli.doMain
    at org.apache.maven.cli.MavenCli.main
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode
    at org.codehaus.plexus.classworlds.launcher.Launcher.main
Caused by: org.apache.maven.plugin.MojoExecutionException: Unable to resolve artifacts
    at org.apache.maven.plugins.dependency.resolvers.ListRepositoriesMojo.doExecute
    at org.apache.maven.plugins.dependency.AbstractDependencyMojo.execute
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo
    ...
Caused by: org.apache.maven.shared.transfer.dependencies.collect.DependencyCollectorException: java.util.NoSuchElementException
    at org.apache.maven.shared.transfer.dependencies.collect.internal.DefaultDependencyCollector.collectDependencies
    ...
Caused by: org.codehaus.plexus.component.repository.exception.ComponentLookupException: java.util.NoSuchElementException
  role: org.apache.maven.shared.transfer.dependencies.collect.DependencyCollector
    at org.codehaus.plexus.DefaultPlexusContainer.lookup
    ...
Caused by: java.util.NoSuchElementException
    at org.eclipse.sisu.plexus.RealmFilteredBeans$FilteredItr.next
    at org.eclipse.sisu.plexus.DefaultPlexusBeans$Itr.next
    at org.codehaus.plexus.DefaultPlexusContainer.lookup
    ...
{code}
1
Pick a rental item in ecommerce and try to enter its start or end date: the calendar is not displayed the way it should be. Also, the hours, minutes, and seconds are not set to zero; as far as I recall, they should be. Valentina
0
If HA is configured with a replicated journal, then it takes some time for the backup to synchronize with the live server. Once the backup is in sync with the live server, the following information appears in the log:
{code}
INFO backup server is synchronized with live
INFO backup announced
{code}
Reading server logs to see whether the backup is in sync is not a convenient and user-friendly approach. We should provide a public API to check the state of synchronization. It should be added to the Activation interface so it can be checked in the CLI in EAP. This method should be implemented for SharedNothingBackupActivation and also SharedNothingLiveActivation.
1
Hi, I want to integrate NPanday into our build process, but I have problems using it under Linux. The problem seems to be in the DotnetExecutable component, which tries to do some strange escaping and finally fails, since the runtime implementation parses the command using StringTokenizer, which does not respect any escaping when splitting arguments. So /bin/sh -c "gmcs /path/to/my/repos ..." is actually split into four parameters: /bin/sh, -c, "gmcs, and the rest, which cannot be interpreted by the shell. Probably you should let the Plexus Commandline/Shell take care of the escaping. I did not try this on Windows yet, but for Linux I provide a patch that works for me, at least to compile my modules.
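A sketch of the splitting problem described above, in Python terms (the file path is made up for illustration): naive whitespace splitting, which is what StringTokenizer does, shreds a quoted shell argument, while a shell-aware splitter keeps it intact.

```python
import shlex

cmd = '/bin/sh -c "gmcs /path/to/file.cs -out:app.exe"'

naive = cmd.split()  # StringTokenizer-style: splits on whitespace, ignores quoting
assert naive[:3] == ['/bin/sh', '-c', '"gmcs']  # the -c argument is broken apart

proper = shlex.split(cmd)  # respects shell quoting rules
assert proper == ['/bin/sh', '-c', 'gmcs /path/to/file.cs -out:app.exe']
```

The fix the reporter suggests is analogous: delegate the tokenization to a component that understands shell quoting (Plexus Commandline) rather than splitting on whitespace.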
1
OS: Ubuntu, German locale, GNOME session; UHD resolution with GTK scaling in effect. When I try to run any Qt-based application on my UHD display under a fresh Ubuntu install, the app does not scale at all. KDE Plasma sets QT_SCREEN_SCALE_FACTORS, but GNOME upstream thinks it shouldn't.
0
Follow-on issue from a previous ticket: the tracing spans all appear to have span kind CLIENT.
0
The following choices make the examples not streaming: many of the streaming examples currently use a small bounded data set; the data set is immediately consumed and the program finishes; many examples read text or CSV input files. I suggest reworking the examples to use not static data sets and files, but infinite streams and throttled generated streams via iterators. Command-line parameters can be used to influence their behavior.
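A sketch of the suggested rework (illustrative, not the project's actual examples): replace a small bounded data set with an infinite, throttled generator, with the emission rate controllable via a parameter, exactly the kind of knob a command-line flag would drive.

```python
import itertools
import time

def throttled_numbers(delay_s=0.0):
    """Infinite source of events; `delay_s` throttles the emission rate."""
    for i in itertools.count():
        if delay_s:
            time.sleep(delay_s)
        yield i

# A consumer built on this source never runs out of input, yet it can still
# be exercised in a test by taking a finite slice of the stream.
first_five = list(itertools.islice(throttled_numbers(), 5))
assert first_five == [0, 1, 2, 3, 4]
```

The design point is that boundedness moves out of the example and into the consumer: the pipeline logic sees an unbounded stream, which is what makes the example genuinely streaming.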
0
The Greenhouse app starts and lets the user log in. Once the OAuth process is complete, the app displays the events-list screen with an activity spinner that never goes away.
1
Related issues: this may be related to converting the message format. To generate this problem: create a new port and assign it another port number; for example, set the security provider to anonymous and create a provider for it; create a queue "examples"; send a few messages to "examples" over the AMQP port; try to read from it; the broker will crash and exit. In my case the writing was done from Java and the reading from Python. This code can be used (for Maven: org.apache.qpid : qpid-jms-client):

{code:java}
import java.util.Properties;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.Destination;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class Hello {
    private static final String USER = "guest";
    private static final String PASSWORD = "guest";

    public static void main(String[] args) {
        try {
            Properties prop = new Properties();
            prop.put("java.naming.factory.initial",
                    "org.apache.qpid.jms.jndi.JmsInitialContextFactory");
            prop.put("connectionfactory.localhost", "..."); // broker URL lost from the report
            prop.put("queue.myqueue", "examples");
            Context context = new InitialContext(prop);
            ConnectionFactory factory = (ConnectionFactory) context.lookup("localhost");
            Destination queue = (Destination) context.lookup("myqueue");
            Connection connection = factory.createConnection(USER, PASSWORD);
            connection.setExceptionListener(new MyExceptionListener());
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            for (int i = 0; i < n; i++) { // n: message count lost from the report
                MessageProducer producer = session.createProducer(queue);
                TextMessage message = session.createTextMessage("hello world from java " + i);
                producer.send(message, DeliveryMode.NON_PERSISTENT,
                        Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE);
            }
            connection.close();
        } catch (Exception exp) {
            System.out.println("caught exception, exiting");
            exp.printStackTrace();
        }
    }

    private static class MyExceptionListener implements ExceptionListener {
        public void onException(JMSException exception) {
            System.out.println("connection ExceptionListener fired, exiting");
            exception.printStackTrace();
        }
    }
}
{code}

{code}
#!/usr/bin/env python
from qpid.messaging import *

url = "..."  # broker URL lost from the report
queue = "examples"
connection = Connection(url)
try:
    connection.open()
    session = connection.session()
    receiver = session.receiver(queue)
    while True:
        message = receiver.fetch()
        print "got message"
        print message.content
        session.acknowledge()
except MessagingError, m:
    print "exception", m
connection.close()
{code}

Note: if the queue is empty and we try to read, no writing can be done; the Java client will throw an exception:

{noformat}
javax.jms.JMSException: java.lang.NullPointerException
{noformat}

If the queue has messages, the broker will crash when we consume messages from Python. The following trace is generated with debug enabled:

{noformat}
DEBUG transport.Connection recv unit message
DEBUG transport.Session identify / processed
DEBUG transport.Connection flush
Unhandled exception java.lang.NullPointerException in thread, exiting
ERROR server.Main uncaught exception, shutting down: java.lang.NullPointerException
INFO SubscriptionState state suspended
{noformat}

The receiver produces this error (Python):

{noformat}
exception connection aborted
Traceback (most recent call last):
  File "examples/api/hello", in <module>
    connection.close()
  ...
qpid.messaging.exceptions.ConnectionError: connection aborted
{noformat}
1
Functional description: improve logging so that database records will now include the calculated grade in addition to the existing letter grade.

Original message:
From: Kirk Alexander
Sent: Friday, October, PM
To: Thomas Amsler; Joyce Johnstone; ucd-sakai-dev@smartsite.ucdavis.edu
Subject: GB logging

It looks like we save the export EID, the user ID, student ID, and letter grade in the log tables, but not the calculated grade. Let's make a JIRA ASAP to add this to the log. Here is a sample of what I see: actionRecordId, propertyName, export_cmid, export_userid, letterGrade = A
1
propsCreatorUpl and propsCreator in content/content-bundle/types.properties should take word order into account, e.g. propsCreator = "created by" / "originally uploaded by". In the Japanese case, the correct word order places the name before "created". Best regards, Shoji
1
note this bug report is for confluence cloud using confluence server see the corresponding bug report panelwhen a user move issues on a jira calendar those changes are not being replicated to other users in a timely manner how to replicate create a jira calendar inside a page in confluence with open that same page with move issues on that calendar with refresh calendar with no changes can be seen add or remove due date to an issue in jira that change does not get replicated to the calendarif you move issues with wont see those changes promptly as well even if on jira the due dates are correct this same behavior applies to issues that had a resolution set and should be grayed out some users see it some users dontalso when setting due date on an issue that issue doesnt show up on the users calendar as well but if a new calendar is created the same shows up this can lead to serious problems as users can edit calendars at the same time and completely lost track of what is actually happening to the due dates of their jira issues
1
start with clean browser data no local storage go to rapid board should be on work tab select an issue click on report mode quickly navigate twice back with browser quickly navigate twice forward with browseryou should see a bunch of errors to do with attr being undefined the errors keep appearing at a regular frequency the page grows with errors
1
working in client mode axiomnodetostring doesnt return elements containing special latin characters the tags received containing special latin characters are returned empty by this function result obtained with this obtained with ok server applicationsoapxml action año contable está cerrado thanks
1
this was originally reported against as attached sample project builds using maven but fails with using snapshot svn rev x clean packageapache maven version home locale enca platform encoding name linux version arch family unix error stacktraces are turned on scanning for projects created new class realm included excluded excluded excluded included excluded included excluded excluded excluded excluded excluded excluded excluded excluded included included included included included included included included included included included included included included included included included included included included included included included included included included failed to lookup a member of active collection with role orgapachemavenlifecyclemappinglifecyclemapping and rolehint bundlethis realm strategy orgcodehausplexusclassworldsstrategyselffirststrategyurls of foreign imports entryimport entryimport entryimport entrynumber of parent imports entryimport entryimport entryimport entryimport entryimport entryimport entryimport entryimport entryimport entryimport entryimport entryimport entryimport entryimport entryimport entryimport entryimport entryimport entrythis realm plexuscorethis strategy orgcodehausplexusclassworldsstrategyselffirststrategyurls of foreign imports unable to lookup component orgapachemavenlifecyclemappinglifecyclemapping it could not be started role orgapachemavenlifecyclemappinglifecyclemapping rolehint bundleclassrealm at at at at at at at at at at at at at at at at at method at at at at at at at by orgcodehausplexuscomponentrepositoryexceptioncomponentlifecycleexception error constructing component role orgapachemavenlifecyclemappinglifecyclemapping implementation orgapachemavenlifecyclemappingdefaultlifecyclemapping role hint bundle at at at at at morecaused by orgapachexbeanrecipeconstructionexception unable to convert property value from orgcodehausplexuscomponentbuilderxbeancomponentbuilderplexusconfigurationrecipe to javautillist 
for injection private javautillist orgapachemavenlifecyclemappingdefaultlifecyclemappinglifecycles at at at at at at at morecaused by orgapachexbeanrecipeconstructionexception unable to convert configuration for property lifecycles to javautillist at at at at more some problems were encountered while processing the poms unknown packaging bundle tmpbundletestpomxmlorgapachemavenprojectprojectbuildingexception some problems were encountered while processing the poms unknown packaging bundle tmpbundletestpomxml at at at at at at at at method at at at at at at at
0
problematst plugin gives false alarm for jvm code cache error utilizing the flagsxxprintcodecacheoncompilation xxprintcodecache xxusecodecacheflushing it can clearly be seen that the code cache is no where near filled up but the healthcheck reports the warning up until the version this false warning is not notethe health check also checks the catalinaout log for the codecache is full compiler has been disabled message it will raise the warning in case those messages are found
1
noformat β€’ failure channelparticipation three node etcdraft network with a system channel joins channels using the legacy channel creation mechanism and then removes the system channel to transition to the channel participation api timed out after expected name testchannel url status active clusterrelation member height to equal name testchannel url status inactive clusterrelation configtracker height noformat
1
the hbase rest server component is not visible on the hbase summary page after a stack upgrade from bi to hdp str install bi with hbase on ambari upgrade to ambari
1
simpledb has a dependency on bdb we should rewrite this using dbm ndbm whatever is available in libc
1
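The record above asks to drop the bdb dependency and use whatever dbm variant libc provides. As a minimal sketch of that approach, Python's stdlib `dbm` module does exactly this backend selection (ndbm/gdbm, with a pure-Python fallback):

```python
import dbm
import os
import tempfile

# Open (and create) a simple key-value store; dbm.open picks whichever
# backend libc provides (ndbm/gdbm) or falls back to dbm.dumb.
path = os.path.join(tempfile.mkdtemp(), "simpledb")
with dbm.open(path, "c") as db:
    db[b"alpha"] = b"1"
    db[b"beta"] = b"2"

# Reopen read-only and read everything back.
with dbm.open(path, "r") as db:
    values = {bytes(k): bytes(db[k]) for k in db.keys()}

print(sorted(values))
```

The same flag-based open ("c" to create, "r" for read-only) maps onto the ndbm API, so a C rewrite would follow the same shape.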
i need to output some information about current item of listview separately for example in outside label but i cant succeed i figured out that problem related to where model was defined if it is separate object with id this behavior happened but when i define it inlined inside the view all works fine this behaviour is very weird and looks like a bug to make attached code works just switch to commented model definition
0
when a process invokes another process theres currently no invoke check performed afterwhile to force the into activity recovery if no reply comes backneed to add the invoke check
0
created a table with columns notification message from hive hooks is about and the hook sends the message in compressed format however processing of this notification fails in atlas server due to the following error noformat warn audit record too long entitytypehivetable entity attribute values not stored in audit error graph rollback due to exception javalangillegalargumentexception keyvalue size too large at at at at orgapacheatlasrepositoryaudithbasebasedauditrepositoryputeve noformat this issue was addressed in earlier releases via the fix needs to be reviewedupdated for recent additions in master ie addition of relationshipattributes
1
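The record above logs "attribute values not stored in audit" when an entry exceeds the HBase KeyValue size limit. A hedged sketch of that mitigation (threshold and field names are hypothetical, not Atlas's real implementation): keep the audit event but drop oversized attribute values instead of failing the put.

```python
# Hypothetical mitigation for the 'audit record too long' warning:
# strip large attribute values from the audit entry rather than
# failing the HBase put with 'KeyValue size too large'.
MAX_AUDIT_BYTES = 1024  # illustrative threshold only

def build_audit_entry(entity_type, attributes, max_bytes=MAX_AUDIT_BYTES):
    entry = {"entityType": entity_type, "attributes": dict(attributes)}
    if sum(len(str(v)) for v in attributes.values()) > max_bytes:
        # Too big for one KeyValue: keep the event, drop the values.
        entry["attributes"] = {}
        entry["note"] = "attribute values not stored in audit"
    return entry

small = build_audit_entry("hive_table", {"name": "t1"})
big = build_audit_entry("hive_table", {"ddl": "x" * 10_000})
print(small["attributes"], big.get("note"))
```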
hello my name is jonathan su and im from a software company called cloudwords we are an enterprise software platform for the translations space part of our service includes an api for our customers to simplify the usage we provide a java sdk that users can use that will act as a wrapper to make requests to our api our customers can use or modify this sdk as they wish our license is available here we want to publish this bundle as a maven dependency so that java developers can easily access what they need thank you jonathan
0
the permsall definition in java is permsall and does not include admin perms but in c the permsall def includes the admin perms we should make it consistent to include or not include the admin perms in both c and java
1
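The record above describes the same symbolic constant granting different rights in two language bindings. A small sketch with hypothetical flag values makes the inconsistency concrete:

```python
# Hypothetical permission flags illustrating the mismatch: one binding
# defines PERMS_ALL without the admin bit, the other includes it.
READ, WRITE, CREATE, DELETE, ADMIN = 1, 2, 4, 8, 16

PERMS_ALL_JAVA = READ | WRITE | CREATE | DELETE          # no admin
PERMS_ALL_C = READ | WRITE | CREATE | DELETE | ADMIN     # includes admin

# The bug: identical names, different effective grants.
print(PERMS_ALL_JAVA == PERMS_ALL_C)
```

Whichever definition is chosen, both bindings should derive it from the same bit set so the check above prints True.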
this one were after resizing the tree horizontallyjavalangillegalargumentexception argument not valid at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at method at at at at at at at
1
switching assignments in gradebook does not always reread the stored data this can result in confusion in the display on the left as well as the wrong value being stored for an item when the save button is pressed it looks like if i am working on assignments without equal weights then i can confuse the form into both displaying the wrong of category and saving the wrong onto one of the items choose set category to press enter choose weight is sometimes carried into title is refreshed but weight comes from previous screen not from existing weight save change weight in enter step go to go back to save weight entered in step now gets saved to says looks to me like whats happening here is just that the text box is preserving its state when you click enter so the field value is now the code is supposed to refresh the field value from the data model each time you navigate between edit objects but for some reason its not in this case
1
there was a test case that looked like the last should have been but the compiler did not report that an stopped processing tags after the closing tag
0
generally it was the authentication client and its xml parser that needed to be able to load all elytron providers so we hard coded provider resolutionwe have a couple of alternative options hard code the class names in the authentication client and attempt to load best efforts load using serviceloader discovery but sort them into the correct position
1
we want to provide the sling resourcebuilder as alternative way in sling mock to simply create test resourcescurrently sling mocks supports its own contentbuilder via the create method on the contexta new method build should be added which provides a preconfigred resourcebuilder instance for the current resource resolveralternatively the resourceresolverfactory service can be accessed directly form the test code
0
sharetestinteropbintestrpcinteropsh loops over all the pids of the server processes it creates but then does not completely exit them
0
with lateralpop will have information about list of columns to be excluded from the lateral output container mostly this is used to avoid producing origin repeated column in lateral output if its not required from the projection list this is needed because in absence of it lateral has to copy the repeated column n number of times where n is the number of rows in right incoming batch for each left incoming batch row this copy was very costly both from memory and latency perspective hence avoiding it is a must for lateralunnest case
0
from the mailing listcodesince upgrading to spark im getting ascalareflectinternalmissingrequirementerror when creating a dataframefrom an rdd the error references a case class in the application therdds type parameter which has been verified to be presentitems of this is running on aws emr yarn i do not get this error runninglocally reverting to spark makes the problem go the jar file containing the referenced class the app assembly jaris not listed in the classpath expansion dumped in the error messagei have seen and am guessing that this is the root causeespecially since the code added there is involved in the stacktracethat said my grasp on scala reflection isnt strong enough to makesense of the change to say for sure it certainly looks though that inthis scenario the current threads context classloader may not be whatwe think it is given aboveany ideasapp code def registertablename string rddrddimplicit hc hivecontext val df hccreatedataframerdd dfregistertemptablename stack tracescalareflectinternalmissingrequirementerror class commyclass injavamirror with of type classsunmisclauncherappclassloader with classpath lots and lots of pathsand jars but not the app assembly jar not found reportcodehii use spark i tried to create dataframe from rdd below but got scalareflectinternalmissingrequirementerrorval is rdd and is a case classhow can i fix thisexception in thread main scalareflectinternalmissingrequirementerror class in javamirror with of type class sunmisclauncherappclassloader with classpath and parent being of type class sunmisclauncherextclassloader with classpath and parent being primordial classloader with boot classpath not found at at at at at at at at at at at at at at
1
i am currently running sourcetree sourcetree notifies that an update is available when the update is then triggered it fails to install
1
we need a service to keep track of missing feed instances
0
note this bug report is for jira server using jira cloud see the corresponding bug report panelthe jira rest api documentation gives the wadl file documentation for download this document in turns references the which is not available please see screenshot of the wadl source and the error returned this will break potential wadl clients
0
steps to reproduce in your addon descriptor add a webitem with location jiraagileboardtools go to classic software project expected a plain button is displayed actual please see attachment the button even though it has those two visual glitches works as expected it opens my dialog remarks i was looking into greenhopper soy templates via chrome dev tools and apparently codejavaghtplboardxrendertoolsectionscode is the problem maybe it should behave differently when there are no subitems or type equals to webitem just like in this case it works fine for next gen projects when i use jirasoftwareboardtools location
0
cryptogen currently creates useful artifacts but the are hard to use with common configuration and orchestration tools as an example fitting into a structure similar to the following would make life simpler buildnodes β”œβ”€β”€ cli β”‚ β”œβ”€β”€ channeltx β”‚ β”œβ”€β”€ configtxyaml β”‚ β”œβ”€β”€ coreyaml β”‚ β”œβ”€β”€ msp β”‚ β”‚ β”œβ”€β”€ admincerts β”‚ β”‚ β”‚ └── β”‚ β”‚ β”œβ”€β”€ cacerts β”‚ β”‚ β”‚ └── β”‚ β”‚ β”œβ”€β”€ keystore β”‚ β”‚ β”‚ └── β”‚ β”‚ └── signcerts β”‚ β”‚ └── β”‚ └── tls β”‚ β”œβ”€β”€ cacrt β”‚ β”œβ”€β”€ servercrt β”‚ └── serverkey β”œβ”€β”€ orderer β”‚ β”œβ”€β”€ configtxyaml β”‚ β”œβ”€β”€ coreyaml β”‚ β”œβ”€β”€ genesisblock β”‚ β”œβ”€β”€ msp β”‚ β”‚ β”œβ”€β”€ admincerts β”‚ β”‚ β”‚ └── β”‚ β”‚ β”œβ”€β”€ cacerts β”‚ β”‚ β”‚ └── β”‚ β”‚ β”œβ”€β”€ keystore β”‚ β”‚ β”‚ └── β”‚ β”‚ └── signcerts β”‚ β”‚ └── β”‚ β”œβ”€β”€ ordereryaml β”‚ └── tls β”‚ β”œβ”€β”€ cacrt β”‚ β”œβ”€β”€ servercrt β”‚ └── serverkey β”œβ”€β”€ β”‚ β”œβ”€β”€ configtxyaml β”‚ β”œβ”€β”€ coreyaml β”‚ β”œβ”€β”€ msp β”‚ β”‚ β”œβ”€β”€ admincerts β”‚ β”‚ β”‚ └── β”‚ β”‚ β”œβ”€β”€ cacerts β”‚ β”‚ β”‚ └── β”‚ β”‚ β”œβ”€β”€ keystore β”‚ β”‚ β”‚ └── β”‚ β”‚ └── signcerts β”‚ β”‚ └── β”‚ └── tls β”‚ β”œβ”€β”€ cacrt β”‚ β”œβ”€β”€ servercrt β”‚ └── serverkey β”œβ”€β”€ β”‚ β”œβ”€β”€ configtxyaml β”‚ β”œβ”€β”€ coreyaml β”‚ β”œβ”€β”€ msp β”‚ β”‚ β”œβ”€β”€ admincerts β”‚ β”‚ β”‚ └── β”‚ β”‚ β”œβ”€β”€ cacerts β”‚ β”‚ β”‚ └── β”‚ β”‚ β”œβ”€β”€ keystore β”‚ β”‚ β”‚ └── β”‚ β”‚ └── signcerts β”‚ β”‚ └── β”‚ └── tls β”‚ β”œβ”€β”€ cacrt β”‚ β”œβ”€β”€ servercrt β”‚ └── serverkey β”œβ”€β”€ β”‚ β”œβ”€β”€ configtxyaml β”‚ β”œβ”€β”€ coreyaml β”‚ β”œβ”€β”€ msp β”‚ β”‚ β”œβ”€β”€ admincerts β”‚ β”‚ β”‚ └── β”‚ β”‚ β”œβ”€β”€ cacerts β”‚ β”‚ β”‚ └── β”‚ β”‚ β”œβ”€β”€ keystore β”‚ β”‚ β”‚ └── β”‚ β”‚ └── signcerts β”‚ β”‚ └── β”‚ └── tls β”‚ β”œβ”€β”€ cacrt β”‚ β”œβ”€β”€ servercrt β”‚ └── serverkey └── β”œβ”€β”€ configtxyaml β”œβ”€β”€ coreyaml β”œβ”€β”€ msp β”‚ β”œβ”€β”€ admincerts β”‚ β”‚ └── β”‚ β”œβ”€β”€ 
cacerts β”‚ β”‚ └── β”‚ β”œβ”€β”€ keystore β”‚ β”‚ └── β”‚ └── signcerts β”‚ └── └── tls β”œβ”€β”€ cacrt β”œβ”€β”€ servercrt └── serverkey
1
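The record above proposes a per-node directory layout for cryptogen output. A sketch scaffolding the msp/ and tls/ skeleton for one node, using the directory and file names from the tree above (contents are placeholders, not real crypto material):

```python
import os
import tempfile

MSP_SUBDIRS = ["admincerts", "cacerts", "keystore", "signcerts"]
TLS_FILES = ["ca.crt", "server.crt", "server.key"]

def scaffold_node(root, name):
    """Create the msp/ and tls/ skeleton proposed above for one node."""
    node = os.path.join(root, name)
    for sub in MSP_SUBDIRS:
        os.makedirs(os.path.join(node, "msp", sub))
    tls = os.path.join(node, "tls")
    os.makedirs(tls)
    for f in TLS_FILES:
        open(os.path.join(tls, f), "w").close()  # empty placeholder files
    return node

root = tempfile.mkdtemp()
node = scaffold_node(root, "orderer")
print(sorted(os.listdir(os.path.join(node, "msp"))))
```

Orchestration tools can then mount each node directory directly, which is the simplification the record asks for.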
see look for pcsvtestsubgroupsdtestgroupscsvother i tried changing the string to dtestgroupscsvother pcsvtestsubgroups but the change seems to have affected the default branch only
1
the opencart queries work ok from sqlci and jdbc driver but fails thru with this error in resultsetclose methodsql read from file is select from octaxrule left join octaxrate on inner join octaxratetocustomergroup on left join oczonetogeozone on left join ocgeozone gz on gzgeozoneid where and shipping and and and or order by ascsql string length is occurredthe message id idsunknownreplyerror with parameters the message id idsunknownreplyerror with parameters at at at at at at at at at
1
heatmap doesnt show the full diagram image attached
1
summary the standalone downloads from the jira software website for versions and are gzipped twice the mac version of tar will handle this without issue and a tar zxf will work as expected however on linux tar will fail to unzip and untar the download environment centos all linux versions most likely steps to reproduce download standalone jira software targz attempt to extract using tar zxf expected results file is extracted to directory actual results noformat tar zxf tar this does not look like a tar archive tar skipping to next header tar exiting with failure status due to previous errors noformat workaround use gunzip and then rename to get tar working codejava gunzip mv tar zxf code
1
due to a race condition between the sender thread and the producersend the following is possible in kakfaproducerdosend we add partitions to the transaction and then do accumulatorappend in senderrun we check whether there are transactional request if there are we send them and wait for the response if there arent we drain the accumulator queue and send the produce requests the problem is that the sequence step is entire possible this means that we wont send the addpartitions request but yet try to send the produce data which results in a fatal error and requires the producer to close the solution is that in the accumulatordrain we should check again if there are pending add partitions requests and if so dont drain anything
1
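The record above describes the fix as re-checking for pending AddPartitions requests inside the accumulator drain. A toy model of that guard (all class and method names are hypothetical, not the real Kafka client API):

```python
# Hypothetical model of the fix: drain() must not release produce batches
# while AddPartitions requests are still pending, otherwise the broker
# sees data for partitions not yet added to the transaction.

class TransactionManager:
    def __init__(self):
        self.pending_add_partitions = set()

    def add_partition(self, tp):
        self.pending_add_partitions.add(tp)

    def mark_added(self, tp):
        self.pending_add_partitions.discard(tp)

class Accumulator:
    def __init__(self, txn_manager):
        self.txn = txn_manager
        self.batches = []

    def append(self, tp, record):
        self.batches.append((tp, record))

    def drain(self):
        # The race: append() can run after the sender already checked for
        # pending transactional requests, so guard here as well.
        if self.txn.pending_add_partitions:
            return []  # hold data until AddPartitions has been sent
        drained, self.batches = self.batches, []
        return drained

txn = TransactionManager()
acc = Accumulator(txn)
txn.add_partition("topic-0")
acc.append("topic-0", b"v")
print(acc.drain())        # [] -- partition not yet added to the txn
txn.mark_added("topic-0")
print(acc.drain())        # [('topic-0', b'v')]
```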
disable activate review error message is enable refresh reviewthe review is now displayed in the editor but annotations are not visible i had to deactivate and activate the review to make it work
0
spring source seems to release pom which have no version according to the maven philosophy this means that they inherit their parents versionhowever if i trymvn installinstallfile dpomfilepomxmli getquote scanning for projects searching repository for plugin with prefix install building spring ldap core tasksegment aggregatorstyle fatal error an invalid artifact was detectedthis artifact might be in your projects pom or it might have been included transitively during the resolution process here is the information we do have for this artifact o groupid orgspringframeworkldap o artifactid springldapcore o version o type pom traceorgapachemavenartifactinvalidartifactrtexception for artifact orgspringframeworkldapspringldapcorepom the version cannot be emptyquotethe pom looks like thisquoteproject xmlns xmlnsxsi xsischemalocation orgspringframeworkldap springldapparent springldapcore jar spring ldap core quote
0
story as a quay administrator i want the operator to watch updates to a specified config bundle secret so that i can manage quay deployments with simpler gitops workflowsbackground an increasing number of customers want to manage all platform infrastructure components with gitops only quay operator currently supports that but makes for a complicated gitops workflow since the config bundle secrets must be completed ripnreplaced to trigger validation and rollout via the config editor operatoracceptance criteria the quay config is a configmap instead of a secret the quay operator watches all updates to the existing configmap triggering validation and rolling update of the quay deployment specifically the operators allows to leverage certs provided by certmanager specifically the operator exposes the ca so it can be referenced from a route and leverage the service ca operator to get the ca the operator supports references to credentials stored in secrets for any sensible data in the configmapbased config bundle so that for instance cloudcredential operator can be used to request provider credentials in a gitops workflow a apply merge of updates to the existing config bundle is enough to update the quay configuration configbundle updates are reflected in the status block of the cr this reconciliation allows for simple gitops workflows pushing commit changes down to the cluster
1
this is related to as a longer term solution for the categories and no categories modes
1
this issue is introduced in providing netty server for bookiein we agreed on the start sequence of bind bookie port first to avoid two processes running at the same start bookie eg initialize bookie storage and replaying start nio server to accept incoming requestsbut after refactoring for netty server step is combined to be executed in step so two processes could have chance to run at the same time replaying journals this is pretty badwe need to change the code to stick on the sequence described above
1
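The record above insists on binding the bookie port before journal replay so a second process fails fast instead of replaying concurrently. A sketch with plain sockets standing in for the bookie shows why the ordering matters:

```python
import socket

def start_bookie():
    """Bind the service port *before* any slow startup work (journal
    replay, storage init), so a second process is rejected immediately."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # step 1: take the port first
    # step 2: journal replay / storage initialization would happen here
    srv.listen(1)               # step 3: start accepting requests
    return srv

first = start_bookie()
port = first.getsockname()[1]

# A second process trying the same port fails with EADDRINUSE.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    conflict = False
except OSError:
    conflict = True
second.close()
print(conflict)
first.close()
```

If the bind happened after replay (the post-refactoring behavior the record flags), both processes would pass step 2 before either hit the conflict.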
currently there is no way to specify whether parameters are required or not the metadata could add required boolean to the parameter definition for those that arearent currently parameters are gathered by annotation which could be extended to have a required field which default is true
0
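The record above suggests extending the parameter-gathering annotation with a `required` field defaulting to true. A sketch of that metadata shape (field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Parameter:
    """Hypothetical parameter metadata: `required` defaults to True,
    matching the proposed extension to the gathering annotation."""
    name: str
    type: str
    required: bool = True

params = [
    Parameter("input", "string"),                    # required by default
    Parameter("verbose", "boolean", required=False), # explicitly optional
]
required_names = [p.name for p in params if p.required]
print(required_names)
```

Defaulting to True keeps existing annotated parameters behaving as before; only newly optional ones need to opt out.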
instructors who give an assessment with a single submission can reset allow retake for individual students instructors should be able to see all submissions for reset students suggest enabling the highest or latest submissions or all submissions dropdown when viewing assessment submissions in which or more of the students have been reset use case student completes some of the questions in an assessment and gets disconnected or has an issue instructor would like the student to continue and just complete the remaining questions instructor resets student submission instructor would like to combine results from initial submission with second submission im not sure how youd handle the case where two submissions have different answers youd either have to give the option to accept the earlierlater answer or give the instructor some ui to make the decision much more complex
0
when using with composer class filestreamsplitter and a smooks config where i was parsing the input file with a csv reader i noticed that lines that would not match the provided tokens in terms of number were garbled this is mainly due to an issue in csv cartridge fixing that issue though would not suffice as there is no mechanism in jbossesb to provide an executioneventlistener to the filestreamsplitter which extends abstractstreamsplitter and also to the smooksactionplease refer to for further information
0
i got this error message bq could not find the build for build jira head tpm ldap tpm active directory there is no cardinal nor or even and the apostrophe is weird too it should be etc which is all probably way too much work to be bothered with such an unimportant message not sure if bamboo is but this is even more of a headache if you want to be able to translate to other languages i would suggest you avoid the problem altogether by a rewrite to something like could not find build for build jira head tpm ldap tpm active directory
0
while running a job without fault tolerance producing data to kafka the job failed due to batch expired exception i tried to increase the requesttimeoutms and maxblockms to instead of but still the same problem the only way to ride on this problem is using warn orgapachekafkaclientsproducerinternalssender got error produce response with correlation id on topicpartition retrying attempts left error warn orgapachekafkaclientsproducerinternalssender got error produce response with correlation id on topicpartition retrying attempts left error warn orgapachekafkaclientsproducerinternalssender got error produce response with correlation id on topicpartition retrying attempts left error error orgapacheflinkstreamingruntimetasksstreamtask caught exception while processing timerjavalangruntimeexception could not forward element to next operator at at at at at at at at at at at at at at at at by javalangruntimeexception could not forward element to next operator at at at at at at morecaused by javalangexception failed to send data to kafka batch expired at at at at morecaused by orgapachekafkacommonerrorstimeoutexception batch info orgapachekafkaclientsproducerkafkaproducer closing the kafka producer with timeoutmillis ms
0
integration tests covering kiewbcommondmnwebappkogitomarshaller unit tests need to cover unit tests need to cover paymentdatedmn unit tests need to cover decision service marshalling unit tests need to cover null default expression unit tests need to cover attachment marshallingx unit tests need to cover formatted xml string this is not possible with webappruntime simple business knowledge model node simple decision node simple decision service node simple input data node simple knowledge source node decision node with simple context decision node with decision table decision node with function java decision node with function pmml decision node with function feel decision node with invocation decision node with literal expression decision node with relation decision node with complex context with decision table decision node with complex context with function decision node with complex context with invocation decision node with complex context with literal expression decision node with complex context with relation business knowledge model node with simple context business knowledge model node with decision table business knowledge model node with function java its not really necessary to cover all function typesx business knowledge model node with function pmml its not really necessary to cover all function typesx business knowledge model node with function feel its not really necessary to cover all function types business knowledge model node with invocation business knowledge model node with literal expression business knowledge model node with relation x decision service node with input data covered by the test for decision service node with input decision covered by the test for x decision service node with encapsulated decision covered by the test for decision service node with output decision covered by the test for connector association connector information requirement connector authority requirement connector knowledge requirement graph with all nodes 
and all connector types graph with colours and fonts set on nodes data type simple data type simple list data type simple constraint enumeration data type simple constraint expression data type simple constraint range data type structure
1
decision table x grid header could show output data type editing output data type is possible via properties panel x hide output data type in header when there are multiple outputclause columns inputclause columns header should show input data type inputclause columns should support changing the input data type outputclause columns header should show output data type outputclause columns should support changing the output data manual acceptance test build and deploy when one input output multiple inputs multiple outputs reopening edit from header edit from properties panel
0
several ui issues remain in the roster tool interface please see attached screenshots for more information to view these items go to the roster tool overview page selected tab at top shows a button within a tab spaces are needed after comma in the user summary text long userids overlap the user role text for name userid role etc is very small font size all other fonts appear larger even the email links and dropdown menus after a connection is requested the email link and connection requested information no longer aligns with the other emailconnection links on the page if the group name is long enough to bump to more than one line the horizontal rule below groups no longer lines up with the rest of the column headings
1
when you install ews from rpm you cannot start httpd outoftheboxby default the snmpvar directory for modsnmp is set to etchttpdvar in confdmodsnmpconf line the directory does not exist by default so the httpd process tries to create it it is denied so because httpdt is not allowed to write into httpdconfigt directory etchttpd and create another directory within it in my opinion the best solution is to move the default snmpvar dir to a different one because the var directory doesnt belong to etchttpd which should be used for configuration only not runtime stuff else we would have to include the etchttpdvar directory with the appropriate selinux context writable for httpd httpdcachet or somethingnoformattypeavc avc denied write for commhttpd namevar tclassdirnoformat
1
using quick or qt widgets right button click is not detected
0
were seeing an issue where after converting pdf document to image the quality is degraded this causes the text to lose the sharpness and appear pixelated there is some pixelation even in a higher resolution image im attaching screenshot of the image and the original pdf as well please help take a look at what could be the problem also please let me know if you need anything else
0
configuration os x lion release modethe crashing tests have been qskipd until this is resolved
1
viewfilesystem allows unconditional listing of internal directories mount points and and changing work requires read requires executable permissionhowever the hardcoded permissionrrr for filestatus representing an internal dir does not have executable bit setthis confuses yarn localizer for public resources on viewfs because it requires executable permission for other on all of the ancestor directories of the resource codejavaioioexception resource viewfspubcachecachetxt is not publicly accessable and as such cannot be part of the public cache at at at at
1
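The failure in the record above reduces to a permission-bit check: the hardcoded r--r--r-- (0444) on the internal-dir FileStatus lacks the other-execute bit the YARN localizer requires on every ancestor directory. A quick sketch of the check:

```python
import stat

# Hardcoded permission from the report: r--r--r-- (0o444) on the
# internal-dir FileStatus, versus what a directory needs: r-xr-xr-x.
internal_dir_perm = 0o444
needed_dir_perm = 0o555

def other_can_execute(mode):
    """The public-cache requirement: 'other' must have execute
    permission on every ancestor directory of the resource."""
    return bool(mode & stat.S_IXOTH)

print(other_can_execute(internal_dir_perm))  # False -> localizer rejects
print(other_can_execute(needed_dir_perm))    # True
```

Changing the hardcoded FileStatus permission to 0o555 would satisfy the traversal check without granting write access.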
problem pig view not loadingimpact cant use itsteps to create instance of hue to view migration create instance of pig open pig view
0
paneltitleupdate october currentreporter or the reporter syntax is now supported panel issue summary using currentreporter syntax at the insight custom field iql filter scope are not working steps to reproduce add user attribute in insight object type and update with a user email address add insight custom field and add to the project screen at insight object field configuration use user currentreporter iql create issue on behalf of customer from jira update the reporter with the user youve added previously at object expected results the insight field only listed value that meet the iql filter actual results no result workaround if your attribute is of type jira user you should be able to use something like filter issue scope iql owner reporter heres some documentation on setting it up
1
let me know if you would like me to provide any other information thank you
0
using javautillogging has two related problemsone is that virtually no java project in the wild uses it everyone uses the more advanced is that unlike javautillogging does not warn you when it is swallowing errors because it has not been configuredso thrift exceptions vanish without a trace which is badhere is a patch to switch to
0
orgjbosstoolsvpejsptest failure failed to execute goal defaulttest on project orgjbosstoolsvpejsptest an unexpected error occured while launching the test runtime return code see log for details code
1
already client is connected and during polling event ssl handshake failure happened it led to leaving the coordinator even on ssl handshake failure which was actually intermittent issue polling should have some resilient and retry the polling leaving group caused all instances of clients to drop and left the messages in kafka for long time until resubscribe the kafka topic manually noformat error orgapachekafkaclientsnetworkclient connection to node hostport failed authentication due to ssl handshake failed error reactorkafkareceiverinternalsdefaultkafkareceiver unexpected exception javalangnullpointerexception null at at at at at at at at at at at at at at at at at at at at at at at info orgapachekafkaclientsconsumerinternalsabstractcoordinator member sending leavegroup request to coordinatornoformat
1
need to set a cpu time limit so as to ensure that a berserk workflow executor wont use far too much cpu better to shoot the process in the head when that happens as it wont ever finish the time limit itself can be set fairly high hours hours and should be tuneable via the administration interface in case someone wants to run cpuheavy workflows
1
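The record above asks for a high, tuneable CPU-time cap on workflow executors. On POSIX systems one way to sketch this is `resource.setrlimit` with `RLIMIT_CPU` (the limit value here is a hypothetical stand-in for whatever the administration interface supplies):

```python
import resource

# Hypothetical tuneable from the admin interface: cap a workflow
# executor at this many CPU-seconds; exceeding the soft limit
# delivers SIGXCPU, shooting the berserk process in the head.
CPU_LIMIT_SECONDS = 4 * 60 * 60

_, hard = resource.getrlimit(resource.RLIMIT_CPU)
# Never raise the soft limit above the hard limit, or setrlimit fails.
if hard == resource.RLIM_INFINITY:
    target = CPU_LIMIT_SECONDS
else:
    target = min(CPU_LIMIT_SECONDS, hard)
resource.setrlimit(resource.RLIMIT_CPU, (target, hard))

soft, _ = resource.getrlimit(resource.RLIMIT_CPU)
print(soft == target)
```

Setting only the soft limit keeps the hard limit intact, so an administrator can still raise the cap for legitimately CPU-heavy workflows.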
which component is itthere is a new findbugs detector for finding leaking resources oblunsatisfiedobligationsee for an overview of all relevant issues
1
have the linux ubuntu virtual machine in the qt datacenter done by the ci team use similar settings for the vm than has for the code coverage
0
project cant be created in clustered business central running on openshift when i tried to create a new project the following exception was displayed unable to complete your request the following exception occurred exception cleaning and unsetting batch mode on fslog from the pod where was the project created is attached to this jira
1
in the regionserver ui page show regions sorted makes things easier to find
0
i am using drools flow and have configured the bam module with oracle persistence i am getting the following error since date is a reserved word in oracle due to which its not able to create the nodeinstancelog tablesep am executeinfo exporting generated schema to databasesep am createsevere unsuccessful create table nodeinstancelog id not null type nodeinstanceid char nodeid char processinstanceid processid char date timestamp primary key idsep am createsevere invalid identifier
0