text_clean: string (lengths 3 to 505k)
label: int64 (values 0 or 1)
The QQuickTextNode class has an addTextLayout() method which takes a QTextLayout and presumably sets up the scene to draw it. Can we clean up this class to make it publicly accessible, so that users can create their own text items that are not rectangular? The QTextLayout does all the work of laying it out in a non-rectangular fashion; hopefully QQuickTextNode just draws it. This would allow us to create non-rectangular text to use in any way we see fit, for example creating our own text editor which has mini-games inline with text.
0
Just a column name change.
0
The following methods were deprecated: org.apache.kafka.clients.consumer.Consumer#committed(org.apache.kafka.common.TopicPartition) and org.apache.kafka.clients.consumer.Consumer#committed(org.apache.kafka.common.TopicPartition, java.time.Duration). As both methods are still widely used, it might be worth either removing the deprecation for the mentioned methods or providing deeper reasoning on why they should stay deprecated and eventually be removed. If the latter is decided, then the original KIP should be updated to include said reasoning.
0
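For context, a minimal Java sketch of the migration path the deprecation report above implies, assuming the Set-based committed() overload is the intended replacement (the helper name is hypothetical):

{code:java}
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittedMigration {
    static OffsetAndMetadata committedOffset(KafkaConsumer<?, ?> consumer, TopicPartition tp) {
        // Deprecated single-partition form:
        // OffsetAndMetadata offset = consumer.committed(tp);

        // Set-based replacement; returns a map with one entry per requested partition.
        Map<TopicPartition, OffsetAndMetadata> offsets =
                consumer.committed(Collections.singleton(tp));
        return offsets.get(tp);
    }
}
{code}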
Wiki and Getting Started links are not working; major parts of them are blank. I remember I had seen details about which magics are supported, how to add JAR files to the kernel, etc. It would be great if this basic reference material is made available ASAP to all.
0
This method appears to have been missed. It should be overridden, as it is often used by frameworks to look up resources that could appear in more than one place.
0
As reported by a user on the user mailing list, the combination of using BoundedBlockingSubpartition with YARN containers can cause the YARN container to exceed memory limits:

{noformat}
INFO org.apache.flink.yarn.YarnResourceManager - Closing TaskExecutor connection because container is running beyond physical memory limits. Current usage: ... GB of ... GB physical memory used; ... GB of ... GB virtual memory used. Killing container.
{noformat}

This is probably happening because the memory usage of mmap is not capped and not accounted for by the configured memory limits; however, YARN is tracking this memory usage, and once Flink exceeds some threshold the container is being killed. A workaround is to overrule the default value and force Flink to not use mmap, by setting a secret config option:

{noformat}
taskmanager.network.bounded-blocking-subpartition-type: file
{noformat}
1
If supervise is enabled, MesosClusterScheduler will retry a failing driver indefinitely. This takes up cluster resources, which are freed up only when the driver is explicitly killed. The proposed solution is to introduce a Spark configuration, spark.driver.supervise.maxRetries, which allows the maximum number of retries to be specified while preserving the default behavior of retrying the driver indefinitely.
0
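A minimal sketch of how the proposed setting might be used if it were adopted; the property name comes from the proposal above, and the value "3" is an arbitrary example:

{code:java}
import org.apache.spark.SparkConf;

public class SuperviseRetries {
    public static void main(String[] args) {
        // Proposed setting; leaving it unset would preserve today's
        // behavior of retrying the supervised driver indefinitely.
        SparkConf conf = new SparkConf()
                .setAppName("supervised-driver")
                .set("spark.driver.supervise.maxRetries", "3");
        System.out.println(conf.toDebugString());
    }
}
{code}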
While I was trying to resolve a new dependency on default, I got the following NullPointerException:

{noformat}
found md file for ... downloading ... downloading ... ok for default
checking cache for dependency default: module revision found in cache
problem occurred while resolving dependency test with java.lang.NullPointerException
	at ...
{noformat}
0
We need to follow standards for dependency and version property definitions, like alphabetical ordering of declared version properties and a version/groupId naming convention, so they are easily searchable and unified.
1
Currently, while defining ACLs for master or agents, there is a boolean field `permissive` that can be set at the global level and applies to all ACLs. It defines the behavior when no ACL matches the request made: if set to true (which is the default), it will allow all non-matching requests by default; if set to false, it will reject all non-matching requests.

We should consider supporting a local `permissive` field specific to each ACL, which would override the global `permissive` field if the local field is set.

The use case is that if support for a new ACL is added to master or agent, and a cluster uses the global permissive field set to false, that would imply that authorization for the newly added ACL shall fail unless the operator adds the corresponding entry for the newly added ACL, which leads to an upgrade issue. If we have both the global as well as the local permissive bit, then the global permissive bit can be set to true, whereas the local permissive bit can be set to true or false based on the use case. With this approach it would not be mandatory to add an entry for the new ACL unless the operator chooses to do so. That obviously also leads to the fact that maybe we should not have the global permissive bit in the first place.
0
Summary: Clicking the application navigator (hamburger) menu using Internet Explorer doesn't do anything.
Steps to reproduce: Using IE, go to Jira and click the application navigator.
Expected results: It would open and you would be able to select the applications listed there.
Actual results: The application list doesn't show up.
Workaround: Use Firefox or Chrome.
0
The element reference is implemented as two largely redundant files. This creates multiple paths to apparently identical but duplicate targets and confuses navigation. Consolidate into one file, fix up embedded references, and update the left-side subnav.
0
This API contains a few small helper methods used internally by Spark, mostly related to Hadoop configs and Kerberos. It has historically been marked as a developer API, but in reality it's not very useful for others and changes too much to be considered a stable API. Better to just make it private to Spark.
0
For JBIDE ..., please perform the following: ensure your component features/plugins have been properly upversioned (e.g., from ... to ...) or switch to using the new org.jboss.tools foundation license feature (instructions: ...). Resolve this JIRA when done; QE can then verify and close it later. Search for all task JIRAs, or search for BrowserSim task JIRAs.
1
When creating a new space in Business Central as described in steps to reproduce, the following error occurs: "Unable to complete your request. The following exception occurred: undefined." See attached screenshot.
1
This adds install targets. The management/gen directory remains unchanged (and obviously broken), but one step at a time.
0
I was using the EJB control in a JWS file. After assembly, the ejb-jar.xml that was created was modified and became invalid, because the ejb-local-ref element and the ejb-ref element are in the opposite order. I've attached the ejb-jar.xml file so you can see the improper modification. This appears to be caused by the swap from XMLBeans to DOM, as it was not occurring previously.
1
Add an example and walk through it for developers.
0
The health checker does not appear to be caching pings of endpoints correctly, as in a workflow the same service can come up as both OK and unreachable if the connection to it is flaky.
0
On the ..., the dates for release show ... on Dec ..., when in fact it was a few days ago (Dec ...). The ... is still missing; I suspect this may be tracked in another JIRA. The section showing downloads is labelled "Drools milestone downloads", not "milestone ...".
0
After ... adds a marker interface for blobstores, move them into their own hadoop-amazon library. This keeps the dependencies out of the standard hadoop-client dependency graph and lets people switch this for alternative implementations. The feature would let you swap over to another impl (e.g., Amazon's) without rebuilding everything.
0
Screen sharing/recording is not working with Firefox or Safari. This is a Mac-specific issue.
0
Motivation: the normal WAR must be deployable on a vanilla EAP (or even WildFly) without modification of that EAP or that WAR file; it should be easy to try out. The OpenShift WAR must support system properties to be configured specifically for MySQL or PostgreSQL, one of which can only be set in jta-data-source in persistence.xml, which means we need to turn on EAP's system property replacement, and therefore EAP isn't vanilla any more.

Implementation: a normal product build does mvn clean install -Dproductized; the OpenShift product build must do mvn clean install -Dproductized -DproductizedOpenshift. The downside is that we'll have two builds. In an ideal world we'd find a way to set jta-data-source dynamically from a system property without messing with the persistence.xml nor the EAP config change; I am looking into that, but there doesn't seem to be any such approach (see my mail to smee).

Proposal B: in a productized build we always use the OpenShift variant of the persistence.xml. That doesn't work on vanilla WildFly, but the productized zip only contains the sources, not the WAR file.
1
If a user creates a realm and then removes it from the admin console, the web application server constantly throws an exception on any other request. The same behavior occurs when the user creates a realm and removes it using the admin REST API: the realm is removed successfully, but any other request (e.g., list realms or search user) fails.

REST API client app error:
{noformat}
Exception in thread "main" javax.ws.rs.InternalServerErrorException: HTTP 500 Internal Server Error
	at ...
	at ... (Source)
	at ...
{noformat}
0
This bug was imported from another system and requires review from a project committer before some of the details can be marked public. For more information about historical bugs, please read "Why are some bugs missing information?". You can request a review of this bug report by sending an email to ...; please be sure to include the bug number in your request.
1
The application crashes when special characters like "á" exist in the filename.
1
A Ruta feature expression in a composed string expression is not resolved as a feature expression but as a simple type expression, which leads to the plain string of the expression:
{noformat}
NumericValue dim ... dim, u.unit ..., u.normalized = null, u.normalizedAscii = null, u.dimension = null, u.parsed ... u.parsed dim
{noformat}
0
The pom.xml for this project uses the Maven Enforcer plugin to require it be built with version ... or greater of Maven, due to requirements of the integration tests. All the Maven dependencies for this project require Maven ..., however. Things work, more or less; however, consider the following issue: you want to run integration tests on this plugin and attach a debugger, so when you execute Maven you are forced to use Maven .... However, the debugger (for example the one included with NetBeans or Eclipse) will by default load sources from ...; this makes stepping through code accurately impossible unless you manually run the integration test and set sources by hand, by passing the IDE .... As noted in the POM, ... is required for some integration tests. Unless there is some major concern not evident, the plugin dependencies marked ... should be updated to depend on Maven ..., or match whatever the minimum required Maven version used to actually build the project is.
0
Coverity reports a use-after-free (CID ...) in lib/ts/ink_queue.cc, in the bulk-free path of ink_freelist (void *, void *, unsigned long); the same finding is repeated at several loops in the function:
{code}
void *item; // avoid compiler ...
if (f.alignment) {
    for (size_t i = 0; i < num; i++) { item = ... }
} else {
    // CID ...: use-after-free - using freed pointer "item"
    for (size_t i = 0; i < num; i++) { item = ... }
}
ats_free(item);
{code}
Seems we ought to not use the item in the iterator after we've already freed it.
0
We need to make clear in the documentation, and enforce in the code, the following watch event rules: (1) a watch event will be delivered once to each watcher, even if it is registered multiple times; for example, if the same watch object is used for getChildren("/foo", watchObj) and getData("/foo", watchObj, stat), and /foo is deleted, watchObj will be called once to process the NodeDeleted event; (2) session events will be delivered to all watchers. Note: a watcher is a Watcher object in Java, or a (watch function, context) pair in C. There is currently a bug in the Java client that causes the session Disconnected event to be delivered twice to the default watcher if the default watcher is also used to watch a path; this violates the once-per-watcher rule.
0
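A minimal Java sketch of the double-registration scenario described in rule 1 above, assuming an existing node /foo and a connected ZooKeeper handle:

{code:java}
import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class WatchOnceDemo {
    public static void register(ZooKeeper zk) throws Exception {
        Watcher watchObj = (WatchedEvent event) ->
                System.out.println("event: " + event.getType() + " on " + event.getPath());

        // The same watcher object registered twice on /foo...
        List<String> children = zk.getChildren("/foo", watchObj);
        byte[] data = zk.getData("/foo", watchObj, new Stat());

        // ...should still receive NodeDeleted only once when /foo is deleted.
        System.out.println(children.size() + " children, " + data.length + " bytes");
    }
}
{code}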
Currently we have the following paths in ZK that store non-singleton topic assignment values: /brokers/topics/[topic] (leaderAndIsr, broker/partition), and the reassignment path. It would be good if we do the following: (a) make them true JSON, e.g. using a number as the value for broker/partition instead of a string; (b) add version support for future growth.
1
This bug was imported from another system and requires review from a project committer before some of the details can be marked public. For more information about historical bugs, please read "Why are some bugs missing information?". You can request a review of this bug report by sending an email to ...; please be sure to include the bug number in your request.
1
{noformat}
go install -v -ldflags "-X ... -X main.ConfigDir=/home/stg/daos/install/etc" -b ... github.com/daos-stack/daos/src/control/cmd/daos_admin github.com/daos-stack/daos/src/control/lib/ipmctl
github.com/daos-stack/daos/src/control/lib/ipmctl: undefined: C.struct_device_error_log_status
scons: *** error ...
scons: building terminated because of errors.
{noformat}
and:
{noformat}
daos hg     ERR  crt_hg_init() could not initialize NA class
daos crt    ERR  crt_init_opt() crt_hg_init() failed, rc ...
daos crt    ERR  crt_init_opt() crt_init() failed, rc ...
daos client ERR  daos_eq_lib_init() failed to initialize crt
daos client ERR  daos_init() failed to initialize eq_lib
daos fi     INFO d_fault_inject_init() no config file, fault injection is off
daos crt    INFO crt_init_opt() libcart version ... initializing
{noformat}
1
Stack trace:
{noformat}
java.lang.IllegalStateException: Number of indices (got ...) must be same as array rank (...); indices: ...
	at ...
{noformat}
1
The SysInfo example correctly detects only the language, OS version, and supported feature (SIM card). SysInfo tells that the HTC ... has no Bluetooth, WLAN, camera, etc., and can't detect the model, manufacturer, etc.
0
Often, but not always, fails on start-up with:
{noformat}
INFO: Loaded default cache: name = org.sakaiproject.news.api.NewsService.cache, status = STATUS_ALIVE, eternal = false, overflowToDisk = false, maxElementsInMemory = ..., maxElementsOnDisk = ..., memoryStoreEvictionPolicy = LRU, timeToLiveSeconds = ..., timeToIdleSeconds = ..., diskPersistent = false, diskExpiryThreadIntervalSeconds = ..., cacheEventListeners ..., hitCount = ..., memoryStoreHitCount = ..., diskStoreHitCount = ..., missCountNotFound = ..., missCountExpired = ... (main) org.sakaiproject.memory.impl.BasicMemoryService
INFO: init (main) org.sakaiproject.presence.impl.BasePresenceService
INFO: init (main) org.sakaiproject.component.app.profile.ProfileManagerImpl
INFO: init ReportsManagerImpl (main) org.sakaiproject.reports.logic.impl.ReportsManagerImpl
Compiler: line ...: attribute "class" outside of element (x3); line ...: attribute "bean" outside of element
WARN: SQL Error: ..., SQLState: ... (main) org.hibernate.util.JDBCExceptionReporter
ERROR: Lock wait timeout exceeded; try restarting transaction (main) org.hibernate.util.JDBCExceptionReporter
ERROR: Could not synchronize database state with session (main) org.hibernate.event.def.AbstractFlushingEventListener
org.hibernate.exception.GenericJDBCException: could not execute JDBC batch update
	at ...
{noformat}
1
It is important to know when an admin user becomes another user, and which user they become, for auditing and tracking purposes. Please add an event, such as su.become.user, that records the admin user, the session ID, and a ref for the ID of the user they become. Not as critical, but interesting, would be to also add tracking for the "view user info" function; that event would be something like su.view.user, which would also record the admin user, the session ID, and a ref for the ID of the user they look up.
0
Currently the ExecutorNotifier runs the handler in the worker thread if there is an exception thrown from the Callable. This breaks the threading model and prevents an exception from bubbling up to fail the job. Another issue is that right now, when an exception bubbles up from the SourceCoordinator, the uncaughtExceptionHandler will call ... and kill the JM. This is too much; instead we should just fail the job to trigger a failover.
1
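A minimal sketch of the intended threading model, with hypothetical names (not Flink's actual classes): the worker thread hands both the result and any exception back to the single-threaded notifier executor, so failures surface on the coordinator thread rather than the worker thread:

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.function.BiConsumer;

// Hypothetical notifier: runs callables on a worker pool, but always
// delivers the (result, throwable) pair on the notifier executor.
class Notifier {
    private final ExecutorService workerExecutor;
    private final ExecutorService notifierExecutor;

    Notifier(ExecutorService workerExecutor, ExecutorService notifierExecutor) {
        this.workerExecutor = workerExecutor;
        this.notifierExecutor = notifierExecutor;
    }

    <T> void notifyReadyAsync(Callable<T> callable, BiConsumer<T, Throwable> handler) {
        workerExecutor.execute(() -> {
            try {
                T result = callable.call();
                notifierExecutor.execute(() -> handler.accept(result, null));
            } catch (Throwable t) {
                // Previously the handler would run here, on the worker thread;
                // re-submitting keeps the threading model intact.
                notifierExecutor.execute(() -> handler.accept(null, t));
            }
        });
    }
}
{code}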
"Release grades" does not pass the grades on to Gradebook. To pass the grades on to Gradebook you need to go through every student and click the "return assignment to student" button; once you click that button, you can see the grade for that student in the Gradebook. To test this I: created a site; created an assignment and selected "add Gradebook item" for this assignment; posted the assignment; went and marked a couple of submissions; clicked on "release grades" (tick appears in the release column); went to Gradebook and looked at the student that I'd marked: no mark; went back to the assignment and the student's submission and clicked "return assignment to student"; went back to Gradebook: mark present for that student. It appears some error has crept in whereby "release grades" is insufficient to release the grades to Gradebook.
1
If a user has uploaded a document directly to the Lessons page, they might try to upload a new version by deleting the element on the page and re-uploading, or by using Edit > Change File. Both methods create multiple versions, potentially creating huge numbers of files. The only way to replace a file is to go into Resources > “Upload New Version”. Can we include this functionality as part of the Lessons tool? When the user clicks on Edit, below "Change File or URL", add a new link "Upload New Version" and embed the functionality that exists in Resources within the Lessons page, i.e., without jumping to the Resources tool. Just like Resources, warn if the file has a different name to the one it is replacing.
0
When I try to open the page editor for an XHTML file, I get this exception:
{noformat}
java.lang.NullPointerException
	at ...
	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.lang.reflect.Method.invoke(Unknown Source)
	at ...
{noformat}
1
Drew Thornton writes on the user mailing list:
{quote}
If one ZooKeeper node is shutdown/fails/whatever and the rest of the ensemble stays up, the tablet servers attached as clients to the shutdown node immediately fail. If one of the clients happens to be the master, the cluster goes down. Accumulo does not seem to be failing over to the remaining ZooKeeper nodes, and this causes me to restart the individual tablet servers again. The ZooKeeper ensemble is very stable and has plenty of bandwidth/memory/processing, so taking one node down out of five doesn't crash the ZooKeepers, just the tablet servers.
{quote}
1
When compiling the following classes, the Service class ends up with an incomplete type parameter in it. This causes errors for the IDE: "Inconsistent classfile encountered: the undefined type parameter T is referenced from within Service."
{code}
import java.util.function.Consumer
import groovy.transform.CompileStatic

class Event<T> {
    Event(String id, T payload) { ... }
    Event setReplyTo(Object replyTo) { ... }
}

@CompileStatic
trait Events {
    def <T> Registration on(Class key, Closure<Consumer<T>> ...) { ... }
}

interface Registration { }

class Service implements Events { }
{code}
javap output for Events shows:
{code:java}
public abstract <T> Registration on(java.lang.Class, groovy.lang.Closure);
{code}
javap output for Service shows:
{code:java}
public <T> Registration on(java.lang.Class, groovy.lang.Closure);
public <T> Registration Events$Trait$super$on(java.lang.Class, groovy.lang.Closure);
{code}
It is this T in the trait method that is not defined. I think it should be ... instead, when looking at the original method's and the trait bridge method's type parameters.
0
When a user takes an assessment via URL, they should get a Continue button when done.
0
Problem summary: changes in the Epic Link field do not trigger a listener for the "issue updated" event. As an example, a webhook will not fire when it is configured for "issue updated" and the only change for the issue is on the Epic Link.
Steps to reproduce: create a listener for "issue updated"; update just the Epic Link field.
Actual result: listener is not triggered.
Expected result: listener is triggered.
Notes: this is the only field that has the problem; all other fields (standard, custom, Agile, etc.) do not exhibit this problem. See ...
0
Example error:
{noformat}
$ python -c "import pyarrow; import tensorflow"
Traceback (most recent call last):
  File ..., line ..., in <module>
  File ..., line ..., in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File ..., line ..., in <module>
    from tensorflow.python import keras
  File ..., line ..., in <module>
    from tensorflow.python.keras import datasets
  File ..., line ..., in <module>
    from tensorflow.python.keras.datasets import imdb
  File ..., line ..., in <module>
    from tensorflow.python.keras.preprocessing.sequence import remove_long_seq
  File ..., line ..., in <module>
    from tensorflow.python.keras.preprocessing import image
  File ..., line ..., in <module>
    from keras_preprocessing import image
  File ..., line ..., in <module>
    from .dataframe_iterator import DataFrameIterator
  File ..., line ..., in <module>
    from pandas.api.types import is_numeric_dtype
ModuleNotFoundError: No module named 'pandas'
{noformat}
1
A NullPointerException is thrown during the crawl generate stage when I deploy to a Hadoop cluster, but for some reason it works fine locally. It looks like this is caused because the URLPartitioner class still has the old configure() method in there, which is never called, causing the normalizers field to remain null, rather than implementing the Configurable interface as detailed in the newer MapReduce API's Partitioner spec. Stack trace:
{code}
java.lang.NullPointerException
	at ...
	at java.security.AccessController.doPrivileged(Native Method)
	at ...
{code}
Oh, and it might also be because a static URLPartitioner instance is being used in the GeneratorSelector class, but it's only initialized in the setup() method of the GeneratorSelector.SelectorMapper class, so that whole setup looks pretty weird.
1
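A minimal sketch of the suggested fix, assuming the new MapReduce API; the class body is illustrative, not Nutch's actual code:

{code:java}
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Partitioner;

// Sketch only: the real class would keep Nutch's URLNormalizers here.
public class ConfigurableUrlPartitioner extends Partitioner<Text, Writable>
        implements Configurable {

    private Configuration conf;

    @Override
    public void setConf(Configuration conf) {
        this.conf = conf;
        // The new MapReduce API instantiates partitioners via
        // ReflectionUtils, which calls setConf() on Configurable classes;
        // the legacy configure(JobConf) method is never invoked, so field
        // initialization (e.g. the normalizers) belongs here.
    }

    @Override
    public Configuration getConf() {
        return conf;
    }

    @Override
    public int getPartition(Text key, Writable value, int numReduceTasks) {
        // A real implementation would normalize the URL first.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
{code}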
Hello. Actually, I use the CMS adapter to connect to my CMS and import all content to Contenthub; this operation is done successfully. When I try to download the related enhancement of a specific content item, I use this URL: ...; I get error .... I think the problem is that the identifier that the Contenthub gives to the content item is the main source of the problem. My question is: how can I avoid this problem? Is there any config that I can use to avoid that and let Contenthub provide the content item the default identifier? Example: ...
1
An AvroRuntimeException is thrown when attempting to read an Avro file serialized with an older version of a schema containing a field which has been subsequently removed in the newer schema.

Exception:
{code}
Exception in thread "main" org.apache.avro.AvroRuntimeException: Bad index
	at Record.put(Unknown Source)
	at ...
	at Read.readFromAvro(Unknown Source)
	at Read.main(Unknown Source)
{code}

Steps to reproduce: generate code for schema 1 and write an Avro file with the code-generated Record class, using the DataFileWriter and SpecificDatumWriter; (informational only) read the Avro file using the code-generated Record class, using DataFileStream and SpecificDatumReader; then, with code generated for schema 2, read the Avro file using the code-generated Record class, using DataFileStream and SpecificDatumReader.

Schema 1:
{code}
{"name": "Record", "type": "record", "fields": [
  {"name": "name", "type": "string"},
  {"name": "id", "type": "int"}
]}
{code}
Schema 2:
{code}
{"name": "Record", "type": "record", "fields": [
  {"name": "name", "type": "string"}
]}
{code}
Write:
{code}
public static Record createRecord(String name, int id) {
    Record record = new Record();
    record.name = name;
    record.id = id;
    return record;
}

public static void writeToAvro(OutputStream outputStream) throws IOException {
    DataFileWriter<Record> writer = new DataFileWriter<>(new SpecificDatumWriter<>());
    writer.create(Record.SCHEMA$, outputStream);
    writer.append(createRecord(...));
    writer.close();
    outputStream.close();
}
{code}
Read:
{code}
public static void readFromAvro(InputStream is) throws IOException {
    DataFileStream<Record> reader = new DataFileStream<>(is, new SpecificDatumReader<>());
    for (Record a : reader) {
        System.out.println(ToStringBuilder.reflectionToString(a));
    }
    IOUtils.cleanup(null, is);
    IOUtils.cleanup(null, reader);
}
{code}
0
globStatus for a path that is a symlink to a directory used to report the resulting FileStatus as a directory, but recently this has changed.
1
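A minimal sketch of the check being described, assuming /link is a symlink to a directory (the path and filesystem setup are hypothetical):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GlobSymlinkCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // /link is assumed to be a symlink pointing at a directory.
        FileStatus[] statuses = fs.globStatus(new Path("/link"));
        if (statuses != null) {
            for (FileStatus st : statuses) {
                // Used to report the resolved directory (isDirectory == true);
                // the regression is that this result changed.
                System.out.println(st.getPath() + " isDirectory=" + st.isDirectory());
            }
        }
    }
}
{code}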
SQLs:
{noformat}
select ...   -- pass
select ...   -- failed
{noformat}
Error message:
{noformat}
Caused by: org.apache.calcite.sql.parser.impl.ParseException: Encountered "..." at line ..., column .... Was expecting one of: NOT, IN, BETWEEN, LIKE, SIMILAR, AND, OR, IS, MEMBER, SUBMULTISET, MULTISET, FILTER, OVER, ...
	at ...
{noformat}
0
The following error turned up in a continuous integration test run on the ... branch on platform ...:
{noformat}
java.sql.SQLException: DERBY SQL error: ERRORCODE ..., SQLSTATE ..., SQLERRMC: org.apache.derby.shared.common.sanity.AssertFailure: ASSERT FAILED Connection is ...
	at ...
Caused by: ERROR ...: DERBY SQL error: ERRORCODE ..., SQLSTATE ..., SQLERRMC: org.apache.derby.shared.common.sanity.AssertFailure: ASSERT FAILED Connection is ...
	at ...
{noformat}
0
XWork ... will be released later this week and will contain some bug fixes for serious problems related to ....
1
It allows avoiding unnecessary thread blocking on each isRunning() call. On the other side, there are many redundant implementations with an internal additional variable, like active, shuttingDown, etc.
0
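A minimal sketch of the non-blocking pattern implied above (names hypothetical): a single atomic flag replaces a blocking isRunning() plus the ad-hoc active/shuttingDown fields:

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

// Non-blocking lifecycle check: one atomic flag instead of a
// synchronized isRunning() and extra 'active'/'shuttingDown' variables.
class LifecycleAware {
    private final AtomicBoolean running = new AtomicBoolean(true);

    boolean isRunning() {
        return running.get(); // no lock acquisition on each call
    }

    void shutdown() {
        running.set(false);
    }
}
{code}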
Too many things depend upon AUI's event bus to omit it. Add this to the core.
1
After creating a web service model and selecting the operation and the Preview Data action, there was no resulting Preview VDB created and deployed to the Teiid server. The project's source model, virtual model, and web service model were all deployed as PVDBs.
1
Some people want to be able to configure whether the keyboard always resizes or sometimes pans the window when showing the keyboard. We might be able to use settings from the manifest, depending on what the expectations would be if you explicitly change those options. This needs to be investigated and determined first.
0
The test derbynet/autostart uses RandomAccessFile.readLine() on derby.log to find an occurrence of a string, to decide whether the test passes or not. This mechanism assumes that derby.log is in a Unicode-compatible encoding, which may not be true. Also, the readLine() documentation indicates that it does not support the full Unicode range, so it may not be the best choice. And indeed, the test fails on z/OS even though the string looked for is present in derby.log.
0
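A minimal sketch of an encoding-aware alternative to RandomAccessFile.readLine(), with hypothetical path, search-string, and charset parameters:

{code:java}
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Paths;

public class LogScan {
    // RandomAccessFile.readLine() decodes raw bytes one-per-char and does
    // not support the full Unicode range; reading through a charset-aware
    // BufferedReader handles logs in non-ASCII encodings (e.g. on z/OS).
    static boolean logContains(String path, String needle, Charset cs) throws IOException {
        try (BufferedReader r = Files.newBufferedReader(Paths.get(path), cs)) {
            return r.lines().anyMatch(line -> line.contains(needle));
        }
    }
}
{code}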
The crontabs currently use ... to run the scripts. However, this is symlinked to ..., so the two are currently equivalent. Unless the scripts actually need version ... and no other, it does not make sense to use the specific version. Using ... will make a Python upgrade much simpler: no need to edit all the crontabs, as they will pick up whatever is the current version.
0
This task is to configure the Hive metastore as a storage engine in ..., implement schemas to list tables in the Hive metastore, and convert Hive types into Drill SQL types.
0
Following is the error trace that I am getting in RAD; the same code worked perfectly fine in WSAD:
{noformat}
SystemOut O Document services request failed with the following exception: java.lang.ClassCastException: org.apache.axis.attachments.AttachmentPart incompatible with com.ibm.ws.webservices.engine.Part
SystemOut O Document services ECM Specialty section ends
SystemOut O ... com.cna.gsl.documentservices.DocumentServicesException: ... org.apache.axis.attachments.AttachmentPart incompatible with com.ibm.ws.webservices.engine.Part
	at ...
{noformat}
1
Steps: compile and run:
{code}
<s:Application xmlns:fx="..." xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx">
  <fx:Script><![CDATA[
    import spark.primitives.BitmapImage;
    import spark.components.Label;

    public function irf(item:Object):ClassFactory {
        return new ClassFactory(ImageRenderer);
    }
  ]]></fx:Script>
  <!-- data items: label.display / image.display pairs for ColdFusion, Dreamweaver,
       Flash Builder, Flash Catalyst, Flash Professional, Flex SDK, Photoshop, ... -->
  <s:List itemRenderer="ImageRenderer" dataProvider="{products}"/>
</s:Application>
{code}
Actual results: player hangs.
{noformat}
Error: A script has executed for longer than the default timeout period of ... seconds.
  at mx.managers::LayoutManager/validateClient()
  at spark.layouts::HorizontalLayout/updateDisplayListVirtual()
  at spark.layouts::HorizontalLayout/updateDisplayList()
  at spark.components.supportClasses::GroupBase/updateDisplayList()
  at spark.components::DataGroup/updateDisplayList()
  at mx.core::UIComponent/validateDisplayList()
  at mx.managers::LayoutManager/validateDisplayList()
  at mx.managers::LayoutManager/doPhasedInstantiation()
  at mx.managers::LayoutManager/doPhasedInstantiationCallback()
{noformat}
Expected results: no hang.
Workaround (if any):
1
I'm using a nightly snapshot, and now source files in installed plugins are not being picked up on the class path. For example, I have the feeds plugin installed into a project, and in my project I have the code:
{code}
def fb = new FeedBuilder()
{code}
The above line has an error marker because STS can't find the FeedBuilder class, which is lurking in the src/groovy directory of the feeds plugin.
1
The ability to hide/show buttons in the dialogs JS API.
0
If a newly discovered DRL contains a rule whose RHS uses a previously-declared class, that rule's compilation will fail.
0
The Velocity date selector is incorrectly rendered for some locales. This makes it impossible to select a date, and sometimes to add/edit an object (e.g., you cannot add/edit a schedule event or assignment). Marked this as a blocker, as it makes some Sakai tools unusable for a subset of locales. See attached screenshots. Locales affected: pt_PT, zh_CN, nl_NL, fr_CA, ko_KR, ru_RU, sv_SE.
1
This problem is caused by a bug in the Java runtime: "HttpClient falls in running with ... CPU usage after an error signalled on channel", backported as ...; the fix has been verified to be available in AdoptOpenJDK .... Webhooks are likely impacted by the same problem, as is described here: in fixing ..., all that was done was to disable ... in the HTTP client; however, that client instance is specific to the repository importer. The above is suspected to relate to the following JDK bug: "HttpClient falls in running with ... CPU usage after an error signalled on channel". Workaround: run Bitbucket Server with Java ... or ... (not yet verified). The issue is observed with Java ... that uses TLS ...; it is likely that TLS ... works fine and will not cause this issue. This can be done by setting the following JVM parameter while starting Bitbucket Server up: ...
1
If WEB-INF/beans.xml is used instead of META-INF/beans.xml, the BDA mode for WEB-INF/classes is "annotated", independent of the information provided by the beans.xml file. Just moving the same beans.xml to META-INF fixes the issue. A demo app is available in the following thread: ...
1
As discussed in ..., currently the TableSource only supports defining a rowtime field, but does not support extracting watermarks from the rowtime field. We can provide a new interface called DefinedWatermark, which has two methods: getRowtimeAttribute (can only be an existing field) and getWatermarkGenerator. The DefinedRowtimeAttribute interface will be marked deprecated. How to support periodic and punctuated watermarks, and how to support some built-in strategies, needs further discussion.
1
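A minimal sketch of the proposed interface, derived only from the two method names above; the signatures and the WatermarkGenerator type are assumptions:

{code:java}
/** Placeholder for the watermark generation strategy (an assumption). */
interface WatermarkGenerator {}

public interface DefinedWatermark {

    /** Name of the rowtime attribute; can only be an existing field. */
    String getRowtimeAttribute();

    /** Strategy that produces watermarks for the rowtime attribute. */
    WatermarkGenerator getWatermarkGenerator();
}
{code}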
If I go to a project's Agile tab, and then on the right click on the Hours, Issues, or Flow tab, I do not see more than ... days' worth of data. This makes these charts useless. Attached is an example from a project that has been ongoing for about ... weeks and which will end in about ... weeks; all I see is what happened today. This gives me no clue of where we're coming from or whether I have appropriate progress.
0
For JBIDE ..., please perform the following tasks. If you contributed anything to the ... or ... builds, you will need to perform the following in the ... branch: update your root pom to use parent pom version ... (org.jboss.tools : parent); ensure you've built and run your plugin tests using the latest target platform version:
{code}
mvn clean verify
{code}
or, once ... is released:
{code}
mvn clean verify ...
{code}
Close (do not resolve) this JIRA when done. Search for all task JIRAs, or search for Hibernate task JIRAs.
1
Dear OFBiz team, I would like to know the status of some imperative features in the General Ledger module. Essentially I am looking at the ability to create a new ..., delete a ..., and year-end reports: (a) trial balance, (b) balance sheet, (c) profit and loss statements, (d) cash flow. Regards, Prashant
1
When server-to-server calls are made (EJB remote calls where the transaction context is propagated), the EJB call can be routed to one pod while the recovery call may be directed to a different pod. Such a situation causes a consistency issue.

Let's say the scenario is: the first server (let's call it txclient) makes a remote EJB call to a remote server, which is one of the servers joined in a cluster named .... The txclient calls the .... The processing continues up to the start of the ..., and the ... crashes, or the host goes down, or a network issue happens. txclient understands that the process was not successful and asks the recovery manager to retry and finish. The recovery manager starts to call the remote server based on data saved in the object store of txclient. But unfortunately the recovery remote call goes not to the ... but to the .... txclient gets error code XAException.XAER_NOTA and removes data from its object store (/opt/eap/standalone/data/tx-object-store, /opt/eap/standalone/data/ejb-xa-recovery) and then never finishes the in-doubt transactions.

I'm in doubt whether it's an issue of OpenShift configuration or a trouble of the WFTC/EJB/Remoting layer in WildFly. This was tested with the WFLY operator from ....
1
Hello. In version ..., I have noticed that if a user logs in and uses the Account link in My Workspace to change or reset their password, the account type is lost on submission. This only occurs when the user changes their own password; if a system admin changes the password via the Users tool, the account type remains. Anyone else seen this, or know the fix for it? FYI: using Sakai ..., MySQL ..., Tomcat ....
1
I ran into this problem when trying to put my Avro records through the SqlTransform. I was able to reduce the reproduction path to the code below. This code fails on my machine, using Beam ..., with the following NullPointerException:
{code:java}
org.apache.beam.sdk.extensions.sql.impl.ParseException: Unable to parse query select name, direction from inputstream
	at ...
	at ... (Method)
	at ...
Caused by: java.lang.NullPointerException
	at ...
	... more
Caused by: java.lang.NullPointerException
	at ...
	... more
{code}
{code:java}
@Test
@Category(NeedsRunner.class)
public void ... {
    // the base test input schema
    Schema testSchema = new Schema.Parser().parse(
        "{\"type\": \"record\", \"name\": \"transport\", \"fields\": ["
        + "{\"name\": \"name\", \"type\": ..., \"default\": null},"
        + "{\"name\": \"direction\", \"type\": {\"type\": \"enum\", \"name\": \"direction\", \"symbols\": [...]}}]}");

    GenericRecord record = new GenericRecordBuilder(testSchema)
        .set("name", "test")
        .set("direction", new GenericData.EnumSymbol(testSchema.getField("direction").schema(), "PULL"))
        .build();

    // build list of test inputs
    List<GenericRecord> testRecords = Collections.singletonList(record);

    // convert into a PCollection
    PCollection<Row> input = pipeline
        .apply(Create.of(testRecords).withCoder(AvroGenericCoder.of(testSchema)))
        .apply(ParDo.of(new DoFn<GenericRecord, Row>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                c.output(AvroUtils.toBeamRowStrict(c.element(), null));
            }
        }))
        .setRowSchema(AvroUtils.toBeamSchema(testSchema));

    // this way we give a name to the input stream for use in the SQL
    PCollection<Row> result = PCollectionTuple.of("inputstream", input)
        // apply the SQL
        .apply("execute SQL", SqlTransform.query("select name, direction from inputstream"));

    pipeline.run().waitUntilFinish();
}
{code}
0
For JBIDE ...: ensure all commits pushed to ... have either been made in master or are intentionally excluded from master. Below are listed all commits made to ... that have not been found in master. It is possible these commits do exist in master but the patches are not identical; it is also possible that the commits made in ... are not relevant to master or should not be applied. If there are no commits listed below, please close (do not resolve) this issue. Otherwise, click on each link below and evaluate whether the given commit should be in master. If it should be in master, please find out why it is not in master; you may need to browse master's commit log to find a commit that should match. If you cannot find a matching commit and it should be committed to master, please cherry-pick that commit to master or otherwise merge it in. In comments, please indicate any suspicious commits (i.e., commits not related to simple version changes), whether they exist in master, whether you have successfully merged them in now, or whether they are not intended to be committed to master or are inappropriate for master. When all is complete, please close (do not resolve) this issue.

Folder: jbosstools-server at .... Searching for commits present in ... and missing from master. Commit hash ... is the last common ancestor of ... and master; it is ... commits ago in branch master and ... commits ago in branch .... Commits missing from master that are in ...: .... Search for all task JIRAs, or search for server task JIRAs.
1
The wildfly-dist project should be able to generate a licenses.xml that contains entries for all the artifacts that are not productized.
1
In our JSP we have:
{code}
<h:outputText id="helloInputLabel" value="Enter number of controls to display"/>
{code}
(This is the hello world from the JavaServer Faces in Action example.) As the "for" tag refers to helloInput, a component that hasn't yet been defined, we get an exception and the app won't deploy (below). Changing the order of the components, so the helloInput ... outputLabel is defined first, works, although we lose the ordering. Also, the notion of wrapping the two components as children in a panelGroup does not appear to work either. Behavior seen in ... and the nightly snapshot, running with WLS ... and Tomcat ....
{noformat}
Could not render message: Unable to find component 'helloInput', calling findComponent on component 'welcomeForm:errors'
	at ...
{noformat}
0
The Authorizer interface must be updated to accommodate changes introduced by the implementation of executor authentication: the authorization::Subject message must be extended to include the claims from a principal, and the local authorizer must be updated to accommodate this interface change.
0
Stack trace:
{noformat}
org.apache.hop.core.exception.HopPluginException: Unexpected error loading class with name ...: unable to load class ... in this classloader or in the parent
	at ...
Caused by: java.lang.ClassNotFoundException: unable to load class ... in this classloader or in the parent
	at ...
	... more
Caused by: java.lang.ClassNotFoundException: ...
	at ...
	... more
{noformat}
0
Start ... nodes with ...; stop one of three nodes.

Result: stopped node log:
{noformat}
Using configuration: examples/config/example-cache.xml
ver. ... Copyright (C) GridGain Systems
Quiet mode (logging to file ...). To see full console log here add -DGRIDGAIN_QUIET=false or "-v" to ggstart.{sh|bat}.
Failed to initialize HTTP REST protocol (consider adding gridgain-rest-http module to classpath).
Performance suggestions for grid (fix if possible; to disable set -DGRIDGAIN_PERFORMANCE_SUGGESTIONS_DISABLED=true):
  decrease number of backups (set 'keyBackups' to ...)
  disable fully synchronous writes (set 'writeSynchronizationMode' to PRIMARY_SYNC or FULL_ASYNC)
  disable query index (set 'queryIndexEnabled' to false)
  disable peer class loading (set 'peerClassLoadingEnabled' to false)
  disable grid events (remove 'includeEventTypes' from configuration)
If running benchmarks, see ...
To start Console Management & Monitoring run ggvisorcmd.{sh|bat}
GridGain node started OK
Topology snapshot ...
{noformat}

Other nodes print the same startup output, followed by:
{noformat}
# A fatal error has been detected by the Java Runtime Environment:
#  SIGSEGV at pc=...
# JRE version: Java(TM) SE Runtime Environment (build ...)
# Java VM: Java HotSpot(TM) Server VM (mixed mode, compressed oops)
# Problematic frame:
# V  [... jni_ThrowNew ...]
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again.
# An error report file with more information is saved as ...
# If you would like to submit a bug report, please visit ...
ggstart.sh: line ...: Abort trap: java $JVM_OPTS ... -DGRIDGAIN_UPDATE_NOTIFIER=false -DGRIDGAIN_HOME=$GRIDGAIN_HOME $JVM_XOPTS -cp $CP $MAIN_CLASS
{noformat}
Full log in attachment.
1
Some JMX classes (e.g., JobTrackerMXBean and TaskTrackerMXBean) in ... need to be forward-ported to ... in some fashion, depending on how MapReduce ... emerges. Note: a similar item for HDFS is already in ....
0
Somehow we got into a situation where Jira insists on re-indexing with exclusive locking only, but is unable to finish the job. While the re-indexing is running, the following message is spamming the log:
{noformat}
WARN mteterin xxx ... secure/admin/jira/IndexReIndex.jspa java.util.concurrent.ExecutionException: java.lang.NullPointerException: Cannot invoke method getValidators() on null object
{noformat}
It ends with the following:
{quote}
Indexing completed with errors. Task completed in ... minutes ... seconds, with unexpected error. Started today ... PM EST; finished today ... PM EST.
com.atlassian.jira.index.IndexingFailureException: Indexing completed with ... errors
	at ...
	at ... (Method)
	at ... (Source)
	at ...
{quote}
While we await a fix, any hint as to the workaround would be appreciated.
1
Apache Yetus ... adds an archiving capability to store files from the build tree. In order to use this, the rsync package needs to be added to the Hadoop Dockerfile.
0
Following the example in ... results in "creating pod" failures. To make sure the node has hugepages preallocated: oc describe nodes | grep ...

cat deploy-huge.yaml:
{code}
apiVersion: v1
kind: Pod
metadata:
  generateName: hugepages-volume-
spec:
  containers:
  - securityContext:
      privileged: true
    image: ...
    command: ["sleep", "inf"]
    name: example
    volumeMounts:
    - mountPath: /dev/hugepages
      name: hugepage
    resources:
      limits:
        memory: ...
        cpu: ...
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
  runtimeClassName: kata-oc
{code}

oc create -f huge.yaml; oc describe pod:
{noformat}
Type     Reason          Age   From               Message
Normal   Scheduled       ...   default-scheduler  Successfully assigned ... to ...
Normal   AddedInterface  ...   multus             Add ...
Normal   Pulling         ...   kubelet            Pulling image ...
Normal   Pulled          ...   kubelet            Successfully pulled image ...
Warning  Failed          ...   kubelet            Error: CreateContainer failed: timeout reached after ... waiting for device ...: unknown
{noformat}
1
Please test all new releases of JBoss SSO against Java ....
1
A C client impl for the Hot Rod protocol is documented here: .... It could be based off the Java reference impl client.
1
Currently the following only scans all rule files under "rules", but not any subdirectories. It would be nice to have a parameter like includeSubdirectories="true" in the resource tag:
{code}
<change-set xmlns="..." xmlns:xs="..." xs:schemaLocation="...">
  ...
</change-set>
{code}
The following code in PackageBuilder can be changed to add at least one level of subdirectory, and I'm sure with a little bit of refactoring (and retest) one can add N levels of subdirectories, though I would agree that we should limit the ...:
{code}
if (((InternalResource) resource).isDirectory()) {
    this.resourceDirectories.add(resource);
    for (Resource childResource : ((InternalResource) resource).listResources()) {
        if (((InternalResource) childResource).isDirectory()) {
            // process one level of subdirectory
            this.resourceDirectories.add(childResource);
            for (Resource subChildResource : childResource.listResources()) {
                if (((InternalResource) subChildResource).isDirectory()) {
                    continue; // ignore second-level subdirectories
                }
                ((InternalResource) subChildResource).setResourceType(resource.getResourceType());
                addKnowledgeResource(subChildResource, resource.getResourceType(), resource.getConfiguration());
            }
        } else {
            ((InternalResource) childResource).setResourceType(resource.getResourceType());
            addKnowledgeResource(childResource, resource.getResourceType(), resource.getConfiguration());
        }
    }
} else {
    addKnowledgeResource(resource, resource.getResourceType(), resource.getConfiguration());
}
{code}
0
I am trying to deploy artifacts under the group ID io.github.zhtmf using my credentials. I have successfully deployed and released several versions before; however, this time the artifact does not show up in the Staging Repositories section of my Nexus repository at ..., though the deployment itself completes without error. I made several deployments roughly ... hours ago, and I gave another try just now, but the result is the same. I am deploying a snapshot version using the maven-deploy-plugin version ... and the command mvn clean deploy. Part of the log file has been attached for your reference. Please advise what I am missing or how I can resolve this problem.
1
In the portlet environment, the ConversationManager is not getting initialized, and FrameworkAdapter.getCurrentInstance() is null as well. Part of the exception is as follows:
{noformat}
Caused by: java.lang.NullPointerException
	at ...
	... more
{noformat}
The filter is not working as expected in the portlet environment, but works perfectly well in a normal servlet container. Can MyFaces PortletBridge be used in some way to configure the filter to run in the portlet environment? Thanks and regards, Rashmi
1
Jaeger requires a persistent storage solution to retain collected traces for a period of time. Due to the use of Elasticsearch within the ECL group (logging integrated into OpenShift), it makes sense to investigate whether this option is also appropriate for Jaeger. The first step is to find an appropriate way to collaborate with the ECL group on a shared Docker image for the Elasticsearch required by Jaeger. Product git repo: ...
0
The SLA stats MetricCalculator thread does not wait until the storage is ready and attempts to run on a non-leading scheduler. This results in the executor thread's death and the inability to pick up SLA stat calculation on leader change.
1
In DiscoveryServiceImpl.doUpdateProperties(), filter any properties that contain ..., instead of just jcr:primaryType.
1
This is a simple add-on which does just diacritics replacement; no actual stemming in there yet.
0
Tasks may be explicitly dropped by the agent if all the following conditions are met: several launchTask or launchGroup calls use the same executor; the executor currently does not exist on the agent; and, due to some race conditions, these tasks are trying to launch on the agent in a different order from their original launch order (see below how this could happen). In this case, tasks that are trying to launch on the agent before the first task in the original order will be explicitly dropped by the agent (TASK_DROPPED or TASK_LOST will be sent).

Up until now, Mesos does not guarantee in-order task launch on the agent. Let's say the Mesos master sends two launchTask messages, launch1 and launch2, to an agent. In most cases (except ...), these messages are delivered to the agent in order. However, currently there are two asynchronous steps (unschedule GC and task authorization) in the agent task-launch path. Depending on the CPU scheduling order, launch2 may finish these two steps earlier than launch1 and get to the launch-executor stage first. In this case, prior to ..., these two tasks would still get launched: if launch1 and launch2 use the same executor, whoever reaches the launch-executor stage first will launch the executor.

However, after resolving ..., agents start to enforce some order for tasks using the same executor. Specifically, when the master crafts the launch task message, it will specify the launch_executor flag. Thus, in the above case, launch1 will have launch_executor set to true, and launch2 (and any subsequent tasks that use the same executor) will have the flag set to false. If launch2 reaches the launch-executor stage before launch1, due to the race condition described above, the agent will see that its launch_executor flag is false but the executor specified in the launchTask message is not running; as a result, it will explicitly drop launch2, as in ....

Based on discussion with ... and ..., we should take an explicit approach of using a process sequence to ensure ordered task delivery, on both the master and the agent.
1
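A minimal Java sketch of the per-executor "process sequence" idea (Mesos itself is C++; all names here are hypothetical): each launch step is chained onto the previous step for the same executor, so delivery order matches submission order:

{code:java}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Per-executor sequence: each submitted step runs only after the previous
// step for the same executor ID has completed, preserving launch order.
class LaunchSequence {
    private final Map<String, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();

    void submit(String executorId, Runnable launchStep) {
        tails.compute(executorId, (id, tail) ->
                (tail == null ? CompletableFuture.completedFuture((Void) null) : tail)
                        .thenRun(launchStep));
    }
}
{code}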
The following JIRAs are not updated in ...:
1
One step of executing a Beam pipeline is staging pipeline dependencies to the runner; for a Python SDK example, see ... and its implementations. As a part of this process, we create a manifest of all staged artifacts. We should identify the requirements that constitute correctness of the manifest (for example, artifacts do not repeat twice; see also ...) and verify these requirements on the SDK side at pipeline submission, to fail faster.
0
From the ivy-user mailing list: need a way to create POM files from the Ivy configuration. Proposed: let people declare which Ivy confs .... One funny thing is that Maven has a different eviction policy from Ivy: the closest declaration to the root of the graph wins, and a conflict at the same depth is an error. We may want to avoid transitive dependency mismatch by explicitly listing the complete resolved graph in the POM, as resolved by Ivy rather than Maven. That way recipients get what we declared, with the right to override it by declaring stuff closer in their own POMs. Maybe that would be a switch.
0
Reported by Mathias Rodenberg: Tests in error » IllegalState: could not find a valid dock ...
0
The hadoop-auth AuthenticationFilter returns a 401 Unauthorized without a WWW-Authenticate header. This is illegal per the HTTP RFC and causes an NPE in HttpURLConnection. This is half of a fix that affects WebHDFS; see ....
1
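A minimal sketch of the expected behavior in servlet terms (illustrative, not the actual hadoop-auth code; "Negotiate" is just an example challenge scheme):

{code:java}
import javax.servlet.http.HttpServletResponse;

class ChallengeSketch {
    // Per the HTTP RFC, a 401 response must carry a WWW-Authenticate
    // header; omitting it trips up HttpURLConnection's auth handling.
    static void sendUnauthorized(HttpServletResponse resp) {
        resp.setHeader("WWW-Authenticate", "Negotiate");
        resp.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
    }
}
{code}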