Columns: text_clean (string, lengths 3 to 505k), label (int64, values 0 or 1)
Epic goal: construct the OpenShift installer (IPI) metadata.json file for users to be able to download from the download configuration menu. Why is this important: users may want to extract the metadata.json files in the event that the hub has failed and they still need a way to easily run the OpenShift installer destroy process. Scenarios: ... Acceptance criteria: CI must be running successfully with tests automated. Release technical enablement: provide necessary release enablement details and documents. Dependencies (internal and external): ... Previous work (optional): ... Open questions: ... Done checklist: CI - CI is running, tests are automated and merged; Release enablement; DEV - upstream code and tests merged; DEV - upstream documentation merged; DEV - downstream build attached to advisory; QE - test plans in Polarion; QE - automated tests merged; DOC - downstream documentation merged.
0
When trying to create a fabric in a JBoss Fuse build installation, you get a whole list of exceptions and you end up with a messed-up container.
0
When you change the name of a priority, status, or resolution, we do not call refresh for their caches and they become stale. This affects places that use the objects rather than the GenericValues; the changes are only reflected once you restart the system. See AbstractEditConstant.doExecute: this should either call an abstract method that just clears that constant's cache, or it could just call DefaultConstantManager.refresh().
1
In the first step, we will implement some object-based API to verify the data consistency among replicas. In the next step, we may offer some container- or pool-based tools to verify related data consistency in the given system.
1
Propose to refactor NodeLabelsProvider / AbstractNodeLabelsProvider to be more generic, so that node attributes providers can reuse these interface/abstract classes.
1
The JobTracker on our installation runs out of heap space rather quickly, with less than ... jobs at one time, even after just ... jobs. Running in ... mode with larger heap space does not help; it will use up all available heap. INFO org.apache.hadoop.ipc.Server: IPC Server handler ... on ..., call ... false true from ...: error java.io.IOException: java.lang.OutOfMemoryError: GC overhead limit exceeded; java.io.IOException: java.lang.OutOfMemoryError: GC overhead limit exceeded.
1
qd_buffer_list_clone (on qd_message_copy, for qd_message ... ingress) is dominated by cache-miss costs: the cost to allocate a new qd_buffer_t, and the cost to reference any qd_buffer_t from the source qd_buffer_list_t.
0
At least two product teams should have tested the latest version and provided feedback on its integration into their product. Further bugs and improvement tasks for this release may be added to the epic as a result of addressing this task.
0
For JBIDE ..., please perform the following. If nothing has changed in your component since JBT ... or JBDS ... (e.g. XULRunner, FreeMarker, BIRT), {color:red}reject this JIRA{color}. Otherwise, for all others: make sure your component has no remaining unresolved JIRAs set for fixVersion = ...; unresolved issues should be marked with a respin label unless they are issues which cannot be completed until closer to GA (unresolved JIRAs with fixVersion = ...: ...). In the branch, update your root pom to use parent pom version ... (org.jbosstools:parent). Ensure you've built and run your plugin tests using the latest target platform version: {code}mvn clean verify{code} If you did not already do so, {color:orange}in your master branch{color}: {code}git checkout master; git pull origin{code} then update your {color:orange}master branch{color} parent pom to use the latest version (org.jbosstools:parent). Now your root pom will use parent pom version ... in your branch and in your {color:orange}master{color}. Close (do not resolve) this JIRA when done. Search for all ... task JIRAs, or search for Arquillian task JIRAs.
1
Note: this bug report is for Confluence Server; using Confluence Cloud? See the corresponding bug report. Started a Confluence OD instance for the last ShipIt, and the welcome video fails to load when clicked. The link uses an http:// URL while OD uses https://, which blocks unsecured content. Switching to a protocol-less URL ought to resolve the issue.
0
Background: here it seems to be VM-protocol specific.
0
Add a queue property to queues to ensure that messages enqueued on the queue have a TTL. Allow for setting a vhost-wide default for newly created temporary queues.
0
Cannot update queries any more. Error details: Date: Mon Sep ... CEST. Error parsing server response: The reference to entity "sheetworkflow" must end with the ';' delimiter. Severity: error. Product: Eclipse (org.eclipse.epp.package.java.product), plugin com.atlassian.connector.eclipse.internal.jira.core; session ... Sun Microsystems Inc.; BootLoader constants: NL=de_DE; framework arguments: -product org.eclipse.epp.package.java.product; command-line arguments: -os ... -ws ... -arch ... -product org.eclipse.epp.package.java.product. Exception stack trace: com.atlassian.connector.eclipse.internal.jira.core.service.JiraException: Error parsing server response: The reference to entity "sheetworkflow" must end with the ';' delimiter, at ... (stack frames elided). Caused by: org.xml.sax.SAXParseException: The reference to entity "sheetworkflow" must end with the ';' delimiter, at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(Unknown Source), at ... more.
1
All you get is: standalone.sh: JBoss Bootstrap Environment: JBOSS_HOME=..., JAVA=..., JAVA_OPTS=-Dorg.jboss.resolver.warning=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true. INFO JBoss Modules version ... INFO JBoss MSC version ... INFO JBoss AS "Ahoy!" ... ERROR No handler for add at address ... INFO JBoss AS "Ahoy!" started in ... (... of ... services started; ... services are passive or on-demand).
1
Given a user with only the assigned role realm-management/view-users (which translates to effective roles query-groups, query-users, and view-users): when they access the security admin console, the link to the Groups page is visible, but it redirects to console/notfound with the message "Resource not found". There is nothing in Keycloak's logs. The only way I could get the Groups page to work was to add the view-realm role, but that of course also gives access to a bunch of other pages. As a side note, it would be great if groups management could be configured independently from users management, say with a view-groups role.
0
Screen-sharing base URL is broken: on FF, the screen-sharing base URL is ...;jsessionid=...; the jsessionid needs to be cut off.
1
The old incubator.apache.org site still exists and is potentially confusing; we should redirect it to the db.apache.org site. Per Jean Anderson: add a line to ... that looks like this: RedirectMatch permanent ... jdo ... After you commit that, on people.apache.org do: cd /www/incubator.apache.org; svn up. Once that redirect is in place, you can simply remove the files in /www/incubator.apache.org/jdo.
0
Wooguil Pak: Hello Jaroslav Tulach, I'm currently testing DEW and it seems that it does not support parameters like integer arrays. When I try @JavaScriptBody(args = {"id", "r"}, javacall = true, body = "var array = new Array(n); ...") public static native void test(String id, ... r), the compiler complains "callback to ... with wrong parameters"; the only known parameters are [L... If that is true, is there no way to pass an integer array to Java as a parameter? The report is correct; there is an error: the int shall be [I and not [L.
0
I would like to use the Ant tasks to resolve the dependencies of my web application. To make the dependencies available to my web application, I have to make them available in the WEB-INF/lib directory. At the moment there is no elegant way to copy the jars I got using the dependencies task to the WEB-INF/lib directory. I know that a workaround would be to copy the jars manually from the repository, but then additional information would be needed in the Ant script; this would not be elegant, IMHO.
0
As the spec says: public static final ResourceBundle getBundle(String baseName) gets a resource bundle using the specified base name, the default locale, and the caller's class loader. Calling this method is equivalent to calling getBundle(baseName, Locale.getDefault(), this.getClass().getClassLoader()), except that getClassLoader() is run with the security privileges of ResourceBundle.
0
Currently, changing the indentation length is very slow. Example and how to reproduce: open some regular PHP file with at least ... lines, select all text (Ctrl+A), and change the indentation length (Tab or Shift+Tab). Currently one single change (one press of Tab or Shift+Tab) takes tens of seconds with ... CPU usage.
0
The update done in ... causes downstream issues, caused by there being multiple artifacts exporting javax.annotation. See the change; for now, ... seems to be the simplest possible fix.
0
gulp-size: total ... kB. Finished 'styles' after ... ms; finished 'views' after ... ms. Reactor summary: Ambari Main: SUCCESS; Apache Ambari Project POM: SUCCESS; Ambari Web: SUCCESS; Ambari Views: SUCCESS; Ambari Admin View: FAILURE; Ambari Server: SKIPPED; Ambari Agent: SKIPPED; Ambari Client: SKIPPED; Ambari Python Client: SKIPPED; Ambari Groovy Client: SKIPPED; Ambari Shell: SKIPPED; Ambari Python Shell: SKIPPED; Ambari Groovy Shell: SKIPPED. BUILD FAILURE. Total time: ... Finished at: Mon Sep ... EDT. Final memory: ... Failed to execute goal ... (gulp build) on project ambari-admin: command execution failed; process exited with an error (exit value ...). org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal ... (gulp build) on project ambari-admin: command execution failed.
1
Dart presumes network (big-endian) byte order for serialization of doubles. It appears other languages, such as Go and Java, use little-endian byte order for doubles.
0
Once available in late August or early September, we should move up to it. Points (Fibonacci): JBT TP; JBDS TP; Central TP; JBT build sites / JBDS site build; update install/matrix jobs to use the new Eclipse JEE platform binary; mailing list announcements.
1
While trying to perform the release of ..., I stumbled upon javadoc errors not finding artifacts: {noformat}Failed to execute goal ... aggregate on project jackrabbit-oak: an error has occurred in Javadocs report generation (exit code ...). javadoc warning: multiple sources of package comments found for package org.osgi.service.component; javadoc warning: multiple sources of package comments found for package org.osgi.util.tracker; error: cannot find symbol: import org.apache.lucene.index.TermDocs (symbol: class TermDocs, location: package org.apache.lucene.index); error: cannot find symbol: import org.apache.lucene.index.TermEnum (symbol: class TermEnum, location: package org.apache.lucene.index){noformat} A deeper investigation highlighted that oak-upgrade leverages a Lucene API version transitively pulled in via jackrabbit-core; Oak, on the other hand, leverages ... (see lucene.version in the parent pom). While it is possible to work around the javadoc error with something like {noformat}diff --git a/pom.xml b/pom.xml ... basedir/oak-doc/target/site ... notimestamp ... org.apache.lucene lucene-core ...{noformat} adding the right dependency to oak-upgrade makes it fail to compile, hinting at two classes that seem to be gone with that release of Lucene: TermDocs and TermEnum. {noformat}diff --git a/oak-upgrade/pom.xml b/oak-upgrade/pom.xml ... org.apache.tomcat tomcat-jdbc ... org.apache.lucene lucene-core ${lucene.version} ... org.apache.lucene lucene-analyzers-common ${lucene.version} ... Failed to execute goal default-compile on project oak-upgrade: compilation failure: .../oak-upgrade/src/main/java/org/apache/jackrabbit/oak/upgrade/RepositoryUpgrade.java: cannot find symbol: class TermDocs (location: package org.apache.lucene.index); cannot find symbol: class TermEnum (location: package org.apache.lucene.index); cannot find symbol: class TermEnum (location: class org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade); cannot find symbol: method terms(org.apache.lucene.index.Term) (location: variable reader of type org.apache.lucene.index.IndexReader); cannot find symbol: class TermDocs (location: class org.apache.jackrabbit.oak.upgrade.RepositoryUpgrade); cannot find symbol: method termDocs(org.apache.lucene.index.Term) (location: variable reader of type org.apache.lucene.index.IndexReader){noformat}
1
I'm building a simple scenario where I try to decrypt a message; the key identifier is the SubjectKeyIdentifier. I loaded my private key into the key store. The error is: java.lang.NullPointerException while trying to invoke the method org.apache.ws.security.components.crypto.Crypto.loadCertificate(java.io.InputStream) of an object loaded from local variable 'crypto'.
0
The search link next to the subject of an email always searches for ..., and thread links don't appear below the email.
1
The Email Archive tool no longer shows up in Site Info > Manage Tools, and is also not present in the list of available system tools in Admin Workspace > Sites > Edit Site > Add/Edit Pages > New Page > Tools > New Tool.
1
Here is a snippet of the errors seen when building against HBase: {code}Invalid POM for ...; transitive dependencies (if any) will not be available; enable debug logging for more details. Some problems were encountered while processing the POMs: dependencyManagement.dependencies.dependency.artifactId for org.apache.hbase:${compat.module}:jar with value '${compat.module}' does not match a valid id pattern (line ..., column ...); dependencyManagement.dependencies.dependency.artifactId for org.apache.hbase:${compat.module}-test:jar with value '${compat.module}' does not match a valid id pattern (line ..., column ...){code}
1
This bug was imported from another system and requires review from a project committer before some of the details can be marked public. For more information about historical bugs, please read "Why are some bugs missing information?". You can request a review of this bug report by sending an email to ...; please be sure to include the bug number in your request.
1
Tests in TestDFSAdminWithHA fail on Windows after testUpgradeCommand, with error message: "Could not format one or more JournalNodes; exceptions thrown: Directory ... is in an inconsistent state: can't format the storage directory because the current directory is not empty", at ... at java.security.AccessController.doPrivileged(Native Method) at ... (stack frames elided). Restarting with the -upgrade option seems to keep the JournalNode directory from being released after testUpgradeCommand: {code:java}// start with -upgrade option
dfsCluster.getNameNodeInfos()[...].setStartOpt(HdfsServerConstants.StartupOption.UPGRADE);{code} ... does not have this issue, because there is no testUpgradeCommand in ...
0
If a date-time pattern contains no year field, the day-of-year field should not be ignored if it exists.
0
The easiest way to tune the threading implementation of an RPC server is to provide an alternate thread-pool implementation.
0
Local indexes usually have the best overall profile (to be confirmed experimentally): they work well with mutable data, are transactional (HBase), and perform well for heavy updates, including the initial seed. We should make them the default when a new index is created.
0
Please enable Git and GitHub for the repository, so it is easier to suggest and test changes. Every patch to ... must be well tested, so attaching SVN patch files to JIRA is a recipe for disaster.
1
RemasteredLogger is a logger as simple as it is complex. Using ... as an API base, this logger provides the developer with ease of use for a wide variety of utilities. It also allows the simplicity of saving under several loggers, but also the complexity of being able to create class or single-event loggers.
0
In Firefox ..., and at least not in Chrome (see ... for proof). It appears to be caused by ... It could be fixed either by reordering the LESS files in the build so that header comes before buttons, or by adding an additional rule to header.css, such as: {code}.aui-header a.aui-button { line-height: ...; }{code} Not sure which you guys would prefer; probably the latter.
1
The grades CSV import option is not using IDs to import grades. We have tested locally and on maintenance and confirmed that different students, with different IDs and different grades, will all receive the same grade after import. Currently, the affected students will all receive the grade for the last entry with the same name in the CSV file.
1
I'm an employee of LINE Corp. I need to deploy jar files to Maven Central. Could you please grant me ... for com.linecorp? Name: Junpei Koyama. Ref: ...
0
The default multipart size is ..., which is ... bytes, while all the chunks saved by ... are ... bytes, which is greater than ... Looking into ObjectEndpoint.java, it seems the chunk size is retrieved from the Content-Length header.
1
For LLAP, and for general ease of use of the code.
0
Based on this comment: {quote}Complex tables heading: use the same size, font-weight (semibold), and color for regular tables and complex table headings that appear in rows.{quote}
0
References to JS scripts were included by the samples XSLT unconditionally. This forced clients to load a bunch of needless scripts. I've created a patch which hopefully will prevent this bad behaviour. I'm using Bugzilla for the first time, so please be understanding.
0
There are missing artifacts in build ...: commons-daemon, org.apache.tomcat..., org.apache.eclipse... These are artifacts connected to Red Hat productized Tomcat; see build ... and consult new versions with Chris.
1
I am unable to run the load tests using the addkeys.py and loadtest.py scripts. The addkeys.py script shows in the output that it is using cryptonyms, and we have switched to DIDs (see attachment). In order to perform operations, the nym that is added must have an associated verkey. In addition, to run the SEND ATTRIB command, you have to add a nym without a verkey in order to also send attribs for that nym; if you add a nym with a verkey, then only the owner of that nym can add an attrib to it, so the option for SEND ATTRIB in the loadtest.py script needs to account for that.
1
Added Codahale metrics, but it's not being uploaded from the client; this triggers a stack trace in the AM.
1
Note: this bug report is for Confluence Server; using Confluence Cloud? See the corresponding bug report. Problem description: it seems that regular events are inheriting permissions from JIRA events inside a Team Calendar. Steps to reproduce: 1. create a new calendar in Confluence and two events, of type "Event" and "JIRA Issue Dates"; 2. configure the JIRA Issue Dates event to use a JIRA filter that is restricted to your user; 3. create a few events in the calendar, just to populate it for testing purposes; 4. access Confluence with a different user (one that is not the owner of the JIRA filter) and go to Team Calendars; 5. click "Add existing calendar" and add the calendar created previously. What should be happening: you should be able to see regular events within the calendar, but not the ones related to JIRA, because the JIRA filter is restricted to the user that set up the calendar in the first place. What is currently happening: you see nothing; no events are displayed at all. Plus, it appears that the calendar doesn't contain any events: if you look at the calendar name at the right, you'll see "No events" under its name. However, if you create a new event of any type inside the calendar, suddenly all regular events start showing up once again (the ones you should be able to see in the first place, and not the ones related to JIRA), and if you delete the event you just created, all events continue to appear in the calendar. But if you remove and re-add the calendar, the problem happens again: no events at all.
0
As reported in ..., Tylenda added a comment ... AM: "This reminds me of a test case which sometimes fails for me while running against MySQL, and which I did not have time to look at; maybe this is connected, although the test case is single-threaded. The stack trace I am receiving is: testDefaultValues(org.apache.openjpa.persistence.generationtype.TestGeneratedValues): time elapsed ... sec, FAILURE: junit.framework.AssertionFailedError at ... (stack frames elided)." The failing contains check means the UUID generator sometimes generates duplicates.
0
Steps to reproduce: load the dashboard, minimize a gadget, and hit refresh in the browser. The gadget still appears minimized; now try to expand it: it won't work unless you do another refresh of the page. If you just minimize and expand without refreshing the page, things work fine.
0
During review of ..., ... mentioned that considering shading Netty before putting the fix into ... would give users a better experience when upgrading Hadoop.
0
Obtaining logs for the last N bytes gives the following exception: {code}yarn logs -applicationId ... -containerId ... -log_files syslog -size ...
Exception in thread "main" java.io.IOException: The bytes were skipped are different from the caller requested
    at ... (stack frames elided){code}
1
As an add-on using the JIRA IssueService to create issues, it is not possible to create issues with labels atomically; this needs to be done as a post-create activity. This activity will work fine when a JIRA account holder has the Edit Issue permission in JIRA, but will not work when the project is JSD-enabled, regardless of permission scheme permissions. The only way that labels can be set correctly is if the user involved is a member of the service-desk-agents group, which is not viable for JIRA account holder customers. The only workaround I can see for this is to just ignore security and set the labels directly.
0
Cluster connections: {quote}The API changed in HBase ...; it's been cleaned up, and users are returned interfaces to work against rather than particular types. In HBase ..., obtain a cluster Connection from ConnectionFactory, and thereafter get from it instances of Table, Admin, and RegionLocator on an as-need basis. When done, close the obtained instances. Finally, be sure to clean up your Connection instance before exiting. Connections are heavyweight objects: create once and keep an instance around. Table, Admin, and RegionLocator instances are lightweight: create as you go, and then let go as soon as you are done by closing them. See the {color:red}client package javadoc description{color} for example usage of the new HBase API.{quote} This link is currently pointing to ... instead of ...
0
The integration test is failing, not because the unit test fails, but because the "run pipeline unit tests" action is not initializing pipeline parameters correctly. Default values are not set in this particular example, but parent variables and parameter values also need to be set.
1
Enable container workloads to use IBM Crypto Express (CEX) cards to perform cryptographic operations at an HSM level; in particular, enable containers to use secure- and protected-key cryptography. This enablement is about providing a Kubernetes device plugin to make CEX resources (APQNs) available to containers in pods as extended resources. The development of the Kubernetes device plugin is handled by IBM Linux and will be provided in a GitHub community. Note that this is not about supporting CEX to be consumed by RHCOS itself (e.g. LUKS disk encryption). References: epic on IBM internal GitHub.
1
{noformat:nopanel=true}This is item ... from Greg Hudson's thread about API issues we might want to solve before ... The entire thread is here: ... Probably the most useful single mail about item ... is this one, also from Greg Hudson, in which he says: if we're pretty sure these will get into APR, we could name them svn_apr_foo and SVN_APR_FOO, make sure there are no doxygen comments for them, and note in the header files that they're for internal use only; then it should be reasonably safe to remove them from the libraries in the future. If we think some of them won't make it into APR, then we should pick names we're comfortable with supporting.{noformat}
1
<bstansberry> hbraun: the first part is resolved from system properties
<hbraun> the latter is the default?
<bstansberry> yes
<hbraun> wicked
<bstansberry> but what's stored in the model is a ModelNode of type ModelType.EXPRESSION
<bstansberry> and the metadata for the attribute includes expression-allowed = true
<hbraun> ok
<hbraun> did you just want to raise my attention?
<bstansberry> those really should be appearing all over the place
<bstansberry> yeah
<bstansberry> really, a very heavy share of attributes should allow expressions
<hbraun> damn
<hbraun> i need to think about the implications
<hbraun> for the ui
<hbraun> so attributes actually allow multiple types
<hbraun> i.e. string or expression
<bstansberry> yes, X or expression
<bstansberry> where X is the type in the metadata
<hbraun> do we have proper setters for expression values?
<hbraun> in the dmr lib, i mean
<hbraun> or how does it currently work?
<bstansberry> ModelNode.setExpression(String s)
<hbraun> i.e. how do you create an add operation that contains an expression?
<hbraun> ah, guess we didn't port that to the gwt lib yet
1
We are planning to migrate a Sakai instance from MySQL to Oracle. Would anyone who has been through a similar migration be willing to share their experiences and what tools they found useful? Thank you, Jorge Cadiz.
0
Hi, I develop my web applications in Struts and use Tomcat (Catalina) as my servlet container. I was trying to use the connection pool utility and accordingly defined the same in struts-config.xml as follows: <set-property property="description" value="tpard data source description"/> <set-property property="driverClass" value="oracle.jdbc.driver.OracleDriver"/> <set-property property="url" value="..."/>. With this entry, my Struts application either does not start up or sometimes does not work. If the above entry is the last in struts-config.xml, Tomcat when starting throws a parse exception: Parse error at line ..., column ...: the content of element type "struts-config" must match "(data-sources?,form-beans?,global-forwards?,action-mappings?)". org.xml.sax.SAXParseException: The content of element type "struts-config" must match "(data-sources?,form-beans?,global-forwards?,action-mappings?)" at ... at java.lang.reflect.Method.invoke(Native Method) at ... (stack frames elided). If I keep the entry the very first in struts-config.xml, immediately after the beginning tag, Tomcat does not give a parse exception, but the web application page which uses the tag gives the exception: Apache HTTP Status ... Internal Server Error. Type: exception report. Message: internal server error. Description: the server encountered an internal error that prevented it from fulfilling this request. Exception: javax.servlet.ServletException: Cannot find ActionMappings or ActionFormBeans collection at ... (stack frames elided). Root cause: javax.servlet.jsp.JspException: Cannot find ActionMappings or ActionFormBeans collection at ... (stack frames elided). In other words, I cannot use the connection pool utility that comes with Struts. Clues will be ...
1
It is currently not possible to copy a resource whose name starts with a dot, such as .DS_Store or .hiddenfile, or any other name that happens to match DirectoryScanner.DEFAULTEXCLUDES. These default excludes are added to every file set, no matter what includes or excludes the user has specified; e.g. a .DS_Store is silently ignored. For these cases, the Maven Assembly plugin provides an option called useDefaultExcludes, which may be set to false. The Maven Resources plugin should also offer such an option.
0
Release Spring Roo ...: create the GitHub tag; deploy Maven artifacts on Maven Central; deploy documentation on ...; remove the GitHub branch; create a branch called ... to fix future errors; publish Spring Roo ...
1
Description: this is an issue found when using the new Quay TNG operator to deploy Quay. After creating the Quay CR, open the Quay config editor and modify the storage and SMTP configurations; after clicking "Validate configuration changes", you get the error message "could not connect to Redis with values provided" in the build logs. Redis error: MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error. Quay catalog source image: ...; image version: ... Message from Redis: Oct ... failed opening the RDB file dump.rdb (in server root dir /data) for saving: Permission denied; Oct ... background saving ...; Oct ... changes in ... seconds; Oct ... background saving started by pid ...; Oct ... failed opening the RDB file dump.rdb (in server root dir /data) for saving: Permission denied; Oct ... background saving error. Quay CR: {code:java}apiVersion: ...
kind: QuayRegistry
metadata:
  name: quay-demo{code} Steps: 1. log in to the OCP console, create a new OCP namespace, and deploy the Quay TNG operator; 2. create the Quay CR with: create -n ... -f quay.yaml; 3. open the Quay config editor; 4. modify the registry storage and SMTP configurations; 5. click "Validate configuration changes". Expected results: can validate configurations successfully. Actual results: it failed to validate configurations; see the screenshot for the error message.
1
Create a sequence with a wsa:ReferenceProperties or wsa:ReferenceParameters element as children of the wsrm:AcksTo element, as in the following example: <wsa:Address xmlns:wsa="..."/> <wsa:ReferenceParameters xmlns:wsa="..."> <actns:OriginalEPR xmlns:actns="http://..."/> <actns:AccessPointKeyID xmlns:actns="http://..."/> </wsa:ReferenceParameters>. Sandesha doesn't send the wsa:ReferenceParameters or wsa:ReferenceProperties children along with the acknowledgments.
1
In "Developing EJB Applications", chapter "Invoking Session Beans", section "About EJB Client Contexts", the first paragraph contains this sentence: {noformat}The means an EJBClientContext can potentially contain any number of EJB receivers.{noformat} which should probably be: {noformat}This means an EJBClientContext can potentially contain any number of EJB receivers.{noformat} Revision: ...
0
finj is a Clojure library for financial computation.
0
We are bumping up against the ...-minute time limit for tests pretty regularly now. Since we have decreased the number of shuffle partitions and upped the parallelism, I don't think there is much low-hanging fruit to speed up the SQL tests; the tests that are listed as taking minutes are actually ... of tests that I think are valuable. Instead, I propose we avoid running tests that we don't need to. This will have the added benefit of eliminating failures in SQL due to flaky streaming tests. Note that this won't fix the full builds that are run for every commit; there, I think we should just up the test timeout. cc ...
1
Based on what is currently available in the TableAccessInfo, we can infer when it would be a good idea to add bucketing/sorting metadata for tables; however, we can't easily tell if we're already getting the benefits of bucketing/sorting. This information can be improved by: (a) storing the input table/partition objects, so that we can tell if the tables/partitions are already bucketed/sorted; (b) running the TableAccessAnalyzer after the logical optimizer, so that we can tell from the operators whether or not we are already getting benefits (bucketed sort-merge map joins, or map-side group-bys).
0
In order to fix ..., we need to switch to Jaeger-based activity tracking. Epic brief: ...
1
A container with a broker deployed is stopped after a ZK connection timeout. This may cause hawtio and the CLI to report the container as disconnected, while it is still possible to connect to the container using the container-connect command. {code}org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
    at ... (stack frames elided)
WARN  ConnectionState - connection attempt unsuccessful after ... (greater than max timeout of ...); resetting connection and trying again with a new connection
INFO  ZooKeeper - initiating client connection
INFO  ConnectionStateManager - state change
INFO  GitDataStoreImpl - shared counter reconnected, doing a ...
INFO  ConnectionStateManager - state change
WARN  ConnectionState - session expired event
INFO  ActiveMQServiceFactory - disconnected from the ...
INFO  ActiveMQServiceFactory - broker is now a slave, stopping the ...
INFO  BrokerService (org.apache.activemq.activemq-osgi) - Apache ActiveMQ is shutting ...
INFO  OsgiFabricDiscoveryAgent (org.apache.activemq.activemq-osgi) - closing ...
INFO  NetworkConnector (org.apache.activemq.activemq-osgi) - network connector DiscoveryNetworkConnector ... fabric ... BrokerA ... BrokerService
INFO  ZooKeeper - session ...
INFO  TransportConnector (org.apache.activemq.activemq-osgi) - connector openwire ...
INFO  ZooKeeper - initiating client connection
INFO  TransportConnector (org.apache.activemq.activemq-osgi) - connector mqtt ...
INFO  TransportConnector (org.apache.activemq.activemq-osgi) - connector amqp ...
INFO  TransportConnector (org.apache.activemq.activemq-osgi) - connector stomp ...
INFO  ConnectionStateManager - state change
INFO  ContextHandler (org.eclipse.jetty.aggregate.jetty-all-server) - stopped ...
INFO  DefaultPullPushPolicy - performing a pull on remote url ...
INFO  ActiveMQServiceFactory - broker is now the master, starting the ...
INFO  ActiveMQServiceFactory - broker is being ...
INFO  TransportConnector (org.apache.activemq.activemq-osgi) - connector ws ...
INFO  PListStoreImpl (org.apache.activemq.activemq-osgi) - PListStore ...
INFO  KahaDBStore (org.apache.activemq.activemq-osgi) - stopping async queue ...
INFO  KahaDBStore (org.apache.activemq.activemq-osgi) - stopping async topic ...
INFO  KahaDBStore (org.apache.activemq.activemq-osgi) - stopped
INFO  DefaultPullPushPolicy - pull result ...
INFO  ActiveMQServiceFactory - disconnected from the ...
INFO  GitDataStoreImpl - shared counter reconnected, doing a ...
INFO  DefaultPullPushPolicy - performing a pull on remote url ...; pull result ... (repeated)
INFO  ActiveMQServiceFactory - reconnected to the ...
INFO  BrokerService (org.apache.activemq.activemq-osgi) - Apache ActiveMQ uptime ...
INFO  BrokerService (org.apache.activemq.activemq-osgi) - Apache ActiveMQ is ...
INFO  ActiveMQServiceFactory - broker shut down, giving up being ...
INFO  ActiveMQServiceFactory - disconnected from the ...
INFO  ActiveMQServiceFactory - lost ZooKeeper service for broker, stopping the ...
INFO  HttpServiceFactoryImpl - unbinding bundle ...
INFO  ServerSession (org.apache.sshd.core) - server session created from ...
INFO  SimpleGeneratorHostKeyProvider (org.apache.sshd.core) - generating host ...
INFO  ServerSession (org.apache.sshd.core) - kex server->client ...; kex client->server ...
INFO  ServerUserAuthService (org.apache.sshd.core) - session authenticated{code}
0
for jbide please perform the following if nothing has changed in your component since eg xulrunner gwt freemarker birt colorredreject this make sure your component has no remaining unresolved jiras set for fixversion jiras with fixversion ensure your component featuresplugins have been properly upversioned eg from to note if you already did this for the previous milestone you do not need to do so againcodemvn dtychomodemaven update your root pom to use parent pom version code orgjbosstools parent ensure youve built run your plugin tests using the latest target platform version clean verify if the tp is already released ormvn clean verify if still being branch from your existing master branch into a new branch codegit checkout mastergit pull origin mastergit checkout b push origin close do not resolve this jira when donesearch for all task jira or search for gwt task jira
1
env cluster sles with slurm or cray sw with cray job manager centos and slurm at least servers any number of clients purpose add fio jobs to the soak tests add fio jobs to the soak tests the number of client nodes transfer sizes block sizes and test duration must all be customizable and multiple test runs will vary these values between runs fio jobs are run both by themselves and in combination with other jobs and administrative jobs
0
this bug was imported from another system and requires review from a project committer before some of the details can be marked public for more information about historical bugs please read why are some bugs missing informationyou can request a review of this bug report by sending an email to please be sure to include the bug number in your request
0
it seems that webconsole along its plugins import version only newest pax web for instance uses and thus these bundles do not resolveas a solution i propose opening up the version range as per coderequirecapability osgicontract javaxservlet javaxservlethttp codeps isnt available in the maven repo but i assume it would have still this same defect as the previous ones
1
the current build is failing with on testdiskerrortestreplication with the following errorcannot lock storage the directory is already locked
1
do test builds and pull changes into viewerdevelopment from the integration queue
0
if you create a qeventloop within an adopted native thread the qthreaddata qeventdispatcher and associated objects will not be deleted on thread exit and will leakthese conditions occur in the tstqthread autotest in the adoptedthreadexec stepthe issue seems to be that all qobjects in a thread have a ref on the qthreaddata if these refs are not cleared at thread exit the qthreaddata will never be deleted in this case qeventloop is causing a qeventdispatcher to be created which normally qthread would clean up on exit but since the qeventdispatcher exists and refs the qthreaddata the qthread never cleans up so the objects exist forever
0
the memory of the cluster namenode continues to grow and the full gc eventually leads to the failure of the active and standby hdfs htrace is used to track the processing time of fsck checking the code it is found that the tracer object in namenodefsckjava was only created but not closed because of this the memory footprint continues to grow
0
types that inherit from arrownumericarray have a constructor that doesnt permit passing in a type instance containing additional metadata
1
clicking on modified files in unstaged files pane does not work i used to see the code changes in the right pane at this moment it only shows the changes of staged files basically i cannot see the code changes have to use the old sourcetree version
1
the latest camelk prod image still refers to as a base image we need to change it to the java base image
1
the connection status analytics object and engine test currently operates more as an integration test between the model and those instances we really should mock the model and the extract functions since separate tests exist for that
0
tested with qutebrowser and falkon i upgraded from qt to last weekend and found that when ebaycom or ebaycouk load i get a message that the renderer crashed in qutebrowser and that the page couldnt be displayed or similar in falkon the crash isnt instantaneous i see the page loading all the content and then it crashes right at the end testing with qtflag singleprocess seems to be a workaround in qutebrowser but it doesnt look like falkon supports passing flags to qt unfortunately i dont have enough resources to build qt with debug and ld gets oom killed after eating all my ram this is the only output if its any help received signal segvmaperr di si bp bx dx ax cx sp ip efl cgf erf trp msk let me know if theres anything else i can add
0
loading jbds update site throws exceptionsnoformatsome sites could not be found see the error log for more detailunable to read repository at sunsecurityvalidatorvalidatorexception pkix path building failed sunsecurityprovidercertpathsuncertpathbuilderexception unable to find valid certification path to requested targetunable to read repository at sunsecurityvalidatorvalidatorexception pkix path building failed sunsecurityprovidercertpathsuncertpathbuilderexception unable to find valid certification path to requested targetnoformatit has compositecontentxml in it withnoformatnoformatusing gives the same exceptionjbdstargetplatformlatest should not be included because update site built by product build has all required artifacts in itusing gives the same exception
1
issue summary in new issue view edit own worklog permission is working only when the user has edit all worklog and work on issue permission steps to reproduce remove edit all worklog permission provide edit own worklog permission to the user go back to the ticket user will not be able to edit own worklog expected results having edit own worklog the user should be able to edit own worklog in new issue view actual results having edit own worklog the user cannot edit own worklog in new issue view workaround use old issue view
1
this is a follow on to it would be good to deprecate jsonpropertiesaddprop schemaaddalias and schemafieldaddalias and remove them in instead users should use schemabuilder or we could also provide overloaded variants of the factory methods on schema to specify properties and aliases
0
warning while parsing valid beansxml cannot find the declaration of element beanscode warning while parsing cannot find the declaration of element beanscodesome details in says that needs to be adjusted to correctly detect error messages which should be ignored follow weld code in
0
the mongodocumentstore fails to report an error when the key length exceeds the allowable width in mongodbthis can be fixed by using a newer version of mongodb mongodb see
0
the way how the javascript used for this is done make it fails when the user language uses the character on the next image the inner of the red box will be never shown when using ‘català’ as user languagecatala
0
multiple plugins not showing compatibility information when editing information does show on the version list however so not impacting customers i would guess
1
i have been struggling to set hdfs storage of my single node cloudera cdh hdfs below is my storage configuration except this i have not done anything please suggest me which all steps are missingcode type file enabled true connection workspaces root location writable false defaultinputformat null formats csv type text extensions csv delimiter code
1
it is updating the codecenterprotex servers they are moving to new servers and as i understand it there will not be a separate codecenter server the migration should be automatic but needs to be verified this is one of our required sdlswlc validation procedures for code release
1
it is recommended to use n in format strings we may want to replace all n in hadoopcommon
0
we got broken dependencies coming from orgeclipselinuxtoolsdockercorereddeer appearing in openshiftcdk itests when starting ide please see attached log
1
some frontend plannertests rely on hbase tables being split into specific regions with those regions assigned to specific region servers right now the hbase tables are created via the hbase shell with a single region then they are populated via hive dmls then there is a java program that splits the tables into appropriate regions and assigns those regions to region servers once that is done nothing is maintaining the assignments the java code for doing the splitting is hard to maintain and flaky the assignments can sometimes drift due to rebalancing we should convert this to specify the splits at hbase table creation time we should have the frontend plannertest do assignments at setup time to avoid flakiness due to rebalancing this should move some flakiness out of dataload
1
the jbpmservicexml file contains commands to create or update the database mbean codeorgjbossinternalsoaesbdependenciesdatabaseinitializer namejbossesbservicejbpmdatabaseinitializer javajbpmds select from jbpmiduser jbpmsqljbpmjpdlhsqldbsql jbossjcaservicedatasourcebindingnamejbpmds true mbean codeorgjbossinternalsoaesbdependenciesdatabaseinitializer namejbossesbservicejbpmdatabaseupgrader javajbpmds select parentlockmode from jbpmnode jbossesbservicejbpmdatabaseinitializer true but if the scheam tool is udes only the first create script is updated mbean codeorgjbossinternalsoaesbdependenciesdatabaseinitializer namejbossesbservicejbpmdatabaseinitializer javajbpmds select from jbpmiduser jbpmsqljbpmjpdlpostgresqlsql jbossjcaservicedatasourcebindingnamejbpmds true mbean codeorgjbossinternalsoaesbdependenciesdatabaseinitializer namejbossesbservicejbpmdatabaseupgrader javajbpmds select parentlockmode from jbpmnode jbossesbservicejbpmdatabaseinitializer true the update is still using hsqldb specific script
1
the valve returns and no status information when the url doesnt correspond to something existing in tomcathere are a few examplesnoformat curl is head curl is head curl is head has been fixed upstream in
1
ocptelco definition of done epic template descriptions and epic goal as a node team we to make sure that all node components support euseus upgrade make sure api can talk to kubelet kubelet and crio is in why is this important scenarios acceptance criteria ci must be running successfully with tests automated release technical enablement provide necessary release enablement details and documents dependencies internal and external previous work optional … open questions … done checklist ci ci is running tests are automated and merged release enablement dev upstream code and tests merged dev upstream documentation merged dev downstream build attached to advisory qe test plans in polarion qe automated tests merged doc downstream documentation merged
1
its going to package kerby kdc files into a tar as a downloadable package including bin scripts to start the kdc server lib all the jars conf configuration folder of default kdcconf and backendconf
0
per discussion on the list lets figure out how to make the upgrade from a procedure store less errorprone could be a simple as documenting runbook steps to execute during the rolling upgrade but it would be nice if the software could roll over the data versions gracefully
1