text_clean stringlengths 3 77.7k
label int64 0 1
currently the code base for feedhenry integration is in a github repository but the build artefacts are not picked up by the nightly job
0
when choosing some runtimes in the jbds installer wizard these runtimes are not included in jbds it means no servers in server view and no runtimes in the runtime environment window preferences server runtime environment except for eap which is bundled with jbds
1
ive deployed an esb service that contains around esb services jms queue sqlprovider queue the moment i deploy the service i receive an error processing courier backing off for milliseconds error as a result of this my sqlprovider also does not read any data from the source table i saw a jira logged for a similar error for and the thread was closed citing that it was fixed in i deployed the same service in and continue to see the same error is there a limitation on the number of esb services that can be used in a single
warn error processing courier backing off for
warn error processing courier backing off for
warn error processing courier backing off for
debug courier exception orgjbosssoaesbcourierscourierexception unable to create message consumer at at at at at at at at
caused by orgjbosssoaesbcourierscourierservicebindexception failed to get jms session from pool at at at more
caused by orgjbossinternalsoaesbrosettapoolingconnectionexception could not obtain a jms connection from the pool after at at
(the same debug courier exception chain repeats several more times interleaved with further warn error processing courier backing off for lines)
debug acquiring jobs for
debug creating jbpm context with service factories
debug creating
debug start user jta
debug successfully registered
debug opened session at timestamp
debug using current hibernate session sessionimpl persistencecontext collectionkeys actionqueue updates deletions collectioncreations collectionremovals
debug querying for acquirable
debug about to open preparedstatement open preparedstatements globally
debug opening jdbc
debug select top as as as as as as as as as as as as as as as as as as as as from jbpmjob where is null or and and order by
debug about to open resultset open resultsets globally
debug about to close resultset open resultsets globally
debug about to close preparedstatement open preparedstatements globally
debug aggressively releasing jdbc
debug releasing jdbc connection
debug initializing nonlazy
debug no acquirable jobs in job
debug closing jbpmcontext
debug closing service persistence
debug end user jta
debug end jta transaction with
debug closing service tx
debug obtained locks on following jobs
(a second nearly identical job acquisition debug cycle follows)
1
hadoop fs ls command gives exit code for globbed input path which is the exit code for the last resolved absolute path whereas ls command always gives same exit code regardless of position of nonexistent path in globbing
code hadoop fs mkdir code
since directory is not present the following command gives as exit code
code hadoop fs ls echo code
noformat
found items drwxrxrx mitesh supergroup mitesh supergroup mitesh supergroup mitesh supergroup
items drwxrxrx mitesh supergroup mitesh supergroup mitesh supergroup mitesh supergroup
items drwxrxrx mitesh supergroup mitesh supergroup mitesh supergroup mitesh supergroup
cannot access no such file or directory
is not present but given as second last parameter in globbing
the following command gives as exit code because directory is present
code hadoop fs ls echo code
noformat
found items drwxrxrx mitesh supergroup mitesh supergroup mitesh supergroup mitesh supergroup
items drwxrxrx mitesh supergroup mitesh supergroup mitesh supergroup mitesh supergroup
cannot access no such file or directory
found items drwxrxrx mitesh supergroup mitesh supergroup mitesh supergroup mitesh supergroup
on linux ls command gives as exit code irrespective of position of nonexistent path in globbing
code mkdir p code
code ls echo code
noformat
binls no such file or
a b c a b c a b c
ls echo code
noformat
binls no such file or
a b c a b c a b c
0
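The contrast with plain ls described above can be sketched in Python (directory names here are invented for illustration; this only demonstrates the coreutils side of the comparison, not hadoop fs):

```python
import os
import subprocess
import tempfile

# create only some of the paths we will list
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "a"))
os.makedirs(os.path.join(base, "c"))

# nonexistent path in the middle of the argument list
mid = subprocess.run(["ls", f"{base}/a", f"{base}/b", f"{base}/c"],
                     capture_output=True).returncode
# nonexistent path as the last argument
last = subprocess.run(["ls", f"{base}/c", f"{base}/b"],
                      capture_output=True).returncode

# coreutils ls exits nonzero either way, and with the same code,
# unlike the reported hadoop fs -ls behavior
print(mid != 0 and mid == last)
```

The point is that ls derives its exit status from whether any operand failed, not from the last operand processed.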
this is the task of releasing apache incubation netbeans
1
wildflyinitialcontextfactory ejb proxy security behavior inconsistent with different context lookups
using wildflyinitialcontextfactory and calling a remote ejb server
observations if the ejb lookup is reproducertestslsbtesttest basically like a remotenaming lookup the ejb is invoked successfully but the caller is seen as anonymous instead of the ejbuser which is specified in the context properties
using the ejbclient type lookup ejbreproducertestslsbtesttest then it shows up as ejbuser as
if a client creates initialcontexts and uses the lookup reproducertestslsbtesttest on then uses the lookup ejbreproducertestslsbtesttest on in that order then they both show anonymous as if it uses only the context that was created first
if you switch the order and use ejbreproducertestslsbtesttest first then they both show ejbuser
0
recommendation add generation support of httpbinding for the tool from http argument
case study default the command creates bindings for soap soap and http
consider the following class
import javaxjwswebmethod
import javaxjwswebservice
webservice
public class helloservicebeantest
private string message hello
public void helloservicebeantest
webmethod
public string sayhellostring name return message name
now consider the following command o filewsdl cn helloservicebeantest
wsdl bindings will be created as such soapbinding and httpbinding
case study and proposal
cxf creates wsdl with individual bindings
o wsdl helloservicebean will create soapbinding
o wsdl helloservicebean will
what is missing is the http binding
the following cxf argument http does not exist but would be nice to have
http o wsdl helloservicebean is proposed to create httpbinding
httpbinding reference web services description language wsdl note march
refactoring suggestion binding binding binding http
note if for some reason can already produce the httpbinding or there is a technical reason why its excluded please explain and close the issue thanks
0
tracing options in options in openshiftclient is not showing up
and when starting with debug options
and options contains
orgjbosstoolsopenshiftexpressclientdebugtrue
orgjbosstoolsopenshiftexpressclientdebugclienttrue
you get
failed to load class defaulting to nooperation nop logger see for further details
application started
no appenders could be found for logger please initialize the system see for more info
so this is not working at all
1
need to do the following changes to support ranger atlas plugin from ambari
check stack support and provide smart config section to enable disable ranger atlas
stack advisor changes to suggest recommended configs on enable disable plugin actions
ranger service creation on enable plugin action from ambari
0
we should set up an endtoend test which runs the general purpose job in a standalone setting with ha enabled zookeeper when running the job the job failures should be activated additionally we should randomly kill flink processes cluster entrypoint and taskexecutors when killing them we should also spawn new processes to make up for the loss this endtoend test case should run with all different state backend settings rocksdb fullincremental asyncsync fsstatebackend syncasync we should then verify that the general purpose job is successfully recovered without data loss or other failures
1
inserting a byte char short in a cache fails with classcastexception found this issue while testing the vertx infinispan cluster manager with
1
for jbide please perform the following tasks
check out your existing colororangemastercolor branch code git checkout master code
update your colororangemaster branchcolor root pom to use the latest parent pom version code orgjbosstools parent code now your root pom will use parent pom version in your colororangemastercolor branch
ensure that component featuresplugins have been properly upversioned eg from to code mvn dtychomodemaven code
ensure youve built your code using the latest minimum target platform version code mvn clean verify code
ensure youve run your tests using the latest maximum target platform version code mvn clean verify code
close do not resolve this jira when done
if you have any outstanding new noteworthy jiras to do please complete them
next search for all task jira or search for integrationtests task jira
1
this is a trackingplanning epic to make the dependency between cnf and ocpnode explicit
epic goal
enhance the existing cputopology manager kubelet policies or post new ones to make sure we enable latency optimal container pinning in constrained environments the biggest example is ranlike workers with cores possibly hyperthreaded there are two colliding requirements reducing overhead using all cores vs avoiding noisy neighbours
why is this important
not enough threads in total if we keep some of them unused
latency sensitive workload needs to avoid any neighbours on the same
scenarios
the isolated cpu pool contains a partial core one thread from a core that has a sibling in the reserved pool the platform needs to make sure that anything latency sensitive is not pinned to that thread because otherwise it will be affected by a noisy neighbour this scenario is useful for minimizing the number of threads used for housekeeping one thread for reserved and one for infrastructure pods
a workload that is latency sensitive must be the only workload running on a core or must be rejected report a noisy neighbour warning of some kind
a workload that is security sensitive must be the only workload running on a core or must be rejected to make sure it cannot be compromised using timing and other cache related attacks spectre and other vulnerabilities included being the only workload on a core might mean using all threads or making unused threads unavailable to
acceptance criteria
ci must be running successfully with tests automated
release technical enablement provide necessary release enablement details and documents or
functional test must demonstrate the correct allocation happens
a guaranteed latency sensitive workload has a way to be isolated from noisy neighbours on sibling threads
a guaranteed latency sensitive workload that does not occupy a whole core all its threads must be rejected with a meaningful
dependencies internal and external
cpu manager topology manager as it shares some data with cpu
previous work optional
open questions
upstream or downstream first related to previous work to some extent
can the existing cpumanager static policy guarantee the desired behaviour
where does the testsuite belong not sure it fits same reasons of the policy too narrow use case and we telco we want to run anyway perhaps submit us first and take it in ocpcnf if us rejects
is rejection the only way if the pod is not requesting the whole core can the infrastructure block other threads from the rest of the
risk assessment and work estimate
there is significant risk here if upstream solution is expected we have a design proposal but the kep process is lengthy and uncertain downstream only solution depends on the willingness of ocp team
the proposed solution is mostly isolated from existing code at node kubelet level the impact of the policies on the resource accounting can be relevant increasing the risk of quick
done checklist
ci ci is running tests are automated and merged
release enablement
dev upstream code and tests merged
dev upstream documentation merged
dev downstream build attached to advisory
qe test plans in polarion
qe automated tests merged
doc downstream documentation merged
1
when i try to assign a jdk to a the selection is restricted to jdk though there are other jdks configured in the java platform manager eg jdk
1
the modeshape subsystem for already supports various configuration operations through the management layer but it should also expose any of the monitoring statistics and or other runtime status information through this management layer this will allow the console ui to access and use all of these operations
1
when i was trying to install all features from jbt nightly updatesite missing requirement was thrown
code cannot complete the install because one or more required items could not be found
software being installed jboss birt integration orgjbosstoolsbirtfeaturefeaturegroup
requirement jboss birt core orgjbosstoolsbirtcore requires bundle orgeclipsebirtintegrationwtpui but it could not be found
cannot satisfy dependency from jboss birt integration orgjbosstoolsbirtfeaturefeaturegroup to orgjbosstoolsbirtcore code
1
sequence flows are connected to different magnets in explore diagram
0
according to the javadoc for connection and also for connectionsetautocommit autocommit should be turned on by default on new connections taken from the javadoc
quote by default a connection object is in autocommit mode which means that it automatically commits changes after executing each statement quote
phoenix currently sets autocommit to false by default on new connections
0
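As a hedged analogy (using Python's sqlite3 module, not Phoenix or JDBC), the snippet below shows the kind of mismatch that arises when a driver does not follow the platform's default commit mode, and how a caller can normalize it right after connecting:

```python
import sqlite3

# sqlite3 historically opens connections in a deferred-transaction mode
# rather than autocommit, much like the reported phoenix behavior vs the
# jdbc spec quoted above
conn = sqlite3.connect(":memory:")

# normalize to autocommit explicitly so behavior matches the documented
# default; in the sqlite3 module, isolation_level = None means autocommit
conn.isolation_level = None
conn.execute("create table t (k integer primary key, v text)")
conn.execute("insert into t values (1, 'x')")
count = conn.execute("select count(*) from t").fetchone()[0]
print(count)
```

With autocommit normalized, no transaction is left open after each statement, which is the behavior the JDBC javadoc promises for fresh connections.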
create service function eg pojo which has an array parameter eg
public static boolean testfuncinteger integers systemoutprintlnintegers null integerslength
send request with null as first element
send request with null as nonfirst element
array of length ie null received
array of length received
null as expected
reason beanutiljava ignores nonnull elements of an array in processelement if the first element is null even if there is more than one element
0
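The reported behavior can be illustrated with a small hypothetical sketch of the flawed deserialization logic (the function names below are made up for illustration, not the actual beanutil code):

```python
def process_element_buggy(values):
    # mirrors the reported bug: if the first element is null the whole
    # array is treated as null, silently dropping later non-null elements
    if not values or values[0] is None:
        return None
    return list(values)

def process_element_fixed(values):
    # keep the array shape intact; nulls stay null, other elements survive
    return None if values is None else list(values)

print(process_element_buggy([None, 2, 3]))  # drops the 2 and 3
print(process_element_fixed([None, 2, 3]))
```

The fix is simply to stop using the first element as a proxy for the whole array being null.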
dfsumount is called twice once in dfuseinodec then in dfusemainc causing segfault when unmounting dfuse
to reproduce create pool container then
dfuse p daospool s daossvcl c daoscont m tmpmschaara f
then fusermount u tmpmschaara
core dumped fuse writing device bad file descriptor
error in dfuse double free or corruption prev
backtrace homemschaarainstalldaosliblibdfssodfsumount dfuse dfuse memory map
gdb bt
in raise from
in abort from
in libcmessage from
in intfree from
in dfsumount dfs at
in main argc argv at
1
we need to document a specific behavior of spark datasets that runs contrary to how kudu works say you have columns k x y where k is the primary key you run a first insert on a row now you upsert using any kudu api the full row would now be but with datasets you have x null this means that datasets put a null value when some columns arent specified
0
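The difference can be modeled with plain dicts (a sketch, not the actual Kudu or Spark APIs; the column names k, x, y follow the example above):

```python
COLUMNS = ("k", "x", "y")

def kudu_upsert(existing, update):
    # kudu semantics: columns absent from the update keep their old values
    merged = dict(existing)
    merged.update(update)
    return merged

def dataset_upsert(existing, update):
    # reported dataset semantics: the full row is rewritten, so any
    # column missing from the update becomes null
    return {c: update.get(c) for c in COLUMNS}

row = {"k": 1, "x": 10, "y": 20}
print(kudu_upsert(row, {"k": 1, "y": 99}))     # x is preserved
print(dataset_upsert(row, {"k": 1, "y": 99}))  # x becomes None
```

Documenting this means warning users that a partial-column dataset write is effectively a full-row overwrite with nulls.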
ok so we have one send email method which takes orgapachecommonsmailemail param it then checks to see if email sending is enabled configured by server instance then sends the mail if it is
the problem for our qa testing we need to test email content but dont want to send emails to actual users in qa environment
what we want to do is modify our one send email method and clear out the toccbcc fields and then set the to field to be our testing list but there is no way to remove emails already added
setting it to null or empty collection results in an emailexception and we cant create a new email instance and copy because there is no get message accessor available
we need a way to remove emails somehow
1
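A sketch of the missing capability, using Python's stdlib email package as a stand-in (commons-email itself offers no remove accessor, which is the point of the report; the qa address below is hypothetical):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["To"] = "real.user@example.com"
msg["Cc"] = "copy.user@example.com"

# redirect for qa runs: drop every real recipient, then add the test list
# (deleting an absent header like Bcc is a harmless no-op here)
for header in ("To", "Cc", "Bcc"):
    del msg[header]
msg["To"] = "qa-list@example.com"

print(msg["To"], msg["Cc"])
```

The equivalent in commons-email would be either a recipient-clearing method or getters exposing enough state to rebuild the message.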
hadoop uses an older version of jetty that allows we should fix it up
1
see see
1
this specialized graph will contain different types of subgraphs that can be used with one or more specific test cases each subgraph will be disconnected and test cases will need to take care to restrict their traversals to just those subgraphs that matter for their particular scenario initially we need a subgraph that has a selfloop for but other subgraphs might become useful like a tree structure
0
any plans to include it in the project i remember that there was a plan to merge it to the main project
0
running the releaseverification build global build of everything in turn triggers the build of the beamtesttools project which has some test infrastructure scripts that we run on jenkins it seems to work fine on jenkins however running the build of the project locally fails
what seems to happen is the gradle vendoring plugin caches the dependencies locally but fails to cache simplelru
one workaround based on gradlew beamtesttoolsshowgopathgoroot
code export gopathpwdtestinfratoolsgogradleprojectgopath
go get githubcomhashicorpgolanglrusimplelru
gradlew beamtesttoolsbuild code
it is able to find the lrumap and simplelru during the dependency resolution step and i can see it mentioned in couple of artifacts produced by the gogradle plugin but when it does installdepedencies to actually copy them to vendor directory this specific package is missing
this reproduces for me on a couple of different machines i tried both on release and master branches
0
created from upstream issue fix version
0
created from
hi not sure if this is related or the same as but i cant seem to find a way to handle arrays of lists which occasionally consist of empty lists only
to reproduce
code na none
arrays paarray na na typepalistpastring
paarray typepalistpastring
rb parecordbatchfromarrayslistarraysvalues listarrayskeys
df rbtopandas
paserializepandasdf
arrownotimplementederror unable to convert type null
tbl patablefrompandasdf
sink pabufferoutputstream
writer parecordbatchfilewritersink tblschema
writerwritetabletbl
arrownotimplementederror unable to convert type null code
in my use case im processing data in batches where individual fields contain lists of strings some of the batches may however contain empty lists only and there doesnt seem to be any representation in arrow at the moment to deal with this situation
also since im serializing the batches into a single filestream their schemas need to be consistent which is why i tried explicitly specifying the type of the array as liststring
the only workaround ive found is to replace empty lists with but that implies lots of unnecessary glue code on the client side
is there a better workaround until this is fixed in an official conda release
0
description of problem
application does not react if user tries to create a process definition which already exists it just rewrites the last state of process definition deletes all changes and process is empty
versionrelease number of selected component
if reproducible
steps to
go to project authoring jbpmplayground
create new process definition evaluation already exist
actual results
open evaluation process and delete last changes until it the process is empty
expected results
alert about conflict same behaviour as for drl file
additional info
there is difference in using git users designer use admin drl use sona i log into businesscentral as user sona
0
the frequency of mst state sharing between iroha nodes is hardcoded now and equal to seconds this parameter might be made configurable
codejava kdefaultperiod gossippropagationstrategyparamsemissionperiodkdefaultperiod code
in irohadmultisigtransactionsgossippropagationstrategyparamshpp
0
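A minimal sketch of making such a constant configurable (the config key and the fallback value are assumptions for illustration; the number in the report was stripped, and Iroha's real config plumbing is C++ and differs):

```python
# stand-in for kDefaultPeriod; 5 seconds here is purely illustrative
DEFAULT_EMISSION_PERIOD_S = 5

def emission_period(config):
    # read the mst gossip emission period from a config mapping,
    # falling back to the compiled-in default when unset
    value = config.get("mst_emission_period")
    return int(value) if value is not None else DEFAULT_EMISSION_PERIOD_S

print(emission_period({}))                             # default
print(emission_period({"mst_emission_period": "10"}))  # overridden
```

The same shape applies in C++: keep the constant as the default and let a config file or flag override it at startup.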
observertest failed running on ran the test as ant dtestjunitoutputformatxml dtestoutput dtestcaseasynchammertest clean testcorejava testout
1
support for noncovering range partitions in create table syntax that supports adding and removing partitions tablets from kudu tables see the design doc the actual apis are currently missing we plan on doing something similar to the hana syntax
1
every ejb deployment associated with an elytron security domain builds service but it fails if there are more such deployments because it means a second service with the same
error msc service thread failed to start service jbossdeploymentsubunitreadpropslimitedearejbmodulereadpropslimitedjarpostmodule orgjbossmscservicestartexception in service jbossdeploymentsubunitreadpropslimitedearejbmodulereadpropslimitedjarpostmodule failed to process phase postmodule of subdeployment ejbmodulereadpropslimitedjar of deployment readpropslimitedear at at at at at at
caused by orgjbossmscserviceduplicateserviceexception service is already registered at at at at at at at at at at more
code
1
langaggregateoptimizationsql fails with different query plan on zos ibm the query results look fine see attached out file for the plan
0
here i d like to propose a run level script for linux especially the capability to start as another person
here is the discussion around it and the script at the top
0
spark service fails to start because of changes in that require spark service inheritance to be fixed
1
made loadbalancer pluggable configuration it loads seems to be wrongly named and carries a typo hbasemaserloadbalancerclass could rather be hbasemasterloadbalancerclass luckily is not out yet and we should fix it asap before folks start using it attaching patch
1
have to change the server skeleton code to reflect initfini removal
0
opened diff for review with one file comment
clicked on comment to open comment in tooltip
tooltip is not closable
after switching between applications following error occurred
error during dispatching of javaawteventmouseevent on overrideredirect comintellijopenapiuipopupjbpopupgetcontentljavaawtcomponent javalangnosuchmethoderror comintellijopenapiuipopupjbpopupgetcontentljavaawtcomponent at at at at at at at at at at at at at at at at at at at at at at at at
during dispatching of javaawteventmouseevent on overrideredirect comintellijopenapiuipopupjbpopupgetcontentljavaawtcomponent javalangnosuchmethoderror comintellijopenapiuipopupjbpopupgetcontentljavaawtcomponent at at at at at at at at at at at at
1
please note that also urls redirected by meta refresh redirection do have invalid scores for such urls a crawldatum is created on the lines of parseoutputformatjava the new crawldatums score isnt set anywhere after the creation so its as can be seen on the line of crawldatumjava its another question whether the redirected urls score should be just passed to the new url or whether the redirection should be considered as a link in which case the new urls score would be originalscore numberofoutlinks
0
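The two scoring alternatives raised at the end can be written out explicitly (a sketch; Nutch's real scoring filters are more involved):

```python
def redirect_score_passthrough(original_score):
    # option 1: the redirect target simply inherits the source score
    return original_score

def redirect_score_as_link(original_score, num_outlinks):
    # option 2: treat the redirect like an ordinary outlink, so the
    # source score is divided among all outlinks of the page
    return original_score / num_outlinks

print(redirect_score_passthrough(1.0))
print(redirect_score_as_link(1.0, 4))
```

Either choice is better than the current state, where the new crawldatum's score is never set at all.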
the following external messaging providers are not mentioned in release notes in chapter new features and enhancements they are newly supported since eap tibco ems websphere mq
1
original proposal to wodendev mailing list july
like to propose creating a new wrapper interface on the woden api similar in function to wsdlsource
wsdlsource wraps an implementationspecific object that represents the wsdl source being passed in to the wsdlreader on a readwsdlwsdlsource method eg for the dom implementation there is a domwsdlsource class that takes a dom element or document or a sax inputsource
i propose an interface orgapachewodenelementsource to represent an implementation specific element information item object such as a dom element for the dom implementation or an omelement for the staxom implementation
it will have the methods
public object getelementsource this method will return an object which the client must cast to the appropriate type
public void setelementsourceobject the method implementation must check object is an appropriate type and throw an exception if not
an example implementation will be orgapachewodeninternaldomelementsource which wraps
can be used to replace in method signatures on xmlattr extensiondeserializer and extensionserializer
this will meet the requirement from oshani for her staxom implementation to remove dom dependencies she could create an implementation omelementsource to wrap an omelement
in this way we keep the woden api clean of any particular xml parsing api or object model
we can also use elementsource to represent a element from any underlying object model so that applications may use xml schema parsing apis other than wscommons xmlschema to manipulate schema data if they choose their choice of schema parser would need to support the object types wrapped by elementsource woden will still use xmlschema and expose this via its api as it currently does
we would need to add a method to orgapachewodenschemaschema to return an elementsource for the element
public elementsource getschemaelement
this may satisfy a couple of recent requirements against woden for alternatives to xmlschema most recently from pierre chatel although his particular requirement could perhaps be solved by additions to wscommons xmlschema a bigger solution might be pluggable type system support but probably not any time soon due to other priorities and resources
please discuss via this mailing list if you have any comments or concerns ill open a jira ifwhen this proposal is agreed
regards john kaputin
0
when i create an ejb project either by the ejb project wizard or the ear wizard the server is configured to deploy it as projectnamenull instead of projectnamejar
i can fix this by opening the server doubleclick on server in the server tab going to the deployment tab and editing the deployment location to correct it
this doesnt appear to happen with any other deployment type
1
it looks like acquiring checkpointreadlock is missing when we are trying to apply mvcc tx records
1
im trying to use groovys short hand way to put a new key value to an encapsulated map but the compilation fails with codejava cannot assign value of type javalangstring to variable of type code
scenario to reproduce
codejava import groovytransformcompilestatic
compilestatic class mapholder
private map map
map getmap return map
compilestatic static setmapvalue
mapholder mapholder new mapholder
mapholdermapkey value
setmapvalue code
using the put method works just fine
codejava import groovytransformcompilestatic
compilestatic class mapholder
private map map
map getmap return map
compilestatic static setmapvalue
mapholder mapholder new mapholder
mapholderputkey value
setmapvalue code
0
when starting osp in a cluster we are getting blocking sessions in oracle because of osp queries this prevents startup of some servers a copy of the email reporting the findings is below
inability to bring up servers would prevent us from deploying osp into production
so we started all of them up together and found blocking sessions in oracle deadlocks on
a update osppresentationtemplate set where
osppresentationlayout set
first time we had waiting on a concurrently and then on b whereas the second time zero on a and on b
why havent we seen this before i dont know maybe there is enough osp stuff in the db that it matters on ctoolsloadadi
1
i suspect this is caused by the latest indexing changes in note the asset can be accessed using a project explorer
1
ldif and apache ds configuration files cant be saved in rcp mode
this is due to detection made on the resourceperspective
as it is still present in the rcp application the plugin thinks it is running inside eclipse and then tries to access classes that are not present
resolving will resolve this issue
1
ruta for each block setting local annotation variable for anchoring all rules within block
0
unable to use struts application with ognl by enabling security manager
steps to security the app
noformat
caught an ognl exception while getting property serviceproviders class ognlobjectpropertyaccessor file objectpropertyaccessorjava method getpossibleproperty line
at at at (very long run of stripped stack frames dominated by repeated javasecurityaccesscontrollerdoprivileged native method entries)
more caused by ognlognlexception serviceproviders at at at at at
more caused by javalangnullpointerexception permission cant be null at at at at at more
noformat
1
The flink-connector-kinesis module is a fat jar that bundles quite a few dependencies of its own; the NOTICE file of this module is correct. However, flink-sql-connector-kinesis bundles the non-SQL connector plus additional dependencies, yet its NOTICE file lists only the additional dependencies.
1
Repro steps: installed a BI cluster on IBM Ambari with ZooKeeper, upgraded Ambari, registered the HDP repo, installed packages, ran service checks, started express upgrade. Result: the ZooKeeper service check step failed with KeeperErrorCode = ConnectionLoss for /zk_smoketest. This was caused by ZooKeeper dying immediately during restart:
{noformat}
Error occurred during initialization of VM
Too small initial heap
{noformat}
{noformat:title=zookeeper-env.sh before upgrade}
export ZOOKEEPER_HOME=/usr/iop/current/zookeeper-server
export ZOO_LOG_DIR=/var/log/zookeeper
export ZOOPIDFILE=/var/run/zookeeper/zookeeper_server.pid
export JAVA=$JAVA_HOME/bin/java
export CLASSPATH=$CLASSPATH:/usr/share/zookeeper
{noformat}
{noformat:title=zookeeper-env.sh after upgrade}
export ZOOKEEPER_HOME=/usr/hdp/current/zookeeper-client
export ZOO_LOG_DIR=/var/log/zookeeper
export ZOOPIDFILE=/var/run/zookeeper/zookeeper_server.pid
export JAVA=$JAVA_HOME/bin/java
export CLASSPATH=$CLASSPATH:/usr/share/zookeeper
{noformat}
Note the missing "m" in the memory setting. The zookeeper-env template contains:
{noformat}
export SERVER_JVMFLAGS=-Xmx{{zk_server_heapsize}}
{noformat}
In this cluster zookeeper-env contains a bare numeric zk_server_heapsize. The params_linux.py file has some inconsistencies with appending the letter "m":
{noformat}
zk_server_heapsize_value = str(default('configurations/zookeeper-env/zk_server_heapsize', ...))
zk_server_heapsize = format("-Xmx{zk_server_heapsize_value}")
{noformat}
Instead it should be:
{noformat}
zk_server_heapsize_value = str(default('configurations/zookeeper-env/zk_server_heapsize', ...))
zk_server_heapsize_value = zk_server_heapsize_value.strip()
if len(zk_server_heapsize_value) > 0 and zk_server_heapsize_value[-1].isdigit():
    zk_server_heapsize_value = zk_server_heapsize_value + "m"
zk_server_heapsize = format("-Xmx{zk_server_heapsize_value}")
{noformat}
1
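The fix above boils down to normalizing the heap-size value before it is interpolated into -Xmx. A minimal standalone sketch of that normalization logic (the function name normalize_heapsize is hypothetical, not part of Ambari):

```python
def normalize_heapsize(raw):
    """Append an 'm' unit to bare numeric heap sizes so that
    -Xmx{value} never ends up as e.g. -Xmx1024 (interpreted as
    1024 bytes, hence 'Too small initial heap')."""
    value = str(raw).strip()
    if value and value.isdigit():
        value += "m"
    return value

print("-Xmx" + normalize_heapsize("1024"))   # -Xmx1024m
print("-Xmx" + normalize_heapsize("1024m"))  # -Xmx1024m
```

Values that already carry a unit suffix pass through unchanged, so the normalization is safe to apply to both old and new config formats.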
The following snippet can reproduce this issue:
{code}
create table ... (... map<...,...>);
insert overwrite table ... select map(key, value) from src limit ...;
{code}
The select from ... throws:
{code}
java.lang.RuntimeException: java.lang.ClassCastException: ... cannot be cast to java.lang.String
[stack trace truncated]
Caused by: java.lang.ClassCastException: ... cannot be cast to java.lang.String
[stack trace truncated]
{code}
1
When clustering fails, generating an unfinished replacecommit, restarting the job will generate a delta commit. If that commit contains a clustering group file, the task will fail with "Not allowed to update the clustering file group ... For pending clustering operations, we are not going to support update for now." We need to ensure that the unfinished replacecommit file is deleted, or perform clustering first and then generate the delta commit.
1
The following attributes should have allowed/default values (attribute: allowed values; default value):
- /subsystem=teiid/transport=jdbc :write-attribute(name=ssl-mode): disabled, enabled, login; default login
- /subsystem=teiid/transport=jdbc :write-attribute(name=ssl-authentication-mode): anonymous, ...
- /subsystem=teiid :write-attribute(name=authentication-type): USERPASSWORD, GSS; default USERPASSWORD
- /subsystem=teiid/transport=jdbc :write-attribute(name=protocol): teiid, pg; default teiid
- /subsystem=teiid :clear-cache(cache-type=...): PREPARED_PLAN_CACHE, QUERY_SERVICE_RESULT_SET_CACHE; default n/a
0
There is nowhere to change the port the Teiid server is talking on. I will provide a video to show this.
1
A recent upgrade to Jetty might have broken our Maven proxy, based on the async servlet specification. The download of resources from the local Maven proxy works fine for small archives that possibly don't require a chunked transfer, but fails for large files like the ... artifact. You can see the following command succeed:
{code}
wget ...
{code}
while this one fails:
{code}
wget ...
{code}
with the following warning:
{code}
WARN HttpChannel org.eclipse.jetty.util ... committed
[stack trace truncated]
{code}
1
The Drill web UI sends a request to a Google URL. Since mainland China cannot reach Google directly, the Drill web UI loads and refreshes slowly for users there.
0
libcloud dropped support for Python ..., but the site still lists Python ... and Python ... As far as I can tell, the website source is stored in SVN at ... I'm attaching a patch for the libcloud/site/trunk directory.
0
When using ClusterBinding in an alpha cluster environment, the second node fails to utilize an alternate binding for the port. This port is now used by the HA-JNDI service as the RmiPort. The sample-bindings.xml file doesn't currently provide bindings for this service.
0
When I want to use HTTPS settings in combination with the Elytron subsystem, I have to set ... to false. For the settings I followed this blog post, and as keystore I used the default application.keystore.
1
When a NodeManager is quickly restarted and happens to change versions during the restart (e.g. a rolling upgrade scenario), the NM version as reported by the RM is not updated.
0
It's not possible to patch the image container of a CRW instance on OCP to use quay.io plugin images instead of registry.redhat.io ones: a "permission denied" error occurs.
{code}
oc exec --namespace=crw ... sh -c "find ... -name meta.yaml | xargs sed -i 's|registry.redhat.io/codeready-workspaces|quay.io/crw|g'"
sed: couldn't open temporary file ...: Permission denied
command terminated with exit code ...
{code}
Apply the fix of issue ... to the pluginregistry image.
1
Failed in BE test AggregateFunctionsTest, and several end-to-end tests failed at the same address:
{noformat}
AddressSanitizer: heap-buffer-overflow on address ... (READ of size ...)
  in __asan_memcpy
  in impala_udf::CopyFrom(impala_udf::FunctionContext*, unsigned char const*, unsigned long)
  in impala::ReservoirSampleState::Serialize(impala_udf::FunctionContext*)
  in impala::ReservoirSampleSerialize(impala_udf::FunctionContext*, impala_udf::StringVal const&)
  in impala_udf::UdaTestHarnessBase::ExecuteOneLevel / Execute
  in HistogramTest.TestInt TestBody (be/build/debug/exprs/aggregate-functions-test)
The address is located ... bytes to the right of a region allocated here:
  in __interceptor_malloc
  in impala_udf::...::Allocate(int)
  in impala::AllocBuffer(impala_udf::FunctionContext*, impala_udf::StringVal*, unsigned long)
  in impala::ReservoirSampleInit(impala_udf::FunctionContext*, impala_udf::StringVal*)
SUMMARY: AddressSanitizer: heap-buffer-overflow in __asan_memcpy
[shadow bytes and legend truncated]
{noformat}
Test failed: AggregateFunctionsTest (end time ... Mar ... PDT).
1
I was double-checking the behavior in this plugin issue and ended up uninstalling something again. This time I clicked on the grayed-out Uninstall button after I had already uninstalled the plugin. This resulted in an unexpected error message.
0
Hi, I have upgraded from CXF ... to ... I am now getting this exception while invoking an existing WS:
{noformat}
Caused by: org.apache.cxf.binding.soap.SoapFault: Unexpected element ... found; expected ...
[stack trace truncated]
{noformat}
I have to say this issue is not consistent while the WS is called several times from the same location; I'm not really sure why the above is expected. I have a feeling this is somehow related to some caching issue, probably used in CXF, but I can't commit on that since I'm not really familiar with what's going on inside. I am getting the same if upgrading to ... I would appreciate any input on this. Thanks.
1
Please grant publish permission to the Maven central repository for the org.jasig.cas.client project group to Dmitriy Kopylenko (Sonatype username ...).
0
In DrillPushProjIntoScan, a new scan and a new projection are created using PrelUtil.getColumns(RelDataType, List<RexNode>). The returned ProjectPushInfo instance has several fields, one of them being desiredFields, which is the list of projected fields (there is one instance per RexNode). But because instances were initially added to a set, they might not be in the same order as the order in which they were created. The issue happens in the following code:
{code:java}
List<RexNode> newProjects = Lists.newArrayList();
for (RexNode n : proj.getChildExps()) {
  newProjects.add(n.accept(columnInfo.getInputRewriter()));
}
{code}
This code creates a new list of projects out of the initial ones by mapping the indices from the old projects to the new projects, but the indices of the new RexNode instances might be out of order because of the ordering of desiredFields, and if indices are out of order the check ProjectRemoveRule.isTrivial(newProj) will fail. My guess is that the desiredFields ordering should be preserved when instances are added, to satisfy the condition above.
0
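The root cause described above is order loss from collecting fields in a set. A language-neutral illustration in Python (the helper name is hypothetical, not Drill code): deduplicating through an insertion-ordered structure keeps each field's index aligned with its creation order, which a plain set does not guarantee:

```python
def dedupe_preserving_order(fields):
    """Deduplicate while keeping first-seen order, so the index
    assigned to each field matches the order it was created in.
    dict.fromkeys preserves insertion order (Python 3.7+)."""
    return list(dict.fromkeys(fields))

print(dedupe_preserving_order(["b", "a", "b", "c"]))  # ['b', 'a', 'c']
```

The equivalent fix on the Java side would be something like swapping a HashSet for a LinkedHashSet when desiredFields is built.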
The content in ... uses the prod-ver and prod-prev-ver attributes, which works well for Che. When downstreamed this becomes problematic: now that version ... is out, prod-prev-ver is no longer ... but ..., which causes the CRW doc to render incorrectly. We already have an attribute for the current product version, prod-ver; let's keep it. The prod-prev-ver attribute is the previous minor version. We need a new attribute for the previous major version; let's define it as prod-prev-ver-major: downstream {code}prod-prev-ver-major{code} ..., upstream {code}prod-prev-ver-major{code} ... NB: please also use this new attribute in the downstream-only "upgrade from the previous major version" doc (docs/topics/crw/proc-upgrading-codeready-workspaces-from-previous-major-version.adoc), changing "This sections describes how to perform an upgrade from the previous major version of {prod}" to "This section describes how to perform an upgrade from the previous major version of {prod}", here using the new attribute.
1
looks like we need to downgrade taskexecutorresources as well
0
Connection stats appear to be incorrect, as all values are 0. Can anybody confirm this is a bug, or am I doing something wrong here? Putting a snippet from the logs:
{noformat}
connection / session:
  messageCount (unit: count; description: number of messages exchanged)
  messageRateTime (count, maxTime, minTime, totalTime, averageTime, averageTimeExMinMax, averagePerSecond; unit: millis; description: time taken to process a message, throughput rate)
  pendingMessageCount, expiredMessageCount, messageWaitTime, durableSubscriptionCount
producers / producer:
  messageCount, messageRateTime, pendingMessageCount, expiredMessageCount, messageWaitTime
[all values 0; repetitive attribute listing truncated]
{noformat}
0
User stories:
- As an installer/provisioner, I need to add additional servers to an existing cluster of ZooKeeper servers, to provide increased service availability.
- As an installer/provisioner, I need to remove a server from an existing ZooKeeper cluster, to allow for maintenance (like replacing a server) or to resize my footprint.
- As an installer/provisioner, I need to add additional servers to an existing cluster of Kafka servers, to provide additional processing capability.
- As an installer/provisioner, I need to remove a server from an existing Kafka cluster, because less processing capability is necessary.
0
This can cause errors when actually deactivating components, because the classloader and bundle context aren't valid anymore. I'll provide a fix using the AbstractExtender from Felix utils instead.
0
For compilation with gcc on HP-UX I have made some changes. In file src/xercesc/Makefile.incl I added a gcc section to the HP-specific options section:
{noformat}
[Makefile fragment garbled in extraction: alongside the existing aCC branch of the HPUX platform block, add an "ifeq (g++)" branch setting PLATFORM_COMPILE_OPTIONS to -fPIC -D${PLATFORM}, making MAKE_SHARED/MAKE_SHARED_C use ${CXX}/${CC} with -shared, linking the ICU transcoder and message loader against -licuuc -licudata (plus -lXercesMessages for the message loader) with library paths /usr/lib, /usr/local/lib, /usr/ccs/lib and -lm, and setting the extra link options used by the aCC branch]
{noformat}
And I modified the source file src/xercesc/util/Transcoders/Iconv/IconvTransService.cpp: at line ... I add
{noformat}
#if defined(XML_HPUX)
...
#elif defined(XML_OPENSERVER)
#include <...>
#endif
{noformat}
Can you add those changes in further releases? Wilfried Goemaere
0
For JBIDE ..., the following core plugins depend on UI plugins directly or indirectly:
{code}
core has UI deps: org.jboss.tools.perftest.core
rule org.jboss.tools.releng core-ui-dependency failed with message:
org.jboss.tools.perftest.core is a core plugin but depends on these UI plugins directly or transitively:
org.eclipse.ui, org.jboss.tools.common.model.ui, org.eclipse.ui.editors, org.eclipse.ui.workbench,
org.eclipse.ui.ide, org.jboss.tools.ui.bot.ext, org.eclipse.ui.forms, org.eclipse.ui.views.properties.tabbed
{code}
Search for all task JIRAs, or search for integration-tests task JIRAs.
0
Execute "redefine EAP runtime and Seam" to repair. Execute "create Seam web (WAR) project" with Seam ... After project creation, an exception (without a stacktrace) is thrown: "Cannot copy JDBC driver jar". This is related to the version of JBoss AS used in this build: it is ... instead of ...
1
When running current HEAD:
{noformat:title=without --ip}
mesos-master.sh: master started on ...
{noformat}
{noformat:title=with --ip}
mesos-master.sh --ip ...: master started on ...
{noformat}
It would be great if this were caught by tests/CI.
1
Add marker-dir and marker-dir-relative-to attributes to the deployment scanner resource. These control where the marker files are written, with the default being the dir being scanned itself. The idea here is to allow sharing of a deployments dir by letting users redirect the markers to some other location.

To store in a server-specific subdir of deployments, making the markers still fairly accessible to the user:
{code}
<deployment-scanner path="deployments" relative-to="jboss.server.base.dir" ... />
{code}
To store the markers in the data dir (out of sight, out of mind):
{code}
<deployment-scanner path="deployments" relative-to="jboss.server.base.dir" ... />
{code}
All sounds simple enough, but anyone looking into it must assume this will be a highly complex task, as the scanner itself is very complex and assumes that the deployments and the markers are in the same dir. Extremely extensive test coverage will be required.
0
The OpenEJB custom wire protocol must start with protocol identifier and version number headers. This will allow a multiplexer to distinguish the OpenEJB protocol from other protocols, and allow the protocol itself to distinguish between different versions.
1
java is the most widely used programming language
0
ServiceCacheImpl emits an IllegalStateException if it is already closed. That contradicts the Closeable contract, which states that the close call should be idempotent. This might be applicable to other Closeable implementations in the Curator project. Anyway, the issue is that we have a lot of errors like this in the logs:
{noformat}
Exception in thread ... java.lang.IllegalStateException: Already closed or has not been started
[stack trace truncated]
{noformat}
0
In case of failure to generate the test dataset, the process should exit with a non-zero error code. It seems that even though one of the keycloak-admin-client threads returns an error, the overall generator process swallows this exception and exits with success, which is not the expected behavior.
0
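One common way to get the behavior requested above is to collect worker exceptions and map them onto the process exit status. A minimal sketch, assuming nothing about the actual generator's API (the function name run_and_report is hypothetical):

```python
import concurrent.futures
import sys


def run_and_report(tasks):
    """Run callables in a thread pool; return exit code 1 if any
    task raised, 0 otherwise, instead of swallowing the error."""
    failed = False
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(t) for t in tasks]
        for f in concurrent.futures.as_completed(futures):
            try:
                f.result()  # re-raises any exception from the worker
            except Exception as exc:
                print(f"task failed: {exc!r}", file=sys.stderr)
                failed = True
    return 1 if failed else 0


print(run_and_report([lambda: None]))  # 0
```

A caller would then pass the returned value to sys.exit() so that CI and shell scripts can detect the failure.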
Using ... for the attached files, specifying -d xmlbeans, I am getting the following errors. In toOM:
{code}
org.apache.axiom.om.impl.builder.StAXOMBuilder builder =
    new org.apache.axiom.om.impl.builder.StAXOMBuilder(
        org.apache.axiom.om.OMAbstractFactory.getOMFactory(),
        new ...(param.newXMLStreamReader(...)));
{code}
newXMLStreamReader is a non-existent method. In fromOM:
{code}
if (org.apache.axiom.om.OMElement.class.equals(type)) {
    if (extraNamespaces != null) {
        return org.apache.axiom.om.OMElement.Factory.parse(
            param.getXMLStreamReaderWithoutCaching(),
            new org.apache.xmlbeans.XmlOptions().setLoadAdditionalNamespaces(extraNamespaces));
    } else {
        return org.apache.axiom.om.OMElement.Factory.parse(
            param.getXMLStreamReaderWithoutCaching());
    }
}
{code}
where the inner class org.apache.axiom.om.OMElement.Factory does not exist. The full command is: ... ...?wsdl -d xmlbeans -o bin -p ... -s -ss -sd. My issue seems to be the same as ... (same compilation errors), but that was marked as fixed, and it does not look as if this has really been fixed. Under no circumstances should the generator generate code that cannot compile; this looks to be a case where the code generator has not kept up with changes in Axiom. BTW, workflowtypes.xsd, queryresult.xsd and queryinterface.xsd are all in the same namespace, but query.wsdl is in its own namespace.
1
When deploying to a Maven repository which doesn't allow overwriting of existing artifacts, deploying of SAs fails. For Nexus the following exception occurs:
{noformat}
[DEBUG] Configuring mojo ... (artifact, attachedArtifacts, deploymentRepository=bva.releases, packaging=jbi-service-assembly, pomFile=..., skip=false, updateReleaseInfo=false)
Uploading ... : uploaded
[INFO] Retrieving previous metadata from bva.releases
[INFO] Repository metadata for artifact org.apache.servicemix.samples:camel-sa could not be found on repository bva.releases, so will be created
[INFO] Uploading repository metadata for artifact org.apache.servicemix.samples:camel-sa
[INFO] Uploading project information for camel-sa
Uploading ...
[ERROR] BUILD ERROR: Error deploying artifact: Failed to transfer file ... Return code is: ...
org.apache.maven.lifecycle.LifecycleExecutionException: Error deploying artifact: Failed to transfer file ...
Caused by: org.apache.maven.plugin.MojoExecutionException: Error deploying artifact: Failed to transfer file ...
Caused by: org.apache.maven.artifact.deployer.ArtifactDeploymentException: Error deploying artifact ...
Caused by: org.apache.maven.wagon.TransferFailedException: Failed to transfer file ... Return code is: ...
[repetitive wagon configuration messages and stack trace truncated]
Total time: ... seconds; finished at Mon Apr ... CEST; final memory: ...
{noformat}
As can be seen above, Maven tries to upload the artifact twice: the first upload succeeds, the second fails, since overwriting of artifacts in the repository is not allowed. Apparently the first upload comes from the main artifact (artifact), the second from attachedArtifacts. Similar behavior can be seen for a simple install, where the jar is copied to a zip which is overwritten by the actual SA (mvn install: "Installing ... to ..." appears twice).
0
Update pom.xml and src/etc/header.txt, remove malformed or duplicate headers, then execute:
{noformat}
mvn com.mycila.maven-license-plugin:maven-license-plugin:remove
mvn com.mycila.maven-license-plugin:maven-license-plugin:format
mvn clean install
grep -r "Licensed to jclouds, Inc." ...
{noformat}
0
EngineRunner classes will be put into a new package in org.apache.oodt.cas.workflow.engine.runner. WorkflowProcessor classes will be put into a new package in org.apache.oodt.cas.workflow.engine.processor. This will also include two EngineRunnerFactories which were left out of patch ...
0
complete support for the addressingfeature and submissionaddressingfeature and related annotations
1
clickocean dmp sdk for android
0
On master, the site-to-site reporting bundle is including the record reader/writer implementations to make use of the JSON reader. This causes them to get loaded twice by the framework and shown twice in the UI.
1
I get the following output when I try to unpack the tar.gz:
{noformat}
gzcat xerces-c-current.tar.gz | tar xvf -
x ..., ... bytes, ... tape blocks
[similar "x ..., ... bytes, ... tape blocks" lines truncated]
tar: directory checksum error
{noformat}
The fix to this issue is to use gtar instead of the native Sun tar. My system administrator tells me it is likely because there is a directory that doesn't have execute permissions turned on.
0
The Hudi CLI command "clustering" is failing with the below NumberFormatException for all the options:
{noformat}
INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.lang.NumberFormatException: ... Illegal value for config key spark.executor.memory: size must be specified as bytes (b), kibibytes (k), mebibytes (m), gibibytes (g), tebibytes (t), or pebibytes (p). E.g. ... or ...
{noformat}
1
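The error message above spells out Spark's expected size-string format. A hedged Python sketch of a validator for such values, useful for checking an option before handing it to spark-submit (the function name is illustrative, not part of Hudi or Spark):

```python
import re

# Spark size strings: an integer with an optional unit suffix
# b/k/m/g/t/p (case-insensitive), e.g. "512m", "4g", "1024".
_SIZE_RE = re.compile(r"^\s*\d+\s*[bkmgtp]?\s*$", re.IGNORECASE)


def is_valid_spark_size(value):
    """True if value parses as a Spark memory size string."""
    return bool(value) and bool(_SIZE_RE.match(value))


print(is_valid_spark_size("4g"))  # True
print(is_valid_spark_size(""))    # False
```

An empty or malformed value (which is what the NumberFormatException above suggests the CLI passed through) would be rejected up front instead of failing inside Spark.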
Similarly to TextIO.readAll(), AvroIO.readAll(), SpannerIO.readAll(), and in the general spirit of making connectors more dynamic, it would be nice to have JdbcIO.readAll() that reads a PCollection of queries, or, perhaps better, a parameterized query with a PCollection of parameter values and a user callback for setting the parameters on a query based on the PCollection element. JB, as the author of JdbcIO, would you be interested in implementing this at some point?
0
When using the trial version of Fuse Online, Data Virt can't be enabled because there are not enough resources.
1
sizenull should return null instead of in release this is a behavior change
0
Clicking the download link for ThemeBuilder version ... yields the following error message: "Requesting [GET ...] on servlet ... but only have [GET ...]". License generation works OK. This is an urgent issue for us; could you please treat it as a priority? Thanks, Mike
1
Ticket tracking requested Samigo-related changes to sakai.properties files for ...
1
Hello, I created an artifact on Sonatype. The sync to Maven central is OK, but this jar is missing a file. I redeployed OK to Sonatype, with the same version (com.github.noraui). How can I force a resync of my new artifact to Maven central? This case is particular: I did not bump the version.
1
The JavaScript client for the ribbon-webapp fraction only ever uses the first host supplied in the array of servers registered for a given service; there is no load balancing of any kind. This should at least do some basic round-robin of the servers provided. See here: ...
0
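A basic round-robin selection, as suggested above, can be sketched like this (Python used for illustration; the actual client is JavaScript, and the class name is hypothetical):

```python
import itertools


class RoundRobinHosts:
    """Cycle through all registered hosts instead of always
    returning the first entry of the server array."""

    def __init__(self, hosts):
        if not hosts:
            raise ValueError("at least one host is required")
        self._cycle = itertools.cycle(hosts)

    def next_host(self):
        return next(self._cycle)


rr = RoundRobinHosts(["10.0.0.1:8080", "10.0.0.2:8080"])
print([rr.next_host() for _ in range(4)])  # alternates between the two hosts
```

Each lookup advances the cycle, so successive requests are spread evenly across the registered servers rather than pinning to the first one.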
With the UPS service integration into the MCP, we need to ensure the following: bind takes Google/Apple client details from the MCP UI and sets up the app representation in push; the app config will include push client details when binding is done.
0