Columns: text_clean — string, lengths 3 to 505k · label — int64, values 0 or 1
Note: this bug report is for Confluence Server; if you are using Confluence Cloud, see the corresponding bug report. Summary: when generating a backup, it fails with {code:java}Caused by: com.atlassian.rdbms.dbimportexport.dbie.api.datafile.DataFileParseException: could not parse timestamp with pattern{code} The backup fails to parse the date in the AO table {code:java}...{code} which holds custom libraries from Gliffy. Environment: Confluence Cloud. Steps to reproduce: generate a backup through the backup manager. Expected results: the backup gets created successfully. Actual results: the backup fails with {code:java}Export error: Unsupported field: InstantSeconds. But don't worry, we'll fix it for you. You just need to contact Atlassian Support and paste in this error message: (timestamp) and your instance details: (timestamp){code} Workaround: there are two workarounds. Either wait until roughly the end of January, when we are migrating to a new version of Gliffy; after this it should work again (note that the new version of Gliffy does not support the import/export of custom shape libraries). Or contact Support, who will run the Gliffy data fix script attached to the ticket.
1
This bug was imported from another system and requires review from a project committer before some of the details can be marked public. For more information about historical bugs, please read "Why are some bugs missing information?". You can request a review of this bug report by sending an email; please be sure to include the bug number in your request.
1
I was trying to verify and followed the test plan created by Zhen; I'm unable to associate an assignment with an existing Gradebook item. To reproduce: (1) create an assignment, make it worth points, and send it to the Gradebook; (2) create a second assignment, also make it worth points; (3) select "Associate with existing Gradebook entry" from the dropdown and try to select the GB item created in (1). Even though I had sent (1) to the GB, the dropdown shows it as already associated. Tested on ... and nightly: Sakai ..., Sakai kernel ..., server on HSQLDB, built ..., Sakai kernel ..., server localhost.
1
Yammer/Dropwizard/Codahale Metrics would be a good thing to add. We should use histograms to track how long inter-node operations take, and meters or counters for successful/failed requests. It would be interesting to keep per-host stats, but it possibly makes sense to turn that off for larger deployments.
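A minimal sketch of what that could look like with the Codahale/Dropwizard MetricRegistry API; the metric names and the wrapper method are illustrative, not from this ticket: {code:java}import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class InternodeMetrics {
    private final MetricRegistry registry = new MetricRegistry();

    // A Timer combines a latency histogram with a throughput meter.
    private final Timer latency = registry.timer(MetricRegistry.name("internode", "requests", "latency"));
    private final Meter successes = registry.meter(MetricRegistry.name("internode", "requests", "success"));
    private final Meter failures = registry.meter(MetricRegistry.name("internode", "requests", "failure"));

    /** Times one inter-node operation and counts its outcome. */
    public void record(Runnable operation) {
        try (Timer.Context ignored = latency.time()) {
            operation.run();
            successes.mark();
        } catch (RuntimeException e) {
            failures.mark();
            throw e;
        }
    }
}{code} Per-host variants could be derived by appending the peer's host name to the metric name, behind a configuration flag so large deployments can disable them.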
0
A SimpleControl with a type parameter (checked in by ...) is instantiated declaratively without any problem. When instantiating this control programmatically, a ClassNotFoundException is thrown. This might be caused by java.beans.Beans.instantiate not supporting instantiation of a bean class with a type parameter. For instantiating controls with a type parameter, some instruction in ControlProgramming.doc might be necessary.
0
TestAHSWebServices#testContainerLogsForFinishedApps is failing: {noformat}java.lang.AssertionError: null
	at ... (stack frames stripped){noformat}
0
If a nested component wants to get the aliased variable during the save-state phase, it will not find it, since makeAlias and removeAlias were not called in AliasBean.processSaveState. Here comes the patch: {code}Index: AliasBean.java
RCS file: /home/cvspublic/incubator-myfaces/src/components/org/apache/myfaces/custom/aliasbean/AliasBean.java,v
retrieving revision ...

public Object processSaveState(FacesContext context) {
    if (context == null) throw new NullPointerException("context");
    if (isTransient()) return null;
    makeAlias(context);
    Map facetMap = null;
    for (Iterator it = getFacets().entrySet().iterator(); it.hasNext();) {
        Map.Entry entry = (Map.Entry) it.next();
        ...
    }
    removeAlias(context);
    return new Object[] { saveState(context), facetMap, childrenList };
}{code}
0
In PartyUiLabels.xml the following labels are deleted but still in use in some events: PartyCountryMissing, PartyCountryCode, PartyCountryCodeMissing. Fix: use the labels from CommonUiLabels instead: CommonCountryMissing, CommonCountryCode, CommonCountryCodeMissing.
0
When a notification results in creation of an entity, Atlas saves the classifications given in the notification. As the entity didn't exist in Atlas, this is not considered an update to classifications. Atlas should propagate the classifications added here only when the isPropagate flag is set to true; in other cases the classifications should not be propagated. Please note that notifications are meant to ingest metadata from source systems and are not an interface to add/update business metadata such as classifications. Add/update/remove of classifications via notifications is not supported, except for the new entities created while processing notifications. If a notification results in an update to an existing entity, Atlas ignores classifications in the notification; changes to classifications must be done via appropriate REST API calls.
0
It isn't clear to me what exactly failed; the logs are full of stack traces: ERROR: InvocationError for command ... setup.py nosetests (exited with code ...). Summary: ERROR: commands ...
0
Ranger should allow getting or deleting a Ranger policy based on the policy GUID.
0
Develop scripts to be used in connectathon activities (Dec ...) that must work on all Hyperledger-supported versions of the required tools (Docker, docker-compose, bash, YAML file versions, etc.). Work with Mihir and others to use marbles as the chaincode deployed in this work item. Create a script and instructions for any friendly partner to download and create their own peer on their own machine and join the network.
1
The SentryMetastorePostEventListener and SentryMetastorePostEventListenerBase classes are obsolete now and can be removed.
0
Could the Corinthia (incubating) project have svnpubsub publish from/to ..., please?
0
The following record fails to compile with the specific compiler: {code}{"name": "ipAddr", "type": "record", "fields": [
  {"name": "addr", "type": [
    {"name": ..., "type": "fixed", "size": ...},
    {"name": ..., "type": "fixed", "size": ...}]}]}{code} The stack trace is: {noformat}org.apache.avro.AvroRuntimeException: Ambiguous union
	at ... (stack frames stripped){noformat} This is on trunk (svn info: path, URL, repository root, and repository UUID elided). The code for UnionSchema in Schema.java has this constructor: {code}public UnionSchema(List<Schema> types) {
  super(Type.UNION);
  this.types = types;
  int seen = 0;
  for (Schema type : types) {            // check legality of union
    switch (type.getType()) {
      case UNION:
        throw new AvroRuntimeException("Nested union: " + this);
      case RECORD:
        if (type.getName() != null)
          continue;
      default:
        int mask = 1 << type.getType().ordinal();
        if ((seen & mask) != 0)
          throw new AvroRuntimeException("Ambiguous union: " + this);
        seen |= mask;
    }
  }
}{code} That allows only one member of any type other than record. The spec says: {quote}Unions may not contain more than one schema with the same type, except for the named types record, fixed and enum.{quote} The code above does not adhere to this. I am attaching a patch for only this code, but a unit test with a test schema that has two records, two fixed and two enum in it, as well as one of each of the unnamed types, is probably necessary as well; I am not yet familiar with the test infrastructure. I am also not familiar with what else this may impact.
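For reference, a sketch of a check matching the quoted spec wording — named types (record, fixed, enum) deduplicated by full name, unnamed types still limited to one each; the helper class and method names here are illustrative, not the attached patch: {code:java}import java.util.HashSet;
import java.util.List;
import java.util.Set;
import org.apache.avro.AvroRuntimeException;
import org.apache.avro.Schema;

class UnionCheck {
    // Unnamed types may appear at most once per type; named types may
    // repeat as long as their full names differ.
    static void checkUnion(List<Schema> types) {
        int seenUnnamed = 0;
        Set<String> seenNames = new HashSet<>();
        for (Schema type : types) {
            switch (type.getType()) {
                case UNION:
                    throw new AvroRuntimeException("Nested union: " + types);
                case RECORD:
                case FIXED:
                case ENUM:
                    if (!seenNames.add(type.getFullName()))
                        throw new AvroRuntimeException("Duplicate name in union: " + type.getFullName());
                    break;
                default:
                    int mask = 1 << type.getType().ordinal();
                    if ((seenUnnamed & mask) != 0)
                        throw new AvroRuntimeException("Ambiguous union: " + types);
                    seenUnnamed |= mask;
            }
        }
    }
}{code}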
1
Once the WildFly beta is out, we need to review WildFly support, especially regarding the Elytron adapters. There were important changes in the subsystem and APIs that need to be reflected on our side.
1
The release will include an updated installer for JBDS, for EAP, and standalone. The installers must be tested and released along with the update sites.
1
If a rebalance occurs with an in-flight fetch, the new KafkaConsumer can end up updating the fetch position of a partition to an offset which is no longer valid. The consequence is that we may end up either returning to the user messages with an unexpected position, or we may fail to give back the right offset in position(). Additionally, this bug causes transient test failures in ConsumerBounceTest.testConsumptionWithBrokerFailures with the following exception: {noformat}kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
java.lang.NullPointerException
	at ... (stack frames stripped){noformat}
0
Add the genesis batchsize preferredMaxBytes configuration property. Cut blocks to no more than preferredMaxBytes, regardless of the number of messages. A message that exceeds preferredMaxBytes will result in a batch of just that message, as long as the message does not exceed absoluteMaxBytes.
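A rough sketch of that cutting rule (the real orderer is written in Go; this Java class and its method names are illustrative only): {code:java}import java.util.ArrayList;
import java.util.List;

// Illustrative batch cutter enforcing preferredMaxBytes/absoluteMaxBytes.
class BatchCutter {
    private final long preferredMaxBytes;
    private final long absoluteMaxBytes;
    private final List<byte[]> pending = new ArrayList<>();
    private long pendingBytes;

    BatchCutter(long preferredMaxBytes, long absoluteMaxBytes) {
        this.preferredMaxBytes = preferredMaxBytes;
        this.absoluteMaxBytes = absoluteMaxBytes;
    }

    /** Returns the batches cut as a result of ordering this message. */
    List<List<byte[]>> ordered(byte[] msg) {
        if (msg.length > absoluteMaxBytes)
            throw new IllegalArgumentException("message exceeds absoluteMaxBytes");
        List<List<byte[]>> cut = new ArrayList<>();
        // An oversized-but-legal message becomes a batch of its own.
        if (msg.length > preferredMaxBytes) {
            if (!pending.isEmpty()) cut.add(flush());
            cut.add(List.of(msg));
            return cut;
        }
        // Cut the pending batch first if adding msg would exceed the preferred size.
        if (!pending.isEmpty() && pendingBytes + msg.length > preferredMaxBytes)
            cut.add(flush());
        pending.add(msg);
        pendingBytes += msg.length;
        return cut;
    }

    private List<byte[]> flush() {
        List<byte[]> batch = new ArrayList<>(pending);
        pending.clear();
        pendingBytes = 0;
        return batch;
    }
}{code}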
0
Branding has requested that we rename devstudio from "Red Hat Developer Studio" to "Red Hat CodeReady Studio", as we did recently in ... We'll need to collect graphics for use in the product and its website on ... We'll need this rebrand done for the upcoming December release; all changes need to be done by November so they can be included in the feature-complete release on Nov ..., in support of the docs team updating screenshots, install instructions, etc.
1
In ... we're seeing controllers running very hot (about ... in several ...). The problem appears related to this sequence of errors: {code}level=error msg="Reconciler error" name=awsprivatelinkcontroller error="could not get admin kubeconfig secret: secret not found"
level=error msg="could not get API URL from kubeconfig" controller=awsprivatelink error="could not get admin kubeconfig secret: secret not found"
level=error msg="error cleaning up hosted zone" controller=awsprivatelink error="could not get admin kubeconfig secret: secret not found"
level=error msg="error cleaning up privatelink resources for clusterdeployment" controller=awsprivatelink error="could not get admin kubeconfig secret: secret not found"{code} This may be a bug in itself; however, the controller is updating a condition timestamp every time we hit this error, causing another reconcile, which is where our hot-loop comes in: {code}lastProbeTime: ...
lastTransitionTime: ...
message: could not get admin kubeconfig secret: secret not found
reason: CleanupForDeprovisionFailed
status: "False"
type: AWSPrivateLinkReady{code} That then fans out to other controllers watching the ClusterDeployment.
1
It is impossible to override the following properties: package.access, package.definition, common.loader, server.loader, shared.loader, tomcat.util.scan.DefaultJarScanner.jarsToSkip, org.apache.catalina.startup.ContextConfig.jarsToSkip, org.apache.catalina.startup.TldConfig.jarsToSkip, tomcat.util.buf.StringCache.byte.enabled from Maven plugin configuration. The reason is that the code in org.apache.catalina.startup.CatalinaProperties.loadProperties() blindly overrides all system properties with properties from catalina.properties by default. As a result, the following Maven plugin configuration is not used: {code}...my-other-jar-to-skip.jar...{code} which is a convenient way to adjust the startup performance of Tomcat, which is poor due to Servlet spec requirements to scan all classes. I suggest either to call ... (which applies system properties from Maven plugin configuration) after org.apache.catalina.startup.CatalinaProperties.getProperty() (which applies system properties from the Tomcat embedded catalina.properties file), and not the other way around, or to modify org.apache.catalina.startup.CatalinaProperties.loadProperties() to check whether a particular system property already exists. There is also a workaround which prevents the embedded Tomcat from loading the default catalina.properties file: {code}${project.baseUri}target/tomcat/logs{code} In such a case, system properties specified from the command line or Maven plugin configuration are used. If desired, I can provide you with a pull request or a patch in order to make it easier for you. Thanks, Stepan
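A minimal sketch of the suggested existence check inside a loadProperties-style method, assuming `properties` holds the entries read from catalina.properties: {code:java}import java.util.Enumeration;
import java.util.Properties;

class CatalinaPropertiesFix {
    // Only copy catalina.properties entries into system properties when the
    // property has not already been set (e.g. by the Maven plugin or -D flags).
    static void applyCatalinaProperties(Properties properties) {
        Enumeration<?> names = properties.propertyNames();
        while (names.hasMoreElements()) {
            String name = (String) names.nextElement();
            if (System.getProperty(name) == null) {
                System.setProperty(name, properties.getProperty(name));
            }
        }
    }
}{code}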
0
When executing a Maven archetype which substitutes a user-defined value into a service namespace (for use with CXF/Apache Spring), the archetype generator leaves two braces rather than the single brace needed to close the namespace. I have attached three files: one with the correct fragment, which does not generate the expected output; one with the incorrect fragment, which does generate the expected output; and the expected output.
0
{code}rm ...
install virtualenv
Collecting virtualenv
  Could not find a version that satisfies the requirement virtualenv (from versions: )
No matching distribution found for virtualenv
Executing scheduled instruction of: upload all core dumps if there are some
Provision failed: error building: provisioning failed: error running bash script: error running bash: exit status ...{code} resulting in integrations failing with {code}Continuous Integration failed: could not install Python six package. That should not happen. Please contact the Coin admins and maybe try to restage your changes.{code}
1
Shouldn't it keep my incorrect input?
0
I restored an XML backup to my Confluence from another Confluence instance of the same version, but login stopped working for all users, including the administrator. Can anybody help me correct this problem? I am running Confluence on CentOS.
1
Get a graphic for wildfly.org for the WF release. This won't be needed if we have cut over to the new site by the release date, but I think there's a good chance that will not be done.
1
Scenario: there are two servers configured as a replicated live/backup pair. The live server is killed. The test waits until the backup server activates. The live server is restarted. The test expects that the backup server deactivates and the live becomes active. Reality: sometimes the live server doesn't become active. In the log I can see that it was synchronized with the backup, but based on a quorum vote it was restarted as backup. Customer impact: the failback feature in replicated HA is broken. {code}INFO Activation for server ActiveMQServerImpl::serverUUID=null: Apache ActiveMQ Artemis backup server version ... started, waiting live to fail before it gets active
INFO Activation for server ActiveMQServerImpl::serverUUID=null: started epoll netty connector version unknown to ...
INFO (activemq-client-netty-threads) backup server is synchronized with ...
INFO (activemq-client-netty-threads) started epoll netty connector version unknown to ...
INFO (activemq-client-netty-threads) restarting as backup based on quorum vote
INFO replication: sending ... to ...
INFO replication: sending ... to ...
INFO replication: sending NIOSequentialFile ... to ...
INFO replication: sending NIOSequentialFile ... to ...
WARN (activemq-client-global-threads) connection failure has been detected: the connection was disconnected because of server shutdown
WARN (activemq-client-global-threads) being disconnected for server shutdown: ActiveMQDisconnectedException[errorType=DISCONNECTED]: the connection was disconnected because of server shutdown
	at ... (stack frames stripped)
WARN (default) client connection failed, clearing up resources for session ...
WARN (default) cleared up resources for session ...
WARN (default) client connection failed, clearing up resources for session ...
WARN (default) cleared up resources for session ...
INFO (MSC service thread) unbound messaging object to JNDI name ...
INFO (MSC service thread) unbound JCA ConnectionFactory ...
INFO (ServerService thread pool) unbound messaging object to JNDI name ...
INFO (ServerService thread pool) unbound messaging object to JNDI name ...
INFO (ServerService thread pool) resource adaptor ...
WARN (default) connection to the backup node failed, removing replication now: ActiveMQRemoteDisconnectException
	at ... (stack frames stripped){code}
1
Proposed title of this feature request: add the autofs package to RHCOS. What is the nature and description of the request? Ship RHCOS with the autofs package already installed. Why does the customer need this (list the business requirements here)? The customer is running many pods that require mounts each, which adds up to ... NFS mounts on RHCOS; this many mounts causes the host to crash. They need autofs to pre-mount volumes so that pods can make use of host mounts vs. many NFS mounts each. List any affected packages or components: autofs.
1
Automatic user synchronization from a remote directory (when a synchronisation interval is set) stops working when manual synchronisation is run. Steps to reproduce: (1) configure a remote directory (Crowd or LDAP); (2) define a synchronisation interval, e.g. ... minutes; (3) check it is running according to schedule; (4) click Synchronise now: {code}INFO synchroniseCache: full synchronisation complete for directory ... in ...{code} (5) wait for the synchronisation interval again — it won't run. Workaround: to have it working again, it is necessary to edit the user directory and set the synchronisation interval again.
0
Allow integrators to make use of the existing reflection cache / bytecode scanning for resolving whether a class is annotated with a given annotation. Currently there is org.jboss.weld.bootstrap.events.AnnotationDiscovery together with the default implementation SimpleAnnotationDiscovery (reflection is used). We should expose something similar as part of the Weld SPI.
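A sketch of what exposing that contract through the SPI might look like; the method shape below is illustrative rather than the final Weld API: {code:java}import java.lang.annotation.Annotation;

/**
 * Hypothetical SPI mirroring org.jboss.weld.bootstrap.events.AnnotationDiscovery:
 * integrators can back it with a bytecode index instead of reflection.
 */
public interface AnnotationDiscovery {
    /**
     * Returns true if the given class is annotated with the given annotation,
     * however the implementation chooses to resolve that (reflection cache,
     * bytecode scanning, precomputed index, ...).
     */
    boolean containsAnnotation(Class<?> javaClass, Class<? extends Annotation> requiredAnnotation);
}{code}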
1
I have recently checked out Hive so I can merge changes made by a coworker and contribute the changes. One of the files involved is org.apache.hadoop.hive.jdbc... The head revision, with the comment "missing some jdbc functionality like getTables, getColumns, and HiveResultSet get methods based on column name (Bennie Schut via jvs)", is an empty file. All other files in the same package org.apache.hadoop.hive.jdbc seem to be correct. Question: there are no references to the class and my build works — is this class being phased out? Thanks for your time, Sean
0
The installer console at /system/console/osgi-installer tends to give out a lot of information, leading to a very long page. To ease jumping to the individual sections, there should be a table of contents at the beginning of that console which allows jumping directly to the following sections: active resources (one link per type), processed resources (one link per type), untransformed resources.
0
Hi, I have some projects I would like to publish on the Maven Central repository. Those projects will belong to the domain faylixe.fr. Kind regards; I remain available for any information request.
0
Here is a snippet of the test result: {code}testRegionTransitionOperations(org.apache.hadoop.hbase.coprocessor.TestMasterObserver): Time elapsed: ... sec <<< ERROR!
org.apache.hadoop.hbase.UnknownRegionException
	at ... (stack frames stripped){code} This could be related to a race (not in coprocessors) with region moving. We can skip moving the region for now.
0
Start up the server and the following is logged: INFO org.sonatype.jsecurity.web PlexusConfiguration — adding new protected resource with path=/service and filterExpression=authcBasic,perms
1
I am running Hive on Spark, with Hive ... and Spark ... running in Spark standalone mode with no YARN/HDFS. My Hive tables are external, pointing to ... My hive-site has spark.submit.deployMode set to client, spark.master set to ..., and in the Spark UI I see the Spark master has an available worker with resources. In Beeline I run "select * from table"; this works. Then in Beeline I run "select count(*) from table" and I get the error below. ... contains the so-called missing class, and ... is started with "nohup $HIVE_HOME/bin/hive --service ... --hiveconf ... --hiveconf hive.root.logger=INFO,console". The error below is from viewing the job in the Spark UI: {code:java}Failed: mapPartitionsToPair at ...
java.lang.NoClassDefFoundError: Lorg/apache/hive/spark/counter/SparkCounters
	at ... (Method)
	at java.security.AccessController.doPrivileged(Native Method)
	at ... (stack frames stripped)
Caused by: java.lang.ClassNotFoundException: org.apache.hive.spark.counter.SparkCounters
	at ... more{code} Note that if I do this in Beeline: "set spark.master=local", then the count works. What am I missing to make it work without setting spark.master to local?
1
When the HTTP management interface is secured with a legacy security realm using LDAP, the user is not prompted to provide credentials, as should be the case with the BASIC HTTP authentication mechanism; instead, an HTTP status is returned directly. Users won't be able to migrate their current configuration to ... without change.
1
There is the log in my local environment: {code}T E S T S
Running ...
Tests run: ..., Failures: ..., Errors: ..., Skipped: ..., Time elapsed: ... sec <<< FAILURE!
... Time elapsed: ... sec <<< ERROR!
java.sql.SQLException: Could not open connection to ...: java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at ... (further frames stripped)
Tests in error: ... SQL Could not open connection to jdbc:hi...
Tests run: ..., Failures: ..., Errors: ..., Skipped: ...{code} Marked as blocker since this is a unit test failure.
1
Every time you log in and go to the Process Task Reports perspectives, you'll see no data shown by all displayers. When you leave the reports perspective and return to it again, the data are correctly loaded. I consider this a blocker issue, because who would go to that perspective again after seeing there is no data? This regression was introduced very recently; it worked a week ago (snapshot from March ...).
1
Upgrading the target platform to Luna, the controller PopupMenuExtender shows a compile error due to a change in the API of the contributeObjectActions method. Since this method is internal, it is within scope for Eclipse to change its API. In retrospect, Designer should rewrite this class to avoid the use of the internal API, which would increase the chances of transitioning between target platforms being much more seamless.
1
Since we have resolved the class name conflicts, the fully qualified name for shadowing classes such as Cipher and SecureRandom is no longer needed.
0
The fix is to make invokeMissingMethod do the lookup, instead of maintaining a record of parent–child relationships as we do now.
1
Start Creator without previous settings, e.g. by passing the parameter -settingspath ... Show the menu for selecting output views: "General Messages" is not checked, although a button for this view is visible (generalmessages.png thumbnail). The menu and the line of buttons should be consistent: when a button is being displayed, the respective menu item should be checked, and vice versa.
0
RESTEasy was upgraded in the EAP to the new minor version, which contains changes important for users that should be documented. Upstream documentation is available here: ...
1
Note: this bug report is for Confluence Cloud; if you are using Confluence Server, see the corresponding bug report. The "Ban this user" button is displayed in some circumstances even when the logged-in user is not actually able to ban the user being viewed. For example: don't show the ban button when a user is viewing their own profile.
0
FindBugs warnings:
- URF (unread field): org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.CombinerOptimizer.chunkSize
- URF: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.seen
- URF: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler.log
- URF: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler$LastInputStreamingOptimizer.log
- URF: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.plans.MRPrinter.mIndent
- URF: org.apache.pig.backend.hadoop.executionengine.physicalLayer.LogToPhyTranslationVisitor.load
- URF: org.apache.pig.backend.hadoop.executionengine.physicalLayer.LogToPhyTranslationVisitor.log
- URF: org.apache.pig.backend.hadoop.executionengine.physicalLayer.LogToPhyTranslationVisitor.r
- URF: org.apache.pig.backend.hadoop.executionengine.physicalLayer.plans.PlanPrinter.printer
- URF: org.apache.pig.backend.hadoop.streaming.HadoopExecutableManager.writeHeaderFooter
- URF: org.apache.pig.builtin.BinStorage.i
- URF: org.apache.pig.builtin.PigStorage.os
- URF: org.apache.pig.impl.logicalLayer.LogicalPlanCloneHelper.mOriginalPlan
- URF: org.apache.pig.impl.logicalLayer.LOPrinter.printer
- URF: org.apache.pig.impl.plan.optimizer.RuleMatcher.mCommonNodes
- URF: org.apache.pig.impl.plan.optimizer.RulePlanPrinter.printer
- URF: org.apache.pig.impl.plan.PlanPrinter.printer
- URF: org.apache.pig.pen.DerivedDataVisitor.pc
- URF: org.apache.pig.PigServer.cachedScript
- UUF (unused field): org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReducePOStoreImpl.pc
- UUF: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReducePOStoreImpl.sFile
- UUF: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReducePOStoreImpl.storer
- UUF: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.numQuantiles
- UUF: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.samples
0
{noformat}WARN error processing component with id instructors-hierarchy-selection: java.lang.IllegalStateException: Invalid value for this ternary boolean: required
	at ... (very long stack trace, frames stripped)
INFO ...{noformat}
1
Must include <cstring>.
1
The behavior we experience here is that our main thread is stuck forever on the stop action of the FTP server. Here's a jconsole output of the stack trace: {code}java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
java.lang.Thread.run(Unknown Source){code}
1
With ..., the old-style filters have gone from deprecated to deleted. This means we need a way to upgrade all existing configurations to use the new filters. Depending on the changes made for aggregators/combiners, we may want to do that modification in the same sweep.
1
Could you please add ... to the project com.linecorp.armeria so that I can have the deployer role for both snapshot and release? Account information: guinsjguinsorg. Thanks in advance.
0
Two sides of the issue: (1) clients unable to handle "too many requests" — PR merged, backport to ... OK, not yet in ..., so we cannot backport it into our client-go yet; (2) some watchers in our operators are recreating watchers too often, e.g. every ... instead of every ... minutes. The goal is to update our operators to recreate the watchers less often. Example PR: ... To get the biggest watch offenders, one can run: {code}kubectl-dev_tool audit -f /home/jchaloup/projects/src/github.com/openshift/installer/must-gather/audit_logs/kube-apiserver/audit.log --verb=watch --by=user{code} (had ... line read failures) count / first / last / duration: system:openshift-monitoring:prometheus-operator, system:openshift-operator-lifecycle-manager:olm-operator-serviceaccount, system:kube-controller-manager, system:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator, system:openshift-controller-manager-operator:openshift-controller-manager-operator, system:openshift-ingress-operator:ingress-operator
1
The release notes have disappeared from the website. All changes that were made for the contribution pages, member pages, etc. have been removed. This means that all the work done for ... has now been reversed.
1
Seen here: {code}unit.kafka.server.ReplicationQuotasTest > shouldBootstrapTwoBrokersWithFollowerThrottle FAILED
java.lang.AssertionError: expected ... but was ...
	at ... (stack frames stripped){code}
0
The term "partition" should be globally replaced by the term "cluster". This affects public interfaces (the deprecation of which needs to be fully backwards compatible), public implementation classes, member variables, system properties, javadocs, and the clustering documentation.
0
Update HP Cloud object storage to work with API ...
0
Changing the file paths to be shorter.
0
The documentation of HiddenHttpMethodFilter says: {quote}NOTE: This filter needs to run after multipart processing in case of a multipart POST request, due to its inherent need for checking a POST body parameter. So typically, put a Spring MultipartFilter before this HiddenHttpMethodFilter in your web.xml filter chain.{quote} That means in the current configuration of the Roo-generated web application, the HiddenHttpMethodFilter will not work for multipart requests. For example, if I add a file upload field in the update form of an entity and change the form to multipart, Spring MVC will not recognize the request anymore as a PUT, since the HiddenHttpMethodFilter does not work. An alternative solution, as said in the javadoc above, is to place the MultipartFilter in front of the HiddenHttpMethodFilter. This should be the default configuration created by Spring Roo. The only ugly part of this is that the multipartResolver has to be moved from webmvc-config.xml to applicationContext.xml.
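For illustration, the required ordering expressed with Servlet 3.0 programmatic registration (Roo of that era generated web.xml instead; the initializer class here is hypothetical). Filters registered this way are invoked in registration order, so MultipartFilter runs first: {code:java}import javax.servlet.FilterRegistration;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.filter.HiddenHttpMethodFilter;
import org.springframework.web.multipart.support.MultipartFilter;

// MultipartFilter must be registered before HiddenHttpMethodFilter so the
// _method POST parameter is readable for multipart PUT forms.
public class FilterOrderingInitializer implements WebApplicationInitializer {
    @Override
    public void onStartup(ServletContext ctx) throws ServletException {
        FilterRegistration.Dynamic multipart =
                ctx.addFilter("multipartFilter", new MultipartFilter());
        multipart.addMappingForUrlPatterns(null, false, "/*");

        FilterRegistration.Dynamic hiddenMethod =
                ctx.addFilter("hiddenHttpMethodFilter", new HiddenHttpMethodFilter());
        hiddenMethod.addMappingForUrlPatterns(null, false, "/*");
    }
}{code}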
0
Up to Jena ..., the TriG output for the default graph includes the wrapping braces; the other TriG forms (trig/blocks, trig/flat) do not. They are not required by TriG. This JIRA will change the behaviour of the pretty writer to not output braces around the default graph, nor indent it.
0
With an increase in the number of nodes, puts to a replicated cache are slowed down almost in the same proportion. Unit test reproducer: {noformat}/* Apache License, Version 2.0 header */
package org.apache.ignite.internal.processors.cache.distributed.replicated;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.internal.IgniteEx;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

/** Tests replicated cache performance. */
public class GridCacheReplicatedTransactionalDegradationTest extends GridCommonAbstractTest {
    /** Keys. */
    private static final int KEYS = ...;

    /** {@inheritDoc} */
    @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
        IgniteConfiguration cfg = super.getConfiguration(gridName);

        cfg.setClientMode(gridName.startsWith("client"));

        CacheConfiguration ccfg = new CacheConfiguration();
        ccfg.setCacheMode(CacheMode.REPLICATED);
        ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
        ccfg.setName("test");

        cfg.setCacheConfiguration(ccfg);

        return cfg;
    }

    /** */
    public void testThroughput() throws Exception {
        try {
            IgniteEx client = startGrid("client");

            IgniteCache<Integer, Integer> cache = client.getOrCreateCache("test");

            doTest(client, cache);
            doTest(client, cache);
            doTest(client, cache);
        }
        finally {
            stopAllGrids();
        }
    }

    /**
     * @param client Client.
     * @param cache Cache.
     */
    private void doTest(Ignite client, IgniteCache<Integer, Integer> cache) {
        long start = System.currentTimeMillis();

        for (int i = 0; i < KEYS; i++) {
            try (Transaction tx = client.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
                cache.put(i, i);

                tx.commit();
            }
        }

        log.info("TPS: " + Math.round(KEYS / ((System.currentTimeMillis() - start) / 1000f)));
    }
}{noformat} My test results: transactional cache, explicit transaction: ... TPS; atomic cache: ... TPS; transactional cache, no explicit transaction: ... TPS.
0
It may be useful to allow different edge manager plugin types based on different requirements. In order to support this, we would need to support different plugins per edge for routing the events on that edge. A motivating scenario is when a custom plugin from an older release of a downstream project is using older APIs while the latest release of that project has moved on to newer APIs. This would allow both old and new releases to work with the latest Tez framework as optimally as possible.
0
When the http-interface uses the http-authentication-factory attribute for authentication and the security-realm attribute for SSL, and the referenced security realm does not include authentication, then authentication through the http-interface is not possible. When the management console is used, a page saying "The Red Hat JBoss Enterprise Application Platform is running. However, you have not yet added any users to be able to access the admin console" is displayed. When ... is accessed, the following output is returned: {code}"outcome" => "failed",
"failure-description" => "The security realm is not ready to process requests, see ...",
"rolled-back" => true{code} When the security realm also includes authentication (which is not used), authentication through the http-interface works as expected. We request the blocker flag because this issue blocks RFE ... This issue is reported against EAP ... because this configuration could not be set on the application server due to ..., which was fixed in EAP ...
1
The error can be created in the following way: create a poll with the number of answers to be selected more than one, say ... Now try to answer the poll and select more than the number specified, say ... A warning message is shown, saying that the number of options selected is more than required. Now click on the Reset button: the checked options do not get reset.
0
Exception: {code:java}java.lang.NullPointerException
	at ... (stack frames stripped){code} We are using Hudi as our storage engine for the output of our Spark jobs; we use AWS EMR to run the jobs. Recently we started observing that some of the upsert commits are leaving the table in an inconsistent state, i.e. _hoodie_record_key is observed to be null for a record which is updated during that commit. How are we checking that _hoodie_record_key is null? {code:java}val df = spark.read.format("org.apache.hudi")...
df.filter("_hoodie_record_key is null").show(false)
// output: _hoodie_record_key | _hoodie_partition_path | primaryKey
//         null               | xxxxxxxxxxx            | xxxxxxxxxxx{code} One thing to note here is that the record which has null for _hoodie_record_key was already present in the Hudi table and was updated during the commit. What is even weirder for us is that there is only a single record in the Hudi table with _hoodie_record_key as null; all other records are fine. We have verified that the column that is used as _hoodie_record_key (RECORDKEY_FIELD_OPT_KEY) is present in the record and is not null. After rolling back the faulty commit which introduced that record, re-running the job works fine, i.e. there are no records with _hoodie_record_key = null. Hoodie writer config: {code:java}val hudiOptions = Map(
  RECORDKEY_FIELD_OPT_KEY -> "primaryKey",
  PARTITIONPATH_FIELD_OPT_KEY -> "partitionKey",
  PRECOMBINE_FIELD_OPT_KEY -> "updateTime",
  KEYGENERATOR_CLASS_OPT_KEY -> classOf[...].getName,
  CLEANER_COMMITS_RETAINED_PROP -> ...
)
dataFrame.write.format("org.apache.hudi")
  .option(HoodieWriteConfig.TABLE_NAME, "myTable")
  .options(hudiOptions)
  .option(HoodieIndexConfig.INDEX_TYPE_PROP, "SIMPLE")
  .mode(SaveMode.Append)
  ...{code} We are using a custom RecordPayload class which inherits from OverwriteWithLatestAvroPayload.
0
This bug was imported from another system and requires review from a project committer before some of the details can be marked public. For more information about historical bugs, please read "Why are some bugs missing information?". You can request a review of this bug report by sending an email; please be sure to include the bug number in your request.
0
Test whether the sanitize.field.names configuration is supported by the Vitess connector (it is supported by the other connectors). If it is not supported, add support for it to the Vitess connector.
0
During internal changes, the following errors were introduced: Maven Central is no longer enabled by default, and loading settings.xml from the classpath was broken. As the testsuite is not using any remote artifacts, the change was not apparent from the testsuite.
1
After the upgrade of Bamboo Cloud to ..., the following stack trace is shown when triggering builds: {noformat}java.lang.IllegalStateException: XSRF: A mutative operation was attempted on BucketPropertySetItem within a non-mutative HTTP request
	at ... (very long stack trace, frames stripped){noformat} Incident progress: JDK ... has been rolled out to all Bamboo Cloud instances. To resolve the problem, please clear your cookies and cache if the problem is persisting. If you're still encountering issues, then log a ticket at ... Other problems related to this upgrade with the same root cause: Bamboo builds and deployments sections showing UI problems after the upgrade; the "create a new plan" screen is broken; the triggers page seems to be broken; the option to run customized builds in Bamboo takes users to a broken page.
1
This bug was imported from another system and requires review from a project committer before some of the details can be marked public. For more information about historical bugs, please read "Why are some bugs missing information?". You can request a review of this bug report by sending an email; please be sure to include the bug number in your request.
1
In Impala we have the FLAGS_principal and FLAGS_be_principal flags. If only FLAGS_principal is set, we use it as both the internal and external principal. If both FLAGS_principal and FLAGS_be_principal are set, we use FLAGS_be_principal as the internal principal and FLAGS_principal as the external principal. However, in Kudu they only source the internal principal from FLAGS_principal and aren't aware of a flag called FLAGS_be_principal. So, as of the time ... went in, if FLAGS_be_principal is explicitly set to something different from FLAGS_principal, we would be using the external principal as the internal principal, which is incorrect.
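Restating the intended flag semantics as a small sketch (Impala itself is C++; the Java class here is purely illustrative): {code:java}// Illustrative principal selection mirroring the described Impala flag semantics.
final class PrincipalSelection {
    final String internal;
    final String external;

    PrincipalSelection(String principalFlag, String bePrincipalFlag) {
        // FLAGS_principal is always the external principal.
        this.external = principalFlag;
        // FLAGS_be_principal, when set, overrides the internal principal only.
        this.internal = (bePrincipalFlag != null && !bePrincipalFlag.isEmpty())
                ? bePrincipalFlag
                : principalFlag;
    }
}{code}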
1
The following issue has been found by WorldSpace (TM). Issue: reading order. Standard: Success Criterion ... Meaningful Sequence. Priority: high. Severity: violation. Practice: accessibility. This issue was found in the source code. In the Create Issue modal window, focus of the screen reader falls into the Summary text field as soon as the window loads, skipping the Project / Issue Type text fields section. (d) Focus of the assistive-aid user must be on the "Create Issue" heading so that the assistive-aid user can navigate the complete form. Help with remediation is available at ...
1
This bug was imported from another system and requires review from a project committer before some of the details can be marked public. For more information about historical bugs, please read "Why are some bugs missing information?". You can request a review of this bug report by sending an email; please be sure to include the bug number in your request.
0
The file jetspeed/WEB-INF/db/manage-db.bat breaks if the classpath contains a space in a filename. Specifically, the line {code}java -classpath %CLASSPATH%;hsqldb.jar org.hsqldb.util.DatabaseManager -url jdbc:jetspeed...{code} needs to be changed to {code}java -classpath "%CLASSPATH%;hsqldb.jar" org.hsqldb.util.DatabaseManager -url jdbc:jetspeed...{code} Note the quotes around "%CLASSPATH%;hsqldb.jar". That's all. Markus
1
Bruno already updated the schema and fixed the comment annotations; the upgraded schema is attached.
0
WinRT package missing from the installer; WinRT ... and ARM are available.
1
From Nicholas at the tail of ...: {quote}If we annotate it as public, all the classes associated with them should also be annotated as public. Also, whenever we change the interface or any of the associated classes, it is an incompatible change. In our case, BlockVolumeChoosingPolicy uses FsVolumeInterface, which is a part of FSDatasetInterface. In FSDatasetInterface there are many classes that should not be exposed. One way to solve it is to make FsVolumeInterface independent of FSDatasetInterface; however, FsVolumeInterface is not yet a well-designed interface for the public. For these reasons, it is justified to annotate it as private, the same as BlockPlacementPolicy.{quote} We should switch BlockVolumeChoosingPolicy to a private audience.
0
Synapse config: I have created a simple proxy with NO_KEEPALIVE to dishonor the keep-alive connection in the response path: {code}<definitions xmlns=...>
  ...
  <address uri=.../>
  ...{code} Invoke the proxy service with a keep-alive connection via curl as mentioned below. curl command: {code}curl -v -H "Keep-Alive: ..." -H "Connection: keep-alive" --header "Content-Type: ..." --header "SOAPAction: getQuote" --data "<soapenv:Envelope xmlns:soapenv=... xmlns:wsa=...>..."{code} Since I have mentioned NO_KEEPALIVE in the response path, I expect Synapse to behave the same as the request path with the NO_KEEPALIVE property; however, it is not honored in the response. Received request and response: {code}* Trying ... connected to localhost port ...
> POST /services/StockQuoteProxy
> Host: ...
> User-Agent: ...
> Accept: ...
> Keep-Alive: ...
> Connection: keep-alive
> Content-Type: ...
> SOAPAction: getQuote
> Content-Length: ...
* upload completely sent off: ... out of ... bytes
< ... OK
< Content-Type: text/xml
< Date: Wed, ... Jun ... GMT
< Server: Synapse-PT-HttpComponents-NIO
< Transfer-Encoding: chunked
< Connection: keep-alive
* Connection to host localhost left ...{code} Jun ... PDT, vsivajothy
0
... made DataNode read-request logging DEBUG. There is a good reason why it was at INFO for so many years: it is very useful in debugging load issues. This JIRA will revert ... I haven't seen it being a bottleneck on busy HBase clusters, but if someone thinks it is a serious overhead, please make it configurable in a separate JIRA.
0
Hi, currently when you add an attribute of type version, you are forced to select one specific JIRA project only. It would be nice to have an attribute for the project and another for the version, where the latter filters the versions by the project selected in the first attribute. Thanks, Claudio
0
Replace it with getProxyId(String token). We may need to update the implementation of EntityProxyIdImpl.asString() as necessary. Also remove the getToken() methods; instead, EntityProxyIdImpl.asString() should suffice. Marked it as a blocker since it is public API.
1
Hi, it seems doing an INSERT OVERWRITE on a partitioned table, with a SELECT that results in no records, leaves the existing records in the target table intact. If the table is not partitioned, it works fine and the result is the truncated table. Table storage type does not seem relevant. SQL to reproduce:

OK, non-partitioned text table: {code}create table ... (i int); insert into ... values (...);
create table ... (i int); insert into ... values (...);
select count(*) from ...;  -- count is ...
insert overwrite table ... select * from ... where i = ...;
select count(*) from ...;  -- count is ...{code}

OK, non-partitioned Parquet table: {code}create table ... (i int) stored as parquet; insert into ... values (...);
create table ... (i int) stored as parquet; insert into ... values (...);
select count(*) from ...;  -- count is ...
insert overwrite table ... select * from ... where i = ...;
select count(*) from ...;  -- count is ...{code}

Broken, partitioned text table: {code}create table ... (i int) partitioned by (j int); insert into ... partition (j=...) values (...);
create table ... (i int) partitioned by (j int); insert into ... partition (j=...) values (...);
select count(*) from ...;  -- count is ...
insert overwrite table ... partition (j=...) select * from ... where j = ...;
select count(*) from ...;  -- ERROR: count is still ...{code}

Thank you, Steve
0
Update submodules on dev in ... with the other two changes: {code}c++ ... -pipe -stdlib=libc++ -arch ... -isysroot ... -fno-exceptions -Wall -W -ffunction-sections -fdata-sections -fPIC -DQT_NO_NARROWING_CONVERSIONS_IN_CONNECT -DQT_USE_QSTRINGBUILDER -DQT_NO_EXCEPTIONS -D_LARGEFILE_SOURCE -DQT_NO_DEBUG -DQT_QMLDEVTOOLS_LIB -DQT_BOOTSTRAP_LIB -DQT_BOOTSTRAPPED -DQT_NO_CAST_TO_ASCII -I... -Isrc/tools/qdoc -Isrc/tools/qdoc/qmlparser -I/Users/qt/work/install/include -I/Users/qt/work/install/include/QtQml -I/Users/qt/work/install/include/QtCore -I/Users/qt/work/install/include/QtXml -I/Users/qt/work/install/mkspecs/macx-clang -o .obj/text.o text.cpp
error: no member named 'sprintf' in 'QString'; did you mean 'asprintf'?
    msg.sprintf("xml error: parse error at line %d, column %d: %s\n", ...)
note: 'asprintf' declared here:
    static QString asprintf(const char *format, ...){code}
1
After the RM restarts, it forgets about existing NMs and their potentially decommissioned status too. After restart, the RM cannot maintain the contract to the AMs that a lost NM's containers will be marked finished within the expiry time.
0
MySqlToGoogleCloudStorageOperator should handle TIME columns (represented as datetime.timedelta) correctly. MySqlToGoogleCloudStorageOperator should return DATETIME and TIMESTAMP columns in UTC.
0
Buy Fioricet online cheap — click here. Here is the reason you should purchase Fioricet on the web: headaches and migraines can be ruthless. They make an individual extremely uncomfortable, and at the same time can bring about other physical and mental issues, particularly if an individual is experiencing migraine; these headaches are very recurring and there is no way to stop them. It is therefore advised to take essential action if you feel that the headache is getting out of control. The medicine recommended for a migraine ought to be such that it relieves the pressure between the nerves and permits the individual to relax. It is thus that Fioricet is a medication proposed to patients who are experiencing migraine, to ease their pain and constant headache. The medication, made by combining acetaminophen, caffeine, and butalbital in fixed proportions to get a combination salt capable of countering migraine. The specialist recommends the medication if the patient is experiencing mild to extreme headache. The presence of paracetamol and analgesic in the medication guarantees that it will prove effective in treating the condition. You can purchase Fioricet online after learning about the effects and risks involved with the medication. Fioricet ... mg: headaches can be serious to cope with, particularly when it comes to migraine attacks; only the person who experiences them knows how much pain and unease they can create. Basically, migraine is a disease which comes with no cure; you can't entirely eradicate it, yet efforts are made toward making a drug which can counter the agonizingly long headaches caused by migraine. It is consequently that Fioricet is a drug prescribed to patients who suffer from serious migraine attacks. The chemicals like acetaminophen, butalbital, and caffeine which are present in the drug permit the individual to feel relieved; the Fioricet ... mg dosage is sufficient for the patient to find instant relief. Applicability and possible results: the medication is used to treat patients who suffer from excessive migraine attacks; notwithstanding, it isn't suggested by the specialist right away. Only when the specialist knows about the state of the patient and the degree of his pain would he be able to prescribe the medication to him. Fioricet is a strong medication and you should take it solely after consulting your primary care physician regarding the same. The drug should be avoided if you have any of the following: liver cirrhosis, drug and alcohol addiction, asthma, kidney issues, skin sensitivity. When taken without the supervision of the specialist, the medication can cause the following side effects: dizziness, seizures, shortness of breath, insomnia, anxiety. Accordingly, Fioricet is a medication which should be administered only under expert watch.
0
As a HAWQ user, I want the other QEs of the same query to be kept alive when one QE fails, so that I can reuse the alive QEs to execute the following queries.
0
Fixing test failures in ..., I've found that some failures are ignored by the JDBC client. Below is the code part that verifies the result of a JDBC call: {noformat}public static void verifySuccess(TStatus status, boolean withInfo) throws SQLException {
  if ((status.getStatusCode() != TStatusCode.SUCCESS_STATUS) &&
      (withInfo && (status.getStatusCode() != TStatusCode.SUCCESS_WITH_INFO_STATUS))) {
    throw new HiveSQLException(status);
  }
}{noformat} If withInfo is false, verifySuccess() ignores the status. By fixing this, two tests fail: {noformat}org.apache.hive.minikdc.TestJdbcWithMiniKdc.testNegativeProxyAuth
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testNegativeTokenAuth{noformat}
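A sketch of the corrected check, assuming the intent is that SUCCESS_STATUS always passes and SUCCESS_WITH_INFO_STATUS passes only when withInfo is true; a drop-in method body, not the committed fix: {code:java}public static void verifySuccess(TStatus status, boolean withInfo) throws SQLException {
  TStatusCode code = status.getStatusCode();
  // Accept SUCCESS_STATUS unconditionally; accept SUCCESS_WITH_INFO_STATUS
  // only when the caller opted in via withInfo. Everything else is an error.
  boolean ok = code == TStatusCode.SUCCESS_STATUS
      || (withInfo && code == TStatusCode.SUCCESS_WITH_INFO_STATUS);
  if (!ok) {
    throw new HiveSQLException(status);
  }
}{code}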
1
Summary of observations: • There's different behaviour between the AMQP and the Artemis clients. • There's a UUID subscription name in the subscription topic when you're using the AMQP client, you don't set the client ID, and you've selected a durable shared subscription; it should just be the subscription name, like with the Artemis client. • The AMQP client seems to have a problem if you try to create a new durable non-shared subscription on the same topic with the same client ID and a different subscription name; the Artemis client doesn't have a problem with this.
1
When using the quick-create form for subtasks, if you quickly click the Create button, it will create multiple subtasks.
0
We have a hosted instance of JIRA; the URL is ... Last night when I logged into JIRA, the Agile plugin did not load. I went to the plugin section and saw an error message that the add-on failed to enable. I searched JIRA and found ..., where a user reported a similar issue, and the resolution was apparently to restart the instance.
1
Need versions for what is to become the Camel K (CKC) release for Camel ...: NA; camel-k: NA. YAKS was removed during ... and will not be built.
0
After renaming replication distribution and splitting the core and API modules, it'd be good to revisit and review the current API to make it cleaner. Main issues: remove unused/unneeded APIs; refactor the transport authentication API, as it mixes the authentication algorithm implementation with providing the authentication secret to the transport algorithm.
0
When you copy or move a question between question pools, the source pool is not included in the available list of destinations. This is probably supposed to prevent the bad things that happen when you move a question into its own pool, but it also means you can't copy or move questions down into subpools. Steps to reproduce (to make this extra interesting, do this as a user who doesn't have any question pools): (1) create a question pool called ...; (2) create a question in that pool; (3) create a subpool under (1), call it ...; (4) find the question you put into pool one and click the copy or move link next to it. You can't move/copy the question down into subpool two; in fact, if there are no other question pools, you can't move or copy it anywhere at all. I can definitely see reasons why an instructor might want to take the questions in a pool and sort them into subpools underneath it; we should allow this.
0
Gradebook Classic is not listed in the tools to import in Site Info > Import from Site. Setup: in sakai.properties, uncomment or remove sakai.gradebook.tool from {code}stealthTools@org.sakaiproject.tool.api.ActiveToolManager{code} You need to uncomment it and not include sakai.gradebook.tool, because it's stealthed by default in kernel.properties; so by uncommenting and leaving it blank, or removing the tool id from sakai.properties, we are overriding the property in kernel.properties. Steps: (1) go to Site Info; (2) click Import from Site; (3) choose a site that has Gradebook Classic. The list of tools to import does not include Gradebook Classic. After debugging the code, it looks like it might not be displayed because it isn't in the list of entity producers, but that's as far as I got. I'm not familiar enough with the code that generates the entity producers, but I suspect that this is due to "remove useless code in GradebookService" (...).
1
Currently CfgManager holds all information about configured servers. CfgManager is an application component and knows nothing about default credentials defined in the workspace and project configuration. The problem is that CfgManager notifies listeners about changes in the server configuration and can send only a ServerCfg object. In the plugin we use ServerData, which combines ServerCfg and default credentials; CfgManager should be able to construct a ServerData object. Returning ServerData from CfgManager instead of ServerCfg would make using server info in the plugin much simpler. CfgManager also contains info about global servers; that feature is not used in IDEA. It was supposed to be used in Eclipse, but Eclipse does not use CfgManager at all; it uses some Mylyn storage system. The example problem which cannot be solved without the refactoring is ...
1
What we have: ... What we need: ... TODO: here is the doc ... I have to find the original of the image; could use the same tool this picture was created with.
0
The system chaincode needs to access the SignedProposal object in order to perform access control. The SignedProposal might be used for additional purposes too, specific to the chaincode's logic.
0
{code}CID ...: resource leak (CTOR_DTOR_LEAK) in mgmt/cluster/ClusterCom.cc, ClusterCom::ClusterCom(unsigned long, char *, int, char *, int, char *):
  ink_filepath_merge(cluster_conf, sizeof(cluster_conf), p, cluster_file);  /* XXX: shouldn't we pass the cluster_conf to the Rollback? */
  Debug("ccom", "using cluster file %s", cluster_file);
  Debug("ccom", "using cluster conf %s", cluster_conf);

CID ...: resource leak (CTOR_DTOR_LEAK): the constructor allocates field cluster_file_rb of ClusterCom, but the destructor and whatever functions it calls do not free cluster_file_rb:
  ... = new Rollback(cluster_file, ...);
  if (ink_sys_name_release(sys_name, sizeof(sys_name), sys_release, sizeof(sys_release)) ...)
    mgmt_log("[ClusterCom] Node running on OS: '%s' Release: '%s'\n", sys_name, sys_release);

CID ...: uninitialized members (UNINIT_CTOR) in mgmt/cluster/ClusterCom.cc, ClusterCom::ClusterCom(unsigned long, char *, int, char *, int, char *){code}
0
Currently, some of the objects returned by using the ServiceLoader to load the RepositoryFactory services may in fact implement org.modeshape.jcr.api.Repositories. This is clumsy and incorrectly mixes content. We need to clean up and separate the RepositoryFactory notion from the repository container notion. All of the RepositoryFactory implementations need to be cleaned up, and we should come up with how clients like the web applications can discover multiple repositories, e.g. by finding a container, perhaps with the ServiceLoader and a new RepositoryContainer interface.
1