text_clean: string, lengths 3 to 2.28M
label: int64, values 0 or 1
As described in the linked issue, overriding a property like this:
{code}
<Listener className="org.jboss.modcluster.ModClusterListener" loadMetricClass="org.jboss.modcluster.load.metric.impl.BusyConnectorsLoadMetric" ... />
{code}
even to the default, throws an exception:
{code}
org.apache.tomcat.util.IntrospectionUtils setProperty
WARNING: IAE loadMetricCapacity
(stack trace truncated)
{code}
1
OK, this one is on me. JournalNodeMXBean is a Public/Evolving interface, but the change was cherry-picked to a maintenance branch, which breaks source compatibility between releases by adding the following three methods:
{noformat}
/**
 * Get host and port of JournalNode.
 * @return colon-separated host and port string
 */
String getHostAndPort();

/**
 * Get list of the clusters of JournalNode's journals,
 * as one JournalNode may support multiple clusters.
 * @return list of clusters
 */
List<String> getClusterIds();

/**
 * Gets the version of Hadoop.
 * @return the version of Hadoop
 */
String getVersion();
{noformat}
API checker error:
{quote}
Recompilation of a client program may be terminated with the message: a client class C is not abstract and does not override abstract method getClusterIds() in JournalNodeMXBean.
{quote}
1
create table ... (signed not null, ..., unsigned not null, pic ... comp not null, pic ... not null, ..., signed not null, ..., store by ...): SQL operation complete; then got error: the transaction subsystem (TMF) returned an error on a commit transaction.
alter table ... add constraint ... primary key: error, the transaction subsystem (TMF) returned an error on a commit transaction. SQL operation failed with errors.
insert into ... values ...: error, unable to access HBase interface; call to ExpHbaseInterface::rowExists returned error. Cause: org.apache.hadoop.hbase.client.RetriesExhaustedException: failed after exceptions; org.apache.hadoop.hbase.NotServingRegionException: region is not online; followed by many repeated org.apache.hadoop.hbase.TableNotFoundException entries (stack trace truncated). Rows inserted.
select from ...: error, unable to access HBase interface; call to ExpHbaseInterface::scanOpen returned error. Cause: org.apache.hadoop.hbase.client.RetriesExhaustedException: failed after exceptions, again followed by many repeated org.apache.hadoop.hbase.TableNotFoundException entries. Rows selected.
alter table ... drop constraint ...: error, constraint does not exist. SQL operation failed with errors.
showddl table ...: columns listed as no default / not null / not droppable, with character set and collate defaults and unsigned variants; store by ... asc. SQL operation complete. Log off; exit; end of MXCI session.
Test script: set schema usr; drop table ... cascade; create table ... store by ... (got error: the transaction subsystem (TMF) returned an error on a commit transaction); alter table ... add constraint ... primary key; insert into ... values ...; select ... from table; drop constraint; showddl. Environment: build release branch and date, platform release, connectivity services version, JDBC type driver, command interface.
1
Try configuring Jaeger's ingester and collector storage to use Strimzi. Need to look into the linked change, which may require a more recent version of Sarama, as provided by the referenced update.
0
PHPUnit (from the affected version onward) can export coverage data in a nearly clover.xml-compatible format. "Nearly", because the tag is defined at the end of the enclosing tag, which is fine actually, but not optimal, as Clover must read the entire XML file instead of just the beginning; and in case a PHP file has no namespace defined, it is written as a tag directly under the parent element instead. The Bamboo Clover plugin does not expect the tag outside its enclosing tag. As a consequence:
{code:xml}
...
{code}
Fix: make the plugin more resilient against deviations from Clover's XML file format.
0
this issue links to the outstanding jbws project issues that need to be included into jbas
1
There is an effort for migration of PIM from Thorntail to Quarkus. We need to take the process-migration zip assembly into the add-ons zip: org.kie:process-migration-service:zip.
1
The HTTP document states the following:
{quote}
You may be wondering how Camel recognizes URI query parameters and endpoint options. For example, you might create an endpoint URI as follows. In this example, myParam is the HTTP parameter, while compression is the Camel endpoint option. The strategy used by Camel in such situations is to resolve available endpoint options and remove them from the URI. It means that, for the discussed example, the HTTP request sent by the Netty HTTP producer to the endpoint will look as follows: http://example.com?myParam=myValue, because the compression endpoint option will be resolved and removed from the target URL.
{quote}
The HTTP component does resolve and strip out the known endpoint options (e.g. compression, bridgeEndpoint, etc.) from the URI. However, there appears to be one exception, which is the nettyHttpBinding endpoint option. For example, take the following producer route:
{code}
...
{code}
It will issue the following HTTP GET request URI:
{code}
GET ...
{code}
Any of the other endpoint options results in those options being removed from the GET request URI. The following is Camel DEBUG output that illustrates the issue (abridged):
DEBUG NettyHttpProducer - starting producer
DEBUG NettyProducer - created NettyProducer pool
DEBUG ProducerCache - adding to producer cache with key endpoint for producer
DEBUG DefaultErrorHandler - redelivery enabled: false on error handler
DEBUG RouteService - starting child service on route timerRoute
DEBUG NettyProducer - creating connector to address
DEBUG NettyHttpProducer - channel id, writing body DefaultFullHttpRequest(decodeResult: success, version ..., content: UnpooledHeapByteBuf(ridx/widx/cap)) ... Connection
DEBUG NettyHttpProducer - HTTP responseCode
0
mvn install is required before mvn test can be run. Example:
rm -rf ~/.m2
mvn -v: Apache Maven version ..., Java home ..., locale: en_US, platform encoding: MacRoman, OS name: Mac OS X, version ..., arch ..., family: mac
mvn test:
Downloading ...: unable to get resource from repository cloudera (error transferring file, server returned HTTP response code for URL ...)
Downloading ...: unable to find resource in repository apache.snapshots
BUILD ERROR: failed to resolve dependencies for one or more projects in the reactor. Reason: try downloading the file manually from the project website, then install it using the command:
    mvn install:install-file -DgroupId=org.apache.logging ... -Dclassifier=tests -Dpackaging=test-jar -Dfile=/path/to/file
Alternatively, if you host your own repository, you can deploy the file there:
    mvn deploy:deploy-file -DgroupId=org.apache.logging ... -Dclassifier=tests -Dpackaging=test-jar -Dfile=/path/to/file -Durl=... -DrepositoryId=...
Path to dependency: ... required artifact is missing for artifact ..., from the specified remote repositories: apache.snapshots, central, cloudera.
0
seems to be a problem with upgrading to has fixed the problem
1
Implement remove callbacks for all bdev openers (I/O stats, device health monitor, blobstore block device) for hot-remove events.
1
The following script compiles, but when run, class loading fails, saying java.lang.VerifyError: class B overrides final method.
{code}
class A {
    def foo() {}
    final def bar() {}
}
class B extends A {
    def foo() {}
    def bar() {}
}
new B()
{code}
If I swap the order of method definitions in class B, as:
{code}
class A {
    def foo() {}
    final def bar() {}
}
class B extends A {
    def bar() {}
    def foo() {}
}
new B()
{code}
then it correctly gives the error message: "You are not allowed to overwrite the final method bar() from class A."
1
Something around the LRU container is not working fine. The attached log, from a concurrency test in the level cache, shows that in thread dumps taken some seconds apart, a thread is stuck:
tid=... nid=... RUNNABLE
java.lang.Thread.State: RUNNABLE
(stack trace truncated)
The concurrency test does not get to finish in over a minute. However, once LRU is switched to NONE or FIFO, the test runs in seconds, so something looks fishy with LRU.
1
The Selenium tests run too slowly on IE. Look at using jQuery instead of XPath, because this has better performance in IE.
0
After updating our Confluence instance from one version to another, some comments and changes were changed to the text "migrated to confluence".
0
Please add support for Oracle's Maven repository. Both Artifactory and Nexus already support the peculiarities of this repository. If Archiva already supports the Oracle Maven repository, please update the documentation with the needed settings.
0
Any private or protected producer method does not unwrap the contextual instance, but gets performed on the OwbNormalScopeProxy. Thus any injected field used by this producer method will be null.
1
After updating to the Qt SDK, committing to Mercurial stopped working. After entering my commit message and selecting files to commit, it gives out a popup error dialog: "File error: can't write file ...: disk full". After that, a zero-sized file is created in the specified folder, as well as a backup message file (with a name like ...) with the proper content of my commit. Repeating the attempt creates more copies, etc. Also, you can only abort the above error dialog by removing all files from the commit; closing Qt Creator doesn't work.
1
Pulp is changing the way that RBAC works. In summary, we're moving from assigning permissions directly to users and groups, and instead grouping permissions into roles and assigning the roles to groups and users. See the attached diagram for an overview of how permission checking will work. Note: cloud and platform RBAC are out of scope for this issue; they are included in the diagram to demonstrate how they can be integrated into the existing Pulp RBAC framework.
To complete this migration we have to solve the following problems. API: django-guardian has to be completely removed from our dependency chain and replaced with Pulp's internal methods for assigning and managing roles; existing permission grants need to be migrated to roles; we need to define the set of system roles that will ship by default with Hub. UI: update the UI for assigning object permissions; update the UI for assigning global permissions; create a UI for creating and managing custom roles.
Notes: per David, we need to create a proof of concept that will be the minimum we need to validate that the changes they (Brian B's team) are making work. Per Brian B (Slack): for this to ship on time, we need the PoC to be done by Nov, or "I think, given there are other tickets in front of it and that you'll have to learn about the RBAC as you go, I'm worried; I'm trying to raise concerns now before we get to the deadline and have a problem." We'll need to work with UX, docs, and QE to ensure that all feature-level work is identified and completed, to ensure it's ready for delivery in a release in May/June; this is a subset of what is in mind for the scope of this work. David can point QE to the Pulp PR that has API info for testing. Risk of redundancy is low: the area where some of this may be redundant, once we adopt cloud or central auth RBAC, is the UI work for setting permissions, but we'll need that anyway for people who don't have central or cloud RBAC. This is a technical debt / Pulp dependency priority, with phases of development: a proof of concept to validate the changes Pulp has made, and then a more fully working feature with UI changes etc. This is separate from cloud RBAC and central auth RBAC, which Hub doesn't currently integrate with and has not yet committed to.
Acceptance criteria: TBD; QE will work with engineering to define this more clearly, maybe based on the outcome of the proof of concept. We'll need UI specs at some point, and a clear list of roles and capabilities.
1
RouteManager.targets went from a Map of targets to a List of targets. This change creates a new RouteTargetType, but made no accessor methods. I will attach a patch to add public accessors, since I subclass RouteManager and depend on accessing the targets list for resolving routes.
0
Using the configuration API to remove a handler from a logger, and the handler itself, in the same transaction does not remove the handler from the logger (or from a different handler). The handler reference is removed and closed before the attempt to remove the handler from the logger or handler. This ends up leaving the handler on the logger (or on the other handler), so messages are still published to the handler. Note too that this could also be an issue with the post-configuration actions, as they require the handler reference as well.
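The ordering problem described above can be illustrated with plain java.util.logging as a hypothetical stand-in for the configuration API: the handler must be detached from its logger before being closed, otherwise the logger keeps publishing to a closed handler. A minimal sketch, not the actual configuration API:

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Handler;
import java.util.logging.Logger;

public class HandlerRemovalSketch {
    // Detach the handler from the logger first, then close it.
    // Closing first (as the bug describes) would leave the closed
    // handler attached, so the logger would still publish to it.
    static int detachThenClose() {
        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false);
        Handler handler = new ConsoleHandler();
        logger.addHandler(handler);

        logger.removeHandler(handler); // 1. detach
        handler.close();               // 2. then close
        return logger.getHandlers().length;
    }
}
```

After the correct ordering, the logger holds no stale handler reference.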
1
Summary: the "Import issues" button on the issue create screen does not work when trying it from within a project. However, when creating an issue while on the dashboard page, the same button works and redirects the user to the External System Import page.
Steps to reproduce: go to a project and use the icon on the left bar to create an issue; click on the "Import issues" button on the top.
Expected results: the user is redirected to the External System Import page.
Actual results: no result; the below console errors are seen (Screen Shot ... PM.png, thumbnail).
Workaround: directly access External System Import from the JIRA system settings, or use the "Import issues" feature from the issue navigator (Screen Shot ...).
1
When searching my directory via LDAP SearchRequests, I receive erroneous results. If an object is created with an object class:
dn: uid=user,ou=people,dc=example,dc=com
changetype: add
objectClass: sambaSamAccount
objectClass: posixAccount
objectClass: shadowAccount
objectClass: top
objectClass: person
objectClass: inetOrgPerson
objectClass: organizationalPerson
gidNumber: ...
homeDirectory: /home/user
sambaSID: ...xxx
uidNumber: ...
sn: user
uid: user
then ldapsearch -h localhost -p ... -b ou=people,dc=example,dc=com "(objectClass=posixAccount)" -x will return the new user. However, if extra object classes are added to a previously existing user:
dn: uid=user,ou=people,dc=example,dc=com
changetype: modify
add: objectClass
objectClass: sambaSamAccount
objectClass: posixAccount
objectClass: shadowAccount
-
add: gidNumber
gidNumber: ...
-
add: homeDirectory
homeDirectory: /home/user
-
add: sambaSID
sambaSID: ...xxx
-
add: uidNumber
uidNumber: ...
then ldapsearch -h localhost -p ... -b ou=people,dc=example,dc=com "(objectClass=posixAccount)" -x will not return the user. However, any successful changes made will be visible if the user itself is queried, i.e. I will be able to see all the changes I made; I just won't be able to use any of them to search for the object. This has been tested using both ldapmodify and Apache Directory Studio. This has caused issues in our transition to using LDAP to authenticate our Samba servers. Please let me know if any more information is needed. Thanks.
1
Remote group ports assume the UUID of the port on the target instance. This is critical for retaining which port the connection is associated with. However, this is problematic for any flow which contains multiple RPGs pointed to the same target instance: associating the underlying component when only an ID is known (i.e. provenance) is impossible, as the UUID is ambiguous. This issue also exists for self-referencing RPGs, but is mitigated with extra logic around these troublesome scenarios. For instance, we can differentiate the remote group port from the root group port of a self-referencing RPG by looking at the component type; however, this isn't possible with multiple RPGs referencing the same target instance.
1
Because Parquet tries very hard to avoid autoboxing, most of the core classes are specialized for each primitive by having a method for each type, e.g.:
{code}
void writeInt(int x)
void writeLong(long x)
void writeDouble(double x)
{code}
and so on. However, the statistics classes take the other approach of having an IntStatistics class, a LongStatistics class, a DoubleStatistics class, and so on. I think it's worth going for consistency, picking a pattern, and sticking to it; it seems like the first pattern I mentioned is currently the more common one. We may want to take this one step further and define an interface that these all conform to, e.g.:
{code}
public interface ParquetTypeVisitor {
    void visitInt(int x);
    void visitLong(long x);
    void visitDouble(double x);
}
{code}
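To make the proposal concrete, here is a hedged sketch of what a statistics collector implementing such a shared per-primitive visitor interface could look like. The names (ParquetTypeVisitor, MinMaxStats) follow the issue's example but are otherwise hypothetical, not Parquet's real API; each visit method takes an unboxed primitive, so no autoboxing occurs:

```java
import java.util.Arrays;

public class TypeVisitorSketch {
    // Hypothetical shared interface, per the issue's proposal.
    interface ParquetTypeVisitor {
        void visitInt(int x);
        void visitLong(long x);
        void visitDouble(double x);
    }

    // One statistics collector implementing the single interface,
    // instead of a separate IntStatistics / LongStatistics class per type.
    static class MinMaxStats implements ParquetTypeVisitor {
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        public void visitInt(int x)       { visitLong(x); } // widen, no boxing
        public void visitLong(long x)     { min = Math.min(min, x); max = Math.max(max, x); }
        public void visitDouble(double x) { /* doubles would be tracked separately */ }
    }

    // Feed primitive values through the visitor and return {min, max}.
    static long[] collect(int[] values) {
        MinMaxStats stats = new MinMaxStats();
        for (int v : values) stats.visitInt(v);
        return new long[] { stats.min, stats.max };
    }
}
```

The design point is that every implementation (statistics, writers, readers) conforms to one interface while still getting monomorphic, boxing-free call sites.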
0
Clear out the now-useless method after removing "id" from the public interface.
0
Last night, basic changes were introduced from outside the project team, which resulted in the application failing in its standalone test and Sakai-embedded forms. These need to be removed ASAP.
1
eg demo actionassociatewith children should show actions but only shows
0
The fix revision was aimed to remove resource leak bugs on the BufferedReader object "reader" (created in the cited line) and the InternetPrintWriter object "writer" in the method handleConnection of the file james/server/trunk/src/java/org/apache/james/nntpserver/NNTPHandler.java (now moved), but it is incomplete. There are some problems: when "reader" isn't created successfully but the temp InputStreamReader object is created successfully, the temp InputStreamReader object will be leaked; when "writer" isn't created successfully but the temp BufferedWriter object is created successfully, the temp BufferedWriter object will be leaked; when the temp BufferedWriter object isn't created successfully but the temp OutputStreamWriter object is created successfully, the temp OutputStreamWriter object will be leaked. The best way to close such resource objects is putting the close operations for all resource objects in the finally block of a try/catch/finally structure, and then putting all other code in a try block. The problem still exists in the HEAD revision: the temp InputStreamReader object and the "outs" object created in the cited lines can be leaked. The buggy code is copied as below:

public void handleConnection( Socket connection ) throws IOException {
    try {
        this.socket = connection;
        synchronized (this) { handlerThread = Thread.currentThread(); }
        remoteIP = socket.getInetAddress().getHostAddress();
        remoteHost = socket.getInetAddress().getHostName();
        in = new BufferedInputStream( socket.getInputStream() );
        // An ASCII encoding can be used because all transmissions other
        // than those in the message body command are guaranteed to be ASCII
        reader = new BufferedReader( new InputStreamReader( in, "ASCII" ) );
        outs = new BufferedOutputStream( socket.getOutputStream() );
        writer = new InternetPrintWriter( outs, true );
    } catch (Exception e) {
        try { ... } catch (Exception e2) { ... }
    } finally {
        resetHandler();
    }
}
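The leak pattern described (an outer wrapper's constructor failing after the inner stream was already created) disappears with try-with-resources, which closes every successfully created resource in reverse order. A minimal sketch under that assumption, not the James code itself:

```java
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;

public class ResourceCloseSketch {
    // Each resource is declared in its own slot of the try header,
    // so if a later constructor throws, every earlier resource is
    // still closed automatically -- no temp object can leak.
    static String readFirstLine(InputStream raw) throws IOException {
        try (InputStream in = new BufferedInputStream(raw);
             Reader isr = new InputStreamReader(in, StandardCharsets.US_ASCII);
             BufferedReader reader = new BufferedReader(isr)) {
            return reader.readLine();
        }
    }
}
```

This expresses the same intent as the suggested try/catch/finally fix, with the close calls generated by the compiler instead of hand-written.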
1
Add a property of type Principal, and another property (maybe of type Group), to Script. The Principal should be used by the MockHttpServletRequest to represent an authenticated user; the Group property should be used by the MockHttpServletRequest to support the isUserInRole implementation. These properties ideally should be settable by overriding the setUp method of Script.
0
The interface ConsoleLogger should be used programmatically instead of System.out.
0
We are seeing the following schema evolution error from time to time, while the ORC library doesn't seem to support it:
{code:java}
ERROR ... org.apache.orc.impl.SchemaEvolution$IllegalEvolutionException:
ORC does not support type conversion from file type uniontype<...> to reader type uniontype<struct<...>>
{code}
We would like to add the support.
0
Class edu.indiana.lib.twinpeaks.search.singlesearch.cql.CqlParser, rule NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE. See:

// Parse the root
root = parser.parse(cqlSearchQuery);
} catch (java.io.IOException ioe) {
    log.error("CQL parse exception", ioe);
}
// and also
byteInputStream = new java.io.ByteArrayInputStream(cqlXml.getBytes(...));
} catch (java.io.UnsupportedEncodingException uee) {
    log.error("Encoding exception", uee);
}
// Clear the ..., run the ...
saxParser.parse(byteInputStream, this);
} catch (... e) {
    log.error("CQL parse exception", e);
}
String cqlXml = root.toXCQL();

Presumably root.toXCQL() can be reached with a null root when the parse fails, hence the null-on-some-path warning.
0
I am trying to use NetBeans on Manjaro, but every time I start it, it crashes immediately. The kernel of my OS is ...; the Java version is OpenJDK Runtime Environment (build ...). The crash log is attached.
1
{code:java}
if (runtimeJavaVersion.indexOf(...)) ...
{code}
When runtimeJavaVersion is just a bare value, then the above code throws an index-out-of-bounds exception. Apply the same fix from the linked issue.
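A hedged reconstruction of the failure mode (method and variable names here are hypothetical, since the original snippet is truncated): String.indexOf returns -1 when the separator is absent, and using that result as a substring bound throws StringIndexOutOfBoundsException, so the result must be guarded:

```java
public class VersionParseSketch {
    // Returns the major component of a version string such as "11.0.2".
    // For a bare major version like "11", indexOf('.') is -1, and
    // substring(0, -1) would throw StringIndexOutOfBoundsException,
    // so fall back to the whole string in that case.
    static String majorVersion(String runtimeJavaVersion) {
        int dot = runtimeJavaVersion.indexOf('.');
        return dot >= 0 ? runtimeJavaVersion.substring(0, dot) : runtimeJavaVersion;
    }
}
```

The guard is the essence of the referenced fix: never feed an unchecked indexOf result into substring.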
0
Some class loaders return null when you call myClass.getPackage(). In this case, SpringVersion.getVersion() will throw an NPE. Why would getVersion() not just return a hardcoded string?
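A null-safe sketch of the suggested behavior (this is illustrative, not Spring's actual implementation): tolerate class loaders that return null from getPackage() instead of letting an NPE escape.

```java
public class VersionSketch {
    // Class.getPackage() may legitimately return null under some class
    // loaders, so check before dereferencing; returning null (or a
    // hardcoded fallback) is better than throwing an NPE.
    static String getVersion() {
        Package pkg = VersionSketch.class.getPackage();
        return pkg != null ? pkg.getImplementationVersion() : null;
    }
}
```

Note that getImplementationVersion() itself may also be null when the class is not loaded from a jar with a manifest, so callers must handle a null result either way.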
0
Package manager failed to install packages. Error: execution of "/usr/bin/yum -d ... -e ... -y install --disablerepo=... phd ... atlas-metadata...hive-plugin" returned an error: "Nothing to do". Need to remove the atlas-metadata package dependency from the Hive service.
1
Ambari upgrades were previously only within the stack. When applying an upgrade pack across stacks, for example, the finalization step must handle moving the cluster's current stack version, as well as specific component versions. In addition, when setting the new stack version, the alerts framework must be notified and initialize reloading alert files from disk for the new stack.
1
Add a checkbox to the Move view panel, labeled "Double-click to:". Below that checkbox are two radio buttons: Teleport and Autopilot. You can only select one option or the other. Note: this toggle is already implemented in the developer file menu item, in case you need it for reference. When Teleport is selected, users can double-click in-world to automatically teleport to that position. When Autopilot is selected, users can double-click in-world to automatically walk towards that position. Note: if there is an object in their way, the autopilot will fail; that is expected behavior for now.
0
Error when clicking on the "Bulk edit" tab.
1
Minor format issue: a space is needed between "Created" and "Nominal Time" in the output of "oozie job -info". It might be related to the addition of Locale in the linked change. For example: "Created ...Nominal ... GMT" (run together).
0
Currently it is not possible to set SMTP-specific settings directly in the Email action in a workflow. In some use cases it can be helpful to set them in the workflow instead of having to configure them in oozie-site. Setting some configurations directly in the workflow would have the advantage of using, for example, a different "from" address for different workflows, instead of one central address. I would like to be able to set the following oozie-site configurations directly in the workflow: oozie.email.smtp.host, oozie.email.smtp.port, oozie.email.from.address. If these configurations are set in oozie-site, the configuration in the workflow should take precedence.
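The requested precedence (a workflow-level setting wins over the oozie-site default) amounts to a simple two-level lookup. The class and method names below are illustrative, not Oozie's API:

```java
import java.util.Map;

public class ConfigPrecedenceSketch {
    // Resolve a key: the workflow-level configuration overrides the
    // site-wide default; fall back to the site value (or null) otherwise.
    static String resolve(String key,
                          Map<String, String> workflowConf,
                          Map<String, String> siteConf) {
        String v = workflowConf.get(key);
        return v != null ? v : siteConf.get(key);
    }
}
```

With this rule, a workflow that sets oozie.email.smtp.host gets its own SMTP host, while workflows that omit it keep the central oozie-site value.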
0
STR: deployed a cluster with the given Ambari version and HDP version; upgraded the Ambari packages and then ran the schema upgrade ("ambari-server upgrade") to the target version/hash. The upgrade failed with the below:
INFO: loaded URI in ...
ERROR: upgrade failed. com.google.inject.ProvisionException: Guice provision error, injecting method: java.lang.RuntimeException: Trying to create a ServiceComponent not recognized in stack info, serviceName=DRUID componentName=DRUID_SUPERSET ... while locating org.apache.ambari.server.state.cluster.ClustersImpl ...
Caused by: java.lang.RuntimeException: Trying to create a ServiceComponent not recognized in stack info, serviceName=DRUID componentName=DRUID_SUPERSET
(stack trace truncated)
Looks like the issue is due to the newly introduced service DRUID_SUPERSET, which was part of the DRUID service itself earlier.
1
We recently saw an issue where the docs generated by a release were wrong, because we released on a Mac, and that resulted in different effective defaults. In this case, it was code like this that caused the issue:
{code:java}
#ifndef __APPLE__
static constexpr bool kDefaultSystemAuthToLocal = true;
#else
// macOS's Heimdal library has a no-op implementation of the relevant call,
// so instead we just use the simple implementation.
static constexpr bool kDefaultSystemAuthToLocal = false;
#endif
{code}
Additionally, the release process is fairly manual. We should leverage the Docker work to standardize a release environment and automate the process, to ensure a consistent, reproducible release.
0
Have another take on it, since I failed at the first iteration.
{noformat}
It is doable; like the XML DOM wrapper does, you just need to implement both TemplateHashModel (preferably) and TemplateSequenceModel. I guess the problem was that you also wanted to expose the methods. That's not possible, since in FTL (and unlike in Java, but like in many other languages) there's no separate namespace for method names. So either you move those to somewhere like under tools.dataSource.operations, or you support the ?api built-in, so one can do dataSources?api.find(...).
{noformat}
0
#include <iostream>
using namespace std;
#include <vector>

class MyVector : public vector<int> {
public:
    void pushBack() { cout << "push back" << endl; }
};

int main() {
    MyVector v;
    using namespace vector
}

When "using namespace vector" is typed, Qt Creator exits.
1
As reported by Daniele Tartarini to dev@taverna: the Taverna command line does not pass on -inputfile parameters as File or URL instances; rather, they are read into memory and then registered afresh. In conjunction with the Tool activity using files, this means the file is copied twice. setValue in InputsHandler supports File and URL, forwarded on to DataBundles.setReference, so this should be used instead.
0
On a page I have a TextField component with a bound custom id, e.g. MyPage.page / MyPage.java:

public String getFieldId() { return "somePrefix" + beanPropertyName; }

The rendered HTML contains something like this: ... Submitting this form shows an empty Dojo alert dialog. The validation mechanism is detecting that one component is required, but cannot match the validation error with the right message, because of the input name/id ("tapestry...form...registerprofile..."): everything that is required (all validators and messages) is linked with the right input id, not the name.
1
In a busy cluster, it's possible that many DDL/DML queries stay in the CREATED state for several minutes, especially when using sync_ddl=true; tens of minutes are also possible. They may be waiting for the ExecDdl RPC to catalogd to finish. It'd be helpful for debugging DDL/DML hangs if we can show the in-flight DDLs in catalogd. I think the following fields are important: thread id; coordinator; db name; table name; DDL type (e.g. ADD_PARTITION, DROP_TABLE, CREATE_TABLE, etc., more types here); last event (e.g. waiting for table lock, got table lock, loading file metadata, waiting for sync ddl version, etc.); start time; time elapsed; optional params; link to show the TDdlExecRequest in JSON format. It'd be better to also include running REFRESH / INVALIDATE METADATA commands.
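The proposed debug view rows could be modeled as a simple immutable record; the field names follow the list above, while the class itself is hypothetical (not Impala code):

```java
public class InflightDdlSketch {
    // One row of the proposed catalogd in-flight DDL/DML debug view.
    static final class InflightDdl {
        final long threadId;
        final String coordinator, dbName, tableName, ddlType, lastEvent;
        final long startTimeMillis;

        InflightDdl(long threadId, String coordinator, String dbName,
                    String tableName, String ddlType, String lastEvent,
                    long startTimeMillis) {
            this.threadId = threadId; this.coordinator = coordinator;
            this.dbName = dbName; this.tableName = tableName;
            this.ddlType = ddlType; this.lastEvent = lastEvent;
            this.startTimeMillis = startTimeMillis;
        }

        // "Time elapsed" is derived from the recorded start time.
        long elapsedMillis(long nowMillis) { return nowMillis - startTimeMillis; }
    }
}
```

A registry of such records, updated as the "last event" changes, would be enough to render the suggested catalogd page.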
0
This bug is against the Qt download on the Qt site. Adding -openssl-linked to the end of a configure.exe line causes configure to hang after the user accepts the license. I have reproduced this on different XP virtual machines. Removing the -openssl-linked argument will allow configure to complete; however, the resulting Qt lib binaries will not be able to open SSL sockets. This behaviour does not happen with the earlier Qt version. This regression makes it effectively impossible to build statically linked, SSL-enabled binaries on Windows with Qt.
1
Context: we rely on MCO run (as an ephemeral pod, as if it was a library) in bootstrap mode to render Ignition configs out of an input. At the moment, HyperShift nodes are labeled as master, because multiple operators' manifests are scheduled to a master pool. DoD: MCO supports rendering for the worker pool in bootstrap mode; HyperShift nodes are not labeled as master.
1
The agent is supposed to report the public_hostname based on querying (via curl) the metadata endpoint. However, it looks like this is not working: it shows the internal FQDN for instances.
1
Currently the cleaning thread in the compactor does not run on a table or partition while any locks are held on that partition. This leaves it open to starvation in the case of a busy table or partition. It only needs to wait until all locks present on the table/partition at the time of the compaction have expired; any jobs initiated after that (and thus any locks obtained) will be for the new versions of the files.
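The proposed rule can be sketched as: snapshot the expiry times of the locks held at compaction time, and let the cleaner proceed once all of those (and only those) have expired, ignoring locks taken afterwards. This is an illustrative model, not Hive's actual lock manager:

```java
public class CleanerGateSketch {
    // expiriesAtCompactionTime: expiry timestamps (millis) of the locks
    // that existed on the table/partition when the compaction ran.
    // Locks obtained afterwards refer to the new file versions and
    // therefore do not block cleaning, so they are simply not in the array.
    static boolean mayClean(long[] expiriesAtCompactionTime, long nowMillis) {
        for (long expiry : expiriesAtCompactionTime) {
            if (expiry > nowMillis) return false; // a pre-existing lock is still live
        }
        return true;
    }
}
```

This bounds the cleaner's wait by the lifetime of the locks that existed at compaction time, removing the starvation risk on busy tables.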
1
Even though Bamboo's configs and permissions are all based on plans, and not on projects, having applinks based on plans presents a very poor user experience. After meeting with Dave O'Flynn yesterday, we decided to switch back to projects, and to make as few changes as possible in Bamboo to get it working.
1
Steps to reproduce: navigate to the Reports tab; click "Switch report"; observe that the font is light blue; now hover over some item on the list; the font changes to black.
Actual result: the font in the "Switch report" select popup is black when the cursor is hovering over it.
Expected result: the font remains the same as in the beginning (light blue).
0
User story (updated). Requirements: ability to view a list of existing decision models (if no DMN files exist, show an empty state); ability to create a new decision model, with or without a guided workflow; ability to import decision models; ability to identify the version, roll back to a previous version, and delete a version; ability to manage a decision model (modify, delete, export, share, documentation); ability to manage data types (create, modify, delete, import, export). Resources: ... Acceptance criteria: review and verify sketches with the team; leverage existing conventions from OpenShift and PatternFly patterns; identify any similar existing patterns from other integration apps; talk with devs to identify any technical limitations of the system.
0
I posted this on keycloak-user; vramik@redhat.com asked me to put it here. We have a Keycloak instance running as a Docker container in our AWS ECS Docker environment. For a single instance this setup works great, but we failed to enhance it with a second instance for HA.
Problem: we cannot authenticate in one of the instances behind the load balancer as soon as we have more than one Keycloak instance.
Cluster setup: Keycloak Docker image; containers are behind AWS ALB load balancers with round-robin but without sticky sessions (the latter is important for our setup); JGroups with JDBC_PING configured, and instances properly add/remove themselves from the configured MySQL table; containers run on separate hosts; TCP communication between containers is possible (the port is exposed also on the hosts); cache owners for all distributed caches are set accordingly; we also tested with other values, but without any different results.
Startup logs from Infinispan look fine: on startup we see the log message that cluster nodes can discover each other ("Received new cluster view for channel ejb"); after that, Infinispan rebalancing also happens ("Finished rebalance with members ...").
Analysis so far: the problem is obviously that authentication starts on one node; due to round robin, authentication will be continued on another node, and this fails because that node does not know about the authentication session started on the first node. According to the documentation, there should be a lookup of the node in the cluster that started the authentication session. It seems like this is not happening, but we cannot see any log related to this. Also, regular sessions are not distributed in the cache: we tested this by running only one node to do the authentication and then spinning up a second node and doing a failover to the new node; afterwards the regular session was gone and we were logged out.
0
The newly implemented metadata for JDO will allow specifying the interfaces implemented by a class. Need to add the tag to the JDO metadata for pcfieldtypessimpleclass.
0
The user needs to be able to: choose orientation (portrait/landscape); resize the artboard (drag, property value, constrain proportions, key + mouse); rename the file name; add, move, and delete artboards; rearrange artboards.
0
Need to move the class InitializingObject(object) from springpython.factory over to springpython.context. This allows definition of a method which is invoked by the container after an object has had all properties set:

    def after_properties_set(self):
        pass
0
Currently Nimbus's mkAssignments task runs at the configured nimbus.monitor.freq.secs scheduled intervals. Sometimes this causes pending topology-related tasks to wait until the next scheduled interval. This can be improved so that it does not wait for the next scheduled interval but runs as and when tasks are submitted. This behavior can be based on a configuration, to maintain backward compatibility; it gives the flexibility to read from ZK not at the configured intervals only, but whenever there are topology-related tasks.
1
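The Nimbus report above suggests triggering assignments on submission while keeping the interval as a fallback. One way to express that is a loop that blocks on a task queue with a timeout equal to the monitor frequency: it wakes early when a task arrives and falls through at the configured interval otherwise. This is a minimal sketch with hypothetical names, not Storm's actual code:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch: run assignment both on demand and at a fallback interval.
// All names here are illustrative, not Storm's actual API.
class AssignmentLoop {
    private final LinkedBlockingQueue<Runnable> pending = new LinkedBlockingQueue<>();
    private final long monitorFreqMillis;
    private int runs = 0; // visible for demonstration

    AssignmentLoop(long monitorFreqMillis) {
        this.monitorFreqMillis = monitorFreqMillis;
    }

    // Called when a topology-related task is submitted.
    void submit(Runnable task) {
        pending.offer(task);
    }

    // One iteration: wake up early if a task arrives, otherwise
    // fall through at the configured interval (the backward-compatible path).
    void runOnce() throws InterruptedException {
        Runnable task = pending.poll(monitorFreqMillis, TimeUnit.MILLISECONDS);
        if (task != null) {
            task.run();      // triggered by submission, no waiting
        }
        mkAssignments();     // runs in both cases
    }

    private void mkAssignments() { runs++; }

    int runCount() { return runs; }
}
```

With a short timeout this behaves like the proposal: a submitted task is processed immediately, while an idle loop still performs the periodic run.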
Because the Maven group id changed, I got this error when I tried to compile master:

{code:none}
Failed to execute goal on project infinispan-server-jgroups: Could not resolve dependencies for project: Could not find artifact
{code}
0
I just noticed that when you create a server with filesystem operations and then change it to use the management API, it will not change the poller to management; it will stay with the web poller. Whereas when you create a new server with the management profile, it will use the management pollers. Feel free to set the fix version yourself.
0
temporarily disable timeouts on and until a proper fix can be implemented
1
Summary: if a customer has modified the default welcome message in Confluence, the main Confluence URL does not render. If accessing the wiki/discover/all-updates or wiki/discover/popular pages, the sidebar and welcome message do not load.

Steps to reproduce:
1. Go to Site Settings > Global Templates and Blueprints.
2. Edit the default welcome message template.
3. Make a single change, or remove the welcome message altogether.

Expected results: when navigating to the Confluence home page, it loads without issues.

Actual results: in all browsers (Chrome, Firefox, Safari), if visiting the home page only a loading spinner appears; if visiting all-updates or popular, the activity feed loads but nothing else. No errors appear in the dev console.

Notes: it does not seem to impact space sidebars, only the dashboard. Reproducible with an empty welcome message.

Workaround: revert the welcome message to defaults; users may need to clear their browsers' cache/cookies after it has been reverted. Or, if the welcome message needs to be utilized, resizing the browser window causes the dashboard to load correctly.
0
Summary: this occurs on the dashboard when JIRA has migrated from the old pie chart gadget to the new pie chart dashboard item. If the pie chart gadget was previously displaying a saved filter and the gadget is upgraded, then when the dashboard item is edited the filter's name will be lost.

Steps to reproduce:
1. On the Manage add-ons screen, within the Atlassian JIRA Plugins - Gadgets Plugin, disable the piechart-dashboard-item.
2. Create a new saved filter.
3. Create a new pie chart gadget on a dashboard and make that pie chart gadget display the saved filter.
4. Re-enable the piechart-dashboard-item plugin to upgrade to the new pie chart dashboard item.
5. Refresh the dashboard to display the updated gadget.
6. Edit the pie chart dashboard item and notice that the filter name has changed under Project or Saved Filter.

Expected results: the filter's name should be displayed.

Actual results: a different value is displayed under Project or Saved Filter in the pie chart dashboard item's edit screen.

Workaround: the filter can be reselected from the edit screen, which will restore it. Selecting Cancel from the edit screen will redisplay the pie chart, where the filter name can be noted down from the gadget title; then select Edit on the dashboard item again and enter the filter's name. Alternatively, the piechart-dashboard-item can be disabled via the Manage add-ons screen within the Atlassian JIRA Plugins - Gadgets Plugin, which will downgrade the pie chart dashboard item to the old version.
0
This would be a large change, but maybe now is still a good time to do it; otherwise we will never fix this. Actually, the Table API is in the wrong package: at the moment it is in org.apache.flink.api.table, and the actual Scala/Java APIs are in org.apache.flink.api.java/scala.table. All other APIs such as Python, Gelly, and Flink ML do not use the org.apache.flink.api namespace. I suggest the following packages:

{code}
org.apache.flink.table
org.apache.flink.table.api.java
org.apache.flink.table.api.scala
{code}

What do you think?
1
In recent UX testing, many participants had difficulty locating the option to share and/or revoke sharing. Several participants could not find the sharing option at all. Of those who did successfully share, when asked how to stop sharing a rubric, several looked in the "Shared" section first for the revoke option, not in the "Local" area. The sharing/revoking workflow needs to be more clearly delineated for users.
0
cc. The beam-sdks-java-bom signMavenJavaPublication task fails with an obscure error: "Duplicate key pom-default.xml.asc:xml.asc:null (attempted merging values Signature pom-default.xml.asc:xml.asc:null and Signature pom-default.xml.asc:xml.asc:null)". Downgrading Gradle by reverting works.
1
this will fix the saslprep bug
0
The following patches integrate the EAR plugin into a lifecycle so that package, install, deploy, etc. goals can be used with an ear packaging. The first patch is to be applied on maven-core to register the ear lifecycle. The second patch is to be applied on maven-plugins/maven-ear-plugin to add support for a generateApplicationXml flag, which states whether application.xml should be generated or not (default is true). This relates to the issue where the original discussion took place.
1
{code}
create table concur_orc_tab (name, age int, gpa) clustered by (age) into buckets stored as orc tblproperties ('transactional'='true');
select name, age from concur_orc_tab order by name;
{code}

results in:

{code}
Diagnostic messages for this task:
Error: java.io.IOException: java.lang.NullPointerException
	at java.security.AccessController.doPrivileged(Native Method)
Caused by: java.lang.NullPointerException
	... more
{code}

The issue is that the object inspector passed to VectorizedOrcAcidRowReader has all of the columns in the file rather than only the projected columns.
1
The protobuf implements the official image manifest specification. The field names in this spec are all written in snake_case, as are the field names of the JSON representing the image manifest when reading it from disk (for example, after performing a `docker save`). As such, the protobuf for ImageManifest also provides these fields in snake_case. Unfortunately, the `docker inspect` command also provides a method of retrieving the JSON for an image manifest, with one major caveat: it represents all of its top-level keys in camelCase. To allow both representations to be parsed in the same way, we should intercept the incoming JSON from either source (disk or `docker inspect`) and convert it to a canonical snake_case representation.
0
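The canonicalization step described above (converting `docker inspect`'s camelCase top-level keys to snake_case before parsing) can be sketched as follows. The class and method names are illustrative assumptions, not Docker's or the project's actual API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: canonicalize top-level JSON keys from camelCase to snake_case,
// so manifests from `docker save` and `docker inspect` parse identically.
// The helper names are hypothetical.
class KeyCanonicalizer {
    // "RepoTags" -> "repo_tags"; already-lowercase keys pass through.
    static String toSnakeCase(String key) {
        return key.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase();
    }

    // Rewrite every top-level key of a decoded JSON object.
    static Map<String, Object> canonicalize(Map<String, Object> json) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : json.entrySet()) {
            out.put(toSnakeCase(e.getKey()), e.getValue());
        }
        return out;
    }
}
```

Running the decoded JSON through this step before deserialization means both representations hit the same snake_case field names.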
As an iOS developer, I want access to login/logout/check-status functions so that I can use this to manage end-user authentication.
0
I think we must support AWS inside a camel-aws component. With this first version of support, Camel will be able to create and run AWS instances, start AWS instances, stop AWS instances, and terminate AWS instances. There are many improvements we can do; this is a very basic version.
0
Description of problem: when creating a new spreadsheet (XLS) decision table, you can upload any file you want. It might be corrupted, it might be a PNG; it doesn't matter, it is uploaded and the GUI doesn't report any problems (there is an error in the server log, though). If you try to validate the XLS decision table, you get no notification, no message, no "validation passed or failed", nothing. Again, there is some error stack trace in the server log.

Version-Release number of selected component (if applicable): BPMS
1
Various connection instabilities appear when using Internet Explorer to connect to Business Central running on OpenShift. The various problems can be seen in the pictures in the attachments: a dialog with a "You've been disconnected" message appears at regular intervals; icons are missing in the menu; it is not possible to save a Stunner asset; Stunner freezes while displaying the saving dialog. There were no similar instabilities in Firefox or Chrome using the same network and the same setup.
1
We've got classification, clustering, GA, and matrix, but not taste or CF.
0
I have noticed that in one of our JIRA instances with many issues, a lot of them have a wrong count of watchers, which leads to wrong search results (we prioritize some items by amount of watchers because we have voting disabled). I have verified that the watches count differs from the watch-issue user associations in the database and in an XML export; even after re-importing the XML data, the count gets out of sync with the actual amount of watchers. After inspecting the JIRA source, the code seems quite stable except for some minor flaws. I then found that our extensive use of the clone issue operation might have caused the issue, and I was able to reproduce this on a JIRA instance (cloning-watch-count-increase.png, cloning-watch-count-increase-watchers.png). Besides fixing the bug in the clone issue operation, it would be good to provide a fix for the existing counts, probably a SQL update statement that counts the actual user watch-issue associations and updates the count. Thanks.

To reproduce:
1. Create a new issue and make sure you are added as a watcher to the issue (you are the only watcher for this ticket).
2. Clone this issue to another issue and make sure that you are added as a watcher for the new cloned issue (you are the only watcher for this ticket).
3. Go to the issue navigator and add the column "Watchers" to the issue navigator. You will notice that for the cloned issue there will be two watchers, and when you try to open the issue you will only see one user.

Workaround: you can conduct either of the following.
- Update the watches column of the jiraissue table and run a reindex.
- Upgrade JIRA to a later version, which automatically corrects the unsynchronized watchers number. If even after the upgrade some issues are still showing the wrong number, run the following SQL query to see if this applies to your case; if it returns any results, you can proceed:

{code:sql}
select from jiraissue where id not in (select sinknodeid from userassociation where associationtype = 'WatchIssue') and watches
{code}

Then: create a backup of your JIRA, just in case; stop your JIRA; run the following SQL query:

{code:sql}
update jiraissue set watches where id not in (select sinknodeid from userassociation where associationtype = 'WatchIssue') and watches
{code}

Start your JIRA and perform a full reindex.

Notes: this upgrade task can take quite some time, especially depending upon the type of database used and also the issue watch count; in some cases it takes minutes.
1
We have IgniteDataStreamer, which is used to load data into Ignite under high load. It was previously named IgniteDataLoader (see the referenced ticket for more information). Given that Akka is a Scala framework, this streamer should be available in Scala. We should create IgniteAkkaStreamer, which will consume messages from Akka actors and stream them into Ignite caches. More details to follow, but at the least we should be able to: convert data from Akka to Ignite using an optional pluggable converter (if not provided, then we should have some default mechanism); specify the cache name for the Ignite cache to load data into; specify other flags available on the IgniteDataStreamer class.
0
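The "optional pluggable converter with a default mechanism" proposed above might look roughly like the following. Every name here is hypothetical; this is a sketch of the idea, not the actual streamer:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;
import java.util.function.Function;

// Sketch: messages from actors are turned into cache entries by an
// optional converter; when none is supplied, a default mechanism is
// used. All names are hypothetical, not Ignite's actual API.
class AkkaToIgniteAdapter<T, K, V> {
    private final Function<T, Map.Entry<K, V>> converter;

    @SuppressWarnings("unchecked")
    AkkaToIgniteAdapter(Function<T, Map.Entry<K, V>> converter) {
        // Default mechanism (assumption): key the entry by the
        // message's hash code and store the message as the value.
        this.converter = converter != null
            ? converter
            : msg -> (Map.Entry<K, V>) new SimpleEntry<>(msg.hashCode(), msg);
    }

    // Convert one incoming actor message into a cache entry.
    Map.Entry<K, V> convert(T message) {
        return converter.apply(message);
    }
}
```

The real implementation would feed each converted entry into an IgniteDataStreamer for the configured cache; the point of the sketch is only the converter-or-default shape.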
Building Qt statically from git fails due to a symbol being multiply defined in both static libraries, including libEGL.lib. The problem was obviously introduced when the bundled ANGLE code was upgraded: the patch was ported incorrectly. It used to contain #ifndef lines which prevented the two DllMain functions from being compiled. These lines were removed in the update, which results in the DllMain functions existing in both static libraries. Concerning the affected versions, I assume the version corresponds to the current stable branch (it has not been tagged yet in git).
0
the license for visual studio has expired
1
Adobe added support for AES encryption. Further information is available in the referenced specification, especially the relevant section, or in the ISO standard.
0
Due to several reasons, the version of plexus-utils couldn't be upgraded. Some serious maintenance has been done, so we can now make a giant leap to one of the latest versions of plexus-utils.
0
I have some Parquet files in storage. I generated the user delegation SAS token with the following permissions in sas signedPermissions: racwdxltmeop. But when I read from the path, I get an HTTP error. I believe this happens when the ABFSS driver makes use of the getAclStatus API call to determine whether the storage service has hierarchical namespace enabled or not. I found a workaround, i.e. to set fs.azure.account.hns.enabled to true, which would skip the Get ACL API call; and since folder-level SAS only works for HNS-enabled accounts, may I know if this behavior is expected, whether the workaround I am using is stable for production use, and if there are any hidden implications? Thank you in advance.
0
Hi guys, I compiled Qt myself and used a -prefix option to set the install directory. It worked so far using nmake and nmake install. Since we are sharing that compiled version in our company with other developers, and not every developer got the same directory structure, we changed the path in qt.conf. For some systems everything works, except for project files that contain QT modules qaxcontainer, axserver, or uitools. In this case Qt generates a project file that sets the AdditionalDependencies to the old paths, and after manually removing that first one, the project is correctly configured. I tested this also with the modules concurrent, designer, help, multimedia, multimediawidgets, network, opengl, printsupport, qml, quick, sql, testlib, webkit, webkitwidgets, widgets, winextras, xml, and xmlpatterns, and everything was correct; for these modules only the new path was used. The result of qmake -query only shows the new paths. So I really don't get why project files with axcontainer, axserver, and uitools are still using the old path. You also have the same problem when using the online installer and changing the install path during installation.
0
Update the fetch-patchset stage in the fabric-chaincode-node master branch to fetch the patch set and run the tests on the verify job, and to clone the repository and run the tests on the merge job. Also optimize the publish-npm-module script to set the unstable version if there is no npm module available in the npm registry; otherwise fetch the unstable version and increment it.
0
started with
0
Beam Jenkins builds are having problems related to the JDK version: artifacts are being built with the wrong JDK version. I'm seeing the following new message in Beam Jenkins logs, starting in October: "No JDK named 'JDK latest' found". Were there changes? I don't have Jenkins admin access and cannot check whether this is a valid JDK name or where the version is set. (Example error; same job before the error.)
1
Caused by: java.sql.BatchUpdateException: Batch entry "insert into links (destpagetitle, destspacekey, contentid, creator, creationdate, lastmodifier, lastmoddate, linkid) values (null, co, vidya, vidya, ...)" was aborted. Call getNextException to see the cause. ERROR: null value in column "destpagetitle" violates not-null constraint. Import is successful if you edit entities.xml, replacing all of these. Looks like we're interpreting the page title as being null instead of a blank string.
1
When a character is read and it could be the first character of the delimiter, a lookahead is performed on the stream to determine if this will be the delimiter. This lookahead initiates a new buffer to read into for each call; this can cause overhead and much more churn in the heap.
0
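One common way to avoid allocating a lookahead buffer on every call, as the report above suggests, is to allocate a single reusable pushback buffer up front. A sketch using java.io.PushbackReader follows; the class and method names are illustrative, not the project's actual reader:

```java
import java.io.IOException;
import java.io.PushbackReader;
import java.io.StringReader;

// Sketch: delimiter lookahead through one PushbackReader whose pushback
// buffer is sized once, instead of creating a new buffer per lookahead.
// Names are hypothetical.
class DelimiterScanner {
    private final PushbackReader in;
    private final char[] delim;
    private final char[] peek; // reused lookahead buffer, allocated once

    DelimiterScanner(StringReader reader, String delimiter) {
        this.delim = delimiter.toCharArray();
        this.peek = new char[delim.length];
        this.in = new PushbackReader(reader, delim.length);
    }

    // Returns true if the next characters are exactly the delimiter,
    // consuming them; otherwise pushes everything back and returns false.
    boolean atDelimiter() throws IOException {
        int n = in.read(peek, 0, peek.length);
        if (n <= 0) return false;
        boolean match = n == delim.length;
        for (int i = 0; match && i < n; i++) match = peek[i] == delim[i];
        if (!match) in.unread(peek, 0, n);
        return match;
    }

    int read() throws IOException { return in.read(); }
}
```

Since `peek` lives for the scanner's lifetime, repeated lookaheads produce no per-call garbage.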
When an error occurs on cluster activation (in methods such as IgniteChangeGlobalStateSupport.onActivate), the state-change process hangs (stays in the transition state) and switches the cluster to an inoperable state, even if the problematic node is stopped by the failure handler. Reproducer:

{code:java}
public class ErrorOnActivationTest extends GridCommonAbstractTest {
    @Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
        return super.getConfiguration(igniteInstanceName)
            .setFailureHandler(new StopNodeFailureHandler())
            .setClusterStateOnStart(ClusterState.INACTIVE);
    }

    @Test
    public void testErrorOnActivation() throws Exception {
        Ignite ignite = ...; // elided in the original report

        ignite.context().internalSubscriptionProcessor().registerDatabaseListener(new DatabaseLifecycleListener() {
            @Override public void afterInitialise(IgniteCacheDatabaseSharedManager mgr) throws IgniteCheckedException {
                throw new IgniteCheckedException("Test");
            }
        });

        ignite.cluster().state(ClusterState.ACTIVE);

        startClientGrid(); // hangs here
    }
}
{code}
0
{noformat:nopanel=true}
libsvn_ra_serf allows updating a working copy of a directory to a revision not containing this directory; it breaks the working copy.
libsvn_ra_neon doesn't allow it, but prints a useless error message.
libsvn_ra_local and libsvn_ra_svn behave correctly.
{noformat}
1
Now TableEnvironment.explain will use the stream environment to generate a StreamGraph, which uses execEnv.addOperator to add transformations. If we do something else with the current table after the explain, such as making a sink, the transformations will be repeated, leading to repeated job nodes in the job.
1
Update of fr-FR translation.
0
here is test output from info received close region in zk yes version of zk closing destination debug processing close of debug sent close to for region debug waiting for region to be opened already retried debug closing disabling compactions debug updates disabled for region info closed info closed info adding moved region record to as of debug attempting to transition node from mzkregionclosing to debug waiting for region to be opened already retried debug waiting for region to be opened already retried debug waiting for region to be opened already retried debug waiting for region to be opened already retried debug waiting for region to be opened already retried debug waiting for region to be opened already retried debug waiting for region to be opened already retried debug waiting for region to be opened already retried info started disable of debug received zookeeper event typenodedatachanged statesyncconnected debug successfully transitioned node from mzkregionclosing to debug set region closed state in zk successfully for region sn name debug closed region debug handling transitionrszkregionclosed current state from region state map statependingclose warn closed region name startkey endkey encoded still on ignored reset it to info region name startkey endkey encoded transitioned from statependingclose to stateclosed debug handling closed event for info region name startkey endkey encoded transitioned from stateclosed servernull to stateclosed debug forcing offline stateclosed info region name startkey endkey encoded transitioned from stateclosed servernull to stateoffline debug found an existing plan for destination server is accepted as a dest server debug using preexisting plan for region info region name startkey endkey encoded transitioned from stateoffline servernull to stateoffline debug creating or updating unassigned node for with offline debug acquired a lock for debug creating scanner over meta starting at key debug received zookeeper event 
typenodedatachanged statesyncconnected debug handling transitionmzkregionoffline current state from region state map stateoffline info assigning region to info region name startkey endkey encoded transitioned from stateoffline servernull to statependingopen debug new admin connection to info received request to open region on debug finished scanning region name startkey endkey encoded info attempting to disable table debug attempting to transition node from mzkregionoffline to debug sleeping waiting for all regions to be disabled in info offlining debug starting unassignment of region debug creating unassigned node for in a closing debug received zookeeper event typenodedatachanged statesyncconnected debug successfully transitioned node from mzkregionoffline to info hregionopenhregion region name debug opening region name startkey endkey encoded debug received zookeeper event typenodechildrenchanged statesyncconnected debug registered coprocessor service debug handling transitionrszkregionopening current state from region state map statependingopen info region name startkey endkey encoded transitioned from statependingopen to stateopening region being moved was could see that the test exhausted retries in debug waiting for region to be opened already retried timescodebut the region movement continued after like more time should be allowed for region movement
0
Create a proof of concept for doing interpreted mode in Kogito runtime, à la the POC doc BPMN runner. A Kafka server is required to be running in order to execute StatelessProcessResourceTest:

{noformat}
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
{noformat}

Create topics:

{noformat}
bin/kafka-topics.sh --create --topic applicants --bootstrap-server
bin/kafka-topics.sh --create --topic decisions --bootstrap-server
{noformat}

Usage:

{noformat}
java -Dorg.kie.deployment.path -jar target/workflow-runner.jar
{noformat}

Deployment:

{noformat}
cp
{noformat}

You will see:

{noformat}
INFO org.kie.kogito.jitexecutor.process.JitProcessServiceImpl
INFO building processes
INFO creating process definition for file
INFO deployed processes are applicantworkflow
INFO subscribing to kafka topic applicants
{noformat}

You can add additional deployments anytime.
0
Sometimes it's beneficial to treat types like BytesWritable and Text as byte arrays, agnostic to their type. Adding an interface to tag key types as amenable to this treatment can permit optimizations and reuse for that subset of types in tools and libraries.
0
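A sketch of the tagging idea above: a hypothetical marker-style interface exposing the backing bytes, plus a tool that works on any tagged type without knowing the concrete class. These names are illustrative, not Hadoop's actual API:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical tag interface for key types that expose their backing
// byte array, letting tools compare/copy keys type-agnostically.
interface RawBytesAccessible {
    byte[] getBytes();  // backing array (may be larger than the data)
    int getLength();    // number of valid bytes
}

// A Text-like type opting in to byte-level treatment.
class SimpleText implements RawBytesAccessible {
    private final byte[] data;
    SimpleText(String s) { this.data = s.getBytes(StandardCharsets.UTF_8); }
    public byte[] getBytes() { return data; }
    public int getLength() { return data.length; }
}

// A tool operating on any tagged type, agnostic to what it is.
class RawUtils {
    static boolean rawEquals(RawBytesAccessible a, RawBytesAccessible b) {
        if (a.getLength() != b.getLength()) return false;
        for (int i = 0; i < a.getLength(); i++)
            if (a.getBytes()[i] != b.getBytes()[i]) return false;
        return true;
    }
}
```

A BytesWritable-like type would implement the same two methods, and `rawEquals` (or a raw copy/compare in a library) would work across both without instanceof checks per concrete type.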
Something like this would work:

# comments to support chkconfig
# chkconfig:
# description: JBoss EAP

chkconfig is miles better than hand-configuring your init symlinks.
0
The Great Deluge algorithm, which has been introduced recently, does not have corresponding support in the solver config editor in Decision Central. As a result, any attempt to add a local search phase fails with an error, which can be found in the attachment.

Probable solution: in the optaplanner-wb repo, do a "find in path" for lateAcceptance or tabuSearch; you'll find the files that hard-code the values of the localSearchType. Add greatDeluge in there, and also add it in the properties files. Submit it as a PR and see if Jenkins says the full downstream build works fine.
1