text_clean
stringlengths
3
505k
label
int64
0
1
please build aop and include it in the next eap cp
1
testsruntestspy querytesttestudfspy failed on and after running the test i cant connect to service port with impalashell here is the hserr log see details event thread exception a orgapacheimpalacommoninternalexception memory limit exceeded functioncontexttrackallocations allocations exceeded memory limits could not allocate mb without exceeding limit error oc event thread exception a orgapacheimpalacommoninternalexception memory limit exceeded functioncontexttrackallocations allocations exceeded memory limits could not allocate mb without exceeding limit error oc event thread exception a orgapacheimpalacommoninternalexception memory limit exceeded functioncontexttrackallocations allocations exceeded memory limits could not allocate mb without exceeding limit error oc event thread exception a orgapacheimpalacommoninternalexception memory limit exceeded functioncontexttrackallocations allocations exceeded memory limits could not allocate mb without exceeding limit error oc event thread exception a orgapacheimpalacommoninternalexception memory limit exceeded functioncontexttrackallocations allocations exceeded memory limits could not allocate mb without exceeding limit error oc event thread exception a orgapacheimpalacommoninternalexception memory limit exceeded functioncontexttrackallocations allocations exceeded memory limits could not allocate mb without exceeding limit error oc event thread exception a orgapacheimpalacommoninternalexception memory limit exceeded functioncontexttrackallocations allocations exceeded memory limits could not allocate mb without exceeding limit error oc event thread exception a orgapacheimpalacommoninternalexception memory limit exceeded functioncontexttrackallocations allocations exceeded memory limits could not allocate mb without exceeding limit error oc event thread exception a orgapacheimpalacommoninternalexception memory limit exceeded functioncontexttrackallocations allocations exceeded memory limits could not allocate mb without exceeding limit error oc event thread exception a orgapacheimpalacommoninternalexception memory limit exceeded functioncontexttrackallocations allocations exceeded memory limits could not allocate mb without exceeding limit error oc event thread exception a orgapacheimpalacommoninternalexception memory limit exceeded functioncontexttrackallocations allocations exceeded memory limits could not allocate mb without exceeding limit error oc event thread exception a orgapacheimpalacommoninternalexception memory limit exceeded functioncontexttrackallocations allocations exceeded memory limits could not allocate mb without exceeding limit error oc
0
if a user does a readsaveremove on a data store indexeddbwebsql before it is open it will throw an error add the ability to auto open for them
0
in any connector where a timeunit dropdown timer sql periodic execution the bullets of the list are still displayed
0
some links are broken if crucible is bounded via to apache the defect links contain an additional port number my configuration is webserver contextcrucible siteurl with this config can i access the crucible user interface and make administrative settings but if it comes to the point to create code reviews than point some buttons and links to whereas i never configured port the create review button leads to an timeout but a nonconfigured review is created these reviews could not be accessed from the review list because these links are broken too also to mention is that the crucible reference from the apache is set via https and the broken links using http
1
running mvn verify for the spark integration tests against current master fails code mvn dskiptests install scanning for projects reactor summary apache hbase success apache hbase checkstyle success apache hbase build support success apache hbase error prone rules success apache hbase annotations success apache hbase build configuration success apache hbase shaded protocol success apache hbase common success apache hbase metrics api success apache hbase hadoop compatibility success apache hbase metrics implementation success apache hbase hadoop two compatibility success apache hbase protocol success apache hbase client success apache hbase zookeeper success apache hbase replication success apache hbase resource bundle success apache hbase http success apache hbase procedure success apache hbase server success apache hbase mapreduce success apache hbase testing util success apache hbase thrift success apache hbase rsgroup success apache hbase shell success apache hbase coprocessor endpoint success apache hbase backup success apache hbase integration tests success apache hbase examples success apache hbase rest success apache hbase external block cache success apache hbase spark success apache hbase spark integration tests success apache hbase assembly success apache hbase shaded success apache hbase shaded client success apache hbase shaded mapreduce success apache hbase shaded packaging invariants success apache hbase archetypes success apache hbase exemplar for hbaseclient archetype success apache hbase exemplar for hbaseshadedclient archetype success apache hbase archetype builder success build success total time min finished at final memory mvn pl hbasespark pl hbasesparkit verify scanning for projects integrationtest hbasesparkit the parameter forkmode is deprecated since version use forkcount and reuseforks instead t e s t s corrupted stdout by directly writing to native stream in forked jvm see faq web page and the dump file running orgapachehadoophbasesparkintegrationtestsparkbulkload tests run failures errors skipped time elapsed s failure in orgapachehadoophbasesparkintegrationtestsparkbulkload testbulkloadorgapachehadoophbasesparkintegrationtestsparkbulkload time elapsed s error javaioioexception shutting down at caused by javalangruntimeexception failed construction of regionserver class orgapachehadoophbaseminihbaseclusterminihbaseclusterregionserver at caused by javalangillegalargumentexception port out of at results errors » io tests run failures errors skipped verify hbasesparkit reactor summary apache hbase spark success apache hbase spark integration tests failure build failure total time min finished at final memory failed to execute goal verify on project hbasesparkit there are test failures please refer to usersbusbeytmpprojectshbasehbasesparkittargetfailsafereports for the individual test results please refer to dump files if any exist jvmrundump dumpstream and jvmrundumpstream to see the full stack trace of the errors rerun maven with the e switch rerun maven using the x switch to enable full debug logging for more information about the errors and possible solutions please read the following articles after correcting the problems you can resume the build with the command mvn rf hbasesparkit code
1
when i try to deploy a seam web project to a remote eap server it takes a few minutes expected big seam lib and then fails not expected with this errorcodeentry orgjbossideeclipseascore deployment of module seamproj has failedsubentry the operation deploy for unit seamprojwar was rolled backentry orgjbossideeclipseaswtpcore error renaming to this may be caused by your servers temporary deploy directory being on a different filesystem than the final destinationyou may adjust these settings in the server editorcodethis is the server console on the remote info managementhandlerthread content added at location info msc service thread starting deployment of seamprojwar runtimename info msc service thread read persistencexml for warn msc service thread deployment deploymentseamprojwar is using a private module comsunjsfimplmain which may be changed or removed in future versions without warn msc service thread deployment deploymentseamprojwar is using a private module comsunjsfimplmain which may be changed or removed in future versions without warn msc service thread deployment deploymentseamprojwar is using an unsupported module which may be changed or removed in future versions without warn msc service thread deployment deploymentseamprojwar is using an unsupported module which may be changed or removed in future versions without info msc service thread jndi bindings for session bean named ejbsynchronizations in deployment unit deployment seamprojwar are as follows javaglobalseamprojejbsynchronizationsorgjbossseamtransactionlocalejbsynchronizations javaappseamprojejbsynchronizationsorgjbossseamtransactionlocalejbsynchronizations javamoduleejbsynchronizationsorgjbossseamtransactionlocalejbsynchronizations javaglobalseamprojejbsynchronizations javaappseamprojejbsynchronizations info msc service thread jndi bindings for session bean named timerservicedispatcher in deployment unit deployment seamprojwar are as follows javaglobalseamprojtimerservicedispatcherorgjbossseamasynclocaltimerservicedispatcher javaappseamprojtimerservicedispatcherorgjbossseamasynclocaltimerservicedispatcher javamoduletimerservicedispatcherorgjbossseamasynclocaltimerservicedispatcher javaglobalseamprojtimerservicedispatcher javaappseamprojtimerservicedispatcher warn msc service thread deployment deployment seamprojwar contains cdi annotations but beansxml was not error managementhandlerthread deploy of deployment seamprojwar was rolled back with the following failure message services with missingunavailable dependencies info msc service thread stopped deployment seamprojwar runtimename seamprojwar in info managementhandlerthread service status new missingunsatisfied dependencies service jbossdeploymentunitseamprojwarcomponentejbsynchronizationscreate missing dependents service jbossdeploymentunitseamprojwarcomponentejbsynchronizationsstart missing dependents service jbossdeploymentunitseamprojwarcomponentejbsynchronizationsvieworgjbossseamtransactionlocalejbsynchronizationslocal missing dependents service jbossdeploymentunitseamprojwarcomponenttimerservicedispatchercreate missing dependents service jbossdeploymentunitseamprojwarcomponenttimerservicedispatcherstart missing dependents service jbossdeploymentunitseamprojwarcomponenttimerservicedispatchervieworgjbossseamasynclocaltimerservicedispatcherlocal missing dependents service missing dependents service jbossdeploymentunitseamprojwarcomponentcomsunfacesconfigconfigurelistenercreate missing dependents service jbossdeploymentunitseamprojwarcomponentcomsunfacesconfigconfigurelistenerstart missing dependents service jbossdeploymentunitseamprojwarcomponentjavaxfaceswebappfacesservletcreate missing dependents service jbossdeploymentunitseamprojwarcomponentjavaxfaceswebappfacesservletstart missing dependents service jbossdeploymentunitseamprojwarcomponentjavaxfaceswebappfacettagcreate missing dependents service jbossdeploymentunitseamprojwarcomponentjavaxfaceswebappfacettagstart missing dependents service jbossdeploymentunitseamprojwarcomponentjavaxservletjspjstltlvpermittedtaglibstlvcreate missing dependents service jbossdeploymentunitseamprojwarcomponentjavaxservletjspjstltlvpermittedtaglibstlvstart missing dependents service jbossdeploymentunitseamprojwarcomponentjavaxservletjspjstltlvscriptfreetlvcreate missing dependents service jbossdeploymentunitseamprojwarcomponentjavaxservletjspjstltlvscriptfreetlvstart missing dependents service jbossdeploymentunitseamprojwarcomponentmanagedbeanorgrichfacesversionbeancreate missing dependents service jbossdeploymentunitseamprojwarcomponentmanagedbeanorgrichfacesversionbeanstart missing dependents service jbossdeploymentunitseamprojwarcomponentmanagedbeanorgrichfacesskinskinbeancreate missing dependents service jbossdeploymentunitseamprojwarcomponentmanagedbeanorgrichfacesskinskinbeanstart missing dependents service jbossdeploymentunitseamprojwarcomponentorgapachecatalinaservletsdefaultservletcreate missing dependents service jbossdeploymentunitseamprojwarcomponentorgapachecatalinaservletsdefaultservletstart missing dependents service jbossdeploymentunitseamprojwarcomponentorgapachejasperservletjspservletcreate missing dependents service jbossdeploymentunitseamprojwarcomponentorgapachejasperservletjspservletstart missing dependents service jbossdeploymentunitseamprojwarcomponentorgjbossseamservletseamfiltercreate missing dependents service jbossdeploymentunitseamprojwarcomponentorgjbossseamservletseamfilterstart missing dependents service jbossdeploymentunitseamprojwarcomponentorgjbossseamservletseamlistenercreate missing dependents service jbossdeploymentunitseamprojwarcomponentorgjbossseamservletseamlistenerstart missing dependents service jbossdeploymentunitseamprojwarcomponentorgjbossseamservletseamresourceservletcreate missing dependents service jbossdeploymentunitseamprojwarcomponentorgjbossseamservletseamresourceservletstart missing dependents service jbossdeploymentunitseamprojwarjndidependencyservice missing dependents service jbossdeploymentunitseamprojwarmoduledeploymentruntimeinformation missing dependents service jbossnamingcontextjavamoduleseamprojseamprojenvorgjbossseamasynctimerservicedispatchertimerservice missing dependents service jbossnamingcontextjavaseamprojdatasource missing dependents service jbosspersistenceunitseamprojwarseamproj missing dependents service jbosswebdeploymentdefaulthostseamproj missing dependents service jbosswebdeploymentdefaulthostseamprojrealm missing dependents codeso clearly the deployment is incomplete on the server in fact the first line says that new content was added to but this directory is empty
1
dear in poznan supercomputing and networking center we use your repo this repo is configured in our artifactory we use your repository to create local proxy ive noticed that you locked our ip address and we cant get data http similar issue is described on could you unlock our ip should i do some adjustment in our configuration to not be locked in future best dariusz janny poznan supercomputing and networking center
1
separated force write thread better group commit strategy on latency and throughput
0
right now we test consistency modes independently but they will eventually coexist and that can spawn trouble eg we should have an integration test that runs writes on multiple consistency modes at the same timeplus we should have the ycsb run on multiple consistency modes at the same time need to revivecleanup what i did for the ht paper
0
row locks in hregion are keyed by a intsized hash of the row key its perfectly possible for two rows to hash to the same key so if any client tries to lock both rows it will deadlock with itself switching to a hash is an improvement but still sketchy
0
currently on the java side boolean properties like enabled are seen as getenabled instead of isenabled the problem is even worse because jetgroovys joint compiler sees them as isenabled
0
as far as i can tell jspwiki currently lacks protection agains crosssite request forgery csrf are there plans or previous work to add for example some additional session token to prevent csrf im willing to contribute here but some general discussion about how and where to implement this would be helpful more info about csrf here
0
the list of supported databases is getting old need to update the supportedtested levels for derby informix ms sql and oracle
0
i hope that this is the right place to ask for help on this issue i have tested this on guacamole version installed natively on a centos on a centos native installation with guacamole version and on a docker setup on debian buster os the results were on all setups identical i try to print from a windows client connected via rdp session pdf download printing small documents like text only work without problems i get the pdf download as expected after a short period of time as soon as i try to print larger files with images in it the resulting pdfs get very big or cant even be downloaded one document ive run tests on is a pages pdf document with pictures and diagrams in it which original size is ive printed the first pages wich took about minutes and the resulting pdf was big printing the first pages results in a mb pdf file takes about minutes if i try to print more than pages ghostscript works on cpu but wont finish in a reasonable time period then i took other documents which one of our customers wanted to print these are scanned documents with only pages only images original size mb the windows printqueue says that the resulting document will be big but due to timeout issues its not available to download at all i did little research and microsoft says to change the printer settings to print directly to the printer but it is not possible to change this setting in the guacamoleprinter settings ive also tested against another windows system same problem here the same tests were run on a windows system and this dos not show this behavior so this is a windows guacamoleprinter issue is there any configuration without altering the code that can be changed to make things work again thanks heiko
0
osbs provides certain capabilities to support updating operator manifests explore some of these options using rendertemplates pinning operator acceptance criteria no manual steps are needed for building a z or ystream for the first time see related gchat for context this relates to related to distgit or gerrit repos afaik well still need to manually update the productyaml in gitlab with the correct product version all version pointers in the operator manifests in distgit are updated to the next product version based on cpaas env vars see related docs for details after buildpipeline jenkins build runs note gerrit distgit directory is unaffected all operator csvs related images in gerrit distgit dir use the latest tag the rendertemplates or other similar solution updates the tag according to the version being built this is needed for the checkpatch job see the related cpaas issue for details
0
hadoopconfigsh should be changed to not rely on behavior for classpath expansion since it breaks jsvc we need to add back the for loops in hadoopconfigsh which were changed in
1
currently our openshift deployment as described in the docs for both upstream and downstream is broken as strimizis builds are constantly failing one solution is to change the dock and instruct users how to build their own images based on the ones released by strimzi however for product it might be quite inconvenient
1
when you query couchdb directly the response has very useful information for instance see there you will see that along with the data you get execution time and total number of results returned at the moment the chaincode api does not provide any of this information back to the developer it would be great if some of this information that couchdb already provides in the response is made available to the developer through the chaincode api
0
when fresh new application is created after instalation of patch rpm the creation fails in the fuse log there is npecodecreating application fuse fuse already startedwaiting for patching service to become availableclient javahome not set results may varytruepatching all profile versions with javahome not set results may varyerror executing command unable to apply patchpatch was not succesfully applied you can try to apply this patch again on the container shell using fabricpatchapply allversions to execute control start for warn orgapachesshdclientkeyverifieracceptallserverkeyverifier server at presentedunverified key warn orgapachesshdclientkeyverifieracceptallserverkeyverifier server at presentedunverified key info shellcommandfactory shellcommandfactoryshellcommand orgapachekarafshellssh exception caught while executing commandjavalangruntimeexception unable to apply patchat javasecurityaccesscontrollerdoprivilegednative methodat by javalangnullpointerexceptionat more code
1
description this is an issue found when deploy quay with quay operator with managed clair component after push image to quay clair was unable to scan image vulnerability checked clair app pod logs get certificate error message certificate is valid for not fetch failurecolor see detailed clair app pod logs note clair image is paneltitlequay is unable to show image vulnerability panel codejava oc get pod name ready status restarts age running running running running running running completed running running running running completed running oc get pod o json jq speccontainersimage code codejava error while fetching a layer fetcher request failed get certificate is valid for not fetch failure code
1
in addition to specify onclick and ondblclick events for the rows in a datatable it would be nice to have an action actionlistener for click double click on a row then it would be possible to highlight rows and navigate to a detail page when a row is clicked now we have to create a command link in one of the columns as shown in the current masterdetail example i propose adding the following new attributes rowonclickaction rowondblclickaction rowonclickactionlistener rowondblclickactionlistener this is similar to what already exists in the form of rowonclick and rowondblclick
0
im auditing the jsd server doc space and discovered that in the tutorial a few issue types were being referenced access purchase that dont come with the templates for server anymore boooooo wazza and i then did a bit more digging around issues types to see what else had changed and we saw that the issue types and issue types schemes section on was out of date too
0
today most of the messages printed during images startupconfiguration are printed using echo we already have a logger module that we could use to properly print msg levels
0
ive some nms clients in total that uses synchronized receiving non message listener basedthat all are try to receive to the same queue using message selectorswhen failover occurs can be tested with stopping and starting the message bus i get the interrupt and the resumedevents but the clients block for ever trying to receive the next message the messages can be seenin the activemq web console i have even tried using a timeout and try to receive again but they are stuck untili restart the process is this a known issue
0
this problem was found when using the scroll versions plugin but it is actually a problem in the core confluence product and can affect any macro usage during a refactoring even such as page move update etc the resultant macro definition will be corrupted with a partial reference to the current page and as of confluence this will break rendering of the macro confluence is tolerant of the broken xml and will still execute the macro noformat we expect no change to a macro parameter public void throws exception string content blockabc blogpost blog new blogpost blogsetspacenew spacemky blogsettitlehello dolly blogsetbodyasstringcontent blogsetcreationdatenew date blogpostresourceidentifier ri new blogpostresourceidentifierbloggetspacekey bloggettitle bloggetpostingcalendardate conversioncontext context new defaultconversioncontextblogtopagecontext whenmockresourceidentifierfactorygetresourceidentifierblog contextthenreturnri whenmockresourceidentifiermarshallermarshalri contextthenreturnstreamablesfromyou shouldnt be seeing me in the output string result updaterexpandrelativereferencesincontentblog assertstoragexmlequalscontent result noformat
1
i would like to enhance sql so that the current transaction could be retrieved and displayed as a long int that could be used in dml statements use cases come to mind insert into t values rowkey currenttransid or get transid
0
see for parent task and for the details on the metrics this subtask is to create the sink task metrics
1
see we should either fix the java doc build or disable building the java docs personally id be fine with disabling the java docs if a maintainer doesnt show up
1
the home page for a given project on stac appears to be blank whether im logged in or notps why is the version in the footer not a selectable version above somewhat confusing
0
this block of constants struct me as odd looked like a bug to me at firstcode public class standardsyntaxparser implements syntaxparser standardsyntaxparserconstants private static final int conjnone private static final int conjand private static final int conjor codebut it turns out theyre not used at all anymore there is a conjunction block that is all commented out
0
description of problemclicking on any operator group in topology causes the page to steps to reproduce switch to any namespace that has operator group eg knativeserving and open topology click on the operator actual resultsthe page expected resultsoperator group sidebar should reproducibility alwaysintermittentonly build
1
we build maven module b and it depends of another maven module a if we call goal of weblogicmavenplugin in module a and in b we do the same thing but the plugin change version we got this internal error exception message error orgapachemavenclimavencli internal error javalangillegalstateexception duplicate plugin realm for plugin orgapachemaveninternalerrorexception internal error javalangillegalstateexception duplicate plugin realm for plugin at at at at at at at at at at at method at at at at at at at caused by javalangillegalstateexception duplicate plugin realm for plugin at at at
1
as a result nothing can pass the comparison the match function provide a wighted matching based on which fields are matching and is perhaps the best to use to obtain the credentials of the specified proxy from the configuration
0
the following html renders with white text and black bullets in the textedit demo the bullets should be white meta contenttexthtml httpequivcontenttype body backgroundcolor fontfamily times new romantimesserif color ffffff topic topic
0
this bug was imported from another system and requires review from a project committer before some of the details can be marked public for more information about historical bugs please read why are some bugs missing informationyou can request a review of this bug report by sending an email to please be sure to include the bug number in your request
1
sparkjava supports https from jks keystorewe can simply provide some configuration options for it defaults to falsehttpsenabledfalse compulsory when enabling httpshttpskeystorepathtokeystorehttpspasswordpassword optional when enabling https self signedhttpstrustkeystorehttpstrustpasswordcan be tested with a curl request for instance curl xget k however configuration is now very simple isenable port and we pass it directly through constructor we might need to rework configuration and extract it to an other object
0
the exception warn could not bind factory to jndijavaxnamingnamealreadyboundexception remaining name env at at at at at at at at at at at at at method at at at at at at at at at at
0
if qwebengineview is shown in a window with qgraphicsdropshadoweffect set as graphics effect the qwebengineview stops repainting appearing stuck resizing or moving the window triggers a repaint i have been unable to reproduce this with other graphics effects attached is a minimal reproducing project pushbuttton in mainwindow opens a dialog that exhibits the bug dialog implementation is in dialogcpp
0
if the input given is less than bytes we pad zero for the remaining bytes this is not correct if the value is a negative number we should pad or depending upon the number is a positive or negative number later when the value is retrieved we end up with an incorrect value
1
sql query outputs are not always deterministic unless there is an explicit order by this patch injects an explicit sort when the query plan is not supposed to be sorted to make query outputs deterministic this is inspired by hivecomparisontest
0
you can not currently remove content or layout items from a lessons page
1
refer to
0
the below exception starts occuring when my application tries to update index of consumerswhen all consumers are rebuild the indexes are not updated and all threads of apache is occupied by this errorso i need to restart my tomcat every time and this happens for every hoursi saw some issues related in jira but when i used links to go to bugzilla it never works it would be great if some explanation is givenfor setting ulimit in linux it is told that it should be filespersegmentwe have mergefactor of are also not clear on what is filespersegmentbelow is the stack trace which comes mq consumer indexupdate comunileverbrandcomsdseservicesindexindexupdateconsumer got solr server exceptionorgapachesolrclientexceptionsolrserverexception server returned nonzero statuscaused byserver side exception status javaiofilenotfoundexception too many open filesat javaiorandomaccessfileopennative methodat comfreiheitcommonssensorperformancelogfilterdofilterunknown sourceat after sending requestat server returned nonzero status
1
the existing rest service allows the update of mixins on a node via the put method however when an update causes mixins to be removed the following problems may occur because the mixins are processed first before any properties it is impossible to remove a mixin if the node has properties defined which belong to that mixin or an ancestor of that mixin this is because a validation is performed when removing the mixins which would result in a constraintviolationexception the existing rest service doesnt handle properly removing of properties via update by passing in null values or empty arrays this is needed in order to be able to remove properties from a node therefore not only should the last point from above be implemented but also the order in which mixins properties are updated via the rest service should be changed
1
lucenemisc contrib module has two package names equals to the core version orgapacheluceneindex and orgapachelucenestore for theese modules we need to configure the creation of the bundles so that the osgi repository recognize theese packages as merged else we get a uses constraint violation error in the installationthe solution is to mark the packages as mergeables adding a configuration in the mavenbundeplugin splitpackagemergefirst the represents the package names we could put orgapachelucene too
0
bridgerenderrequestwrapperjavain pack orgapachemyfacesportletfacesbridgewrapper is missing
1
convert the modeshapesequencerjbpmjpdl module be sure to change the toplevel pom file to move the module outside of the legacy profile
1
generally the ootb userfacing text refers to a username as the general name for a user idhowever several strings in trunksakailegacybundlesrcbundlesitesetupgenericproperties use the michiganspecific uniqname monikerjavaauthoriz the site request authorization email has been sent successfully to uniqnamejavathesiteemail the site request authorization email could not be sent to uniqnameaddconfsortbyuniq sort by uniqnameaddconfsuasc sort by uniqname ascendingaddconfsudesc sort by uniqname descendingchroluniq uniqnameas an aside it might nice to figure out a way to specify the local name for username uniqname uni utln netid or whatever in sakaiproperties and have that be inherited in all of the different places in the userfacing text
0
we need to make sure that eap is tested with the upcoming release of crw bootable jar devfile xzp new in devfile x new in devfile zp same as in
0
there are serveral log messages that report compete when they mean complete
0
cluster with all services at all nodes and default at host details in download client configs dropdownexpected resultall letters in yarn should be capital at download client configs dropdown as at services page yarn not yarnshould have yarn client not yarn client like hdfs clientactual resulthave yarn client in stead yarn client
0
we must add the apache license to our examples and reposexample license
1
it iterates over a set to print its flags
0
when table level quota is set and violated pre existing namespace level quota policy not imposed on table while removing quota for a table in addition to deleting from hbasequota table it should be removed from current state in quotaobserverchore as well
0
when regions are getting added and removed lots of cpu time can be used by jmx this is caused by sending jmx messages for every new metric that is added or removedseeing jstacks like thisrmi tcp daemon tid nid runnable javalangthreadstate runnable at at at at at at at at thread for monitoring hbase daemon tid nid runnable javalangthreadstate runnable at at at at source at at at
0
execute in html or xhtml file insert tagcodetextcodeassert jboss tools content assist works in value of onclick attributepartition type id in html event attributes is orgeclipsewsthtmlscripteventhandler
0
integration tests for the c data interface implementation for java the integration is against pyarrow
0
hi teamfacing this issue please can you help the plugin of cordova camera the callbackcontext is returning null and the app get crashcan you help us why this is happening in android and as well as can u do something in codova camera plugin and some try catch where as null value come the app should not shutting down fatal exception process comnielsennic pid javalangruntimeexception unable to resume activity comnielsenniccomnielsennicnic javalangruntimeexception failure delivering result resultinfowhonull datanull to activity comnielsenniccomnielsennicnic javalangnullpointerexception attempt to invoke virtual method javalangstring androidneturitostring on a null object at at at at at at at at at javalangreflectmethodinvokenative at at at caused by javalangruntimeexception failure delivering result resultinfowhonull datanull to activity comnielsenniccomnielsennicnic javalangnullpointerexception attempt to invoke virtual method javalangstring androidneturitostring on a null object at at caused by javalangnullpointerexception attempt to invoke virtual method javalangstring androidneturitostring on a null object at at at at at at more
1
was trying to verify users in a group are able to add textattachment to their group assignment but are unable to submit the assignment the assignment just keeps displaying in progress after clicking submit am attaching error from logs but the first line reads warn orgsakaiprojectcheftoolvelocityportletpaneledactionactiondispatch exception calling method doreadaddsubmissionform javalangreflectinvocationtargetexception caused by javalangnullpointerexception
1
currently wicket doesnt include a uniform and automatic solution against crsf vulnerability or vulnerability in order to solve csrf is necessary to avoid static html and create dynamic or aleatory html per usertwo posible include a random token aleatory parameter to each url link or form the name and the value of this parameter can be the same per user or change per request more secure but perform worse it seems that can be implemented creating other implementation of irequestcodingstrategy encrypt all urls links and form urls using request coding strategy strategy offered currently by wicket cryptedurlwebrequestcodingstrategy provide a security factory to use a different key per user or add some aleatory data to encrypted data for example user jessionid sunjcecrypt bundled in wicket is vulnerable to csrf because obtained encrypted string is the same for all the users
0
currently this is only true when an elements visible property is false but it must be true also when opacity is zero
0
comparisons done by metadatainheritancecomparator are not transitive it is possible to have classes a b and c such that the comparator simultaneously reports that a b b c and c a under certain unlucky conditions this causes the sortedtree holding the metadata resolution buffer to become confused during redblack fix such that it can retrieve a certain element but not delete it the processed list then grows until heap is exhaustedin the enclosed sample project a b by nameb c by assignable primary key fieldc a by levels from base class objectif you import the enclosed eclipse project into an aspectjenabled eclipse and refer the aspectj compiler to an openjpa jar file youll get the following output bugb buga bugc bugb buga bugc cycle detected buga bugc bugb bugathe project will work outside of aspectj and will exhibit the out of memory condition described abovei acknowledge that the enclosed persistencexml file is not kosher in that it doesnt list all classes to be instrumented my own project affected by this bug has a correct persistencexml file i had to work hard to contrive a simple example as the order in which classes are buffered affects the appearance of the bugthere is no workaround that i know of i dont believe that the comparators semantics are welldefined
1
the watch functionality within the sakai wiki is not performing as expected although the email notification for a change is sent out when the appropriate option is selected in the preferences the notification is also send out when the do not send me email option is selectedi was testing on the following test servermy site name istest tlt spring refer to the following screencast for further explanation
0
this may be a dup of but the system tests in shows that a concurrent transactional consumer reads aborted messages for the test in question the clients are bounced times with a transaction size of we expect aborted messages the concurrent consumer regularly over counts by to messages suggesting that some aborted transactions are consumed noformattestid kafkatesttestscoretransactionstesttransactionstesttesttransactionsfailuremodecleanbouncebouncetargetclientsstatus failrun time minute seconds detected dups in concurrently consumed messagestraceback most recent call last file line in run data selfruntest file line in runtest return selftestcontextfunctionselftest file line in wrapper return functoolspartialf args kwargswargs wkwargs file optkafkadevtestskafkatesttestscoretransactionstestpy line in testtransactions assert numdupsinconcurrentconsumer detected d dups in concurrently consumed messages numdupsinconcurrentconsumerassertionerror detected dups in concurrently consumed messagesnoformatthis behavior continues even after was merged
1
more like this ca be refactored to improve the code readability test coverage and maintenance scope of this jira issue is to start the more like this refactor from the more like this params this jira will not improve the current more like this but just keep the same functionality with a refactored code other jira issues will follow improving the overall code readability test coverage and maintenance
0
when i use the following jsoncodeobj a hellocodeand load it with the following python codecodetf sctextfiletestjsonv sqlcontextjsonrddtf structtypevsavetestparquet modeoverwritecodei get the following error in spark master an error occurred while calling orgapachesparksparkexception job aborted due to stage failure task in stage failed times most recent failure lost task in stage tid localhost javalangclasscastexception javalangstring cannot be cast to at at at at at at at at at at at at at at at at at at at at at at at worked well in spark
1
i create table by phoenix like this sql create table if not exists toriginmsg msgid not null primary key msg varbinary updatetime date compressionsnappydurabilityasyncwal split on then upsert data in a loop regionserver was crash the infomation ref attanchment if i dont set asyncwal it will work
1
hi infra im trying to look into some of our rule generation and mail not being received for the logs so i went looking for the missing email in the account for example getting bounces like this connect to connection timed out and logs like dec savm postfixqmgr from queue active dec savm postfixsmtp connect to connection timed out dec savm postfixsmtp to relaynone statusdeferred connect to connection timed out i dont know what is but i think its asfs relay and where we are supposed to send email so why is the connection timing out postfix seems configured to use it so its purposeful postfixmaincfrelayhost what should the system be using as a relay host kam
1
task for ozone release
1
devsuite installer now supports xhyve hypervisor detection and configuration for cdk as it is explained here
0
there is desperate need for a libary to execute dependant tasks in a reliable way
0
currently selenium is configured for admin console to run via seleniummavenplugin updated to selenium this version is reported to have a longstanding bug with latest firefox versionsit seems that selenium solves such issue but it requires a maven configuration change according to
0
changes relating to setting up new modules tests on tc
0
currently the unboundedreaderiterator will read until elements have been read or have passed this works for most pipelines but is insufficient for pipelines that either require very high throughput or require low latency we should make these values controllable via a pipelineoption probably under dataflowpipelinedebugoptions the constants are defined here
0
noformatorgapachehadoopyarnexceptionsyarnruntimeexception javaiofilenotfoundexception file filetmphadoopyarnstaginghistorydoneintermediate does not exist at at at at at javasecurityaccesscontrollerdoprivilegednative method at at at at at at at at at at at javasecurityaccesscontrollerdoprivilegednative method at at at by javaiofilenotfoundexception file filetmphadoopyarnstaginghistorydoneintermediate does not exist at at at at at at at at at at at at at at at at
0
for jbide please perform the following if nothing has changed in your component since eg xulrunner gwt freemarker birt colorredreject this make sure your component has no remaining unresolved jiras set for fixversion jiras with fixversion ensure your component featuresplugins have been properly upversioned eg from to note if you already did this for the previous milestone you do not need to do so againcodemvn dtychomodemaven update your root pom to use parent pom version code orgjbosstools parent ensure youve built run your plugin tests using the latest target platform version clean verify if the tp is already released ormvn clean verify if still being branch from your existing master branch into a new branch codegit checkout mastergit pull origin mastergit checkout b push origin close do not resolve this jira when donesearch for all task jira or search for livereload task jira
1
goal implementing sqoop metastore to run daily incremental etl when executing the following sqoop job codejava sqoop job dmapreducejobqueuenamednosla create import connect username passwordfile outdir sqoop table incremental append checkcolumn greatestcreateauditkey updateauditkey lastvalue splitby createauditkey m targetdir fieldsterminatedby sqoop would automatically run the following query to update the max value codejava select maxgreatestcreateauditkey updateauditkey from code sql syntax error was given since there shouldnt be double quotes around greatest function the proposal of using free form query is rejected since metastore cannot be implemented to automatically run the daily incremental job
1
info orgapachezookeeperclientcnxn socket connection established to initiating info orgapachezookeeperclientcnxn session establishment complete on server sessionid negotiated timeout error orgapachehadoopyarnserverresourcemanagerrmappattemptrmappattemptimpl cant handle this event at current stateorgapachehadoopyarnstateinvalidstatetransitonexception invalid event containerallocated at launched at at at at at at at at at at error orgapachehadoopyarnserverresourcemanagerrmappattemptrmappattemptimpl cant handle this event at current stateorgapachehadoopyarnstateinvalidstatetransitonexception invalid event statusupdate at launched at at at at at at at at at at error orgapachehadoopyarnserverresourcemanagerrmappattemptrmappattemptimpl cant handle this event at current stateorgapachehadoopyarnstateinvalidstatetransitonexception invalid event statusupdate at launched at at at at at at at at at at warn orgapachehadoophaactivestandbyelector ignoring stale result from old client with sessionid code
0
note this bug report is for jira cloud using jira server see the corresponding bug report paneldefaulttemporaryindexprovider is not closing its searcher properlythis is skewing the instrumentation of lucene searcher open and close counts
0
create an agrad jira instance for tooling forge and rad related tasks
0
coinmex exchange api java sdk
0
async supports specifying which target executor to use it would be useful if scheduled had the same support
0
note this bug report is for jira portfolio cloud using jira portfolio server see the corresponding bug report summarywhen a jira portfolio plan is connected to an existing project where in order to create an issue the following fields are required under that projects field configuration description field another field such as a test custom fieldwhen committing a change such as a new epicstoryetc the set required fields screen will be brought up so that the user may populate that other required field in the process the description field is also brought up but it does not retain the description that may have been added in portfoliothe issue cannot be created without the user repopulating the description again since it is a required environment jira portfolio steps to reproduce create a new project create a new custom field eg test which is a single line text field configure the projects field configuration so that description and test are required create a new plan and associate it with this project create a new epic and give it a description in portfolio commit the changeat this point you will find that the issue isnt automatically created since the user needs to populate test description will be blank cannot proceed with creating the issue until description is populated expected resultsdescription should be retained from portfolio into the set required fields screen during the commit completing the commit should create the issue with the actual resultsdescription is not retained in the set required fields screen during the commit repopulate the description at the set required fields set description to be optional in the projects field configuration which will prevent it from coming up during the set required fields screen the description is retained at the time of issue creation this way
1
the plan for the query below has the join conditions refactored but the result is not correct alex looked at this and thinks its a new bugthe query should return rows but it returns as as join as on join as on and explain select as from as left join as on inner join as on and explain select as from as left join as on inner join as on and explain string estimated perhost requirements warning the following tables are missing relevant table andor column statistics join hash predicates hdfs join hash predicates hdfs predicates hdfs returned rows in
1
this issue is clone of another issue but is cloned for highlighting the specific issue around the actual default value dumped out a database from mssql using ddlutils ant task using these files attempting creating the equivalent database on mysql getting few errors as belowcreate table whatever integer not null integer not null default apossome stringapos null failed with you have an error in your sql syntax check the manual that corresponds to your mysql server version for the right syntax i changed the line to as follows and then the ddltodatabase works fine default some string null i am not too sure why this specific default value is xml encoded in the first place but in any case if one indeed had such literal data there should be a way to configure the ddlutils to deal with it possibly appropriately escapeencode i could not find info if there were any user configurable settings that can overcome this problem
0
test and certify inline file system in and hdfs
1
subtree links are randomly lost when closing and reopening sourcetree this began in version
1
jirasoapservice implements the following method codegetdefaultroleactorsjavalangstring token remoteprojectrole projectrolecode which you can use to get default actors for a given role according to migrate guidance you should now use which is not true as the resource returns actors for a specific project which may differ from defaults
1
for jbide please perform the following ensure your component featuresplugins have been properly upversioned eg from to or switch to using the new ojtfoundationlicensefeature instructions resolve this jira when done qe can then verify and close it latersearch for all task jira or search for webservices task jira
1
after having had some problems with corrupted jars we recently purged our proxying directory on the archiva server then switched from ignore to fail policy when bad hash is found on the remote serverafter some times for example we discovered we werent able to download the maven from the client side of archiva the thing is archiva only issues a when the remote hash is bad i guess it should issue a or some insteadto sum up what i think would be the best solutions issue something else than an when the remote artifact wont be downloaded because of a non matching hash offer a way to notify some admin by mail for example about corrupted artifacts that wont be downloaded in fact in this kind of case theres a big chance people using archiva are going to complain about some artifacts that cant be downloaded maven eg offer a dedicated page inside archiva admin summarizing all those problematic artifacts particularly giving those that couldnt be downloaded because of bash hash associated to fail policythanks a lotcheers
0
in fseditlogremoveeditsforstoragedir we iterate over the edits streams trying to find the stream corresponding to a given dir to check equality we currently use the following conditioncode file parentdir getstoragedirforstreamidx if parentdirgetnameequalssdgetrootgetname code which is horribly incorrect if two or more storage dirs happen to have the same terminal path component eg and then it will pick the wrong streams to remove
1
accessibility automation for web apps with java and selenium webdriver
0
noformatcreate table ta int b int clustered by a into buckets stored as orc tblpropertiestransactionalfalseinsert into tab into tab table t set tblproperties transactionaltruenoformat we should now have bucket files and orcrawrecordmergeroriginalreaderpairnext doesnt know that there can be copyn files and numbers rows in each bucket from thus generating duplicate idsnoformatselect rowid inputfilename a b from tnoformatproduces do you have any thoughts on a good way to handle thisattached patch has a few changes to make acid even recognize copyn but this is just a prerequisite the new ut demonstrates the issuefuthermorenoformatalter table t compact majorselect rowid inputfilename a b from t order by bnoformatproduces has demonstrating thisthis is because compactor doesnt handle copyn files either skips them
1
in particular the ones in the split directories arent being added
1
in we have these versions the correct version should be of that springbootcxfjaxrs and springbootcxfjaxws quickstarts pull wrong versions
1
in domain mode most almost certainly all server level operations that update the persistent configuration should not be directly accessible by the end user and should not appear in the results of the readresourcedescription readoperationnames or readoperationdescription operations if executed against a serverlevel resource they can only be invoked by the host controller that is responsible for the server
1
we should set the default value of jobmanagerexecutionfailoverstrategy to region this might require to adapt existing tests to make them pass
1
during the object rebuild test because we do not have enough target for test we have to add the excluded target back for subsequent tests but sometimes the excluded server is added back too early as to the current test will try to access data from it then caused failure for example some client side logs for the layout before update noformat daos placement dbug objlayoutdump dump layout for ver daos placement dbug objlayoutdump shardid tgtid fseq healthy daos placement dbug objlayoutdump shardid tgtid fseq healthy daos placement dbug objlayoutdump shardid tgtid fseq healthy noformat then client sends update rpc to the server but server found that its pool map version is stale then the client refreshes its pool map noformat daos object dbug objlayoutcreate place object on targets ver daos placement dbug objlayoutdump dump layout for ver daos placement dbug objlayoutdump shardid tgtid fseq healthy daos placement dbug objlayoutdump shardid tgtid fseq healthy daos placement dbug objlayoutdump shardid tgtid fseq healthy noformat and then the client retry the update rpc succeed noformat daos object dbug objshardrw rpc daos object dbug objshardrw opc rank tag eph datasize dti noformat and then the excluded server is added back noformat daos placement dbug objlayoutdump dump layout for ver daos placement dbug objlayoutdump shardid tgtid fseq healthy daos placement dbug objlayoutdump shardid tgtid fseq healthy daos placement dbug objlayoutdump shardid tgtid fseq healthy noformat the client begin to verify former written data but unfortunately the fetch rpc is sent to the just back target then fetched nothing noformat daos object dbug objshardrw rpc daos object dbug objshardrw opc rank tag eph datasize dti noformat
1
currently jobs giving an error message nonreadable settings input contained no data looks like there is something missing on the hard drive the following node is usedwhich is marked with slave is pending removal but why are running jobs on that machinefurthermore i can the buttons bring this node online and update offline reason which looks for me a problem in permissions im a maven pmc member
1

Dataset Card for "highest_vs_rest_5_levels"

More Information needed

Downloads last month
0
Edit dataset card