text_clean: string, lengths 3 to 2.28M
label: int64, values 0 to 1
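To make the schema above concrete, the sketch below shows what one record of this preview looks like, assuming the rows that follow are alternating text_clean/label pairs as the header indicates. The Record dataclass and the way it is built here are illustrative assumptions only, not part of the dataset; the sample values are copied verbatim from one of the rows further down.

from dataclasses import dataclass

@dataclass
class Record:
    text_clean: str  # preprocessed issue/bug-report text (lowercased, punctuation stripped)
    label: int       # binary label, 0 or 1

# Sample values copied from a row later in this preview.
example = Record(
    text_clean="findbugs should run as part of the normal build process for the sightly scripting engine",
    label=0,
)
assert example.label in (0, 1)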
Already a moderator on user@, but would also like to help moderate private@, dev@, and commits@streams.incubator.apache.org. Moderator email address: sblackmon@apache.org. Thanks, Ate (mentor, Streams podling).
0
forked child with pid for container failureexpected statisticscpususertimesecs actual vs where typeparam mesosposixcpuisolatorprocess ms
0
I'd like to make my open source project available through the Maven Central repository. Could you please create an appropriate project?
0
seeing this issue when trying to enable rebuildioconfrun test running enablerecovery mapby node mca btlopenibwarndefaultgidprefix mca btl tcpself mca oob tcp mca pml mca btltcpifinclude np tagoutput homedinghwahdaosinstallbindaosrunioconf n etcdaosdaoscontrolyml dmgconfigfile etcdaosdaoscontrolyml creating pool scm gb nvme gb created pool connecting to pool connected to pool creating container opening container exclude rank exclude command succeeded dmg pool exclude o etcdaosdaoscontrolyml rc failure assertrcequal command enablerecovery mapby node mca btlopenibwarndefaultgidprefix mca btl tcpself mca oob tcp mca pml mca btltcpifinclude np tagoutput homedinghwahdaosinstallbindaosrunioconf n etcdaosdaoscontrolyml finished with after reproduced traceback from log attached
1
the sample domain mode configuration bundled in wildfly has two servers in started state and the third one named serverthree is created in stopped statefor stopped servers and for inventory the agent reported the following for the server config resourcecodejson statusdisabled server groupotherservergroup auto startfalse nameserverthreecodethese data allowed miq to associate the server to the stated server group now under inventory reported config contains onlycodejsonserver statestoppedcodeand nothing else given that the parent resource is the host master now its not possible to find which server group the server belongs to miq relies on correctly associating to a server group to treat the server as a domain serverfor reference this is the config data reported for running domain servers reporting correctly the server groupcodejson bound node namemasterserverone base server staterunning product namenil hostnameteya hostmaster server groupmainservergroup initial running modenormal nameserverone suspend staterunning running modenormal versionnil profile namefull
0
Implement validation and error messaging for: too many course grade criteria, too many due date criteria on the same gradebook item, and too many score criteria on the same gradebook item.
0
note this bug report is for confluence cloud using confluence server see the corresponding bug report panelsymptomson the page properties report macro either a content is missingmissingjpegb columns are duplicatedduplicatedjpegcausewhen manually modifying a page properties table a hidden tag like or might be present in the field text making fields with same text be treated as different this also makes impossible to list a column by its namesee the differences in the storage format ofcodehtmltitleproject page prioritystatuscode vscodehtmltitleproject page in a there is a column filter applied status priority but the report doesnt find those columns in project page because those fields have other characters in it in b the columns are listed separately because although they look the same they are different to the report macroexpected special tags to be ignored if rows look the same to be treated like the samesteps to reproduceto get the or values in the table you have to write two lines of text anywhere on the page copy the lines inside the tableediting the field names from now on will keep the or value on themworkaround remove the tables row and create it again for every row with the problem
0
our application is an sgml to html rendering tool we validate the sgml convert to xml and transform to xml using xslta significant portion of the data is cals tables and we have borrowed from norm walshs docbook code to perform these conversionswhen processing large tables xalan crashes with a javalangoutofmemoryerror and no further information when i isolated which tables were causing the crash i processed them individually by removing some rows and entries i finally got the table to render but as soon as i add one entry back in the crash will reoccurthis data is rendered successfully and properly using other xslt processors such as saxon in a comparison admittedly not complete of saxon and xalan both seemed to be returning exactly the same values and iterating the same number of times right up to the point where xalan crashed our client is already using xalan for other purposes and wishes to remain consistent in their toolset but will be unable to unless this problem is remedied they dont seem to receptive to the if tables crash it dont use tables solution pbelow is the xslt code for processing the tables followed by the xml to be processed simply using the xml as your input with this xslt code will reproduce our error thanks calstablexslxslstylesheet border ltbrgt ltbrgt th td th td xslvariable namenamest selectancestortgroupspanspecnamest xslvariable namenameend selectancestortgroupspanspecnameend xslvariable namecolst selectancestorcolspeccolnum xslvariable namecolend selectancestorcolspeccolnum xslvalueof selectnumbercolend numbercolst borderbottomthin solid black xslif testnotprecedingsibling and ancestorrowid xslvariable namecolspec selectentryancestortgroupcolspec xslvariable namecolspec selectentryancestortgroupcolspec xslvariable namecolspec selectentryancestortgroupcolspec xslvariable namecolspec selectentryancestortgroupcolspec xslwithparam namecolspec selectcolspecprecedingsiblingcolspec cols countcolspecs ltcolgt countcolspecs xslwithparam namecolspec selectentryancestortgroupcolspec xslwithparam namecolspec selectentryancestortgroupcolspec
1
the query below crashes impala from the stack trace the problem appears to be in the analytic codenoformatselect over partition by order by asc asc from functionalalltypestiny left semi join select from functionalalltypes on tracenoformatgdb in raise from in abort from in osabortbool from in vmerrorreportanddie from in jvmhandlelinuxsignal from in signalhandlerint siginfo void from in impalaudfimpl this at in impalaclose this state at in impalaclose this at in impalaplanfragmentexecutor this inchrg at in impalafragmentexecstate this inchrg at in boostcheckeddelete x at in boostspcountedimplpdispose this at in boostrelease this at in boostsharedcount this inchrg at in boostsharedptrsharedptr this inchrg at linecodevoid analyticevalnodecloseruntimestate state if isclosed return if inputstreamget null inputstreamclose dcheckeqevaluatorssize fnctxssize for int i i evaluatorssize i need to make sure finalize is called in case there is any state to clean up if currtuple null evaluatorsfinalizefnctxs currtuple dummyresulttuple evaluatorsclosestate fnctxsimplclosecodeoriginal querynoformatselect over partition by order by asc asc as from alltypestiny where exists select as from alltypes where
1
Parsing of unordered sequences uses the ChoiceParser to handle parsing the different alternatives. This causes issues because the SequenceParser that orchestrates the ChoiceParser needs to know why a ChoiceParser fails and react differently. If the ChoiceParser fails because all branches speculatively failed, then we need to simply ignore the failure, and it signifies the end of the sequence; but if the ChoiceParser fails because a discriminated branch failed, then it signifies the unordered sequence failed and the error must propagate upwards. This suggests that we cannot use the ChoiceParser when parsing an unordered sequence, because we cannot know the reason for failure once it completes; all we know is that it failed. Instead the SequenceParser must create and manage points of uncertainty and attempt to parse each branch of the unordered sequence individually, much like the ChoiceParser does. This way it can know whether a branch was discriminated and failed, or whether all branches were tried and they all failed, and react appropriately.
1
The Debezium Server distribution contains Kafka clients in one version while Debezium itself is on another; the versions should be aligned.
0
Story: as an administrator of cluster logging, I want a chart that displays the top containers by logs collected, so that I can understand who the most chatty are. Acceptance criteria: add a new chart to the existing OpenShift Logging dashboard that shows the top containers by logs collected; the chart displays the pod name and namespace associated with each metric.
1
During the last days I have received reports of, and have myself experienced, a really scary effect: user X logs in and sets "remember me". This works for a while. When the user returns to CF after a longer time (the next day, in my case), all edits are marked as being done by user Y. Eeek! By the way, are there means to configure a timeout for the persistent cookie? I would like it to last only for a limited time of inactivity (e.g. a set number of hours).
1
with netbeans on windows after updating from jdk to openjdk javafx i started getting the following exception after this exception since the source scan failed various ide features dont work navigator pane says please wait forever cant navigate sources using operations like go to declaration code refactoring doesnt work etc the ide is pretty useless while in this state strangely i am running netbeans on linux and didnt have this same problem there i saw this on proprietry code and cant share it if necessary i could try to provide a tiny code example that demonstrates the same issue an error occurred during parsing of cjava please report a bug against javasource and attach dump file cdump caused javalangstringindexoutofboundsexception string index out of range at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at
0
isdate stellar function is lenient while parsing datetime below are some of the example of isdate returning true for invalid dates codejava yyyymmdd true yyyymmdd true yyyymmdd true code
0
The code in getLeavingTransitionsMap() lazily constructs the map the first time the method is called. Our application can have many users going through the same node at the start of their shift, so it is possible to have more than one user leaving the first node at the same time in the same VM. This can put multiple users in the getLeavingTransitionsMap() method at the same time, and we can get deadlock in the put method for the map, which means no one else can enter the application. The block of code should be synchronized, or the map should be synchronized. A simpler fix is to just remove the map: the number of transitions exiting a node is unlikely to ever be large enough to justify the complexity of two data structures pointing at the same objects, and iterating across the list is almost always going to be close enough in performance to remove the need for this map. We're running just the jPDL, which has this problem up through and including the current version. /** the leaving transitions, mapped by their name (java.lang.String) */ public Map getLeavingTransitionsMap() { if ((leavingTransitionMap == null) && (leavingTransitions != null)) { // initialize the cached leaving transition map leavingTransitionMap = new HashMap(); ListIterator iter = leavingTransitions.listIterator(leavingTransitions.size()); while (iter.hasPrevious()) { Transition leavingTransition = (Transition) iter.previous(); leavingTransitionMap.put(leavingTransition.getName(), leavingTransition); } } return leavingTransitionMap; }
1
Various enhancements should be applied to the Maven project descriptor of the metatype bundle: set the Bundle-DocURL header to the Metatype Service page, remove header definitions inherited from the parent POM, embed the kXML library as a whole instead of unpacking it, and add the project version to the package export.
0
The aerogear-dev mailing list has Nabble forums enabled, while aerogear-users can be browsed only as a Mailman archive, which is not sufficient from the user perspective. We should enable Nabble for aerogear-users as well.
0
Here the initial configuration of the property mediator is getting updated with each and every message; the log shows repeated lines of the form "INFO LogMediator To: ..., MessageID: ..., Direction: request, ... = aa". The fix would be to clone the valueElement in the PropertyMediatorFactory: instead of "if (value != null) { propMediator.setValue(value.getAttributeValue(), dataType); } else if (valueElement != null) { propMediator.setValueElement(valueElement); }" we need to use "if (value != null) { propMediator.setValue(value.getAttributeValue(), dataType); } else if (valueElement != null) { propMediator.setValueElement(valueElement.cloneOMElement()); }".
1
when creating a new project there is a wizard with a second step that allows you to specify if a default process a more advanced process or an empty project when creating a new project with maven this option is not available and it goes ahead and creates a simple process anywaysfurther more after the maven project is set up right clicking on the process to generate a junit test case the test case fails but works using the nonmaven project wizardas well the test case is created in srcmainjava as opposed to srcmaintest and has a package orgjbpm instead of the same package as the process comsamplehere is the stack trace of the exception when running the test case off the process generated from the new project maven wizard javalangruntimeexception unable to get lastmodified for classpathresource at at at at at at at at at at at at at at at at method at at at at at at at at at at at at at at at at at at at at at at at at at at by javaiofilenotfoundexception samplebpmn cannot be opened because it does not exist at at more
0
The actual use case concerns a possible contribution to Apache Shindig providing optional support for certain APIs whose libraries are distributed under a license which of course cannot and will not be distributed by the ASF. The support is intended to be fully optional, so Apache Shindig will not depend on this and the end user will have to provide the required dependencies itself. The proposed contribution currently has a direct dependency on, and usage of, these APIs. This is already assumed not to be allowed, as it causes the license to become applicable to the whole of the optional Apache Shindig support module. However, a possible workaround has been proposed: replacing the direct API usages with a third-party, compatibly licensed bridge library (Spring Data). This bridge carries its own license; however, it also depends on and uses the APIs directly and is not provided by the project itself. So while this bridge seems to hide the direct usage behind a facade, it seems to me this really is a fake solution, still causing the license to be applicable, if indirectly. I would like to get a clear confirmation or rejection of the above assumption so we can proceed with possibly accepting or rejecting the contribution as proposed. Thanks, Ate.
0
the vendor support resources and contact headers are misaligned they should be at the same vertical position
0
At present we build and develop with a number of different versions of libfuse, depending on what the distro provides, and CentOS has even gone back in terms of version from one release to the next. This causes problems with API changes across libfuse versions, but also means that we can't use new features. Examples of issues seen are: differing ioctl callback prototypes, various ways to set the read/write buffer size, the logging callback only being available on Ubuntu, and cache readdir being supported by modern kernels but not by userspace. Overall this has meant that a lot of my focus has been on developing dfuse such that it is capable of running across a variety of systems, whereas if we were to use a modern libfuse across the board we could focus on using the latest features and improving performance.
1
This was brought up a long time back: we need to move some of the public APIs from CellUtil to an internal, private util class, because they are used in some internal flows and it does not make sense to have them in a publicly exposed util class. The topic came up again in RB comments as well.
1
we do experience inconsistent behavior of journal object store amq against shadow store this starts to happen from test case enlist activemq jms resource enlist test xa resource prepare jms resource prepare test xa resource commit jms resource commit test xa resource byteman force toplevelcommit to return result for xa resource is twophaseoutcomeheuristichazard and client gets javaxtransactionheuristicmixedexception probing log and showing state of transactions subsystemtransactionslogstorelogstoreprobe expecting one indoubt participant in heuristic state calling operation recovery on all transactions participants do recoverythis works fine when shadow log store or jdbc object store is used for amq object log store the participant is first not in heuristic state but is in state prepared and second there is not only one participant of transaction indoubt but theyre returned two participantsthen during recovery process the periodic recovery also can see two participants for recovery thats my feeling from log not only one as expected as first resource was already correctly committed thats how shadow log store works
1
Some tools have the insert image icon in the WYSIWYG toolbar (Syllabus, Quiz/Test) and some do not (legacy tools). It was decided to put it in for all tools. The reason for keeping it out originally was that someone could add an image that resided on some system that might not be available, so the image wouldn't be displayed. However, even with that issue, users are wanting images, and it was decided to put it back in. We will add an auto-upload feature later, so that an image would be loaded into the site's Resources area and the image URL would point to the image in Resources.
0
has anyone ever seen an issue where an instructor is shown studentassignment for a different site while grading an assignment the instructor had clicked the next ungraded button but then was shown the error you are not allowed to grade submission however the siteid in the error message was not the site id of the class the instructor was working in the instructor is not an instructor of the siteid in the error message the student that appeared on the page was one from the siteid in the alert but the title of the assignment on the page was correct checking the session and event logs the instructor that saw this error and the instructor of the siteid in the alert were both on the same server we have a clustered system at roughly the same time grading each grading the two assignments in question how is it possible that the instructor saw an assignmentstudent they shouldnt have had access to i didnt see any existing jiras but i might not be searching the right keywords sakai tomcat java mysql
1
see eg is using bom from
1
if the repository named as ossezvtiger late i try to delete this repository i got error a repository with the name ossezvtiger does not exist however if i named a repository as ossezvtiger or ossezvtiger they are both work fine which means we cant use dot at repository name
0
right now it returnsdatabase version suggested actual version from server nodesjdbc version suggested thats whats in java version suggested actual version of running ignite codedatabase product name is ignite cache probably keep that
0
document how to lookup cluster integration coverage cm for rmdynamic rm llama
1
Goals: embed the Launcher workflow in snowdrop.me. The user clicks "Start" (or the primary call to action) on the Snowdrop homepage; the user is presented with the Launcher workflow to create a starting-point application; then proceeds with a use-case driven guide.
0
ruta problem in testing view resolving typesystems in classpath
0
The error message when importing contains unnecessary text if the project is not supported (incorrect error message attached). Expected result: the error message should be specific.
0
Changes being planned are: migrate container repositories to the new container namespace on dist-git and drop the 'docker' suffix from the repository name; update Brew package names and container targets to remove 'docker' and replace it with 'container'; update the Dockerfile with the below changes for each container: the ENV value from container-docker to container-oci, and the com.redhat.component label should be updated to use the new package names; any other labels that reference the word 'docker' may need to be changed.
0
Side note: doesn't affect trunk since we use Johnzon.
1
The valueChangeListener is being called before the setters, even with immediate=true. This is not the right behavior, since it overwrites any property modified in the event handler. public class ProductBean { private Long infoId; private String description; private String usage; /* setters and getters for the above properties */ public void valueChangedHandler(ValueChangeEvent event) { Long infoId = (Long) event.getNewValue(); if (infoId != null) { this.infoId = infoId; // dataService and ProductInfo are related to Hibernate ProductInfo info = dataService.getProductInfo(infoId); this.description = info.getDescription(); this.usage = info.getUsage(); } } } The description and usage properties can never be changed, since they get overwritten with the initial values.
1
when client calls we need following additional information queues defined hierarchy also if possible
0
Currently, when the master splits logs it outputs all edits it finds, even those that have already been obsoleted by flushes; at replay time on the RS we discard the edits that have already been flushed. We could do a pretty simple optimization here: basically the RS should replicate a map (region id to last flushed seq id) into ZooKeeper; this can be asynchronous by some seconds without any problems. Then, when doing log splitting, if we have this map available we can discard any edits found in the logs that were already flushed, and thus output a much smaller amount of data.
0
upgrading jackrabbit from to has created an ldap exception the configuration file which has not changed except for the adding the new simplesecuritymanager as required is the default with the following substituted for the loginmodule this configuration worked correctly and i was able to authenticate properly with jackrabbit same configuration with throws the following exceptionjavaxjcrloginexception comsunsecurityauthmoduleldaploginmodule does not support userprovider comsunsecurityauthmoduleldaploginmodule does not support userprovider comsunsecurityauthmoduleldaploginmodule does not support userprovider at at at at at at at at at at at at at at at at at at at at at try initialcontext ctx new initialcontext repository repository ctxlookupjcrrepository session repositorylogincredentials catch exception e at at method at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at by javaxsecurityauthloginloginexception comsunsecurityauthmoduleldaploginmodule does not support userprovider at at morejavaxsecurityauthloginloginexception comsunsecurityauthmoduleldaploginmodule does not support userprovider at at at at at at at at at at at at at at at at at at at at at at at at method at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at failed to obtaincreate connection from connection pool reason failed to create session comsunsecurityauthmoduleldaploginmodule does not support userprovider comsunsecurityauthmoduleldaploginmodule does not support userprovider
1
As an application developer, I want an easy way to deploy and test my app by only having a single orderer process. This refers to the solo ordering process. This will be a simple-to-deploy, minimal-code ordering process. Because there is only one process, no consensus need be reached between multiple processes, so this is largely as simple as having multiple clients write into a buffered channel with a single thread reading it and creating batches/blocks. The deliver side is only slightly more complicated: to allow for disconnection and reconnection, the orderer service must allow history to be discovered. This requires implementing a simple in-RAM ledger and allowing individual clients to seek within it.
1
There appear to be several problems with this class. If setServlet is not invoked, then a NullPointerException will be thrown within the handleRequest method on the call to retrieve tempDir, specifically this line: // attempt to retrieve the servlet container's temporary directory: ServletContext context = servlet.getServletConfig().getServletContext(); There is no servlet since we didn't call setServlet, so servlet.getServletConfig() returns null, and then calling getServletContext() on a null results in the NullPointerException. The above line should be included in the try/catch block; then this class would work as advertised. Another suggestion would be to include a constructor that would take the ActionServlet; this is clearer than just having a set method within the class.
0
steps to create a project using the adgissuemxml launch it actual results there is not no label displayed in the adg expected results it should have label launch the adgokmxml file to see the expected result workaround if any
0
Define GitHub Actions to check for performance regressions per commit.
0
selectable qtablewidgetitems are not drawn correctly when scrolling the table widget the attached example shows this the items from to are initially unchecked and items from to are checked when scrolling the table some of the unchecked items become checked and vice versa this is only reproducible with windowsvista style with for example windowsxp style the items are drawn correctlythis worked correctly with qt
1
str run jira click button go to ‘deadline’ field set any incorrect value for example ‘’ click button warning is shown you did not enter a valid date please enter the date in the format “dmmmyy” eg “” – ok set ‘’ click button b set ‘’ click button ar a there is no any warning message about incorrect data set into the field incorrect format “dmmmy” instead of “dmmmyy” b there is no any warning message about incorrect data set into the field incorrect format “dmmmyyyy” instead of “dmmmyy” zero value ‘’ is set for year definition er a b it should be prohibited to set incorrect value into ‘deadline’ field
0
The configure script provided for building the Crowd Apache connector has hardcoded file paths for checking for the Apache installation and modules directories. As Apache is not always installed in these locations, the build often fails, thereby requiring symbolic links to be created to enable the configuration to complete successfully. Would it please be possible to add two command line parameters to the configure script to enable custom paths to be used without manually having to manage symbolic links on the filesystem? Regards, Stuart
0
if we provide deb and rpm packages for c arrow users can install it easily at least im happy as an useris there any location to provide deb and rpm packages if it doesnt exist how about using with open source plan we can find open source plan by clicking looking for free or opensource plans at
0
We are not receiving any emails from Bitbucket as of yesterday. I noticed yesterday but thought it was a fluke, so today, while waiting for a merge, I asked about its status, and that person advised me it was already merged and that they did not receive an email saying it was assigned to them. My coworkers are not receiving emails either.
0
If you open one of the older versions of a project, rename it, and make a copy of a BP, you are not able to open the new BP (see attachment and steps to reproduce in it). In an earlier version it works as expected.
0
after patch is applied the admin script is not updated with patched lib references for the followingcode
0
The installation docs on clustering Business Central fail to mention that we only test this setup on RHEL and that such installations on Windows are not supported. Please add a note highlighting this limitation to at least the latest product doc, or if possible to all docs since the release where this feature was moved from tech preview to full support. Thanks. Reference:
1
The iOS app crashes when uploading multiple files, especially those captured with the iOS camera. I created the Cordova project using ReactJS and Redux, with Axios for server calls. The app crashes when there is too much to send to the server side; for example, the app crashed when I uploaded more than a certain number of photos at a time, but has no issue when uploading fewer than that. In Xcode it shows there is memory pressure and the app gets crashed, so I don't know how to debug this. I tried to cut down the memory leak by reducing the loops, but still no hope. Has anyone had the same issue, or do you know how to solve it?
1
FindBugs should run as part of the normal build process for the Sightly scripting engine.
0
according to this issue should have been fixed but according to the logs and the effects weve observed failed builds the solution doesnt work reliable let me tell a story a particular agent is idle noformat info agent ready to take build from queue noformat a plugin is installed on the server noformat info tue jan utc anonymous installed addon autorestart comatlassianbamboopluginsautorestart version noformat the update is propagated to the agent noformat info rebroadcasting comatlassianbambooplugineventsdisableremotepluginevent info stop needed polling service stopped info rebroadcasting comatlassianbambooplugineventsupgraderemotepluginevent info syncing with info files already available in info syncing with info files already available in info mb transferred info mb transferred info mb transferred info removing from agent classpath info creating info mb transferred info mb transferred info mb transferred info found new plugins info rebroadcasting comatlassianbambooplugineventsenableremotepluginmoduleevent info rebroadcasting comatlassianbambooplugineventsenableremotepluginevent info enabling plugin comatlassianbamboopluginsautorestart info stop file set as optbambooagentbambooagentrestart will stop agent if file is found info stop needed polling service started noformat odd the log says it is removing the plugin and yet it is enabled anyway time flies by and another plugin is installed on the server noformat info tue jan utc anonymous installed addon brokerurldiscovery comatlassianbamboobuildengbrokertaskbrokerurldiscovery version noformat and the update is propagated to the agent noformat info rebroadcasting comatlassianbambooplugineventsdisableremotepluginevent info rebroadcasting comatlassianbambooplugineventsupgraderemotepluginevent info syncing with info files already available in info syncing with info files already available in info removing from agent classpath info creating info stop needed polling service stopped info stop file set as optbambooagentbambooagentrestart will stop agent if file is found info stop needed polling service started info found new plugins info rebroadcasting comatlassianbambooplugineventsenableremotepluginmoduleevent info rebroadcasting comatlassianbambooplugineventsenableremotepluginmoduleevent info rebroadcasting comatlassianbambooplugineventsenableremotepluginmoduleevent info rebroadcasting comatlassianbambooplugineventsenableremotepluginmoduleevent info rebroadcasting comatlassianbambooplugineventsenableremotepluginmoduleevent info rebroadcasting comatlassianbambooplugineventsenableremotepluginmoduleevent info rebroadcasting comatlassianbambooplugineventsenableremotepluginmoduleevent noformat now the autorestart plugin is installed but apparently not enabled and even worse the brokerurldiscovery plugin is gone for whatever reason the agent starts a build which is actually requiring the brokerurldiscovery and sadly it fails noformat info jira master tier ci functional tests func taken from queue error could not execute task no plugin with key comatlassianbamboobuildengbrokertaskbrokerurldiscoverybrokerkey is installed info build jira master tier ci functional tests func completed on bamboo agent sending results to server info agent ready to take build from queue noformat and another one noformat info jira master tier ci webdriver webdriver batch taken from queue error could not execute task no plugin with key comatlassianbamboobuildengbrokertaskbrokerurldiscoverybrokerkey is installed info agent ready to take build from queue noformat after minutes the 
brokerurldiscorvery plugin is enabled there is no message in the log when it was installed noformat info rebroadcasting comatlassianbambooplugineventsenableremotepluginmoduleevent info enabling plugin comatlassianbamboobuildengbrokertaskbrokerurldiscovery noformat
1
Currently the Sentry HAContext tries to use the principal and keytab from the sentry.service.server.principal and sentry.service.server.keytab properties. These are set in the Sentry service but not in clients, especially the server keytab. This causes problems for Sentry clients working with Sentry HA using secure ZK. The typical Sentry clients are downstream services like Hive and Impala, which have their own principals and keytabs. We should support additional config properties for the Sentry client to specify its own principal and keytab to use with secure ZK. Note that unlike the Sentry thrift client, we can reuse the UGI to wrap the connection calls, to reuse the login context created in Hive or Impala.
0
once the meaning of the qsgrendererinterfaceopengl enum value is updated to mean openglonrhi tests that disable or skip based on such a check will no longer work until they are ported properly we seem to have one such case the qquickitemlayer autotest was never fully ported
1
After a server restart, attributes that are inherited from a superclass are not included in the representation of an entity when it is retrieved from the repository. It appears to be an issue with how the type system is loaded from the repository at server startup (GraphBackedTypeStore.restore), such that the field mappings for subclasses do not include the attributes from superclasses.
1
reproduce the problem from the asf header missed
0
Our security audits have reported that this plugin has a dependency on a Struts version which has several critical security flaws. Although this is a build-time-only plugin, this still represents a security issue. That version of Struts is also EOL, which is far from ideal. Is there any way to update?
0
Need to duplicate all tests in the core module where a secondary file system is used into the Hadoop module, but instead of using IgfsImpl as secondary we must use the normal Hadoop wrapper. Do not forget to add tests for metrics.
1
when trying to execute time series we get an error error traceback most recent call last file line in testpartexecutor yield file line in run testmethod file usersgreguskagithubprojectsincubatorsdapnexusclientnexusclitestnexusclitestpy line in testtimeseries sparktrue file usersgreguskagithubprojectsincubatorsdapnexusclientnexusclinexusclipy line in timeseries timeseriesdata file usersgreguskagithubprojectsincubatorsdapnexusclientnexusclinexusclipy line in timenparraydatetimeutcfromtimestamptreplacetzinfoutc for t in typeerror an integer is required got type numpystr
1
The validation.xsd from package org.springmodules.validation.bean.conf.loader.xml contains the following errors, reported by Eclipse Ganymede: cannot resolve the name vld:validatorBean to an element declaration component (line ...); cannot resolve the name vld:springCondition to an element declaration component (line ...). I have had a look at the schema and I assume the following mistypes: the validatorBean should be validatorRef (or vice versa), and the springCondition should be conditionRef (or vice versa).
1
See maven-jetty-plugin. I need this feature in order to be able to specify the javax.net.ssl.keyStore and javax.net.ssl.keyStorePassword variables.
0
Created by Alexey Kazakov from Denis Golovin's Crucible comment on CDIProject.java (lines ...): it looks wrong, it rather should be in one synchronized(this) block. synchronized (beansByPath) { beansByPath.clear(); } synchronized (beansByName) { beansByName.clear(); } synchronized (namedBeans) { namedBeans.clear(); } synchronized (alternatives) { alternatives.clear(); } synchronized (decorators) { decorators.clear(); } synchronized (interceptors) { interceptors.clear(); } synchronized (allBeans) { allBeans.clear(); }
0
i have nested mvn child defines an invocation of the following url works both inside child and parent mvn but this url works only inside child when mvn run from parent it can not locate the resourcefiletargetclassesfeaturesfeaturestesterxml
0
fix the bad pattern described in for consistency reasons samigopngthumbnail
1
spark job fails with code error sparkcontext error initializing sparkcontext javalangsecurityexception class javaxservletfilterregistrations signer information does not match signer information of other classes in the same package at at at at at at at at at javasecurityaccesscontrollerdoprivilegednative method at at at at at at at at at at at at at at at at at at at at at at at method at at at at code is fetching we need to exclude this here is dependency tree for sharelibspark code omitted for duplicate omitted for duplicate omitted for duplicate omitted for duplicate omitted for duplicate version managed from omitted for duplicate version managed from omitted for duplicate omitted for duplicate version managed from omitted for duplicate version managed from omitted for duplicate version managed from omitted for duplicate version managed from omitted for duplicate version managed from omitted for duplicate omitted for duplicate version managed from omitted for duplicate omitted for conflict with omitted for conflict with version managed from omitted for duplicate version managed from omitted for conflict with version managed from code to reproduce build oozie with profile and deploy run the spark example with yarnmaster mode which comes in oozie examples
1
zeppelin version elasticsearch version trying to use elasticsearch in zeppelin notebook index get or delete command successful but search and count command results with error insert a document in elasticsearch with index command then try to search notebook command search query matchall result message error jsonobject is not a long same error for count command postman searchcount request successful screenshot in attach
1
using a standard filestore to persist sessions codexml codepreload persistent sessionspassivation passivate all of them not only evictedpurge survive restartsresults in unable to unmarshall info deploymentscannerthreads deployed info new session created error execution error orginfinispancacheexception unable to unmarshall value at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at caused by javalangclassnotfoundexception orgjbossasclusteringwebinfinispandistributedcachemanagerfactorysessionkeyimpl from at at at at at at at method at at at at at at at at at at at at at error error while processing preparecommand orginfinispancacheexception unable to unmarshall value at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at caused by javalangclassnotfoundexception orgjbossasclusteringwebinfinispandistributedcachemanagerfactorysessionkeyimpl from at at at at at at at method at at at at at at at at at at at at at error aftercompletion failed for synchronizationadapterlocaltransactionlocaltransactionremotelockednodesnull ismarkedforrollbackfalse transactiondummytransactionxiddummyxid globaltransactionid array branchqualifier array lockedkeysnull backupkeylocksnull orginfinispancacheexception could not commit at at at at at at at at at at at at at at at at at at at at at at caused by javaxtransactionxaxaexception at at at morenoformat
1
The Spring Data MongoDB reference documentation, in the section on lifecycle events, refers to deprecated methods inherited from AbstractMongoEventListener: public void onBeforeConvert(Person p); public void onBeforeSave(Person p, DBObject dbo). It should instead refer to the new methods available since the relevant release: public void onBeforeConvert(BeforeConvertEvent event); public void onBeforeSave(BeforeSaveEvent event).
0
ci has been intermittently skipping test stages since landing of
1
running with the wsdl below generates classes which dont compile the problem is that countrycodetype is newly inherited didnt do that from countrycoded which doesnt define a nonargument default constructor but countrycodetype defines onemy wsdl xsschema xmlnstns targetnamespace xmlnsxsi soapbinding styledocument transport soapaddress location
1
The method argument resolver fails with org.springframework.web.multipart.MultipartException ("the current request is not a multipart request") when an optional MultipartFile argument is present in the request handler method during a plain POST request with no files included. I would expect that a multipart request is enforced when files are present, but not necessary when there are no files and the file argument is marked as optional. @RequestMapping(method = RequestMethod.POST) @ResponseStatus(HttpStatus.OK) public ResponseEntity upload(@CurrentUser User currentUser, @RequestParam(value = "file", required = false) MultipartFile file, @RequestParam("entity") @Valid DocumentApiModel entity) throws IOException, BaseRestException. It seems that this can be trivially fixed there by checking the optional attribute and the provided value.
0
In the Solritas browse GUI, if facets contain very long strings (such as content_type facets tend to do currently), the too-long text runs over the main column and it is not pretty. Perhaps inserting a soft hyphen (&shy;) at position n in very long terms is a solution.
0
please move the link to alternateolder versions back to a more prominent location id suggest putting it under resources or move the link to floatright on the same line as the latest version xx release notes ideally with some icon or other background highlighting to make it more prominent the most recent update to marketplace which changed the layout of versions has now made it extremely difficult to find the link to additionalolder versions of the addon see the attached screenshot for how little prominence the link to the additional versions page now has when customers visually search the full page customers who are on a different version track eg using an older version of confluence for which the latest versions are not compatible might not even think to look in the release notes for the latest version we also do patches to old versions to backport security fixes etc the customers who need access to those older versions now have a terribly hard time finding them even though those releases are just as current and relevant as our tipoftree release
0
when reviewing my jenkins installation i found out that tmp is filled by jenkins with the following files in linux and windowsrwrr jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul jenkins nogroup jul pattern refsorter i found in lucenes source code so it must comefrom tests why are they not cleaned up and why do we need those files would a ramdirectory not be enough for thisthis is serious as the files are never cleaned up they stay alive when the test passes so its not caused by the always failing solr suggester teststhere are also other filenames with sorted and similar at endthe slave was taken automatically offline after its rambased tmp gb was filling in also cleaned up lots of files
1
using the latest from nuget v i get the following errorsystemindexoutofrangeexception index was outside the bounds of the array at lucenenetsearchtermscorerscore in dlucenenetfullrepotrunksrccoresearchtermscorercsline at doc in dlucenenetfullrepotrunksrccoresearchbooleanscorercsline at lucenenetsearchtermscorerscorecollector c end firstdocid in dlucenenetfullrepotrunksrccoresearchtermscorercsline at lucenenetsearchbooleanscorerscorecollector collector max firstdocid in dlucenenetfullrepotrunksrccoresearchbooleanscorercsline at lucenenetsearchbooleanscorerscorecollector collector in dlucenenetfullrepotrunksrccoresearchbooleanscorercsline at lucenenetsearchindexsearchersearchweight weight filter filter collector collector in dlucenenetfullrepotrunksrccoresearchindexsearchercsline at lucenenetsearchindexsearchersearchweight weight filter filter ndocs in dlucenenetfullrepotrunksrccoresearchindexsearchercsline at lucenenetsearchsearchersearchquery query filter filter n in dlucenenetfullrepotrunksrccoresearchsearchercsline at lucenenetsearchsearchersearchquery query n in dlucenenetfullrepotrunksrccoresearchsearchercsline at sender doworkeventargs e at systemcomponentmodelbackgroundworkerworkerthreadstartobject argument
0
Propagate the limit context generated from GlobalLimitOptimizer to storage handlers.
0
hello when having both php and internal styles with media query in one file highlighter goes a bit crazy please kindly try the code below php somethings wrong here with highlighter media only screen and minwidth
0
Terminal recording is done using typescript. This has some limitations: recording editors such as vim or nano is not possible in an elegant way (at least); playing typescripts like a video with forward/backward capability is not possible, or at least not easy; and playing terminal recordings in a colorful way is not possible. Adding terminal recording using another library, such as asciinema, would be really nice and useful.
0
for please see to get details on out of box simpleaclauthorizer implementation
1
cordova plugin add will cause the xcode project to use the wrong source code path of the plugin for the file plugin example the cdvlocalfilesystemm is pointed to xxxprojectpluginsorgapachecordovafilecdvlocalfilesystemm however it is installed in the xxxprojectpluginsorgapachecordovafilesrcios
1
unless we are able to do the tabs need to also scroll with the content to be consistent with the others for the values list can we use a layout leave left column mainly as is but remove previous and no value set label add a right column with the heading previous values for those that have been modified repeat same stacked labelvalue or no value set pairing for those not modified show nothing this is the current behavior i think corresponding currentprevious values displayed in each column need to remain horizontally aligned with one another so users can quickly compare them side by sidefor unset class the only show modified checkbox place the checkbox first before the text change the text to show modified attributes onlyfor modifiedattributetogglelabel​
0
increase code test coverage through adding additional unit tests where possible
0
in qtcreatorwhen i deselect it cant launch my program when i click runbuttonuse jomit can create the debug and release directorythe program will create in this pathbut use nmakethe program will create in the same directory with source filesowhen i click the runbuttonit cant lanuch the programbecause it does not exist in the debug or release directoryhow to to solve the problemthanks
1
there is description for serializer in spark implementations of this trait should implement a zeroarg constructor or a constructor that accepts a as parameter if both constructors are defined the latter takes precedence java serialization interfaceclass gryoserializer in tinkerepop extends serializer but does not implement javaioserializable it works well before spark but with spark it changed by for dependencyscala gyro and all its fields must implement java serialisation interface otherwise hundreds of test cases are failed ascaused by orgapachesparksparkexception job aborted due to stage failure task not serializable javaionotserializableexception orgapachetinkerpopgremlinsparkstructureiogryogryoserializerserialization stack object not serializable class orgapachetinkerpopgremlinsparkstructureiogryogryoserializer value field class orgapachesparkshuffledependency name serializer type class orgapachesparkserializerserializer object class orgapachesparkshuffledependency field class name type class javalangobject object class mappartitionsrdd at maptopair at
0
problemjavalangexception javalangoutofmemoryerror unable to create new native thread at by javalangoutofmemoryerror unable to create new native thread at method at at at at at at at at at at at at at at at at at at at server use i began thought it was not specify enough memory i passed the test of java version so i known my server can use the max memory is so i add one line config to the script of binnutch but its not solve the problemthen i check the source code to see where to produce so many threads i find the codecodejava parseresult new parseutilgetconfparsecontent codewhich in line of the java source file orgapachenutchparseparsesegmentjavas map methodcontinue in the constructor of parseutil instantiate a cachedthreadpool object which no limit of the pool size see the codecodejavaexecutorservice executorsnewcachedthreadpoolnew threadfactorybuilder setnameformatparsedsetdaemontruebuildcodethrough the above analyse i know each map methods output will instantiate a cachedthreadpool and not to close it so executorservice field in parseutiljava not be right use and cause memory leaksolutioneach map method use a shared fixedthreadpool object whichs size can be config in nutchsitexml more detail see the patch file
1
Error on startup: "A Java Runtime Environment (JRE) or Java Development Kit (JDK) must be available in order to run STS. No Java virtual machine was found after searching the following locations: jdkpath/bin/java". It works if the following lines are removed from the STS.ini file: -vm jdkpath/bin/java. I am using the installer-based version.
0
this bug was imported from another system and requires review from a project committer before some of the details can be marked public for more information about historical bugs please read why are some bugs missing informationyou can request a review of this bug report by sending an email to please be sure to include the bug number in your request
1
Now that we have changed the way we refer to our legacy components (mx instead of halo), their namespace is incorrect. It currently is library://ns.adobe.com/flex/halo, which refers to the halo theme. Instead it should be library://ns.adobe.com/flex/mx, since we call them mx components. We could mitigate the impact of this change by continuing to silently support the halo namespace, but suppress it from code hints and generate mx references in the tool.
0
It's always visible and always disabled.
1
i just spotted that the orderportletdataxml seed data have been wrongly removed with for
0
change eaproot to resolve correctly path with eap installation like in runshwith current implementation of there is quite issue with relative path when you execute it from your home directory you would change all files in system because you call use dirnamedirname to identify path to directoryjbosshome property must be evaluated too to be compatible with other sh scripts
1
i observed this test failing very rarely on travis testzookeeperreelectionorgapacheflinkruntimeleaderelectionzookeeperleaderelectiontest time elapsed sec failurejavalangassertionerror null at at at at at failed tests null
1
I found a problem in using the Tests & Quizzes tool in Sakai: I can not retract a published assessment by configuring the delivery dates in the Settings page. It seems that the date picker doesn't work; the corresponding fields of due date and retract date are also empty. I think there must be code errors. Can anybody help me?
0
Such a call is segfaulting: toTimeZone(QTimeZone()). Indeed, looking at the code, it is making the date time invalid in setTimeZone, but it is calling fromMSecsSinceEpoch anyway, where, at the line in question, it is accessing internals of the m_timezone object without checking that they exist.
1
theres message causing the saving paragraphlistenerimplafterstatuschange
0
error displayed error parsing value legal round unparseable number legal round check columns data type consistencywhen attempting to import csv file ti data provider sample attached
1
regression from telugu true type fonts are not rendered properly in
1