text_clean: string (3 to 505k characters)
label: int64 (0 or 1)
I created an epic and three user stories, added three tasks to one of the stories, and assigned all stories to the epic. When I open the epic view, the estimated time is not summed up correctly (see the attached screenshot). Is this a bug?
0
After grading gradebook items, I went back to add a comment and saved the rubric. The score in the rubric was correct, but the gradebook updated to the score of the last viewed rubric. I believe this may be connected to a related issue.
1
In the code below, days are cast to double and then an error is thrown.
{code:sql}
CREATE TEMP VIEW vwindow AS
SELECT i, MIN(i) OVER (ORDER BY i RANGE BETWEEN ... DAY PRECEDING AND ... DAYS FOLLOWING) AS mini
FROM range(now(), ... days, ... hour) i
{code}
Error:
{code}
cannot resolve '(current_timestamp + days) AS double' due to data type mismatch: differing types in '(current_timestamp + days) AS double' (timestamp and double)
{code}
0
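The entry above describes Spark SQL rejecting a day-based RANGE frame over a timestamp ORDER BY column because the frame boundary is cast to double. Below is a minimal, hedged reproduction sketch in Scala: the literal boundary values, the view-less query shape, and the explode(sequence(...)) row generator are assumptions standing in for the stripped details of the original query; on an affected Spark version the analysis of this query is expected to fail with the same data type mismatch.
{code:scala}
import org.apache.spark.sql.SparkSession

object RangeFrameRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("range-frame-repro")
      .getOrCreate()

    // Generate one row per hour over a day of timestamps (a stand-in for the
    // report's row generator), then apply the day-based RANGE frame.
    // On affected versions, spark.sql() is where the reported error surfaces:
    // "cannot resolve ... due to data type mismatch ... (timestamp and double)".
    val result = spark.sql(
      """SELECT i,
        |       MIN(i) OVER (ORDER BY i
        |         RANGE BETWEEN 1 DAY PRECEDING AND 3 DAYS FOLLOWING) AS mini
        |FROM (SELECT explode(sequence(current_timestamp(),
        |                              current_timestamp() + INTERVAL 1 DAY,
        |                              INTERVAL 1 HOUR)) AS i)
        |""".stripMargin)
    result.show(truncate = false)
  }
}
{code}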
While experimenting on the nightly server, I noticed that a new site I created as unpublished was not appearing at all in My Workspace > Worksite Setup. It was appearing as a tab up top (since I called the site "a test site", it was alphabetically available). However, it should also be listed in the Worksite Setup area, since an admin should be able to view all sites, not just published sites, just as a faculty member can see his/her published and unpublished sites if they have the site owner/maintain/instructor role. I opened a new browser and could reproduce the behavior. I logged out and logged back in, and it still happened. I created a new site, also unpublished, and it wasn't listed either. I created another, this one published, and it did appear in the list. Please see the screenshot for the visual.
1
The following users are no longer employed by Facebook and should be removed from the com.facebook domains: michalgr, caithagoras. Thanks, Chris Lüer.
0
Hi, as you can see in my public CircleCI builds for outlook-message-parser and Simple Java Mail, oss.sonatype.org is failing to accept new deploys successfully again. I'm getting timeouts yet again.
1
Based on the production guide, I've set up a single manager node and gateway nodes; all of these nodes are tied into an external Keycloak system. I've configured the manager and gateway nodes to use an external Elasticsearch system as well, and I've pointed the manager node at the gateway nodes. The Test Gateway button indicates success on both gateway nodes. However, I cannot publish a public API to either one of the gateway nodes: the UI shows the "Why can't I publish?" message, which indicates I need to select an API gateway. I've tried this same setup with a single gateway and have the same problem. I do not see any errors in the logs of the gateways or manager.
1
We have used cheatsheets in project examples and suggested them for use in quickstarts, but because they only open when using the project example wizard and Central imports, it is not as easy to get them shown to users as we would like. The suggestion is to make cheatsheets something that gets activated more easily, for example: when importing a project with a cheatsheet.xml file, make it show up; be able to double-click a cheatsheet.xml file to have it open in the cheatsheet viewer (as opposed to always the cheatsheet editor); and so forth. This JIRA will have a few subtasks for specific concerns.
1
The active issue menu says "no active issue". I select an issue; a dialog pops up that says the issue has changed status (assigned to: me, status: open), "do you want to deactivate?", and that's when I get the error. Here's what IntelliJ generated: error during dispatching of java.awt.event.InvocationEvent, "wrong number of arguments", java.lang.IllegalArgumentException: wrong number of arguments, followed by a stack trace (frames stripped in this text) that includes java.security.AccessController.doPrivileged(Native Method).
1
Generating the dependency management report fails: unable to create a Maven project from the repository, org.apache.maven.project.ProjectBuildingException: some problems were encountered while processing the POMs, unknown packaging "bundle" (line/column stripped). Seems caused by the org.xerial.snappy:snappy-java dependency introduced by an earlier change.
1
There is missing information on how to configure cluster and HA on Azure. We should provide links to the Configuring Messaging guide on how to configure cluster and HA with a replicated journal; this should be in a new chapter. It should also be pointed out that the cluster-connection in the messaging-activemq subsystem must use only the JGroups UDP stack with the AZURE_PING protocol configured, as described in the referenced chapter.
0
Use the attached repeat.sh to run ObserverTest repeatedly by doing src/repeat.sh ObserverTest. The test will eventually fail after a few iterations (it should take only a few minutes). The line that fails in the test is: zk = new ZooKeeper(... + clientPortObs, ClientBase.CONNECTION_TIMEOUT, this). Attached as out.txt is the output showing a successful run for comparison, followed by a failed run. Note that in the seconds before the test fails there is a multi-second gap in time between the INFO "client attempting to establish new session" and the INFO "shutting down" / "shutdown called" (java.lang.Exception: shutdown Leader, reason: only followers need ...).
1
The commit caused ATO schema compilation time to jump from seconds to minutes; large increases were also seen in VMF schema compilation. Some debugging showed that the cause is likely in the DFDLPathExpressionCompiler. Wrapping the following code, which compiles an individual expression, in a timer shows a large average increase in time to compile:
{code:scala}
val compiler = new DFDLPathExpressionParser(qn, nodeInfoKind, namespaces, compileInfoWherePropertyWasLocated, isEvaluatedAbove)
val compiledDPath = compiler.compile(expr)
compiledDPath
{code}
So this most likely has something to do with schema compilation. Nothing jumps out at me in the specified commit as being especially egregious enough to cause such a performance degradation: all that really changed was passing an extra parameter for error accumulation and changing how isReferencedByExpressions is set, and I wouldn't expect that to cause such performance changes. Perhaps it is findNamedMatches now allocating a Seq.
0
We're working on serialization mechanisms for the process definition, and to have them work we need to be able to recreate the inner classes of ForEachNode and CompositeNode, to allow serialization/deserialization of the object to be possible from outside the class.
0
Since wagon-ssh-common-test doesn't compile anymore, the compiler plugin classpath is missing a good number of dependencies and doesn't work correctly.
1
i am trying to start two profiles using first one starts fine but the second has following errorcodecontainer development environment starting profile check if deprecated options are used use of hypervvirtualswitch has been deprecated please use minishift config set hypervvirtualswitch minishiftvsfail checking if is reachable ok checking if requested openshift version is valid ok checking if requested openshift version is supported ok checking if requested hypervisor hyperv is supported on this platform ok checking if powershell is available ok checking if hyperv driver is installed ok checking if hyperv driver is configured to use a virtual switch default switch ok checking if user is a member of the hyperv administrators group ok checking the iso url ok checking if provided oc flags are supported ok starting the openshift cluster using hyperv hypervisor minishift vm will be configured with memory gb vcpus disk size gb starting minishift vm fail error starting the vm error creating the vm error creating machine error in driver during machine creation exit status retryingerror starting the vm error creating the vm error creating machine error in driver during machine creation exit status
0
summary specs fails to handle docker job requirement environment bamboo steps to reproduce create a plan with a job create a requirement for that job choosing docker export the plan as specs the code will look like this for the section we are interested in noformat jobsnew jobdefault job new tasksnew scripttask inlinebodyecho hii requirementsnew requirementsystemdockerexecutable noformat import the plan expected results the plan as before with the docker requirement present actual results the requirement is not present notes no errors in logs changing noformat requirementsnew requirementsystemdockerexecutable noformat with noformat requirementsnew requirementdocker noformat will make the requirement present
0
Given a property of a fact, report the rules which use the property in the LHS/RHS. If we change a property's name on a fact class, which rules need to be fixed? This is not a relationship between rules, but a listing of rules where the property is used.
0
We are facing a problem and we think it is because of Axis/Tomcat. Our Axis is integrated with Tomcat; the maxThreads value in Tomcat's server.xml is set, and memory for the JVM has been set. At one particular time (we can't simulate this; it happens suddenly) the CPU utilization by this Tomcat process shoots up (normally it is low) and it never comes down unless we bounce the Tomcat server. When this happens we get very bad response times, even though our web service application completes the request quickly. Once control moves out of our application we don't know what happens; only Axis serialization and sending the response to the client are the remaining two events. We ruled out network delay, as other instances of Tomcat on the same node are running fine. Is there any way to know why the response is delivered so late? Enabling Axis logging is very costly for us. Any other suggestions? Any help in this would be highly appreciated.
1
see screenshot not sure if that is a problem only on
1
working on publishing the code from the kosherjava zmanim project to the central repository
0
this issue can be used for updates to user manual javadoc localizers to correct things like spelling or editorial changes
0
The authentication process that creates temporary files in /tmp/auth is not deleting them when a JMX client connects. At an extreme this would be a DoS attack, as the disk could fill up.
0
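A minimal Scala sketch of the cleanup the entry above asks for; the /tmp/auth layout and file naming are hypothetical stand-ins, and the point is only that the per-connection auth temp file gets deleted once the JMX authentication step is done rather than accumulating until the disk fills.
{code:scala}
import java.nio.file.{Files, Paths}

object AuthTempFileCleanup {
  def main(args: Array[String]): Unit = {
    val dir = Paths.get("/tmp/auth")        // hypothetical auth temp directory
    Files.createDirectories(dir)
    val authFile = Files.createTempFile(dir, "jmx-auth-", ".tmp")
    try {
      // ... hand authFile to the authentication step for this JMX connection ...
    } finally {
      // delete the file even if authentication fails, so /tmp/auth cannot fill up
      Files.deleteIfExists(authFile)
    }
  }
}
{code}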
Since the new Eclipse release is out, upgrade Eclipse in our dependencies to it.
0
This will make the DFSClient take advantage of the annotation; this JIRA is to identify which methods can be marked as idempotent.
0
cftyperef returned by ioregistryentrysearchcfproperty sometimes may be of unexpected type that leads to exception andor crash with some usb devices when product id is parsed in the internal loop of qserialportinfoavailableports in qserialportinfomaccpp console output is attached mainthreadlog the exception when the availableports function is called from the main thread qthreadcrashlog the crash when the availableports function is called from qthread it occurred by checking type with cfgettypeid that returned type of cftyperef corresponds to cfstring instead of cfnumber for kusbproductid during the first internal loop run cfnumberref is returned only during the second loop run so cfstringref is used as an argument for cfnumbergetvalue instead of cfnumberref on os x versions before the wrong type was handled by cfnumbergetvalue the return value was false so on the hasproductidentifier field was false and on the product id was correct starting from os x cfnumbergetvalue throws the exception in that case and there is no a workaround the simplest fix is to check the type in searchshortintproperty before calling cfnumbergetvalue cfgettypeidresultas cfnumbergettypeid
0
When using Gradle and the Android Gradle plugin, the generated APK is placed in a different path, and when the process tries to install it using adb it fails (old path vs. new path). For now, the only solution I have found is to manually copy the APK to the old location, but this is not acceptable.
1
execute create struts execute switch to web projects view execute expand the created project and show modules configuration dialog by popup menu on configuration node then press execute show modules configuration dialog by popup menu on configuration node assert there is no error message attribute web root for module must be set see
0
is jws compatible with windows server
0
i have a question regarding openness of contains readmetxt with following quotethe gradle enterprise maven extension is not open source and thus does not ship with sources quote is that artifact waved from distributing sources and still be eligible for hosting on ossrh
0
We are using a table to condense information at the top of a wiki page. We hoped to use anchor links to allow users to navigate to more details if necessary. Note: unlike existing open related issues about anchors, our page title does not contain any special characters (colon, etc.). Steps to reproduce: create a table with text; create text content below the table; create an anchor in the text content; highlight any text in a table cell and select Link > Advanced; try to reference the anchor, e.g. #sampleanchor; observe the error "The markup provided is not valid link markup".
0
As the title indicates, long-running test applications with injected network outages seem to hit TaskCorruptedException more than expected: seen occasionally on the ALOS application (several times in two days in one case, for example), and very frequently with EOS (many times per day).
1
The DN is found with no response to the JMX request, and further investigation shows this DN failed to join the new pipeline at the same time (the jstack is attached). It seems that the DN is waiting for the RaftServerProxy impl map lock to add the new group, while that lock is held by another thread removing the old group; that thread in turn wants the read lock of its segment lock, and the read lock is held by another WriteChunk thread which is waiting on the stateMachineDataCache quota. See the attached jstack.
1
There are several issues with translating deprecated configs on set. The most serious one is that if the user has both the deprecated and the latest version of the same config set, then the value picked up by SparkConf will be arbitrary. Why? Because during initialization of the conf we call conf.set on each property in sys.props, in an order arbitrarily defined by Java; as a result the value of the more recent config may be overridden by that of the deprecated one. Instead we should always use the value of the most recent. If we translate on set, then we must keep translating everywhere else. In fact, the current code does not translate on remove, which means the following won't work if x is deprecated:
{code}
conf.set(x, y)
conf.remove(x)
// x is not in the conf
{code}
This requires us to also translate in remove and other places, as we already do for contains, leading to more duplication. Since we call conf.set on all configs when initializing the conf, we print all deprecation warnings in the beginning; elsewhere in Spark, however, we warn the user when the deprecated config option / env var is actually being used. We should keep this consistent so the user won't expect to find all deprecation messages at the beginning of his logs.
1
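A minimal sketch of the translate-on-set-and-remove behaviour the entry above argues for. The key names and the helper object are hypothetical and this is not SparkConf's actual implementation; it only illustrates that the newer key always wins regardless of iteration order, and that removing the deprecated key also clears the value stored under the new name.
{code:scala}
import scala.collection.mutable

object DeprecatedConf {
  // assumed mapping: deprecated key -> current key
  private val deprecated = Map("spark.old.key" -> "spark.new.key")
  private val settings = mutable.Map[String, String]()

  private def translate(key: String): String = deprecated.getOrElse(key, key)

  def set(key: String, value: String): Unit = {
    val k = translate(key)
    // only let a deprecated key fill a gap, never overwrite the new key's value
    if (k == key || !settings.contains(k)) settings(k) = value
  }

  def remove(key: String): Unit = settings.remove(translate(key))

  def get(key: String): Option[String] = settings.get(translate(key))
}
{code}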
the following schema element pskeywordchoices references a globally defined element pskeywordchoice in a sequence this will not be resolved during wsdl processing while creating the type description in the produced java object because of that the array serializer produces the wrong xml and of course the array deserializer fails because it did expect xml as defined in the schema if the same schema construct if made with local elements it is processed fine see pshierarchynodeproperties in the schema below the label of the keyword as displayed to end users i will upload the wsld schema so that you are able to reproduce the problem i run with these argumentso dtempwsdl w s s false d request t dtempwsdldesignrhythmyxwsdlthe meta data produced for pskeyword are static typedescsetxmltypenew javaxxmlnamespaceqnameurn pskeyword orgapacheaxisdescriptionattributedesc attrfield new orgapacheaxisdescriptionattributedesc attrfieldsetfieldnamelabel attrfieldsetxmlnamenew javaxxmlnamespaceqname label attrfieldsetxmltypenew javaxxmlnamespaceqname string typedescaddfielddescattrfield attrfield new orgapacheaxisdescriptionattributedesc attrfieldsetfieldnamevalue attrfieldsetxmlnamenew javaxxmlnamespaceqname value attrfieldsetxmltypenew javaxxmlnamespaceqname string typedescaddfielddescattrfield attrfield new orgapacheaxisdescriptionattributedesc attrfieldsetfieldnamekeywordtype attrfieldsetxmlnamenew javaxxmlnamespaceqname keywordtype attrfieldsetxmltypenew javaxxmlnamespaceqname string typedescaddfielddescattrfield attrfield new orgapacheaxisdescriptionattributedesc attrfieldsetfieldnamesequence attrfieldsetxmlnamenew javaxxmlnamespaceqname sequence attrfieldsetxmltypenew javaxxmlnamespaceqname int typedescaddfielddescattrfield orgapacheaxisdescriptionelementdesc elemfield new orgapacheaxisdescriptionelementdesc elemfieldsetfieldnamechoices elemfieldsetxmlnamenew javaxxmlnamespaceqnameurn choices elemfieldsetxmltypenew javaxxmlnamespaceqnameurn pskeywordchoice elemfieldsetnillablefalse typedescaddfielddescelemfield but it should be static typedescsetxmltypenew javaxxmlnamespaceqnameurn pskeyword orgapacheaxisdescriptionattributedesc attrfield new orgapacheaxisdescriptionattributedesc attrfieldsetfieldnamelabel attrfieldsetxmlnamenew javaxxmlnamespaceqname label attrfieldsetxmltypenew javaxxmlnamespaceqname string typedescaddfielddescattrfield attrfield new orgapacheaxisdescriptionattributedesc attrfieldsetfieldnamevalue attrfieldsetxmlnamenew javaxxmlnamespaceqname value attrfieldsetxmltypenew javaxxmlnamespaceqname string typedescaddfielddescattrfield attrfield new orgapacheaxisdescriptionattributedesc attrfieldsetfieldnamekeywordtype attrfieldsetxmlnamenew javaxxmlnamespaceqname keywordtype attrfieldsetxmltypenew javaxxmlnamespaceqname string typedescaddfielddescattrfield attrfield new orgapacheaxisdescriptionattributedesc attrfieldsetfieldnamesequence attrfieldsetxmlnamenew javaxxmlnamespaceqname sequence attrfieldsetxmltypenew javaxxmlnamespaceqname int typedescaddfielddescattrfield orgapacheaxisdescriptionelementdesc elemfield new orgapacheaxisdescriptionelementdesc elemfieldsetfieldnamechoices elemfieldsetxmlnamenew javaxxmlnamespaceqnameurn choices elemfieldsetxmltypenew javaxxmlnamespaceqnameurn pskeywordchoice elemfieldsetnillablefalse elemfieldsetitemqnamenew javaxxmlnamespaceqnameurn pskeywordchoice typedescaddfielddescelemfield i found that the problem is in class orgapacheaxiswsdlsymboltableschemutilsjava public static qname getcollectioncomponentqnamenode node qnameholder itemqname 
booleanholder forelement symboltable symboltable if were going to turn wrapped arrays into types such that becomes just string we need to keep track of the inner element name foo in metadata this flag indicates whether to do so boolean storecomponentqname false if node null return null if itemqname null isxsdnodenode complextype if this complextype is a sequence of exactly one element we will continue processing below using that element and let the type checking logic determine if this is an array or not node sequence schemautilsgetchildbynamenode sequence if sequence null return null nodelist children sequencegetchildnodes node element null for int i i childrengetlength i if childrenitemigetnodetype nodeelementnode if element null element childrenitemi else return null if element null return null ok exactly one element child of continue the processing using that element node element storecomponentqname true try symboltablecreatetypefromrefnode catch ioexception e throw new if the node kind is an element dive to get its type if isxsdnodenode element compare the componentqname with the name of the full name if different return componentqname qname componenttypeqname utilsgettypeqnamenode forelement true if componenttypeqname null qname fullqname utilsgettypeqnamenode forelement false if componenttypeqnameequalsfullqname if storecomponentqname string name utilsgetattributenode name maybe its a reference if name null string ref utilsgetattributenode ref if ref null strip any namespace info it will be added later string parts refsplit name parts if name null check elementformdefault on schema element string def utilsgetscopedattributenode elementformdefault string namespace if def null defequalsqualified namespace utilsgetscopedattributenode targetnamespace itemqnamevalue new qnamenamespace name return componenttypeqname return null
1
If a Hadoop client is run from inside a container (like Tomcat) and the current AccessControlContext has a Subject associated with it that was not created by Hadoop, then UserGroupInformation.getCurrentUser() will throw NoSuchElementException, since it assumes that any Subject will have a Hadoop user principal.
1
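A minimal Scala sketch (calling the Java APIs) of the situation in the entry above: code running inside Subject.doAs with a container-created Subject that carries no Hadoop principal, then asking Hadoop for the current user. The class name and setup are assumptions; per the report, the getCurrentUser call is the point where NoSuchElementException is thrown instead of falling back gracefully.
{code:scala}
import java.security.PrivilegedExceptionAction
import javax.security.auth.Subject

import org.apache.hadoop.security.UserGroupInformation

object ForeignSubjectRepro {
  def main(args: Array[String]): Unit = {
    // a Subject created outside Hadoop, e.g. by the servlet container
    val foreignSubject = new Subject()
    Subject.doAs(foreignSubject, new PrivilegedExceptionAction[Unit] {
      override def run(): Unit = {
        // per the report, this throws NoSuchElementException because the
        // Subject has no Hadoop user principal
        val ugi = UserGroupInformation.getCurrentUser
        println(ugi.getUserName)
      }
    })
  }
}
{code}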
As pointed out in review, in the o.a.h.mapreduce.Job class the following changed between Hadoop releases: failTask(TaskAttemptID) changed its return type from void to boolean; killTask(TaskAttemptID) changed its return type from void to boolean; getTaskCompletionEvents(int) changed its return type from org.apache.hadoop.mapred.TaskCompletionEvent to org.apache.hadoop.mapreduce.TaskCompletionEvent. Using the same rationale as in other JIRAs, we should fix this to ensure source compatibility across releases, taking the intermediate releases as a casualty, as there is no right way for everybody, because we screwed up by not flagging it as an incompatible change.
1
the query generator has discovered a crash i took one of the crashing queries and simplified it down as far as i could go i searched for existing bugs with the same dcheck and didnt strictly see anything but i know there are other bugs right now and this may end up being a dupe noformat check failed usedreservation childreservations reservation vs noformat noformat in googlelogmessagefatal in impalacheckconsistency const this at in impalatransferreservationtoimpalareservationtracker long this other at in impalasavereservationimpalasubreservation long this dst at in impalanextreadpage this at in impalagetnextinternalimpalarowbatch bool stdvector this batch eos flatrows at in impalagetnextinternalimpalarowbatch bool stdvector this batch eos flatrows at in impalagetnextimpalarowbatch bool this batch eos at in impalagetnextoutputbatchimpalaruntimestate impalarowbatch bool this state outputbatch eos at in impalagetnextimpalaruntimestate impalarowbatch bool this state rowbatch eos at in impalaprocesschildbatchesimpalaruntimestate this state at in impalagetnextimpalaruntimestate impalarowbatch bool this state rowbatch eos at in impalaexecinternal this at in impalaexec this at in impalaexecfinstanceimpalafragmentinstancestate this fis at in impalaoperatorvoid const closure at in voidinvokeboostfunctionbuffer functionobjptr at in const this at in impalasupervisethreadstdstring const stdstring const boostfunction impalathreaddebuginfo const impalapromise nameexecfinstance categoryfragmentexecution functor parentthreadinfo threadstarted at in boostvalue boostvalue boostvalue boostvalue operator impalathreaddebuginfo const impalapromise void stdstring const stdstring const boostfunction impalathreaddebuginfo const impalapromise int this f impalathreaddebuginfo const impalapromise a at in boostbindt impalathreaddebuginfo const impalapromise boostvalue boostvalue boostvalue boostvalue operator this at in boostthreaddata impalathreaddebuginfo const impalapromise boostvalue boostvalue boostvalue boostvalue run this at in threadproxy in startthread arg at in clone at noformat simplified failing query noformat select over order by pcontainer over order by pcontainer from tpchpart noformat
1
command mvn execjava fails withcodejavalangreflectinvocationtargetexception at method at at at at at by javalangnoclassdeffounderror orgjbosslogginglogger at at at at morecaused by javalangclassnotfoundexception orgjbosslogginglogger at at at morecodesee the full maven output here
1
observed fit job in ci is failing consistently on latest fabric commits with the below errors fabricca container is crashing due to configuration mismatch in etchyperledgerfabriccaserverfabriccaserverconfigyaml file below is the log file fabric peer commit number fabric ca commit number fabric sdk node commit number codejava hyperledgerfabricca sh c fabricca seconds ago exited seconds ago hyperledgerfabricorderer orderer seconds ago up seconds hyperledgerfabricca sh c fabricca seconds ago exited seconds ago code docker logfile codejava attaching to couchdb  configuration file location etchyperledgerfabriccaserverfabriccaserverconfigyaml   error incorrect format in file etchyperledgerfabriccaserverfabriccaserverconfigyaml errors decoding     dbtlscertfiles source data must be an array or slice got string   usage    code
1
wrap base dataanalysis cli module in expressjs wrapper w key api routes handler stub codetasks create expressjs wrapper create api routes handler stubs code for key jira graphs burndown chart sprint report burndown chart average age report created vs resolved issues
0
we evaluated for days jira in an enterprise version because this is the only posibilty to evaluate after purchasing jira and installing the pro version license we first had problems to get all the tickets into the pro version but it is solved in the meantime now it occurs that all tickets miss or lost their workflow information only new tickets submitted after migration can be resolved or closedcan we restore the worklfow informationplease help we need the workflow info to proceed with our workthxs steffen trenkle
1
weve had a few reports in support of customers not able to reach the plugin repository preupm first reported may to reproduce fire up a version of confluence without upm such as go to confluence admin plugin repository youll see an error message caching failed hit retry youll get a browser error popupcodethe page at sayshtml htmlcodecorresponding stacktrace from warn warn method execution failed referer url pluginsservletpluginrepositorydwrexecrepositorydwrstartcachingdwr username admincomthoughtworksxstreamaliascannotresolveclassexception html html at at at at at at at at at at at at at at at at at at at at at at at at at method at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at warn warn erroring id message referer url pluginsservletpluginrepositorydwrexecrepositorydwrstartcachingdwr username admincoderyan talusan pointed me to the following which may be related but the symptoms are different
1
The Sakai conversion script removes some defaults and is causing some problems; it should just be removed. See the referenced and related issues.
1
hey im unable to restore backup in to collection im getting error like below result responseheader operation restore caused exceptionorgapachesolrcommonsolrexceptionorgapachesolrcommonsolrexception solr cloud with available number of is insufficient for restoring a collection with shards total replicas per shard and maxshardspernode consider increasing maxshardspernode value or number of available nodes exception msgsolr cloud with available number of is insufficient for restoring a collection with shards total replicas per shard and maxshardspernode consider increasing maxshardspernode value or number of available nodes status statefailed msgfound in failed tasks
1
a few test runs have seen this failure in testmtdopscannodetestmtdopscannode noformat in testmtdopscannode assert bytesread not in resultruntimeprofile resultruntimeprofilenoformat this test is run in a loop due to we may need to increase the number of loops or rethink the conditions of the test
1
According to the Usergrid documentation, groups are hierarchical: every member of the group /groups/california/sanfrancisco is also a member of the group /groups/california. Based on the above concept I created two groups; california/sanfrancisco has some users and california has others. From the Usergrid documentation I am expecting that the members of the child group should also be members of the parent, but they are not. If I hit the URL /groups/california/users, it only returns the parent group's own users; should it not also return the child group's users? I also posted an activity in california, which is appearing in the feeds of the parent group's users but not in the feeds of the /groups/california/sanfrancisco users. Am I doing something wrong? Please suggest.
1
i have a flow configured with an avroschemaregistry i have a json reader and a json writer both referencing it i then have a few processors referencing both the reader and the writer when i click enable on the avroschemaregistry and choose to enable all referencing components i encountered a deadlock the threads involved were quote timerdriven process waiting on deadlocked thread at sunmiscunsafeparknative method at at at at at at at at at at at at at at waiting on at at at at at at at at number of locked synchronizers timerdriven process waiting on deadlocked thread at sunmiscunsafeparknative method at at at at at at at at at at at at at at waiting on at at at at at at at at number of locked synchronizers quote
1
device binary package c reproduce launch ftp from fluid navigate to connect and click it choose wlan in offline mode yes use lab enter password boomnote first time i ran it it connected fine maybe after its saved some initial iap info it borkes on parsing it wege got to investigate this more closely
1
steps to load the move the sliders around to change the various borderthickness styles of the panel actual results nothing happens the styles do not update at runtime expected results you should be able to change borderthicknesstop borderthicknessleft borderthicknessright and borderthicknessbottom at runtime workaround if any
0
Currently the replica HW (high watermark) is always initialized to the checkpointed HW. However, on unclean shutdown, the log end offset could be less than the checkpointed HW.
1
When running
{code:java}
spark.read.format("libsvm").options(conf).load(path)
{code}
the underlying file system will not receive the conf options.
0
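A minimal Scala sketch of the call pattern from the entry above, with assumed option names and a hypothetical local path. Per the report, options passed through options(...) reach the data source but are not forwarded to the underlying Hadoop FileSystem; setting them on the Hadoop configuration directly is one way, in this sketch, to make sure the file system actually sees them.
{code:scala}
import org.apache.spark.sql.SparkSession

object LibSvmOptions {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("libsvm-options")
      .getOrCreate()

    // hypothetical file-system level option
    val fsConf = Map("fs.s3a.connection.timeout" -> "5000")

    // pattern from the report: these options do not reach the file system
    val df1 = spark.read.format("libsvm").options(fsConf)
      .load("data/sample_libsvm_data.txt")

    // workaround sketch: set the options on the Hadoop configuration instead
    fsConf.foreach { case (k, v) => spark.sparkContext.hadoopConfiguration.set(k, v) }
    val df2 = spark.read.format("libsvm").load("data/sample_libsvm_data.txt")
    df2.printSchema()
  }
}
{code}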
when i update the syndesis cr oc edit syndesis and enable any addon nothing happens eg for publicapi addon the publicapi oauth proxy should be created the operator contains the following reconcilingactionactioninstallactionphaseinstallingerroroperation cannot be fulfilled on syndesisessyndesisio app the object has been modified please apply your changes to the latest version and try againcodethe full operator log is in the attachments this happens from version version works correctly and after updating the syndesis cr the operator does appropriate changes
1
i tried creating a private repo but i got an error page browsing to the the repo will give a a error as well with the following information hash though the creation of the repo failed it seems to have partially succeeded i can find it using search and in my list of although accessing it is impossiblei marked this as a blocker because this touches on the core functionality of bitbucket
1
Need to add this property to core-default.xml; it's documented as being there, but it isn't:
{code}
The committer factory to use when writing data to filesystems.
{code}
1
We need type-specific strategies for comparing version numbers.
1
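A minimal, illustrative Scala sketch of what a type-specific comparison strategy for version numbers could look like; the splitting rule and fallback are assumptions, not the project's actual design. Segments are compared numerically when both sides are numeric, and lexicographically otherwise, so "1.10.0" sorts after "1.9.2".
{code:scala}
object VersionCompare {
  // negative if a < b, zero if equal, positive if a > b
  def compare(a: String, b: String): Int = {
    val as = a.split('.')
    val bs = b.split('.')
    // pad the shorter version with "0" segments, then compare segment by segment
    as.zipAll(bs, "0", "0").foldLeft(0) { (acc, pair) =>
      if (acc != 0) acc
      else {
        val (x, y) = pair
        if (x.forall(_.isDigit) && y.forall(_.isDigit)) x.toInt.compare(y.toInt)
        else x.compare(y)
      }
    }
  }

  def main(args: Array[String]): Unit = {
    println(compare("1.10.0", "1.9.2")) // positive: 1.10.0 is newer than 1.9.2
  }
}
{code}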
to be able to serialize the request wls componentconfiguration should be serializable
1
from minor issues with balancer daemon isnt set to true in hdfs to enable daemonization startbalancer script has usage instead of hadoopusage
1
One provider produces application/json; another provider produces a different media type. If the media type application/json is now requested, the provider which produces the other media type is resolved by ResteasyProviderFactory.getMessageBodyWriter.
0
The flink-scala project has the following import statements:
{code:scala}
import org.apache.flink.api.common.operators.Keys
import Keys.ExpressionKeys
{code}
So I want to submit one pull request to fix it.
0
the call to sa gets capacityscheduler configs as dictionary compared to n separated single string of all the configs in subsequent calls invocation passedin capacityscheduler looks like code capacityscheduler properties capacityscheduler null yarnschedulercapacityrootaccessiblenodelabels yarnschedulercapacitymaximumamresourcepercent yarnschedulercapacityrootacladministerqueue yarnschedulercapacityqueuemappingsoverrideenable false yarnschedulercapacityrootdefaultcapacity yarnschedulercapacityrootdefaultuserlimitfactor yarnschedulercapacityrootqueues default yarnschedulercapacityrootcapacity yarnschedulercapacityrootdefaultaclsubmitapplications yarnschedulercapacityrootdefaultmaximumcapacity yarnschedulercapacitynodelocalitydelay yarnschedulercapacitymaximumapplications yarnschedulercapacityrootdefaultstate running code subsequent invocations gets capacityschdeuler ascodecapacityscheduler properties capacityscheduler yarnschedulercapacityrootqueuesdefaultn yarnschedulercapacityrootdefaultstaterunningn yarnschedulercapacityrootdefaultaclsubmitapplicationsn yarnschedulercapacityrootacladministerqueuen yarnschedulercapacityrootaccessiblenodelabelsn yarnschedulercapacityqueuemappingsoverrideenablefalsen code therefore sa fails to create the llap queue for hive server interactive as sa knows to handle only the case this specific issue was seen with while deploying cluster with blueprints
1
for jbide please perform the following tasks check out your existing branch code git checkout code update your branchcolor root pom to use the latest parent pom version code orgjbosstools parent code now your root pom will use parent pom version in your branch ensure that component featuresplugins have been properly upversioned eg from to code mvn dtychomodemaven code ensure youve built your code using the latest minimum target platform version code mvn clean verify code ensure youve run your tests using the latest maximum target platform version code mvn clean verify code close do not resolve this jira when done if you have any outstanding new noteworthy jiras to do please complete them next search for all task jira or search for livereload task jira
1
is a nice way to present and run examples
0
Based on the latest versions of Spring Boot and Spring Cloud, integrate MyBatis, Redis, EasyPoi and other mainstream open-source projects; define Model, Service, ServiceImpl, and Controller base classes to implement common create, delete, update, query, pagination, import, export, automatic population of associated fields, global exception handling, and other functionality. Also integrate Hystrix Dashboard and Spring Boot Admin monitoring, plus a distributed documentation center.
0
Starting from the binary release, do not include third-party libraries; developers should include them themselves.
0
As Caleb has pointed out, the word-counting function of svec (svec_sfv) requires a sorted dictionary as input but doesn't actually check for it, so if you pass an unsorted dictionary as argument it will produce wrong output. Of course, the function could always just sort the dictionary first before continuing, but since the function doesn't return the dictionary, the user will not know which words the counts are associated with, since the counts are associated with the sorted version.
0
there is a crash in qquickwindowprivatedeliversinglepointeventuntilaccepted in certain qml applications when scrolling a scrollview with the mouse wheel crashing line is this qpointf g itemwindowmaptoglobalpointscenepositiontopoint itemwindow returns nullptr here sometimes so that should be checked here like in another function in this file where it is accessed unfortunately the qml setup where this happens is so complicated that cannot provide a simple test app to reproduce it it has a scrollview that contains a vertical list which has horizontal lists as delegates so a dynamically viewed list in both dimensions also loaders are used to load the delegate stuff asynchronously anyway its only using components provided by qml and qtquick controls crash call stack is here qpoint pos line c this nullptr here event line c event line c event line c ev line c e line c receiver qevent e line c receiver qevent e line c receiver qevent event line c receiver qevent event line c e line c e line c flags line c qwindowsddllqwindowsguieventdispatchersendpostedevents line c flags line c qwindowsddllqwindowsguieventdispatcherprocesseventsqflags flags line c flags line c flags line c line c line c line c
0
Add functionality that allows the user to change the text string property of a Text node from static text to input via a data node, selected from the list of text-type data input nodes.
1
buy ambien online without prescription click here flat what do you mean by the term “ambien” ambien or zolpidem generic is the name of a medicine that is popular among the people of the united states doctors usually prescribe this drug for treating people with the condition of insomnia the drug is available in the market in the form of oral tablets and oral sprays the tablets of ambien come in immediaterelease extendedrelease and sublingual versions your doctor will prescribe the dose which will be suitable for your health how do ambien works ambien belongs to the class of drugs called sedatives they have a direct effect on the human mind and help to slow down the activity of the brain that’s the reason why sedatives are also known as hypnotic drugs it is necessary for a mind to calm down in order to get proper sleep thus ambien is being prescribed to the patients since the fda approves it in the year later in zolpidem became available as the generic version of ambien which is lesser in cost you may either buy ambien online or its generic form but before doing so you must consult a health expert in this respect in what ways can ambien cause harm the dose of ambien is not a regular drug that you can whenever you want there are several reasons for which it is important to have a prescription of ambien before purchasing it from any platform it is equally essential to get all the details from your doctor about the aftereffects that can occur with the use of ambien following is the list of side effects of ambien that may emerge after taking a few doses of ambien – mild or severe headache getting dizzy pain in the chest mouth gets dry feeling lightheaded muscle pain swelling up of throat or face lack of energy feeling worthless lack of interest and activity abnormal thoughts aggressive behavior confusion and agitation feeling depressed going through suicidal thoughts
0
integrate the orderingpolicy framework with the capacityscheduler
0
When the logic falls into the section of code that uses separate iterators to render the options, there is the potential that getIterator is called twice to get an iterator over the same list. It is extremely inefficient in the sense that the underlying method that returns the collection for the iterator may have significant processing involved; from a purist's standpoint, in this situation there is simply no reason to have to do this twice, regardless of the degree of inefficiency. This could be corrected by slightly modifying the code to use a combination of a flag and only one iterator, as noted in the new code included at the bottom of this file:
{code:java}
public int doEndTag() throws JspException {
    if (collection != null) {
        // ... existing collection-based handling ...
    } else {
        // Construct iterators for the values and labels collections
        Iterator valuesIterator = getIterator(name, property);
        Iterator labelsIterator = null;
        boolean labels = false;
        if ((labelName != null) || (labelProperty != null)) {
            labels = true;
            labelsIterator = getIterator(labelName, labelProperty);
        }
        // Render the option tags for each element of the values collection
        while (valuesIterator.hasNext()) {
            String value = valuesIterator.next().toString();
            String label = value;
            // Get the label value for each option
            if (labels) {
                if (labelsIterator.hasNext()) {
                    label = labelsIterator.next().toString();
                }
            } else {
                // if the label property was not specified, the label will be
                // the same as the actual value for the option
                label = value;
            }
            addOption(sb, value, label, selectTag().isMatchedValue(value));
        }
    }
    // ...
}
{code}
Thanks, Jason
0
i am having problem while listing the webservicethis is the error this xml file does not appear to have any style information associated with it the document tree is shown below the service cannot be found for the endpoint reference epr itgexigenserviceswebxmlwebapp idwebapp xmlns xmlnsxsi xsischemalocation wsprovider webinfrepository startupservlet comaigtwinsservletstartupservlet axisservlet axisadminservlet axisservlet servletaxisservlet axisservlet jws axisservlet services startupservlet startupservlet axisadminservlet axisadminservlet axisadminservlet indexhtml indexhtm indexjsp defaulthtml defaulthtm defaultjsp copyright the apache software foundation licensed under the apache license version the license you may not use this file except in compliance with the license you may obtain a copy of the license at unless required by applicable law or agreed to in writing software distributed under the license is distributed on an as is basis without warranties or conditions of any kind either express or implied see the license for the specific language governing permissions and limitations under the license true false false false true false false admin services modules itgexigen exigenservices exigenrest to have pojo service as only one class file class can be annotated one or not support uncomment the following xml tag false messagereceiver mep messagereceiver mep messagereceiver mep messagereceiver mep messageformatter contenttypeapplicationxwwwformurlencoded messageformatter contenttypemultipartformdata messageformatter contenttypeapplicationxml messagebuilder contenttypeapplicationxml messagebuilder contenttypeapplicationxwwwformurlencoded messagebuilder contenttypemultipartformdata transportreceiver namehttp here is the complete list of supported parameters see example settings further below port the port to listen on default hostname if nonnull url prefix used in replyto endpoint references default null originserver value of http server header in outgoing messages default requesttimeout value in millis of time that requests can wait for data default requesttcpnodelay true to maximize performance and minimize latency default true false to minimize bandwidth consumption by combining segments requestcorethreadpoolsize number of threads available for request processing unless queue fills up default requestmaxthreadpoolsize number of threads available for request processing if queue fills up default note that default queue never fills up see httpfactory threadkeepalivetime time to keep threads in excess of core size alive while inactive default note that no such threads can exist with default unbounded request queue threadkeepalivetimeunit timeunit of value in threadkeepalivetime default seconds default seconds false milliseconds uncomment this and configure as appropriate for jms transport support after setting up your jms environment eg activemq orgapacheactivemqjndiactivemqinitialcontextfactory topicconnectionfactory orgapacheactivemqjndiactivemqinitialcontextfactory queueconnectionfactory orgapacheactivemqjndiactivemqinitialcontextfactory queueconnectionfactory this is a sample configuration it assumes a mail server running in localhost listener pops messages that comes to the email address redlocalhost users password is red listener connect to the server every milliseconds parameters with transport prefix is specific others are all from java mail api localhost red red redlocalhost transportreceiver nametcp tcpmyappcomws transportsender nametcp transportsender namelocal transportsender 
namehttp chunked true transportsender namehttps chunked transportsender namejms only need to uncomment the sender configuration is achieved with every client at any instant mail host should be given sample configuration has been given localhost na handler namerequesturibaseddispatcher handler namesoapactionbaseddispatcher handler nameaddressingbaseddispatcher handler namerequesturioperationdispatcher handler namesoapmessagebodybaseddispatcher handler namehttplocationbaseddispatcher handler nameinstancedispatcher handler namerequesturibaseddispatcher handler namesoapactionbaseddispatcher handler nameaddressingbaseddispatcher handler namerequesturioperationdispatcher handler namesoapmessagebodybaseddispatcher handler namehttplocationbaseddispatcher handler nameinstancedispatcher
1
release notes
0
in settings the managed integration scheduling tab and all of the functions on that page should not be available in openshift only the solution pattern tab page and its functions should be available
0
When creating a new jboss-esb.xml file using JBoss Developer Studio, autocompletion does not work because the generated file points to a schema location that does not exist. It seems the correct location is different from the one JBDS uses by default.
1
There is some discussion about the codec name for ANN search. The main points are: use the plural form for consistency, and use a more specific name for ANN search (the second point could be optional). A few alternatives were proposed: VectorsFormat, VectorValuesFormat, NeighborsFormat, DenseVectorsFormat.
1
if we refresh jbi deployer bundle then the blueprint container for this bundle doesnt start up correctlywe will see apache servicemix jbi deployer from the log we get exception error blueprintcontainerimpl containerblueprintcontainerimpl unable to start blueprint container for bundle orgapacheservicemixjbideployerorgosgiserviceblueprintcontainercomponentdefinitionexception unable to intialize bean admincommandsservice at at at at at at at at could cause problem when jbi deployer bundle get refreshed it could be caused by refresh other bundles which jbi deployer bundle import package from and afterwords jbi artifacts deployment doesnt work at all
0
extends security test for hr client so that the tests cover all hr operations available on remote cache
0
we have a custom osgi based application we specify we initialize through configuratorinitializemain logconfigpathstarting from above line fails due to npeexceptionininitializererror in logmanager static init block i found a work around by specifying disablethreadcontextmaptrue this is specific change noformatexception in thread main javalangexceptionininitializererror at at at at at at at at at at method at at at at at at at at at at method at at at at at at at at at at at at by javalangnullpointerexception at at at morenoformat
0
The ImapIdleChannelAdapter runs a ping task on a fixed interval; this sends a NOOP to the server. However, it has no effect on the IDLE session, because the ping occurs on another socket (a completely different IMAP connection). I discovered this while writing a test server for mail support in the Java DSL. It turns out that the second call to openFolder causes a second session to be created for the NOOPs. The interval of the ping NOOPs which cancel the IDLE should nevertheless be increased and made configurable.
0
Change the default provider from hive to the value of spark.sql.sources.default for the CREATE TABLE command, to make it consistent with the DataFrameWriter.saveAsTable API with respect to the new config. By default we don't change the table provider. This is also friendlier to end users, since Spark is well known for using Parquet (the default value of spark.sql.sources.default) as its default I/O format.
0
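A minimal Scala sketch contrasting the two paths the entry above wants to behave alike: DataFrameWriter.saveAsTable already honors spark.sql.sources.default (parquet by default), while a plain CREATE TABLE without a USING clause has historically fallen back to Hive. Table names and the local setup are assumptions; whether the SQL table ends up as a Hive or a data-source table depends on the config change discussed in the report.
{code:scala}
import org.apache.spark.sql.SparkSession

object DefaultProvider {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("default-provider")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // API path: uses spark.sql.sources.default (parquet by default)
    Seq((1, "a"), (2, "b")).toDF("id", "name")
      .write.saveAsTable("t_api")

    // SQL path without USING: the provider depends on the default discussed above
    spark.sql("CREATE TABLE t_sql (id INT, name STRING)")
    spark.sql("DESCRIBE EXTENDED t_sql").show(truncate = false)
  }
}
{code}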
i did nothing else than create a minimal project add a qml file to it open it in qml designer and then click the resources tab in the lower leftprogram received signal sigsegv segmentation in qsvghandlerstartelementqstring const qxmlstreamattributes const from in qsvghandlerparse from in qsvghandlerinit from in qsvghandlerqsvghandlerqxmlstreamreader from in qsvgtinydocumentloadqxmlstreamreader from in qsvgrendererloadqxmlstreamreader from in qsvgiohandlerprivateloadqiodevice from in qsvgiohandlerreadqimage from in qimagereaderreadqimage from in qimagereaderread from in qpixmapdatafromfileqstring const char const qflags from in qpixmaploadqstring const char const qflags from in qpixmapqpixmapqstring const char const qflags from in qmldesignericonqfileinfo const const from in qdirmodelfileiconqmodelindex const const from in qmldesignersizehintqstyleoptionviewitem const qmodelindex const const from in qtreeviewindexrowsizehintqmodelindex const const from in qtreeviewprivatelayoutint bool bool from in qtreeviewdoitemslayout from in qabstractitemvieweventqevent from in qapplicationprivatenotifyhelperqobject qevent from in qapplicationnotifyqobject qevent from in qcoreapplicationnotifyinternalqobject qevent from in qwidgetprivateshowhelper from in qwidgetsetvisiblebool from in qstackedlayoutsetcurrentindexint from in qstackedwidgetsetcurrentindexint from in qstackedwidgetqtmetacallqmetaobjectcall int void from in qmetaobjectmetacallqobject qmetaobjectcall int void from in qmetaobjectactivateqobject qmetaobject const int void from in qtabbarcurrentchangedint from in qtabbarsetcurrentindexint from in qtabbarmousepresseventqmouseevent from in qwidgeteventqevent from in qtabbareventqevent from in qapplicationprivatenotifyhelperqobject qevent from in qapplicationnotifyqobject qevent from in qcoreapplicationnotifyinternalqobject qevent from in qapplicationprivatesendmouseeventqwidget qmouseevent qwidget qwidget qwidget qpointer bool from in qetwidgettranslatemouseeventxevent const from in from in int void void from in gmaincontextdispatch from in from in gmaincontextiteration from in qeventdispatcherglibprocesseventsqflags from in qguieventdispatcherglibprocesseventsqflags from in qeventloopprocesseventsqflags from in qeventloopexecqflags from in qcoreapplicationexec from in qapplicationexec from in main
0
after an attribute renderer has been introduced an untyped version of attribute renderer should be addedthis attribute renderer will try to render an attribute in which its type has not been specified
0
The default spacing between widgets is too big on Windows; comparing the spacing between radio buttons on Windows would indicate that it should be smaller than it currently is.
0
i have three poms basepom multimodulepom earpom my applicationxml is at the the root of the project in metainfbasepom declares plugin configuration as such orgapachemavenplugins mavenearplugin false metainfapplicationxml metainfmanifestmf false metainfmanifestmf when earpom is run separately the plugin does what it is supposed to do and finds both the applicationxml and manifestmfas soon as i attempt to run a multimodule build using the multimodulepom which runs the earpom as a module the applicationxml can not be found but oddly enought the manifestmf can be foundif i change the metainfapplicationxml tobasedirmetainfapplicationxmlit works fine in both solo and multimodule
0
note this bug report is for jira cloud using jira server see the corresponding bug report panelthe jql autocomplete parser has some weird behaviour around reserved characters see fullstops are not an uncommon things in version numbersright now if one has versions and and types affectedversion and will still be in the list but there will be an at the top of the jql parser boxif you type affectedversion note the doublequote is not yet closed the autocomplete list will be winnowed down to just and but there will be an at the top of the jql parser boxif you type affectedversion and then within the type the list will be winnowed down to and and the jql parser will keep its tick i would expect that all three would behave the same way with perhaps the first keeping the as the value is not quoted correctly
0
cluster is configured with nodes they are up and runningas part of failover scenario simulation we are trying to test ethernet down scenario by running etcsysconfignetworkscriptsifdown command on the first nodeduring this scenario we are shutting down the first node where the eth is down by using monitoring scriptsinhouse scripts the second nodeamong those two nodes is kept alivesecond nodes hazelcast is not accessible for more than minutes we are getting bellow exception and no operation related to hazelcast is working applications whichever uses hazelcast kept frozeninvocation comhazelcast while asking isexecuting invocation servicenamehzmapservice opputoperationunacknowledgedalarm call invocation servicenamehzmapservice opcomhazelcastspiimploperationserviceimploperationsisstillexecutingoperationservicenamehzmapservice encountered a timeout at at at at at at at at at at at at at source at at at at at at at
1
hello sirmadam i am seeing access control issues when using service desk at jira when multiple service desk accounts are created at jira details as below jira service desk i created service desk accounts for client and and the link to access them as below for client for client logged into and when clicked for help center observed service desk details of other client too with screen welcome to the help center there should be access control in place where a should see only service desk wrt and other clients details should not be exposed did not find a documentation where an jira admin can have an option to configure access based on set of access rules request jiraatlassian team to provide any required helpdocumentation that unblock our activity regards harsha
1
Not sure if this is due to the commons-httpclient upgrade, or to switching to JBossWeb, or something else. testEncServletViaInvoker errors with an "expected reply" assertion failure; the stack trace follows.
0
Have a primary UHD screen scaled to one factor and a secondary full-HD screen scaled to another. Run Qt's online installer on the secondary screen: it uses wrong sizes for widgets and fonts (see installer.png thumbnail). All elements of the installer should be scaled correctly; they should look like they do on the primary screen, only in lower resolution.
0
Quarkus applications expecting an RHOSAK service binding become unhealthy/unrecoverable/terminated before the service binding becomes available if they are deployed to OpenShift. This is because the applications can only be connected by rhoas cluster bind once they exist in OpenShift, but without Kafka configuration the applications will inevitably become unhealthy/unrecoverable.
1
Currently the StreamAppender only accepts String and sends String as the format for all the logs. It will be useful to have the StreamAppender accept and send other formats, such as Avro, JSON, etc. So the idea is to move the encoding of the LoggingEvent to a Serde.
0
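A minimal Scala sketch of the idea in the entry above: hide how a log4j LoggingEvent is turned into bytes behind Samza's Serde interface, so the appender could be configured with a JSON, Avro, or plain-string serde instead of hard-coding String. The class name and the JSON shape are assumptions, not the project's actual design.
{code:scala}
import org.apache.log4j.spi.LoggingEvent
import org.apache.samza.serializers.Serde

class JsonLoggingEventSerde extends Serde[LoggingEvent] {
  override def toBytes(event: LoggingEvent): Array[Byte] = {
    // hypothetical JSON shape; a real serde would also cover level, logger
    // name, thread, throwable info, and proper string escaping
    val json = s"""{"ts":${event.getTimeStamp},"msg":"${event.getRenderedMessage}"}"""
    json.getBytes("UTF-8")
  }

  // decoding is not needed by the appender itself; left unsupported in this sketch
  override def fromBytes(bytes: Array[Byte]): LoggingEvent =
    throw new UnsupportedOperationException("decode not implemented in this sketch")
}
{code}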
eap installation using automatic installation script aka autoxml stuck once post installation task is includedreproducecodebash java jar installerjar autoxml installation stuck at this pointcode thread dumpuse to reproduce the issue installation target is set to tmp
1
hi alli noticed that axiom doesnt serialize namespaces correctly namespaces in qualified elementsfor example we should be able to produce the following xml with the code that xmlns john omfactory fac omabstractfactorygetomfactory omnamespace ns faccreateomnamespace omelement personelem faccreateomelementperson ns omelement nameelem faccreateomelementname ns nameelemsettextjohn omelement ageelem faccreateomelementage ns omelement weightelem faccreateomelementweight ns add children to the person element personelemaddchildnameelem personelemaddchildageelem personelemaddchildweightelem string xml personelemtostringbut right now this produces the following person xmlns xmlns xmlns xmlnsthe repetition of the default namespace should be avoidedthis is the same even if we used a prefixed unqualified elements among qualified elements omfactory fac omabstractfactorygetomfactory omnamespace ns faccreateomnamespace omelement personelem faccreateomelementperson ns create and add an unqualified element omelement nameelem faccreateomelementname null nameelemsettextjohn personelemaddchildnameelem omelement ageelem faccreateomelementage ns omelement weightelem faccreateomelementweight ns personelemaddchildageelem personelemaddchildweightelem systemoutprintlnpersonelemthe above should produce the following axiom right now produces person xmlns xmlns xmlnswhat do u folks thinkthanksruchithps added a test case
1
error orgapachehadoophdfsservernamenodenamenode failed to start namenodejavaioioexception cannot request to call satisfy storage policy on path ssl as this filedir was already called for satisfying storage policy at at at at at
1
Section on the proper use of Che/CRW branding: in the image, the name needs to be changed to CodeReady Workspaces. Section table: should not use Keycloak in the CRW-related matters; fix. For example, "the hosted version of CodeReady Workspaces that runs on che.openshift.io uses ... GiB" should read "the hosted version of CodeReady Workspaces that runs on che.openshift.io uses ... GiB of memory".
0
there has been introduced a new attribute for elytron serversslcontext wrap which is required to be set to false when using such sslcontext with undertow httpslistener and utilizing currently default value of that attribute is true but since the default should change to falseeven though the default value of the wrap attribute will be ok i think that we should still provide information maybe with brief explanation right in the description of sslsocket attribute in httpslistener that it requires writable form of sslcontext provided by elytron subsystem now it will be more visible to customer and might avoid some misunderstandings i think
0
the app former embedded is overriding the proxy setting for the configured repositories forcing proxy to be used if defined for all configured repositories even if the nonproxyhosts is set configured to ignore proxy for a specific repository proxy settings codejava genproxy true http code logs from kie server when resolvingdownloading the artifact codejava using transporter httptransporter with priority for debug using connector basicrepositoryconnector with priority for with usernameadminuser password via codejava using connector basicrepositoryconnector with priority for with usernameadminuser password code notice that one request goes through proxy while the other dont can be checked as well on the proxy access log codejava get applicationoctetstream connect connect code
0
these classes were deprecated after which is part of
1
if an index is created prior to using ‘upsert using load’ to load a table the index is not populated this only happens with ‘upsert using load’ an index created this way is populated fine with ‘upsert’ or ‘insert’this poses a query correctness problem as a query would return wrong results if the query plan selected uses indexscan to get the data this problem can be seen on the build installed on a workstationhere is the entire script to reproduce this problemcreate table t a int b int c intcreate index myindex on t a b cupsert using load into table t values xx from select from texplain options f xxexecute xxset parserflags from tableindextable myindexhere is the execution output of the scriptcreate table t a int b int c int sql operation completecreate index myindex on t a b c sql operation completeupsert using load into table t values rows insertedprepare xx from select from t sql command preparedexplain options f xxlc rc op operator opt description card root trafodionindexscan t sql operation completeexecute xx rows selectedset parserflags sql operation completeselect from tableindextable myindex rows selected
1