Columns: text_clean (string, lengths 3–77.7k), label (int64, values 0–1)
While converting the request back to an Increment using ProtobufUtil.toIncrement, we incorrectly use the optimization to avoid copying the byte array (HBaseZeroCopyByteString.zeroCopyGetBytes) on a BoundedByteString. The optimization was only meant for LiteralByteString, where it is safe to use the backing byte array; however, it ends up being applied to BoundedByteString, which is a subclass of LiteralByteString. This essentially breaks increments, since we end up creating wrong cells on the server side.
1
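The pitfall in the record above (a bounded view sharing its parent's backing array) can be illustrated with a minimal Python analogy using memoryview; the function names are illustrative only, not the HBase or protobuf API:

```python
# Hedged sketch: a sliced memoryview shares its parent's backing buffer,
# so a "just grab the backing array" zero-copy shortcut returns wrong bytes,
# analogous to applying the LiteralByteString optimization to a BoundedByteString.
def backing_bytes(view: memoryview) -> bytes:
    # Unsafe zero-copy shortcut: returns the WHOLE backing buffer.
    return bytes(view.obj)

def safe_bytes(view: memoryview) -> bytes:
    # Correct: copy exactly the bytes the view covers.
    return bytes(view)

buf = b"row-key-and-value"
window = memoryview(buf)[4:7]           # analogous to a BoundedByteString
assert safe_bytes(window) == b"key"
assert backing_bytes(window) == buf     # the shortcut sees the full buffer
```

The safe path costs a copy; the zero-copy path is only valid when the view is known to cover the entire backing array.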
There seems to be a bug related to join cardinality estimation when views are involved: different plans are produced when running the same query against either base tables or identical views. The view plan is worse, and the join-cardinality estimates in the view plan are consistent with not being able to find the column stats from the view's base-table columns.

Plan against base tables:
{code}
select count(a.int_col)
from functional.alltypessmall a
inner join functional.alltypes b on a.id = b.id
inner join functional.alltypestiny c on b.id = c.id;

explain string: estimated per-host requirements ...
output: count:merge(a.int_col)
output: count(a.int_col)
join (hash predicates: b.id = a.id, runtime filters: a.id) ... hdfs ...
join (hash predicates: b.id = c.id, runtime filters: c.id) ... hdfs ...
runtime filters: c.id ... hdfs ... runtime filters: b.id = b.id
{code}

Plan against views created with CREATE VIEW AS SELECT * FROM base_table:
{code}
select count(a.int_col)
from alltypessmall_view a
inner join alltypes_view b on a.id = b.id
inner join alltypestiny_view c on b.id = c.id;

explain string: estimated per-host requirements ...
output: count:merge(a.int_col)
output: count(functional.alltypessmall.int_col)
join (hash predicates: functional.alltypes.id = functional.alltypestiny.id, runtime filters: functional.alltypestiny.id) ... hdfs ...
join (hash predicates: functional.alltypes.id = functional.alltypessmall.id, runtime filters: functional.alltypessmall.id) ... hdfs ...
runtime filters: functional.alltypessmall.id ... hdfs ... runtime filters: functional.alltypes.id = functional.alltypes.id
{code}

This is most likely a regression from Impala, possibly introduced by [elided]; not yet confirmed.
1
Could not initialize class org.apache.axis.wsdl.fromJava.Types:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.axis.wsdl.fromJava.Types
	[stack trace elided]
1
The yarn command has unresolved merge conflicts (commit [elided]; see line [elided]).
1
Would it be possible to re-import the existing JIRAs for Tephra, as the previous JSON dump didn't contain all the fields?
0
Extract the preference API from UberFire Extensions, to be able to use preferences without forms or a user interface.
0
I have the following content in my apps/configuration/config:
{code}[elided]{code}
The configuration is correctly active now. I now remove the configuration, i.e. the node apps/configuration/config vanishes. The JcrInstaller is not correctly switching to the newly active configuration but rather stays active.
0
The Kafka metrics sink fails to build. This problem started happening with [elided]; this was the change: [elided].
1
create table mytable (id int, ...); SQL operation complete.
insert into mytable ...; rows inserted.
select ... from mytable;
Executor assertion failure: time Fri Jan ..., process ..., file executor/ex_firstn.cpp, line ..., message: ExFirstNTcb::work, only last ... and last ... supported. Core dumped.
0
At start of Tomcat, Spring catches a ClassNotFound of SynthesizingMethodParameter in BeanUtils.instantiateClass. I have a simple class annotated with @RestController; with Spring [version] it starts correctly.
1
The federated HDK build provides linkage artifacts under a root folder named lib, which accumulates artifacts of classlib and DRLVM. Note that the library naming clash is resolved with precedence of DRLVM binaries (oh, that long-living hythr problem). However, the HDK sometimes contains wrong hythr lib/exp files taken from classlib, which causes compilation failures of DRLVM's cunit tests in snapshot testing. Here is the fix: always overwrite libs when copying from DRLVM.
1
I ran the following code in a spark-shell built with the latest master, but got lots of error messages about accumulators. The job finished successfully, but the error messages made the shell very hard to use until it finished:
{code}[repro code elided]{code}
{noformat}
ERROR Utils: Uncaught exception in thread heartbeat-receiver-event-loop-thread
java.lang.UnsupportedOperationException: Can't read accumulator value in task
	[stack trace elided]
WARN NettyRpcEndpointRef: Error sending message in ... attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after ... This timeout is controlled by spark.executor.heartbeatInterval
	[stack trace elided]
Caused by: java.util.concurrent.TimeoutException: Futures timed out after ...
	[stack trace elided]
WARN Executor: Issue communicating with driver in heartbeater
org.apache.spark.SparkException: Error sending message
	[stack trace elided]
Caused by: org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in ... seconds. This timeout is controlled by spark.executor.heartbeatInterval
	[stack trace elided]
Caused by: java.util.concurrent.TimeoutException: Cannot receive any reply in ... seconds
	[stack trace elided]
(The same messages repeat many times.)
{noformat}
1
Required operations: SADD, SMEMBERS, SREM, DEL of sets. The above commands are implemented using functions/delta propagation. Make set operations not on the above list minimally functional with functions/deltas (performance is not important). Existing sets integration tests should pass without ignores, and all other tests should pass as well, including non-set tests.
0
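The four required commands in the record above can be sketched with a plain dict of sets; this is a minimal illustration of the expected semantics, and the class and method names are assumptions, not the Geode Redis API:

```python
# Hedged sketch of SADD, SMEMBERS, SREM, and DEL semantics over a dict of sets.
class TinySetStore:
    def __init__(self):
        self._data = {}

    def sadd(self, key, *members):
        # Add members; return how many were newly added.
        s = self._data.setdefault(key, set())
        added = len(set(members) - s)
        s.update(members)
        return added

    def smembers(self, key):
        # Return a copy so callers cannot mutate the stored set.
        return set(self._data.get(key, set()))

    def srem(self, key, *members):
        # Remove members; return how many were actually present.
        s = self._data.get(key, set())
        removed = len(s & set(members))
        s.difference_update(members)
        return removed

    def delete(self, key):
        # DEL: drop the whole key; return whether it existed.
        return self._data.pop(key, None) is not None

store = TinySetStore()
assert store.sadd("colors", "red", "blue") == 2
assert store.smembers("colors") == {"red", "blue"}
assert store.srem("colors", "red") == 1
assert store.delete("colors") is True
```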
For documentation and training purposes we need an example of how to use the new Lucene index API with the flattening LuceneSerializer. Also, the new serializer can be used as an example for anyone wanting to create a custom serializer.
0
The following test case will work as expected, except when assertions are enabled (java -ea):
{code}
Connection conn = DriverManager.getConnection("jdbc:...;repository/version/db;create=true");
Statement stat = conn.createStatement();
stat.execute("create table version_bundle(id int)");
TransientRepository rep = new TransientRepository();
try {
    rep.login(new SimpleCredentials("...", new char[0]));
} catch (Exception e) {
    // ignore
}
rep.shutdown();
stat.execute("drop table version_bundle");
new TransientRepository().login(new SimpleCredentials("...", new char[0]));
{code}
The reason is the assertion in RepositoryContext.getInternalVersionManager(); because of this assertion, the repository lock is not released during the repository shutdown.
0
The following JPMS modules with exported packages exist that are not available via JBoss Modules: java.net.http, java.transaction.xa, jdk.dynalink, jdk.jshell, jdk.naming.ldap, jdk.nio.mapmode, jdk.unsupported.desktop.
0
Seeing a daos_test -e (subtest: epoch slip) hang when run over [elided]. Setup is ... server(s), ... client(s) (boro-..., boro-...), commit [elided].
{code}
EPOCH_SLIP: creating container ... opening container ... oid ... holding epoch ... synchronously ... check valid records ... epr ... inserting ... keys ...
(we hang here indefinitely)
{code}
Server log with ERR attached.
0
The document/literal wrapped request/response objects in the CTS are not consistent and are not always JavaBean-conformant. The only way to fix this is to either replicate the JAXB annotation-parsing logic and use injection instead, or to delegate to the JAXB accessor API. Regardless, we will need an accessor API, since JAX-RPC still needs to use JavaBean reflection access.
1
The performedTasks collection in WorkflowResult is handled and changed in several pieces of code. This collection turns out to be unmodifiable in many cases, and an UnsupportedOperationException is often raised. Change WorkflowResult in order to be sure the performedTasks collection is modifiable.
0
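One common way to guarantee callers always get a modifiable collection, as the record above asks for, is to hand out a mutable copy rather than the internal collection; this is a minimal sketch with assumed names, not the actual WorkflowResult API:

```python
# Hedged sketch: return a mutable copy instead of exposing an internal,
# possibly-unmodifiable collection, so callers never hit an
# UnsupportedOperationException-style failure.
class WorkflowResult:
    def __init__(self, performed_tasks=()):
        self._performed_tasks = list(performed_tasks)

    @property
    def performed_tasks(self):
        # Fresh list per call: always safe for the caller to modify.
        return list(self._performed_tasks)

result = WorkflowResult(("validate", "deploy"))
tasks = result.performed_tasks
tasks.append("notify")                      # never raises; the copy is mutable
assert result.performed_tasks == ["validate", "deploy"]  # internals untouched
```

The trade-off is a copy per access; the alternative (documenting that the returned collection is live and mutable) requires auditing every mutation site, which is exactly what the report says had drifted.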
Having a MODE aggregate function, which returns the mode (most common value) of a group/window, would be very useful. For example, if the column type is a number it finds the most common number, and if the column type is a string it finds the most common string. I appreciate that doing this in a scalable way will require some thinking/discussion.
0
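What such a MODE aggregate would compute can be sketched in a few lines with the standard library (illustrative only; a real SQL aggregate would need to define tie-breaking and scale beyond memory):

```python
# Hedged sketch of the MODE aggregate: the most frequent value in a group.
from collections import Counter

def mode(values):
    # most_common orders by count; ties keep first-insertion order.
    return Counter(values).most_common(1)[0][0]

assert mode([3, 1, 3, 2, 3]) == 3        # most common number
assert mode(["a", "b", "b"]) == "b"      # most common string
```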
When I worked on [elided], I intended to expose all the needed public methods that [elided] exposes, but I forgot to add getDiscountOverlaps.
0
Priority is set to Blocker, as it is an incompatibility between minor version releases. Customer impact: new clients cannot reliably communicate with the server. The new client is able to connect to the server; however, when the server is reloaded, the following error occurs:
{noformat}
ERROR ... Failed to handle failover: ActiveMQIncompatibleClientServerException
	[stack trace elided]
{noformat}
It works correctly with a new server and an old client.
1
{code}
Caused by: java.lang.StringIndexOutOfBoundsException: String index out of range
	[stack trace elided]
{code}
0
Trying to "Show in Web Browser" OpenShift applications causes an error in the tooling. It is a similar error for all "show in browser" use cases.

Error for "Show in Web Browser" opened via the context menu of a project:
{code}
java.lang.IncompatibleClassChangeError: Expecting non-static method org.jboss.tools.foundation.ui.util.BrowserUtility.checkedCreateInternalBrowser(Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;Lorg/eclipse/core/runtime/ILog;)V
	[stack trace elided]
{code}
The same IncompatibleClassChangeError, with the same expected method signature, occurs when opening from the context menu of a route and from the context menu of an application.
1
User is unable to open two different acceptance tests. Steps to reproduce: fixed. Do the last step in multiple browser sessions.
1
The following code:
{code}
val ds = Seq("a", "b", "c")...
{code}
throws an exception:
{noformat}
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree: ...
	[stack trace elided]
Caused by: java.lang.RuntimeException: Couldn't find ... in ...
	[stack trace elided]
{noformat}
This is because the embedSerializerInFilter rule drops the exprIds of the output of the surrounded SerializeFromObject. The analyzed and optimized plans of the above example are as follows:
{noformat}
== Analyzed Logical Plan ==
string
Project [...]
+- SerializeFromObject [... true as ...]
   +- Filter <function>.apply
      +- DeserializeToObject newInstance(class ...)
         +- LocalRelation [...]

== Optimized Logical Plan ==
Project [...]
+- Filter <function>.apply
   +- LocalRelation [...]
{noformat}
0
This issue provides a new, handy utility class that keeps track of overridden deprecated methods in non-final subclasses. This class can be used in new deprecations; see the Javadocs for an example.
0
Parsing the following file does not honor keys that are string literals, such as:
{code}"some.key" : "hi there, would you like to play with me"{code}
This yields:
{code}key: some.key, val: hi there{code}
rather than the expected result of:
{code}key: some.key, val: hi there, would you like to play with me{code}
0
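The quote-honoring behavior the record above expects can be sketched with a quote-aware tokenizer from the standard library; this is a toy illustration, not the parser from the report:

```python
# Hedged sketch: split a "key value" line while honoring double-quoted
# string literals, so a quoted key like "some.key" and a quoted value with
# punctuation each survive as a single token.
import shlex

def parse_line(line):
    tokens = shlex.split(line)          # shlex keeps quoted tokens intact
    key, value = tokens[0], " ".join(tokens[1:])
    return key, value

key, value = parse_line('"some.key" "hi there, would you like to play with me"')
assert key == "some.key"
assert value == "hi there, would you like to play with me"
```

A naive whitespace split of the same line would shatter both the key and the value, which is the class of bug the report describes.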
make unittest results in partial success of the UT, failing at func [elided], which is part of package clustertest:
{noformat}
TestMembershipReconfiguration ... Error: received unexpected ...: context deadline exceeded
{noformat}
FWIW, the logs are accessible at the URL below.
0
Check that the path through sort that notices low-memory conditions and causes the sort to spill handles out-of-memory condition management. Also check to make sure we handle queries and fragment failures properly under these conditions. HashJoin, HashAgg, and TopN use large amounts of memory and may be unable to complete if their memory needs can't be met; for all others, the idea is that they can complete if they get their reservation.
1
Zeppelin's [log configuration] is propagated to the AM, and the AM cannot initialize properly because Zeppelin's [configuration] will use zeppelin.log.file, which is only for the Zeppelin server. Here's the error message:
{noformat}
setFile(null,true) call failed.
java.io.FileNotFoundException: ... (No such file or directory)
	[stack trace elided]
Either File or DatePattern options are not set for appender ...
Class path contains multiple ... bindings:
Found binding in ...
Found binding in ...
See ... for an explanation. Actual binding is of type ...
{noformat}
0
A nightly build on the local filesystem is failing with the following error:
{code}
... in test_native_functions_race: self.client.execute(setup_query)
... in execute: return self.__beeswax_client.execute(sql_stmt, user=user)
... in execute: handle = self.__execute_query(query_string.strip(), user=user)
... in __execute_query: handle = self.execute_query_async(query_string, ...)
... in execute_query_async: return self.__do_rpc(lambda: self.imp_service.query(query))
... in __do_rpc: raise ImpalaBeeswaxException(self.__build_error_message(b), b)
E   ImpalaBeeswaxException: ImpalaBeeswaxException:
E    INNER EXCEPTION: ...
E    MESSAGE: AnalysisException: Could not load binary: .../test-warehouse/lib-test-udfs/...so
E    Failed to get file info: file:/test-warehouse/lib-test-udfs/...so
E    No such file or directory
{code}
1
For JBIDE [version], please perform the following:
* Make sure your component has no remaining unresolved JIRAs set for fixVersion [elided].
* Ensure your component features/plugins have been properly upversioned, e.g. from ... to ... (Note: if you already did this for the previous milestone, you do not need to do so again.)
{code}mvn -Dtycho.mode=maven ...{code}
* Update your root pom to use parent pom version [elided]:
{code}<groupId>org.jbosstools</groupId><artifactId>parent</artifactId>...{code}
* Ensure you've built and run your plugin tests using the latest target platform versions:
{code}mvn clean verify ... {code}
or, if [elided] is still being staged:
{code}mvn clean verify ...{code}
* Branch from your existing branch into a new branch:
{code}git checkout ...; git pull origin ...; git checkout -b ...; git push origin ...{code}
* Close (do not resolve) this JIRA when done.
Search for all task JIRAs, or search for VPE task JIRAs.
1
Query to get all the channels for a given peer; query to get the instantiated chaincodes on a channel; query to get the installed chaincodes on a peer.
1
Hibernate is LGPL-licensed, so we need to stop using it. We can easily switch to a JDBC-based solution.
1
Links that are made with /directtool do not work correctly in the PDA portal; the user is taken to the full portal instead.

Steps to reproduce:
# Log in from a mobile device
# Select a site
# Select Message Center Notifications
# Select the "new messages" link or "new in forums" link

Expected behaviour: user is taken to the correct place in the relevant tool, within the PDA portal.
Actual behaviour: user is taken to the correct place in the relevant tool; however, they are no longer in the PDA portal, they are in the full portal.

The problem: the DirectToolHandler handles the URL correctly, but then calls portal.forwardPortal(), which is implemented as SkinnableCharonPortal.forwardPortal(). In this method is the following:
{noformat}
String portalPath = ServerConfigurationService.getString("portalPath", "/portal");
String portalPlacementUrl = portalPath + getPortalPageUrl(p);
res.sendRedirect(portalPlacementUrl);
{noformat}
This means that the URL will always be prefixed by /portal, as that is the URL that is configured in sakai.properties. It does not take into account that the specific user might be on the PDA portal.

Suggested solution: add a check in here to see if the user is on the PDA portal and add /pda in between, so it would end up looking like:
{noformat}
String pdaFragment = ...; // some logic; will be "pda" or null
String portalPlacementUrl = portalPath + pdaFragment + getPortalPageUrl(p);
{noformat}
This is a continuation of [elided].
0
For JBIDE [version], please perform the following tasks. If you contributed anything to the [elided] or [elided] builds, you will need to perform the following in the [elided] branch:
* Update your root pom to use parent pom version [elided]:
{code}<groupId>org.jbosstools</groupId><artifactId>parent</artifactId>{code}
* Ensure you've built and run your plugin tests using the latest target platform version:
{code}mvn clean verify ...{code}
or, once released:
{code}mvn clean verify ...{code}
* Close (do not resolve) this JIRA when done.
Search for all task JIRAs, or search for Central task JIRAs.
1
Windows Server [version] is expected to be available around the end of September; we should support it with Tomcat and HTTP ASAP.
1
Sakai does not start successfully when OSP is included in a trunk build, with either an empty sakai.properties, HSQL, or MySQL (either MyISAM or InnoDB). If you remove the OSP directory, it starts successfully. Environment: Linux, JVM [elided], MySQL connector [elided], trunk, -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps. The first destroy is: INFO main org.sakaiproject.event.impl.BaseEventTrackingService. See the attached catalina.out for startup logs, configured to use MySQL against an existing database created with a prior instance when OSP was not included. Similar results are visible using only HSQL. Similar problems have been reported by [elided], also Linux/MySQL.
1
Previously, specifying a config file was ignoring all other command-line arguments, which is just silly for input files.
0
I am attempting to add a controller as follows:
{code}
roo> controller class --class ~.web.IndexController --preferredMapping ...
Created SRC_MAIN_JAVA/edu/unlv/cs/ladders/web/IndexController.java
Managed SRC_MAIN_WEBAPP/WEB-INF/web.xml
Managed ROOT/pom.xml
Undo manage ROOT/pom.xml
Undo manage SRC_MAIN_WEBAPP/WEB-INF/web.xml
Undo create SRC_MAIN_JAVA/edu/unlv/cs/ladders/web/IndexController.java
File already exists
{code}
I try to remove or rename the index.jspx, but the same message appears. I exit out of Roo and delete the file, but Roo creates the file on startup. I try manually creating the controller, but it is not being called. I am new to Roo, so I may be doing something wrong; any help will be appreciated.
0
While testing transaction recovery in OpenShift, I see that scale-down of a pod that has an in-doubt transaction on it isn't successful.

Scenario:
* EJB client app (txclient pod), EJB business method: lookup remote EJB, enlist XA resource to transaction, enlist XA resource to transaction, call remote EJB.
* EJB server app (txserver pod), EJB business method: enlist XA resource to transaction, enlist XA resource to transaction. The EJB server XA resource fails with an XAException (XAException.XAER_RMFAIL).

Then the test calls scale-down on the txserver pod, but the scale-down never completes. Log from the recovery scale-down processing:
{noformat}
... recovery listener for processing scaledown ... to find the transaction recovery port to force scan at pod ...
... recovery scan ... transactions in object store ... scan to be invoked as the transaction log storage is not empty for pod scaling down ... pod transaction list map ...
... desired replica of pods to be ... errors ... down statefulset ... verification if pods are clean ... was not fully scaled to the desired replica size ... some pods were not cleaned by recovery ... verify status of the WildFlyServer ...
... error: Operation cannot be fulfilled on statefulsets.apps "txserver": the object has been modified; please apply your changes to the latest version and try again
(messages repeat)
{noformat}
1
When I run [elided] on my machine, which connects to the network by proxy, it fails to execute sudo apk add --update py-pip in the Docker container, because sudo resets all environment variables, including the proxy, so the download fails. With sudo -E it will execute the command with the environment of the user, and thus the proxy can be valid.
0
Dev tag: [elided]; diff to previous integrated release: [elided]. As WildFly Core incorporates Undertow, we need to remove the overridden Undertow version at [elided] when upgrading WildFly Core.
1
I have the same problem, but with Java [version]. I compiled and replaced the SOAPFaultBuilder class in axis.jar with the newest one. With Java [version] everything works perfectly, but using it with Java [version] I received this error instead: DOMException: WRONG_DOCUMENT_ERR; nested exception is java.lang.IllegalArgumentException: Node is not a org.apache.crimson.tree implementation.
0
The console "retry message" action is broken; it throws the below exception:
{code}
Operation retryMessage(java.lang.String) failed due to java.lang.IllegalArgumentException: No operation retryMessage(java.lang.String) on MBean ... exists. Known signatures: ...(long)
{code}
0
The Harmony method new URI(null, null, null, null, null, null).getAuthority() throws URISyntaxException, while the RI returns null.

Test.java:
{code}
import java.net.*;
public class Test {
    public static void main(String[] args) throws Exception {
        System.out.println("res: " + new URI(null, null, null, null, null, null).getAuthority());
    }
}
{code}
On the RI (... -cp . -showversion Test): ... Runtime Environment, Standard Edition (build ...), BEA WebLogic JRockit(R) (build ..., GC: system optimized over throughput, initial strategy singleparpar) ... res: null
On Harmony (... -cp . -showversion Test): ... (subset) (c) Copyright The Apache Software Foundation or its licensors, as applicable ...
Exception in thread "main" java.net.URISyntaxException: Expected host at index ...
	[stack trace elided]
0
This bug was imported from another system and requires review from a project committer before some of the details can be marked public. For more information about historical bugs, please read "Why are some bugs missing information?". You can request a review of this bug report by sending an email to [elided]; please be sure to include the bug number in your request.
1
Removed the retry request from the WebHDFS dispatch; addressed YARN [elided]. This should be addressed in the documentation as well.
1
I get this error when clicking the "Create Issue" button in JIRA; I am able to create issues just fine when the toolkit plugin is disabled. Error from the log:
{noformat}
ERROR Exception caught in page: null
java.lang.NullPointerException
	[stack trace elided]
{noformat}
1
This epic is a follow-up of [elided]. Today, OpenShift Service Mesh defines a deployment model where Prometheus/Grafana/Kiali instances are co-located with the mesh control-plane instance. If, as a result of [elided], the previous model changes, that may impact the Kiali model as well. It's expected that at some point OSSM will delegate the integration of Prometheus to the instance deployed into the cluster. That scenario may be an opportunity for Kiali in the following areas:
* A shared instance of Prometheus between multiple federated meshes may improve the federation telemetry; Kiali could then improve the visibility of these scenarios within a single cluster.
* If OSSM integrates with addons that are deployed at cluster-instance level (Prometheus, Grafana, Jaeger), that may trigger a discussion about whether Kiali could also benefit from being located at instance level.
Depending on the progress of [elided], some questions should be answered from the Kiali perspective (assuming the scope of this task is single-cluster; multi-cluster would be a different effort):
* If the Prometheus instance is located per instance/cluster instead of per control plane, could Kiali be deployed per instance/cluster as well?
* Would a Kiali instance be able to connect with multiple control planes deployed within the same cluster?
* Could a single Kiali instance show end-to-end federated scenarios within the same cluster?
0
The stable channel should roll out a new edge in a phased manner, so we can use the telemetry feedback to pull an edge with issues before it impacts all customers.
0
Evaluation cannot start up using Sakai [version] because of circular dependencies. Caused by:
{noformat}
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.sakaiproject.evaluation.logic.EvalJobLogic' defined in file [...]: Cannot resolve reference to bean 'org.sakaiproject.evaluation.logic.EvalEmailsLogic' while setting bean property 'emails'; nested exception is org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'org.sakaiproject.evaluation.logic.EvalEmailsLogic': bean with name 'org.sakaiproject.evaluation.logic.EvalEmailsLogic' has been injected into other beans in its raw version as part of a circular reference, but has eventually been wrapped (for example as part of auto-proxy creation). This means that said other beans do not use the final version of the bean. This is often the result of over-eager type matching; consider using getBeanNamesOfType with the allowEagerInit flag turned off, for example.
{noformat}
It looks like a series of dependencies were added at some point to the emails logic, way back (according to the history it was done in March by rwellis). This is something Spring should have warned us about, but it also speaks to some inconsistent logic in the usage of the various beans, and means there are probably methods which should have gone into other beans but ended up in the ones which are now referenced circularly, specifically the EvalJobLogic and the current EvalEmailsLogic ones.

Here are the beans now:
{noformat}
<bean id="org.sakaiproject.evaluation.logic.EvalAssignsLogic" class="org.sakaiproject.evaluation.logic.impl.EvalAssignsLogicImpl" init-method="init" ... />
<bean id="org.sakaiproject.evaluation.logic.EvalEmailsLogic" class="org.sakaiproject.evaluation.logic.impl.EvalEmailsLogicImpl" init-method="init" ... />
<bean id="org.sakaiproject.evaluation.logic.EvalJobLogic" class="org.sakaiproject.evaluation.logic.impl.scheduling.EvalJobLogicImpl" init-method="init">
  <property name="scheduledInvocationManager" ref="org.sakaiproject.api.app.scheduler.ScheduledInvocationManager"/>
</bean>
<bean id="org.sakaiproject.evaluation.logic.EvalEvaluationsLogic" class="org.sakaiproject.evaluation.logic.impl.EvalEvaluationsLogicImpl" init-method="init" ... />
{noformat}
Note: there are a series of circular references here; these beans depend on each other, and all these circles will have to be broken for this to start up in Spring: emails/assigns, emails/evaluations, emails/jobs, evaluations/assigns, jobs/evaluations, emails, assigns. This would be fixed if the others are fixed. This is not allowed in Spring [version] and should not have worked in Spring [version], really, except that Sakai circular checking was disabled. These are also not the only circular references that have crept in there; the others will need to be fixed as well.

Here are the original beans from before the circular dependencies:
{noformat}
<bean id="org.sakaiproject.evaluation.logic.EvalAssignsLogic" class="org.sakaiproject.evaluation.logic.impl.EvalAssignsLogicImpl" init-method="init">
  <property name="dao" ref="org.sakaiproject.evaluation.dao.EvaluationDao"/>
  <property name="externalLogic" ref="org.sakaiproject.evaluation.logic.EvalExternalLogic"/>
</bean>
<bean id="org.sakaiproject.evaluation.logic.EvalEmailsLogic" class="org.sakaiproject.evaluation.logic.impl.EvalEmailsLogicImpl" init-method="init">
  <property name="dao" ref="org.sakaiproject.evaluation.dao.EvaluationDao"/>
  <property name="externalLogic" ref="org.sakaiproject.evaluation.logic.EvalExternalLogic"/>
  <property name="settings" ref="org.sakaiproject.evaluation.logic.EvalSettings"/>
  <property name="evaluationLogic" ref="org.sakaiproject.evaluation.logic.EvalEvaluationsLogic"/>
</bean>
{noformat}
1
hi the nagios alert for namenode process reports error if nn port is changed from default to example etcnagiosobjectshadoopservicescfg define service hostname linuxhdp use hadoopservice servicedescription namenodenamenode process on linuxhdp servicegroups hdfs checkcommand c normalcheckinterval retrycheckinterval maxcheckattempts the port should either be configurable or should take the value from coresitexml based on value of below param fsdefaultfs if we change etcnagiosobjectshadoopservicescfg it is getting overwritten by ambari
0
when ejb client uses jbosslocaluser for silent authentication then during invocations he is seen as anonymous instead of localthis also means that he is not able to invoke methods annotated with rolesallowed which is supposed to allow everyone with an established security context on eap this works as expected and the ejb calls are performed as the user named local and it is allowed to invoke methods annotated rolesallowed
1
when building the ui locally on a mac laptop the following edit is done by the buildnoformat atezuisrcmainwebappbowerjson btezuisrcmainwebappbowerjson resolutions handlebars jqueryui jquerymousewheel antiscroll ember noformat
1
filing as bug for tracking reference on may at am chris hostetter wrote spannotquerys hashcode method makes two references to includehashcode but none to excludehashcode this is a mistake yes no date tue may erik hatcher to javadevluceneapacheorg subject re spannotquery hashcode cut paste error yes this is a mistake im happy to fix it but looks like you have other patches in progress
0
compared to eap administration and configuration guide there is almost no documentation about web console in eap configuration guide
1
groovy x groovy println x groovy go groovy x groovy println x groovy go i understand that minus doesnt mean to remove elements but to return the difference of two collections it does return a set plus in contrast adds an element instead of returning the union of elements as a second difference the result is a list not a set there are operations for sets like union and difference that have this behaviour and sets have no doubled entries by definition for lists we have add and remove the problem is that the current implementation of plus and minus is a mix between the two concepts suggestion make different implementations for sets and lists for sets with union difference meaning for lists with add remove meaning
0
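The row above turns on the difference between set-style difference (drop every matching element) and list-style removal (drop one occurrence). A minimal Java stand-in for that distinction (class and method names here are illustrative only, not Groovy's API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class PlusMinusSemantics {
    // Set/difference semantics: every occurrence of v is dropped.
    public static List<Integer> differenceAll(List<Integer> xs, int v) {
        List<Integer> out = new ArrayList<>(xs);
        out.removeAll(Collections.singleton(v));
        return out;
    }

    // List add/remove semantics: only the first occurrence of v is dropped.
    public static List<Integer> removeOne(List<Integer> xs, int v) {
        List<Integer> out = new ArrayList<>(xs);
        out.remove(Integer.valueOf(v));
        return out;
    }

    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 2, 3);
        System.out.println(differenceAll(xs, 2)); // [1, 3]
        System.out.println(removeOne(xs, 2));     // [1, 2, 3]
    }
}
```

Under the report's suggestion, a set minus would behave like differenceAll and a list minus like removeOne.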
currently clientsubscription belongs to cacheserver clientsubscription defines the overflow attributes for the hacontainer we allow to create multiple cache servers and each gateway receiver will create a cache server cacheclientnotifier and hacontainer are singletons above design caused the clientsubscription definition in first cacheserver including gateway receivers will override that in other cacheservers since hacontainer is better to be kept as singleton for better performance the cacheclientnotifier should be moved to cache level and disallow creating multiple cacheservers explicitly it does not make sense to customers on the other hand cacheclientnotifier should not be a singleton it can be an instance object of acceptorimpl and will not keep the clientsubscription definition for hacontainer
0
we suffered a regression recently due to lack of testing coverage for group permissions for ac with the recent perf boost provided by it wouldnt be a bad idea to add checks for group level users to applicable unit tests in testaccesscontroller
0
karaf is now using jline the karaf shell works pretty well however when stopping karaf using ctrl-d the linux terminal is left in a bad state input and output stream are lost we have to do a reset to have a terminal back to normal
0
problems with current proton logging there are multiple logging systems that are not connected pnlog is used for some things that are not connected to an amqp connection but not all things pntransportt has its own logging system that is the major logging system in proton with a pntransportlogf and friends api and an ability to sink its output log messages using pntransportsettracer however not everything that might need to log messages is connected to a pntransportt so it makes a lot more sense for logging to be its own system that the pntransportt is connected to rather than the other way around the logging only has a set of vague somewhat amorphous trace flags to decide which messages get logged currently the only coherent use for this subsystem is to set the environment variable pntracefrm to get amqp frame traces there are other environment variables which produce output and flags can be set programmatically by client programs but which flags produce what output is not very systematic there is no way to output only logging related to ssltls for example the callback function that an application can attach to receive the log messages only gets the message and maybe the pntransportt that is associated but no indication of the severity or what part of the library it came from
0
orgjunitcomparisonfailure planafter expected but was at at at at at at at method
1
this ticket is about apponly files which should be cleaned after appfinishi see these undeleted after should check for other leftover files too if any
1
having an unsatisfied dependency at injection point of class x while injecting an instance of interface y can have several causes the jar of class yimpl is not in the classpath the jar of class yimpl is in the classpath but the beansxml does not exist or is not being picked up for whatever reason such as problems with webinfbeansxml on some containers the jar of class yimpl is in the classpath the beansxml is picked up but the qualifiers etc dont matchso instead of just saying unsatisfied dependency it should say something like this unsatisfied dependency and there is no concrete implementing class of interface orgappy in the classpath unsatisfied dependency and the concrete implementing classes are not loaded as managed beans through a beansxml unsatisfied dependency and none of the managed beans of the same type matchnote springs exceptions differentiate between these cases they explicitly define their appcontextxml set so case is impossible and case throws a app context file not found exception
0
error during request to stacks api for config validation when trying to modify configurations with non admin user with clusteroperate permission
1
signup sign there should be a bit more space beneath the view label and all future meetings dropdown
0
mavens reproducible builds require the projectbuildoutputtimestamp property to be defined the maven release plugin should permit automatically updating the value when doing a release
0
indexwriterislocked or indexreaderislocked do not work with nativefslockfactory the problem is that the method nativefslockislocked just checks if the same lock instance was locked before lock != null if the lockfactory created a new lock instance this always returns false even if its locked
1
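The flaw described above, an isLocked check that consults only per-instance state, can be modeled in a few lines of Java (the class below is an illustrative toy, not Lucene's actual API):

```java
import java.util.HashSet;
import java.util.Set;

// Toy model: a lock whose buggy isLocked() checks only whether *this* instance
// obtained the lock (like a "lock != null" field test), so a freshly created
// instance for an already-held lock name wrongly reports "unlocked".
public class PerInstanceLock {
    private static final Set<String> held = new HashSet<>(); // the shared truth
    private final String name;
    private boolean obtainedByThisInstance;

    public PerInstanceLock(String name) { this.name = name; }

    public boolean obtain() {
        if (!held.add(name)) return false;
        obtainedByThisInstance = true;
        return true;
    }

    // Buggy: per-instance check, always false on a fresh instance.
    public boolean isLockedBuggy() { return obtainedByThisInstance; }

    // Fixed: consult the shared state for this lock name.
    public boolean isLockedFixed() { return held.contains(name); }
}
```

A second instance created for the same lock name, which is what a static isLocked helper effectively does, shows the bug: the per-instance check reports unlocked while the name is still held.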
integration pack eg with kie introduced a different groupid for bpm and rules components code orgjbossintegrationfuse switchyardcomponentbpm code notice that the groupid was changed from orgswitchyardcomponents to orgjbossintegrationfuse the following problems occur with current switchyard tooling the bpm and rules are added with wrong groupid must be fixed manually the above problem occurs every time we do a bpm rules change in project capabilities in problems view there is an error required capability missing after fixing the groupid
1
change codejava public static List<String> getAllPartitionPaths(FileSystem fs, String basePathStr, boolean useFileListingFromMetadata, boolean verifyListings, boolean assumeDatePartitioning) throws IOException { if (assumeDatePartitioning) { return getAllPartitionFoldersThreeLevelsDown(fs, basePathStr); } else { HoodieTableMetadata tableMetadata = HoodieTableMetadata.create(fs.getConf(), basePathStr, tmp, useFileListingFromMetadata, verifyListings, false, false); return tableMetadata.getAllPartitionPaths(); } } code is the current implementation where HoodieTableMetadata.create always creates HoodieBackedTableMetadata instead we should create FileSystemBackedTableMetadata if useFileListingFromMetadata is false anyways this helps address change on master we have the HoodieEngineContext abstraction which allows for parallel execution we should consider moving it to hudi-common its doable and then have FileSystemBackedTableMetadata redone such that it can do parallelized listings using the passed in engine either HoodieSparkEngineContext or HoodieJavaEngineContext HoodieBackedTableMetadata.getPartitionsToFilesMapping has some parallelized code we should take one pass and see if that can be redone a bit as well food for thought change there are places where we call fs.listStatus directly we should make them go through the HoodieTable.getMetadata route as well essentially all listing should be concentrated to FileSystemBackedTableMetadata
1
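The proposed change above, creating FileSystemBackedTableMetadata when useFileListingFromMetadata is false instead of always building HoodieBackedTableMetadata, is essentially a factory choosing a listing strategy by flag. A simplified, self-contained sketch (the interface and methods below are stand-ins, not Hudi's real signatures):

```java
import java.util.Arrays;
import java.util.List;

public class MetadataFactory {
    // Stand-in for Hudi's table-metadata abstraction.
    interface TableMetadata { List<String> getAllPartitionPaths(); }

    // Stand-in for FileSystemBackedTableMetadata: lists partitions directly.
    static TableMetadata fileSystemBacked() {
        return () -> Arrays.asList("listed-from-filesystem");
    }

    // Stand-in for HoodieBackedTableMetadata: reads the metadata table.
    static TableMetadata hoodieBacked() {
        return () -> Arrays.asList("listed-from-metadata-table");
    }

    // Proposed selection: only pay for the metadata-table reader when enabled.
    public static TableMetadata create(boolean useFileListingFromMetadata) {
        return useFileListingFromMetadata ? hoodieBacked() : fileSystemBacked();
    }
}
```

Concentrating the choice in one factory also makes it easy to route all direct fs.listStatus call sites through the same abstraction, as the report suggests.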
the sample provided for regexextract is not working a load foreach a generate b the script will fail with the below error error orgapachepigtoolsgruntgrunt error unexpected character
0
it would sometimes be beneficial to know where a dataset was loaded from for instance when attempting to dump an evaluation object to a config file to do this we will need to add an attribute to the dataset class and set this on load in the datasource modules
0
we need to reflect the delete related changes in our docsspecs
0
steps to open you will see this class extends token however there is no token import and there is no token class in the same package there is a class in the mxmlcjar file that ships with the sdk compiler actual results cant compile code expected results workaround if any none
0
get an error when using solr when distance is calculated for the boundary box past degreesaug pm orgapachesolrcommonsolrexception logsevere javalangillegalargumentexception illegal lattitude value at at at at at at at at at at at at at at at at at at at at at at at at
0
while trying to run the app with we are facing the below exception error error converting bytecode to dex cause dex cannot parse version byte code this is caused by library dependencies that have been compiled using java or above if you are using the java gradle plugin in a library submodule add targetcompatibility to that submodules buildgradle file while parsing is giving below warnings error processing broken class file (repeated for each of the broken class files)
1
at the tcnative jar in the multiarch container image is out of sync with that in the broker distro itself and needs to be updated to match note that does contain the required sslsessioncacheclass unlike found for hence the addressed this issue is a follow up on where the tcnative versions were updated to to be in sync with the distro for this issue we are concerned with the broker aligned release of the broker on openshift container image which will need to align with netty tcnative
0
the code here does a buffer copy to compute checksum this needs to be avoided codejava /** Computes checksum for given data. @param byteString input data in the form of ByteString @return ChecksumData computed for input data */ public ChecksumData computeChecksum(ByteString byteString) throws OzoneChecksumException { return computeChecksum(byteString.toByteArray()); } code
0
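One way to avoid the toByteArray() copy flagged above is to feed the checksum a read-only buffer view; protobuf's ByteString.asReadOnlyByteBuffer() provides exactly such a view. A self-contained sketch using a plain ByteBuffer as a stand-in for that view (the class and method names here are illustrative):

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class NoCopyChecksum {
    // Computes a CRC32 over a buffer view without the caller materializing a
    // byte[]. With protobuf, the caller would pass
    // byteString.asReadOnlyByteBuffer() instead of byteString.toByteArray().
    public static long crcOf(ByteBuffer view) {
        CRC32 crc = new CRC32();
        crc.update(view); // CRC32.update(ByteBuffer) consumes the buffer directly
        return crc.getValue();
    }
}
```

The view-based and array-based computations agree, so the copy can be dropped without changing results, assuming the real checksum implementation also accepts a ByteBuffer.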
this quick fix should offer to delete typed annotation
0
i filed but for now we should pin the dependency to
1
during editing a grails controller sts locks up and eventually a stackoverflowerror appears error popup states error occurred during requesting java ast from selection
1
there are two undocumented jdg projects still valid jdgremotecache jdgremotecachematerialization and in dynamicvdbmaterialization there is an undocumented vdb for internal materialization portfoliointermatvdbxml
1
according to the example atit shows an example of how to deploy an app using the eap xp container images noformatoc newapp p p p imagestreamnamespaceeapdemo p sourcerepositoryurl p p galleonprovisionlayersjaxrsserver p contextdirhellowordrs noformatunfortunately the use ofcodejava galleonprovisionlayersjaxrsservercoderesults in a server that does not have any microprofile subsystems and therefore doesnt seem like a good example to use in the xpmicroprofile docsadditionally it usesnoformat p sourcerepositoryurl p p contextdirhellowordrs noformatbut the named branch in this repository has no directory called hellowordrs or helloworldrs for that matter
1
aws efs instance and network type support support for aws efa type machinesets priority and mvp mapi is enhanced to enable aws efa instances and efa network type support for autoscaling support for aws placement groups pg priority and mvp users would desire to create efa instances in the same aws placement group to get best network performance within that aws placement group aws efa operator for easy setup and to manage lifecycle of the efa instances priority deploy aws efa kubernetes device manager plugin daemonset priority create efa security group and apply it to the network interface for the created node priority provide an option to apply hugepages configuration to target cluster priority provide an option to apply efa machineset to target cluster priority optionally run a test to ensure that the efa interfaces are up functional priority increase memlock at the efa instance rhcos crioconf level to unlimited
1
jobhistoryserver fails to pass service check in kerberized cluster due to kerberos to local account mapping failure codeorgapachehadoopipcremoteexceptionorgapachehadoopsecurityaccesscontrolexception permission denied userjhs accessreadexecute authtolocal fails to map jhshost to mapred user
1
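A mapping failure like the one above is usually addressed with a hadoop.security.auth_to_local rule in core-site.xml that maps the jhs service principal to the mapred local account. A hedged example, where the realm name and principal format are assumptions:

```xml
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](jhs@EXAMPLE.COM)s/.*/mapred/
    DEFAULT
  </value>
</property>
```

Here [2:$1@$0] rewrites a two-component principal such as jhs/host@EXAMPLE.COM into jhs@EXAMPLE.COM, which the rule then maps to the mapred user.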
guava is shaded inside sparkcore itself this causes build error in multiple components including graphx mllib sql when package comgooglecommon on the classpath incompatible with the version used when compiling utilsclass bad symbolic reference a signature in utilsclass refers to term util in package comgooglecommon which is not available it may be completely missing from the current classpath or the version on the classpath might be incompatible with the version used when compiling utilsclass while compiling sparkgraphxsrcmainscalaorgapachesparkgraphxutilbytecodeutilsscala during phase erasure library version version compiler version version
1
noformatnopaneltruei create my prioject and use tortoise to maintain my work copyes when i do commit a file tortoise has conflict error apache has permission on all directories from svn i tested full permission in svn directory and the same error occurs i also tested creating a new project to see if the error is not related to cacarcteres invalid or corrupted database but in all cases the error persistsmy apache configuration davmodule loadmodule modules moddavsodavsvnmodule loadmodule modules moddavsvnsoauthzsvnmodule loadmodule modules modauthzsvnso dav svn svnparentpath data svn disable factor authentication svnpathauthz off limit write permission to list of valid users require ssl connection for password protection sslrequiressl authtype basic authname subversion authuserfile require valid user apache error could not delete could not abort transaction transaction cleanup failed cant remove b cant remove file b no such file or directory i do not know how to debug this error could you help menoformatoriginal issue reported by felypesantos
1
trying to test similar scenario as in also in elytron way it means same http client for requests on one application in first request negotiation is performed as expected second initial request ends immediately with no wireshark log strange is after first request no http session is established for some reasondelegated credential is available the use of the cached gsscontext should trigger that error as the callback is duplicated after authentication has completed darran can reproduce this even in firefox i info main testing testkdcnotaccessedoneachrequestdebug is true storekey false useticketcache false usekeytab false donotprompt false ticketcache is null isinitiator true keytab is null is true principal is null tryfirstpass is false usefirstpass is false storepass is false clearpass is falserefreshing kerberos configuration user entered username is succeeded entering logout logged out trace default created httpserverauthenticationmechanism for mechanism trace default handling mechanisminformationcallback typehttp namespnego hostnamelocalhostlocaldomain trace default evaluating spnego request cached gsscontext trace default obtaining gsscredential for the service from callback trace default no valid cached credential obtaining new trace default logging in using logincontext and subject info default debug is true storekey true useticketcache false usekeytab true donotprompt false ticketcache is null isinitiator false keytab is is false principal is httplocalhostlocaldomainjbossorg tryfirstpass is false usefirstpass is false storepass is false clearpass is info default principal is info default will use info default commit succeeded info default trace default logging in using logincontext and subject subject principal httplocalhostlocaldomainjbossorg private credential for httplocalhostlocaldomainjbossorg trace default creating gssname for principal info default found keytab for info default found keytab for trace default obtained gsscredentialcredential trace 
default handling servercredentialcallback successfully obtained credential type typeclass orgwildflysecuritycredentialgsskerberoscredential algorithmnull trace default using spnegoauthenticationmechanism to authenticate httplocalhostlocaldomainjbossorg using the following mechanisms trace default caching gsscontext trace default caching kerberosticket trace default sent http authorizations trace default request lacks valid authentication trace default created httpserverauthenticationmechanism for mechanism trace default handling mechanisminformationcallback typehttp namespnego hostnamelocalhostlocaldomain trace default evaluating spnego request cached gsscontext trace default obtaining gsscredential for the service from callback trace default used cached gsscredential gsscredential httplocalhostlocaldomainjbossorg accept httplocalhostlocaldomainjbossorg accept trace default handling servercredentialcallback successfully obtained credential type typeclass orgwildflysecuritycredentialgsskerberoscredential algorithmnull trace default using spnegoauthenticationmechanism to authenticate httplocalhostlocaldomainjbossorg using the following mechanisms trace default caching gsscontext trace default caching kerberosticket trace default sent http authorizations trace default processing incoming response to a trace default gsscontext establishing sending negotiation token to the trace default sending intermediate challenge info main negotiate response in http headertagged der sequence tagged der tagged trace default created httpserverauthenticationmechanism for mechanism trace default handling mechanisminformationcallback typehttp namespnego hostnamelocalhostlocaldomain trace default evaluating spnego request cached gsscontext trace default sent http authorizations trace default processing incoming response to a info default entered with info default java config name info default loaded from java info default keytabinputstream readname info default keytabinputstream readname 
info default keytabinputstream readname info default keytab load entry length type info default keytabinputstream readname info default keytabinputstream readname info default keytabinputstream readname info default keytab load entry length type info default keytabinputstream readname info default keytabinputstream readname info default keytabinputstream readname info default keytab load entry length type info default keytabinputstream readname info default keytabinputstream readname info default keytabinputstream readname info default keytab load entry length type info default looking for keys for info default found unsupported keytype for info default added key info default added key info default added key info default etype info default default etypes for permittedenctypes info default etype info default memorycache add to info default krbapreq authenticate info default etype info default delegated creds have snamekrbtgtjbossorgjbossorg info default setting peerseqnumber to info default etype info default setting myseqnumber to trace default associating delegated gsscredential with trace default gsscontext established trace default principal assigning prerealm rewritten realm name postrealm rewritten realm rewritten trace default role mapping principal decoded roles realm mapped roles domain mapped roles trace default authorizing principal trace default authorizing against the following attributes trace default permission mapping identity with roles implies orgwildflysecurityauthpermissionloginpermission trace default authorization trace default runas authorization succeed the same trace default handling authorizecallback authenticationid authorizationid authorized trace default authorized by callback handler true clientname trace default credential delegation enabled delegated credential gsscredential initiate initiate trace default handling authenticationcompletecallback trace default handling securityidentitycallback identity authorizationidentityempty 
realminforealminfonamefilesystemrealm trace default gsscontext established and authorized authentication trace default role mapping principal decoded roles realm mapped roles domain mapped roles trace default sending intermediate challenge info main negotiate response in http headertagged der sequence tagged der tagged der octet string kdbtlsyg srk tagged der octet string kdbtlsyg srkdebug is true storekey false useticketcache false usekeytab false donotprompt false ticketcache is null isinitiator true keytab is null is true principal is null tryfirstpass is false usefirstpass is false storepass is false clearpass is falserefreshing kerberos configuration user entered username is succeeded entering logout logged out trace default created httpserverauthenticationmechanism for mechanism trace default handling mechanisminformationcallback typehttp namespnego hostnamelocalhostlocaldomain trace default evaluating spnego request cached gsscontext trace default principal assigning prerealm rewritten realm name postrealm rewritten realm rewritten trace default role mapping principal decoded roles realm mapped roles domain mapped roles trace default authorizing principal trace default authorizing against the following attributes trace default permission mapping identity with roles implies orgwildflysecurityauthpermissionloginpermission trace default authorization trace default runas authorization succeed the same trace default handling authorizecallback authenticationid authorizationid authorized trace default authorized by callback handler true clientname trace default credential delegation enabled delegated credential gsscredential initiate initiate trace default handling authenticationcompletecallback trace default handling securityidentitycallback identity authorizationidentityempty realminforealminfonamefilesystemrealm trace default associating delegated gsscredential with trace default spnego orgwildflysecurityhttphttpauthenticationexception callback handler failed 
for unknown reason at at orgwildflysecurityhttpimpl at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at by javalangillegalstateexception no authentication is in progress at at at at at at trace default authentication failed orgwildflysecurityhttphttpauthenticationexception http authentication failed validating request no mechanisms remain to continue authentication at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at
1
logging this to request that user ntlanglois be able to deploy release artifacts for the comcernerbunsen group into staging and promote them into the maven central repository currently i username amarvakul have permission to do so but ntlanglois should be able to as well
0
when enabling true in binjbossclixml vaulted string will get misinterpreted example if a command has then the parser sees this as namedefaultvalue with namevault escaping the sign in various ways did not help the only way is to switch back to false ideally the parser should recognize vaulted string and pass them in unchanged
0
description openshiftdevpem private ssh key from the openshiftsharedsecrets repository is going to be removed this week and references to it in our internal documentation will also be removed as winc team use this key to share access to build and test instances we need to develop a new process to do so a promising option is bitwarden which allows one to share keys and other sensitive artifacts securely between teammates if ci processes inject the openshiftdevpem private key or a matching public key into our build instances we need to follow the openshift ci documentation for adding secrets to replace it engineering details acceptance criteria cicd pipeline should work without any regression should be able to create clusters in awsazurevsphere with new solution openshiftdevpub public key is removed from vsphere golden images
0
the firstn operator is only supported by the java api see functionality needs to be ported to the scalaapi as well right now the corresponding methods are excluded from the scalaapicompletenesstest
1
when a task is submitted and takes a consumer node the code looks like this code if (tail.compareAndSetNext(tailNext, tailNext.next)) { PoolThreadNode consumerNode = (PoolThreadNode) tailNext; if (consumerNode.compareAndSet(TASK_WAITING, runnable)) { unpark(consumerNode.getThread()); result = EXE_OK; break; } // otherwise the consumer gave up or was exited already so fall out and ... } code the issue is that if the consumer is a core thread it may move back from gaveup into waiting however as it has already been removed from the list it will never be notified again this results in a hung thread that will never be unparked and will prevent the pool from shutting down
1
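The race described above can be modeled deterministically: the producer unlinks a node whose consumer has timed out, drops it when the hand-off CAS fails, and the core thread then re-arms to waiting on a node that is no longer in the queue. A single-threaded Java model (the names below are illustrative, not jboss-threads' real API):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicReference;

public class LostWakeupModel {
    static final Object WAITING = "WAITING";
    static final Object GAVE_UP = "GAVE_UP";

    static class Node {
        final AtomicReference<Object> state = new AtomicReference<>(WAITING);
    }

    // Producer path: unlink a consumer node, then try to hand the task over.
    // If the CAS fails because the consumer gave up, the node is simply
    // dropped -- it is already out of the queue and never re-linked.
    public static boolean submit(Queue<Node> queue, Runnable task) {
        Node n = queue.poll();               // node is unlinked here
        return n != null && n.state.compareAndSet(WAITING, task);
    }
}
```

Running the sequence from the report against this model leaves the node in WAITING but absent from the queue, so no future submit can ever reach it: the modeled thread would park forever.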
description of problem when kiesession is serialized a npe is thrown in see the attached stack trace for details i am not sure what exactly causes the npe to be thrown it probably depends on using some user defined facts if i removed them the npe did not occur please see the reproducer from the pull request which will be attached shortly version release number of selected component if applicable drools reproducible please run jpapersistentstatefulsessiontesttestmorecomplexrulesserialization test from the attached pull request
1
this bug was imported from another system and requires review from a project committer before some of the details can be marked public for more information about historical bugs please read why are some bugs missing informationyou can request a review of this bug report by sending an email to please be sure to include the bug number in your request
0
spanish translations of the messageproperties
0
when modifying a client in the admin ui you get a client already exists error the error can be reproduced as follows create a new client just assign name and redirect uri and save the client modify client scopes eg remove offlineaccess from assigned optional client scopes add offlineaccess to assigned default client scopes go back to settings tab make some modification eg enable implicit flow press save now you get error error client already exists the error seems to be triggered by moving client scopes between assigned optional and default also an error occurs that completely resets the client scope settings this can be reproduced by continuing the above flow press cancel to dismiss change in setting tab remove the offlineaccess scope from assigned default client scopes so that it now is in available client scopes go back to settings tab make a change and press save this will succeed but now the client scopes have been reset and the offlineaccess is in the assigned optional client scopes i am using postgres db if that should make a difference the error is triggered by a duplicate key value violates unique constraint ccliscopebind from line in is pretty sketchy to try to work around as you need to reset client scope settings before you can make any other changes and then restore your client scopes
1
generated stub throws the following exception when trying to invoke an inonly operation specified in the targeted incoming message input stream is null at at at at at
1