text_clean
stringlengths: 10–26.2k
label
int64: 0 or 1
Description / engineering description: Apparently, as explained in David Golden's response, the steps to build the tools have changed; however, the README.md has not been updated to reflect this change. Additionally, it mentioned a file set_gopath.sh that no longer exists. Scope of changes: impact to other docs; MVP work and date; resources (scope or design docs, InVision, etc.).
1
As a user, I want to use a scroll bar to scroll the license agreement so I can click the Agree button.
0
There are some explanations and many references to the concept of "natural order" (i.e. the $natural operator): the order of documents of a collection as stored on disk. At first I believed it, but I started to doubt it. After some practical tests, I find this natural order is actually an alias of insertion order. Below is my test code (document values and disk locations elided):
{code}
db.cc.find()
db.cc.insert(...)  // BulkWriteResult { writeErrors, writeConcernErrors, nInserted, nUpserted, nMatched, nModified, nRemoved, upserted }
db.cc.find()                 // _id ...
db.cc.find().showDiskLoc()   // _id ..., $diskLoc: { file: ..., offset: ... }
db.cc.remove(...)            // nRemoved ...
db.cc.find()
db.cc.find().showDiskLoc()
db.cc.insert(...)            // nInserted ...
db.cc.find()
db.cc.find().showDiskLoc()
{code}
{color:blue}The space of the removed document has been reused on disk, but has the output order changed? I think there is some mechanism to maintain insertion order that assures the above output order, which is not mentioned in the manual. Is that right?{color}
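For intuition without a server, here is a toy Python model of a storage file with a free list (an assumption about how record-space reuse behaves, not MongoDB's actual code). It shows why a scan in raw disk order can differ from insertion order once freed space is reused:

```python
class Collection:
    """Toy model: documents live in fixed slots; deleted slots go on a free list."""

    def __init__(self):
        self.slots = []      # on-disk order: index plays the role of a disk offset
        self.freelist = []   # offsets available for reuse

    def insert(self, doc):
        if self.freelist:
            off = self.freelist.pop(0)   # reuse freed space first
            self.slots[off] = doc
        else:
            off = len(self.slots)
            self.slots.append(doc)
        return off

    def remove(self, off):
        self.slots[off] = None
        self.freelist.append(off)

    def natural_scan(self):
        # a raw disk-order scan: walk the file front to back, skipping holes
        return [d for d in self.slots if d is not None]

c = Collection()
for i in range(4):
    c.insert({"_id": i})
c.remove(1)                  # free the second slot
c.insert({"_id": 99})        # reuses slot 1, not the end of the file
print([d["_id"] for d in c.natural_scan()])   # [0, 99, 2, 3]
```

In this model the reinserted document lands in the middle of a disk-order scan, so if the server's output instead preserved insertion order, some mechanism beyond a raw disk scan would have to be involved, which is exactly the question the ticket raises.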
1
This is a regression: parts of the low-level I/O methods were rewritten, and they are no longer properly throwing an EndOfStreamException, as they did before, when the server closes the socket.
1
Shards: a shard contains primary, secondary, arbiter. Shards have more than <n> records, avgObjSize ≈ <n>. This is an example from my shard; it is primary and crashed after someone queried data.
{noformat}
db.fangjia: collections ..., objects ..., avgObjSize ..., dataSize ..., storageSize ..., numExtents ..., indexes ..., indexSize ..., ok ..., gleStats: { lastOpTime ..., electionId ... }
{noformat}
MongoDB log:
{noformat}
I COMMAND  command mydata.sitelogs command: find { find: "sitelogs", filter: { _id: { $in: [...] } }, projection: { city, region, districtname, renttype, name, totalprice, area, height, floor, fitment, room, hall, toilet, pdate, url }, shardVersion: ... } planSummary: IXSCAN { _id: "hashed" } locks: { Global: { acquireCount: { r: ... } }, Database: { acquireCount: { r: ... } }, Collection: { acquireCount: { r: ... } } } protocol:op_command
I SHARDING cluster pinged successfully at ... by distributed lock pinger ..., sleeping for ...
I NETWORK  end connection ... (... connections now open)   [repeated]
E STORAGE  WiredTiger (wt_cursor search_near / handle_read) pread: failed to read ... bytes at offset ...: input/output error
I -        Invariant failure: ret resulted in status UnknownError: input/output error at src/mongo/db/storage/wiredtiger/wiredtiger_index.cpp
I CONTROL  begin backtrace
backtrace ... processInfo: { mongodbVersion ..., gitVersion ..., compiledModules ..., uname: { sysname: "Linux", release ..., version: "... SMP Fri Nov ... UTC ...", machine ... }, somap: [...] }
mongod(...) ... end backtrace
I -        ***aborting after invariant() failure
{noformat}
0
Upon inspecting a unit test, we recently discovered that the server's logic for analyzing modified and renamed paths seems to mistakenly mark a rename from x to y (like {{$set: {y: "$x"}}}) as only renaming, whereas it should also mark y as modified, since the old value is getting overridden.
0
What problem are you facing? Unable to upload files with GridFSBucket on macOS only; it works fine on Linux. What driver and relevant dependency versions are you using? <versions elided> Steps to reproduce:
{code:javascript}
const stream = require('stream');
const util = require('util');
const pipeline = util.promisify(stream.pipeline);
const { MongoClient, GridFSBucket } = require('mongodb');

const client = await MongoClient.connect('mongodb://...', { useNewUrlParser: true, useUnifiedTopology: true });
const database = client.db(...);
const bucket = new GridFSBucket(database, { bucketName: 'aaa' });
const rs = fs.createReadStream('/var/tmp/test.txt');
const ws = bucket.openUploadStream('test');
await pipeline(rs, ws);
{code}
MongoServerError: duplicate key error, collection: ..., index: ..., dup key: { files_id: ..., n: ... }
1
Since Mongo <version> we are getting spurious errors in our test suite because there is a delay between the creation of a Mongo user and the access rights actually being available, but only in sharded clusters. Our test creates a user that has a read-only role on a single database and then instantly tries reading from the database using this user. On <version> this test fails about <n>% of the time. It seems like there is a delay in the propagation of access rights. I tried adding a delay of <n> seconds between the call to create the user and the call that tests the access rights, and the issue no longer appears. So it seems like in Mongo <version> user creation and/or propagation is asynchronous, but I could not find anything in the docs about this. I tried setting the write concern to all nodes, but that does not fix it.
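Until the asynchronous propagation is documented, one client-side mitigation is to retry the first read as the new user for a short window. A hedged sketch in plain Python (a stub stands in for the real driver call; all names are illustrative):

```python
import time

def retry_until(op, is_transient, attempts=5, base_delay=0.05):
    """Run op(); on a transient auth error, back off exponentially and retry.

    op           -- zero-arg callable performing the first read as the new user
    is_transient -- predicate deciding whether the exception looks like the
                    "access rights not yet propagated" case
    """
    for attempt in range(attempts):
        try:
            return op()
        except Exception as exc:
            if not is_transient(exc) or attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)   # exponential backoff

# Illustration with a stub that fails twice, then succeeds, mimicking the
# delayed rights propagation described above.
calls = {"n": 0}

def first_read():
    calls["n"] += 1
    if calls["n"] < 3:
        raise PermissionError("not authorized")
    return ["doc"]

result = retry_until(first_read, lambda e: isinstance(e, PermissionError))
print(result)   # ['doc'] after two transient failures
```

This is a workaround sketch only; it does not address whether the server should guarantee read-your-own-writes for user creation under a suitable write concern.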
0
Compass version <x> continually prompts for the keychain password at startup, even when "Always Allow" is selected.
1
On step <n> of the manual installation, this is missing: {{source .bashrc}}; otherwise the rest of the run won't work.
1
Let's assume we have the following record:
{code:javascript}
db.users.findOne()
// { _id: ..., info: { qu:   { start: ..., stop: ... },
//                     se...: { start: ..., stop: ... },   // several se... subdocuments
//                     so...: { start: ..., stop: ... } } }
{code}
and update it with the following code, setting the various {{info.*.start}} fields to {{a}}:
{code:javascript}
// e.g. $set: { "info.se....start": a, ..., "info.so....start": a }
{code}
Everything seems to work out OK, but one of the {{info.se...}} subdocuments actually was not updated but duplicated:
{code:javascript}
db.users.findOne()
// the updated fields show start: a as expected, but the record now also
// contains a second info object holding the se... subdocuments with
// their old start/stop values
{code}
1
Part of the Max Staleness draft spec.
0
This ticket is related to <ticket>. We are going to run the Atlas proxy with MongoDB version <x> in the future. While trying to run the test harness with a new mongod in dev, I found a breaking change: we depend on the rolesInfo command to load the roles we are interested in. With the current command params, <the new version> doesn't populate inheritedPrivileges (which we use) and doesn't include the isBuiltin field. Binaries we were using for testing: <x> Enterprise macOS, <y> Enterprise RHEL. Code to reproduce the bug:
{code:java}
arr = [];
arr.push({ role: "backup", db: "admin" });
arr.push({ role: "clusterMonitor", db: "admin" });
arr.push({ role: "dbAdmin", db: "admin" });
arr.push({ role: "dbAdminAnyDatabase", db: "admin" });
arr.push({ role: "enableSharding", db: "admin" });
arr.push({ role: "read", db: "admin" });
arr.push({ role: "readAnyDatabase", db: "admin" });
arr.push({ role: "readWrite", db: "admin" });
arr.push({ role: "readWriteAnyDatabase", db: "admin" });
arr.push({ role: "readWrite", db: ... });
arr.push({ role: "dbAdmin", db: ... });
arr.push({ role: "read", db: ... });
arr.push({ role: "dbAdmin", db: ... });
arr.push({ role: "readWrite", db: ... });
arr.push({ role: "readWrite", db: "nolimitstest" });
arr.push({ role: "dbAdmin", db: "nolimitstest" });
arr.push({ role: "read", db: ... });
arr.push({ role: "dbAdmin", db: ... });
arr.push({ role: "readWrite", db: "nolimitstimtest" });

res = db.adminCommand({ rolesInfo: arr, showBuiltinRoles: true, showPrivileges: true });
printjson(res);
{code}
The output for <v1> and <v2> is attached in the comments for comparison.
1
When a cursor is cleaned up because of logical session killing (either logical session timeout or a killSessions command), it currently cleans up silently. This is non-obvious for users, especially when using no-timeout cursors and default sessions. It would be a useful diagnostic if that cleanup triggered a log line explaining the reason for the kill.
0
The serverStatus.dur documentation is inaccurate with respect to the period of time covered by the reported statistics.
1
{noformat}
C:\Program Files\...\Visual Studio ...: warning: deletion of pointer to incomplete type 'mongo::DBClientCursor'; no destructor called
        see declaration of 'mongo::DBClientCursor'
C:\Program Files\...\Visual Studio ...: while compiling class template member function 'std::auto_ptr<...>::~auto_ptr(void) throw()'
        see reference to function template instantiation 'std::auto_ptr<...>::~auto_ptr(void) throw()' being compiled
        see reference to class template instantiation 'std::auto_ptr<...>' being compiled
{noformat}
1
While a query is yielded, it may be killed for a number of reasons, including collection drop, database drop, and index drop. Killed queries that have generated a partial result set will return these partial results without returning an error. We should consider having killed queries fail with a useful error message.
0
We need to call out within the first sentence that this calls an outside JS thread (like mapReduce) and is recommended against, and link to $group for the aggregation framework. I'll file another ticket to deprecate this, or flag it as legacy, or some such.
1
I'm not sure if there's a succinct log snippet showing where this is erroring. It's breaking on Windows and Linux but seems to be limited to debug builds. Here are some examples of failures in MCI (Buildlogger). I think this was introduced by this commit; here's the oldest recent occurrence in MCI of this particular test failing.
0
Mongo <version> has an O(n²) perf issue, where n is the number of collections in a database. This is a regression from <previous version>. For our dataset this causes a hang of <n> minutes, making Mongo completely unusable. The hang can be hit in many ways, including: when calling db.getCollectionNames(); when starting mongod; when doing a mongodump; when a secondary mongod server in a replica set transitions to be primary. The O(n²) nature can be clearly seen in this chart showing measured time to perform a db.getCollectionNames() for a given number of collections; there's also an attached graph showing a quadratic best fit (number of collections vs. time in seconds). Context: we have a multi-tenant system where each tenant is served by a new collection; as a result, we have on the order of <n> collections in a database. We started upgrading to Mongo <version> in our production environment but ran into this issue, fortunately before we upgraded the primary, and had to do an emergency rollback to <previous version>. We're now stuck on <previous version> for the moment but extremely eager to get the benefits of Mongo <version> to address pressing issues in production. Can you please acknowledge this bug and provide an estimate for when it can be fixed and released?
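The quadratic signature is easy to check in miniature: if each collection-name lookup scans a list of everything seen so far, total work grows as n², so doubling the collection count quadruples the time. A toy Python model (purely illustrative, not the server's catalog code):

```python
def list_names(catalog, use_set=False):
    """Collect unique names from a catalog, counting membership comparisons.

    use_set=False models the regression (linear scan per entry -> O(n^2) total);
    use_set=True models the fix (hash lookup per entry -> O(n) total).
    """
    seen = set() if use_set else []
    comparisons = 0
    names = []
    for entry in catalog:
        comparisons += 1 if use_set else len(seen)  # cost of the membership check
        if entry not in seen:
            (seen.add if use_set else seen.append)(entry)
            names.append(entry)
    return names, comparisons

_, c1 = list_names([f"coll{i}" for i in range(1000)])
_, c2 = list_names([f"coll{i}" for i in range(2000)])
print(c2 / c1)   # close to 4: doubling n quadruples the work, the quadratic signature
```

With `use_set=True` the same doubling only doubles the comparison count, which is the linear behavior one would expect from a fixed catalog scan.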
1
{code}
ReplSetTest awaitSynced=true
Thu Jul ... assertion: not master or secondary; cannot currently read from this replSet member, ns: dbname.system.indexes, query: {}
Thu Jul ... problem detected during query over dbname.system.indexes: { err: "not master or secondary; cannot currently read from this replSet member", code: ... }
{ err: "not master or secondary; cannot currently read from this replSet member", code: ... }
Jul ... uncaught exception: error: failed to load ...
{code}
1
UninterruptibleLockGuard only suppresses the interruptions in lock acquisition. We should use opCtx->runWithoutInterruption() instead, in case checkForInterrupt() is called elsewhere; opCtx->waitForConditionOrInterrupt() in checkOutSessionForKill() is an example.
0
Auth schema upgrade, dropping indexes from <collection>:
WARNING: auth schema upgrade failed to drop indexes on admin.system.new_users: UnknownError: can't drop system ns
0
Now they accept only StringData, which leads to code that has a NamespaceString having to create a temporary string to do the lookup, as there is no conversion from NamespaceString to StringData.
0
Our MSI version should include a component which is <x> by default (for example <y>) and can be used to release new MSIs as upgrades to previous MSIs without doing an entire new server release.
0
listIndexes since <version> is no longer returning the ns property; verify that our indexes plugin is not using this property.
0
See the linked BF ticket. This is only currently an issue if ItoA returns unterminated StringData; ItoA is under no obligation to terminate its StringData, however.
0
Moped contained a few fixes, written by Durran and myself, to reauthenticate if a "not authorized" response came back from the server. We upgraded a few servers to the official Ruby driver and saw sporadic "not authorized" responses cause various exceptions in our application.
1
I had a patch build and noticed that the generate.task command took around <n> minutes to complete. This is long enough that I was concerned something was wrong. I think it is worth exploring if there is something possible to do for patch builds; I don't think this is as much of a problem on mainline builds. Here is a link to the task I was watching.
0
When Session::unstashTransactionResources() throws a SnapshotTooOld error, we do not abort the transaction, since unstashTransactionResources() is not within the ScopeGuard that aborts the transaction. However, if the command was part of a multi-statement transaction, we leave the session in a state where txnState == kInProgress and the txnResourceStash is empty. This is an inconsistent state for the session. It also means that when the mongos retries the first statement of the transaction, it must leave off the startTransaction argument (since the transaction is already in progress), which is incorrect. Instead, when we return a SnapshotTooOld error, we should forget that the current transaction ever happened, so that the mongos can retry with startTransaction: true.
0
If a node with <v1> indexes is upgraded to <v2>, the indexes will have an implicit version. However, a resync of the node (or sync of a new node) will implicitly recreate the index with an explicit version. This shouldn't be a big issue aside from possible, but unlikely, differences in behavior between replica set nodes. Once <ticket> is fixed, sharded clusters should handle this well.
0
When stripping UUIDs, the default mongorestore converts createIndexes oplog commands into legacy-style oplog insertions into the system.indexes collection. This was necessary because MongoDB required a UUID for createIndexes until <ticket>, available in <version>. The server removed support for system.indexes in applyOps in <version>, causing this workaround to break. Some possible options to fix this problem: (1) ask for <the change> to be reverted, at least the relevant parts, to preserve the legacy API; (2) get buildInfo at the start of mongorestore to check the server version so we can skip the workaround for <newer> servers; this would be the first version-specific behavior in the tools; (3) remove the workaround entirely, leaving the tools broken for use with <those versions>.
0
Can't install as a service on Windows:
{noformat}
... --dbpath e:\mongodb\test --logpath e:\mongodb\test\log\log.txt --logappend --port <port> --install --serviceName mongodbtest --serviceDisplayName mongodbtest
log information: --install has to be used with --logpath
{noformat}
PS: it takes effect with other <versions> of MongoDB on my Windows. Run as admin; had installed hotfix <...>.
1
We are trying to set up a recovery strategy from multiple masters to a single slave. The current implementation of {{only}} will allow for only a single database per host to be replicated, but we need a way to replicate a small subset of databases (i.e. <n> out of a possible <m> databases). I have tried a workaround, to no avail, which was adding multiple source records with different {{only}} parameters, but it does not seem to work.
1
{panel:title=Downstream change}
The new shell will likely want to implement similar retry behavior in rs.add().
{panel}
Description of linked ticket: the rs.add() shell helper will be modified to retry with a higher 'configVersion' on these errors.
1
Per conversation with <...>: the official releases page is here. This page has stale release info that isn't being updated. I suggest that we remove the specific link on this page and just link to that.
1
These should be used to test the raw performance of CancelationTokens.
0
Certain test suites, like noPassthrough, are responsible for starting their own mongods. As part of this, the shell started by resmoke is responsible for printing/redirecting the server's output. The logging mechanism responsible for this has truncation enabled by default, meaning a log line which is marked as truncation-disabled (like below) may still ultimately get truncated when running under certain test suites:
{code:java}
... "bt"_attr = obj ...
{code}
A quick way to reproduce this is to add an absurdly long log message somewhere in the server and exercise it in a noPassthrough test. Here's a patch which adds one to the $planCacheStats agg stage, not a common code path (base rev <...>):
{code:java}
diff --git a/src/mongo/db/pipeline/document_source_plan_cache_stats.cpp b/src/mongo/db/pipeline/document_source_plan_cache_stats.cpp
 // in DocumentSourcePlanCacheStats::createFromBson, after the
 // "parameters object must be empty; found ..." uassert on
 // spec.embeddedObject().isEmpty():
+static constexpr char fmt[] = "bt: message from me {bt}";
+std::string giantStr;
+for (int i = 0; i < ...; ++i)
+    giantStr += "s";
+giantStr += "remainder";
+... log fmt with "bt"_attr = giantStr ...
 return new DocumentSourcePlanCacheStats(pExpCtx);
{code}
Then run a test which exercises this code:
{code:java}
my_resmoke --suites=no_passthrough jstests/noPassthrough/plan_cache_stats_agg_source.js
{code}
One would expect the entire string to be printed, including the word "remainder", but only a prefix is printed. I am using the following fix locally so that I can continue other work. It might actually make sense to commit this or something similar, since I doubt we use the ProgramOutputMultiplexer for anything besides testing, and we can enforce that log lines are truncated on mongod/mongos separately:
{code:java}
diff --git a/src/mongo/shell/shell_utils_launcher.cpp b/src/mongo/shell/shell_utils_launcher.cpp
 void ProgramOutputMultiplexer::appendLine(int port, ...) {
     auto plainShellOutputDomain = logger::globalLogManager()->getNamedDomain("plainShellOutput");
     logger::LogstreamBuilder builder(plainShellOutputDomain, getThreadName(), logger::LogSeverity::Log());
+    builder.setIsTruncatable(false);
     ... sink to the program output buffer ...
 }
{code}
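The essence of the proposed fix can be modeled in a few lines: the output multiplexer should honor a per-line truncation opt-out instead of truncating unconditionally. A minimal Python sketch (MAX_LINE and the function name are illustrative, not the shell's actual API):

```python
MAX_LINE = 80  # illustrative limit; the real shell uses a much larger one

def append_line(line, is_truncatable=True):
    """Toy model of the multiplexer fix: only truncate lines that
    have not opted out of truncation."""
    if is_truncatable and len(line) > MAX_LINE:
        return line[:MAX_LINE] + "...(truncated)"
    return line

long_line = "bt " + "x" * 200 + " remainder"
print(append_line(long_line))                        # prefix only: the bug behavior
print(append_line(long_line, is_truncatable=False))  # full line, incl. 'remainder'
```

The design trade-off matches the ticket's suggestion: since the multiplexer only mirrors server output during testing, disabling truncation there is safe, and mongod/mongos can enforce their own limits independently.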
0
According to the server selection specification, operations run with a read preference of primary should succeed when directly connected to a single MongoDB server, regardless of its type.
0
Currently, $$REMOVE is treated like a $let variable by the expression schema walker, and so is represented by a NotEncryptedNode in the schema tree. However, this means that aggregations cannot conditionally include or exclude encrypted fields (with matching encryption) in a $project stage and then later $match or $group on that field, since the combination of $$REMOVE with the encrypted field will result in a MixedStateNode. Instead, $$REMOVE should just not result in the addition/reconciliation of any schema nodes.
0
Pursuant to the vast array of tickets in the document-level locking epic summarized in <ticket>, we need to document behavior surrounding document-level locking. For sprint <n>: pending details from PM / Tiger team.
1
These mongod options are <deprecated> and are being removed, so we should not use them any longer in our tests.
0
When running PHPUnit test suites for an application that uses this library (deep down), sometimes the ObjectId that is generated and the ObjectId that is actually saved in the database differ. This issue does not persist across test cases; it happens only sometimes, but whenever it happens, it happens in the same test cases at the same lines. The issue also disappears if the order of test case execution is changed; i.e. if the test case is run individually, it will not happen. Another peculiar thing is that the generated and saved IDs differ by <a small amount>. Example:
{noformat}
expected: <id>
actual:   <id>
{noformat}
Another example:
{noformat}
expected: <id>
actual:   <id>
{noformat}
My stack looks like this:
{noformat}
phpunit
Laravel framework
laravel-mongodb (a MongoDB helper package for Laravel)
mongo-php-library
mongo-php-driver
PHP (Alpine, Docker)
{noformat}
I will add more description as I do more debugging.
0
Automation Agent changelog, version <x>:
- Ability to change the storage engine for replica sets with data nodes.
- Build <n>: more detailed logging when MongoDB or Monitoring and Backup Agent logs are rotated.
- Use a fixed name for the Kerberos credentials cache.
- Support new distro-specific MongoDB builds.
- When deleting directories, do not delete symlinks.

Monitoring Agent changelog, version <x>:
- Add explicit timeout for SSL mongod connections.
- Use a fixed name for the Kerberos credentials cache.

Backup Agent changelog, version <x>:
- Add explicit timeout for SSL mongod connections.
- Use a fixed name for the Kerberos credentials cache.
- Optimization for syncs of collections with lots of small documents.
1
I'm getting a strange misbehaviour during heavy document inputs and transformations with multiple connections. I'm running a single server instance (no replication/sharding). The problem has occurred since the server update from <x> to <y> and the PHP driver update from <a> to <b>. mongod terminates with the following log output:
{noformat}
Mon Sep ... invalid access at address ...
Mon Sep ... got signal: segmentation fault
Mon Sep ... backtrace: ...
logstream::get() called in uninitialized state
Mon Sep ... ERROR: Client::~Client: _context should be null but is not; client: conn
logstream::get() called in uninitialized state
Mon Sep ... ERROR: Client::shutdown not called: conn
{noformat}
1
The native driver doesn't support causal consistency / real-time order as expected.
{code:java}
var db = client.db('test');
var largeObj = {};
for (var i = 0; i < ...; i++) largeObj[i] = Math.random();
var session = client.startSession();
var v = Date.now();
console.log('init v', v);
var collection = db.collection('test');
collection.updateOne({ _id: ... }, { $set: Object.assign({ v: v }, largeObj) }, { session: session, upsert: true }, function() {
  console.log(new Date(), 'a');
  collection.findOne({ _id: ... }, { projection: { v: 1 }, session: session }, function(err, result) {
    console.log(new Date(), err, result);
  });
});
collection.updateOne({ _id: ... }, { $set: { v: v } }, { session: session }, function() {
  console.log(new Date(), 'b');
  collection.findOne({ _id: ... }, { projection: { v: 1 }, session: session }, function(err, result) {
    console.log(new Date(), err, result);
  });
});
setTimeout(function() {
  db.collection('test').findOne({ _id: ... }, { projection: { v: 1 }, session: session }, console.log);
}, ...);
{code}
Expected execute order: <elided>. Versions: native <driver version>.
0
dropConnections currently calls processFailure. If it were to triggerShutdown instead, it would generate the same effect but officially indicate not to spawn new connections until someone requests one. It would also improve the reliability of drop_connections_replSet.js.
0
Tests are failing; I am not sure how to interpret the logs.
1
test-agent failed on an OSX host. Project: Evergreen self-tests. Commit diff: "Revert 'change mongodb driver to go.mongodb.org/mongo-driver'"; this reverts commit <hash> (Apr <...> UTC). Evergreen subscription: <id>; Evergreen event: <id>. test-status-suite: logs | history. TestAgentFailsToStartTwice: logs | history.
0
Breaking up <ticket> into smaller tickets; this is the first. Like the serverStatus metrics for elections: create a class ApplicationApiVersionMetrics; an instance of this class will be stored as a decoration on ServiceContext (use ReplicationMetrics as a reference). The class should be populated with the field "appNameVersionTimestamps" of type map. Define a mutex as a private field in the class. Lastly, add a member function addVersionTimestamp(applicationName, apiVersion) which acquires the mutex and writes a timestamp to an (applicationName, apiVersion) pair in appNameVersionTimestamps. In execCommandDatabase of service_entry_point_common.cpp and in runCommand of strategy.cpp, invoke addVersionTimestamp.
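To make the shape concrete, here is a sketch of the described class in Python rather than the server's C++: a mutex-guarded map keyed by (appName, apiVersion) whose values are last-seen timestamps. The names follow the ticket's description; the implementation details are assumptions:

```python
import threading
import time

class ApplicationApiVersionMetrics:
    """Sketch: record, per (application name, API version) pair, the
    timestamp of the most recent command, under a mutex."""

    def __init__(self):
        self._mutex = threading.Lock()
        self._app_name_version_timestamps = {}

    def add_version_timestamp(self, application_name, api_version):
        # acquire the mutex and write/refresh the timestamp for this pair,
        # as execCommandDatabase / runCommand would on every command
        with self._mutex:
            key = (application_name, api_version)
            self._app_name_version_timestamps[key] = time.time()

    def snapshot(self):
        # copy under the mutex so serverStatus-style reporting sees a
        # consistent view
        with self._mutex:
            return dict(self._app_name_version_timestamps)

metrics = ApplicationApiVersionMetrics()
metrics.add_version_timestamp("mongosh", "1")
metrics.add_version_timestamp("mongosh", "1")   # same pair: timestamp refreshed
metrics.add_version_timestamp("myApp", "2")
print(sorted(metrics.snapshot()))
```

Storing one timestamp per pair (refreshed on every call) keeps the map bounded by the number of distinct client/version combinations rather than by command volume.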
0
Version <x> released. The Backup Agent will now take a clustershot even if the balancer cannot be stopped. Clustershots taken in this manner are not guaranteed to be consistent and will be flagged as such in the UI.
1
The legacy driver is <...> in some sense; we should almost certainly track bugfixes that affect the driver codebase.
1
The documentation at <link> contains several uses of the workingDir parameter to shellExec, yet there is no definition of what workingDir is and how it works. Can the current behavior of workingDir please be documented?
0
This ticket will facilitate ingesting logs to Atlas: we will unify logical and file-copy-based initial sync, and we will use the "method" field in the statistics to differentiate between them. After this ticket is done, we will update the log ingestion rule created here to include the "method" field.
0
The official format for a binary object is { "$binary": ..., "$type": ... } where the type is a two-char hex string (ref: <link>), but json_util uses { "$binary": ..., "$type": ... } where the type is an integer instead. I noticed this bug when trying to read the output of mongoexport from Python. I've attached a patch to fix this bug; the patch also updates the doctests in the json_util module. This change will break any code that has stored the broken JSON anywhere and then reads it back after updating.
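For reference, a minimal Python sketch of the corrected output shape (the function name is illustrative; it mirrors the legacy extended JSON form described above, with $type as a two-char hex string rather than an integer):

```python
import base64

def binary_to_extended_json(data: bytes, subtype: int) -> dict:
    """Emit the official extended-JSON shape for BSON binary:
    the subtype is a two-char lowercase hex *string*, not an integer."""
    return {
        "$binary": base64.b64encode(data).decode("ascii"),
        "$type": "%02x" % subtype,   # e.g. 0 -> "00", 4 -> "04"
    }

print(binary_to_extended_json(b"\x01\x02", 0))
# {'$binary': 'AQI=', '$type': '00'}  -- not {'$type': 0}
```

As the ticket notes, consumers that stored the integer form will need to handle both shapes while migrating.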
0
When we shard a collection, shardServerShardCollection runs a count command on the config.chunks collection to make sure there are no chunks already in the collection. But right now the count query uses readConcern local, which can cause a problem in the following scenario:
1. Create a sharded collection.
2. Drop the sharded collection, which succeeds on the config server primary.
3. Try to create a sharded collection with the same name.
4. shardServerShardCollection runs a count command to see if chunks exist from a previous sharded collection with the same name.
5. The count command targets a config server secondary where the replication of the drop from the previous collection has not completed, and so there are still chunks in the collection.
6. The shardServerShardCollection command fails with the error "ManualInterventionRequired: a previous attempt to shard collection failed after writing some initial chunks to config.chunks; please manually delete the partially written chunks for collection test.user from config.chunks", even though the previous drop completed successfully.
We should use readAfterOpTime to do this count command, so that even if we read from a config server secondary, we'll wait until previous operations have replicated.
0
The async mock stream framework offers a stream that pauses itself to allow the tester to inspect state that has been written to it or read from it by the network interface. Those pauses are implemented as follows:
{code}
MockStream::write(buffer, handler) {
    _block(kBlockedAfterWrite);
    _io_service.post(next task);
}
{code}
This causes the NIA's thread which has called write to hang until the test unblocks it; it then cannot use that thread for other things, like checking for timed-out operations, while the write is blocked on a socket. We should return instead of blocking there, essentially pausing that path of the state machine. The call to unblock, instead of actually unblocking, should post a new handler to the io_service that runs the part of write that would have happened after the block.
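The proposed change, return-and-post instead of block-and-wake, can be sketched in Python (all names are illustrative; a deque stands in for the io_service task queue):

```python
from collections import deque

class PausingMockStream:
    """Sketch of the proposal: write() returns immediately, storing the rest
    of the operation as a continuation; unblock() posts that continuation to
    the task queue instead of waking a blocked thread."""

    def __init__(self):
        self.tasks = deque()      # stands in for the io_service queue
        self._after_write = None  # paused continuation, if any
        self.buffer = []

    def write(self, data, handler):
        self.buffer.append(data)
        # Pause here: remember what would have run after the old _block()...
        self._after_write = lambda: self.tasks.append(handler)
        # ...and return, leaving the calling thread free (e.g. to check
        # for timed-out operations).

    def unblock(self):
        cont, self._after_write = self._after_write, None
        cont()                    # posts the deferred handler to the queue

    def run_one(self):
        self.tasks.popleft()()    # the io_service runs the posted handler

stream = PausingMockStream()
done = []
stream.write(b"ping", lambda: done.append(True))
assert not done       # write returned without consuming the thread
stream.unblock()      # the test inspects state, then resumes the write path
stream.run_one()
print(done)           # [True]
```

The key property is that no thread sleeps inside write(); the paused state machine is just data, resumed by posting work to the normal task queue.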
0
{code:javascript}
var MongoClient = require('mongodb').MongoClient;
// connection url
var url = 'mongodb://<replication url>/myproject';
MongoClient.connect(url, { db: { readPreference: 'nearest', readPreferenceTags: [...] } }, function(err, db) {
  assert.equal(null, err);
  console.log('connected correctly to server');
  var col = db.collection('test');
  col.find().toArray(function(err, result) {
    // the result array represents objects from locus
  });
});
{code}
1
{noformat}
bjori@taylorswift ~/Sources/mongoc (master) $ ./mongoc-ping
{ "ok" : NumberInt(1) }
bjori@taylorswift ~/Sources/mongoc (master) $ ./example-pool
No suitable servers found: serverSelectionTimeoutMS expired
No suitable servers found: serverSelectionTimeoutMS expired
No suitable servers found: serverSelectionTimeoutMS expired
(repeated ten times in total)
{noformat}
1
I'm not sure what the impact of this could be, but we should be logging the UUID of the collection here.
0
Hi, this is my first Jira ticket, so please forgive me if I have missed anything. In production I try to run mapReduce with a sharded input collection and a sharded output collection. The input collection is fine; however, when I use a sharded output collection with the sharded: true option, I get the following error:
{quote}
MongoDB\Driver\Exception\RuntimeException: unknown m/r field for sharding: jsMode in ...
Stack trace: MongoDB\Driver\Server->executeCommand(mycommand, Object(MongoDB\Driver\Command), Object(MongoDB\Driver\ReadPreference)), MongoDB\Operation\MapReduce->execute(Object(MongoDB\Driver\Server))
{quote}
On the server side there is already a ticket for this issue, the issue being:
{quote}
a mapReduce command issued against a sharded collection will fail with the message "unknown m/r field for sharding: jsMode"
{quote}
On the server side this issue is not a blocker, as the user can simply avoid setting the jsMode flag altogether. When using PHP, I avoid setting the jsMode flag in my code, but after execution is handed over to the MongoDB PHP extension, I get the error. So it seems that the PHP extension is setting the jsMode flag under the hood when the server does not support it. This means that in PHP, mapReduce does not work with a sharded output collection. I can only imagine that the solution would be to avoid setting a default value for jsMode in the extension, but that may have knock-on effects. In any case, I would appreciate suggestions for a workaround, as I have a DB with nearly a TB of data, so sharding is necessary. Thanks. Sample code to cause the error on the server:
{code:javascript}
function mapFunction() { emit(this._id, this); }
function reduceFunction(testInputDocId, testResults) { ... }
db.inputCollection.mapReduce(mapFunction, reduceFunction, {
  out: { merge: "outputCollection", sharded: true },
  // this line causes drama whether the value is true or false;
  // however, the fix is easy: just remove the line entirely
  jsMode: false,
  query: { _id: ... }
});
{code}
Sample code to cause the error in PHP:
{code:php}
$mapReduceOutOption = [
    'merge' => self::COLLECTION_NAME,  // this line causes the error indirectly
    'sharded' => true,
];
$coll = $this->getCollection()->mapReduce($mapFunction, $reduceFunction, $mapReduceOutOption,
    ['query' => ..., 'scope' => $testParams]);
{code}
1
The MigrateFromStatus data structure is largely synchronized by the lock on the collection containing the chunk that is currently migrating. This implementation assumes that modifications of a collection exclusively lock that collection, which is no longer the case for document-level locking storage engines.
1
Right now it says "assertion: error reading file" if a line is longer than <limit>.
0
Can't install on CentOS with yum.
1
Out-of-bounds read: incorrect values read from a different memory region will cause incorrect computations.
- Out-of-bounds read from a <buffer>: an overrun function call may return an overrun value.
- Assigning bytesNeeded: the value of bytesNeeded is now overrun.
- Assigning firstUsedByte = reinterpret_cast<...>(value) + bytesNeeded: firstUsedByte now points to byte <n> of value, which consists of <m> bytes.
0
While investigating a deep stack, we found that the libunwind printStackTrace implementation was doing its job and trying to give as much information as possible, but unfortunately this fills up the log statement limit with just the raw instruction addresses, so all the good stuff in the stack trace is truncated. This is no good. The easiest fix is to just not use libunwind's cursor steps and always do a raw backtrace.
0
Attempting to configure in Alpine Linux produces the following error:
{noformat}
checking OpenSSL dir for MongoDB... yes
checking whether to use system default cipher list instead of hardcoded value... no
checking PHP version... /tmp/pear/temp/mongodb/configure: line <n>: syntax error: unterminated quoted string
ERROR: `/tmp/pear/temp/mongodb/configure --with-php-config=/usr/local/bin/php-config' failed
{noformat}
Bash shells seem to ignore this error; however, Alpine uses a different shell by default, which may be more strict.
1
All DB record accesses that occur while a PageFaultRetryableSection is instantiated on a thread's stack must be wrapped in an exception handler for PageFaultException. WriteBatchExecutor::execInserts violates this rule, allowing a PFE to escape up the stack, leading to server aborts when records are determined not to be in memory. Correct use of PFRS is as follows:
{code}
bool done = false;
PageFaultRetryableSection pFaultSection;
while (!done) {
    try {
        DBLock myLock(dbName);
        // do all operations involving record access here
        done = true;
    }
    catch (const PageFaultException& pfe) {
        pfe.touch();
    }
}
{code}
1
Hi, I was trying to get the rename code working and I think I stumbled across a possible issue with the example:
1. Created a Person class with Name and Age; serialized an instance of Person to my <collection>.
2. Code returns a collection of Person and displays it to the <console>.
3. Changed the Person class to Name and OldAge; serialized an instance of Person to my <collection>.
4. When attempting to display the collection to the console, it excepts.
{code}
public class Person : ISupportInitialize
{
    public ObjectId Id { get; set; }
    public string Name { get; set; }
    public int OldAge { get; set; }
    public IDictionary<string, object> ExtraElements { get; set; }

    public void BeginInit() { }

    public void EndInit()
    {
        object ageValue;
        if (!ExtraElements.TryGetValue("Age", out ageValue))
            return;
        var age = (int)ageValue;
        ExtraElements.Remove("Age");
        OldAge = age;
    }
}
{code}
Problem: when deserializing the second record, ExtraElements is null. My resolution: the only way that I could make that code work was to do either of the following:
1. Add a constructor to new up ExtraElements: public Person() { ExtraElements = new ...; }
2. Check if ExtraElements is null: if (ExtraElements != null) ExtraElements.TryGetValue("Age", out ageValue).
Reporter: Richard O'Neil. Email: <elided>.
1
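The record above is about migrating a renamed field via an extra-elements bag, and the reported fix is guarding against that bag being missing after deserialization. Here is a minimal Python analogue of that workaround; `Person`, `end_init`, and the field names are hypothetical translations of the C# in the report, not a real driver API.

```python
# Hypothetical Python analogue of the C# workaround in the record above:
# after deserialization, move a legacy "age" entry from the extra-elements
# bag into the renamed old_age field, guarding against the bag being None.

class Person:
    def __init__(self, name, extra_elements=None):
        self.name = name
        self.old_age = None
        # Defaulting to an empty dict mirrors the reporter's fix of
        # newing up ExtraElements in a constructor.
        self.extra_elements = extra_elements if extra_elements is not None else {}

    def end_init(self):
        # Analogue of ISupportInitialize.EndInit in the report: without
        # the None guard above, documents with no extra fields would crash.
        if "age" in self.extra_elements:
            self.old_age = self.extra_elements.pop("age")
```

The design point is that the post-deserialization hook must tolerate documents that carry no extra elements at all, which is exactly the case that failed for the reporter.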
automation agent released ensure that automation agent fails gracefully in the case where an expected user does not exist during an initial import
0
in we defined amcxxflags which contains the first set of compilation flags in we should have a complete list of flags we want to keep its not time to enable them for the benchworkgen files in we are overwriting amcxxflags to an empty list hence no flags are taken into account this ticket should remove that workaround and fix each warning is triggered by the flags defined in amcxxflags definition of done the cpp files in benchworkgen compile with the flags defined by amcxxflags and no warnings are generated during the compilation
0
version released fix issue determining the kerberos keytab for a process on ubuntu
1
we currently convert modeix db lock requests to any admin db to a modex db lock request here in the dblock constructor this should no longer be needed with the new finer grained locking it should be investigated what lock conversions auth code may still need if any on the collection locking level and the db lock modeix conversion to modex should be removed
0
i use server for setup mongodb server a rsashardserv bmongos rockmongo as usual i use rockmongo to connect to mongodb cluster i have created a database with normal user and readonly user when i first connect to mongodb cluster it is normal and correct which just shows database however when i try to click on the database refresh a lot of times it suddenly lists all the databases including admin and the worst you can add admin user is there anything wrong is it a bug or a setup problem
0
validate against the test plan
1
we have an sslenabled replica set and were connecting with driver without problems codejs var db requiremongodbdb var url var opts replset replicaset dbconnecturl opts functionerr db consolelogerr db code im getting an error if i connect with the same options with driver codejavascript var client requiremongodbmongoclient var url var opts replset replicaset clientconnecturl opts functionerr db consolelogerr db result name mongoerror message unabletoverifyleafsignature null code what is the trick to make it work
0
bottom section feel free to tag team with the basic gist is that there are a number of examples that have very terse explanations that need expansion and clarification the basic format should be a brief explanation of what the example is attempting to illustrate and what kind of situations you would want to use kind of operation for feel free to have this be minimal if there isnt an obvious utility for this kind of operation the example itself just trim the comments an ordered list that outlines each stage of the pipeline and what it does this is largely an opportunity to cross reference to the aggregation operators themselves rinse repeat feel free to bounce this back to me or to kay as needed
1
for mongodb several metrics have been removed from mongod the following table details these changes please update our documentation to reflect metricremoved from diagnostic logremoved from profilerremoved from serverstatusnote keyupdatesyesyesnareplaced by keysinserted fastmodyesyesyescan be roughly derived via nmoved and keysinsertedkeysdeleted metrics idhackyesyesyesreporting via plansummary per operation which is now widely available movedyesyesnanot useful given we report nmoved
0
valsvals methods are missing from bsonobject but present in documentation and also mentioned in bsonelementh comments
0
create a scheduler to use when doing initial sync which is passed into the databasecloner and is responsible for the order concurrency and retry behavior during initial sync the scheduler should live above the databasecloner so that it make decision across databases it will be responsible for starting the actual collectioncloners will create a new scheduler which knows how to restart failed collectioncloners
0
i am getting the same error in a mongodb setup single replicaset no sharding noformat mongodb enterprise show dbs e query error listdatabases failed ok errmsg cannot add session into the cache code codename toomanylogicalsessions noformat mongodb version is noformat mongodb enterprise version noformat following are excerpts from mongod log noformat i network connection accepted from connections now open i network received client metadata from application name mongodb shell driver name mongodb internal client version os type linux name centos linux release core architecture version kernel i access successfully authenticated as principal rootuser on admin from client i command task unusedlockcleaner took i command task unusedlockcleaner took i command task unusedlockcleaner took i command task unusedlockcleaner took i control sessions collection is not set up waiting until next sessions reap interval sharding state is not yet initialized i control sessions collection is not set up waiting until next sessions refresh interval sharding state is not yet initialized i command task unusedlockcleaner took i command task unusedlockcleaner took i command task unusedlockcleaner took i command task unusedlockcleaner took i command task unusedlockcleaner took i network end connection connections now open i control sessions collection is not set up waiting until next sessions reap interval sharding state is not yet initialized i control sessions collection is not set up waiting until next sessions refresh interval sharding state is not yet initialized noformat any help would be useful the cluster is down and this is affecting our workloads also is there a way to reopen this jira or should i open a new one when we monitor number of sessions on mongodb we see it is always increasing command to monitor sessions noformat dbaggregate noformat this was observed in two separate single replicaset unsharded setup example rs status noformat mongodb enterprise rsstatus set date mystate 
term syncingto syncsourcehost syncsourceid heartbeatintervalmillis majorityvotecount writemajoritycount optimes lastcommittedoptime ts t lastcommittedwalltime readconcernmajorityoptime ts t readconcernmajoritywalltime appliedoptime ts t durableoptime ts t lastappliedwalltime lastdurablewalltime laststablerecoverytimestamp laststablecheckpointtimestamp electioncandidatemetrics lastelectionreason electiontimeout lastelectiondate electionterm lastcommittedoptimeatelection ts t lastseenoptimeatelection ts t numvotesneeded priorityatelection electiontimeoutmillis numcatchupops newtermstartdate wmajoritywriteavailabilitydate members id name health state statestr primary uptime optime ts t optimedate syncingto syncsourcehost syncsourceid infomessage electiontime electiondate configversion self true lastheartbeatmessage id name health state statestr secondary uptime optime ts t optimedurable ts t optimedate optimedurabledate lastheartbeat lastheartbeatrecv pingms lastheartbeatmessage syncingto syncsourcehost syncsourceid infomessage configversion ok clustertime clustertime signature hash keyid operationtime noformat
0
this generates a stepdown command which fails noformat ok errmsg stepdown period must be longer than secondarycatchupperiodsecs code noformat this is called by a member with a higher priority that is not currently the primary so it can have a chance to do a priority takeover and be elected the fact that this is currently broken doesnt affect the system because the primary will also stepdown on its own when it sees a higher priority member that can be elected fixing this might improve the timeliness of the stepdown and forthcoming election
0
currently retrywrites option is considered a noncrud option in cluster parlance which means when its given to clientwith a new cluster is created as far as i can see there is no reason to create a new cluster when retrying or not retrying writes therefore this option should be added to the list of crud options additionally crud options should not be passed to cluster because a cluster is reused for multiple clients with different crud options making these changes requires adjusting which clients some tests use due to
0
currently the windows bcryptencrypt function is called with padding enabled every time symmetricencryptorwindowsupdate is called this means that if it adds padding and then is called again there is padding stuck in the middle of the encrypted buffer that wont be removed upon decryption instead symmetricencryptorwindows should maintain its own buffer equal to one block width and only flush it to bcryptencrypt when it is full with no padding symmetricencryptorwindowsfinalize will also be refactored to make one last call to bcryptencrypt to encrypt whatever is left in the buffer with padding enabled
0
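The record above proposes buffering up to one cipher block and only padding on the final call, so padding never lands mid-stream. Below is a minimal Python sketch of just that buffering logic, under stated assumptions: a byte-XOR stands in for the real Windows `BCryptEncrypt` call, and `BlockBufferedEncryptor` is a hypothetical name, not the actual class.

```python
# Hypothetical illustration of the buffering scheme in the record above:
# update() emits only whole blocks (padding disabled), finalize() makes
# one last padded call. A fixed XOR byte stands in for the real cipher
# so the sketch stays self-contained.

BLOCK = 16

def _pkcs7_pad(data):
    # PKCS#7: always append 1..BLOCK bytes, each equal to the pad length.
    n = BLOCK - (len(data) % BLOCK)
    return data + bytes([n]) * n

class BlockBufferedEncryptor:
    def __init__(self, key_byte=0x5A):
        self._key = key_byte
        self._buf = b""

    def _encrypt_blocks(self, data):
        # Stand-in for the padding-disabled encrypt call.
        return bytes(b ^ self._key for b in data)

    def update(self, data):
        self._buf += data
        # Flush only whole blocks; keep the tail buffered so no
        # padding can ever appear in the middle of the ciphertext.
        n_whole = (len(self._buf) // BLOCK) * BLOCK
        out, self._buf = self._buf[:n_whole], self._buf[n_whole:]
        return self._encrypt_blocks(out)

    def finalize(self):
        # Exactly one padded call, at the very end of the stream.
        out = self._encrypt_blocks(_pkcs7_pad(self._buf))
        self._buf = b""
        return out
```

With this shape, splitting the plaintext across several update() calls yields the same ciphertext as a single call, which is the property the original code violated.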
the new metadata feature needs to be disabled by default for and hidden behind the enableexperimentalfeatures flag
1
this is a huge issue i ran reindex on one of our collections and this was the output code nindexeswas msg indexes dropped for collection errmsg exception no index name specified code ok code after doing this and calling getindexes an empty list is returned our indexes are indeed gone luckily we ran this on a secondary machine but this is a huge problem this issue does not affect reindexing on a primary of course one typically does not reindex on a primary workaround start the target mongod without replset and on a different port reindex and then restart mongod again with its normal replset command line be sure to use a different port number so no traffic other than administrative hits the machine during this maintenance procedure
1
it would be very nice if http console would have relative urls to inner sections i have an http proxy to rewrite into so the evident problem is that i cannot follow the links
0
just want to report this issue i am getting this on mongodb version i see this error only after debug mode otherwise no error in logs and no data send to kafka brokers api codejava curl x post h acceptapplicationjson h contenttype applicationjson data config connectorclasscommongodbkafkaconnectmongosourceconnector keyconverterorgapachekafkaconnectjsonjsonconverter keyconverterschemasenablefalse valueconverterorgapachekafkaconnectjsonjsonconverter valueconverterschemasenablefalse databaseoznext collectionalbums publishfulldocumentonlytrue pipeline code codejava debug sending command getmore collection albums db oznext clustertime clustertime timestamp t i signature hash binary aaaaaaaaaaaaaaaaaaaaaaaaaaa subtype keyid lsid id binary subtype with request id to database oznext on connection to server debug execution of command with request id failed to complete successfully in ms on connection to server commongodbmongocommandexception command failed with error invaliduuid collection oznextalbums uuid differs from uuid on change stream operations on server the full response is ok errmsg collection oznextalbums uuid differs from uuid on change stream operations code codename invaliduuid operationtime timestamp t i clustertime clustertime timestamp t i signature hash binary aaaaaaaaaaaaaaaaaaaaaaaaaaa type keyid numberlong at at at at at at at at at at at at at at at at at at at at at at at at at at at at at debug sending command killcursors albums cursors db oznext clustertime clustertime timestamp t i signature hash binary aaaaaaaaaaaaaaaaaaaaaaaaaaa subtype keyid lsid id binary subtype with request id to database oznext on connection to server code thanks rajaramesh yaramati
0
it appears that in at least one case evergreen has reported that it successfully archived the data files for a task but they are not linked from the task page
1
problem statement rationale compass connects to a cluster it loads dbs collections and views views come with the information about the pipeline that represents the view that pipeline is however never refreshed and if i go and edit it i see the old version if in the meanwhile the view definition was changed outside of the compass instance the only way to get it refreshed is reconnecting even the refresh button on the sidebar does not help steps to reproduce compass connect to a cluster that has views a second compass connect to the same cluster and edit the definition of a view back to the first compass and edit the view and edit the view again expected results at least in case the view definition should be the new one actual results the view definition remains the old one until the user reconnects additional notes additional information that may be useful to include
0
in the mongodb drivers added implicit support for sessions which is triggered by the presence of the logicalsessiontimeoutminutes field in the ismaster response in mongos will always set its internal fcv to regardless of the fcv of the shards and as a result it will always report logicalsessiontimeoutminutes in the ismaster response this combined with the drivers spec means that the driver will always send session information regardless of the fcv version of the shards shards which are at fcv will reject any requests which contain session information and this essentially means that drivers cannot be used to talk to a cluster which is at fcv
1
currently enabling safe mode safemodeenabled true leads to mongosafemodeexception to be thrown if the command failed while its a fine thing to do generally throwing the exception if the error is expected think unique constraint violations for example isnt a good idea in the terms of performance in net under windows throwing an exception takes a lot of time hurts cache etc i propose a new flag to be added to safemode safemodepermissive it should default to false cannot be set to true unless safemodeenabled is also true like safemodew cant be set to nonzero value while safemodeenabled isnt true and the checks that lead to an exception to be thrown inside the mongoconnectionsendmessage method should only be done if safemodepermissive is false the proposed solution completely preserves backwards compatibility and greatly aids if one wants to examine and handle mongodb error responses on his own without incurring the performance penalty of catching exceptions while a simple error code check would suffice
0
my application is write intensive the op type is only insert during the beginning hours everything is ok but after that the insert op may take a very very long time i have no idea why that happens the log code mon dec insert agistesttp dec mem mb dec insert agistesttp dec mem mb dec insert agistesttp dec insert agistestrecords dec old journal file will be removed dec insert agistestrecords dec insert agistesttp dec insert agistestrecords dec insert agistesttp dec mem mb dec insert agistestrecords dec insert agistestrecords dec insert agistestrecords dec insert agistestrecords dec insert agistesttp look at this line mon dec insert agistestrecords dec command agiscmd command getlasterror w wtimeout dec command agiscmd command getlasterror w wtimeout dec command agiscmd command getlasterror w wtimeout shows that one insert operation is processing while others are waiting for lock the data size is about the index data is small against ram after restart i must kill the mongod process it will run normally hours again but hangs again at last
1
replace is how many online retailers provide ways with is how online retailers provide ways
0
in the createcollection example wiredtiger should be wiredtiger
1
and log full bson documents input instead of truncating to help debug
0
readpreferences is only respected when included in the connection string when configured programmatically its ignored codego read preference is ignored reads when no primary is present yields notmasternoslaveok not master and slaveokfalse mongoconnstring client err mongonewclientwithoptionsmongoconnstring optionsclientsetreadpreferencereadprefprimarypreferred code codego read preference is respected mongoconnstring client err mongonewclientwithoptionsmongoconnstring code
0
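The record above expects a programmatically configured read preference to take effect even when the connection string does not set one. Here is a minimal Python sketch of that precedence rule only; `resolve_read_preference` and the dict-based option sources are hypothetical illustrations, not the Go driver's actual API.

```python
# Hypothetical sketch of the option-precedence rule the record above
# expects: programmatic client options first, then connection-string
# options, then the driver default. Names are illustrative only.

def resolve_read_preference(uri_options, client_options):
    """Return the effective read preference string."""
    for source in (client_options, uri_options):
        if "readPreference" in source:
            return source["readPreference"]
    # Driver default when neither source specifies one.
    return "primary"
```

Under this rule the bug in the record would not occur: a `SetReadPreference`-style programmatic option wins even when the URI is silent.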
please add bi connector release notes
0
the file size grows fast which is not as we expect data and index size are under but it has created over data files even though i purge old data from the database day by day and also found shard server is unavailable right now df kfilesystem used available use mounted show dbsadmin emptyconfig shstatus sharding status sharding version id version shards id host id host id host id host databases id admin partitioned false primary config id fryattmcmsproduction partitioned true primary fryattmcmsproductioncounterdata chunks too many chunks to print use verbose if you want to force print id test partitioned false primary id admin username admin password xxxxxxxx partitioned false primary id fryattlogmsproduction partitioned false primary
1
our codebase is littered with calls to check if things like resultresult exists or resultqueryerror for opquery messages this is because the wire protocol handlers are returning raw messages even in situations when a full result is not requested this should be cleaned up and contained to the wire protocol level
0