text_clean: string (lengths 10 to 26.2k)
label: int64 (values 0 or 1)
function other true dbeval f null f "cannot change id": it seems like the dbEval case should return a doc with the error.
0
Currently, if a command has an uninitialized start operation time (because the client still has an uninitialized opTime when the command starts running), computeOperationTime returns the replication coordinator's lastAppliedOpTime as the operationTime to append to the response for the command. This operationTime could sometimes be later than the opTime for the operation performed by the command. To prevent this, this block of code should be removed so that we always check and try to use the client's last operation time when possible.
0
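The fix described in the ticket above can be sketched as a small Python model (all names are illustrative; the real code is C++ inside mongod):

```python
# Sketch of the proposed computeOperationTime behavior (hypothetical Python
# model of the C++ logic; function and argument names are illustrative).

def compute_operation_time(client_last_op_time, last_applied_op_time):
    """Prefer the client's own last operation time when it is set.

    Falling back to the replication coordinator's lastAppliedOpTime can
    return a time *later* than the command's own operation, which is the
    bug described in the ticket.
    """
    if client_last_op_time is not None:   # client opTime is initialized
        return client_last_op_time
    # Only when the client truly has no opTime do we fall back.
    return last_applied_op_time

# The buggy version fell back to last_applied_op_time whenever the *start*
# operation time was uninitialized, even when the client opTime was usable.
```

This is a sketch of the intended decision, not the driver or server API.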
Only fine. Tue Jun: mongodb starting, dbpath=/home/mongodb. Tue Jun: Invalid access at address. Tue Jun: Got signal: segmentation fault. Tue Jun: Backtrace: /usr/local/mongodb/bin/mongod(main). Tue Jun: ERROR: Client::shutdown not called: initandlisten.
1
Journalling tracks the LSN of the last write to be fsynced to the data files, so it knows where to start applying the journal file during recovery. Due to a sequencing error in how it is updated, it may be ahead of what is actually synced to the data files.
1
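The sequencing fix the ticket above implies can be modeled in a few lines of Python (class and field names are hypothetical): the recovery LSN must only be advanced after the fsync of the data files completes, never before.

```python
# Illustrative model of the invariant: the recovery start point (synced_lsn)
# must never run ahead of what has really reached the data files.

class Journal:
    def __init__(self):
        self.synced_lsn = 0    # where recovery starts applying the journal
        self.flushed_lsn = 0   # what has actually been fsynced to data files

    def fsync_data_files(self, up_to_lsn):
        self.flushed_lsn = up_to_lsn
        # Correct order: update the recovery marker only AFTER the sync
        # finishes. Updating it first is the sequencing bug described above.
        self.synced_lsn = up_to_lsn

j = Journal()
j.fsync_data_files(42)
```

The invariant to preserve is `synced_lsn <= flushed_lsn` at every point in time.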
When doing $or queries, when indexes are used without specifying a sort clause, no results are returned. This happens for example when doing counts, which ignore sort clauses. Below I've created a small example that reproduces the bug.

First I created a new database and put some documents there:

use videostest
switched to db videostest
db.videos.save({tags: …, title: …})
db.videos.save({tags: …, title: …})
db.videos.save({tags: …, title: …})

I perform an $or query, searching by tags in the $or clause and outside it:

db.videos.find({tags: {$all: …}, $or: …})  returns a {_id, tags, title} document
db.videos.find({tags: {$all: …}, $or: [{tags: {$all: …}}]})  returns both {_id, tags, title} documents

Everything is fine here; the results are as expected. Now I create an index on tags:

db.videos.ensureIndex({tags: …})
db.videos.find({tags: {$all: …}, $or: [{tags: {$all: …}}]})  returns nothing
db.videos.find({tags: {$all: …}, $or: [{tags: {$all: …}}]}).sort({title: …})  returns both {_id, tags, title} documents

The first $or query, without sort, doesn't return any documents, but the second one, with a sort clause, returns correctly. I asked the server to explain the queries. The unsorted query reports two clauses, each with {cursor: BasicCursor, nscanned, nscannedObjects, n, millis, nYields, nChunkSkips, isMultiKey: false, indexOnly: false, indexBounds: {}}. The sorted query reports {cursor: BtreeCursor, nscanned, nscannedObjects, n, scanAndOrder: true, millis, nYields, nChunkSkips, isMultiKey: true, indexOnly: false, indexBounds: {tags: …}}.

The one without a sort clause uses two clauses, both of which don't use indexes, reporting the scanned objects. The one with a sort clause uses an index and reports the scanned objects, which is correct.

Finally I attempted to merge the outside tag filter into the $or query:

db.videos.find({$or: [{tags: {$all: …}}]})  returns both {_id, tags, title} documents
db.videos.find({$or: [{tags: {$all: …}}]}).sort({title: …})  returns both {_id, tags, title} documents

It returns correctly even without a sort clause. Explaining the queries gives this: the unsorted query now uses {cursor: BtreeCursor, isMultiKey: true, indexOnly: false, indexBounds: {tags: …}}, while the sorted one uses {cursor: BasicCursor, scanAndOrder: true, isMultiKey: false, indexBounds: {}}. Now it uses an index when the sort clause is not specified, but it doesn't use the index when sort is specified. Maybe the query planner chose not to use the index, but I really don't know why. Is this expected behavior for $or queries?
1
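The semantics the report above expects (a top-level `$all` filter combined with an `$or` clause) can be shown with a tiny pure-Python matcher. Field names and data are illustrative; the point is only what *should* match regardless of which index or plan the server picks:

```python
# Minimal model of {tags: {$all: [...]}, $or: [{tags: {$all: [...]}}, ...]}
# semantics. The bug in the report is that the indexed, unsorted plan
# returned an empty set even though documents satisfy this predicate.

def matches_all(doc_value, required):
    """$all: every required element must be present in the array field."""
    vals = doc_value if isinstance(doc_value, list) else [doc_value]
    return all(r in vals for r in required)

def matches(doc, tags_all, or_clauses):
    """Doc matches if tags satisfies $all AND at least one $or clause does."""
    if not matches_all(doc.get("tags", []), tags_all):
        return False
    return any(matches_all(doc.get("tags", []), c) for c in or_clauses)

videos = [
    {"title": "a", "tags": ["x", "y"]},
    {"title": "b", "tags": ["x"]},
    {"title": "c", "tags": ["z"]},
]

# {tags: {$all: ["x"]}, $or: [{tags: {$all: ["y"]}}, {tags: {$all: ["x"]}}]}
result = [d["title"] for d in videos if matches(d, ["x"], [["y"], ["x"]])]
```

Whatever plan the server chooses, a sort clause must not change which documents belong to `result`.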
I'm not sure why this happens, so I'm hoping you understand the CLI output. There are mentions of not being able to create instances properly on a single thread. Originally I only had one core allocated to the VM and thought that was the problem; I allocated a total of … and reran make after a make clean, with the same error. Now I'm not sure what's going wrong. The build for the C driver went fine; the C driver install is failing somewhere in what looks like the unit tests.
1
The shell needs this PR to be published by the BSON package and then by the driver. Note: I just realized this is not optional, since we can no longer rely on the driver using the BSON we need. We really need this PR to be published in the driver's BSON.
1
There is only one task-flagging function in the monitor, which times out/ends tasks when their heartbeats expire. We should move this functionality into an amboy job and remove it and its complicated machinery from the monitor.
0
Shutting down the service entry point outside TSAN and ASAN builds immediately returns true without running any shutdown code; a separate interface, shutdownAndWait, is introduced to shut down the service entry point.

{code:c}
bool ServiceEntryPointImpl::shutdown(Milliseconds timeout) {
#if __has_feature(address_sanitizer) || __has_feature(thread_sanitizer)
    // When running under address sanitizer, we get false positive leaks due
    // to disorder around the lifecycle of a connection and request. When we
    // are running under ASAN, we try a lot harder to dry up the server from
    // active connections before going on to really shut down.
    return shutdownAndWait(timeout);
#else
    return true;
#endif
}
{code}

We should remove the special-case handling in shutdown and have it run the body of shutdownAndWait. This would also obviate the need for shutdownAndWait.

Acceptance criteria: change the code to have the #if at the higher level of the stack; make sure we link these to the relevant shutdown project.
0
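The refactor proposed above can be sketched in Python (class and method names are hypothetical models of the C++ ones): `shutdown` always runs the drain-and-stop body rather than short-circuiting outside sanitizer builds.

```python
# Illustrative model of the proposed refactor: no sanitizer-only special
# case that returns True without doing any work; shutdown() always runs
# the body that shutdownAndWait() used to own.

class ServiceEntryPoint:
    def __init__(self):
        self.drained = False

    def _drain_and_shutdown(self, timeout):
        # Formerly shutdownAndWait(): dry up active connections, then stop.
        self.drained = True
        return True

    def shutdown(self, timeout):
        # Previously this returned True immediately outside ASAN/TSAN
        # builds; now every build path does the real shutdown work.
        return self._drain_and_shutdown(timeout)

sep = ServiceEntryPoint()
ok = sep.shutdown(timeout=10)
```

With this shape, the separate shutdownAndWait entry point becomes unnecessary, which is exactly what the acceptance criteria ask for.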
Running a simple program, compiled with … or …, that only initializes the driver on vanilla Ubuntu yields this behavior:

{code}
Attempt to add global initializer failed, status: DuplicateKey "GlobalLogManager"
Aborted (core dumped)
{code}
1
Problem description: on a Windows machine without (for example) snappy in the require path, mongodb ends up in an infinite loop when trying to optionally require snappy. This is because of a bug in the package require_optional: the search starts in the given path; if the file isn't found, it continues with the parent folder, and so on in a loop. The stop criteria for the loop are: the file is found, or the current path equals the root. It never matches the second criterion on a Windows machine, where paths start with the drive letter, for example C:. See the referenced bug for a description; a PR for this has been available since Nov without being merged, so I think it's safe to say the package is dead. The infinite loop happens directly in the require('mongodb') call. Possible fix: switch to the replacement package. The mongodb package uses require_optional with only one parameter, and this is functionally equal to calling the replacement package with only one parameter, so a search and replace of the require('require_optional') call is enough. I've successfully monkey-patched my system's node_modules/mongodb this way.
1
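The termination bug described above is easy to demonstrate with Python's `ntpath` (the Windows flavor of `os.path`): on a drive path, `dirname()` converges to `C:\` and never to a Unix-style root, so a loop whose only stop condition is a fixed root string never exits. A robust loop stops when `dirname()` reaches a fixed point:

```python
import ntpath

# Model of the infinite loop: walking up the directory tree with a stop
# criterion that compares against a root string which never occurs on
# Windows drive paths. Comparing dirname(path) == path instead terminates
# on every platform.

def parents(path, pathmod):
    """Return the path and its ancestors, stopping at the filesystem root."""
    seen = []
    while True:
        seen.append(path)
        parent = pathmod.dirname(path)
        if parent == path:      # robust stop: dirname() hit a fixed point
            return seen
        path = parent

win_chain = parents("C:\\a\\b", ntpath)
```

On Windows the walk ends at `C:\`, which is why a `path == root-string` check against a fixed Unix root loops forever there.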
The latest node driver seems to spread reads to all nodes in the replica set instead of preferring secondaries. For example, this javascript code:

{code:javascript}
const MongoClient = require('mongodb').MongoClient;
const mongoUri = …;
MongoClient.connect(mongoUri, function(err, conn) {
  if (err) console.log(err);
  for (var i = …; i < …; i++)
    conn.collection('test').findOne({seq: i}, function(err, res) {
      console.log(res);
    });
});
{code}

will result in these numbers in mongostat:

{code}
host insert query update delete getmore command
…
{code}

However, similar code in python spreads the reads only to the secondaries:

{code:python}
import pymongo
conn = pymongo.MongoClient(…, replicaSet='replset',
                           readPreference='secondaryPreferred')
for seq in …:
    print conn.test.sequences.find_one({'seq': seq})
{code}

which results in:

{code}
host insert query update delete getmore command
…
{code}

I have also confirmed that the java driver behaves like the python driver. Since secondaryPreferred should prefer the secondaries and only read from the primary when no secondaries are available, I think the node driver should behave like the python and java drivers.
0
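The selection rule the report above expects from secondaryPreferred can be written as a tiny pure function (illustrative Python, not the actual driver code): secondaries are the only eligible targets while any exist, and the primary is only a fallback.

```python
# Sketch of secondaryPreferred server selection as the report expects it.
# Server descriptions here are plain dicts; a real driver uses richer
# topology state, but the eligibility rule is the same.

def select_servers(servers, mode="secondaryPreferred"):
    secondaries = [s for s in servers if s["type"] == "secondary"]
    if mode == "secondaryPreferred":
        # Fall back to the primary only when NO secondary is available.
        return secondaries or [s for s in servers if s["type"] == "primary"]
    raise NotImplementedError(mode)

rs = [
    {"host": "a:27017", "type": "primary"},
    {"host": "b:27017", "type": "secondary"},
    {"host": "c:27017", "type": "secondary"},
]

eligible = select_servers(rs)
# Spreading reads across *all* members, as the node driver reportedly did,
# would put the primary in this list while secondaries exist.
```

The assertion being made in the ticket is exactly that `eligible` must never contain the primary while secondaries are up.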
sun jun replicasetmonitor setname ismaster false secondary false hosts me maxbsonobjectsize localtime new ok sun jun uncaught exception map reduce failed code errmsg exception dbclientbase transport error ns testcmd query mapreduceshardedfinish mapreduce foo map function n emitthisval reduce function key values n return arraysumvaluesn query i gte out reduce bigoutreduce nonatomic true inputns shardcounts input emit reduce output input emit reduce output counts emit input output reduce ok sun jun socket recv connection reset by peer sun jun socketexception remote error socket exception server sun jun socketexception remote error socket exception server sun jun dbclientcursorinit call failed sun jun dbclientcursorinit call failed sun jun user assertion transport error ns admincmd query getlasterror sun jun assertion get updated shard list from config sun jun warning distributed lock pinger detected an exception while pinging caused by dbclientbase transport error ns admincmd query getlasterror edt code
1
the latest version of the backup changelog does not have an associated changelog entry on
0
I have just tried to run through the instructions for restoring a replica set from a downloaded MMS backup. I am running mongodb on OSX, have started a restore job on my MMS group, and then downloaded the tar.gz file via https. I have created a new empty replica set. Then I follow the step "shut down the entire replica set" and the steps after it; however, when I attempt to run the seedSecondary.sh script, this fails with:

{code}
seedSecondary.sh: MongoDB shell version …, connecting to …: collection already exists
{code}

The script appears to be attempting to create the oplog.rs collection, which I believe already exists in my replica set. I increased my logging and then ran that command again to confirm:

{code}
connection accepted from … (… connections now open)
run command admin.$cmd { whatsmyuri: … }
command admin.$cmd command: { whatsmyuri: … }
run command local.$cmd { create: "oplog.rs", capped: true, size: … }
create collection local.oplog.rs { capped: true, size: … }
command local.$cmd command: { create: "oplog.rs", capped: true, size: … } locks(micros) …
SocketException: remote error, socket exception [server …]
end connection (… connections now open)
{code}

I then delete all of my local database files and local.ns and retry the step:

{code}
MongoDB shell version …, connecting to …: nInserted: …
{code}

The step now appears to be successful, although I'm curious if there's a better way of working around this. I continue with the following steps; however, I seem to hit an error at the step that asks me to run rs.initiate():

{code}
mongo: MongoDB shell version …, connecting to test
Server has startup warnings: soft rlimits too low, number of files is …, should be at least …
> rs.initiate()
{ ok: …, errmsg: "local.oplog.rs is not empty on the initiating member; cannot initiate" }
{code}

Would it be possible to fix the instructions in those steps, please?
1
We're seeing the following message throughout the mongod.log:

{noformat}
I SHARDING refresh for collection config.system.sessions took … ms and failed caused by CommandNotFound: no such command: flushRoutingTableCacheUpdates; bad cmd: { flushRoutingTableCacheUpdates: "config.system.sessions", maxTimeMS: …, $clusterTime: { clusterTime: …, signature: { hash: …, keyId: … } }, $configServerState: { opTime: { ts: …, t: … } }, $db: "admin" }
I CONTROL Sessions collection is not set up; waiting until next sessions refresh interval: no such command: flushRoutingTableCacheUpdates; bad cmd: { flushRoutingTableCacheUpdates: "config.system.sessions", maxTimeMS: …, $clusterTime: { clusterTime: …, signature: { hash: …, keyId: … } }, $configServerState: { opTime: { ts: …, t: … } }, $db: "admin" }
{noformat}

This secondary node is running on …; the primary is on ….
1
During deserialization it is possible for an instance to have already-instantiated collections. In this case it would be better to simply use the existing collection rather than overwrite it.
0
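The behavior suggested above can be sketched in Python (all names are hypothetical): when the target object's constructor already created a collection, deserialize into it rather than replacing it, so any external references to that collection stay valid.

```python
# Sketch of "reuse the existing collection" during deserialization.

def deserialize_list(target, field, items):
    existing = getattr(target, field, None)
    if isinstance(existing, list):      # reuse the already-created list
        existing.clear()
        existing.extend(items)
    else:                               # no collection yet: create one
        setattr(target, field, list(items))
    return getattr(target, field)

class Order:
    def __init__(self):
        self.lines = []                 # instantiated by the constructor

order = Order()
alias = order.lines                     # external reference to the list
deserialize_list(order, "lines", ["a", "b"])
# alias still sees the deserialized contents because the list was reused,
# which is what overwriting the attribute would have broken.
```

The design point: overwriting creates a second list and silently orphans every prior reference; reusing preserves object identity.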
Here, for the standalone and replset cases, we don't explicitly state that upgrading from … is unsupported; however, we probably should, and recommend that users first follow the upgrade process to reach … first. Upgrade paths from older versions are not tested, so we shouldn't guarantee that they are expected to work. For the case of upgrading a sharded cluster, there is a pretty stern warning.
0
I have servers running in a replica set. Recently I had some issues with the primary one, and the whole replica set has been reconfigured, but the primary stayed in its original place. After I added one server to the set, free monitoring started to show blank graphs like these. Disabling and enabling monitoring does not help much. I also noticed that on restarting the primary there was some data in a new graph related to a secondary server; however, once the primary was back online, the graph related to the secondary disappeared and the blank one without data came back again. Is there anything I could try, to safely remove free-monitoring-related data and set it up again from scratch?
0
I created a text index this way:

db.myCollection.ensureIndex({firstname: "text", lastname: "text", email: "text", birthdate: "text"}, {name: "textindex"})

so that I can set a threshold, maybe as a minimum score, to consider that records are almost the same. Now I create example/dummy documents, for instance {firstname: "robert", lastname: "lane", email: "imnotxavier@mail.com"}, and perform this query:

db.myCollection.find({$text: {$search: "robert lane"}}, {score: {$meta: "textScore"}})

I get {_id: …, firstname: "robert", lastname: "lane", email: "imnotxavier@mail.com", score: …}. Why does mongodb do this instead of returning a document with the score I expected? I need to know a little more about the details of the scoring system in order to correctly set a threshold and make better decisions.
1
If you look in src/mongo/util/stack_introspect.cpp, you'll see this:

{code}
if (method == "MultiPlanScanner" ||
    method == "QueryPlan" ||
    method == "QueryPlanSet" ||
    method == "QueryOptimizerCursorImpl" ||
    method == "QueryPlanGenerator")
    return false;
{code}

If you remove those lines, you'll get fasserts when running tests. To fix: we can't access data in constructors.
1
After a crash of a docker container I can't start mongo again. I've tried to start it using --repair, but without success. I'm getting this log:

{code:java}
I CONTROL  MongoDB starting : dbpath=/data/db
I CONTROL  db version …
I CONTROL  git version …
I CONTROL  OpenSSL version: OpenSSL … May …
I CONTROL  allocator: tcmalloc
I CONTROL  modules: none
I CONTROL  build environment: distmod …, distarch …, target_arch …
I CONTROL  options: …
I -        Detected data files in /data/db created by the wiredTiger storage engine, so setting the active storage engine to wiredTiger
I STORAGE  wiredtiger_open config: …
E STORAGE  WiredTiger error [file:WiredTiger.wt, connection]: unable to read root page from file:WiredTiger.wt: WT_ERROR: non-specific WiredTiger error
E STORAGE  WiredTiger error [file:WiredTiger.wt, connection]: WiredTiger has failed to open its metadata
E STORAGE  WiredTiger error [file:WiredTiger.wt, connection]: This may be due to the database files being encrypted, being from an older version, or due to corruption on disk
E STORAGE  WiredTiger error [file:WiredTiger.wt, connection]: You should confirm that you have opened the database with the correct options, including all encryption and compression options
I -        Assertion: WT_ERROR: non-specific WiredTiger error, src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp
I STORAGE  exception in initAndListen: WT_ERROR: non-specific WiredTiger error, terminating
I NETWORK  shutdown: going to close listening sockets
I NETWORK  removing socket file
I NETWORK  shutdown: going to flush diaglog
I CONTROL  now exiting
I CONTROL  shutting down with code …
{code}
0
Documents with _id equal to … and _id equal to … do not contain the students field, since no array element matched the $elemMatch criteria. This seems wrong, especially given the sample output.
1
python c import pymongo pymongoconnectiontraceback most recent call last file line in file pymongoconnectionpy line in init selffindnode file pymongoconnectionpy line in findnode raise autoreconnectcould not find masterprimarypymongoerrorsautoreconnect could not find masterprimary
1
In mongocsharpdriver, database.GetStats() is available for getting the state of the server. What is the alternative for the same in the mongodb driver? I have tried with the ServerDescription object, but it is always returned as disconnected; during this time I am able to access collections using the driver. Thanks in advance.
1
The URI generated by the connection model is no longer setting a username and password in Compass.
1
To be done as part of the branching process of …; see … as a past example. You can use uuidgen to get a new random UUID.
1
The following mentioned URL is not working; kindly check and advise.
1
{code}
class Car
  include Mongoid::Document
  embeds_one :seat
  field :a
end

class Seat
  include Mongoid::Document
  embedded_in :car
  field :a
  validates_presence_of :car
end

c = Car.new
c.build_seat
c.save
# MOPED: insert database=x collection=cars documents=… flags=…
c.seat.a = "foo"
c.save
# MOPED: update database=x collection=cars update={"$set"=>{"seat.a"=>"foo"}} flags=…
# MOPED: update database=x collection=cars update={"$set"=>{"seat.a"=>"foo"}} flags=…
{code}
1
as described in this section of the tenant migrations design
0
We need to use the mongocryptd from RHEL for …, because that is the only platform where these architectures are supported by the server.
1
Maybe new NumberLong(…)? Perhaps we should change the output as well, or detect it; not sure.
0
Currently there is a circular dependency between the catalog manager and the shard registry. This needs to be broken so that ShardRegistry does not depend on the catalog manager.
0
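One standard way to break a cycle like the one above is to have the registry depend on a narrow interface and inject the implementation, so it no longer references the catalog manager directly. A Python sketch (all class and method names here are illustrative, not the real C++ types):

```python
# Dependency-inversion sketch: ShardRegistry depends only on a small
# lookup interface; CatalogManager implements it and is injected, which
# removes the registry -> catalog-manager compile/link dependency.

class ShardLookup:                       # minimal interface
    def get_shard_hosts(self, shard_id):
        raise NotImplementedError

class ShardRegistry:
    def __init__(self, lookup):
        self._lookup = lookup            # injected, not imported

    def resolve(self, shard_id):
        return self._lookup.get_shard_hosts(shard_id)

class CatalogManager(ShardLookup):       # implements the interface
    def get_shard_hosts(self, shard_id):
        return {"shard0": ["h1:27017"]}.get(shard_id, [])

registry = ShardRegistry(CatalogManager())
hosts = registry.resolve("shard0")
```

The registry can now be built and tested against any ShardLookup stub, with no reference to the concrete catalog manager.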
unittest failed on ubuntu ppc host project wiredtiger develop commit diff retry search if we race with prepare update commitrollback irrespective of whether the prepared update is visible or not retry the search again when the prepared update is either committedrollbacked inparallel to search dec utc evergreen subscription evergreen event task logs signature noformat fail subunitremotedtestcase testtoolstestresultrealstringexception traceback most recent call last file line in testcursoropenhandles selfassertequaldhafter dhbefore true file line in assertequal assertionfuncfirst second msgmsg file line in baseassertequal raise selffailureexceptionmsg assertionerror false true noformat
0
I propose that we temporarily disable the following test case:

Testcase: MongoDB.Driver.Tests.Specifications.transactions_convenient_api, transaction-options.json: "withTransaction explicit transaction options override client options"

until … is resolved. If we do this, we should also create a ticket to re-enable the test when … is resolved.
0
details of the issue are in this link i reported this issue at this time looks like this is happening because of mongodb dependencies note that after trying everything mentioned there eventually ended up with the below error i tried everything for days nights reinstalled nuget packages over and over again the only way to get it to work was to install vs on my azure vm and then the error stopped happening i went through all the dependencies of the driver found some documented ones missing installed them manually nothing worked very hard to debug this issue and it does not say which file is missing vs install fix indicates that the missing dll was probably in the gac note i even installed net and latest net core on that vm even that did not get the below to work application locussqlserverchangemanagementworkerservicefabricexe framework version description the process was terminated due to an unhandled exception exception info systemiofilenotfoundexception exception info systemiofilenotfoundexception at mongodbdrivercoreconnectionsclientdocumenthelpercreateosdocument at at systemruntimeexceptionservicesexceptiondispatchinfothrow at at mongodbdrivercoreconnectionsclientdocumenthelpercreateclientdocumentsystemstring at mongodbdrivercoreconnectionsbinaryconnectionfactoryctormongodbdrivercoreconfigurationconnectionsettings mongodbdrivercoreconnectionsistreamfactory mongodbdrivercoreeventsieventsubscriber at mongodbdrivercoreconfigurationclusterbuilderbuildcluster at mongodbdriverclusterregistrycreateclustermongodbdriverclusterkey at mongodbdriverclusterregistrygetorcreateclustermongodbdriverclusterkey at mongodbdrivermongoclientctormongodbdrivermongoclientsettings at locusmongodbcorelocusmongoclientdedicatedclusterinitializesystemstring systemstring systemstring systemstring systemstring at locusmongodbcorelocusmongoclientdedicatedclustergetsystemstring systemstring systemstring systemstring systemstring at locusmongodbconfiglocusconfigdatadatabase at 
locusconfigurationlocuscloudconfiggetloggingconfiguration exception info systemaggregateexception at systemthreadingtaskstaskcreationoptions at systemthreadingtaskstaskcreationoptions at locussqlserverchangemanagementchangetrackingtimerexecutetimer at locusframeworkprotectedtimermethodexecutesystemaction at locuscoretimersbaseworktimertimerelapsedsystemobject at systemthreadingexecutioncontextruninternalsystemthreadingexecutioncontext systemthreadingcontextcallback systemobject boolean at systemthreadingexecutioncontextrunsystemthreadingexecutioncontext systemthreadingcontextcallback systemobject boolean at systemthreadingtimerqueuetimercallcallback at systemthreadingtimerqueuetimerfire at systemthreadingqueueuserworkitemcallbacksystemthreadingithreadpoolworkitemexecuteworkitem at systemthreadingthreadpoolworkqueuedispatch
1
Trying to create a new Mongo client in a Windows environment using a URI results in an endless stream of "uninitialized constant Mongo::UnixSocket" errors, using version … of the Ruby driver. It seems like this started happening after Unix socket support was added in …; this issue didn't exist in ….x.

Mongo::Client.new(url)
D, DEBUG -- mongodb: adding … to the cluster
D, DEBUG -- mongodb: uninitialized constant Mongo::UnixSocket
D, DEBUG -- mongodb: adding … to the cluster
D, DEBUG -- mongodb: uninitialized constant Mongo::UnixSocket
D, DEBUG -- mongodb: uninitialized constant Mongo::UnixSocket
D, DEBUG -- mongodb: uninitialized constant Mongo::UnixSocket
D, DEBUG -- mongodb: uninitialized constant Mongo::UnixSocket
D, DEBUG -- mongodb: uninitialized constant Mongo::UnixSocket
D, DEBUG -- mongodb: uninitialized constant Mongo::UnixSocket
1
There must be a warning or an additional step in the procedure for migrating config servers with different names; otherwise the user will find "SSL invalid hostname" messages when attempting to start the mongos processes.
1
There is a typo in the title "Backup Agent Chagelog"; it should be "Backup Agent Changelog".
1
In the preparedness section, the upgrade checker script is no longer referenced. Is that on purpose? If not, then the content should be added back.
1
The on-prem backup docs appear to be a copy/paste job of the SaaS backup docs, including things like needing to open firewall connections to mms.mongodb.com, needing a valid credit card, and such. Whoops. This should get fixed ASAP.
1
In earlier commits, support was removed for the secondary_acceptable_latency_ms and tag_sets properties on the client, database, and collection objects, the corresponding options to find, and secondaryAcceptableLatencyMS as a URI option. The goal of this ticket is to add that support back for backward compatibility. Use of any of these features will raise DeprecationWarning. Related: the earlier commits.
0
ReadPreference mode=secondary with tags fails on … via mongos due to non-matching tags, returning an error; it returns documents on ….
0
Follow-on from …: that ticket has test-program debugging changes, so open this ticket for continued work on the actual problem. The important diagnostic information from that ticket, i.e. what I know from the original failure: it is a simple data loss; checkpoints are not involved. At the time this record is inserted, the missing timestamp is … and the record value is …. The stable timestamp at the time the previous checkpoint starts is …, and the starting checkpoint LSN is …. The stable timestamp at the time the checkpoint completes is …, and the ending checkpoint LSN is …. The LSN of the equivalent oplog-table record is …. The next checkpoint doesn't start until LSN …, so we're not in the middle of any checkpoint-related processing of any kind. The failing thread's first record after the checkpoint is record … at timestamp …. The failing thread is using prepared transactions sometimes, but did not use prepare on the missing record's transaction.
1
Add an invariant over here such that we prevent getting a prepare conflict in a runWithoutInterruptionExceptAtGlobalShutdown block. For this ticket we should also add a function in the OperationContext class which allows us to read ignoreInterrupts.
0
windows jscoresmalloplogin the smalloplog suite this test issues the closealldatabases command on the master while the slaves getmore into oplogrs is yielding when the getmore returns from the yield the dbtemprelease destructor acquires a context on the local database which attempts to reopen the local database which had been closed during the yield this trips an massert since opening a database requires a write lock the slave gets back an error from the getmore call and halts replication on the slave entirely the autoresync server option is not set so replication is never reattempted this prevents any further tests in the suite from running as smalloplog waits for the slave to catch up to the master inbetween testsassuming the above analysis is correct possible remedies include any of make closealldatabases not close local make dbtemprelease detect this failure turn on autoresync remove this test from the smalloplog suitetest log build index on jstestsdurdropracefoo properties v key x name ns jstestsdurdropracefoo building index using bulk build index done scanned total records databaseholdercloseall command admincmd command closealldatabases closealldatabases locksmicros opening db assertion open database in a read lock if db was just closed consider retrying the query might otherwise indicate an internal mongodexe mongodexe mongodexe mongodexe mongodexe mongodexe mongodexe mongodexe mongodexe mongodexe mongodexe mongodexe mongodexe mongodexe boostanonymous mongodexe mongodexe localoplogmain note not profiling because db went away probably a close on getmore localoplogmain exception cant open database in a read lock if db was just closed consider retrying the query might otherwise indicate an internal error locksmicros dropdatabase jstestsdurdroprace dropdatabase jstestsdurdroprace allocating new datafile datadbsconstestsjstestsdurdropracens filling with done allocating datafile datadbsconstestsjstestsdurdropracens size took allocating new datafile filling with 
done allocating datafile size took build index on jstestsdurdropracefoo properties v key id name id ns jstestsdurdropracefoo added index to empty insert jstestsdurdropracefoo query id locksmicros command jstestsdurdropracecmd command insert insert foo documents ordered true locksmicros build index on jstestsdurdropracefoo properties v key x name ns jstestsdurdropracefoo building index using bulk build index done scanned total records dropdatabase jstestsdurdroprace dropdatabase jstestsdurdroprace allocating new datafile datadbsconstestsjstestsdurdropracens filling with op err cant open database in a read lock if db was just closed consider retrying the query might otherwise indicate an internal error code halting caught repl sleep sec before next done allocating datafile datadbsconstestsjstestsdurdropracens size took all sources dead sync error no ts found querying remote oplog record sleeping for all sources dead sync error no ts found querying remote oplog record sleeping for secondscodesuite log excerptcode test dropindexjs command mongoexe port authenticationmechanism mongodbcr writemode commands eval testingreplication true date mon mar output suppressed see ms waiting for slave to catch up to master caught up replication ok for collections skipping waiting for slave to catch up to master caught up replication ok for collections test dropdbracejs command mongoexe port authenticationmechanism mongodbcr writemode commands eval testingreplication true date mon mar output suppressed see minutes timed out timed out tests succeeded tests didnt get run replication ok for collections timed out traceback most recent call last file buildscriptssmokepy line in main file buildscriptssmokepy line in main runteststests file buildscriptssmokepy line in runtests masterwaitforrepl file buildscriptssmokepy line in waitforrepl connectionportselfporttestingsmokewaitinsert file line in insert selfdatabaseconnection file line in dobatchedinsert 
clientsendmessageinsertmessageemptyjoindata safe safe file line in sendmessage rv selfcheckresponsetolasterrorresponse file line in checkresponsetolasterror raise operationfailuredetails details pymongoerrorsoperationfailure timeout running script task for command killallmcipkill mongo pkill mongod pkill mongos pkill f buildloggerpy pkill f smokepy in directory full command taskkill im mongodexe im mongosexe im mongoexe im testexe im buildloggerpy im smokepy im pythonexe im clexe f command successfully started and appended to running commandscode
0
Version … released; it fixes an issue with users imported from a MongoDB deployment running with …. See … for details.
1
all started with hunting a bug on mongoidorderable trying to reproduce the steps with a simple update leads me to the conclusion that there must be some trouble with mongoidthe following test demonstrates the problemcoderuby class survey include mongoiddocument embedsmany questions acceptsnestedattributesfor questions rejectif a ablank allowdestroy true end class question include mongoiddocument field content embeddedin survey embedsmany answers acceptsnestedattributesfor answers rejectif a ablank allowdestroy true end class answer include mongoiddocument embeddedin question field position type integer end codespecmongoidpersistableincrementablespecrb coderuby context when the document is embedded in another embedded document do sharedexamplesfor an incrementable embedded document in another embedded document do it increments a positive value do expectsecondanswerpositionto end it persists a positive inc do expectsecondanswerreloadpositionto end it clears out dirty changes do expectsecondanswertonot bechanged end end letsurvey do surveycreate end letquestion do surveyquestionscreatecontent foo end letfirstanswer do questionanswerscreateposition end letsecondanswer do questionanswerscreateposition end context when providing string fields do letinc do secondanswerincposition end itbehaveslike an incrementable embedded document in another embedded document end context when providing symbol fields do letinc do secondanswerincposition end itbehaveslike an incrementable embedded document in another embedded document end end codethe answer is deeply embedded in survey via question in the question i create two answers with different positions now i would incremente the second answer by one but get the followingcode id questions id content foo answers id position id position codeobviously it increments the first object from to but it should increment the second object from to as you can see herecode id questions id content foo answers id position id position codeso i call 
this a bug, or is there some limitation I didn't get? All is documented as well here.
1
Since changing src/mongo/db/matcher/expression_tree.h, we now get errors during compilation on clang:

{noformat}
error: moving a local object in a return statement prevents copy elision
{noformat}
1
When issuing something like:

{code}
collection.update({pid: …, site: …, vid: …, sid: …, ip: …},
                  {$setOnInsert: {card: …}, $push: {secs: {ev: …, uplay: …, ts: …}}},
                  {upsert: true})
{code}

no new document will appear in the collection. Removing $setOnInsert will do.
1
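The expected upsert semantics that the report above says are violated can be modeled in pure Python (field names mirror the report but are illustrative): when no document matches, a new one is built from the query's equality fields plus `$setOnInsert` plus `$push`, and it is inserted.

```python
# Pure-Python model of what an upsert combining $setOnInsert and $push is
# expected to do when nothing matches. The bug reported above is that the
# real server inserted no document at all in this case.

def upsert(collection, query, update):
    for doc in collection:
        if all(doc.get(k) == v for k, v in query.items()):
            return doc                      # matched: normal update path
    doc = dict(query)                       # seed from equality criteria
    doc.update(update.get("$setOnInsert", {}))
    for field, value in update.get("$push", {}).items():
        doc.setdefault(field, []).append(value)
    collection.append(doc)                  # expected: a new doc appears
    return doc

coll = []
upsert(coll, {"pid": 1, "sid": 2},
       {"$setOnInsert": {"card": "x"}, "$push": {"secs": {"ev": "uplay"}}})
```

Under these semantics, dropping `$setOnInsert` should not be required for the insert to happen.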
This page needs a very prominent warning explaining that LDAP may only be used for completely fresh MMS On-Prem installs, i.e. no existing users, groups, etc.
1
The following tests are failing in the master branch after upgrading to libmongoc … and MongoDB …. The failures are likely related to the server version bump rather than libmongoc, but we need to investigate that, and should resolve the failures either way:

{noformat}
connect to mongodb with ssl and auth
connect to mongodb with ssl and auth (stream context)
connection should not reuse previous stream after an auth failure
connect to mongodb with ssl and auth and username retrieved from cert
connect to mongodb with ssl and auth and username retrieved from cert (stream context)
{noformat}
1
Running the attached client as follows:

{noformat}
benz zyzx projects/mongoutils/build: $in query, query time in milliseconds …; count; $in query, query time in milliseconds …; $in query, query time in milliseconds …; count …
{noformat}

The last execution never returns a result, and the client is blocked in the call to DBClientCursor::more. Note that the server logs the slow query at … on my machine, but the process continues to consume CPU for … seconds.
1
This page should probably mention what the behaviour of $inc is if the field doesn't exist. There is also no mention of how it works with findAndModify.
1
When specifying readPreference RP_PRIMARY_PREFERRED we actually send secondary; when specifying readPreference RP_SECONDARY we actually send primaryPreferred.
1
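The swapped mapping described above can be illustrated with a small sketch. Python is used purely for illustration; the `RP_*` constant names and the shape of the table are assumptions, not the driver's actual API.

```python
# Hypothetical sketch of the mapping bug described in the report: the
# driver's table pairing read-preference constants with wire-protocol mode
# names has two entries swapped. Names here are illustrative only.

# Buggy table, as the report describes the observed behavior:
BUGGY_MODE_TO_WIRE = {
    "RP_PRIMARY": "primary",
    "RP_PRIMARY_PREFERRED": "secondary",         # wrong per the report
    "RP_SECONDARY": "primaryPreferred",          # wrong per the report
    "RP_SECONDARY_PREFERRED": "secondaryPreferred",
    "RP_NEAREST": "nearest",
}

# Corrected table:
MODE_TO_WIRE = {
    "RP_PRIMARY": "primary",
    "RP_PRIMARY_PREFERRED": "primaryPreferred",
    "RP_SECONDARY": "secondary",
    "RP_SECONDARY_PREFERRED": "secondaryPreferred",
    "RP_NEAREST": "nearest",
}

def wire_mode(rp: str, table=MODE_TO_WIRE) -> str:
    """Translate an internal read-preference constant to its wire name."""
    return table[rp]
```

The fix is a straightforward swap of the two table entries; the buggy table is kept only to make the reported behavior concrete.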
i design an academic distributed application in which i have a program who streams and collects tweets via twitter streaming api in particular profiles authors informations on a dedicated collection in my mongodb database in this collection i have a unique index applied on fields my distributed application works with apache camel framework and with rabbitmq server when i set a number of consumers behind my streamer i get duplicates in my collection more precisely for each duplicate entry i have an incomplete entry with numerous missing fields and a complete entry if i drop and try to recreate unique index i get an error saying duplicates are present in collection i think it is a concurrent access problem since collected date of each duplicate entries are very close i give below an example of duplicates with current applied indexes on the collection codemongoshell dbprofilesfindaccountid id broadcaster twitter accounttype account accountid collecteddate id broadcaster twitter accounttype account accountid collecteddate userid lang eng location kuala lumpur city username kakajan haytlyyev fbrparty useraccount utcoffset profilelink accountcreatedat description say what you mean and mean what you say npolitical correctness is not allowednresistance resistanceunited strongertogether isverified false geoenabled false profileimageurl profilebackgroundimageurl followerscount friendscount listedcount statusescount iscontributorenabled false istranslator false isprotected false dbprofiles dbprofiles dbprofilesgetindexes v key id name id ns documentsprofiles v unique true key broadcaster accountid id name appkey ns documentsprofiles v key broadcaster useraccount id name ns documentsprofiles code
0
we have an issue with replication that is preventing us from successfully adding any new nodes to the replica sets in our main mongo sharded shards cluster the team here is evaluating moving to a different db platform but im hoping that theres a patch or workaround that will allow us to continue growing the cluster without needing to boot up new boxes with double the ram just to successfully replicatefrom a new box dual core vm instance on joyentcloud running centos we installed the latest mongo and started replication for the shard the external sort finished but then mongo eventually crashed heres the end of the mongo log from varlogmongomongodlog note how fast it was going initially and then how slow it got near the end took minutes to get to and then hours or so to get from to before it crashedwed jun external sort used files in jun allocating new datafile filling with zeroeswed jun done allocating datafile size took jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun old journal file will be removed jun allocating new datafile filling with zeroeswed jun done allocating datafile size took jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun allocating new datafile filling with zeroeswed jun done allocating datafile size took jun old journal file will be removed jun old journal file will be removed jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun allocating new datafile filling with zeroeswed jun done allocating datafile size took jun task writebackmanagercleaner took jun allocating new datafile filling with zeroeswed jun done allocating datafile size took jun command admincmd command writebacklisten jun old journal file will be removed jun command admincmd command writebacklisten jun command admincmd 
command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten jun command admincmd command writebacklisten etc and hours later its now up to jun command admincmd command writebacklisten the varlogmessages file we saw that the linux oomkiller was being invoked and killing mongodwhen mongo adds to a new member of a replica set the data seems to transfer over just fine but then it runs out of memory when it is building its indexes sort tables etcwe have other members of the replica set running with ram swap drive just fineive only experienced oomkiller being invoked once or twice ever and its only when stuff is really bad on the server and always always something configured wrong and running out of disk space and rammongo docs declare that mongo isnt supposed to do thiswe arent booting up small boxes here this failure is happening on a dual core machine with ram and a swap diskmongo docsswapit is useful for the linux kernel to have swap space to use in emergencies because of the way mongodb memory maps the database files none of this data will ever end up in swap this means that on a healthy system the swap space will rarely be used on a system only running mongodb having swap can keep the kernel from killing mongodb when physical memory limits are reachedyou may also want to look at using something which compresses swapmemory like compcachemongodb uses memory mapped files the entire 
data is mapped over time if there is no memory pressure the mongod resident bytes may approach total memory as the resident bytes includes file system cache bytes for the file pages open and touched by mongodvarlogmessagesand the oom info in the varlogmessages filejun kernel mongod invoked oomkiller gfpmask kernel mongod cpuset kernel pid comm mongod not tainted kernel call tracejun kernel dumpheaderjun kernel findlocktaskmmjun kernel oomkillprocessjun kernel outofmemoryjun kernel allocpagesnodemaskjun kernel allocpagescurrentjun kernel pagecacheallocjun kernel findgetpagejun kernel filemapfaultjun kernel dofaultjun kernel handleptefaultjun kernel memcgroupcountvmeventjun kernel handlemmfaultjun kernel dopagefaultjun kernel switchtojun kernel schedulejun kernel pagefaultjun kernel meminfojun kernel node dma percpujun kernel cpu hi btch usd kernel cpu hi btch usd kernel node percpujun kernel cpu hi btch usd kernel cpu hi btch usd kernel node normal percpujun kernel cpu hi btch usd kernel cpu hi btch usd kernel kernel kernel kernel kernel kernel node dma allunreclaimable yesjun kernel lowmemreserve kernel node allunreclaimable yesjun kernel lowmemreserve kernel node normal allunreclaimable nojun kernel lowmemreserve kernel node dma
1
hello compass allows transforming a document field type date to timestamp but not vice versa either on create either on edit i didnt manage to see any error in the console im sorry screen shot at the document fields type timestamp either on create either on edit are not revisable if you try to edit a timestamp value on create phase it throws the following error in the dev console the workaround is to transform the type to string edit the value and then transform back to timestamp code applicationsmongodb compassappcontentsresourcesappasarnodemodulesreactdomlibreacterror… uncaught typeerror thispropsedit is not a function at editableelementfocuseditvalue applicationsmongodb at objectinvokeguardedcallback applicationsmongodb at executedispatch applicationsmongodb at objectexecutedispatchesinorder applicationsmongodb at executedispatchesandrelease applicationsmongodb at arrayforeach native at foreachaccumulated applicationsmongodb at objectprocesseventqueue applicationsmongodb at runeventqueueinbatch applicationsmongodb at objecthandletoplevel applicationsmongodb focuseditvalue applicationsmongodb compassappcontentsresourcesappasarsrcinternalpackagescrudlibcompon… invokeguardedcallback applicationsmongodb compassappcontentsresourcesappasarnodemodulesreactdomlibreacterror… executedispatch applicationsmongodb compassappcontentsresourcesappasarnodemodulesreactdomlibeventplugi… executedispatchesinorder applicationsmongodb compassappcontentsresourcesappasarnodemodulesreactdomlibeventplugi… executedispatchesandrelease applicationsmongodb compassappcontentsresourcesappasarnodemodulesreactdomlibeventplugi… foreachaccumulated applicationsmongodb compassappcontentsresourcesappasarnodemodulesreactdomlibforeachacc… processeventqueue applicationsmongodb compassappcontentsresourcesappasarnodemodulesreactdomlibeventplugi… runeventqueueinbatch applicationsmongodb compassappcontentsresourcesappasarnodemodulesreactdomlibreactevent… handletoplevel applicationsmongodb 
compassappcontentsresourcesappasarnodemodulesreactdomlibreactevent… handletoplevelimpl applicationsmongodb compassappcontentsresourcesappasarnodemodulesreactdomlibreactevent… perform applicationsmongodb compassappcontentsresourcesappasarnodemodulesreactdomlibtransactio… dispatchevent applicationsmongodb compassappcontentsresourcesappasarnodemodulesreactdomlibreactevent… superbugsnag applicationsmongodb code thank you
0
Align the WT parameters in the fuzzer (this line) with the code which enforces eviction_dirty_target < eviction_dirty_trigger.
0
When calling distinct on a field which is missing from some of the records, the result either includes or does not include null, depending on whether there is an index on that field.
0
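A minimal sketch of why the two access paths can disagree, assuming the usual behavior that an index stores an explicit null key for a document missing the field. This is toy data and toy logic, not server code.

```python
# Toy illustration of the distinct inconsistency: a collection scan only
# reports values for documents that actually contain the field, while an
# index scan sees an explicit null index key for documents missing it.
docs = [{"x": 1}, {"x": 2}, {}]  # the last document is missing "x"

def distinct_via_scan(documents, field):
    # Collection-scan path: documents without the field contribute nothing.
    return sorted({d[field] for d in documents if field in d}, key=repr)

def distinct_via_index(documents, field):
    # Index-scan path: a missing field is indexed as null, so null appears.
    return sorted({d.get(field) for d in documents}, key=repr)
```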
Hi, I have implemented the lab through the MongoDB University course plan: created a replica set of nodes and completed the lab. My system was shut down abruptly, and now when I am trying to start mongod on the same host I am seeing the below log (attached). Steps taken to resolve the issue: resumed the hosts, deleted the lock files from all the servers, and restarted the server, but faced the same issue.
1
paneltitleissue status as of may issue description and impact wiredtigers rollback to stable rts process runs at startup time to remove from page images any writes that occurred after the nodes stable timestamp because of this bug in mongodb the rts process can corrupt page metadata causing documents on affected pages to become invisible to mongodb any startup can trigger the bug including the initial upgrade to mongodb possible outcomes are most likely a fatal error and inability to restart due to duplicate key exceptions temporary query incorrectness if a crash does not occur fatal error and inability to restart most likely a fatal error and crash occurs during replication oplog recovery at startup or immediately after the node enters the secondary state operations stored in the replication oplog that are reapplied to affected pages tend to be incompatible with the current state of data for example an update to an invisible document becomes an upsert that collides with an invisible documents key in the unique id index leading to a duplicate key exception temporary query incorrectness if a crash does not occur documents on affected pages remain invisible temporarily normally the set of potentially impacted page images is limited to those pages that were evicted from memory just before the last checkpoint before the shutdown but a lagging majority commit point across the cluster can widen this set importantly depending on how the application responds to missing documents any query correctness issue can lead to logical data corruption it is probable that no user intervention is required for affected pages to be evicted and reloaded into memory correcting the issue diagnosis and affected versions this bug affects mongodb version only any nodes running mongodb version can be affected on any restart impacted nodes are most likely to crash with a caught exception during replication recovery message on startup as in the following noformat exception during replication 
recoveryattr duplicate key error collection index id tsf ccontrol ctxinitandlistenmsgwriting fatal messageattrmessageterminate called an exception is active attempting to gather more information tsf ccontrol fatal messageattrmessagedbexceptiontostring duplicatekey noformat it is also possible for an impacted node to crash with a writer worker caught exception after entering secondary status such as noformat crepl worker caught exceptionattrerrorduplicatekey keypattern id oplogentry noformat if a node does successfully start user applications may throw errors due to missing documents and the node may log nonfatal errors like erroneous index key found with reference to nonexistent record id when trying to access the document remediation and workarounds the fix is included in the production release if a node has crashed and cannot be restarted without error the most straightforward remediation is to restart the node on and upgrade the rest of the cluster to remediate the issue on resync the impacted node importantly restarting the node on is not a remediation to query correctness issues because any restart can trigger the bug panel original description wiredtiger transaction ids persisted to the disk should be reset to after database restart it relies on wiredtiger checking the pages write generation against the connection level base write generation if the pages write generation is smaller we should clear the transaction ids on that page we only update the connection level base write generation after we have done rollback to stable so that during rollback to stable we havent cleared the transaction ids the issue is that if we create a new disk image during rollback to stable and we are not writing it to disk ie update restore eviction or inmemory page split we neither clear the transaction ids on that page nor update its page write generation the page write generation is initialized as since the page is still in memory after we have updated the connection level base 
write generation if we read this page again we will not clear the transaction ids of the page because its page write generation is which causes the data consistency problem
1
Changed in version: MongoDB includes support for two storage engines: the storage engine available in previous versions of MongoDB, and WiredTiger. MongoDB uses the former engine by default. Need to add a note that WiredTiger is now used by default.
1
branch systemperfyml for change most references from master to remove the wtdevelop variants from the commitv branch evergreenyml for after branching in the branch change most references from master to remove the wtdevelop variants from the commit see work done in remove wt develop from systemperfyml and evergreenyml similar to what weve done for previous releases the wtdevelop build variants should be removed from the release mongodb branches update filename suffix to for nightly builds for branch after branching we should follow the same steps we when we branch in update perfyml to use enterprise module all of these main bullet points should be a separate commit but they should be pushed together in the same commit queue task the reason they should be pushed as separate commits is in the case of needing to revert one aspect of this entire task
1
the relationship between slowms or whatever it is called in and the log file messages is not clear lots of talk about how it relates to profiling on this page and elsewhere but it is not clear that it also affects messages in mongod log files
0
signature noformat error subunitremotedtestcase testtoolstestresultrealstringexception lost connection during test noformat unittestlong failed on ubuntu host project wiredtiger develop commit diff moved sversion packagespec to autoconf moved the generation of the packagespec to the autoconf generation function in sversion this logic is not needed for the cmake build system dec utc evergreen subscription evergreen event task logs unittestlong
0
When an invalid todb argument that contains a dot is provided, copyDatabase just ignores everything that comes after the dot and copies to the destination database instead of failing. E.g. something like this: code db.copyDatabase('test', 'local.whatever', 'localhost') code will return code { ok: 1 } code and the content of test will be copied into the local database.
0
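A sketch of the validation the reporter expects, under the assumption that a dotted target name should be rejected outright rather than silently truncated. Both helper names are hypothetical, not the server's actual code.

```python
# Hypothetical validators contrasting the reported behavior with the fix.

def buggy_target_db(todb: str) -> str:
    # Current behavior as described: everything after the dot is ignored,
    # so "local.whatever" quietly becomes "local".
    return todb.split(".", 1)[0]

def validate_target_db(todb: str) -> str:
    # Expected behavior: refuse a database name containing a dot.
    if "." in todb:
        raise ValueError(
            f"invalid target database name {todb!r}: names may not contain '.'"
        )
    return todb
```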
Mongo secondary instance crashed during replication from primary. tue jul assertion bad type tue jul assertion bad type tue jul fatal assertion mongod tue jul aborting after fassert failure tue jul got signal aborted tue jul backtrace mongod
0
There needs to be a paragraph here about restoring an entire replica set or a whole cluster. The procedure should indeed be trivial (simply perform the below steps for each individual server), but customers typically don't assume that it's so easy and don't know what to do. Suggested wording: when restoring a replica set, make sure all replicas are stopped, then copy the restore image to the dbpath directories of all replicas, then start the replicas up again. For a sharded cluster, stop the whole cluster, then copy the appropriate restore images to all the shards' replica sets and to all the config servers respectively, then start the cluster again.
1
the os x builder is failingcodethe following tests failed with exit jul build index testmany mon jul build index done scanned total records secsmon jul build index testmany test jstestsindexmanyjs test test test test are not equal one or more tests failederror printing stack trace at printstacktrace at doassert at functionasserteq at functionassertparalleltests at paralleltesterrun at jul javascript execution failed are not equal one or more tests failed at to load seems strange is that looking at the builder timeline apparently this build has succeeded previously on the same commit i attempt to reproduce this failure on my own mac osx i get a different but consistent failure when running jstestsparallelbasicjscode test mon jul cmd drop testobjnesttestmon jul build index testobjnesttest id mon jul build index done scanned total records secsmon jul info creating collection testobjnesttest on add indexmon jul build index testobjnesttest a mon jul build index done scanned total records secs minutestest userskangasworkspacemongojstestsparallelbasicjs exited with status it real or is it memorexedit as eric points out parallelbasicjs is just a wrapper for a number of other tests the underlying test which failed wascode test jstestscompactjs ok failederror printing stack trace at printstacktrace at doassert at assert at at at functiondatetimefunc at at arrayforeach native at numberparalleltestsfun at jul javascript execution failed assert failed at parallel test failed error error loading js file jstestscompactjscode
0
mongorestore was switched to apply one operation at a time per and however as noted in a server can detect where batches of crud ops can be applied so it may be possible this optimization could be reintroduced if mongorestore is connected to a server version greater than related discussion
0
See the abrupt time jump here; it seems to be related to a failed attempt to shut down mo. I have a patch here that just relies on the killallmci expansion to pwn mo, since it kills all python processes and any mongod/mongos that mo may have started. However, I think I want to investigate a bit more on a spawnhost first, as giving up on clean shutdown is a bit hacky.
1
not sure if it is a manual issue a build issue or maybe user error i am following the instructions at note that this page is definitely live as it is linked from the linux download area at i have created the repo file in step note that there is a reference in the repo file not sure if that is intentional code namemongodb repository baseurl gpgkey code step yields the following code sudo yum install y loaded plugins fastestmirror loading mirror speeds from cached hostfile base reposlaxquadranetcom extras mirrorsunifiedlayercom updates mirrorsocfberkeleyedu no package available no package available no package available no package available no package available error nothing to do code centos in virtualbox code uname a linux localhostlocaldomain smp sun sep utc gnulinux code
1
despite the instance setup failing the test when the client or collection are nil the test method continues to execute and emits a fail error attempting to unwrap the optional noformat swift test filter testlistindexes crashlog symbolicatelinuxfatal crashlog test suite selected tests started at test suite mongocollectiontests started at test case mongocollectionteststestlistindexes started at error mongocollectionteststestlistindexes xctasserttrue failed error mongocollectionteststestlistindexes failed setup failed commanderrormessage no suitable servers found serverselectiontryonce set fatal error unexpectedly found nil while unwrapping an optional value current stack trace libswiftcoreso swiftstdlibreportfatalerror libswiftcoreso function signature specialization of generic specialization of a a libswiftcoreso partial apply forwarder for closure swiftunsafebufferpointer in swiftfatalerrormessage swiftstaticstring swiftstaticstring file swiftstaticstring line swiftuint flags swiftnever libswiftcoreso function signature specialization of generic specialization of a a libswiftcoreso function signature specialization of swiftfatalerrormessage swiftstaticstring swiftstaticstring file swiftstaticstring line swiftuint flags swiftnever libswiftcoreso swiftfatalerrormessage swiftstaticstring swiftstaticstring file swiftstaticstring line swiftuint flags swiftnever mongoswiftpackagetestsxctest mongoswifttestsmongocollectionteststestlistindexes throws at mongoswiftpackagetestsxctest partial apply forwarder for mongoswifttestsmongocollectionteststestlistindexes throws at mongoswiftpackagetestsxctest reabstraction thunk helper from escaping calleeguaranteed error owned swifterror to escaping calleeguaranteed inguaranteed out error owned swifterror at mongoswiftpackagetestsxctest partial apply forwarder for reabstraction thunk helper from escaping calleeguaranteed error owned swifterror to escaping calleeguaranteed inguaranteed out error owned swifterror at libxctestso partial 
apply forwarder for reabstraction thunk helper from escaping calleeguaranteed inguaranteed out error owned swifterror to escaping calleeguaranteed error owned swifterror libxctestso partial apply forwarder for closure xctestxctestcase throws in xctesttest in throws xctestxctestcase throws libxctestso partial apply forwarder for reabstraction thunk helper from escaping calleeguaranteed guaranteed xctestxctestcase error owned swifterror to escaping calleeguaranteed inguaranteed xctestxctestcase out error owned swifterror libxctestso reabstraction thunk helper from escaping calleeguaranteed guaranteed xctestxctestcase error owned swifterror to escaping calleeguaranteed inguaranteed xctestxctestcase out error owned swifterrorpartial apply forwarder with unmangled suffix libxctestso partial apply forwarder for reabstraction thunk helper from escaping calleeguaranteed inguaranteed xctestxctestcase out error owned swifterror to escaping calleeguaranteed guaranteed xctestxctestcase error owned swifterror libxctestso xctestxctestcaseinvoketest libxctestso xctestxctestcaseperformxctestxctestrun libxctestso xctestxctestrun libxctestso xctestxctestsuiteperformxctestxctestrun libxctestso xctestxctestsuiteperformxctestxctestrun libxctestso xctestxctmainswiftarray swiftnever mongoswiftpackagetestsxctest main at libcstartmain mongoswiftpackagetestsxctest start exited with signal code noformat
0
is unavailable this affects all servers using this repo error is as follows failure repodatarepomdxml from mongodb no more mirrors to try failed connect to connection refused failed connect to connection refused failed connect to connection refused failed connect to connection refused failed connect to connection refused failed connect to connection refused failed connect to connection refused failed connect to connection refused failed connect to connection refused failed connect to connection refused
1
to support clientside operations timeout we must allow timeoutms to be set via uri options see the related spec changes in the urioptions spec
0
I have started moving an old project from Mongoid to Mongoid. The project is using geospatial search with an additional FTS search of the returned results. The query looks like this: coderuby docs adresscollectionaggregate geonear near coordinates maxdistance distancefield distance limit match fts regex isci limit toa code I have found out that the aggregate syntax has changed and that parameters are now passed as an array, but then all kinds of other errors appear. Can you please help me with how to define this query in Mongoid, since documentation and examples are almost nonexistent? Thanks, Damjan Rems
1
How do I call it? I tried db.collStats() but get: TypeError: Property 'collStats' of object mydb is not a function.
1
In particular, heap_usage_bytes should use the generic.current_allocated_bytes numeric property rather than mallinfo, which will allow us to get a value.
0
If you include Mongoid::Timestamps, the model class has both updated and created timestamps, so when you set timeless it increments the counter by two but only decrements it by one, so each timeless call causes a future update to be timeless.
1
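A toy model of the counter mismatch described above. This is not Mongoid's actual implementation; the class, method, and field names are illustrative only.

```python
# Minimal model of the reported bug: enabling "timeless" bumps a suppression
# counter once per timestamp field (created + updated => +2), but clearing
# only decrements by one, so one suppression leaks into a future update.
class TimelessCounter:
    def __init__(self, timestamp_fields=("created_at", "updated_at")):
        self.fields = timestamp_fields
        self.count = 0

    def enable_buggy(self):
        # Increments once per timestamp field, as the report describes.
        self.count += len(self.fields)

    def enable_fixed(self):
        # One timeless call should add exactly one suppression.
        self.count += 1

    def clear_one(self):
        # Each save only decrements by one.
        if self.count:
            self.count -= 1

    @property
    def timeless(self):
        return self.count > 0
```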
As each is marked deprecated, I used forEach; that gave me the following warning: ReferenceError: callback is not defined. Looking into cursor.js I discovered that Cursor.prototype.each is calling each, passing in a callback function that is not defined in that function, and this is a breaking change.
1
explain needs to better handle migrations during the explain process similar to improvements for reproduces intermittently
0
since distinguishing between sharded and unsharded collections in mongos is nontrivial and not correct when the sharding status changes this also interferes with aggregation since cursor type does not necessarily correspond to input collection sharding state
0
description quote adds a type field to documents from the currentop aggregation stage will be idlesession idlecursor or op ticket description idlecursor documents already have a type field but all objects in curop output should have a type field as well quote scope of changes referencemethoddbcurrentop referencecommandcurrentop referenceoperatoraggregationcurrentop impact to other docs mvp work and date resources scope or design docs invision etc
0
maxDistance: "Specifies a maximum distance to limit the results of $near and $nearSphere queries. The 2d and 2dsphere indexes support $centerSphere." Just a typo in the $maxDistance entry: the 2d and 2dsphere indexes support $maxDistance, not $centerSphere.
1
With the guide to install MongoDB manually, there is no way to use tools like mongorestore, because the tools package is not included in the installation process and there is no mention of that. It should say something about the necessity of installing mongodb-org-tools in order to use mongodump or mongorestore.
1
I've noticed a minor UI bug with the documentation pages: clicking anywhere on the document toggles the options panel's arrow, whether the options panel is open or not.
1
what’s new section for on the nodejs documentation shows the changes rather than update the what is new section to include changes on
0
this is a follow up to we still have issues with cors preflight requests still fail these are my observations after changes made in looking at the splunk logs allowedorginds are still strings and not regex addingheaders parameter also needs to change to reflect stringcontainssliceregex accesscontrolalloworigin header is still not added for options request i would like to log what addcorsheaders returns cors requests from currently blocked can we allow requests for all subdomains of corpmongodbcom and are similar requests we made in the past it will be great if can avoid these requests in future i have the same issue in staging as well evergreenstagingcorpmongodbcom blocks requests from cc
1
For a query shape that has never been run before, it is not optimal to cache a plan if it ties with another, because not enough information is available to conclusively rule out either plan. However, for a query shape that generates two plans that are guaranteed to always tie, it is useless to run a trial for those two plans on every query. A better solution is to cache an arbitrary plan after it ties n times in a row with the winning plan. Related:
0
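The proposed policy can be sketched roughly as follows. The threshold value and the data structures are assumptions for illustration, not details from the source.

```python
# Sketch of "cache an arbitrary plan after n consecutive ties": keep trialing
# tied plans, but once a query shape ties TIE_THRESHOLD times in a row,
# cache one of the tied plans so trials stop.
TIE_THRESHOLD = 3  # assumed value of "n"; not specified in the report

class TieAwarePlanCache:
    def __init__(self, threshold=TIE_THRESHOLD):
        self.threshold = threshold
        self.tie_streak = {}   # query shape -> consecutive tie count
        self.cache = {}        # query shape -> cached plan

    def record_trial(self, shape, plans, tied):
        """Record a plan-ranking trial; return the cached plan, if any."""
        if shape in self.cache:
            return self.cache[shape]
        if not tied:
            self.tie_streak[shape] = 0
            self.cache[shape] = plans[0]      # clear winner: cache it
        else:
            self.tie_streak[shape] = self.tie_streak.get(shape, 0) + 1
            if self.tie_streak[shape] >= self.threshold:
                self.cache[shape] = plans[0]  # arbitrary tied plan after n ties
        return self.cache.get(shape)
```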
When running checkReplicaSet in replsettest.js, we run collMod on all the collections to wait for any index builds before running awaitReplication. However, if there is a background thread running a command, it could potentially drop a collection after listCollections but before collMod, causing a NamespaceNotFound error. To fix this, check the output of runCommand to see if there was a NamespaceNotFound error, and if so, ignore it.
0
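The suggested fix lives in replsettest.js (JavaScript); here is a Python sketch of the same logic with a stubbed runCommand, using the standard NamespaceNotFound error code 26. The function name is hypothetical.

```python
# Sketch of the proposed fix: inspect runCommand's reply and treat
# NamespaceNotFound (error code 26) as benign, since a background thread may
# have dropped the collection between listCollections and collMod.
NAMESPACE_NOT_FOUND = 26

def collmod_ignoring_dropped(run_command, coll_name):
    reply = run_command({"collMod": coll_name})
    if reply.get("ok"):
        return reply
    if reply.get("code") == NAMESPACE_NOT_FOUND:
        # Collection was dropped concurrently; nothing to wait for.
        return {"ok": 1, "note": "collection dropped concurrently; ignored"}
    raise RuntimeError(f"collMod failed: {reply}")
```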
In the same vein as , given the chrono facility available in any environment targeted by this driver, manually converting a human-readable duration to seconds expressed as an integer seems a bit quaint, so I'm curious if there would be value in using a chrono::duration to express index TTL values.
0
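A Python analogue of the suggestion, since the real proposal is about C++ std::chrono: accept a duration object for the index TTL and convert it to the whole seconds that expireAfterSeconds requires. The function names and API shape are hypothetical.

```python
# Sketch: let callers pass a timedelta instead of hand-computing seconds
# for a TTL index's expireAfterSeconds option.
from datetime import timedelta

def expire_after(ttl) -> int:
    """Accept either an int (seconds) or a timedelta; return whole seconds."""
    if isinstance(ttl, timedelta):
        return int(ttl.total_seconds())
    return int(ttl)

def ttl_index_options(ttl):
    # An index options document could then be built from the duration.
    return {"expireAfterSeconds": expire_after(ttl)}
```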
The replSetSyncFrom command, if executed during file-based initial sync before the files have been completely copied, should restart the initial sync.
0
the timestamp usage assertions are currently undocumented we should make them documented and fully supported
0
A user in reported an exception calling servergetinfo on a platform because the ismaster response included an integer with an out-of-range value. As a diagnostic function, it would probably be helpful if we avoided such exceptions and instead downgraded integers to strings. servergetinfo makes no promises about providing the exact ismaster response, and there isn't a concern with data loss due to round-tripping document data (the reason we throw such exceptions), so I think it would be permissible to perform lossy decoding here. Assuming this requires a new BSON decoding mode/flag, we might also consider it for other instances where BSON decoding is invoked during a debug function.
1
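A sketch of what the lossy decoding could look like, using a 32-bit range as the example platform limit. The recursive walker and the threshold are purely illustrative, not the PHP driver's implementation.

```python
# Sketch of "downgrade out-of-range integers to strings instead of throwing"
# during decoding of a diagnostic reply. 32-bit bounds model a 32-bit build.
INT_MAX = 2**31 - 1
INT_MIN = -(2**31)

def lossy_decode(value):
    if isinstance(value, bool):
        return value  # bools are ints in Python; leave them alone
    if isinstance(value, int) and not (INT_MIN <= value <= INT_MAX):
        return str(value)  # downgrade instead of raising an exception
    if isinstance(value, dict):
        return {k: lossy_decode(v) for k, v in value.items()}
    if isinstance(value, list):
        return [lossy_decode(v) for v in value]
    return value
```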
In the process we would like to make it unit-testable. It currently is in ReplicationCoordinatorExternalStateImpl::cleanUpLastApplyBatch and should be renamed to recoverFromOplog.
0
codejava error bad request generated project is invalid cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks cannot redefine tasks in generatetasks job code
0
the test coverage is already handled elsewhere and the way the test inserts data creates performance problems on machines where it runs as part of the suite
0
replSetSyncFrom may be ignored by a node, e.g. if the target node is down or behind. The node receiving the command does not consider this, and will time out while waiting for the node to switch its sync target rather than trying to tell the node to change sync target again.
0
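A sketch of the retry behavior the report suggests: re-send the request on each poll iteration instead of sending it once and timing out. The stubbed send/get callables stand in for issuing replSetSyncFrom and checking the current sync source; all names are hypothetical.

```python
# Sketch: keep re-issuing the sync-from request until the node reports the
# desired sync source, rather than issuing it once and polling to a timeout.
def force_sync_source(send_sync_from, get_sync_source, target, attempts=10):
    for _ in range(attempts):
        send_sync_from(target)          # re-issue; the node may ignore it
        if get_sync_source() == target:
            return True
    return False
```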
following an earlier issue with mongo config servers being out of sync and manually resyncing the config dbs im seeing the following error message in my logs and having trouble writing to the databaseaug s env dbcluster fri aug going to retry checkshardversion host oldversion timestamp oldversionepoch ns dbtrafficsourcesbyhour version timestamp versionepoch globalversion timestamp globalversionepoch errmsg client version differs from configs for collection dbtrafficsourcesbyhour ok ive tried restarting all mongos instances stepping down the primary flushing the router configs all without any success
1