text_clean: string (lengths 10 – 26.2k)
label: int64 (0 or 1)
{noformat}
panic: runtime error: invalid memory address or nil pointer dereference

goroutine ... [running]:
panic(...)
github.com/evergreen-ci/evergreen/command.RemoteCommand.Stop
github.com/evergreen-ci/evergreen/hostutil.CheckSSHResponse
github.com/evergreen-ci/evergreen/cloud.CloudHost.IsSSHReachable
github.com/evergreen-ci/evergreen/hostinit.HostInit.IsHostReady
github.com/evergreen-ci/evergreen/hostinit.HostInit.setupReadyHosts
github.com/evergreen-ci/evergreen/hostinit.Runner.Run
created by main.startRunners
{noformat}
1
Maybe I didn't understand the rollback part very well, but I think there is a mistake in the first and second possible rollback operations. Indeed, the second paragraph tells me to follow the steps below, but I have the impression that the steps are for the first paragraph, not the second. Best regards.
1
Problem: an observed performance regression has been introduced by ... . Good commit: the throughput was ...; this commit: the throughput is ... . The git bisect for this is as follows; this is the first bad commit: commit ..., Scott Hernandez, Date: Fri Dec ..., "check parents in dotted paths during ...". Reproduce:
{code:javascript}
var tests = [];
load("mongo-perf/util/utils.js");

var setupMMS = function (collection) {
    collection.drop();
    var base = { _id: ..., a: ..., h: ..., z: ... };
    for (var i = ...; i < ...; i++) {
        base.h... = ...;
        for (var j = ...; j < ...; j++) {
            base.h... = { n: ..., t: ..., v: ... };
        }
    }
    collection.insert(base);
};

tests.push({ name: ..., tags: [...], pre: setupMMS, ops: [...] });

runTests(tests); // test sanity
{code}
1
In certain situations, some helper methods could recursively call pool.GetSocket. The example helper is Database.CollectionNames, which gets a socket from the pool and passes it to Database.ListCollections. ListCollections returns a command cursor, which CollectionNames immediately fully iterates while still holding the original socket. If the command cursor has to send OP_GET_MORE to the database, a second socket will be required. This can cause a deadlock, because the pool's socket semaphore uses a Lock rather than an RLock (which is correct and not the source of the bug). Fix CollectionNames so that it iterates the command cursor after returning the original socket, and look for any other methods that might have a similar problem; CollectionOptions is a likely candidate. The severity of this bug is limited with MongoDB ..., since the collection count must be very large to require multiple result sets from the server, but it can be hit easily with older versions of MongoDB by having more than ... collections in the database when CollectionNames is called.
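A minimal sketch of the fix pattern described above, using hypothetical pool and cursor types rather than the driver's actual internals: the socket is returned to the pool before the cursor is drained, so any getMore acquires its own socket without deadlocking.
{code:go}
package main

// Hypothetical stand-ins for the driver's internal types.
type socket struct{}

type pool struct{ sockets chan *socket }

func (p *pool) get() *socket  { return <-p.sockets }
func (p *pool) put(s *socket) { p.sockets <- s }

type commandCursor struct{}

// listCollections runs the command on the given socket and returns a cursor
// whose remaining batches are fetched lazily (each fetch takes a socket).
func listCollections(s *socket) *commandCursor { return &commandCursor{} }

// drain fetches the remaining batches; each getMore acquires and releases
// its own socket from the pool.
func (c *commandCursor) drain(p *pool) []string { return nil }

// CollectionNames, fixed: the original socket goes back to the pool *before*
// the cursor is iterated, so a getMore never needs a second socket while the
// first one is still held.
func CollectionNames(p *pool) []string {
	s := p.get()
	cur := listCollections(s)
	p.put(s) // release before iterating
	return cur.drain(p)
}

func main() {
	p := &pool{sockets: make(chan *socket, 1)}
	p.sockets <- &socket{}
	_ = CollectionNames(p)
}
{code}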
1
The following shell sharding helpers use a write concern of w: "majority" but do not specify a wtimeout: sh.setBalancerState, sh.disableBalancing, sh.enableBalancing, sh.addShardTag, sh.removeShardTag, sh.addTagRange, sh.removeTagRange. This means that users get no feedback if the operations are slow to propagate to a majority of config servers. Simply adding a wtimeout (e.g. ...) causes the shell helpers to apparently fail if they take too long; in addition to potentially confusing users, this causes spurious jstest failures. The shell helpers should specify a reasonably low wtimeout (e.g. ...), and then, if the operation has timed out due to wtimeout, the shell helper should react accordingly, i.e. output a message to notify the user what has happened and commence polling to determine when the write has gone through to a majority (e.g. call GLE if possible, or do appropriate readConcern "majority" reads).
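A hedged sketch of the timeout-then-poll behavior suggested above, with hypothetical write and poll helpers standing in for the shell's internals (the 15-second wtimeout is an illustrative value, not the one the ticket proposes):
{code:go}
package main

import (
	"errors"
	"fmt"
	"time"
)

// errWTimeout stands in for the server's write-concern timeout error.
var errWTimeout = errors.New("waiting for replication timed out")

// writeWithMajority is a hypothetical helper: it issues the metadata write
// with w:"majority" and the given wtimeout.
func writeWithMajority(wtimeout time.Duration) error { return errWTimeout }

// majorityAcknowledged is a hypothetical helper: it checks (e.g. via a
// majority read) whether the write has reached a majority of config servers.
func majorityAcknowledged() bool { return true }

func runHelper() error {
	err := writeWithMajority(15 * time.Second) // illustrative timeout
	if err == nil {
		return nil
	}
	if !errors.Is(err, errWTimeout) {
		return err // a real failure: surface it
	}
	// wtimeout hit: tell the user and keep polling instead of failing outright.
	fmt.Println("write has not yet propagated to a majority of config servers; waiting...")
	for !majorityAcknowledged() {
		time.Sleep(time.Second)
	}
	return nil
}

func main() { _ = runHelper() }
{code}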
0
(mongodb.cnf, mongodb db, mongos) mongos always OOMs. We must restart it every day, because it holds memory and doesn't free it.
1
This release going out:
- Fix cases where replica set member alerts (no primary, number of healthy members, etc.) could send false positives.
- Skip the backup daemon rootDirectory and mongo.backupdb.mongoUri overlap check when mongo.backupdb.mongoUri is on a different server.
- The mms-gen-key script handles the user's effective group being different than the username.
- Security enhancements.
1
The GSSAPI auth mechanism does not seem to be working in the intended manner. It looks like the code at ... seems to be breaking for the ... step. This condition seems to be valid only for SASL step ... onwards, when the inbuf length should not be ...:
{code}
if (sasl.step && !outbuf_len) {
   bson_set_error (error,
                   MONGOC_ERROR_SASL,
                   MONGOC_ERROR_CLIENT_AUTHENTICATE,
                   "SASL failure: no data received from SASL request. "
                   "Does server have SASL support enabled?");
   return false;
}
{code}
should rather be
{code}
if (sasl.step && !inbuf_len) {
   bson_set_error (error,
                   MONGOC_ERROR_SASL,
                   MONGOC_ERROR_CLIENT_AUTHENTICATE,
                   "SASL failure: no data received from SASL request. "
                   "Does server have SASL support enabled?");
   return false;
}
{code}
1
Request summary: please discuss with Nicholas whether changes are necessary or ... . Ticket description: the query planner does not trim bounds-generating inequality predicates from the expression tree when the value being compared to is one of the following BSON types: Object, Undefined, RegEx, DBRef, Code, Symbol, CodeWScope. As a result, such predicates are not eligible for covering behavior and require an unnecessary additional match operation after the document is fetched from disk. Reproduce as follows:
{code:js}
> db.foo.drop()
true
> db.foo.createIndex({ a: ... })
{ "createdCollectionAutomatically": true, "numIndexesBefore": ..., "numIndexesAfter": ..., "ok": ... }

> db.foo.find({ a: { $gte: function () { return ...; } } }).explain().queryPlanner.winningPlan
{
  "stage": "FETCH",
  // unexpected: FETCH does not need a filter with the $gte predicate here
  "filter": { "a": { "$gte": function () { return ...; } } },
  "inputStage": {
    "stage": "IXSCAN",
    "keyPattern": { "a": ... },
    "indexName": ...,
    "isMultiKey": false,
    "isUnique": false,
    "isSparse": false,
    "isPartial": false,
    "indexVersion": ...,
    "direction": "forward",
    "indexBounds": { "a": [ ... ] }  // function () { return ...; } (CodeWScope)
  }
}

> db.foo.find({ a: { $eq: function () { return ...; } } }).explain().queryPlanner.winningPlan
{
  "stage": "FETCH",
  // expected: no filter with the $eq predicate here
  "inputStage": {
    "stage": "IXSCAN",
    "keyPattern": { "a": ... },
    "indexName": ...,
    "isMultiKey": false,
    "isUnique": false,
    "isSparse": false,
    "isPartial": false,
    "indexVersion": ...,
    "direction": "forward",
    "indexBounds": { "a": [ ... ] }
  }
}
{code}
0
RocksDB has its own way of doing backups, so we didn't plan to support backups through the MongoDB API at this time. However, currently a storage engine without backup support fails a lot of tests (an example: ...). Can we skip all the tests that depend on backup functionality if the storage engine doesn't implement it?
0
I have a mongo machine with about ... of disk space for mongo data, but I am only able to use about ... MB of it. The stats are below. It looks like the data files are preallocated to about ...; shouldn't I be able to use the entire preallocated allocation instead of just ... MB? Writes are failing due to lack of disk space. This is the error that I see in the log file:

Wed Mar ... connection accepted from ..., ... connections now open
Wed Mar ... insert ... locks(micros) ...
Wed Mar ... allocating new datafile ..., filling with zeroes...
Wed Mar ... insert ... locks(micros) ...
Wed Mar ... FileAllocator: posix_fallocate failed: ... no space left on device; falling back
Wed Mar ... ERROR: failed to allocate new file ... size ...; failure creating new datafile: lseek failed for fd ... with errno: no space left on device; will try again in ...
1
With multiple MongoDB collections grouped inside a single WT table, WT's random cursor configuration has no way to tell whether it is returning a document that belongs to the collection queried. Grouping collections into WT tables will not mix with the existing random cursor implementation (passing next_random to the cursor config). Current options, disregarding feasibility, include:
- Leave the random-cursor method unimplemented for grouped record stores; the query layer has a default walking implementation.
- Make calls to WT with a next_random cursor. This can return a document that does not belong to the collection; repeat calls until a valid document is returned.
- Add work to WT that allows a random cursor to respect a range of keys to select from.
A hybrid of the first two may cover most use cases (see the sketch below). When the collection to sample is small, the overhead of a walking cursor is small. When a collection is large relative to the entire table, sampling random documents from the table will often return one belonging to the collection of interest. Large collections that are small relative to the table are a use case not particularly suited to grouped collections.
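A rough sketch of the retry-until-valid idea (the second option), in Go with hypothetical record and table types rather than WiredTiger's actual API:
{code:go}
package main

import "math/rand"

// record is a hypothetical entry in a shared table; collectionID marks which
// grouped collection it belongs to.
type record struct {
	collectionID int
	key          string
}

// randomRecord stands in for a next_random cursor positioning anywhere in
// the shared table.
func randomRecord(table []record) record {
	return table[rand.Intn(len(table))]
}

// sampleFromCollection retries random positioning until the returned record
// belongs to the collection of interest, giving up after maxAttempts (at
// which point a caller could fall back to a walking cursor).
func sampleFromCollection(table []record, collectionID, maxAttempts int) (record, bool) {
	for i := 0; i < maxAttempts; i++ {
		if r := randomRecord(table); r.collectionID == collectionID {
			return r, true
		}
	}
	return record{}, false
}

func main() {
	table := []record{{1, "a"}, {1, "b"}, {2, "c"}}
	_, _ = sampleFromCollection(table, 2, 10)
}
{code}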
0
I am writing a virtual filesystem, and I am blocked when a string is to be appended to an existing file. The file system will emit an event write(fd, buf, length, position) where position != ... . If I am not mistaken, I cannot handle this with GridFSBucketWriteStream because I cannot start at a certain position. Alternatively, and even better, the previous GridFS had a seek method, which is very flexible, especially if you can go back and forth and read or write from there.
1
After a command cursor is created, libmongoc assigns the maxAwaitTimeMS and batchSize options to the command cursor. I believe we have two options:
- Add batchSize as a new Command constructor option, akin to what we did for maxAwaitTimeMS in ... . In this case we could capture batchSize and apply it after the command cursor is created in phongo_execute_command. This will then require changes in PHPLIB to ensure that aggregate also copies batchSize to the new Command constructor option.
- Instead of adding a Command constructor option, we can inspect the command document for a batchSize option earlier in phongo_execute_command and apply it in the same place as above. This requires no changes in PHPLIB. It also means that the same batchSize value will apply to the original command and subsequent getMore.
Note: the second choice (inferring batchSize) may have implications for ..., where a user had asked if batchSize applied to getMore at all; however, they did not specifically request the ability to differentiate batchSize options for the initial aggregate and subsequent getMore commands. I'm inclined to go with the second solution (inferring batchSize), as it requires minimal changes to the PHPC API or PHPLIB. It also allows us the opportunity to introduce a Command constructor option later, which would take precedence.
0
A new defect has been detected and assigned to bjori in Coverity Connect. The defect was flagged by checker UNUSED_VALUE in file src/MongoDB/Manager.c, function zim_Manager_getServers, and this ticket was created by bjori.
0
The getDefaultCodecRegistry method's doc comment seems to be missing JsonObjectCodecProvider. This causes the API documentation of getDefaultCodecRegistry to not list JsonObjectCodecProvider as a default codec provider.
0
The collection tabs should display the currently active subtab in the collection.
0
Each DBClientReplicaSet instance must be created with a different replica set name. If two DBClientReplicaSet instances have the same name, only the first one created is accessible. This is an issue when an application is required to connect to any set of replica sets; in fact, it is a blocking issue when the user of this application does not have control over the replica sets' naming. Reporter: Louis Benoit. Email: ...
1
Hi, I use mongo-spark-connector to save data to MongoDB, but for the _id field I use ObjectId (changed String to ObjectId), and the schema is StructField("_id", ..., nullable = false). My code is:
{code:java}
val ret = largeRdd.map { t =>
  val objectId = new ObjectId(...)
  Row(objectId, ...)
}
val sqlContext = SparkSession.builder().getOrCreate()
val df = sqlContext.createDataFrame(ret, DataTypes.createStructType(Array(
  StructField("_id", ..., nullable = false),
  StructField("f", IntegerType, nullable = true),
  ...)))
{code}
But I got this error, caused by java.lang.RuntimeException: org.bson.types.ObjectId is not a valid external type for schema of struct:
{code:java}
User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task ... in stage ... failed ... times, most recent failure: Lost task ... in stage ... (TID ..., executor ...): java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: org.bson.types.ObjectId is not a valid external type for schema of struct
named_struct(oid, if (validateexternaltype(getexternalrowfield(assertnotnull(input[..., top level row object]), _id, StructField(oid,StringType,true)), ...).isNullAt(...)) null else staticinvoke(class ..., StringType, fromString, validateexternaltype(getexternalrowfield(validateexternaltype(getexternalrowfield(assertnotnull(input[..., top level row object]), _id, StructField(oid,StringType,true)), ...), oid, StringType), true), ...)
... (the same generated encoder expression repeats for the oid, f, d, m, s and e fields)
	at ...
	at ...
Driver stacktrace:
	at ...
	at com.xiaomi.infra.codelab.spark.RoomCheck$.main(RoomCheck.scala:...)
	at ... (method)
Caused by: java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: org.bson.types.ObjectId is not a valid external type for schema of struct
... (same encoder expression and frames as above)
Caused by: java.lang.RuntimeException: org.bson.types.ObjectId is not a valid external type for schema of struct
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalifcondexpr(Unknown Source)
	at ... (Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
	... more
{code}
0
Changes: for now we display the lock requester's mode; we need to display the lock holder's mode as well. We should change the following code:
{code}
print("MongoDB lock at {} ({}) held by {} waited on by {}".format(lockHead, lockRequest, lockHolder, lockWaiter))
{code}
to
{code}
# code to set the lockMode needs to be added
print("MongoDB lock at {} ({}) held by {} waited on by {} ({})".format(lockHead, lockMode, lockHolder, lockWaiter, lockRequest))
{code}
Other references to lockRequest should be modified as well, such that the mode is properly associated to the holder.
0
The collMod test case is one example. Specifically, when run against a sharded cluster, this operation can fail and leave the database or collection not created, which in turn causes the rest of the test to fail with a "missing database" error.
0
The current merge aggressiveness algorithm isn't working well in combination with the recent change to allow bulk loading into an LSM tree. The algorithm is a hangover from when each LSM tree had its own set of worker threads.
0
Aggregations can involve read precursors: stages that lead into write stages. We should be able to mirror the read portion without mirroring the write portion. Additionally, we need to figure out how to manage the inevitable cursors generated by aggregation.
0
Backup agent ... released: pass through explicit collection options for the WiredTiger storage engine.
1
I am experiencing huge memory leaks when using this library (hundreds of ... in a few seconds). Here is an example Node.js application:
{code:js}
'use strict';

const MongoClient = require('mongodb').MongoClient;
const WebSocketServer = require('lws').Server;
const config = require('./config');

let logs;
let server;

// connect to the MongoDB log database
MongoClient.connect(config.log.database, function (error, mongo) {
  mongo.on('fullsetup', function () { console.log('db connected'); });
  mongo.on('close', function () { console.log('db closed'); });
  mongo.on('reconnect', function () { console.log('db reconnected'); });
  mongo.on('error', function (error) { console.log('db error', error); throw error; });
  mongo.on('timeout', function () { console.log('db socket timeout'); });

  mongo.collection(config.mongo.collection, null, function (error, logsCollection) {
    logs = logsCollection;
    logs.createIndex({ user: ... });

    server = new WebSocketServer({ port: config.webSocketServer.port });
    console.log('listening on port', config.webSocketServer.port);

    server.on('connection', function (socket) { ... });

    server.on('message', function (socket, message, binary) {
      let data = message.toString();
      let dataPrepared;
      try {
        let parsed = JSON.parse(data);
        if (parsed instanceof Array) {
          server.setUserData(socket, parsed);
          return;
        }
        dataPrepared = server.getUserData(socket);
        dataPrepared... = parsed...;
        dataPrepared... = parsed...;
        dataPrepared... = parsed...;
      } catch (error) {
        // set user if sent on the same line with a log message
        if (error.message.indexOf('Unexpected token') ...) {
          let dataSplit = data.split(...);
          server.setUserData(socket, dataSplit...);
          if (dataSplit.length ...) return;
          dataPrepared = server.getUserData(socket);
          let parsed = JSON.parse(dataSplit...);
          if (parsed instanceof Array) {
            console.log('invalid message', message);
            return;
          }
          dataPrepared... = parsed...;
          dataPrepared... = parsed...;
          dataPrepared... = parsed...;
        } else {
          console.log('invalid message', message.toString());
          return;
        }
      }
      logs.insertOne(dataPrepared, { w: ... });
    });
  });
});
{code}
If you comment out the lines where the collection is used (logs.insertOne), there's no memory impact; that rules out the lws library being a potential culprit. A client library is producing a hundred clients via an unsecured ws connection, each sending a simple message approximately every ... milliseconds; each message is ... bytes long. The same implementations in Golang and C++ (Qt official) produce no problems regarding memory.
1
The doc for applyOps currently states: {quote}The applyOps command is primarily an internal command to support sharded clusters.{quote} Shouldn't that say it exists to support replication rather than sharding?
1
Currently, vast numbers of methods change the default PHP error handling from throwing a warning to throwing exceptions. This should not be done because PHP's general error handling, for nearly any function or method, albeit in core or in an extension, doesn't do this, and because it is incompatible with HHVM's error handling: in HHVM the parameter parsing happens before the native C function is entered, which makes it impossible to convert warnings to exceptions even if we wanted to.
0
The new driver.Connection interface does not have a method for killing a connection completely, as in closing the underlying net.Conn (as opposed to returning the connection to the pool). A new interface can extend the driver.Connection interface with an Expire method that kills the underlying connection, or an Expire method can be added to the existing interface. There should also be an Alive method to check if a connection is still alive, i.e. its Expire method has not been called.
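A minimal sketch of the extension-interface approach described above (names such as Expirable and the trimmed-down Connection are illustrative only, not the driver's actual API):
{code:go}
package main

import "net"

// Connection mirrors the shape of a pooled driver connection: Close returns
// the connection to the pool rather than tearing it down.
type Connection interface {
	Close() error
}

// Expirable extends Connection with the ability to kill the underlying
// net.Conn outright and to report whether that has happened.
type Expirable interface {
	Connection
	Expire() error // close the underlying net.Conn, not just return to pool
	Alive() bool   // false once Expire has been called
}

type conn struct {
	nc      net.Conn
	expired bool
}

func (c *conn) Close() error { return nil /* return to pool */ }

func (c *conn) Expire() error {
	c.expired = true
	return c.nc.Close()
}

func (c *conn) Alive() bool { return !c.expired }

func main() {
	var _ Expirable = &conn{}
}
{code}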
1
Enabling the maxTimeAlwaysTimeOut fail point will cause any query or command run with a valid nonzero max time to fail immediately; any getmore operation on a cursor already created with a valid nonzero max time will also fail immediately. Original description below:
bq. Please add a server fail point that will make the next command or query time out, so that we can run a command that says "I'm going to test maxTimeMS, please time out the next thing I do" with an appropriate error, for ease of testing drivers.
0
Generate numbers idiomatically in Java.
0
The dot in the sentence "DBRefs are a convention for representing a document rather than a specific reference “type”." should be outside of the quotes.
1
There are reports of databases where the number of preallocated log files grows unbounded. In MongoDB ..., it does not appear that they're being used, and we keep allocating more. One change that appeared in ... is the new file system interface to get a single file; perhaps (for Windows) __wt_win_directory_list_single is not working.
1
Currently we break some guarantees of atomicity by using separate WUOWs for each op and for the final logOp. This can result in a server performing the modification but not adding it to its oplog if the server is shut down in the middle of an applyOps. We can't use a single WUOW in the general case, because applyOps needs to support operations, like building an index, that are illegal inside of a WUOW. However, it should be possible to detect whether all ops are simple single-document CRUD ops and use a single WUOW in that case. This would cover all uses of applyOps that are used when writing to the config servers.
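A rough Go-flavored sketch of the proposed dispatch logic (the server's applyOps is C++; the types and op classification here are illustrative stand-ins only): if every op is a simple single-document CRUD op, the whole batch plus the final logOp commits under one unit of work, otherwise the current per-op behavior is kept.
{code:go}
package main

// op is a stand-in for a single applyOps entry.
type op struct {
	kind string // "insert", "update", "delete", "createIndexes", ...
}

func isSimpleCRUD(o op) bool {
	switch o.kind {
	case "insert", "update", "delete":
		return true
	}
	return false
}

// unitOfWork is a stand-in for a write unit of work that commits atomically.
type unitOfWork struct{ committed bool }

func (u *unitOfWork) commit() { u.committed = true }

func applyOps(ops []op) {
	allCRUD := true
	for _, o := range ops {
		if !isSimpleCRUD(o) {
			allCRUD = false
			break
		}
	}

	if allCRUD {
		// Single unit of work: the ops and the final logOp commit together,
		// so a crash cannot leave the data changed but unlogged.
		u := &unitOfWork{}
		for range ops { /* apply op */ }
		/* logOp */
		u.commit()
		return
	}

	// General case: some ops (e.g. index builds) cannot run inside a single
	// unit of work, so keep the existing per-op behavior.
	for range ops {
		u := &unitOfWork{}
		/* apply op and logOp */
		u.commit()
	}
}

func main() { applyOps([]op{{kind: "insert"}}) }
{code}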
1
Coverity is complaining about multiple places in our code where we do not check the return value of RecordStore restore():
{code}
    // Recovers from potential state changes in underlying data.
    //
    // Returns false if it is invalid to continue using this iterator. This usually means that
    // capped deletes have caught up to the position of this iterator, and continuing could
    // result in missed data. If the former position no longer exists, but it is safe to
    // continue iterating, the following call to next() will return the next closest position
    // in the direction of the scan, if any.
    //
    // This handles restoring after either savePositioned() or saveUnpositioned().
    virtual bool restore(OperationContext* txn) = 0;
{code}
It looks like all of the cases where we do this are safe and do not need to deal with the capped-collections edge case. However, this is probably an indication that the API for RecordStore could be improved; it's a bit odd to have a return value that we throw away most of the time.
0
Hi, I am trying to log in to my account on Atlas; however, I get an error message.
1
Hi, it looks like the readPreference for a DB command is not always working as expected. Case ...: readPreference needs to be set as a document option even if already set in the connection URL, i.e.
{code}
MongoClientURI connectionString = new MongoClientURI(...);
{code}
This is the document used:
{code}
Document point = new Document("type", "Point").append("coordinates", coordinates);
Document geoQuery = new Document();
geoQuery.append("geoNear", "places");
geoQuery.append("near", point);
geoQuery.append("query", condition);
geoQuery.append("spherical", true);
geoQuery.append("limit", ...);
geoQuery.append("maxDistance", ...);
geoQuery.append("readPreference", new Document("mode", "nearest"));
{code}
If this geoQuery.append("readPreference", new Document("mode", "nearest")) is commented out, the queries will always go to the primary. If it is not commented out, it balances across primary and secondaries based on ping, as expected when using nearest. Case ...: with the following code, readPreference set to nearest seems to work as expected:
{code}
MongoClient m = new MongoClient(new ...);
{code}
{code}
BasicDBObject myCmd = new BasicDBObject();
myCmd.append("geoNear", "places");
myCmd.append("type", "Point");
double[] loc = ...;
myCmd.append("near", loc);
myCmd.append("spherical", true);
myCmd.append("maxDistance", ...);
CommandResult r = db.command(myCmd);
{code}
The node performing the queries is checked by using ... and looking at the commands on each node. If any other information is required, please let me know. Thank you. Regards, Marco.
0
This is a follow-up ticket for ... . During the ticket implementation I noticed that sometimes the socket timeout exception is triggered after exceeding the time that is mentioned in readTimeout. The gap is relatively small; in the case I noticed, readTimeout was ..., but the timeout exception was thrown only after ... seconds. This situation led to the failed tests which used the failPoint blockTimeMS. The workaround used (we increased blockTimeMS from ... to ...) doesn't contradict the test idea, but we need to investigate the underlying reasons for this behavior and bring these tests fully back in sync. Example of a changed JSON test: ...; the original JSON value is ... .
0
Under wire version ... (the current wire version), servers have two operation modes. If it's a partly upgraded server with users created with MONGODB-CR, it will operate in a compatibility mode which, with some overhead, will allow users to connect using SCRAM to those accounts. If the server has been completely upgraded and all users have been migrated to native SCRAM, or the server has been freshly brought up on ..., then it will only accept incoming authentication requests using SCRAM; MONGODB-CR requests will be denied. Currently we only use SCRAM when the max wire version of the server is greater than ... . This means we will currently always use MONGODB-CR when connecting to a ... server if the user has not manually specified an authentication mechanism, as in the auth method which accepts the username and password as strings. We should default to using SCRAM for wire version ... and allow the server to sort out how to handle it.
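A hedged sketch of the proposed default selection; the wire-version threshold and helper name below are placeholders for illustration, not the driver's real constants:
{code:go}
package main

// scramWireVersionThreshold is an assumed placeholder for the wire version
// at which the server can handle SCRAM for all accounts.
const scramWireVersionThreshold = 3 // assumption for illustration only

// defaultAuthMechanism picks a mechanism only when the user has not set one
// explicitly; at or above the threshold it defaults to SCRAM-SHA-1 and lets
// the server sort out MONGODB-CR compatibility accounts.
func defaultAuthMechanism(userMechanism string, maxWireVersion int) string {
	if userMechanism != "" {
		return userMechanism // respect an explicit choice
	}
	if maxWireVersion >= scramWireVersionThreshold {
		return "SCRAM-SHA-1"
	}
	return "MONGODB-CR"
}

func main() {
	_ = defaultAuthMechanism("", 3)
}
{code}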
1
Until ... is resolved, we should skip the "one record, multiple strings" spec test, because there is no way to handle it with Go without a custom DNS lookup solution.
1
In the current FCV upgrade logic, the primary node in a replica set executes a command and has the opportunity to acquire strong locks for consistency. However, the secondaries only get to observe writes to the FCV document, which are executed while already holding a hierarchy of locks; as a result, they are not allowed to take any further locks on their own, out of risk of introducing deadlocks. During development we uncovered cases where upgrade requires the acquisition of strong locks on secondaries, and because of this it would be much more convenient if the FCV upgrade logic wrote a command oplog entry, which on the secondary nodes executes serially.
0
A few modules in the ... branch of PyMongo import unicode_literals from __future__. This is causing minor problems in the auth module due to ... expecting a str as its first argument; there may be issues in other modules as well. Before releasing ... GA we must audit our use of unicode_literals; it may make more sense to remove the __future__ import and work around any issues manually. The modules to audit are auth.py, bulk.py, and collection.py.
1
When mongot (or mongotmock) returns an error which gets propagated by mongod, it would be nice if mongod appended something to the error indicating that the error came from another node.
0
{noformat}
Thread ... (Thread ... (LWP ...)):
in __wt_readlock
in __wt_session_lock_dhandle
in __wt_session_get_btree
in __wt_conn_btree_apply
in __wt_curstat_init
in __wt_curstat_colgroup_init
in __wt_curstat_init
in __wt_curstat_open
in __wt_curstat_table_init
in __wt_curstat_init
in __wt_curstat_open
in __wt_open_cursor
in ??
in mongo::...::exportTableToBSON(WT_SESSION*, std::basic_string<..., std::allocator<...> > const&, std::basic_string<..., std::allocator<...> > const&, mongo::BSONObjBuilder*)
in mongo::...::appendCustomStats(mongo::OperationContext*, mongo::BSONObjBuilder*, double) const
in mongo::...::storageSize(mongo::OperationContext*, mongo::BSONObjBuilder*, int) const
in mongo::...::sizeOnDisk(mongo::OperationContext*) const
in mongo::...::run(mongo::OperationContext*, std::basic_string<..., std::allocator<...> > const&, mongo::BSONObj&, int, std::basic_string<..., std::allocator<...> >&, mongo::BSONObjBuilder&, bool)
in mongo::_execCommand(mongo::OperationContext*, mongo::Command*, std::basic_string<..., std::allocator<...> > const&, mongo::BSONObj&, int, std::basic_string<..., std::allocator<...> >&, mongo::BSONObjBuilder&, bool)
in mongo::...::execCommand(mongo::OperationContext*, mongo::Command*, int, char const*, mongo::BSONObj&, mongo::BSONObjBuilder&, bool)
in mongo::_runCommands(mongo::OperationContext*, char const*, mongo::BSONObj&, mongo::...BufBuilder&, mongo::BSONObjBuilder&, bool, int)
in mongo::newRunQuery(mongo::OperationContext*, mongo::Message&, mongo::QueryMessage&, mongo::CurOp&, mongo::Message&, bool)
in mongo::assembleResponse(mongo::OperationContext*, mongo::Message&, mongo::DbResponse&, mongo::HostAndPort const&, bool)
in mongo::...::process(mongo::Message&, mongo::AbstractMessagingPort*, mongo::LastError*)
in mongo::...::handleIncomingMsg(void*)
{noformat}
and
{noformat}
Thread ... (Thread ... (LWP ...)):
in __lll_lock_wait () from ...
in ?? () from ...
in pthread_mutex_lock () from ...
in __wt_evict_file_exclusive_on
in __wt_cache_op
in __wt_checkpoint_close
in __wt_conn_btree_sync_and_close
in __wt_conn_dhandle_close_all
in __wt_schema_drop
in __wt_schema_drop
in ??
in mongo::...::_drop(mongo::StringData const&)
in mongo::...::dropIdent(mongo::OperationContext*, mongo::StringData const&)
in mongo::...::commit()
in mongo::...::commit()
in mongo::...::commit()
in mongo::...::run(mongo::OperationContext*, std::basic_string<..., std::allocator<...> > const&, mongo::BSONObj&, int, std::basic_string<..., std::allocator<...> >&, mongo::BSONObjBuilder&, bool)
in mongo::_execCommand(mongo::OperationContext*, mongo::Command*, std::basic_string<..., std::allocator<...> > const&, mongo::BSONObj&, int, std::basic_string<..., std::allocator<...> >&, mongo::BSONObjBuilder&, bool)
in mongo::...::execCommand(mongo::OperationContext*, mongo::Command*, int, char const*, mongo::BSONObj&, mongo::BSONObjBuilder&, bool)
in mongo::_runCommands(mongo::OperationContext*, char const*, mongo::BSONObj&, mongo::...BufBuilder&, mongo::BSONObjBuilder&, bool, int)
in mongo::newRunQuery(mongo::OperationContext*, mongo::Message&, mongo::QueryMessage&, mongo::CurOp&, mongo::Message&, bool)
in mongo::assembleResponse(mongo::OperationContext*, mongo::Message&, mongo::DbResponse&, mongo::HostAndPort const&, bool)
in mongo::...::process(mongo::Message&, mongo::AbstractMessagingPort*, mongo::LastError*)
in mongo::...::handleIncomingMsg(void*)
{noformat}
1
The libraries db/s/sharding and db/query/query are directly cyclic. This pulls the sharding subsystem into the dependency tangle identified in ... . It also introduces an indirect dependency cycle with db/query/internal_plans, since db/s/sharding depends on db/query/internal_plans, which is also, and already, cyclic with db/query/query.
0
In the Overview section there is a typo in the "Replication Introduction" link phrase, which is written as "Repication Introduction". Please fix it.
1
Same code; put it all in one place.
0
The connection hooking interface described in ... is also needed on NetworkInterfaceMock and NetworkInterfaceImpl.
0
The first question complicates the answer by mixing locking behavior and granularity of locks; this should be split out, and the lock definition should link to the Wikipedia page, which has a better description than the wiki docs. Yielding happens for more than disk access, which was added in ... and expanded in ...: multiple-document writes yield, as do readers, periodically; this can allow reads during long multi-document write operations. And there are more issues here, so it is best if this is put back on the list for more review.
1
find(), "show dbs", and "show collections" print out the JSON instead of the more friendly output. This is probably affecting other commands too. In particular, find() prints the cursor instead of the results.
1
We are running a load test with MongoDB ..., and the below-mentioned error is thrown when reading a binary file from GridFS. I had raised a bug (refer ...) and it is one of the targets for ... . While testing the above-mentioned bug fix with the fixed code (created a release build by getting the code from GitHub) in the same environment, I got the below error:
{code:java}
Type: GridFSChunkException, Message: GridFS chunk ... of file id ... is missing.
   at ... (batch)
   at ... (cancellationToken)
   at ... (cancellationToken)
   at ... (cancellationToken)
   at ... (buffer, offset, count)
{code}
Please assist. Thanks in advance, Irshad.
1
In order to create a new command when starting subsequent batches of grouped writes in the CommandWriter, we need to create a new BSONObjBuilder and BSONArrayBuilder, because they cannot be reused. This functionality is required for bulk writes.
1
For example, if the command was joined here ..., we need to also audit what happens when the command is retried but fell into the else branch instead of joining, like in the above link. The configsvr/shard/mongos versions of shardCollection should probably assert that the UUID field always exists.
0
This is a feature request to enhance the behaviour of the $sample aggregation command by adding to the plan optimizer the WiredTiger "read_once=true" option for MongoDB cursors. The intended purpose behind this enhancement is so $sample does not, or is less likely to, cache the result set. A sample, by definition, is unlikely to be used again by subsequent samples, so caching has no benefit and only serves to add unwanted cache pressure and workload contention.
0
The navigation panel is on the bottom of all pages; it looks like it just started today. This seems to happen to pages that show up in search as ... pages, not ... pages; they aren't redirecting, or are somehow mixing old and new.
1
MongoDB collections were dropped, and we need to find out how it happened and restore the database in parallel.
1
What problem are you facing? In ..., tls is set to true unconditionally. Since options-object options take preference over URL options, it is impossible to set tls=false or ssl=false using the URL, even though it should be possible to do so. What driver and relevant dependency versions are you using? ... Steps to reproduce: connect to a MongoDB server with TLS disabled, using a mongodb+srv URL and tls=false.
1
Backporting ... from the master branch (where development is in progress) to the ... branch, I naively copied the entire CMakeLists.txt. This included the desired changes for parsing version numbers from config files, but it also included unintentional changes that referenced source files only present on the master branch, not in ... . The error from CMake is:
{code}
CMake Error at ... (add_library):
  Cannot find source file:
    /Users/emptysquare/virtualenvs/cdriver/mongo-c-driver/src/mongoc/mongoc-find-and-modify.c
{code}
1
Just started seeing this on Jenkins, starting with ... . The test that triggers it writes files to GridFS using threads; in Python, a few of the threads end up throwing this error. All writes are done with write concern ... . The test code is here: ... . An example of the failure in Jenkins can be seen here: ... . I've attached the log for this failure. I can't reproduce the problem locally, just in Jenkins, but the failure is pretty consistent in Jenkins. Let me know how to help debug.
1
Resolving ...: problem while downloading module descriptor ...: invalid ... . Problem while downloading module descriptor ...: invalid ... . Module not found: ... . Can't build the project because it can't resolve the driver.
1
Typo: in ...-minute resolution, the "... hour" tab should be "... hours of data". Shouldn't the "... hour" tab be "... hours"? "By ...", which replots all charts with five-minute averages; the "window" options are: "six hours", which charts ... hours of data; "twelve hours", which charts ... hours of data; "twenty-four hours", which is the default window for this selection and charts ... hours of data; and "forty-eight hours", which charts ... hours of data.
1
Hi, if I am inserting documents as a batch, what happens if it fails and I get an exception while it is processing one of the documents? Will the documents be inserted? If yes, how can I know how many documents of the batch could be inserted? If no, then it is fine; we are happy to try the batch again. Please let me know.
1
It seems like the change in replica set majority calculation introduced in ... broke balancing on some existing cluster setups, since it bases the strict majority on the total number of members, not the number of non-arbiter ones. We recently upgraded a cluster from ... to ... and lost our ability to balance the cluster. In its original setup, the cluster has ... shards, and each shard is a replica set with four members: a primary, a secondary and an arbiter in one datacenter, and a non-voting, zero-priority, hidden secondary with a replication delay in another datacenter. After the upgrade, balancing the cluster failed, since it was waiting for the operations to replicate to a majority (... out of ...) of the replica set members, rather than a majority of the non-arbiter members (... out of ...). With the third non-arbiter member being on a delay, that didn't go very well. I expect the same would happen on individual shards if either storage member had become unavailable. As a temporary fix to get the balancing going again, we removed the replication delay to the offsite secondary. Not sure if this is the same issue as ... or just related to it.
0
Hello guys, may you help me with my problem? I installed your product, but when I try to run the command "wt" in the console I see the next error: "wt: error while loading shared libraries: ...: cannot open shared object file: No such file or directory". Step by step, how I installed it: git clone git://github.com/wiredtiger/wiredtiger.git; mv wiredtiger wiredtiger...; cd wiredtiger...; sh autogen.sh; ./configure --enable-zlib CPPFLAGS=-I/usr/local/include LDFLAGS=-L/usr/local/include; make; make install. OS: Linux Mint ... (Cinnamon).
1
The ... version of the driver introduced a regression which can cause corrupted wire protocol messages to be sent to the server. In practice the impact of the bug is mitigated by these circumstances: it occurs when connected to a replica set or multiple mongos (using the new HA support for mongos), but not a single mongos or a standalone; and it's triggered only if the driver gets an IOException while performing a normal query (not commands) and attempts to retry the query. If both of these occur, it becomes likely that the driver will send corrupted messages to the server and keep sending them until the application is restarted. The effect of sending corrupted messages to the server is undefined: in some cases the server will assert and send back an error, in others it will crash, and in others it will add corrupt documents to the database. Using objcheck can mitigate the latter case, but not fully.
1
The error returned is "illegal instruction". ... should fix this by troubleshooting the parameter we pass to g++ (visible here: ...), and we should prevent this by testing on a ... machine after building on a ... machine.
1
While working on multi-request testing for phongo, I realized phongo hasn't been updated to deal with the SDAM changes. We correctly keep the stream alive between requests, but upon returning the existing stream on the next request, mongoc will still do a full SDAM-demanded discovery, which includes the whole ismaster shebang routines. This means that on every PHP request we issue an ismaster call to every node, as if we were creating a new connection.
1
I can easily reproduce this in our tests when trying to list database collections. Our current deployment is ...-based, so this appears to block a PyMongo driver upgrade. Create a new MongoDB instance:
{code}
... --dbpath /scratch/userjblackburn/tmp/testmongo ...
{code}
In Python:
{code}
$ ipython
Python ... (default, Apr ...)
Type "copyright", "credits" or "license" for more information.

IPython ... -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [..]: import pymongo
UserWarning: Module readline was already imported from ..., but ... is being added to sys.path

In [..]: c = ...

In [..]: for i in ...: c.db....insert_one(...)

In [..]: c.database_names()
Out[..]: [...]

In [..]: c.db.collection_names()
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
... in <module>()
----> c.db.collection_names()

... in collection_names(self, include_system_collections)
        results = self._list_collections(sock_info, slave_okay)
        names = [result["name"] for result in results]
        if sock_info.max_wire_version <= ...:
            # MongoDB ... and older return index namespaces and collection ...

... in next(self)
        # advance the cursor
        if len(self.__data) or self._refresh():
            coll = self.__collection
            return coll.database._fix_incoming(self.__data.popleft(), coll)

... in _refresh(self)
        if self.__id:  # get more
            self.__send_message(_GetMore(self.__ns, self.__batch_size, self.__id))
        else:  # cursor id is zero, nothing else to return

... in __send_message(self, operation)
        assert doc[...] == self.__retrieved, (
            "result batch started from %s, expected %s" % (doc[...], self.__retrieved))

AssertionError: result batch started from ..., expected ...

In [..]:
{code}
1
Replica set with ... mongo boxes, slaveOk disabled. Suddenly the mongod process goes unresponsive; the box just keeps on getting connections from other mongod servers or applications, causing the numSockets to keep increasing. Also at that time I am not able to log in to the mongo console, nor does it get stopped by our regular stop script; the mongod process simply goes unresponsive. This has happened a few times with different servers, sometimes on primaries and secondaries, last week, and at that time I just had to force-kill that process and then restart. This is the primary log, where we can clearly see that suddenly the read queries stopped coming/logging; at the same time the mongo-java-client reported timeout exceptions too. Also, it is not the case that queries took more time: the queries hardly take ... ms, nor are there any spikes in CPU load, except for the totalOpenSockets. Need some answer for this behavior. Let me know if some more specific information is needed.
1
Move tests to TypeScript and Mocha config with ts-node.
0
Got the following on shutdown; the process did ...:
{noformat}
I NETWORK  end connection ... (... connections now open)
I NETWORK  end connection ... (... connections now open)
I NETWORK  connection accepted from ... (... connections now open)
I NETWORK  connection accepted from ... (... connections now open)
I NETWORK  connection accepted from ... (... connections now open)
I NETWORK  connection accepted from ... (... connections now open)
I COMMAND  terminating, shutdown command received
I STORAGE  got request after shutdown()
I REPL     stopping replication applier
I STORAGE  got request after shutdown()
I STORAGE  got request after shutdown()
I QUERY    assertion ... interrupted at shutdown ns: mmsdbconfig.config.hosts query: { cid: ..., hp: "ladsmng..." }
I QUERY    ...
I STORAGE  got request after shutdown()
I STORAGE  got request after shutdown()
I -        Invariant failure event.isValid() src/mongo/db/repl/replication_executor.cpp ...
I NETWORK  end connection ... (... connections now open)
I STORAGE  got request after shutdown()
I STORAGE  got request after shutdown()
I CONTROL  begin backtrace
{ backtrace: [...], processInfo: { mongodbVersion: ..., gitVersion: ..., modules: [ "enterprise" ], uname: { sysname: "Linux", release: ..., version: "... SMP Wed Feb ... UTC ...", machine: ... }, somap: [...] } }
end backtrace
I -        Aborting after invariant failure
{noformat}
0
I noticed the validate option had broken our performance tests on newer MongoDB. The validate option dictates whether or not we validate user-supplied BSON documents before adding them to the relevant operation (mongoc_bulk_operation_insert, mongoc_collection_insert_one, ...). However, this option appears to be broken: we must not send it as part of the command, but it appears we do. A test like this fails:
{code}
static void
test_validate_option (void)
{
   mongoc_client_t *client = test_framework_client_new ();
   mongoc_collection_t *coll =
      mongoc_client_get_collection (client, "db", "coll");
   bson_error_t error;
   bson_t reply;
   bool ret;

   /* test setting validate for any set of options that take it */
   mongoc_bulk_operation_t *bulk;

   bulk = mongoc_collection_create_bulk_operation (coll, false, NULL);
   ret = mongoc_bulk_operation_insert_with_opts (
      bulk,
      BCON_NEW ("x", ...),
      BCON_NEW ("validate", ... /* BSON_VALIDATE_DOLLAR_KEYS */),
      &error);
   ASSERT_OR_PRINT (ret, error);

   ret = mongoc_bulk_operation_execute (bulk, &reply, &error);
   ASSERT_OR_PRINT (ret, error);

   bson_destroy (&reply);
   mongoc_collection_destroy (coll);
   mongoc_client_destroy (client);
}
{code}
with an error message:
{noformat}
unknown option to insert command: validate
{noformat}
I've confirmed with Wireshark that we are sending "validate" as part of the insert command and that the server is generating this error. This seems to go back as far as ...; possibly this was introduced in the IDL parsing of ... . Our existing validate tests don't seem to check the successful case where we expect validation to succeed. It seems MongoDB ... didn't care that we sent the extra validate option and just ignored it, but newer ... (at least ..., haven't checked ...) does.
1
Tick, tick, boom. Nightly Linux build, test writeback_bulk_insert.js:

command: ... --port ... --authenticationMechanism MONGODB-CR --nodb --eval "TestData = new Object(); TestData.testPath = '.../writeback_bulk_insert.js'; TestData.testName = 'writeback_bulk_insert'; TestData.noJournal = false; TestData.noJournalPrealloc = false; TestData.auth = false; TestData.keyFile = null; TestData.keyFileData = null; TestData.authMechanism = 'MONGODB-CR';"
date: Fri Jul ...
(output suppressed, see ...)
received signal ... after ... seconds; test exited with status ...
Fri Jul ... DBConfig unserialize: writeback_bulk_insert { _id: "writeback_bulk_insert", partitioned: false, primary: ... }
Fri Jul ... found ... dropped collections and ... sharded collections for database writeback_bulk_insert
Fri Jul ... creating new connection to ...
Fri Jul ... BackgroundJob starting: ConnectBG
Fri Jul ... connected connection
Fri Jul ... creating WriteBackListener for ... serverID: ...
Fri Jul ... initializing shard connection to ...
Fri Jul ... initial sharding settings: { setShardVersion: ..., init: true, configdb: ..., serverID: ..., authoritative: true }
Fri Jul ... BackgroundJob starting: ...
Fri Jul ... creating new connection to ...
Fri Jul ... BackgroundJob starting: ConnectBG
Fri Jul ... connected connection
Fri Jul ... TypeError: Cannot call method 'getCollection' of null at ... failed to load ...
1
Currently the time window obsolete checking code doesn't check global visibility in a way that is consistent with other global visibility checking mechanisms (e.g. __wt_txn_visible_all).
0
In ..., a network error at certain points in the TLS code path causes the driver to spin infinitely. To reproduce, first install pip, then MockupDB:
{code}
wget ...
sudo python get-pip.py
sudo python -m pip install ...
{code}
Then save this in a Python file and run it like "python file.py":
{code}
from mockupdb import MockupDB, Command

server = MockupDB(auto_ismaster=True, verbose=True, ssl=True)
server.autoresponds('ping')
server.run()

# Wait up to ... seconds for a driver to issue listCollections, then when it
# receives the command, it hangs up.
server.receives(Command('listCollections'))
{code}
I've started this mock server and run this file using the C driver built from master:
{code}
int
main (int argc, char *argv[])
{
   mongoc_client_t *client;
   mongoc_database_t *db;
   bson_error_t error;
   mongoc_ssl_opt_t ssl_options = { 0 };
   const char *uristr = "...";
   char **names = NULL;
   int i;

   mongoc_init ();

   if (argc > 1) {
      uristr = argv[1];
   }

   client = mongoc_client_new (uristr);
   ssl_options.weak_cert_validation = true;
   mongoc_client_set_ssl_opts (client, &ssl_options);

   if (!client) {
      fprintf (stderr, "Failed to parse URI.\n");
      return EXIT_FAILURE;
   }

   db = mongoc_client_get_database (client, "test");

   if ((names = mongoc_database_get_collection_names (db, &error))) {
      printf ("Got collection names:\n");
      for (i = 0; names[i]; i++) {
         printf ("Collection: %s\n", names[i]);
      }
      bson_strfreev (names);
   } else {
      fprintf (stderr, "Command failed: %s\n", error.message);
   }

   mongoc_database_destroy (db);
   mongoc_client_destroy (client);
   mongoc_cleanup ();

   return EXIT_SUCCESS;
}
{code}
The Python server hangs up, and the client hangs and spins the CPU. If I remove ssl=True from the Python and the SSL options from the C and rerun both sides, I instead get an immediate log about the hangup from the C driver and it exits, as desired:
{code}
stream: failure to buffer ... bytes: failed to buffer ... bytes within ... milliseconds
{code}
(The C code also gets a non-NULL return value from mongoc_database_get_collection_names, which is filed as a separate bug.)
1
Description: RedHat Enterprise ... does not have either the noop or the deadline scheduler; the equivalent schedulers would be none and mq-deadline. Scope of impact to other MVP work and resources: ... . Scope or design docs (InVision, etc.): ... .
0
I've a single mongo instance running on a ... RAM CentOS VM. It's running very slowly most of the time. Here is the mongostat output:
{code}
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn repl time
  ...    ...    ...    ...     ...     ...     ...    ...   ...  ...    ...       ...        ...    ...   ...    ...    ...   ...  pri  ...
  (further rows, all repl = pri)
{code}
The command line options are: --rest --fork --master --port ... --dbpath /data/mongodb/dbdata --logpath /data/logs/mongodb.log --auth. Here is the top output:
{code}
top - up ... days, ... users, load average: ...
Tasks: ... total, ... running, ... sleeping, ... stopped, ... zombie
Cpu(s): ...
Mem:  ... total, ... used, ... free, ... buffers
Swap: ... total, ... used, ... free, ... cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM  TIME+  COMMAND
  ... root  ...  ...  ...  ...  ... S  ...  ...    ...  mongod
{code}
The weird thing is that mongodb consistently consumes only ... of RAM; however, the output from top shows all RAM is used up. A couple of other observations: the CPU load from top is usually between ... and ...; the lock % ranges between ... and ...; when the read queue goes up to ..., the system is unusable; the number of faults is not too bad, but iostat shows ... I/O utilization on the mongo partition; also, sometimes the await time in iostat is more than ... . The db has one large collection having about ... million documents; all other collections are small. The db serves about ... users, front-ended by Tomcat, which has almost no load. I cannot figure out why it uses less than ... of RAM while the system has ... . Are we doing something obviously wrong here?
1
From a Splunk alert: ... suggests the currently running job is stuck with the ... scope.
0
When dropping a database, any users with privilege documents in that database's system.users collection should have those privileges revoked. The same is true for removing a user any other way.
0
In the current mongodb-enterprise master, the SNMP agent does not start with either the snmp-master or snmp-subagent arguments; instead, it exits with "SNMP agent not enabled".
1
Several of the SNMP metric categories available in ... pull data directly from internal mongod objects/variables. These metrics should be sourced from serverStatus instead, to improve maintainability. The categories affected are: memory, global opcounts, system uptime, and asserts. Note that after this change the only SNMP metric not using serverStatus will be serverName, which consists of hostname:port; while serverStatus provides the hostname, it does not provide the mongod port at present. (A serverStatus sketch follows this entry.)
0
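For reference, a small PyMongo sketch (connection details are placeholders) showing the serverStatus sections that could back the affected categories, and the host field that lacks the port.

{code:python}
# Sketch: fetch serverStatus and print the sections that correspond to the
# SNMP categories named above. Connection string is a placeholder.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
status = client.admin.command("serverStatus")

print(status["mem"])         # memory
print(status["opcounters"])  # global op counts
print(status["uptime"])      # system uptime
print(status["asserts"])     # asserts
print(status["host"])        # hostname only -- no mongod port, as noted
{code}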
Monitoring Agent ... released: database/collection statistics moved into a separate internal thread; now collects statistics for all databases accurately; support for non-default Kerberos service names; support for ... Backup Agent ... released: when tailing the oplog, do not prefetch the next batch of oplog entries before the current batch is completely exhausted; support for non-default Kerberos service names; support for ... Please update the following two pages to include the new gssapiServiceName parameter (a driver-side sketch follows this entry). This is a string with the following description: the default service name used by MongoDB is "mongodb"; you can specify a custom service name with the gssapiServiceName option.
1
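The ticket concerns the agents' gssapiServiceName setting; as a driver-side analogue only, the PyMongo sketch below shows how a non-default Kerberos service name is usually expressed in a connection string. The principal, host, and service name are placeholders, and GSSAPI support requires a Kerberos package such as pykerberos.

{code:python}
# Driver-side analogue (not the agent configuration itself): authenticate
# with GSSAPI using a non-default Kerberos service name.
# All names below are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://user%40EXAMPLE.COM@db.example.com/"
    "?authMechanism=GSSAPI"
    "&authMechanismProperties=SERVICE_NAME:customname"
)
print(client.admin.command("ping"))
{code}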
The PHP and HHVM drivers jailbreak the mongoc API and use several private symbols. This means that the only way these drivers can be included in Linux standard distribution packages is if mongoc ships its private symbols, which implicitly makes those symbols public and part of the API. This clearly isn't a good idea, and there are no intentions for mongoc to move all of its internal symbols into an implicit public API that can never change or be fixed. The PHP team needs to compile a list of the internal symbols it uses so we can expose those symbols one by one in the public mongoc API. This epic should contain all the symbols and track the progress of moving them into the mongoc API and removing them from phongo and hippo.
1
com.mongodb.spark.exceptions.MongoTypeConversionException: Cannot cast STRING into a NullType
1
Uninitialized scalar variable: the variable will contain an arbitrary value left from earlier computations. Use of an uninitialized variable (UNINIT): declaring variable "op" without an initializer.
0
mongostat should require authentication credentials to connect to a mongod running with the --auth flag. (An example invocation follows this entry.)
0
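A small sketch of invoking mongostat with credentials against an auth-enabled mongod, wrapped in Python's subprocess for consistency with the other examples; the host, username, password, and authentication database are placeholders.

{code:python}
# Sketch: run mongostat against a mongod started with --auth, supplying
# credentials. All values below are placeholders.
import subprocess

subprocess.run([
    "mongostat",
    "--host", "localhost:27017",
    "--username", "monitorUser",
    "--password", "secret",
    "--authenticationDatabase", "admin",
])
{code}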
We currently only run jstestfuzz_sharded with WiredTiger. Note: we'll need to verify that we won't run into disk issues with the larger files when adding this variation.
0
chmod results in the error below (chmod itself seems to work):

{noformat}
Error output from iostat_ios:
    no historic data
    Can't create /var/lib/munin/plugin-state/munin/iostat_ios.state: Permission denied at /etc/munin/plugins/iostat_ios line ...
Service 'iostat_ios' exited with status ...
{noformat}

The same error output is repeated on each run.
1
We have a pretty intensive process that stresses our mongod servers. When this process is running we encounter the following issue: suddenly there is a sharp increase in the number of connections mongos opens, and then it starts taking more and more CPU until it becomes totally stuck (please see the graphs attached). When this happens, the mongos log shows a lot of open/close connection messages to the mongod servers. The trigger for this seems to be a chunk move: when the move is complete, the ChunkManager refreshes and then the connections start opening and closing endlessly. Note that our mongod/mongos processes run on very strong machines. Is this known? Is there any workaround? A ticket I opened earlier seems to be related, but it was fixed only in a later release than the one we are using.
0
When modifying a string field using concat (<<), the change won't be saved. It can be described by the following spec:

{code}
context 'when modify a string field using concat' do
  let(:person) do
    p = Person.create(t: 't')
    p.t << 't'
    p.save
    p
  end

  it 'saves the change' do
    expect(person.reload.t).to equal('tt')
  end
end
{code}

Is this intended behavior? If so, is there any workaround, such as an always-save attribute on the field?
0
The definition of "Index Key Limits" might be clearer if it referred to index value limits, as it is a limit on the value of the key.
1
Apologies for the late addition; a user had a question which made me realize we missed one useful bit of information. When you upgrade to ..., all automation actions will be disabled until the Automation Agents are upgraded to the latest version. The Automation Agents can be upgraded by clicking the link in the "Please upgrade your agents" banner that will appear on the Deployment page.
1
Fields to be deprecated:

GridFS:
    protected final DB db
    protected final String bucketName
    protected final DBCollection filesCollection
    protected final DBCollection chunkCollection

GridFSFile:
    protected GridFS fs

The following methods will be added to use instead: GridFS.getFilesCollection(), GridFS.getChunksCollection(), GridFSFile.getGridFS(). For the other deprecated fields there are getters already.
0
In the MongoDB University course ..., the lecture in chapter ... on basic joins shows a screen capture of the Compass UI with an Aggregation tab that enables building join queries and exporting them to programming languages. I am missing this functionality in my Compass installation; please advise.
1
It might be useful to have a set (unique array) type, or to support set-like operations on arrays ($pushUnique?). No real specific use case in mind; just something Redis supports that seems cool. (A sketch of the existing $addToSet operator follows this entry.)
0
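For comparison, the existing $addToSet update operator already gives set-like behaviour on a plain array; a minimal PyMongo sketch follows (the connection string and collection name are placeholders).

{code:python}
# Sketch: $addToSet only appends values that are not already present,
# which covers much of the "unique array" use case described above.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["test"]["things"]

coll.delete_many({})
coll.insert_one({"_id": 1, "tags": ["red"]})
coll.update_one({"_id": 1}, {"$addToSet": {"tags": "red"}})   # no-op
coll.update_one({"_id": 1}, {"$addToSet": {"tags": "blue"}})  # appends
print(coll.find_one({"_id": 1})["tags"])                      # ['red', 'blue']
{code}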
The attribute_changed? and attribute_was ActiveRecord methods will be deprecated inside of after_create/after_update/after_save callbacks (source: ...). Does Mongoid support the methods saved_change_to_attribute and attribute_before_last_save, to maintain consistency?
0
The log-component #define is well known to be a C++ design problem: it has to appear before including headers, etc. This inverted dependency of a header upon its includer affects considerations of precompiled headers or C++ modules. All we have to do is move that definition down below the include files, like any other local macro, and it's much more conventional C++. If you want to define it in a header and do some logging from inline functions, that's OK; just #undef it at the bottom. There is no ODR problem if an inline function has log statements, as long as it always contains the same body, which it will; nothing bad happens. log.h can switch from asserting that the macro is defined to asserting that the macro is not defined when log.h is included.

{code:cxx}
#define X ...
#include ...
{code}

changes to

{code:cxx}
#include ...
#define X ...
{code}

Then we don't have a #define that must appear before header includes. A side effect of this change is that shouldLog's component argument becomes mandatory. This is easy to do and only affects a few calls.
0
The test output is remarkably abrupt:

{noformat}
using cursor
{ "_id" : ObjectId("..."), "name" : "eliot", "num" : ... }
{ "_id" : ObjectId("..."), "name" : "sara", "num" : ... }
end connection ... (... connections now open)
... EDT connection accepted from ... (... connections now open)
{noformat}

The scons output says it completes in ... ms. Curiously, buildbot is green on this test right now; here is buildbot's output for the same test: ... The test started failing on MCI on Fri, Sept ...; the first visible failure was on a commit by Eric, but immediately before that was:

{noformat}
fail      eric milkie      o build windows dynamic library c driver ...
untested  eric milkie      o allow libdeps to process emitter intermed...
untested  jason rassi      o add new fail point maxTimeAlwaysTimeOut
untested  jason rassi      o separate processing of maxTimeMS from ...
untested  jason rassi      o minor block formatting in parsed_query.cpp
untested  andrew morrow    o don't declare storage for static integra...
good      spencer t brody  o small cleanup to write concern in user ma...
{noformat}
1
They currently use cursorCommandPassthrough, which uses a ScopedDbConnection to establish the cursor and run the query.
0
description to come
0
Problem: with ... db files, mongod fails to start with:

{noformat}
Mon Feb ... opening db:  ...
Mon Feb ... opening db:  ...
Mon Feb ... opening db:  local
Mon Feb ... admin web console waiting for connections on port ...
Mon Feb ... waiting for connections on port ...
Mon Feb ... select() failure: ... Invalid argument
Mon Feb ... select() failure: ... Invalid argument
Mon Feb ... now exiting
{noformat}

Reproduce:

{code}
$ mongod --dbpath /data/db/bug --logpath /data/db/bug/server.log --fork --smallfiles --noprealloc
{code}

Create enough dbs so that startup will be successful:

{code}
$ mongo admin
> for (var i = 0; i < ...; i++) { var dummydb = db.getSisterDB("fred" + i); ... }
> db.shutdownServer()
{code}

Startup will be OK:

{code}
$ mongod --dbpath /data/db/bug --logpath /data/db/bug/server.log --fork --smallfiles --noprealloc
{code}

Add another db and shut down:

{code}
$ mongo admin
> var dummydb = db.getSisterDB("...")
> ...
{code}

Startup will now fail:

{code}
$ mongod --dbpath /data/db/bug --logpath /data/db/bug/server.log --fork --smallfiles --noprealloc
{code}

Note: if you take the same db files and now start them with ..., then it will start up OK.
1