text_clean: string (lengths 10 – 26.2k)
label: int64 (values 0, 1)
Bug in unreleased code caused by misinterpreting little-endian wire protocol header data; introduced during my cursor refactoring for and .
1
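The ticket above describes misreading little-endian wire protocol header data. As a minimal stdlib sketch (plain Python, not the driver's actual code), here is how an endianness mix-up silently corrupts a header field; the header layout shown (four little-endian int32s) follows the MongoDB wire protocol, and the specific values are made up:

```python
import struct

# A MongoDB wire protocol header is four little-endian int32s:
# messageLength, requestID, responseTo, opCode (values here are illustrative).
header = struct.pack("<iiii", 58, 7, 0, 2013)

# Correct: decode as little-endian.
message_length, request_id, response_to, op_code = struct.unpack("<iiii", header)
assert message_length == 58

# The bug class described above: decoding the same bytes as big-endian
# silently yields a garbage message length instead of an error.
(wrong_length,) = struct.unpack(">i", header[:4])
assert wrong_length == 973078528  # 0x3A000000: 58 read with the wrong byte order
```

The failure mode is nasty precisely because nothing throws; the reader just sees an absurd message length.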
I have an SSL-built mongo running version . I started a mongod instance using the following configuration: mongod --port  --dbpath /home/administrator/data --sslOnNormalPorts --sslPEMKeyFile /etc/ssl/mongodKeyMaterial.pem --sslPEMKeyPassword xxxxx --sslCAFile /etc/ssl/rootCA.pem --logpath /home/administrator/mongodlog.txt. There is no way to provide the sslPEMKeyPassword option with the current Python driver.
1
The following command may remove UUIDs from collections, even if in FCV :
{noformat}
> db.runCommand({applyOps: [ ]})
{ applied: , results: [ ], ok: }
{noformat}
{quote}
I ACCESS   Successfully authenticated as principal sajack on test
I COMMAND  Removing UUID from collection test.col
{quote}
1
{code}
var mongodb = require('mongodb');
// connect callback: function(error, db)
db.dropDatabase(function() {
  db.collection('test').insert({ name: , val: }, function(, result) {
    db.collection('test').findOne({ name: , val: }, function(error, doc) {
      console.log(JSON.stringify(doc));
    });
  });
});
{code}
… as a result of the following code in mongodb-core.
1
Instead of /etc/init.d/mongod start|stop|status, we should tell users to run service mongod start|stop|status, to avoid this confusing error:
{noformat}
Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service mongod start
Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the start(8) utility, e.g. start mongod
{noformat}
I verified that the service commands work in Ubuntu and .
1
Got this error while saving a new distro:
{noformat}
distro settings decode error: errors decoding: mountpoints.size: expected type 'int', got unconvertible type 'string'
{noformat}
It also lost everything I'd typed in.
0
Steps to reproduce: create a model with a string field; create and save an instance of the model with the string field populated; add localize: true to the field. Trying to edit the instance's string field, like instance.string_field = "make me a hash", throws NoMethodError: undefined method `merge' for "example":String. It wants a hash, but it can't be changed because the String case is not handled.
0
From the linked BF: instead of verifying that the aggregate times out as expected, this test would be not only more robust but also more correct if we verified that the contents of the output collection are what we expect. More precisely, the purpose of this test is to verify that when $merge outputs to the collection that is being aggregated over, it can trigger an infinite loop of updates. So in the control case (i.e. the aggregate which outputs to a different collection), it doesn't matter whether the aggregate takes under  or not; rather, it matters that each document has a value of 'a' that is double its original value. This would confirm, in the control case, that each original document was updated exactly once, as expected.
0
In the master branch, the find command uses selector := makePinnedSelector(sess, coll.writeSelector), but it should use a read selector as the default to makePinnedSelector. This bug blocks mongo-tools support, as it breaks read preference support in the tools.
1
Hi, I am trying to build mongo-cxx-driver with Boost, but I am getting the following error. Please help:
{noformat}
-o  -Wl,--enable-new-dtags -fPIC -pthread -Wl,-z,now -rdynamic -stdlib=libc++ -lsupc++ -Wl,-lstdc++ -Wl,--start-group  -Wl,--end-group -lm -lpthread -lssl -lcrypto
cannot find crtbegin.o: No such file or directory
skipping incompatible  when searching for -lsupc++
skipping incompatible  when searching for -lstdc++
skipping incompatible  when searching for -lstdc++
error: linker command failed with exit code  (use -v to see invocation)
scons: Configure: no
{noformat}
1
Regarding : our current MMS backup restoration documentation is focused on single-member and replica-set-secondary restoration. We should include a page within the same group of documents that details how to restore an MMS backed-up sharded cluster.
1
A read-only user can gain write privileges by accessing other users' pwd hashes. Sample:
{noformat}
> db.system.users.find()
{ _id: , user: "sa", readOnly: false, pwd: "" }
{ _id: , user: "ro", readOnly: true, pwd: "" }
{ nonce: "", ok: }
> db.runCommand({ authenticate: , user: "sa", nonce: "", key: "" })
{ ok: }
{noformat}
0
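To see why a leaked pwd hash is enough here, the legacy MONGODB-CR challenge-response can be sketched in a few lines of stdlib Python. This is an illustrative reconstruction of the documented scheme (the helper name, nonce, and password are made up), not driver code:

```python
import hashlib

def mongodb_cr_key(nonce: str, username: str, pwd_hash: str) -> str:
    # Legacy MONGODB-CR: the authentication proof is derived from the
    # *stored* password digest, not from the cleartext password.
    return hashlib.md5((nonce + username + pwd_hash).encode()).hexdigest()

# The stored digest itself is md5(username + ":mongo:" + password).
password_digest = hashlib.md5(b"sa:mongo:secret").hexdigest()

# Anyone who can read system.users therefore has everything needed to
# answer an authentication challenge for that user.
key = mongodb_cr_key("abc123", "sa", password_digest)
assert len(key) == 32
```

In other words, under this scheme the hash *is* the credential, which is exactly the escalation the ticket describes.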
Connect a mongoc_client_t with a replset URI containing no valid hosts (mongodb://a,b/?replicaSet=foo). Do an operation that requires a connection, like mongoc_collection_find, then call mongoc_client_destroy. nodes_len is now the length of the seed list, but nodes itself is NULL because it was set to bson_realloc(, 0) — segfault here. I introduced the bug in  while attempting to fix .
1
This is only a theoretical issue, as the BSON size of a key is limited to  bytes, so  bytes of data would require  in numeric fields, which is impossible.
0
Can you reformat the output of stats and explain calls to include commas? If I wanted machine-readable data, I'd be using a driver from some other language, or the HTTP endpoint. As a human, I spend a lot of time figuring out where the commas are. Thanks.
0
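The formatting being requested is plain digit grouping. A minimal Python sketch (hypothetical helper and field names, not shell code):

```python
# Insert thousands separators into large counters the way a
# human-facing stats report might.
def humanize(n: int) -> str:
    return f"{n:,}"

stats = {"count": 12345678, "size": 987654321}
pretty = {k: humanize(v) for k, v in stats.items()}
assert pretty["count"] == "12,345,678"
assert pretty["size"] == "987,654,321"
```

A human-oriented report can apply this at the presentation layer while leaving the raw numeric output untouched for drivers.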
Currently the documentation regarding the geospatial index says:
{quote}
In order to use the index you need to have a field in your object that is either a sub-object or array where the first  elements are x,y coordinates (or y,x — just be consistent; it might be advisable to use order-preserving dictionaries/hashes in your client code to ensure consistency).
{quote}
This feels a bit weird, as in no other place of document usage does field order actually matter. So in case the location is a nested document, it would be cool if we could specify the properties to be used for comparison on the index definition side, or when querying:
{code}
index: location, options: { name: "location", min: , max: , x: "foo", y: "bar" }
{code}
or
{code}
location: { $near: , foo: , bar: }
{code}
I prefer the former, as this wouldn't require the mapping information to be restated for every query.
0
The test suite hangs on latest  in testRaiseWtimeout:
{code}
testRaiseWtimeout (TestReadWriteConcernSpec/TestReadWriteConcernSpec): command stopped early: context canceled
{code}
The server version was . This test uses the stopReplProducer failpoint, which may have changed.
0
Fragile ordering of calls to destructors of static objects has been causing intermittent test failures. The best way to get rid of this behavior is to rid ourselves of fragile ordering guarantees in the creation/destruction of static objects.
1
In this task's logs there is a very large section that is repeated over and over, with the same messages and same timestamps, for most of the logs page, including the line "Finished running post-task commands".
0
Currently servers are pooled (MongoServer.servers) by MongoServerSettings. This leads to separate MongoServer instances being created for different slaveOk values, and thus to having two connection pools for one server that was initialized with different slaveOk values. This by itself is perfectly fine, but it appears that the server's slaveOk parameter is not used when the appropriate connection is searched. Let me illustrate with an example: I create a server with a connection string that specifies slaveOk as false (the default), then create a database and a collection, and then open a cursor that has slaveOk set to true. This leads to the connection pool of a server that is marked as not-slaveOk containing a connection that is slaveOk. While this is not an error, I would still like to ask if this is intentional, since it is a bit misleading when debugging.
0
config.set allows the user to save any key/value pair to config.json. While this might be convenient for storing arbitrary information that is remembered across sessions, it has the side effect of letting the user overwrite some properties that are important for mongosh to function properly (e.g. userId). We should either have a list of allowed properties and only let the user set those, or a list of disallowed properties and let the user set anything but those. I'd lean towards the former, to avoid config.json becoming too big if users put random stuff into config, which ends up slowing down the shell when it starts — but I'd also be open to considering the latter.
0
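The allowlist option proposed above can be sketched in a few lines. This is illustrative only; the property names are invented, not mongosh's real config keys:

```python
# Sketch of the allowlist approach: only explicitly permitted keys may
# be written, so internal properties like "userId" cannot be clobbered.
ALLOWED = {"displayBatchSize", "historyLength"}

def config_set(config: dict, key: str, value) -> None:
    if key not in ALLOWED:
        raise ValueError(f"cannot set {key!r}: not a user-configurable property")
    config[key] = value

cfg = {}
config_set(cfg, "historyLength", 1000)
assert cfg == {"historyLength": 1000}
try:
    config_set(cfg, "userId", "attacker")
except ValueError:
    pass
assert "userId" not in cfg
```

A denylist would be the same shape with the membership test inverted; the allowlist fails closed, which matches the reasoning in the ticket.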
Problem: in the initial connection, if one of the seed mongo processes is down, the connection won't be established. A machine/process can be down for a long time, which means no new connection can be established during that time. This defeats the purpose of high availability of a replica set. Expected result: as long as one of the seed processes is working, a connection should be established. Note: not reproducible in the Node.js driver, so it is the C# driver's issue. Error:
{noformat}
at MongoDB.Driver.Core.Servers.ClusterableServer.d__.MoveNext()
ServerId: { ClusterId: , EndPoint:  }, EndPoint: , State: Disconnected, Type: Unknown,
HeartbeatException: MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server
---> System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it
{noformat}
1
Version: db version , git version , ESE enabled. Running mongod with the parameters below:
{noformat}
--replSet  --port  --dbpath  --storageEngine wiredTiger --logpath  --enableEncryption --encryptionKeyFile  --sslMode allowSSL --sslPEMKeyFile mongod.pem --sslCAFile ca.pem --setParameter saslHostName=localhost --sslAllowInvalidHostnames
{noformat}
Environment: Windows. Issue: access violation as a result of passing an invalid pointer.
{code}
(access violation - code  (second chance))
mov dword ptr [ebp],
 # ChildSP RetAddr Call Site
mongod!mongo::aesEncrypt
mongod!mongo::`anonymous namespace'::encrypt
mongod!__wt_encrypt
mongod!__wt_log_write
mongod!__wt_txn_commit
mongod!__session_commit_transaction
mongod!mongo::txn_close
mongod!mongo::commit
mongod!mongo::commit
mongod!mongo::operator
mongod!mongo::query
mongod!mongo::copy
mongod!mongo::copydb
mongod!mongo::`anonymous namespace'::_initialSyncClone
mongod!mongo::`anonymous namespace'::_initialSync
mongod!mongo::syncDoInitialSync
mongod!mongo::runSyncThread
mongod!std::_LaunchPad::_Go
vfbasics
ntdll!RtlUserThreadStart
{code}
0
mongo-orchestration's master has removed support for the uri field that we parse the port number of the mongo process out of. As Travis CI uses mongo-orchestration's master, this breaks all Travis builds.
1
I am upgrading a test (throwaway) replica set from  to . I started by upgrading the binary for one secondary; as expected, I got "detected index from earlier version, restart with --repair". Instead of doing a repair, I figured I would just resync the node. When I attempted this:
{noformat}
I CONTROL
I CONTROL  db version
I CONTROL  git version
I CONTROL  build info: Linux  SMP Fri Jan  UTC
I CONTROL  allocator:
I CONTROL  options: { config: , net: { port:  }, processManagement: { fork: true }, replication: { replSetName: "tiger" }, storage: { dbPath: , engine: "wiredTiger" }, systemLog: { destination: "file", path:  } }
I NETWORK  connection accepted from  ( connections now open)
I REPL     initial sync
I REPL     syncing from:
I REPL     initial sync drop all databases
I STORAGE  dropAllDatabasesExceptLocal
I REPL     initial sync clone all databases
I REPL     initial sync cloning db:
E REPL     Invalid parameter: expected an object ()
E REPL     initial sync attempt failed --  attempts remaining
I NETWORK  connection accepted from  ( connections now open)
I NETWORK  connection accepted from  ( connections now open)
I NETWORK  connection accepted from  ( connections now open)
I REPL     initial sync
I REPL     syncing from:
I REPL     initial sync drop all databases
I STORAGE  dropAllDatabasesExceptLocal
I REPL     initial sync clone all databases
I REPL     initial sync cloning db:
E REPL     Invalid parameter: expected an object ()
E REPL     initial sync attempt failed --  attempts remaining
I NETWORK  connection accepted from  ( connections now open)
I REPL     initial sync
I REPL     syncing from:
I REPL     initial sync drop all databases
I STORAGE  dropAllDatabasesExceptLocal
I REPL     initial sync clone all databases
I REPL     initial sync cloning db:
E REPL     Invalid parameter: expected an object ()
E REPL     initial sync attempt failed --  attempts remaining
I REPL     initial sync pending
{noformat}
If the answer to this is simply "sorry, can't do this, it's an RC", that's fine by me — it's all garbage data.
0
Hi team, I wanted to check: if we are using the following query to serve the purpose, is it valid? Let's suppose I have two documents as given below: { _id: , test: { } } and { _id: , test: { } }. I wanted to identify if there are any other keys present in the test object other than "a", so I went looking for a solution, and the following worked for me: db.col.find({ test: { $gt: { a: } } }). But I did not find any such case in the MongoDB official documentation (searching on the JSON object), so should we consider using this? It is useful, as we can put the index on the test object, which will improve the query performance. Thanks.
1
The filename in the following statement is incorrect: "You can specify alternate log and data file directories in /etc/mongodb.conf". It should be /etc/mongod.conf.
1
The docs at  don't mention whether the unique constraint is being honoured in the indexes reference.
0
Hello :D I have a database user with username "user" and password "passwordutfééùù".
{code}
var db = new Db('test', new Server('localhost', ));
db.open(function(err, db) {
  db.authenticate('user', 'passwordutfééùù', function(err, result) {
    console.log(err);
    db.close();
  });
});
{code}
The output is: { name: 'MongoError', message: 'auth fails', code: , ok: , errmsg: 'auth fails' }. Somewhere in the lib code, in  places in node_modules/mongodb, you can find password; but if you change it to password, it will authenticate with success :D (doc: ). In the mongo shell it works with the db.auth function. node-mongodb version: , Node.js version: , MongoDB version: . Thanks :D
1
Once hygienic can reproduce the existing dist and dist-src tarballs, we can flip over to using hygienic in Evergreen by default.
0
I am receiving the error below when I try to run the attached code:
{noformat}
Exception in thread "main" java.lang.IllegalArgumentException: db message size is too big, max is
	at
	at
	at
{noformat}
1
When  or  of the configdbs present in the mongos command line aren't available, mongos fails to start. Here are some reproduction scripts for a single host, and then the log and failure message when  or  configsvrs are down.

Setup:
{noformat}
mkdir /tmp/logs
mkdir /tmp/config
mkdir

# mongod + mongos
# start a configsvr
mongod --port  --logpath /tmp/logs/config.log --dbpath /tmp/config --directoryperdb --configsvr --quiet --logappend --fork
# give the configsvr time to get listening
sleep
# start a mongos
mongos --port  --logpath /tmp/logs/router.log --configdb  --quiet --logappend --fork
# observe that mongos failed to start
sleep
pgrep mongos || echo "no mongos"

# try again with  config servers
killall mongod mongos
mongod --port  --logpath /tmp/logs/config.log --dbpath /tmp/config --directoryperdb --configsvr --quiet --logappend --fork
mongod --port  --logpath  --dbpath  --directoryperdb --configsvr --quiet --logappend --fork
# give the configsvrs time to get listening
sleep
mongos --port  --logpath /tmp/logs/router.log --configdb  --quiet --logappend --fork
sleep
pgrep mongos || echo "no mongos"

# try again with  config servers
killall mongod mongos
mongod --port  --logpath /tmp/logs/config.log --dbpath /tmp/config --directoryperdb --configsvr --quiet --logappend --fork
mongod --port  --logpath  --dbpath  --directoryperdb --configsvr --quiet --logappend --fork
mongod --port  --logpath  --dbpath  --directoryperdb --configsvr --quiet --logappend --fork
# give the configsvrs time to get listening
sleep
mongos --port  --logpath /tmp/logs/router.log --configdb  --quiet --logappend --fork
sleep
pgrep mongos || echo "no mongos"
{noformat}

Server restarted:
{noformat}
Fri Apr  mongos db version , pdfile version , starting (--help for usage)
Fri Apr  git version:
Fri Apr  build sys info: Darwin richardkreuters-macbook-pro.local.domain Darwin Kernel Version : Sat Jan  PST
Fri Apr  warning: couldn't check on config server , ok for now: socket exception: mongos connectionpool error: couldn't connect to server
Fri Apr  warning: couldn't check on config server , ok for now: socket exception: mongos connectionpool error: couldn't connect to server
Fri Apr  warning: only  config server reachable, continuing
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connect fail to:  errmsg: couldn't connect to server
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connect fail to:  errmsg: couldn't connect to server
Fri Apr  trying reconnect to
Fri Apr  reconnect failed: couldn't connect to server
Fri Apr  trying reconnect to
Fri Apr  reconnect failed: couldn't connect to server
Fri Apr  ScopedDbConnection: _conn != null
uncaught exception in mongos: SyncClusterConnection::insert prepare failed: socket exception; socket exception
{noformat}

Server restarted:
{noformat}
Fri Apr  mongos db version , pdfile version , starting (--help for usage)
Fri Apr  git version:
Fri Apr  build sys info: Darwin richardkreuters-macbook-pro.local.domain Darwin Kernel Version : Sat Jan  PST
Fri Apr  warning: couldn't check on config server , ok for now: socket exception: mongos connectionpool error: couldn't connect to server
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connect fail to:  errmsg: couldn't connect to server
Fri Apr  trying reconnect to
Fri Apr  reconnect failed: couldn't connect to server
Fri Apr  ScopedDbConnection: _conn != null
uncaught exception in mongos: SyncClusterConnection::insert prepare failed: socket exception
{noformat}

Server restarted:
{noformat}
Fri Apr  mongos db version , pdfile version , starting (--help for usage)
Fri Apr  git version:
Fri Apr  build sys info: Darwin richardkreuters-macbook-pro.local.domain Darwin Kernel Version : Sat Jan  PST
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connecting to
Fri Apr  about to contact config servers and shards
Fri Apr  waiting for connections on port
Fri Apr  web admin interface listening on port
Fri Apr  listen(): bind() failed, address already in use for socket:
Fri Apr  addr already in use
Fri Apr  config servers and shards contacted successfully
Fri Apr  balancer id:  started at
Fri Apr  created new distributed lock for balancer on  (lock timeout: , legacy timeout: , ping interval: , process: legacy)
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connecting to
Fri Apr  creating distributed lock ping thread for  and process , sleeping for
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connecting to
Fri Apr  SyncClusterConnection connecting to
Fri Apr  distributed lock acquired, now: { _id: "balancer", process: , state: , ts: , when: new , who: , why: "doing balance round" }
Fri Apr  distributed lock unlocked
Fri Apr  distributed lock acquired, now: { _id: "balancer", process: , state: , ts: , when: new , who: , why: "doing balance round" }
Fri Apr  distributed lock unlocked
Fri Apr  distributed lock acquired, now: { _id: "balancer", process: , state: , ts: , when: new , who: , why: "doing balance round" }
Fri Apr  distributed lock unlocked
Fri Apr  distributed lock acquired, now: { _id: "balancer", process: , state: , ts: , when: new , who: , why: "doing balance round" }
{noformat}
0
Currently WriteConcernException, which is used for "soft" write concern errors, extends WriteException, which is directly used for "hard" write errors. This makes users do something like the following if they wish to ignore soft errors:
{code}
try {

} catch (WriteConcernException e) {
    // do nothing
} catch (WriteException e) {
    // do something
}
{code}
Instead we should have:
class WriteConcernException extends WriteException
class WriteErrorException extends WriteException
abstract class WriteException extends RuntimeException
Any place where we currently throw WriteException should be changed to WriteErrorException, and any classes that extend WriteException (e.g. DuplicateKeyException) should be changed to extend WriteErrorException.
1
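The proposed hierarchy can be sketched in Python for illustration (names mirror the Java classes discussed above, but this is not the driver's real API):

```python
# Python sketch of the proposed split between soft and hard write failures.
class WriteException(RuntimeError):
    """Abstract base for all write failures."""

class WriteConcernException(WriteException):
    """'Soft' failure: the write happened but the concern was not satisfied."""

class WriteErrorException(WriteException):
    """'Hard' failure: the write itself failed."""

def classify(exc: WriteException) -> str:
    # With the split hierarchy, callers can ignore soft errors with one
    # targeted except clause instead of catch-all-then-reraise gymnastics.
    try:
        raise exc
    except WriteConcernException:
        return "ignored (soft)"
    except WriteErrorException:
        return "handled (hard)"

assert classify(WriteConcernException()) == "ignored (soft)"
assert classify(WriteErrorException()) == "handled (hard)"
```

Because the two concrete classes become siblings rather than parent/child, the catch order no longer matters, which is the point of the refactor.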
$group and $sort must take in all their input before they can produce a result. At present, when the output is produced, it is simply scrolled over by the DocumentSource. We could start freeing values that have been returned by these sources as they are returned. This may be more noticeable (allow more processing) when more than one $group and/or $sort appear in the same pipeline: documents will flow from one to the next, requiring a constant amount of memory, instead of accumulating in each aggregator in the pipeline.
0
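The free-as-you-return idea can be illustrated with plain Python generators (a toy analogy, not the aggregation framework's code; stage names are invented):

```python
from collections import Counter

# A blocking stage that releases each result as soon as it is yielded,
# so a downstream blocking stage doesn't double the peak memory.
def sort_stage(docs):
    buffered = sorted(docs)      # blocking: must see all input first
    while buffered:
        yield buffered.pop(0)    # free each value once it has been returned

def group_count(docs):
    return Counter(docs)         # a second blocking stage consuming the first

out = group_count(sort_stage(["b", "a", "b"]))
assert out == {"a": 1, "b": 2}
```

Each document leaves the sort buffer as it flows into the group stage, so two chained blocking stages hold roughly one copy of the data rather than two.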
The following code will convert both dates to UTC, but with the same hour. The output is:  AM (notice the ),  AM (notice the  — shows the wrong hour; should be , not ),  (this is correct).
{code}
public class TestDateSerialize {
    public DateTime EntryDttm { get; set; }
    public DateTime ExitDttm { get; set; }
}

public void TestSerializeDateTime() {
    var obj = new TestDateSerialize();
    obj.EntryDttm = ;
    obj.ExitDttm = ;
    var doc = obj.ToBsonDocument();
    Console.WriteLine(obj.EntryDttm.ToLongTimeString());
    Console.WriteLine(obj.ExitDttm.ToLongTimeString());
    Console.WriteLine(doc[].AsBsonValue);
    Console.WriteLine(doc[].AsBsonValue);
}
{code}
1
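The symptom (UTC conversion that keeps the local clock hour) has a compact analogue in Python's datetime API; the offset and timestamps below are made up for illustration:

```python
from datetime import datetime, timezone, timedelta

local_tz = timezone(timedelta(hours=2))      # assumed local offset, for illustration
local = datetime(2020, 1, 1, 10, 0, tzinfo=local_tz)

correct = local.astimezone(timezone.utc)     # converts: shifts the clock hour
assert correct.hour == 8

wrong = local.replace(tzinfo=timezone.utc)   # relabels without shifting
assert wrong.hour == 10                      # same hour: the bug symptom above
```

A serializer that merely stamps a value as UTC without applying the offset produces exactly the "converted to UTC but with the same hour" behaviour reported.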
All of the session tests against latest replica sets on distros besides RHEL  and Ubuntu  are failing with "Sessions are not supported by this MongoDB deployment":
{code}
ERROR: test_aggregate (test_session.TestSession)
Traceback (most recent call last):
  File "", line , in test_aggregate
    self._test_ops(client, agg)
  File "", line , in _test_ops
    with client.start_session() as s:
  File "", line , in start_session
    server_session = self._get_server_session()
  File "", line , in _get_server_session
    return self._topology.get_server_session()
  File "", line , in get_server_session
    "Sessions are not supported by this MongoDB deployment"
ConfigurationError: Sessions are not supported by this MongoDB deployment
{code}
1
Leak of memory or pointers to system resources. Defect: static C checker, RESOURCE_LEAK, subcategory: none. File: src/mongoc/mongoc-cursor.c, function: _mongoc_cursor_parse_opts_for_op_query.
src/mongoc/mongoc-cursor.c, line : {color:red}assigning: dollar_modifier = storage returned from bson_strdup_printf("$%s", key){color}
{code}
dollar_modifier = bson_strdup_printf ("$%s", key);
{code}
src/mongoc/mongoc-cursor.c, line : {color:red}resource dollar_modifier is not freed or pointed-to in bson_set_error{color}
{code}
bson_set_error (cursor->error, );
{code}
src/mongoc/mongoc-cursor.c, line : {color:red}variable dollar_modifier going out of scope leaks the storage it points to{color}
{code}
return NULL;
{code}
0
The aggregation pipeline caused a mongod termination on Windows. This was discovered while running the concurrency sharded replication tests on Windows.
0
The VC runtime should only be included for Enterprise builds.
1
If MongoDB rolls back an explicit collection creation, it will not record the dropped data to disk. This issue was made more acute in MongoDB  when all implicit collection creation was changed to explicitly create an oplog entry; thus, downstream replicating nodes now create all collections explicitly.
0
Jackson allows for finer-grained control over which properties are serialized. The setPrivateFieldsConvention allows some flexibility by allowing private fields to be set directly; however, this could be extended to allow for greater flexibility. An example: the ObjectMapper API allows flexibility for controlling getter, setter, and field properties:
{code}
mapper.setVisibility(IS_GETTER, NONE)
      .setVisibility(GETTER, NONE)
      .setVisibility(SETTER, NONE)
      .setVisibility(FIELD, ANY);
{code}
0
I have a MongoDB  cluster. Because the server is only available on private interfaces, access control is not used. When I attempt to connect to the cluster via , this error occurs: "auth error: sasl conversation error: unable to authenticate using mechanism: (AuthenticationFailed) Authentication failed." However, when I don't specify the database, the connection succeeds. The same connection string with the database specified works fine with other drivers. Additional note: given that this particular issue has been fixed as of the  release, if you are still seeing the above error, the most likely cause is simply that your credentials are incorrect.
1
The code for self=true in the members array of replSetGetStatus shows fewer fields than for remote hosts. Most of these are related to the network (like pingMs) or remoteness (like the heartbeat fields), and so don't make sense to show for self. However, when running replSetGetStatus on a secondary, it still makes sense to see the syncingTo field, since the secondary is still syncing to some other member, and so it is confusing for the field to be absent. It is easy to miss the top-level syncingTo field when focusing on the status of the members.
0
It would be good to add the mongo shell to the  command; otherwise people following  will execute mongo and get "bash: mongo: command not found". sudo yum install -y mongodb-org-shell
1
MongoDB failed to build due to "error : identifier not found". This issue is caused by revision . Could you please help take a look at this? Thanks in advance.
0
The current default is to have no socket timeout at all. In the case of a replica set, when a primary is not reachable, this blocks server threads that try to write. I would consider this a bad default; it makes a lot more sense to have a sane default of, say, , to allow the software to gracefully handle this situation. Due to the current setup, it caused thread starvation on our GlassFish server, so it went completely down, while read operations should have still been possible, IMHO.
Trace logs:
{noformat}
Thread synchronization statistics:
Number of times this thread was blocked to enter/reenter a monitor:
Number of times this thread waited for a notification (i.e. it was in WAITING or TIMED_WAITING state):
CPU time for this thread:  seconds  nanoseconds
User-level CPU time for this thread:  seconds  nanoseconds
Object monitors currently held or requested by this thread:
Ownable synchronizers (e.g. ReentrantLock and ReentrantReadWriteLock) held by this thread:
Thread execution information:
Thread , threadId: , threadState: RUNNABLE (running in native)
    at  Method
    at
    at
{noformat}
0
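The "no timeout" default under discussion is easy to demonstrate with the stdlib socket API; this is a generic illustration of blocking vs. finite timeouts, not the Java driver's configuration:

```python
import socket

# A freshly created socket has no timeout: gettimeout() returns None,
# meaning any stalled read blocks the calling thread forever -- the
# starvation scenario described above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
no_timeout = s.gettimeout()

# Setting a finite timeout lets the caller fail fast and degrade
# gracefully instead of pinning a server thread.
s.settimeout(5.0)
finite_timeout = s.gettimeout()
s.close()

assert no_timeout is None
assert finite_timeout == 5.0
```

With a finite timeout, a stalled read raises socket.timeout, which the application can catch and handle, rather than holding the thread indefinitely.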
We kill operations as part of the beginning of stepdown: calling autoGetRstlForStepUpStepDown starts the killOp thread. We start to kill user operations before we disable writes on the primary and before transitioning the server to secondary; those are the things that update the server description and trigger a topologyVersion bump. The killed operation's error response is appended with a topologyVersion that hasn't been incremented yet. Since the topologyVersion is not incremented, the driver will try to reselect the same server to run the command, even though it may still be in the process of stepping down. We can consider adding an extra increment to the topologyVersion before scheduling the killOps; we already increment the topologyVersion twice as part of stepdown — once when we disable writes and another when we complete the transition to secondary. Another alternative is to delay the killOps logic until the topologyVersion is properly incremented.
0
Using a connection string with the format:
{code:java}
mongodb+srv://fireflydev.horrk.gcp.mongodb.net/?retryWrites=true
{code}
I can successfully connect to a sharded cluster using the C# client version . Unfortunately, when attempting to connect to the same cluster with the C# client version , using the same connection string and the same code, I get a timeout exception, hence not being able to connect:
{code:java}
A timeout occured after  selecting a server using CompositeServerSelector { Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector { AllowedLatencyRange =  } }. Client view of cluster state is { ClusterId: , ConnectionMode: ReplicaSet, Type: ReplicaSet, State: Disconnected, Servers: [] }
{code}
This is the stack trace on this exception:
{code:java}
at MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException(IServerSelector selector, ClusterDescription description) in Cluster.cs:line
at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedHelper.HandleCompletedTask(Task completedTask) in Cluster.cs:line
at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChanged(IServerSelector selector, ClusterDescription description, Task descriptionChangedTask, TimeSpan timeout, CancellationToken cancellationToken) in Cluster.cs:line
at MongoDB.Driver.Core.Clusters.Cluster.SelectServer(IServerSelector selector, CancellationToken cancellationToken) in Cluster.cs:line
at MongoDB.Driver.MongoClient.AreSessionsSupportedAfterServerSelection(CancellationToken cancellationToken) in MongoClient.cs:line
at MongoDB.Driver.MongoClient.AreSessionsSupported(CancellationToken cancellationToken) in MongoClient.cs:line
at Func(CancellationToken cancellationToken) in MongoClient.cs:line
at MongoDB.Driver.MongoClient.DropDatabase(String name, CancellationToken cancellationToken) in MongoClient.cs:line
at  in
{code}
Am I missing something in the connection string?
1
To be done right before branching; see  as a past example. uuidgen is your friend.
1
As can be seen here:
{noformat}
# set readahead on each device
blockdev --setra
blockdev --setra
blockdev --setra
blockdev --setra
blockdev --setra
{noformat}
the "read ahead" section is setting all the values to  when our actual recommendation is .
0
mongos always crashed, and  mongos instances crashed almost at the same time. The reason is that it got "not master" for , then DBClientCursor::init call() failed, and it received signal . The version is . This bug is similar to . Backtraces below:
{noformat}
Thu Sep  primary for replica set  changed to
Thu Sep  primary for replica set  changed to
Thu Sep  primary for replica set  changed to
Thu Sep  primary for replica set  changed to
Thu Sep  primary for replica set  changed to
Thu Sep  primary for replica set  changed to
Thu Sep  primary for replica set  changed to
Thu Sep  DBClientCursor::init call() failed
Thu Sep  WriteBackListener exception: DBClientBase transport error: ns: admin.$cmd query: { writebacklisten:  }
Thu Sep  ChunkManager: time to load chunks for infodb.docinfo:  sequenceNumber:  version:  based on:
Thu Sep  ChunkManager: time to load chunks for textdb.doctext:  sequenceNumber:  version:  based on:
Thu Sep  got not master for:
Thu Sep  ChunkManager: time to load chunks for infodb.docinfo:  sequenceNumber:  version:  based on:
Got signal:
{noformat}
and
{noformat}
Thu Sep  ChunkManager: time to load chunks for textdb.doctext:  sequenceNumber:  version:  based on:
Thu Sep  Socket recv(): connection reset by peer
Thu Sep  DBClientCursor::init call() failed
Thu Sep  SocketException: remote:  error: socket exception server:
Thu Sep  DBClientCursor::init call() failed
Thu Sep  WriteBackListener exception: DBClientBase transport error: ns: admin.$cmd query: { writebacklisten:  }
Thu Sep  warning: db exception when initializing on , current connection state is { state: { conn: , vinfo: "textdb.doctext", cursor: "(none)", count: , done: false }, retryNext: false, init: false, finish: false, errored: false } :: caused by :: DBClientBase transport error: ns: admin.$cmd query: { setShardVersion: "textdb.doctext", configdb: , version: Timestamp , versionEpoch: , serverID: , shard: , shardHost:  } auth:
Thu Sep  got not master for:
Thu Sep  primary for replica set  changed to
Got signal:
{noformat}
and
{noformat}
Tue Sep  Socket recv(): connection reset by peer
Tue Sep  SocketException: remote:  error: socket exception server:
Tue Sep  DBClientCursor::init call() failed
Tue Sep  WriteBackListener exception: DBClientBase transport error: ns: admin.$cmd query: { writebacklisten:  }
Tue Sep  connection accepted from  ( connections now open)
Tue Sep  got not master for:
Got signal:
{noformat}
1
poolResetCounter is used to guard against processing stale messages in the  drivers spec. When we have a problem contacting a monitored host, the failedHost method of the RSM will be called to notify the SDAM subsystem. If this problem is a network timeout or  timeout, we should increment the poolResetCounter for the associated ServerDescription. There is already a generation member variable on the connection pool that is used for the same purpose as poolResetCounter; we should determine whether we want to use this value or track this separately in the RSM code. Either way, the SDAM system needs to be modified to ignore messages coming from connections associated with older versions of poolResetCounter.
0
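The generation/poolResetCounter pattern described above can be sketched in a few lines of Python (illustrative names only, not the actual driver code): events are stamped with the generation current at creation time, and anything stamped with an older generation is discarded as stale.

```python
# Sketch of a generation counter guarding against stale messages.
class Pool:
    def __init__(self):
        self.generation = 0

    def reset(self):
        # e.g. after a network timeout reported by the monitor (failedHost)
        self.generation += 1

    def should_process(self, event_generation: int) -> bool:
        return event_generation == self.generation

pool = Pool()
stamped = pool.generation        # a message created before the failure
pool.reset()                     # failure detected -> bump the counter
assert not pool.should_process(stamped)      # stale message is dropped
assert pool.should_process(pool.generation)  # current messages still flow
```

Whether the driver reuses the pool's existing generation or tracks a separate counter in the RSM, the comparison at the consumer is the same shape.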
Backup/restore is running into startup failures when recovering a mongod possessing recordPreImages in a collection's persisted metadata. This code errors if not in replica set mode, and it is called by Collection::init, which is part of server startup.
1
The command line to create the service has mixed paths: it has Enterprise and Standard too, so it will not work if someone has the Standard, for example.
1
Is there a list of all commands in alphabetical order any more? The link at the top of this page claims to go there, but it is incorrect.
1
The resource constraints in the  constructor can cause  to hit an internal OOM condition and abort, or return an empty handle, when attempting to allocate a  object. For now, the goal is to remove the resource constraints. If we want to add similar functionality in the future, it may be best to implement our own, so we can determine when to terminate or pause execution.
1
Includes work to remove Boost.
0
The summary states the goal: PyMongo's pure-Python BSON module is slow. This hasn't historically been a huge problem, since most users have taken advantage of _cbson to dramatically improve performance. The popularity of PyPy (and, to a much lesser extent, Jython) means we have to provide better pure-Python performance. The primary issue is memory copies: serializing to BSON, and splitting strings when decoding back to Python dicts. To solve these problems we will probably try a few different approaches: using cStringIO/BytesIO and/or list comprehensions with generators. We will also investigate the newer buffer protocol, but that isn't supported everywhere people want to use PyMongo.
0
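The memory-copy issue mentioned above has a classic stdlib illustration: building a payload by repeated bytes concatenation copies the whole buffer on every step, while io.BytesIO appends into one growable buffer. This is a toy example (the byte chunks are arbitrary), not PyMongo's actual encoder:

```python
import io

chunks = [b"\x10a\x00", b"\x01\x00\x00\x00", b"\x00"]

# O(n^2) total copying: each += reallocates and copies the prefix.
out = b""
for c in chunks:
    out = out + c

# Single growable buffer: each write appends in place.
buf = io.BytesIO()
for c in chunks:
    buf.write(c)

assert buf.getvalue() == out
```

For a pure-Python serializer on PyPy or Jython, avoiding those intermediate copies is exactly the kind of change that dominates the profile.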
{noformat}
$ sudo mongod
I JOURNAL  journal dir=/data/db/journal
I JOURNAL  recover: no journal files present, no recovery needed
I JOURNAL
E JOURNAL  Insufficient free space for journal files
I JOURNAL  Please make at least  available in /data/db/journal or use --smallfiles
I JOURNAL
I STORAGE  exception in initAndListen: Insufficient free space for journals, terminating
I CONTROL  now exiting
I NETWORK  shutdown: going to close listening sockets
I NETWORK  removing socket file:
I NETWORK  shutdown: going to flush diaglog
I NETWORK  shutdown: going to close sockets
I STORAGE  shutdown: waiting for fs preallocator
I STORAGE  shutdown: final commit
I STORAGE  shutdown: closing all files
I STORAGE  closeAllFiles() finished
I CONTROL  dbexit: rc:

$ sudo mongod --repair
I CONTROL  MongoDB starting: dbpath=/data/db
I CONTROL  warning: you are running this process as the root user, which is not recommended
I CONTROL
I CONTROL  db version
I CONTROL  git version
I CONTROL  build info: Linux  SMP Thu Apr  UTC
I CONTROL  allocator: tcmalloc
I CONTROL  options: { repair: true }
I STORAGE  repairDatabase mong
I -        Fatal assertion  OutOfDiskSpace: Cannot repair database mong having size  bytes because free disk space is  bytes
I CONTROL
{noformat}
1
highlights that geospatial indexes cannot support covered queries it may be helpful to add this to covered query limitations and limits
0
im using ruby with mongoid to connect to mongoid today when im trying to insert embedded documents or list them mongodb gives me a replicable segfault each time i try to do so i tried to do this with a nightly build of mongodb as well but get the same result each mar assertion handle type status new mongo mongo fri mar mongo got signal segmentation fault stack trace fri mar mongo
1
code diff git atestsjsonconnectionurivalidauthjson btestsjsonconnectionurivalidauthjson index atestsjsonconnectionurivalidauthjson btestsjsonconnectionurivalidauthjson description escaped username gssapi uri valid true warning false hosts type hostname host localhost port null auth username userexamplecom password secret db null options authmechanism gssapi authmechanismproperties servicename other canonicalizehostname true description atsigns in options arent part of the userinfo uri mongodbalicesecretexamplecomadminreplicasetmyreplicaset valid true code
0
add macos arm to tools
1
from debugging the i have found that the and combines the increments of both history store wt data files when performing a searchnear for a particular key without any visibility in a populated btree the wttxnread function checks if a particular entry is valid to be returned the visibility of the entry determines if an entry is valid the function will first check inside the update list ondisk and finally the history store for any visible entries now when we are looking inside the history store another search near is started to now track if there exists an entry in the history store now the and gets incremented because we have done a search near inside the hs this means to gather valuable information around number of entries traversed through a single search function cant be determined whether it was from the history store or the data file the idea behind this ticket is to potentially add another statistic so that we can increment statistics separately from the hs this benefits wt to better diagnose if we are tracking next calls from a data file or hs file
0
we followed the below steps for installing mongo db enterprise root access sudo is available for downloaded the tar file from tar file placed the tar file under tar zxvf cp r n why is service mongod start and service mongod stop not working root access will be revoked after days how should we manage the process after that
1
in order to support multithreaded replication we apply operations in batches on secondary nodes the batching enables us to guarantee that read queries on secondaries do not see operations applied out of order each batch is divided up among many threads and while they are writing we block all readers even if the writing threads yield the multithreaded writing allows for concurrent cpu usage in addition before the batched writing begins we prefetch all the memory pages which contain records we are about to write including the pages we need to traverse in all indexes this prefetch stage provides concurrency for io and is probably providing the majority of the speedup that users are seeing it also allows us to hold the write lock for a minimal amount of time since there should not be any page faults taken in the write phase
1
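the batched, multithreaded apply described above can be sketched as follows; this is a simplified illustration under the assumption that ops are partitioned by document id so each document's ops stay in one worker and apply in oplog order (function and field names here are invented for the sketch, not mongodb internals).

```python
import threading
from collections import defaultdict


def apply_batch(store, ops, n_workers=4):
    # divide the batch among workers keyed by document id so that every
    # document's ops land in one worker and apply in oplog order there
    buckets = defaultdict(list)
    for op in ops:
        buckets[hash(op["id"]) % n_workers].append(op)

    def worker(my_ops):
        for op in my_ops:
            store[op["id"]] = op["value"]  # later op for the same id wins

    # in the real system readers are blocked for the duration of the batch,
    # so they never observe a partially applied, out-of-order state
    threads = [threading.Thread(target=worker, args=(b,))
               for b in buckets.values()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

note that cross-document ordering is deliberately relaxed inside a batch; blocking readers until join() is what preserves the illusion of in-order application.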
problem statementrationale what is going wrong steps to reproduce could an engineer replicate the issue youre reporting expected results what do you expect to happen actual results what do you observe is happening additional notes additional information that may be useful to include
1
since we dont double map on windows anyway the justification for not journaling on doesnt apply to we just need to test that everything works correctly in this case
0
for immediate release backup agent version released upgraded to go monitoring agent version released upgraded build to go
1
in number of connections block in below line i think it should be connections not collections available the total number of unused collections available for new clients
1
as our first step for writing our security architecture guide we should create the initial set of markdown files as outlined by the design document we should populate these files with the sections and subsections listed in the design
0
several links to symptoms of the issue at the end of the day this looks like a problem in but not which is a huge problem since i just switched all my code to this bug has severe performance issues on queries reducing performance anywhere from
1
this has come up in a few unrelated code changes sometimes we have a mutable span of char and it would be more efficient to write directly into it than to write into an itoa objects buffer and memcpy it out the itoa algorithm should be separated from its buffering representation
0
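the separation the ticket asks for — the digit-producing algorithm decoupled from any owned buffer — can be illustrated in python with a function that writes decimal digits directly into a caller-supplied buffer at a given offset (in the c++ codebase this role is played by something like `std::to_chars`; `itoa_into` below is a hypothetical name for the sketch).

```python
def itoa_into(buf, pos, n):
    """Write the decimal digits of non-negative n into bytearray buf
    starting at pos; return the index one past the last digit written.
    No intermediate string object is allocated and then copied out."""
    if n == 0:
        buf[pos] = ord("0")
        return pos + 1
    start = pos
    while n:
        buf[pos] = ord("0") + n % 10
        n //= 10
        pos += 1
    # digits were produced least-significant first, so reverse in place
    buf[start:pos] = buf[start:pos][::-1]
    return pos
```

the caller owns the span and chains writes by feeding the returned index back in as the next `pos`, which is exactly what avoids the write-to-temporary-then-memcpy pattern the ticket complains about.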
doublerollbackjs waits for node to roll back using the following checks code jstestlogwait for nodes to roll back and both node and to catch up to node waitforstatenodes replsetteststatesecondary waitforstatenodes replsetteststatesecondary rstawaitreplication code however these checks are not reliable on and the first check can succeed because node was in state secondary prior to entering rollback the awaitreplication only checks livenodes on and which may not include node if the last call to ismaster failed due to the node closing connections when entering rollback we require a more reliable way to wait for node to complete rollback
0
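the fix the ticket asks for amounts to polling an authoritative condition (e.g. the node's reported state after rollback) rather than relying on a stale status check. the generic poll-until-true pattern can be sketched as below; `wait_until` is an illustrative helper, not part of the jstest framework.

```python
import time


def wait_until(predicate, timeout=10.0, interval=0.1):
    """Poll predicate until it returns True or timeout elapses.
    Returns True on success, False on timeout — the caller decides
    whether a timeout is a test failure."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

in the rollback test, the predicate would re-query the node's current state each iteration instead of trusting a state observed before rollback began.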
attempting to prepare a logged transaction will assert it should gracefully error as its a reasonable application mistake
0
description journaling docs at says mongodb syncs the buffered journal data to disk every milliseconds starting in mongodb however this is incorrect or at least not completely the answer and is potentially causing users to think mongodb cant provide a highly available data solution ie write concern majority and also yield response times in far less than milliseconds at the same time however for example i am seeing in my own tests with write concern majority for an atlas hosted replicaset across availability zones in one region an average response time of around and maximum response time of around ive just been informed that actually for write concern majority at least the journal behaviour is write is journaled and flushed to disk immediately on the primary no waiting for next journal batch write mongod does this on a separate thread so that multiple writes can be part of the same flush its not one flush per write once journaled to disk on primary disk then the change is available to be replicated each secondary listening to the primarys oplog does the same and flushes journal to disk as soon as it received the change before then acknowledging back therefore the latency of a client performing a write to a node replica set using a write concern of majority is network roundtrip which will be in the order of milliseconds for ssd disks and a fairly local network of replicas scope of changes impact to other docs mvp work and date resources scope or design docs invision etc
0
i observed the following build warning when compiling libmongoc for phpc noformat in file included from phpcsrclibmongocsrclibmongocsrcmongocmongoctopologyc in function ‘mongoctopologyrescansrv’ warning too many arguments for format trace sd msg note in expansion of macro ‘trace’ trace polling for srv records null in file included from phpcsrclibmongocsrclibmongocsrcmongocmongoctopologydescriptionc in function ‘mongoctopologydescriptionhandleismaster’ warning too many arguments for format trace sd msg note in expansion of macro ‘trace’ trace wrong set name null noformat this appears to have been introduced in for and pertains to how trace is defined in mongoctraceprivateh as a proxy to mongoclog the macros msg argument is always expected but subsequent arguments are taken with removing the null argument resulted in an outright error noformat in file included from phpcsrclibmongocsrclibmongocsrcmongocmongoctopologyc in function ‘mongoctopologyrescansrv’ error expected expression before ‘’ token vaargs note in expansion of macro ‘trace’ trace polling for srv records noformat if we cant fix this directly perhaps itd be best to simply suppress any wformatextraargs warnings around the trace macro
0
i would love to see a native mongoorg debian deb for armhf architecture to install on my nas thanks philipp
0
i do have an unique index with a partialfilterexpression on a collection but duplicate data is sometimes inserted index creation code getcollectioncreateindexnew basicdbobjectuserid new basicdbobjectname uidxsomethinguser appendpartialfilterexpression new basicdbobjectpropertiessometing new basicdbobjecteq true appendunique true code the index from the getindicies command code v unique true key userid name uidxsomethinguser ns somewheresomething partialfilterexpression something eq true code the duplicated documents code id userid express false something true items recipient id id userid express false something true items recipient id code mongodb version seems also to happen with at least a sidenote when dumping the collection and restoring it a duplicate key error is thrown why is it sometimes possible to insert duplicate data and how to avoid that
1
signature noformat fail smokesh t process running t successful run completed seconds t process running segmentation fault core dumped fail smokesh exit status noformat no config was dumped spinlockpthreadadaptivetest failed on ubuntu host project wiredtiger develop commit diff error if restoring a backup with metadata verification on error if restoring a backup with metadata verification on turn off verify metadata for backup document backup is incompatible with verifymetadata dec utc evergreen subscription evergreen event task logs spinlockpthreadadaptivetest
0
the implementation of find command on mongod checks the shard version outside of collection lock this means that the collection might change after it has been checked but before the command actually starts returning results and thus it may return results which do not belong to the shard being queried this should be easy to fix by moving the collection lock up
0
problem statementrationale in the repos are signed with the auth keys this causes this issue codejava total size m installed size m is this ok y downloading packages warning header signature key id nokey retrieving key from the gpg keys listed for the mongodb repository repository are already installed but they are not correct for this package check that the correct key urls are configured for this repository failing package is gpg keys are configured as code
1
it would be useful if the driver were able to return the database servers current time as a datetime counting on a client machines internal clock is unreliable and there are many cases where a single definitive time source is much more appropriate possible uses create update timestamps in records calculation of elapsed time since a recorded datetime sequencing incoming records coming from multiple client machines rationalizing updates coming from different time zones where machines are clustered this call should always be satisfied from the primary machine or arbiter
0
i have installed libbson and c mongodb driver i then tried to install the c new driver which gave me this error cmake error at message command failed make install see also homecortanadesktopmongocxxdrivermasterbuildsrcbsoncxxthirdpartyepmnmlstccoreprefixsrcepmnmlstccorestampepmnmlstccoreinstalllog make error make error make error how do i remove this error and install the driver
1
remove global x lock acquisition for cloner this is because acquiring global lock in x mode can be blocked by prepared transactions the enqueued global x lock can block oplog queries which need the global is lock if these oplog queries and the data replication are needed to satisfy the prepared transactions write concern then the prepare transaction and replication cannot make progress thus a deadlock occurs alternatively if removing global x lock is not an option deprecate the usage or make sure it wont be blocked on prepare transactions
0
in microbenchmarks we run the standalone variants every and repl variants every hours we should make both run every hours
0
example page the lists under tasks and patch status appear to be in exactly the opposite order i think the tasks order is the same as the lefttoright colored boxes on the main page for the patch so it seems the more consistent ordering
0
on linux if during configure we find posixfallocate but not fallocate we try the syscall for fallocate to see if it works going back as far as rhel the syscall is in fact there and working but not in libc
1
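the configure-time probe described above (does fallocate actually work here, regardless of what libc exposes) has a direct runtime analogue in python via `os.posix_fallocate`; the sketch below is illustrative of the probe idea, not wiredtiger's actual c configure check.

```python
import errno
import os
import tempfile


def probe_posix_fallocate(nbytes=4096):
    """Return True if posix_fallocate actually allocates on this
    platform/filesystem, False if it is missing or unsupported."""
    if not hasattr(os, "posix_fallocate"):
        return False  # e.g. platforms without the call at all
    fd, path = tempfile.mkstemp()
    try:
        try:
            os.posix_fallocate(fd, 0, nbytes)
        except OSError as exc:
            # some filesystems report the call as present but unsupported
            if exc.errno in (errno.EOPNOTSUPP, errno.EINVAL, errno.ENOSYS):
                return False
            raise
        return os.fstat(fd).st_size >= nbytes
    finally:
        os.close(fd)
        os.unlink(path)
```

this mirrors the ticket's point: availability must be tested by attempting the operation, because the symbol being present in headers or libc does not guarantee the kernel or filesystem honors it.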
i was trying to be smart and use expansions project as a value for paramsworkingdir for shellexec and paramsfile for expansionsupdate but it didnt work out as evergreen didnt expand those variables it works fine for values to params though what is the logic of which value sections allow expansions and which not
0
when we sweep the lookaside table we can only start removing a block at a full record if we start at a modify update that update will be lost when the record is read
1
running the nightly build builds before that one do not allow calling functions from systemjs it appears that functions extending the javascript array class get added to the array when it gets stored
1
paneltitledownstream change specific downstream impact tbd panel description of linked ticket paneltitleepic summary add expressions for datetime manipulation that do not involve time durations motivation this feature is necessary for us to release an mvp for a time series product cast of characters product owner katya kamenieva project lead tbd program manager craig homa drivers contact tbd documentation scope document technical design document product description panel
0
the current capitalization is inconsistent with phps datetime class name datetime is also not a word while the class name itself isnt casesensitive this does matter for documentation and expected output for vardump and the like has already made this change in the hhvm extension
0
for immediate release version various fixes for sharded cluster containing some nodes on linux and some on windows fix prevent possibility of short downtime during conversion to ssl
1
the grantrolestouser and revokerolesfromuser actions dont produce audit records the spec says that they do for text output noformat grantrolestouser na granted to user the roles revokerolesfromuser na revoked from user the roles noformat although in a related bug the spec doesnt list any bsonfile output for these actions however neither format actually produces any trace in the audit log
1
note the scope of this ticket is limited to fixing a rollback operation failed with the wtrollback error formatstressppczseriestest failed on ubuntu ppc host project wiredtiger develop commit diff fix conflict between concurrent operations in cursor modify reproduced the error on my machine however i do not think that reduced cache is needed for the test functionality as with the largemodify where the error occurs we are not testing eviction but checking for the modified content in the history store after recovery hence increasing cache to additionally committing each modification instead of a huge chunk of modification so to avoid cache pressure did not get the errors after making the above changes moreover added statistics to test whether or not data is being inserted into the history store also improved comments aug utc evergreen subscription evergreen event task logs formatstressppczseriestest
0
internal server error traceback most recent call last file line in getresponse response callbackrequest callbackargs callbackkwargs file line in wrappedviewfunc response viewfuncrequest args kwargs file homesitespythonprojectnewsblurutilsviewfunctionspy line in wrapper output funcrequest args kw file homesitespythonprojectnewsblurappsreaderviewspy line in index return welcomerequest kwargs file homesitespythonprojectnewsblurappsreaderviewspy line in welcome statistics mstatisticsall file homesitespythonprojectnewsblurappsstatisticsmodelspy line in all values dict file line in iterresults selfpopulatecache file line in populatecache selfresultcacheappendselfnext file line in next doc selfdocumentfromsonrawdoc file line in fromson else fieldtopythonvalue file line in topython value objectidvalue file line in init selfvalidateoid file line in validate texttypename typeoid typeerror id must be an instance of str unicode objectid not when i run newsblur it throws this error i could not find the reason please help
1
hi we have replicasets used in our production environment and we are seeing at odd times the following error in our application logs exception failed on findone no replica set monitor active and no cached seed found for set cant find a reference to this error code or message so putting in a ticket hoping someone can help us understand what happens this usually fails and recovers minutes later when this happens we check and the replica set is alive and healthy we are using the c driver and using mongo
1
our docs generation repo is there we can add a task to the makefile that builds the docs using typedoc for the latest branch
0
as it gives shorter link times evergreen probably requires continued use of as it seems to use less memory need to let zi soak with some developers to ensure its safe to use as default
0