text_clean
stringlengths
8
6.57k
label
int64
0
1
wt engine deletes the oplog manager on shutdown while holding the oplog manager mutex codec void wiredtigerkvenginedeleteoplogmanager stdxuniquelock lockoplogmanagermutex invariantoplogmanagercount oplogmanagercount if oplogmanagercount oplogmanagerreset code oplog managers destructor waits for oplogjournalthread to join however the oplog journal thread may be setting the the oldest timestamp which needs oplog managers mutex thus causing a deadlock noformat thread wtoplogjournalthread thread lwp in llllockwait from in from in pthreadmutexlock from in gthreadmutexlock mutex at stdlock this at stduniquelocklock this at stduniquelockuniquelock m this at mongosetoldesttimestamp this oldesttimestamp at in mongosetstabletimestampforstorageinlock thisthisentry at in mongoupdatecommitpointinlock thisthisentry at in mongoupdatelastcommittedoptimeinlock thisthisentry at in mongosetmylastdurableoptimeinlock thisthisentry optime isrollbackallowedisrollbackallowedentryfalse at in mongosetmylastdurableoptimeforward this optime at in mongowaituntildurable thisthisentry forcecheckpointforcecheckpointentryfalse stablecheckpointstablecheckpointentryfalse at in mongooplogjournalthreadloop this sessioncache oplogrecordstore at in stdexecutenativethreadroutine p at in startthread from in clone from noformat attached is the debuggers log and the lock dependency graph the join wait is not shown in the graph
0
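The row above describes a shutdown deadlock: a destructor joins a worker thread while still holding the mutex that the worker itself needs. A minimal Python sketch of the hazard and the standard fix (release the lock before joining); this is an illustrative model, not the WiredTiger code, and the class/method names are invented.

```python
import threading

class OplogManagerLike:
    """Toy model of the shutdown deadlock: the owner joins a worker
    thread that periodically takes the same mutex the owner holds."""
    def __init__(self):
        self.mutex = threading.Lock()
        self.stop = threading.Event()
        self.worker = threading.Thread(target=self._journal_loop)
        self.worker.start()

    def _journal_loop(self):
        while not self.stop.is_set():
            with self.mutex:        # stands in for setOldestTimestamp
                pass
            self.stop.wait(0.01)

    def shutdown_safe(self):
        with self.mutex:
            self.stop.set()         # flip state under the lock if needed...
        self.worker.join()          # ...but join only after releasing it;
                                    # joining inside the `with` can deadlock

mgr = OplogManagerLike()
mgr.shutdown_safe()
print(mgr.worker.is_alive())        # False: shutdown completed
```

The fix in this shape is to narrow the critical section so the join happens with the mutex dropped, which is the same ordering constraint the bug report implies.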
by passing the updated mutablebson document into the collection update interface we can move oplog generationlogging and other storage behavior down and out of the update stage this can allow storage engines to more concisely and efficiently handle inplacedelta updates
0
replicapair setup and the first segmentation fault is the masteri launch a mapreduce command with the ruby driver the command fails i receive this exception then i launch collectioncount i get a proper resultand finally i launch the same mapreduce command but with less data thanks to a query and mongod crashes i know that this last mapreduce command behaves correctly when launched on a freshly started servertue apr cmd drop apr cmd drop apr got signal segmentation faulttue apr backtrace the sad part is that i did not notice that the master crashed autoreconnect and i crashed the second pair right after i attached the logs of the two servers and the mapreduce code in a ruby filei can reproduce the problem on my laptop mongodb osx with the same collection and a single mongod instance but it does not raise a segmentation fault mongod just gently kills itselftue apr connection accepted from apr connection accepted from apr cmd drop apr cmd drop failed probably invalid string why typeerror malformed character sequence at offset apr mr failed removing collectiontue apr cmd drop apr cmd drop apr query randomdbnamecmd command apr cmd drop apr cmd drop failure oldfpdormantnext at apr got signal abort traptue apr backtrace mongod libsystembdylib sigtramp mongod mongod mongod mongod main mongod start apr dbexit tue apr shutdown going to close listening socketstue apr going to close listening socket apr listener on port abortedtue apr going to close listening socket apr shutdown going to flush oplogtue apr shutdown going to close socketstue apr shutdown waiting for fs preallocatortue apr messagingport recv bad file descriptor apr shutdown closing all filestue apr end connection apr closeallfiles finishedtue apr shutdown removing fs locktue apr dbexit really exiting nowso i suppose that this problems appears as soon as a previous mapreduce command fails
1
we have code that looks like code client mongoclientnew connect sharded database admin inprog clientfindall code this code works fine to find current operations for replicaset setups but it does not work for sharded setups the error we encounter is code mongo dbclientbase transport error ns admincmdsysinprog query query all true from validate from block in executemessage from withconnection from withconnection from executemessage from execute from sendinitialquery from each code its worth noting were also using ssl looking at the code from the mongo shell it seems like the only thing they do differently is disabling the read preference for the command i have tried a few different read settings none have worked in my testing would it be possible to get a special method to get current operations for a database the way the code works i dont think i can easily remove the read preference for a query
0
it would be nice if project managers could specify in a file what builders are required when sending patch builds so the evergreen cli could read those defaults when users dont specify them wed have to decide what to do when users have specified default builders in their evergreenyml file probably printing a warning and honoring users settings would be enough
0
the setindexbuildcommitquorum cmd uses the helperupsert function to write to the configsystemindexbuilds collection helperupsert eventually hits a call to namespacestringislegalclientsystemns whereupon it fails attempts to write to the collection because we have not whitelisted it update namespacestringislegalclientsystemns to allow writes to the configsystemindexbuilds collection
0
right now the build drops all sorts of things in the root of the source tree the scons directory the sharedclient directory various statically linked executables the libmongoclient binary etc this leads to a rather complicated gitignore and a bad user experience where getting a clean tree is hard all build artifacts should be captured under the build directory
1
tasks context as a database contributor when i run a patch build containing google microbenchmarks then i expect to see redgreen results or async signal processed bfs as a way to alert me to performance regressions ac ensure signal processing is running in google benchmarks check historical values for google benchmarks through the evergreen api of the base commit make the threshold configurable in a yaml file in google benchmarks well use for this use case meaning if the task is worse for latency since google benchmarks only reports latency than the baseline commit then the task will be marked failed red the key in the yaml file should be the key for the metric in the json perf output
1
christian im not sure if you noticed my final reply in after you closed the issue this demonstrates the issue i raised i do feel that this is an issue that can easily create problems for people and is best addressed i welcome your thoughts
0
to make builds green we should allow the error code number todo as a result of running the sample query here this is just to make the build green until we finish work on and prevent
0
testing locally enableicuoff as specified in our docs here results in linking errors for example the error usrbinld undefined reference to however mongocenableicuoff compiles without issue but for some users see this does not prevent libmongoc from linking against libicu we should investigate why the options above do not prevent libicu from linking
0
as of the value of deletestageparamsshouldcalllogop has no effect as part of this work the call to collectiondeletedocument in deletestagework should be passed a null pointer for the deletedid parameter
0
much like did for unsupported wire versions the test suite skips many tests when the primary server cannot be reached we should either have a test that ensures the driver can connect using the mongodburi environment variable uri constant in tests at all times or remove some of the skip logic introduced in
1
versions of the server have two implementations of the update subsystem the old and earlier system and the new system in srcmongodbupdate which is both more performant and supports more expressive array updates the old and new systems have different behavior with respect to field ordering in order to ensure that the field ordering is consistent across all nodes in the replica set the primary and secondaries must use the same version of the update subsystem this is achieved via the feature compatibility version mechanism users must set the feature compatibility version fcv to in order to enable the new update system the fcv check however does not guarantee that a given update uses the same version of the update code on every node consider the following sequence of events a two node replica set is started both nodes are version but have fcv the client concurrently issues an update and these operations take compatible locks and therefore execute concurrently on the server the setfcv command writes its update to adminsystemversion to the oplog at optime t after this oplog entry is written but before the inmemory fcv state changes the update is logged with some optime greater than t this uses the old update system since the fcv inmemory state has not yet been changed the two oplog entries are applied on the secondary since the adminsystemversion write has an earlier optime and must be applied in its own batch the update uses the new update system i was able to reproduce a dbhash mismatch against a twonode replica set by running two scripts concurrently from two shells connected to the primary node the first script repeatedly issues an update with two sets that will result in different field ordering depending on which version of the update implementation is used code function use strict dbcdrop for var i i i assertwriteokdbcinsertid i assertwriteokdbcupdateid i set b a code the second script repeatedly sets the fcv from to and back again code function use strict while true assertcommandworkeddbadmincommandsetfeaturecompatibilityversion assertcommandworkeddbadmincommandsetfeaturecompatibilityversion code after the first script completes running the dbhash command against the test database on each node should show different hashes for testc
1
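The race in the row above can be replayed deterministically: the primary logs the setFCV oplog entry before flipping its in-memory FCV, so a concurrent update can land at a later optime yet still use the old update system, while the secondary (applying strictly in optime order) uses the new one. A simplified Python simulation, not server code; the version strings are illustrative labels.

```python
oplog = []
fcv_in_memory = "old"                  # primary's in-memory FCV

oplog.append((1, "setFCV", "new"))     # optime t: setFCV entry written first...
oplog.append((2, "update", fcv_in_memory))  # optime t+1: update logged, but the
                                       # in-memory FCV has not flipped yet
fcv_in_memory = "new"                  # ...only now does in-memory state change

primary_update_system = oplog[1][2]    # system the primary actually used: "old"

# The secondary applies entries in optime order, so it flips its FCV
# *before* it sees the update and replays it with the new system.
secondary_fcv = "old"
secondary_update_system = None
for _, op, arg in oplog:
    if op == "setFCV":
        secondary_fcv = arg
    else:
        secondary_update_system = secondary_fcv

print(primary_update_system, secondary_update_system)  # old new -> divergence
```

The disagreement between the two final values is exactly the condition that produces the dbhash mismatch described in the row.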
this behavior was already changed on the and later branches with from wed like to do the same on the branch to avoid having the shardingstate command response exceed the maximum bsonobj size when sharding isnt even enabled
0
run the splitchunk command with out of bound splitkeys the command fails but it still updates the chunks hence results in corrupted config database with overlap chunks and a chunk with reverse order weve tested and all have the same issue the chunks before running the splitchunk command noformat before id testuserxminkey lastmod lastmodepoch ns testuser min x minkey max x shard id lastmod lastmodepoch ns testuser min x max x maxkey shard noformat the chunks after running the splitchunk command noformat after id testuserxminkey lastmod lastmodepoch ns testuser min x minkey max x shard id lastmod lastmodepoch ns testuser min x max x maxkey shard id lastmod lastmodepoch ns testuser min x max x shard noformat jstest attached it fails with noformat assert are not equal split chunks failed but the chunks were updated in the config database e query error are not equal split chunks failed but the chunks were updated in the config database failed to load splitchunkoutofboundjs noformat also for the primary shard fasserts after configchunks has been modified noformat i fatal assertion illegaloperation cannot split chunk x minkey x at key x at srcmongodbssplitchunkcommandcpp i aborting after fassert failure f got signal abort trap noformat fassert is not an apporpriate response to the server receiving a command with bad parameters the command should just fail and return an error with no side effects
1
when i want to store datetime it stored like bsonwriterwritestringdatetimetostringyyyymmddthhssfffffffkproblem now i want to query my datetime property by eq query query serializer generate something like i got empty result set because dates doesnt match date in base with milliseconds generated date withouti can do only workaround about thisdatetimeproperty gte lt like query date datetime query date seconds to handle milliseconds
1
looks like distinctscan added in doesnt request the full document like it does correctly individual fields noformat correct dbnewtestaggregate groupid valuedate first first farmid id first id first incorrect dbnewtestaggregate groupid valuedate first first root id first valuedate id first valuedate noformat
1
we have been using mongodb for a long time and now we are starting to use spark in some queries we match null field which has been impossible to do with pyspark and the mongodb connector could it be a possibility to use none to match null
1
and mongoccursorgetid
0
when all config servers are down admin commands can be executed on a mongos without the necessary privileges reproduction steps start sharded cluster with keyfile authentication add users kill all config servers log into mongos and execute admin commands that would otherwise require certain privileges eg serverstatus would require clusteradmin rolejstest is attachedexample shell transcriptwith config server here only runningcodemongomongodb shell version to testerror while trying to show server startup warnings not authorized on admin to execute command getlog startupwarnings mongos dbadmincommandserverstatus ok errmsg not authorized on admin to execute command serverstatus code config servercodevetrenterdocumentstmp psmongotr s mongod dbpath userstrdocumentstmpdataconfigdb logpath userstrdocumentstmpdataconfigmongodlog port logappend keyfile userstrdocumentstmpdatakeyfile configsvr forktr s mongod dbpath logpath port logappend keyfile userstrdocumentstmpdatakeyfile forktr s mongod dbpath logpath port logappend keyfile userstrdocumentstmpdatakeyfile forktr s mongos logpath userstrdocumentstmpdatamongoslog port configdb logappend keyfile userstrdocumentstmpdatakeyfile forkvetrenterdocumentstmp kill the same command againcodevetrenterdocumentstmp mongomongodb shell version to testmongos dbadmincommandserverstatus host enterlocal version process mongos pid uptime
1
primary loses connectivity with replica set setfeaturecompatibilityversion is run on primary setfeaturecompatibilityversion is run on remainder of replica set old primary will rollback remove the uuid on adminsystemversions old primary will fetch fcv from the remaining members rollback loop continues forever codetext i repl starting rollback due to oplogstartmissing our last op time fetched ts t sources gte t s t hashes i repl replication commit point ts t i repl rollback using the rollbackviarefetchnouuid method because uuid support is not feature compatible with featurecompatibilityversion i repl transition to rollback from secondary i network skip closing connection for connection i network skip closing connection for connection i rollback starting rollback sync source i asio successfully connected to took connections now open to i rollback finding the common point i rollback our last optime i rollback their last optime i rollback diff in end of log times seconds i rollback rollback common point is ts t i rollback starting refetching documents i rollback finished refetching documents total size of documents refetched i rollback checking the rollbackid and updating the minvalid if necessary i rollback setting minvalid to ts t i rollback dropping collections to roll back create operations i rollback deleting and updating documents to roll back insert update and remove operations i rollback exception in rollback nsadminsystemversion id featurecompatibilityversion illegaloperation tried to complete upgrade but adminsystemversion did not have a uuid code
1
we tag each query for profiling using addspecial and commentdbtestfindfieldvalueaddspecialcomment these show up correctly in the logs and dbcurrentop when using collectionfind methodbut any flags added via addspecial are not used by the collections getcount method we use the count method to display a querys total countdbtestfindfieldvaluecountdbtestfindfieldvalueaddspecialcomment countthe count disregards any special flags so profilingtracingdbcurrentop do not show our unique tag for that querybased on the source code the dbcollectiongetcount method does not accept any special flags parameters so it looks like a core server issuesee attached console session where a findaddspecial shows the comment via dbcurrentop but it is not displayed when i issue a findaddspecialcount
1
matching on a positional path only seems to work as expected within a facet if there are no subsequent stages count project group all behave as though they receive zero input documents even though results are correctly returned when no subsequent stage is provided sort however seems to function correctly original description the below script leads to an empty scorerank code dbtestdrop dbtestinsertmany id quizzes score id quizzes score const res dbtestaggregate facet scorerank match gt count count toarray printjsonstringifyres null code i get the below output code scorerank code if you replace with quizzesscore move match out of facet or move the entire scorerank pipeline out of facet i get the expected result code scorerank count code
1
when i rename a field in mongoid it successfully updates the field in mongodb but it does not update the field in the ruby code class example include mongoiddocument field apple type string as capple end exampleall mongoidcriteria selector options class example embedded false exampleallrenameapple dapple moped update databasecore collectionexamples selector updaterenameappledapple flags command databasecore runtime updatedexistingfalse writtentonil errnil e examplenew edapple nomethoderror undefined method dapple for
1
in mongod correctly returns an error when trying to index a key over the maximum allowed length there are plenty of workarounds for new applications hash the value truncate the value but for existing applications with millions of documents this causes major issuesallowing users to relax this restriction will permit them to gradually deal with these bad index entries they can use the upgradecheck tool provided in to get an idea if this is something they need or notto enable this flag starting in start mongod with setparameter it defaults to true
0
im running and have ran into a problem that appears similar to i dont have a replica can you help me recover the data there are also some other issues that appear to have the same problem based on these ive already tried the following the options for repair etc switching to and but all of these run into the same messages below code w detected unclean shutdown datadbmongodlock is not empty w storage recovering data from the last clean checkpoint i storage wiredtigeropen config e storage wiredtiger filewiredtigerwt connection read checksum error e storage wiredtiger filewiredtigerwt connection wiredtigerwt encountered an illegal file format or internal value e storage wiredtiger filewiredtigerwt connection the process must exit and restart wtpanic wiredtiger library panic i fatal assertion i control begin backtrace backtraceprocessinfo mongodbversion gitversion uname sysname linux release version smp sun jun utc machine somap mongod mongodwteventv mongodwterr mongodwtpanic mongodwtbmread mongodwtbtread mongodwtbtreetreeopen mongodwtbtreeopen mongodwtconnbtreeget mongodwtsessiongetbtree mongodwtmetadataopen mongodwiredtigeropen mongod mongodmain mongod end backtrace i aborting after fassert failure code
1
python import motor motordictkeys dictkeys theres no motorasyncio or motorclient so motor is effectively unusable also note on python there is no motorasyncio attribute im guessing the stuff under motorasyncio was moved into the toplevel namespace so the tutorial is inaccurate
1
ldap proxy documentation has the external database listed in examples using doublequotes this can cause problems as many shells will interpret it as a variable instead of a literal string we should probably change it to either be singlequotes or escape the
1
presently these commands can encounter writeconflict errors when theyre run in certain concurrency suites the commands should use writeconflictretry loops to wrap their writes
0
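The row above asks for writeConflictRetry loops around the commands' writes. The server's helper is C++, but the retry pattern it names can be sketched generically in Python (the exception class and function names here are invented for illustration).

```python
class WriteConflict(Exception):
    """Stand-in for a storage-engine write conflict."""

def write_conflict_retry(op, max_attempts=100):
    """Re-run op until it commits: a conflict aborts the attempt and
    retries instead of surfacing an error to the caller."""
    for _ in range(max_attempts):
        try:
            return op()
        except WriteConflict:
            continue                # a backoff/yield could go here
    raise RuntimeError("too many write conflicts")

attempts = {"n": 0}
def flaky_write():
    attempts["n"] += 1
    if attempts["n"] < 3:           # first two tries conflict, third succeeds
        raise WriteConflict()
    return "committed"

print(write_conflict_retry(flaky_write))   # committed, after 3 attempts
```

Wrapping each command's write this way makes a transient conflict invisible to concurrency suites, which is the behavior the row requests.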
specifically in order to enable verbose logging in the wiredtigeropen call the wiredtiger build needs to be configured with –enableverbose see list of build options at
0
hello so after a bad servermongod shutdown i am unable to restart mongod im confident that my issue is that same as this one like the above thread ive uploaded my wiredtiger files in the hopes that someone can respond with fixed wiredtiger files i would be extremely gratefulthank you if anyone has any further questions feel free to ask thank you
0
after long queues of jobs do not seem to be a problem for amboy anymore we had previously attempted to put cedar test results in a queue which had caused amboy problems so we reverted however there is a risk that a sudden burst of test results will overwhelm whatever number of iops we have configured putting a queue in place would help protect the system an alternative would be to not use amboy at all since potentially amboys machinery is too complex jobs must be saved to queues and then gotten from them an alternative implementation would be to use a producerconsumer pattern in go with a fixed number of workers since we might want to change the number of workers without changing code we would also i think want to add a configuration option for number of workers per app server
0
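The row above proposes replacing the amboy queue with a plain producer/consumer pool whose worker count is configurable. A minimal Python sketch of that shape (the original proposal is Go; names here are illustrative):

```python
import queue, threading

def run_pool(items, handle, num_workers=4):
    """Producer/consumer with a fixed, configurable number of workers."""
    q = queue.Queue()
    done, lock = [], threading.Lock()

    def worker():
        while True:
            item = q.get()
            if item is None:            # sentinel: shut this worker down
                q.task_done()
                return
            with lock:
                done.append(handle(item))
            q.task_done()

    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()
    for item in items:                  # producer side
        q.put(item)
    for _ in workers:                   # one sentinel per worker
        q.put(None)
    for w in workers:
        w.join()
    return done

# num_workers would come from app-server configuration, per the row above
results = run_pool(range(10), lambda r: r * r, num_workers=3)
print(sorted(results))                  # [0, 1, 4, ..., 81]
```

Because the worker count is an argument rather than a constant, changing throughput is a configuration change, not a code change, matching the row's requirement.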
inserting a document with unique words can generate a group commit of trips assertion failure in alignedbuildergrowreallocate in utilalignedbuffercppnoformat verify a noformatthis leads to a server shutdown reproduce with after creating a content text indexcode import pymongo testdb pymongomongoclient numwordsindoc largedoc content joinmapstr numdocs for i in xrangenumdocs largedoc i testdbfooinsertlargedoc codein my environment above assertion trips on insert logfile attached
0
hello i encountered a strange behavior in mongo i am using mongoid in ruby talk to mongodb and mongoid allows having default sort command when i run the following command personfindsomeindex in mongoid it is translated into dbpersonsfinddeletedat null id objectidsomeindexsortcreatedat is not really smart by generating such query because we know we will get either or result so the sort operator does not make sense however ive found cases where mongodb is even less smart and caches the wrong index including the createdat field i would expect mongodb to automatically use the id index whenever it is present in the query no matter other fields that are present in the query in particular sort commands i already had a discussion about this on google group
1
mongobridge empty help sitereporter fh
0
the current compatibility matrix for drivers does not show java mongo driver we are trying to migrate it to this some of our apps run on java and db is mongodb
0
triggering the rlptokenlengthlimit error from leads to a leak during query execution server command valgrind showleakkindsdefinite leakcheckfull suppressionsetcvalgrindsuppressions mongod basistechrootdirectoryoptbasis leak report noformat thread direct indirect bytes in blocks are definitely lost in loss record of at operator newunsigned long in by mongoanalyzedataaccessmongocanonicalquery const mongoqueryplannerparams const mongoquerysolutionnode by mongoplanmongocanonicalquery const mongoqueryplannerparams const stdvector by mongoanonymous namespaceprepareexecutionmongooperationcontext mongocollection mongoworkingset mongocanonicalquery unsigned long mongoplanstage mongoquerysolution by mongogetexecutormongooperationcontext mongocollection mongocanonicalquery mongoyieldpolicy mongoplanexecutor unsigned long by mongogetexecutorfindmongooperationcontext mongocollection mongonamespacestring const mongocanonicalquery mongoyieldpolicy mongoplanexecutor by mongorunquerymongooperationcontext mongoquerymessage mongonamespacestring const mongocurop mongomessage by mongoreceivedquerymongooperationcontext mongonamespacestring const mongoclient mongodbresponse mongomessage by mongoassembleresponsemongooperationcontext mongomessage mongodbresponse mongohostandport const by mongoprocessmongomessage mongoabstractmessagingport by mongohandleincomingmsgvoid by startthread noformat
0
currently any exception in an assertsoon will cause a failure when we probably just want to retry like in replsettestjsawaitlastopcommitted
1
in vectorclockmongodtestprimary setup the clusterrole is set to shardserver and the replica set member state to primary such global state should be cleared in the teardown because subsequent tests can depend on it for instance the gossipout test is currently working by hazard because tests based on vectorclockmongodtest fixture are always executed before vectorclockmongodtestprimary ones changing such order eg by changing fixtures alphabetical order would cause the test to break because the result depends on clusterrole ps this ticket is probably a good reason to push for a randomization of unit tests order
0
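The row above is about a fixture mutating global state (ClusterRole, member state) without resetting it in teardown, so test outcomes depend on execution order. A Python sketch of the same hazard and the fix; the fixture names and the `cluster_role` global are invented stand-ins.

```python
import unittest

cluster_role = "none"          # module-level state, like the ClusterRole global

class PrimaryFixture(unittest.TestCase):
    def setUp(self):
        global cluster_role
        cluster_role = "shard_server"      # mutate global state for this test
    def tearDown(self):
        global cluster_role
        cluster_role = "none"              # reset so later tests can't observe it
    def test_primary_behaviour(self):
        self.assertEqual(cluster_role, "shard_server")

class PlainFixture(unittest.TestCase):
    def test_default_role(self):
        # passes regardless of execution order only because PrimaryFixture
        # resets the global in tearDown; without that reset, running this
        # test *after* PrimaryFixture would fail
        self.assertEqual(cluster_role, "none")

suite = unittest.TestSuite()
suite.addTest(PrimaryFixture("test_primary_behaviour"))  # mutating test first
suite.addTest(PlainFixture("test_default_role"))
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # True, thanks to the tearDown reset
```

Running the suites in a shuffled order, as the row suggests, is exactly what flushes out fixtures that skip the teardown step.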
this issue was introduced by the changes from it appears that indexcatalogentryimplismultikey isnt getting set to true after the set of multikey paths are updated however i havent been able to determine how that is connected to an earlier failed operation codecpp opctxrecoveryunitoncommit ismultikeystoretrue code
1
it would be very nice if http console would have relative urls to inner sections i have an http proxy to rewrite into so the evident problem is that i cannot follow the links
0
the changestream iterators rewind method does not appear to resume if it encounters a resumable error it likely needs to adopt the logic found in next additionally rewind should capture the resume token by calling extractresumetoken this ensures we can resume from the initial change document provided we rewound if there is an error on the very first call to next
0
add logic to assertsoon to automatically call hanganalysis prior to throwing add an additional optional parameter to assertsoon which is additional params to pass to hanganalysis js function for now do this in addition to throwing such that hanganalysis is just a fancy new bonusfeature need to do this while some users of assertsoon are using assertsoon as a retry mechanism a separate ticket will fix all callers of assertsoon
0
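The row above asks that assert.soon run hang analysis before throwing, with extra parameters forwarded to the analysis call, while still throwing afterwards. A Python sketch of that contract (assert.soon itself is a JS test helper; the signature here is an illustrative analogue).

```python
import time

def assert_soon(pred, timeout_s=1.0, interval_s=0.05,
                on_timeout=None, hang_analysis_args=None):
    """Retry pred until it is true; on timeout run the diagnostics hook
    with the forwarded args, then still raise."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if pred():
            return
        time.sleep(interval_s)
    if on_timeout is not None:                  # e.g. hang analysis
        on_timeout(*(hang_analysis_args or ()))
    raise AssertionError("assert_soon timed out")

calls = []
try:
    assert_soon(lambda: False, timeout_s=0.1,
                on_timeout=lambda tag: calls.append(tag),
                hang_analysis_args=("dump-threads",))
except AssertionError:
    pass
print(calls)    # ['dump-threads']: diagnostics ran before the throw
```

Because the hook runs in addition to the throw, callers that use the helper as a retry mechanism are unaffected, matching the "fancy new bonus feature" framing in the row.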
visual c compiler team is improving conformance of conditional operator for the upcoming update release the improvements will be available under a switch but they will also be implied by the permissive switch our qa team currently builds regularly mongodb as a part of rwc suite and so far mongodb been clean under permissive there is one place right now that will fail to compile under upcoming zc constiterator cfindconst k key const auto it thismapfindkey return it thismapend thisend itsecond line the problem is not so much in the code itself as other compilers are able to compile it without problems but in our stl implementation that currently uses inheritance between mutable and const iterators stephan acknowledged the library team is planning to get rid of this trick and replace it with conformant implementation in the next major breaking release but its not going to happen anytime soon while as is the above statement is ambiguous according to the standard note other compilers would have complained as well should theyve been using our stl implementation we would like to ask you guys to patch the above code with an explicit cast to resolve the ambiguity caused by our current iterator implementation as following pull request is following constiterator cfindconst k key const auto it thismapfindkey return it thismapend thisend constiteratoritsecond line this would allow us to keep building mongodb clean under permissive for validating compiler changes thank you yuriy
0
get mongodbdriverlegacy building and all tests passing against net core
0
this commanddbcollectionupdate arrayfield value use arrayfield as basicdbobject key the exception sayexception in thread main javalangillegalargumentexception document field names cant have a in them bad key dimensionarrayv at at at at at at
0
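The exception in the row above comes from using a dotted name as a document field key, which drivers reject; dotted paths are only valid as arguments to update operators. A Python sketch of the rule (the validator function is invented for illustration; it mirrors the driver's check, not its code).

```python
def check_field_name(name):
    """Mimic the driver rule: document field names may not contain '.'."""
    if "." in name:
        raise ValueError(
            "Document field names can't have a . in them (bad key: %r); "
            "put the dotted path inside an update operator instead" % name)
    return name

# Using the dotted path as a top-level key is what triggers the exception:
bad_update = {"arrayfield.0.value": 1}            # rejected by the driver
# The dotted path belongs under an operator such as $set:
good_update = {"$set": {"arrayfield.0.value": 1}}

try:
    for k in bad_update:
        check_field_name(k)
    rejected = False
except ValueError:
    rejected = True
print(rejected)   # True: the bare dotted key is refused
```

The `good_update` form is accepted because the dotted string is an operator argument (an update path), not a stored field name.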
i have just tried to run through the instructions for restoring a replica set from a downloaded mms backup i am running mongodb on osx have started a restore job on my mms group and then downloaded the targz file via httpsi have created a new empty replicaset then i follow step shut down the entire replica seti then follow steps through to however when i attempt to run the seedsecondarysh script in step this fails withcodeseedsecondarysh shell version to collection already existscodethe script appears to be attempting to create the oplogrs collection which i believe already exists in my replicaseti increased my logging to and then ran that command again to confirm connection accepted from connections now run command admincmd whatsmyuri command admincmd command whatsmyuri whatsmyuri run command localcmd create oplogrs capped true size create collection localoplogrs capped true size command localcmd command create create oplogrs capped true size locksmicros socketexception remote error socket exception server end connection connection now opencodei then delete all of my local database files and localns and retry step shell version to ninserted codestep now appears to be successful although im curious if theres a better way of working around this i i continue with step and step however i seem to hit an error at step when it asks me to run rsinitiatecodemongomongodb shell version to testserver has startup warning soft rlimits too low number of files is should be at least rsinitiate ok errmsg localoplogrs is not empty on the initiating member cannot initiatecodewould it be possible to fix the instructions in step as well as step please
1
if a shards primary steps down but the mongos cant connect to the new primary the mongos continues thinking that the old primary is still the master this will cause it to send setshardversion commands to the former primary which will fail and be retried many times
0
current nightly build redefines so you dont have to define this yourself as the code seems to use itundef is a reserved symbol and should never be changes this breaks eg compilation of intels threading building blocks because they expect it to be defined as a numbersee
1
not necessarily in the ui maybe by email goal is to give the server team a punchlist of the flaky tests they need to fix up in priority order
0
new false i ran into this issue when creating a where function that compared numbers
0
reports i can see that now every i time i call wtsessioncreateindexmyindex for fullscan of the main table is performed to fill index table we used to call wtsessioncreate for all our tables and indices on each start and now its not free anymore
0
i am trying new feature on mongodriver for function here is my syntax codejava function fmtsprintffunctionp return p addfunc bsond addfields bsond addfunc bsond function bsond body function args bsonayes lang js code i use that addfunc init my aggregate pipeline and it took long to process if add that addfunc into my pipeline aggregate and at the end i got error must not be null i dont know whats wrong with it is that any idea for my case or mongodb go not support function yet i am using latest mongo atlas
0
paneltitleissue status as of feb issue description and impact in mongodb wiredtiger fails to parse the desupported huffmankey option during table creation this prevents initial syncs and mongorestores of collections created prior to mongodb if any collections were created with a usersupplied wiredtiger configuration string in support for huffman encoding of keys was removed but collections created prior to mongodb still contain the huffmankey option that was provided at collection creation time attempts to reuse this option via initial sync and mongorestore with options trigger the bug diagnosis and affected versions this bug exists in mongodb and affects collections created in any earlier version methods for doing this include explicit createcollection commands that specify a storageenginewiredtigerconfigstring value that includes huffmankey any collection creation performed while mongod was running with the wiredtigercollectionconfigstring parameter enabled for these collections on initial sync fails and logs a collection clone failed message with an unknown configuration key huffmankey invalid argument error mongorestore also fails with codeerror running create command badvalue invalid argument wiredtigerconfigvalidate configchecksearch unknown configuration key huffmankey invalid argument code remediation and workarounds upgrade to to resync or mongorestore it is not currently possible to change a usersupplied wiredtiger collection configuration string inplace if necessary to sync a node in use the sync by copying data files method or start the new node using version and upgrade it when sync is complete mongorestore can be used in with the nooptionsrestore flag panel original description as part of the huffmankey encoding support is removed along with the configuration options to control it the older versions of mongodb where the option of huffmankey is configured to all the tables that are created in wiredtiger metadata whenever these databases are upgraded to the newer versions like and above leads to a problem in parsing the old configuration option that is not known
1
per if i start the node as replica set i cannot add user wo doing an rsinitiate which in these instructions is step if i start the node as standalone then the user i created in step will not be replicated to other nodes what i ended up doing was do rsinitiate and then create user
0
noformat shaddtagrangetesttest mon sep uncaught exception cant have in field names noformat
1
description feedback comment can you plz put actuall examples instead of i cant connect o monogs because sytnax is wortng scope neither the tutorial nor the linked bindip docs make it super clear exactly how we expect a hostname or ip to be listed theres no harm in adding an example using examplenet or the test ip ranges and its a simple enough fix to make scope of changes impact to other docs mvp work and date resources scope or design docs invision etc
0
right now theres no way to access the operationsbatches after a bulk op is executed see right now were looking directly in bulkopsbatchescurrentbatchcurrentupdatebatchcurrentinsertbatchcurrentremovebatch to find the list of operationsbatches is there a better way
1
the following took place on a shard of a node cluster config shard router the error took place while using mongoimport i noticed other bugs reported for but im using the other bug reports recommended upgrading im running xenial on distributor id ubuntu description ubuntu lts release codename xenial noformat f invalid access at address f got signal segmentation fault begin backtrace backtraceprocessinfo mongodbversion gitversion compiledmodules uname sysname linux release version smp thu jan utc machine somap mongod mongod kernelrtsigreturn end backtrace noformat
1
relies on permissions for systemindexes and systemnamespaces for listcollections and listindexes actions among others this should be rewritten to work on both and wt
1
states quote alternatively you can shut down a secondary and use mongodump with the data files directly if you shut down a secondary to capture data with mongodump ensure that the operation can complete before its oplog becomes too stale to continue replicating quote this is no longer true with version which has dropped support for the dbpath option on mongodump however stopping a secondary is a useful approach when backing up by copying the files therefore the above quote should be removed from the backup with mongodump section and added with necessary tweaks to the backup by copying underlying data files section probably in or after the paragraph starting if your storage system does not support snapshots
0
there is a todo in the codebase referencing a resolved ticket which is assigned to youplease follow this link to see the lines of code referencing this resolved ticketthe next steps for this ticket are to either remove the outdated todo or follow the steps in the todo if it is correct if the latter please update the summary and description of this ticket to represent the work youre actually doing
0
running an explain using the explain query option returns invalid parameter expected an object when run against a sharded collection with a shard and mongoscoderassirassiworkmongo fork logpath devnullabout to fork child process waiting until server is ready for connectionsforked process process started successfully parent exitingrassirassiworkmongo port fork logpath w sharding running with config server should be done only for testing purposes and is not recommended for productionabout to fork child process waiting until server is ready for connectionsforked process process started successfully parent exitingrassirassiworkmongo port shell version to has startup i i note this is a development version of i not recommended for i mongos shardadded ok mongos shenableshardingtest ok mongos shshardcollectiontestfoo id collectionsharded testfoo ok mongos error err invalid parameter expected an object code at
0
code dbsystemjssaveidr valuefunction return dbevalreturn r dbevalreturn r code noformat illegal instruction noformat
1
as a dag engineer id like to add a triage section to change points being detected in cedar such that this triage section can be integrated with further triaging functionality ac triage section added and persisted in default untriaged status for all change points being created add default to all old change points see technical design of associated epic for format
0
btree key too large to index failing back in we at ssssfelt compelled to provide the hundreds of millions of underbanked ssss who do not have the ability to make payments onl kindly help how to resolve this issue
0
otherwise we could panic
1
this process should create task queue items from the tasks that we get out of the task finder this is also probably the right place to assign point values and do any additional look ups or preparation this includes building the groups
0
we need to make the page much more simple and focused on concrete aspects of backup solutions rather than abstract concepts relevant to backups while the current conceptual information is largely correct its difficult for readers to attach that information to their applications and deployments and a revision of this document should make this more clearthe structure of the page should be backup methods mms backup file system snapshots mongodump oplogmongorestore oplogreplay mongodumpmongorestore each section should discuss requirements usecases considerations benefits restrictions when used with standalonesreplica setssharded clusters as appropriate and link to the appropriate tutorial or tutorials for the optional additional information as needed its possible that we wont need this section but it might be required to cover some of the general details about backing up a sharded cluster
1
the socketexception seen below is indicating that mongod is no longer running ie the process has exited i have to go back and restart it after manually deleting the lock filec mongo authenticationmechanismgssapi authenticationdatabaseexternal username password mongodb shell version to socket recv an existing connection was forcibly closed by the remote host socketexception remote socket exception server dbclientcursorinit call error dbclientbase transport ns externalcmd query saslcontinue bindata conversationid at login failed
1
sporadic performance issue affecting a simple querycode query atoken orderby id index misses are being reported most of the time the query returns in under on occasion we run up to running the query manually during these times shows the same if i remove the orderby the query returns as expected seems to be running out of order
1
when using a connection string uri with the compressors uri option the mongo shell does not connect to the server with compressor the mongo shell will connect with compression when the networkmessagecompressors flag is used
0
paneltitleissue status as of apr issue description this issue only affects deployments that use the wiredtiger storage engine and remove documents from collections that have at least one index with a unique true specification and those indexes use a partialfilterexpression deployments that do not meet all three conditions are not affected issue impact in affected deployments when a document that does not pass the partial index filter is removed from a collection other documents that contain matching keys to the removed document and do pass the partial index filter are erroneously unindexed from the partial index consequently queries that utilize this index may not return all stored results this also has the effect of being able to successfully insert documents that violate the unique key constraint of the index and also pass the indexs filter please see the reproduction steps above for an illustrative example diagnosis and affected versions indexes created on mongodb to or to with the wiredtiger storage engine may be affected additionally the index must match both of the following criteria is a unique index eg created with unique true is a partial index eg created with partialfilterexpression foo bar to determine if an index matching the criteria above has been affected by this bug execute dbfoovalidatetrue the validation will fail for affected indexes remediation and workarounds the fix is included in the and production release to resolve this issue affected indexes should be rebuilt after upgrading to a version of mongodb containing the fix panel original description this issue affects wiredtiger only it only affects indexes with unique true and it only affects indexes using a partialfilterexpression removing a document that does not pass the filter and thus is not indexed will erroneously unindex documents that do pass the filter and contain matching keys to the removed document see the reproduction steps for an illustrative example this has the effect of being 
able to successfully insert documents that violate the unique key constraint of the index and also pass the indexs filter
1
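The partial-unique-index report above describes a delete path that unindexes documents without consulting the partial filter. As a hedged illustration only (plain Python, not server code; all class and field names here are invented), this sketch models why removing a non-matching document can erroneously unindex a matching one that shares the same key, and why a duplicate insert then succeeds:

```python
# Minimal model of the bug described above: a partial index stores entries
# only for documents passing the filter, but the buggy delete path removes
# the key without checking the filter. Names are illustrative assumptions.

class PartialUniqueIndex:
    def __init__(self, key_field, filter_fn):
        self.key_field = key_field
        self.filter_fn = filter_fn
        self.entries = {}  # indexed key -> document id

    def insert(self, doc_id, doc):
        if not self.filter_fn(doc):
            return  # fails the partial filter: not indexed at all
        key = doc[self.key_field]
        if key in self.entries:
            raise ValueError("duplicate key")  # unique constraint
        self.entries[key] = doc_id

    def remove_buggy(self, doc_id, doc):
        # BUG (as modeled): the filter is not consulted on delete, so a
        # never-indexed document's key still gets popped from the index.
        self.entries.pop(doc[self.key_field], None)

    def remove_fixed(self, doc_id, doc):
        if not self.filter_fn(doc):
            return  # never indexed, nothing to unindex
        if self.entries.get(doc[self.key_field]) == doc_id:
            del self.entries[doc[self.key_field]]

idx = PartialUniqueIndex("a", lambda d: d.get("b") == 1)
idx.insert(1, {"a": 7, "b": 1})   # passes filter: indexed
idx.insert(2, {"a": 7, "b": 0})   # fails filter: not indexed, no dup error
idx.remove_buggy(2, {"a": 7, "b": 0})
# document 1 is now erroneously unindexed, so this duplicate succeeds:
idx.insert(3, {"a": 7, "b": 1})
```

With `remove_fixed` in place of `remove_buggy`, the final insert would raise `ValueError` because document 1 stays indexed.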
the gssapi auth mechanism does not seem to be working in the intended manner looks like the code at seem to be breaking for the step this condition seems to be valid only for sasl step onwards when the inbuflen should not be code if saslstep outbuflen bsonseterror error mongocerrorsasl mongocerrorclientauthenticate sasl failure no data received from sasl request does server have sasl support enabled return false code should rather be code if saslstep inbuflen bsonseterror error mongocerrorsasl mongocerrorclientauthenticate sasl failure no data received from sasl request does server have sasl support enabled return false code
1
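The GSSAPI report above argues the "no data received" error should test the input buffer, not the output buffer, and only from the second SASL step onward. A small hedged model of the corrected condition (plain Python, not the C driver's actual code):

```python
# Model of the corrected check from the report: treat "no data from the
# server" as an error only when we are past the first SASL step (step > 0)
# and the INPUT buffer is empty. The buggy version tested the output buffer.

def sasl_step_ok(step, in_buf_len):
    # True means "continue the SASL conversation", False means raise the
    # "no data received from sasl request" style error.
    return not (step > 0 and in_buf_len == 0)

assert sasl_step_ok(0, 0)      # first step may legitimately carry no input
assert not sasl_step_ok(1, 0)  # a later step with no server data is an error
assert sasl_step_ok(1, 42)     # later step with data proceeds normally
```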
add the ability to pass a particular set of components to waitforinmemoryvectorclocktobepersisted that are expected to be persisted up to a certain value this will avoid to schedule unnecessary tasks on the executor
0
my pr tester for my wtperf config changes resulted in a reconciliation assertion failure running testfops noformat t filewiredtigerwt wtcursorsearch srcreconcilerecwritec updp null txnid updptxnid wttxnnone txnid wtsessionischeckpointsession t filewiredtigerwt wtcursorsearch aborting wiredtiger library noformat from this might be related to
0
the power cycle testing performs random unclean crashes of the server after the server reboots mongod is run with repairin this particular instance the repair failed due to filewiredtigerwt connection read checksum errornoformat mongod storageengine wiredtiger i control mongodb starting dbpathdatadb i control i control warning syskernelmmtransparenthugepageenabled is i control we suggest setting it to i control i control warning syskernelmmtransparenthugepagedefrag is i control we suggest setting it to i control i control db version i control git version i control build info linux smp fri jan utc i control allocator i control options repair true storage engine wiredtiger w detected unclean shutdown datadbmongodlock is not w storage recovering data from the last clean i storage wiredtigeropen config e storage wiredtiger filewiredtigerwt connection read checksum error e storage wiredtiger filewiredtigerwt connection wiredtigerwt encountered an illegal file format or internal e storage wiredtiger filewiredtigerwt connection the process must exit and restart wtpanic wiredtiger library i fatal assertion i control begin backtrace backtraceprocessinfo mongodbversion gitversion uname sysname linux release version smp tue jul utc machine somap mongod mongod mongodwterr mongodwtpanic mongodwtblockextlistread mongodwtblockextlistreadavail mongodwtblockcheckpointload mongod mongodwtbtreeopen mongodwtconnbtreeget mongodwtsessiongetbtree mongodwtmetadataopen mongodwiredtigeropen mongod mongodmain mongod end backtrace i aborting after fassert failurenoformat
1
fri sep query failed to exception this message should actually print out the exception
0
according to the release notes pymongo is required for mongodb support this is incorrect pymongo which was released a month or so ago fully supports mongodb please fix the documentation
1
looks like a function declared with the intention of defining it and using it in a test
0
the ismaster command result contains two fields related to size maxbsonobjectsize the maximum size of a document that will be stored in a mongo collection maxmessagesizebytes the maximum size of a wire protocol messagethis is a request for a third maxbsonwireobjectsize the maximum size of a document not intended for storage that is included in a wire protocol messagethe reason this is necessary is that current drivers will not correctly send a command that includes a document that is close to the limit because drivers apply maxbsonobjectsize to all documents including command documents ao for example most drivers will reject a findandmodify that contains a replacement document with the new write commands they will reject an insert of a documentlooking at the server code the current value of maxbsonwireobjectsize is noformatmongobsonutilbuilderh const int bsonobjmaxinternalsize bsonobjmaxusersize noformatdrivers can use this new field to impose a different limit on command documents than they impose on documents intended for storage without they will have to hardcode the value in a constant which is less than ideal as it may change in the future
0
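The request above is for drivers to apply a larger limit to command documents than to documents destined for storage. A hedged sketch of such a driver-side check: the 16 MB figure is MongoDB's documented maxBsonObjectSize, while the 16 KB headroom constant is an assumption for illustration only (the report notes the real internal value lives in the server source and may change):

```python
# Illustrative driver-side size check: command documents get wire-level
# headroom beyond the user document limit. WIRE_HEADROOM is an assumed
# value, not an authoritative server constant.

MAX_BSON_USER_SIZE = 16 * 1024 * 1024        # documented maxBsonObjectSize
WIRE_HEADROOM = 16 * 1024                    # assumption for illustration
MAX_BSON_WIRE_SIZE = MAX_BSON_USER_SIZE + WIRE_HEADROOM

def check_size(encoded_len, is_command):
    """Return True if a BSON payload of encoded_len bytes may be sent."""
    limit = MAX_BSON_WIRE_SIZE if is_command else MAX_BSON_USER_SIZE
    return encoded_len <= limit

# A command document slightly over the user limit is still sendable,
# while the same payload intended for storage is rejected:
assert check_size(MAX_BSON_USER_SIZE + 100, is_command=True)
assert not check_size(MAX_BSON_USER_SIZE + 100, is_command=False)
```

This captures the report's point: without a server-advertised maxBsonWireObjectSize, drivers must hardcode the headroom, which is exactly what the ticket asks to avoid.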
when temporarykvrecord fails to remove the temporary ident in its destructor it aborts the program with an invariant this is inconsistent with the treatment in kvdroppendingidentreaper which fails with an fassert
0
currently the various views dashboard patch etc that show performance against a baseline use green to show better or the same amber to show slightly worse and red to indicate badly worse what you cant see easily is when things have gotten much better could we just use a nice darker green for significant improvement in all the various views that use color to indicate relative performance
0
lint failed on racedetectorhost mcibf ticket generated by
0
in swiftbson extended json aka legacy extended json parsing support was accidentally omitted which could break user applications we should reintroduce support for parsing this format of extended json to match the expected behavior from the libbsonbased library see here for repro
1
i dont know how to reproduce this issue but here is the log code i write insert ksrealtimedbequipmentrtdata query id isload isrun power src timestamp key locks global acquirecount r w database acquirecount w collection acquirecount w oplog acquirecount w i command command ksrealtimedbcmd command insert insert equipmentrtdata ordered true documents locks global acquirecount r w database acquirecount w collection acquirecount w oplog acquirecount w f invalid access at address f got signal segmentation fault begin backtrace mongodbversion gitversion uname sysname linux release version smp fri jun utc machine somap elftype b buildid b elftype buildid b path elftype buildid b path elftype buildid b path elftype buildid b path elftype buildid b path elftype buildid b path elftype buildid b path elftype buildid b path elftype buildid b path elftype buildid mongod mongod mongod mongodwtreconcile mongodwtcacheop mongodwtcheckpoint mongod mongodwttxncheckpoint mongod mongod end backtrace code my os is ubunut server i suggest add respawn to etcinitmongodconf to let mongodb auto restart if crashed
1
code buildvariant does not have a display name buildvariant does not have a display name buildvariant does not have a display name buildvariant does not have a display name mongoid buildvariant does not have a display name buildvariant does not have a display name code
0
hello we have problem with running one of shards in sharding cluster after unclear shutdown we tried to repair it with repair but log is the same as for service mongod restart noformat sudo u mongod mongod repair f etcmongodconfrepair i control mongodb starting dbpathvarlibmongo i control db version i control git version i control openssl version openssl feb i control allocator tcmalloc i control modules none i control build environment i control distmod i control distarch i control targetarch i control options config etcmongodconfrepair repair true storage dbpath varlibmongo engine wiredtiger wiredtiger collectionconfig blockcompressor none engineconfig directoryforindexes true journalcompressor snappy statisticslogdelaysecs indexconfig prefixcompression true i storage detected wt journal files running recovery from last checkpoint i storage journal to nojournal transition config e storage wiredtiger filewiredtigerwt connection wiredtigerwt read error failed to read bytes at offset wterror nonspecific wiredtiger error i assertion wterror nonspecific wiredtiger error i storage exception in initandlisten wterror nonspecific wiredtiger error terminating i control dbexit rc noformat ive added wiredtigerturtle and wiredtigerwt it attr
0
mongodb failed to build due to the command line is too long mongodb can be built successfully with master branch revision but it failed to build with master branch latest revision could you please help to take a look at this thanks in advance
1
consider having a replica set of machines with assigned priorities of and correspondingly is primary other two are secondaries now restart and the primary will shift according to priorities to which is correct now restart it will also lose its primary state which is also correct but now if writes were coming at a steady pace the oplog of would be several operations ahead of and this leads to replica set not getting a primary since while is freshest and should become primary the is up and has higher priority i understand that its a hard choice either ignore priorities in favor of freshness or ignore freshness and possibly cause rollbacks leading to a likely data loss and favor priorities i still think both of these solutions are better than leaving a replica set in the infinite no primary state by the way temporarily shutting down the higherpriority server helps the freshest server becomes primary and the restarted higherpriority server just catches up and becomes primary again after a new electionps weve seen this with also moved to but it appears to still ocur
0
i noticed this in our system our code was using a lot of cpu even when not much was going on after some investigation i nailed it down to the short code snippets below im using mongodb c driver both collections used have around documents in em i see around of cpu consumption when i use the code below codejavaimongodatabase mytenantmongodatabase mongodbmongodatabasegetconnectionstring database findoptions myfindoptions new findoptions myfindoptionsnocursortimeout true this can be a long running operation so we dont want any timeouts await mytenantmongodatabasegetcollectioncollection findnew bsondocument myfindoptions foreachasyncasync document string myleftvalue documentgetvaluefieldx stringemptyasstring string myrightvalue documentgetvaluefieldy stringemptyasstring filterdefinition myleftobjectfilter myleftobjectfilter buildersfilterand buildersfiltereqisdeleted false buildersfiltereqtitle myleftvalue filterdefinition myrightobjectfilter myleftobjectfilter buildersfilterand buildersfiltereqisdeleted false buildersfiltereqtitle myrightvalue filterdefinition mycountfilter buildersfilterormyleftobjectfilter myrightobjectfilter myobjectneededforrelationcount await mytenantmongodatabasegetcollectionothercollection countdocumentsasyncmycountfilter code when i change it to the code snippet below i see around of cpu consumption codejava imongodatabase mytenantmongodatabase mongodbmongodatabasegetconnectionstring database findoptions myfindoptions new findoptions myfindoptionsnocursortimeout true this can be a long running operation so we dont want any timeouts await mytenantmongodatabasegetcollectioncollection findnew bsondocument myfindoptions foreachasyncdocument string myleftvalue documentgetvaluefieldx stringemptyasstring string myrightvalue documentgetvaluefieldy stringemptyasstring filterdefinition myleftobjectfilter myleftobjectfilter buildersfilterand buildersfiltereqisdeleted false buildersfiltereqtitle myleftvalue filterdefinition myrightobjectfilter 
myleftobjectfilter buildersfilterand buildersfiltereqisdeleted false buildersfiltereqtitle myrightvalue filterdefinition mycountfilter buildersfilterormyleftobjectfilter myrightobjectfilter myobjectneededforrelationcount mytenantmongodatabasegetcollectionothercollection countdocumentsmycountfilter code so the change is to not use the async await on count in the foreachasync this is a simplified version in my other code i see a drop from if its not clear please let me know
1
currently the iterator interface methods eg current rewind do nothing beyond zpp argument validation they should invoke the function handlers defined in phpphongoc in case a user actually gets this class from queryresultgetiterator and tries to walk the results manually
0
i was doing some performance tests of my app in golang using mongodriver ive seen my app spent lots of time in decodecommandopmsg function after further investigation ive realized that decodecommandopmsg does decode full bson payload all the way down just to merge wiremessage sections toghether and return back as binary data maindocunmarshalbson and then maindocmarshalbson back again my theory is that it doesnt need to bother about internal structure of a document whatsoever just merge toplevel fields ive written simple benchmark and it seems it can be speeded up by the order of magnitude you can see benchmarks code here and a proof of concept of the improvement here results without a change go test benchmem benchbenchmarkreadopmsg benchmark goos linux goarch pkg gomongodborgmongodriverbenchmark nsop bop allocsop nsop bop allocsop pass ok gomongodborgmongodriverbenchmark code results without my change go test benchmem benchbenchmarkreadopmsg benchmark goos linux goarch pkg gomongodborgmongodriverbenchmark nsop bop allocsop nsop bop allocsop pass ok gomongodborgmongodriverbenchmark code fun fact is that without my change in most cases documents returned from mongodb would be unmarshaled twice once in decodecommandopmsg and time when drivers user wants to read his data eg by cursorunmarshal any thoughts
0
from the docs curl o tar zxvf r n mongodbetc this needs to be fixed asap since im assuming we dont want our enterprise customers running a beta in production nicholas
1
we recently upgraded from to and several of our replica sets have had or more nodes crash an example is in a replica set the primary has crashed twice today bottom of mongodlog output belowthu sep replset member is upthu sep replset member is now in state secondarythu sep replset warning caught unexpected exception in electselfthu sep invalid access at address from threadthu sep invalid access at address from threadthu sep got signal segmentation faultthu sep got signal segmentation faultthu sep backtrace
0
after performing the microbenchmarks we need to generate the composite scores from their results
0
turned on initialsyncstatus in replsetgetstatus by default and that causes it to be included in ftdc however this means that for a user with a very large number of collections there will be a very large number of keys and a lot of expensive schema changes in ftdc using a very large amount of space and becoming unworkably large this causes problems for consuming tools and means that ftdc can roll over before initial sync completes we should exclude the initialsyncstatus section from in replsetgetstatus ftdc
0
probably due to changes in server since mongooplog hasnt changed in a long while
0
setup replicaset with two nodes use the javadriver with the constructor below via springdatas mongofactorybeancode new public mongo list seeds mongooptions options codeif the master goes down all further writeattempts failnoformataug am commongodbconnectionstatusupdatablenode updatewarning server seen down javaioioexception message connection refused connectnoformati analyzed the driver class replicasetstatus the old master is as expected no longer in acceptablemembers the old secondary is now marked as master but the old master is also marked as master the method to find the actual master uses variable all instead of acceptablemembers and returns the old dead master fixed it locally used acceptablemembers instead of all in findmaster and it worked fine for the first failoverbut if comes up again and goes down i get another errormessage and then again the same error message as abovenoformataug pm commongodbdbportpool goterrorwarning emptying dbportpool to bc of errorjavaioeofexception at at at at at at at at at at at at at at at at at at at at at at pm commongodbconnectionstatusupdatablenode updatewarning server seen down javaioioexception message connection refused connectnoformat
1