text_clean: string (lengths 8 to 6.57k)
label: int64 (values 0 or 1)
should include force:true in stepDown. stepDown will not succeed if there isn't a secondary within seconds of the primary's optime; to minimize time-till-election, force:true skips this requirement and forces the stepdown
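a minimal shell sketch of the two behaviours (the timeout value is illustrative, not from the ticket):

code
// a plain stepDown errors out when no secondary is caught up within the
// catch-up period; force: true skips that check
db.adminCommand({replSetStepDown: 60});              // may fail: no caught-up secondary
db.adminCommand({replSetStepDown: 60, force: true}); // steps down regardless of secondary lag
code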
0
as part of parsing a collection validator with action warn or level moderate will fail if there are any encryptionrelated keywords in the expression when such a validator is replicated the secondary will fail to apply the oplog entry as a stop gap the multiversion test in the title should instead run against a standalone mongod until the secondary issue is addressed
0
hi there we have got snapshot mongodb directory from broken server hardware when we are trying to start mongo noformat i control mongodb starting dbpathvarlibmongodb hostlang i control db version i control git version i control openssl version openssl nov i control allocator tcmalloc i control modules none i control build environment i control distarch i control targetarch i control options config etcmongodbconf storage dbpath varlibmongodb systemlog destination file logappend true path varlogmongodbmongodlog quiet true w detected unclean shutdown varlibmongodbmongodlock is not empty i detected data files in varlibmongodb created by the wiredtiger storage engine so setting the active storage engine to wiredtiger w storage recovering data from the last clean checkpoint i storage wiredtigeropen config i storage wiredtiger message txnrecover main recovery loop starting at i storage wiredtiger message txnrecover recovering log through i storage wiredtiger message txnrecover recovering log through e storage wiredtiger error filemdbcatalogwt wtsessionopencursor read checksum error for block at offset block header checksum of doesnt match expected checksum of e storage wiredtiger error filemdbcatalogwt wtsessionopencursor mdbcatalogwt encountered an illegal file format or internal value wtblockreadoff e storage wiredtiger error filemdbcatalogwt wtsessionopencursor the process must exit and restart wtpanic wiredtiger library panic f fatal assertion at srcmongodbstoragewiredtigerwiredtigerutilcpp f aborting after fassert failure f got signal aborted begin backtrace backtraceprocessinfo mongodbversion gitversion compiledmodules uname sysname linux release version smp preempt wed jan utc machine somap mongod mongod mongod mongod mongod mongodmain mongodstart end backtrace noformat then repair attempt deleting all our data noformat mongod repair dbpath varlibmongodb i control mongodb starting dbpathvarlibmongodb hostlang i control db version i control git version i control openssl version openssl nov i control allocator tcmalloc i control modules none i control build environment i control distarch i control targetarch i control options repair true storage dbpath varlibmongodb w detected unclean shutdown varlibmongodbmongodlock is not empty i detected data files in varlibmongodb created by the wiredtiger storage engine so setting the active storage engine to wiredtiger w storage recovering data from the last clean checkpoint i storage detected wt journal files running recovery from last checkpoint i storage journal to nojournal transition config i storage wiredtiger message txnrecover main recovery loop starting at i storage wiredtiger message txnrecover recovering log through i storage wiredtiger message txnrecover recovering log through i storage wiredtigeropen config i storage repairing size cache i storage verify succeeded on uri tablesizestorer not salvaging i storage repairing catalog metadata e storage wiredtiger error filemdbcatalogwt wtsessionverify read checksum error for block at offset block header checksum of doesnt match expected checksum of i storage verify failed on uri tablemdbcatalog running a salvage operation i control i control warning access control is not enabled for the database i control read and write access to data and configuration is unrestricted i control warning you are running this process as the root user which is not recommended i control i control warning this server is bound to localhost i control remote systems will be unable to connect to this server i control 
start the server with bindip to specify which ip i control addresses it should serve responses from or with bindipall to i control bind to all interfaces if this behavior is desired start the i control server with bindip to disable this warning i control i control i control warning syskernelmmtransparenthugepageenabled is always i control we suggest setting it to never i control i storage createcollection adminsystemversion with provided uuid i command setting featurecompatibilityversion to i storage repairdatabase admin i storage repairing collection adminsystemversion i storage verify succeeded on uri not salvaging i index build index on adminsystemversion properties v key id name id ns adminsystemversion i index building index using bulk method build may temporarily use up to megabytes of ram i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage dropping unknown ident i storage finished checking dbs i network shutdown going to close listening sockets i network removing socket file i storage wiredtigerkvengine shutting down i storage shutdown removing fs lock i control now exiting i control shutting down with noformat there is no replica server or working backup unfortunately im sharing files bellow any help would be really appreciated regards
1
a bulk operation with w and wtimeout sent using legacy opcodes calls getLastError and may receive a wtimeout error, but the error is ignored and is not reported in the reply's writeConcernError array as spec'd. see _mongoc_write_result_merge_legacy vs _mongoc_write_result_merge
0
we started batching applyops oplog entries with crud operations in and unpacking the embedded crud operations within the applyops command for the writer threads to apply during the oplog application process however the unpacking is done only on storage engines that support document level concurrency this penalizes the performance of the crud operations in the same batch when we run the applyops command logic under the global write lock we should either unpack the applyops crud operations under all storage engines or batch applyops with other crud ops when the storage engine supports document level concurrency
0
at the moment we let update chains grow arbitrarily long limiting their length only by considering the total page memory footprint having long update chains can lead to performance issues as traversing them is slow and happens while serializing access to a page we could consider queuing any page with an excessively long update chain for forced eviction similarly to how we force evict pages with lots of deleted refs
0
problem description: mongod running with a pss architecture on ubuntu; suddenly a replica set node was down. log files are attached. to reproduce: expected results: actual results: additional notes:
0
when using rangebased partitioning the exchange class is responsible for extracting the value of the key on which we are partitioning in practice this will be the shard key this key extraction is not implemented properly when the key pattern contains a dotted field this can cause an invariant to be tripped when attempting to assign an input document to a particular exchange partition see the repro steps for a detailed example
1
as of now, creating a new user requires a cleartext password without hashing. i would appreciate having the option to use a hashed password, for example db.createUser({user: "usr", pwd: <hash>, hashed: true}). thank you, guy
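a sketch of the requested call shape; the hashed flag is this request's proposal, not an existing option:

code
// hypothetical API from this feature request: pwd carries a precomputed
// credential hash instead of a cleartext password
db.createUser({
    user: "usr",
    pwd: "<precomputed credential hash>",
    hashed: true,   // proposed flag, does not exist yet
    roles: ["readWrite"]
});
code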
0
mongodump failed with error: failed: error reading collection: invalid cursor. i see how to extend the cursor timeout as per the article, but the commands are for mongo version and above. please provide the command or steps to extend the cursor timeout limit for version. thanks
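for reference, on server versions that expose it the idle-cursor timeout is the cursorTimeoutMillis server parameter; whether this applies to the older version asked about here is exactly the open question:

code
// raise the idle-cursor timeout to 30 minutes (value illustrative)
db.adminCommand({setParameter: 1, cursorTimeoutMillis: 30 * 60 * 1000});
// or at startup: mongod --setParameter cursorTimeoutMillis=1800000
code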
0
currently our mms docs list logmmsagentlog as the agent log path; by default in the rpm and deb the path is varlogmongodbmmsmonitoringagentlog
1
addresssanitizer heapuseafterfree in mongonext in wttxnread
0
the icecreampy tool uses some generators to interpolate in the icecream environment variables however these generators rely on behavior that the ninja builder disables leading to a conflict there doesnt seem to be a real need to defer or reevaluate the value of these variables so using generators seems unwarranted we can just evaluate earlier before the ninja generator takes over
1
description quote this introduces a new log component initialsyncinitsync as a subcomponent of replicationrepl this is designed similar to the rollback component in ticket description creating a log component dedicated to the initial sync process could aid in debugging and log readability we have already done something similar for rollback quote scope of changes sourcereferencelogmessagestxt sourcereferencemethoddbgetlogcomponentstxt setlogcomponent and parameters has examples but not specific ones sourceincludesoptionsconfyaml sourceincludeslistlogcomponentsettingcorrespondencerst sourcereferenceconfigurationoptionstxt rel notes impact to other docs none mvp work and date resources scope or design docs invision etc
0
hi, when trying to run the below export i am getting an error: assertion: code failedtoparse: bad characters in value code. mongoexport --host xxxxx.com --port --username xxxxx --password xxxxx --db xxxxxx --collection surveyresponses --query retailformatflsrecordednew --csv --fields --out mood.csv
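this failure is typically an unquoted --query document being mangled by the shell; a hedged sketch (field names and values are guesses reconstructed from the fused text above):

code
mongoexport --host xxxxx.com --username xxxxx --password xxxxx \
    --db xxxxxx --collection surveyresponses \
    --query '{ "retailFormat": "FLS", "recorded": "new" }' \
    --csv --fields mood --out mood.csv
code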
0
should yield bson primitives as well as golang primitives where possible
0
hi, if i am inserting documents as a batch, what happens if it fails and i get an exception while it is processing a document? will the documents be inserted? if yes, how can i know how many documents the batch could insert? if no, then it is fine, we are happy to try the batch again. please let me know
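a minimal shell sketch of the ordered-batch semantics (collection name and values are illustrative): documents before the failing one stay inserted, and the thrown error reports how many made it in:

code
try {
    // the second document duplicates the first _id, so the batch fails there
    db.items.insertMany([{_id: 1}, {_id: 1}, {_id: 2}], {ordered: true});
} catch (e) {
    printjson(e);   // BulkWriteError: includes nInserted (1 here) and the failing write
}
code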
1
im running mongodb used the upgrade document and upgraded the storage engine to wiredtiger i created a user in of the databases with the read role when i connect to the database and select another database it gives me the authentication failed message that is ok when i connect to the database i created the user in it connects but then i can use and do a show collections it displays everything from that database where the user shouldnt have access to i can even show and modify documents is this a bug or is there something missing in the migration manual
1
actual behavior if you look here you can see that we are creating new email alerts for failures on and but if you look here you will not see any alerts for those branches this is happening even though i turned on jira alerts for those branches over an hour ago and a few failures on those branches have happened expected behavior new alerts should be created in bfg project identical to google groups note as usual ill stay close to my computer to help troubleshoot just in case this is a jira misconfiguration or something that i can otherwise help with
1
what problem are you facing requireoptional has js syntax thats not supported in our matrix of node versions i will file an issue on the repo but we can also resolve this by replicating the logic we have in for optional dependencies what driver and relevant dependency versions are you using steps to reproduce import our driver when using node
1
when performing log rotation on mongos logs, sometimes it crashes showing the following info: fri nov error: failed to rename varlogmongomongoslog: no such file or directory; fri nov fatal assertion; fri nov error: failed to rename varlogmongomongoslog: no such file or directory; fri nov fatal assertion; fri nov end connection (connections now open) usrbinmongos. the command used to rotate is as follows: /usr/bin/mongo --eval 'printjson(db.runCommand({logRotate: 1}))'
0
after upgrading from to i started getting systemtimeoutexceptions a few minutes after starting my app the timeouts are occurring after which seems like its a problem systemtimeoutexception timed out waiting for a connection after at mongodbdrivercoreconnectionpoolsexclusiveconnectionpoolacquireconnectionhelperenteredpoolboolean enteredpool at mongodbdrivercoreconnectionpoolsexclusiveconnectionpoolacquireconnectioncancellationtoken cancellationtoken at mongodbdrivercoreserversclusterableservergetchannelcancellationtoken cancellationtoken at binding cancellationtoken cancellationtoken at the bug is that queries using the legacy api can fail to return a connection to the connection pool the bug is triggered when the result set is large enough that the results are returned in multiple batches each time such a query is made one connection is failed to be returned to the connection pool and once all the connections in the pool have been leaked no further queries can be made with the symptom being that a timeoutexception is thrown from acquireconnection
1
automation agent changelog version released support mongodb authentication for managed bi connectors without also requiring ssl backup agent changelog version released fix send compound index keys as ordered bson fix send less detailed data in the initial summary payload at the start of an initial sync collect more detailed data for each collection individually
1
see in right now one has to click to load older and older results which gets slower and slower as one advances through the result list
1
the kmipservercafile and kmipclientcertificatefile options should verify that their values are absolute paths when used with windows services see and for similar bugs and fixes
0
the server recently changed from to true for map reduce output ok to mean still tests forunless result should be updated to true
1
description the realm node cli tutorial instructions is missing instructions for the user to populate schemas in the schemajs file this file in the start branch contains three separate todos and the file in the final branch has these values populated scope of changes impact to other docs mvp work and date resources scope or design docs invision etc
1
in we made it an error for index key pattern values to be nan or nonnumeric nonstring types existing deployments may have and indexes which break the validation rules as currently implemented this will prevent upgrade to since the validation is applied to all index versions in order to ensure a smooth upgrade process we should only apply the new index validation to indexes which are new in indexes support collation and the decimal data type original description related to change made in starting up on older mongod data directory will fail if it finds one of these technically invalid index definitions unfortunately the error message is not very clear as far as what the user must do to fix this noformat f index found an invalid index v key softdeletetime sparse true name ns sessioncachesession on the sessioncachesession collection bad index key pattern softdeletetime sparse true values in index key pattern cannot be of type bool only numbers numbers and strings are allowed i fatal assertion noformat is already open to document the changed behavior due to but i think it should be linked to directly from the error message on startup otherwise people will not have any idea what to do and how to fix this
1
three times in the last hours, all our mongos were deadlocked on serving requests to a sharded collection. using an out-of-prod mongos against a sharded collection was blocked too. during the lock it was possible to use db.currentOp() and show collections on the sharded database. restarting all mongos was the only way to get out of this. in attachment, a cleaned log of one mongos during the last failure
1
cluster server parameters also need to be synchronized from the csrs to mongoses on sharded clusters unlike replica sets there is no way to monitor a source of truth on disk and make inmemory changes appropriately therefore mongoses will have to run a periodic job in the background that runs getclusterparametergeneration on the csrs to poll for changes the frequency of these checks will be tunable via a new setparameter clusterparameterpollingfrequencysecs define a new private member in clusterparametermanager called poller that is a uniqueptr to a periodicjobanchor define a library synchronize function that generates a vector of all cluster parameter names from the clusterparametermap calls getclusterparametergeneration on the csrs primary with all of those names parses each of the returned values and compares them with the cached generation for each serverparameter then it compiles the names of all parameters with mismatched generations and calls getclusterparameter on all those parameters finally it calls the set and setgeneration methods for those parameters so their values and generations can be updated define a new nodespecific server parameter clusterparameterpollingfrequencysecs that is used to initialize the periodicjob to run the synchronize function at that frequency ensure that the poller member is initialized when the clusterparametermanagermongos is constructed write a unit test to ensure that the poller works as expected
0
there is a new deadlock between drop waiting for all operations to drain and an lsm worker thread getting the checkpoint lock in order to start a bulk load noformat schedyieldwtyieldlsmtreeclosewtlsmtreedropwtschemadropwtsessiondropsessiondropwrapsessiondrop llllockwaitpthreadmutexlockwtspinlockwtcurfileopensessionopencursorintsessionopencursorwtbloomfinalizelsmbloomcreatewtlsmworkbloomlsmworkergeneraloplsmworkerstartthreadclone noformat first seen in
1
currently dbclientrstest and tenantmigrationrecipientservicetest use the scanningreplicasetmonitor because it supports some mocking capabilities and the streamablereplicasetmonitor does not since were removing production support for scanningreplicasetmonitor we should also remove the need for it in unit tests some potential ideas for implementation add a lightweight mockreplicasetmonitor that only does whats necessary add the ability to mock some behaviors of streamablereplicasetmonitor make the tests in question depend on higher level components rather than directly on the rsm and mock those instead keep the scanningreplicasetmonitor and only use it for tests in which case wed just close this ticket if the decided outcome involves the complete removal of the scanningreplicasetmonitor this ticket should either remove all remaining references to the scanningreplicasetmonitor from the codebase or if it makes more sense to do it separately make a separate ticket to do that
0
we have a gis based database; all documents contain longitude and latitude. currently we have our shard key set to longitude. we are considering changing it to a combination of longitude and latitude. we believe that in doing so we will not only greatly increase the cardinality of keys but will also distribute our data more evenly across the shards. we also expect to see an improvement in the time it takes to load and query data. we'd like your opinion on this (see the sketch below). thanks in advance for your help
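a hedged sketch of declaring the compound key (database and collection names are assumed, not from the question):

code
sh.enableSharding("gisdb");
sh.shardCollection("gisdb.points", {longitude: 1, latitude: 1});
code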
1
the helper should support: output as a cursor, explain, and read preference (illustrated below)
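an illustration of the three requested behaviours using the aggregate helper's existing shell option names:

code
db.coll.aggregate(pipeline, {cursor: {batchSize: 100}});   // output as a cursor
db.coll.aggregate(pipeline, {explain: true});              // explain output
db.getMongo().setReadPref("secondary");                    // read preference, then aggregate
code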
1
mongo save causes an error, insert doesn't; i want to use save. my client javascript: xmlhttp.open("POST", ...); xmlhttp.setRequestHeader("Content-Type", ...); xmlhttp.send(JSON.stringify(mydata)). i'm running the server app by executing node app.js, where app.js has the following code:

code
var port = ...;
app.post(..., (req, res) => {
    var buyer = new Buyer({ email: buyerObj.email });
    buyer.save()
        .then(item => {
            console.log("saved to database");
            res.send("record has been saved");
        })
        .catch(err => {
            console.log("err " + err + " to save to database");
        });
});
code

which i changed to:

code
var buyers = { email: buyerObj.email };
mycollections.insert(buyers);
res.send("record saved");
code

this took care of the error. i also had to add authSource=admin to the connection string to make insert work. i want to be able to use save and not insert
1
panel title: issue status as of march

summary: from major release version on, the option is enabled by default, rounding up the space allocated for a record to the next power of two. this makes the current default for chunk sizes in gridfs, which is kb, a bad choice: the overhead of id and foreign key for chunk documents increases the size to just over kb and would therefore cause kb space allocation, with almost half of the space wasted.

user impact: in the release cycle, is not enabled by default; this only affects users who have manually enabled on their gridfs chunks collection.

solution: the fix is to reduce the default chunk size of gridfs documents to kb. this leaves enough space for the extra fields to still only allocate kb of space for each document.

workarounds: driver versions designed to be used with the release will include this fix clientside. alternatively, disabling also prevents the space overhead but can affect space reuse efficiency, especially in situations where documents are frequently deleted and recreated; this can lead to extent fragmentation.

affected versions: all recent production release versions up to are affected.

patches: the fix is included in the production release and the release candidate, which will evolve into the production.

original description: now that the server uses power of by default, if the default chunk size for gridfs is, we will almost always be throwing away some storage space. this is because if the bindata field of a chunk will occupy an exact power of, then the id and foreign key reference to the files collection etc will take up additional space that will cause the document's allocated storage to be rounded up to the next power of. this would be a huge waste considering it would round up every chunk for a given file. instead, if we make the default chunk size, then we have an extra to store the id and other metadata, so that when the document is saved we round up to and not upon persisting the document
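a worked version of the arithmetic with the historical numbers (256 KiB old default, 255 KiB fix) filled back in:

code
// 256 KiB of chunk payload plus the _id / files_id / n overhead lands just
// past 262144 bytes, so powerOf2Sizes rounds the allocation up to 512 KiB
var payload = 256 * 1024;                                   // old default chunk size
var overhead = 100;                                         // rough per-chunk metadata
var alloc = Math.pow(2, Math.ceil(Math.log(payload + overhead) / Math.LN2));
print(alloc);                                               // 524288: almost half wasted
// a 255 KiB (261120-byte) default keeps payload + overhead under 262144,
// so each chunk still fits a 256 KiB allocation
code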
1
currently remotecommandtargeterselectfindhostmaxwaittime will always return a default timeout this is the timeout which we use when targeting a particular host within a shard instead of using a default we should attach a timeout to the operationcontext potentially the maxtimems value and use this as the host targeting timeout
0
hash keys in btree to bytes: then we can use a fixed width btree. it may be better to just make hash keys more compressed / efficient instead
0
weve noticed a regression while upgrading to bson from bson bsondocumentmerge yields to the provided block even when the key is not present in both hashes this is incompatible with previous version of bson at least and most importantly rubys hashmerge ruby only yields for keys present in both hashes code a k old new old new a b code while bson yields to merge even if keys are not present in both hashes this results in the following code a bsondocumentnewa amergeb k old new old new undefined method for nilnilclass code i will submit a patch soon that fixes the issue
1
description quote description: this introduces the plancachekey field to explain/logger debug output. to summarize the difference between plancachekey and queryhash: queryhash is stable across catalog changes; that is, the query shape, and thus the hash of the query shape, is derived from the query the user wrote down and does not depend on any state inside the server; it will not change if you add/drop/change indexes. plancachekey is unstable across catalog changes; it depends both on the query shape and which indexes are available. this is slightly confusing even for us, so if you have questions definitely slack/email me (ianb). engineering ticket description: according to the documentation, quote index filters exist for the duration of the server process and do not persist after shutdown; mongodb also provides a command to manually remove filters quote. while a given index filter does persist across index creation and drops, its application towards queries does seem to be influenced by such actions. quote scope of changes sourcereferenceexplainresults profiler log messages possibly regarding plan cache filter analyzing mongodb perf corequeryplans corequeryoptimization tutorialevaluateoperationperformance tutorialmanageindexes impact to other docs mvp work and date resources scope or design docs invision etc
0
we have a collection with approximately million records. one of the fields is called lastupdated; it's a date field and we created an index for it. when we query the collection with code lastupdated gt new code we get the cursor back very quickly, similar with a count. when we add an explain to the query it never completes; we waited for minutes and our disks go berserk with read activity. when we change the query to code lastupdated new code and run it with explain, it behaves as expected. as an aside, we had issues creating that index in production, where we've got replica sets and sharding and mil records. we pulled some of our data into a standalone instance in order to play around with the index without affecting production. when we had issues in production it had similar symptoms: disks would be slammed with read activity. we didn't run explain to trigger our production issue, but it seems spookily similar to this issue; just deleting the index would cause the system to operate normally again
1
the enum values introduced in for appear to conflict with existing constants on windows i picked this up during an appveyor build for the php driver see here i suggest changing these to more unique names additionally there are a few cases within mongococsptlsextstatuscb where the success case in our enum is used to check return values from other tls functions eg ocspbasicverify id suggest we define explicit constants for those functions rather than reuse our internal enum for the callbacks return value
0
about an hour after deploying to production, the mongo shard servers ran out of connections. after restarting the application the connections drop back down again, but start a slow and gradual increase in the number of connections being used. we have rolled back to of the ruby driver and the problem has gone away
1
code fri jul assertion failure eeoo srcmongosutilnetdbbsonbsonobjbuilderh fri jul error futurespawncomand part exception caused by error running command on server mongos mongos mongos mongos mongos mongos mongos mongos mongos libsystembdylib pthreadstart mongos threadproxy libsystembdylib threadstart
0
hi, i have an application that is virtually partitioned on a relational database (mysql) and we are moving to mongo. we have accounts with an accountid partition field. i have a question surrounding our design approach with moving to mongo. we currently hoped to give each account its own database and then have a central account manager db to control routing. this appears to be working really well and performance is awesome. we could not find any limit anywhere on db numbers per installation; there was a reference to the file access limits, which we increased to on our ubuntu installation. should we collapse all accounts into a single db using an account key field, apply combined indexes etc and then rely on autosharding, or is it ok to continue to create new databases per account to reduce the index overheads for large accounts? your help is much appreciated
1
are there any plans to allow using db.system.js stored functions in functions like $where or mapreduce? i haven't found this feature in the version
0
in the pch research project some time was spent investigating another option which is said to produce slightly more gains it was found to increase a single pch installmongod target with clang by almost related links
0
we have upgraded a production replica set from to secondary members first leaving the two last members primary and its datacenter active fallback to the end of the process the other day weve changed priorities between the afore mentioned two remaining members upgraded one to and then set it back as a primary node leaving us with one secondary member left as once that newly upgraded host became primary again after db restart to have the new binaries apply application using this replica set had their performance and response times were deteriorating rapidly node log file was showing for almost every query logged which wasnt occurring in any of the other members when examined server status for cursor metrics number of timed out cursors was rising gradually while number of pinned cursors was around and number of no timeout cursors was around one thing that was changed apart from upgrading this primary node to was to explicitly set its maximum configured wt cache size to based on the new wiredtiger rule of thumb saying its of free physical ram minus one gb we suspected that the fact that the recently upgraded node was just restarted thus having its cache empty being completely cold when massive application traffic started performing reads and writes was the root cause for these cursor exhaustion and performance drop so we shortly after fallen back to the left secondary to become primary which has resulted an immediate significant performance improvement and a stop to cursor exhaustion please note that on the we had no explicit configuration of cachesizegb but left it to the default behaviour of wiredtiger after primary was set on the last remaining we decided to have leave the node which failed to take the load as primary to pre heat its cache for on query traffic about read statements per second and gave it another go as primary the day after without restarting it as it was already set on version yet the same behaviour of cursor exhaustion and massive app performance drop occurred again forcing yet another fallback to the node to become primary again which again mitigated things back to what they were before the change enclosed please find are log files from both the and nodes notice how same queries generate different outcomes in terms of cursor exhaustion even though they share the same execution plans one extremely popular query is the one issuing find on listsitemsposts in its different permutations for host ive omitted the biggest log which covered sep to as other logs also contain the symptoms reported here for host we have other applications based on other replica sets which are fully upgraded to including primary of course which doesnt display this behaviour so this could very much have to do with the way the application driver is setup or simply on how its written or a mix of both kindly try and assist in analysing how come this behaviour occurs and recommend of methods to try and overcome it this is quite urgent for us as we wish to have it completed by the end of next week many thanks in advance avi k dba wixcom
1
from libbson ftbfs with code the package fails to build in a test rebuild on at least with but succeeds to build with the severity of this report may be raised before the buster release there is no need to fix this issue in time for the stretch release code in the debian build log the last test that succeeds is so the test that fails must be the next one i rewrote the parser for but its unknown whether the test would have passed with gcc before my rewrite or not gcc is unreleased i think we just need to fix this before debian buster with gcc arrives or other gcc distributions its possible its a new gcc bug that will be resolved without us
0
change index key format to no longer use bson in wiredtiger benefits include much faster key comparison reduced storage size for indexes take much better advantage of wt key prefix compression no longer require custom collator which means stock wt tools like repair work with no modification as this is an ondisk format change indexes created before this change with wt will no longer be usable without running mongod storageengine wiredtiger repair
1
we now have patterns where we expect failure like here where failure might be recoverable like here or where failure accounting isnt interesting like here we should provide a new macro internalassert that logs at something like instead of and does not increment counters see here we should prove this out by confirming that awaitable ismaster jstests see here on both mongod and mongos do not increment the assertion counter
1
when indexing array fields most indexes contain one key per element of the array hence the terminology multikey indexes documented here however compound indexes use a different format for the fields namely all of the array elements are stored in a single key whose value is itself an array consider the following example code dbcdrop false dbccreateindexa bc createdcollectionautomatically true numindexesbefore numindexesafter ok dbcinserta b writeresult ninserted dbcfindhinta bc a bc code as you can see the inserted document leads to just a single index key the value of this index key for the bc field is the array because of this index format predicates against the trailing fields of an index typically are not used to generate bounds against the index a predicate like bc eq would normally result in point lookup in the index for the value however this would incorrectly miss the above document because the index key value is the entire array instead the predicate is attached as a filter to the ixscan stage as you can see from the explain of the query below noformat dbcfinda geowithin center bc eq queryplanner inputstage stage ixscan filter bc eq keypattern a bc indexname ismultikey false isunique false issparse false ispartial false indexversion direction forward indexbounds a bc rejectedplans noformat when the predicate over the trailing field is an elemmatch however the planner incorrectly generates bounds noformat dbcfinda geowithin center b elemmatch c queryplanner inputstage stage ixscan keypattern a bc indexname ismultikey false isunique false issparse false ispartial false indexversion direction forward indexbounds a bc rejectedplans noformat as a result this query misses the matching document code dbcfinda geowithin center b elemmatch c no results code
1
the expansion key setsudo can cause an errexit since it uses the following logic:

code
set -o | grep errexit | grep on
code

it would be better if we did the following:

code
set -o > tmpsettingslog
set -o errexit
grep errexit tmpsettingslog | grep on && errexit=on
code
0
we discovered a bug with ops manager startup if no internet is available. we will be fixing this in a later release; however, to minimize user pain, let's please add an advisory note in the documentation for each installation page, at the top of the "configure the ops manager connection to the ops manager application database" section. please add: quote if the ops manager application server does not have access to the internet, you must also add the following parameter: code automation.versions.source=local code quote
1
can anyone suggest best practices to use when developing aws lambda functions (.net core)? our main concern is that mongoclient does not provide a connection close method, so this might leave too many opened connections and the cluster might run into resource problems (see the sketch below)
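a hedged node.js sketch of the usual pattern (the question is about .net core, where the same idea applies): construct the client once outside the handler so warm invocations reuse pooled connections instead of opening new ones; names here are illustrative:

code
const { MongoClient } = require("mongodb");

// created once per container, shared by all warm invocations
const client = new MongoClient(process.env.MONGODB_URI);
const clientPromise = client.connect();

exports.handler = async (event) => {
    const db = (await clientPromise).db("app");               // names illustrative
    return db.collection("items").findOne({ _id: event.id });
};
code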
0
see this will break some of the ods tests which is intentional they will need to be removed since bsonserialize should not be in the business of returning atomic modifiers at least not with persistable objects
1
when reloading the same or a new version of an ejb using the mongo driver, a nullpointerexception occurs continuously in glassfish logs, as the precedent thread was not stopped and tries to use a closed connection. we need to restart the glassfish server each time we want to redeploy our code, which is not possible. this didn't happen with the version of the driver. seen down at at at at at at at seen down at at at at at at at
1
i have a record dataset including geo locations upon which i want to search to find points within a selected area. the operation yields results quickly, but i need a count of the points in the area as well. if i make a selection of a city or entire state on this data, the count query takes seconds or more to execute. there is a similar problem if i want to sort the results of a find (not a count) in this case, regardless of indexing. this thread has additional info
1
there are mongodb test failures where the cached allcommitted timestamp doesnt match the current value in wiredtiger there could be several explanations for this including wiredtiger returning a stale value or mongodb not querying for an update in some case add a callback to wteventhandler indicating to the application when the allcommitted timestamp moves forward
0
stdtostring is not defined in
0
has fixed a backward compatibility problem when configuration options are removed in newer versions but are still used in older version databases. to find these incompatibilities much earlier than a customer report, update the compatibility test to cover these scenarios:

code
verifybranches
echo verifying access method am
wiredtigerconfigext wt bflag h dir verify tablewt
echo dump and load access method am
wiredtigerconfigext wt bflag h dir dump tablewt dumpwttxt
wiredtigerconfigext wt bflag h dir load f dumpwttxt
done
code

with the above patch it is possible to test part of the configuration options that are incompatible across the newer versions, but not all the options. enhance the above test if possible to cover all scenarios; if not, merge the above change once is merged into the branch
0
connection string from atlas i am connecting to the instance using robo and was previously connecting using compass so i suspect the issue is new since this release i tried to attach a screenshot but get jira could not attach the file as there was a missing token please try attaching the file again although it does appear for me below
0
i do not have this problem with driver ive upgraded to the driver series twice with the same result timed out waiting on socket read for queries on large collections interestingly i did not have this problem running in my development environment on my mac this is blocking me from moving up to and therefore moving to mongodb there is a thread on the mongodbuser group to which i recently added comments
1
for upgradedowngrade from to a few checks for featurecompatibilityversion were added to the causal consistency codebase these should be removed once is released before the next releases
0
i worked on multihost task group patches where the setup group failed with setupgroupcanfailtask set to true, and i saw later tasks in the group run on hosts that hadn't had setup group complete successfully. the correct thing here would be to either run setup group again when possible, or not consider a host to be running the group until group setup has completed. we should also consider making setupgroup always fail the task, as the permissive option is rarely what you want. additionally, setup group failures were system failures in my observation and should probably be settable just like other failures
0
a lot of processes insert into mongodb; when the wt cache is full, this triggers eviction, even in the mongodb insert thread. but eviction is too slow, and with so many race conditions the cpu load is heavy, the server hangs and can't serve any request.

code
thread thread lwp
in wttxnupdateoldest
in wtevict
in evictpage
in wtcacheevictionworker
in sessionbegintransaction
in mongotxnopenmongooperationcontext
in mongogetsessionmongooperationcontext
in mongowiredtigercursorstdbasicstring stdal locator const unsigned long bool mongooperationcontext
in mongoinsertrecordsmongooperationcontext stdvector bool
code
1
cursors time out despite having no values specified. according to the documentation the default is false; however, the test considers true to be the default if unspecified. (documentation, test)
0
broke leafpage only searches the wtcursorsearch and wtcursorsearchnear operations are supposed to search any pinned btree leaf page that is any page pinned as a result of a previous operation assuming locality of reference that a search is likely to hit on the currently pinned page commit broke this by checking if the current key was internal referencing an onpage key or external referencing a key set by the application using wtcursorsetkey that test was correct for nonsearch operations for example wtcursorupdate but not for search operations
0
hi, was trying the tutorial here: cloned the repo, switched to branch start, ran npm install, installed pods with pod install --repo-update. launching the app with npx react-native run-ios fails compiling; it never launches the app. environment: macos big sur, node, xcode
1
see for updated details
0
easiest way to reproduce this is to call getlasterror with a fresh mongo shell session:

noformat
db.runCommand({getLastError: 1})
{ connectionid: ..., n: ..., syncmillis: ..., writtento: null, err: null, err: null, ok: ... }
noformat
0
hi, i am facing a problem updating an embedded object array using mongoose. here is the schema:

code
var event = { eventtime: { type: date, default: date } };
var schedule = { events: [event] };
var group = { groupname: { type: string }, startdate: { type: date, default: date.now }, schedules: [schedule] };
// studies db: { id: string, groups: [group] }
code

i want to push and pull values in eventtime, but i can't execute my query; it gives the error below: too many positional (i.e. $) elements found in path. the thing i want to do is to update the array of eventtime in events with an object of schedules. i am using this query: dbstudymodelupdateid mongodb groupsgroupnamedatabasegroupsschedulesid. i am struggling with this problem for almost days now, and i have come to know that mongodb doesn't allow going beyond one positional level, so how do i achieve this? please help me with this problem (see the sketch below). plus, i am using version of mongodb
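a hedged sketch of the two halves of the answer (collection and field names are taken loosely from the schema above): classic updates allow only one positional $ per path, which is what the error is complaining about; servers that support arrayFilters can address nested array elements instead:

code
// arrayFilters lets each nested array level be matched independently
db.studies.updateOne(
    { _id: studyId },                                    // names illustrative
    { $push: { "groups.$[g].schedules.$[s].events": { eventTime: new Date() } } },
    { arrayFilters: [ { "g.groupName": "database" }, { "s._id": scheduleId } ] }
);
code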
1
make primaryonlyserviceexecutor immutable: initialize it in the pos constructor and make it available for the entire pos object instance lifetime, so that pos methods can read primaryonlyserviceexecutor without any synchronization rules
0
we have an application which writes records to a remote mongodb. on observing connection loss to the remote, we destroy the mongo client and create a new one only once the remote connection is established back. sometimes on connection loss i see the below crash:

code
thread thread lwp
in mongocstreamgetrootstream from
in mongocstreampoll from
in mongocasyncrun from
in mongoctopologyscannerwork from
in mongoctopologyscanonce from
in mongoctopologydoblockingscan from
in mongoctopologyselectserverid from
in mongocclientendsessions from
in mongocclientdestroy from
in mongoclientopsdestroyclient this at
in eventsdbhasinkstopsendingevents this at
code

the code for destroying the client is:

code
void MongoClientOps::destroyClient() {
    RWMEMLOG(getMemLogPtr(), "destroy client connection");
    // 'if (bulk)' destroys the mongoc bulk in the destructor
    if (bulk) {
        delete bulk;
        bulk = nullptr;
    }
    // collection should be destroyed with the client
    if (collection) {
        mongoc_collection_destroy(collection);
        collection = nullptr;
    }
    if (database) {
        mongoc_database_destroy(database);
        database = nullptr;
    }
    if (client) {
        mongoc_client_destroy(client);
        client = nullptr;
    }
    return;
}
code

is there something wrong with my destroy sequence?
1
currently, the order in which members of a sharded cluster are shut down is: config servers, mongos, shard members (see). if we change the shutdown order to what's below, we can shave a minute off the shutdown time in some configurations: mongos, shard members, config servers. we think this is because the shard members may try to contact the config servers after they've been shut down, which could lead to the shard members entering a retry loop
0
on mongodb if limitvalue is specified and value the explainallplansexecution will only work through documents noformat dbtestcolexplainallplansexecutionfinda gte queryplanner plannerversion namespace testtestcol indexfilterset false parsedquery a gte winningplan stage limit limitamount inputstage stage fetch inputstage stage ixscan keypattern a indexname ismultikey false direction forward indexbounds a rejectedplans executionstats executionsuccess true nreturned executiontimemillis totalkeysexamined totaldocsexamined executionstages stage limit nreturned executiontimemillisestimate works advanced needtime needfetch savestate restorestate iseof invalidates limitamount inputstage stage fetch nreturned executiontimemillisestimate works advanced needtime needfetch savestate restorestate iseof invalidates docsexamined alreadyhasobj inputstage stage ixscan nreturned executiontimemillisestimate works advanced needtime needfetch savestate restorestate iseof invalidates keypattern a indexname ismultikey false direction forward indexbounds a keysexamined dupstested dupsdropped seeninvalidated matchtested allplansexecution serverinfo host mubuntu port version gitversion ok noformat if batchsizevalue is specified the limitvalue will take effect noformat dbtestcolexplainallplansexecutionfinda gte queryplanner plannerversion namespace testtestcol indexfilterset false parsedquery a gte winningplan stage limit limitamount inputstage stage fetch inputstage stage ixscan keypattern a indexname ismultikey false direction forward indexbounds a rejectedplans executionstats executionsuccess true nreturned executiontimemillis totalkeysexamined totaldocsexamined executionstages stage limit nreturned executiontimemillisestimate works advanced needtime needfetch savestate restorestate iseof invalidates limitamount inputstage stage fetch nreturned executiontimemillisestimate works advanced needtime needfetch savestate restorestate iseof invalidates docsexamined alreadyhasobj inputstage stage ixscan nreturned executiontimemillisestimate works advanced needtime needfetch savestate restorestate iseof invalidates keypattern a indexname ismultikey false direction forward indexbounds a keysexamined dupstested dupsdropped seeninvalidated matchtested allplansexecution serverinfo host mubuntu port version gitversion ok noformat limit works fine with explainallplansexecution on
0
the original symptom of this problem is that the sysperf project has purple instead of red tasks the db has jsonsend in run test step of as the last command run but in fact steps and succeeded step failed here is an example from the evergreen selftests noformat finished shellexec in runmake in command timeout set to noformat setting command timeout which also sets the current command should happen immediately after we log running command v step and before we run the command there may be a race between startidletimeoutwatch and checkin since startidletimeoutwatch calls checkin startidletimeoutwatch sends on the same channel its listening on which could conceivably cause unpredictable behavior since checkin uses a nonblocking send a deadlock could cause it not to set tccurrentcommand
1
i performed the following sequence of operations: connected to a replica set using m = Mongo::ReplSetConnection.new(...) and checked m.primary; did an rs.stepDown() on the master. on the next request, as expected, an error was raised: m.insert(..., safe: true) gives "mongo operation failed with the following exception: connection reset by peer", from receive_message_on_socket, from receive_header, from insert_documents. on the next request it appears the connection is trying to write to the wrong node: m.insert(..., safe: true) gives "mongo not master", from send_message_with_safe_check, from insert_documents. indeed, the primary hasn't updated per m.primary
1
we have a helper assert.throws which returns the error which was thrown. a common pattern looks like this:

code
let err = assert.throws(() => coll.aggregate(pipeline));
assert.eq(err.code, expectedCode);
code

we should add a helper to the assertion library to condense this and simplify some of our tests. it will be a good counterpart to assert.commandFailedWithCode
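a minimal sketch of what the proposed helper could look like (the name is a suggestion, not necessarily what shipped):

code
assert.throwsWithCode = function(fn, expectedCode) {
    const err = assert.throws(fn);
    assert.eq(err.code, expectedCode);
    return err;
};
// condenses the pattern above to:
// assert.throwsWithCode(() => coll.aggregate(pipeline), expectedCode);
code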
0
hi we had indicate previously in that we re having trouble upgrading today we have tried upgrade again from to and left it so it can start up it took about hours to start noformat i network admin web console waiting for connections on port i repl did not find local voted for document at startup nomatchingdocument did not find replica set lastvote document in localreplsetelection i network starting hostname canonicalization worker noformat unfortunately for us looks like mongo decided to disregard all the date in the oplog and cannot sync as its to stale noformat i repl syncing from w repl we are too stale to use as a sync source i repl syncing from i repl could not find member to sync from e repl too stale to catch up entering maintenance mode i repl our last optime term timestamp aug i repl oldest available is term timestamp aug i repl see i repl going into maintenance mode with other maintenance mode tasks in progress noformat to be perfectly clear this node was warmed up and in production without issue before we attempted this upgrade oplog db size was very big as well noformat dbgetreplicationinfo logsizemb usedmb timediff timediffhours tfirst sun aug utc tlast sun aug utc now wed aug utc noformat as you can see mongo decided somehow that oplog have to be cleared in comparison see below same info from other replica member noformat floowprimary dbgetreplicationinfo logsizemb usedmb timediff timediffhours tfirst sun aug utc tlast wed aug utc now wed aug utc noformat i can provide log from that period but there is nothing indicating any unusual behaviour no errors noformat i repl syncing from w repl we are too stale to use as a sync source i repl syncing from w repl we are too stale to use as a sync source i repl could not find member to sync from e repl too stale to catch up entering maintenance mode i repl our last optime term timestamp aug i repl oldest available is term timestamp aug noformat in current state we cannot reliably upgrade our database to as this results in desync
1
how can i write the below code in c#? thanks

code
{ $lookup: { from: "temp", localField: "brandid", foreignField: "brandid", as: "docs" } },
{ $unwind: "$docs" },
{ $match: { brandref: { $in: [...] }, "docs.brandid": { $in: [...] }, "docs.firstname": { $in: [...] } } }
code
1
after disk outage wiredtigerturtle became zerolength codejava mongod repair dbpath varlibmongodb i control mongodb starting dbpathvarlibmongodb hostdemo i control db version i control git version i control openssl version openssl feb i control allocator tcmalloc i control modules none i control build environment i control distmod i control distarch i control targetarch i control options repair true storage dbpath varlibmongodb i detected data files in varlibmongodb created by the wiredtiger storage engine so setting the active storage engine to wiredtiger w detected unclean shutdown varlibmongodbmongodlock is not empty w storage recovering data from the last clean checkpoint i storage detected wt journal files running recovery from last checkpoint i storage journal to nojournal transition config i assertion no such file or directory i storage exception in initandlisten no such file or directory terminating i control dbexit rc code is it possible to repair db
0
after arbiters no longer report logicalsessiontimeoutminutes code dbversion dbismaster hosts arbiters setname setversion ismaster false secondary false primary arbiteronly true me lastwrite optime ts t lastwritedate majorityoptime ts t majoritywritedate maxbsonobjectsize maxmessagesizebytes maxwritebatchsize localtime minwireversion maxwireversion readonly false ok featurecompatibilityversion version ok code this breaks the driver sessions spec process for how to check whether a deployment supports sessions
1
trying to create a new mongo client in a windows environment using a uri results in an endless stream of uninitialized constant mongounixsocket errors using version of ruby driver seems like this started happening after unix socket support was added in this issue didnt exist in x mongoclientnewurl d debug mongodb adding to the cluster d debug mongodb uninitialized constant mongounixsocket d debug mongodb adding to the cluster d debug mongodb uninitialized constant mongounixsocket d debug mongodb uninitialized constant mongounixsocket d debug mongodb uninitialized constant mongounixsocket d debug mongodb uninitialized constant mongounixsocket d debug mongodb uninitialized constant mongounixsocket d debug mongodb uninitialized constant mongounixsocket
1
per the linked discussion, we should be very explicit that the default python-kerberos module may need to be uninstalled. on this page, add "sudo apt-get uninstall python-kerberos" right above "sudo easy_install pymongo kerberos"
1
description: a few weeks ago one of the major ssl root certificates expired. serving an expired root certificate raises an ssl error in some http clients, but not in most browsers. for example, in the latest macos catalina: curl error: ssl certificate problem: certificate has expired. note that it is not the docsmongodbcom cert that has expired, but one of the intermediate certificates that your server is sending in the ssl chain. the solution is simple: edit your webserver conf to remove the expired ca cert from the bundle. you don't need to replace it with anything because all clients trust this vendor by default in their own ca bundle. for more info see here. scope of changes; impact to other docs; mvp work and date; resources: scope or design docs, invision etc
1
nov evergreen killing process nov evergreen panic runtime error invalid memory address or nil pointer dereference nov evergreen nov evergreen goroutine nov evergreen panic nov evergreen nov evergreen githubcomevergreencievergreenmonitorrunhostteardown nov evergreen nov evergreen githubcomevergreencievergreenmonitorterminatehost nov evergreen nov evergreen nov evergreen nov evergreen nov evergreen nov evergreen created by githubcomevergreencievergreenutilrunfunctionwithtimeout nov evergreen
1
add the historical data caching info card on the general settings page design
0
this conversion is wrong because the underlying bson type is an integer so we should use numberlong instead of long to retrieve it as long long type
0
this test assumes that creating a collection while its containing database is being dropped will not return an error. locking changes have recently made this operation return an error:

noformat
cannot create collection: database is in the process of being dropped
noformat
0
we have set up a sharded mongodb cluster with a replication factor of. starting the router process, default chunk size and oplog size were chosen by not specifying the values for these. has a chunk size of mb while the rest have mb per chunk. all shards are similar types of instances on the amazon environment. what we have noticed using the db.getShardDistribution() command is as follows:

shard at data docs chunks estimated data per chunk estimated docs per chunk
at data docs chunks estimated data per chunk estimated docs per chunk
at data docs chunks estimated data per chunk estimated docs per chunk
at data docs chunks estimated data per chunk estimated docs per chunk
at data docs chunks estimated data per chunk estimated docs per chunk
at data docs chunks estimated data per chunk estimated docs per chunk
data docs chunks
shard contains data docs in cluster avg obj size on shard
shard contains data docs in cluster avg obj size on shard
shard contains data docs in cluster avg obj size on shard
shard contains data docs in cluster avg obj size on shard
shard contains data docs in cluster avg obj size on shard
shard contains data docs in cluster avg obj size on shard
1
the fuzzer routinely performs inserts directly into systemviews many of these are invalid which causes listcollections to fail the dbhash and validation hooks both rely on listcollections and the presence of invalid views will cause tests to fail for now we should skip the hooks entirely in jstestfuzz suites if listcollections fails with an invalidviews specific error code
1
currently, during the transition of jstests to the jscore suites, we have more than one shell write mode being used: default shellwritemode legacy; gle shellwritemode legacy; jscore shellwritemode commands. compatibility: the change would be to run with a shellwritemode of commands unless overridden by the suite explicitly, like for the gle suite. in addition, we need to change the default when no explicit shellwritemode is specified as an argument, to align to what is required for each suite/test
1
the serverstatusoutput documentation does not include the sharding section here is what it looks like on all shard nodes excluding the config server configsvrconnectionstring string the connection string for the csrs config server or prior to the sccc config server lastseenconfigserveroptime bson the latest op time of the csrs config server primary seen so far during communication with any sharding node this value only moves forward and is used to ensure that shards see the latest writes done on the csrs config server when talking to a secondary node
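a hedged illustration of the section's shape built from the two fields described above (all values are placeholders):

code
// db.serverStatus().sharding
{
    "configsvrConnectionString": "csrs/cfg1.example.net:27019,cfg2.example.net:27019",
    "lastSeenConfigServerOpTime": { "ts": Timestamp(0, 0), "t": NumberLong(-1) }
}
code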
0
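A quick way to look at the section described above, assuming a direct PyMongo connection to a shard node (the host is a placeholder):
{code:python}
from pymongo import MongoClient

client = MongoClient("mongodb://shard-node.example:27018")  # placeholder

status = client.admin.command("serverStatus")
sharding = status.get("sharding", {})

print(sharding.get("configsvrConnectionString"))
print(sharding.get("lastSeenConfigServerOpTime"))
{code}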
When rebooting a firewall this morning, every single member of the replica set crashed:
{noformat}
I REPL  error in heartbeat request; ExceededTimeLimit: couldn't get a connection within the time limit
I -     invariant failure: connection->isInitialized(), src/mongo/executor/network_interface_asio_operation.cpp
I -     aborting after invariant failure
F -     got signal: aborted
----- BEGIN BACKTRACE -----
(mongod frames through execute_native_thread_routine; elided)
----- END BACKTRACE -----
{noformat}
1
Looks like this error: I am running a Juju controller that had two other hosts in a replica set; however, during a mishap the other two got wiped, and upon rebooting and attempting to restore the database I am now running into the following errors:
{noformat}
feb hqosjuju systemd: started juju state database
feb hqosjuju mongod: W CONTROL  no SSL certificate validation can be performed since no CA file has been provided; please specify an sslCAFile parameter
feb hqosjuju mongod: MongoDB starting, dbpath=/var/lib/juju/db, host=hqosjuju
feb hqosjuju mongod: db version / git version / OpenSSL version; allocator: tcmalloc; modules: none
feb hqosjuju mongod: options: net (port, ssl: pemKeyFile /var/lib/juju/server.pem, pemKeyPassword, mode requireSSL), replication (oplogSizeMB, replSet juju), security (authorization enabled, keyFile /var/lib/juju/shared-secret), storage (dbPath /var/lib/juju/db, engine wiredTiger, journal enabled, wiredTiger engineConfig cacheSizeGB)
feb hqosjuju mongod: wiredtiger_open config
feb hqosjuju mongod: WiredTiger error: file WiredTiger.wt, connection: read checksum error for block at offset; block header checksum doesn't match expected checksum
feb hqosjuju mongod: WiredTiger error: WiredTiger.wt encountered an illegal file format or internal value
feb hqosjuju mongod: WiredTiger error: the process must exit and restart; WT_PANIC: WiredTiger library panic
feb hqosjuju mongod: fatal assertion; aborting after fassert failure; got signal: aborted
feb hqosjuju mongod: begin backtrace (frames through __wt_block_ext_list_read, __wt_block_checkpoint_load, __wt_btree_open, __wt_metadata_cursor, wiredtiger_open, main); end backtrace
feb hqosjuju systemd: juju-db.service main process exited, code=dumped
{noformat}
It was running on a VM that was restarted incorrectly, which caused some disk corruption that has since been resolved. I currently have a backup of the corrupt VM, a backup of the database taken before the VM was corrupted, and a semi-working restored database on a new VM. I was able to get the database working on the new VM, with the remaining issue (in Juju, not Mongo) that the cert is incorrect, so I cannot use that server. I want to restore to the original VM, with the right certs and the disk corruption issues resolved, but I cannot start mongod without hitting the errors above. It looks like the WiredTiger.wt file is corrupt. I have seen multiple forum posts and JIRA issues where you have repaired the issue but provided no insight into how, so I am posting the files here. If there is a way to restore a database without starting it, I would love to see documentation on that, as thus far I can find none. I have both BSON files from a dump as well as a restore with all of the WT files (a ton of them); I have provided all of the WiredTiger files, as most previous posts have requested.
1
The following Python code causes a segmentation fault:
{code:python}
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient(host=myhost, tz_aware=True)
data = client.data.find_one({'_id': ObjectId(...)})  # segmentation fault
{code}
This issue seems to be caused by a corrupted document; querying for other documents does not cause any problems. To be precise, a DBRef field seems to be broken. The mongo shell is able to query for this document and displays:
{noformat}
db.data.find({_id: ...})
{ _id: ..., parentId: DBRef("data", undefined) }
{noformat}
Is there any way to display the internal structure and the values of the field parentId, i.e. to display the document without converting it to a DBRef object? (A sketch is below.)
1
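One way to inspect the raw field without the driver decoding it into a DBRef, sketched with a modern PyMongo (the bson.raw_bson module may postdate the driver version in the report; host, database, and collection names are placeholders):
{code:python}
from bson.codec_options import CodecOptions
from bson.raw_bson import RawBSONDocument
from pymongo import MongoClient

client = MongoClient("myhost", tz_aware=True)  # placeholder host

# Return raw BSON instead of decoded documents, so a malformed
# DBRef subdocument is never materialized by the decoder.
opts = CodecOptions(document_class=RawBSONDocument)
coll = client.mydb.get_collection("data", codec_options=opts)

doc = coll.find_one()       # filter by _id in practice
print(doc.raw)              # the document's raw BSON bytes
print(doc["parentId"].raw)  # raw bytes of just the broken subdocument
{code}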
Currently, MongoAsyncQueryCursor does not kill the server-side cursor if iteration is halted prior to cursor exhaustion. It should. (The expected behavior is sketched below.)
0
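The intended behavior, illustrated with PyMongo rather than the async cursor from the report: when iteration stops early, an explicit close() should kill the server-side cursor instead of leaking it until its idle timeout.
{code:python}
from contextlib import closing
from pymongo import MongoClient

client = MongoClient()  # placeholder connection
coll = client.mydb.data

# closing() guarantees cursor.close() runs even when we break early,
# which kills the server-side cursor rather than leaving it to be
# reaped only after the server's idle-cursor timeout.
with closing(coll.find({})) as cursor:
    for i, doc in enumerate(cursor):
        if i >= 10:
            break  # halt before exhaustion
{code}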
This is to fix the BF where the tenant migration recipient is failing with OOM while conducting parallel migrations. Apparently only Amazon builds are affected; as a variant, make it conditional on Amazon and decrease the migration count. In general, I would like to know whether we receive a feedback signal from Cloud on a near-OOM condition; the proper strategy would be to gatekeep concurrent migrations on system-wide signals in general, and near-OOM RAM in particular. If any such handler exists, please leave a comment. If/when such a fix is made, it would be better to scale the migration count back up.
0
Experienced problems during sharedclient compilation. Steps to reproduce: git clone the repository, git checkout --track the branch, cd mongo, then run scons sharedclient. Error log:
{noformat}
g++ -o libmongoclient.so -fPIC -pthread -rdynamic -Wl,--as-needed -Wl,-z,defs -shared
    pch.os buildinfo.os db/indexkey.os db/jsobj.os bson/oid.os db/json.os db/lasterror.os
    db/nonce.os db/queryutil.os db/querypattern.os db/projection.os shell/mongo.os
    db/security_common.os db/security_commands.os util/background.os util/util.os
    util/file_allocator.os util/assert_util.os util/log.os util/ramlog.os
    util/concurrency/vars.os util/concurrency/task.os util/debug_util.os
    util/concurrency/thread_pool.os util/password.os util/version.os
    util/signal_handlers.os util/histogram.os util/concurrency/spin_lock.os
    util/text.os util/stringutils.os util/concurrency/synchronization.os
    util/net/sock.os util/net/httpclient.os util/net/message.os util/net/message_port.os
    util/net/listen.os client/connpool.os client/dbclient.os client/dbclient_rs.os
    client/dbclientcursor.os client/model.os client/syncclusterconnection.os
    client/distlock.os s/shardconnection.os db/commands.os client/clientOnly.os
    client/gridfs.os
    -lpthread -lstdc++ -lboost_system-mt -lboost_thread-mt -lboost_filesystem-mt -lboost_program_options-mt
db/security_common.os: in function mongo::isAuthorized(std::basic_string<...> const&, int):
    undefined reference to mongo::isAuthorizedSpecialChecks(std::basic_string<...> const&, ...)
db/security_commands.os: in function mongo::run(std::basic_string<...> const&, mongo::BSONObj&, int, std::basic_string<...>&, mongo::BSONObjBuilder&):
    undefined reference to mongo::getUserObj(std::basic_string<...> const&, std::basic_string<...> const&, mongo::BSONObj&, std::basic_string<...>&)
    undefined reference to mongo::authenticate(std::basic_string<...> const&, std::basic_string<...> const&, bool)
db/security_commands.os: undefined reference to vtable
ld returned non-zero exit status
scons: error: building terminated because of errors
{noformat}
1