text_clean — string (lengths 10 to 26.2k)
label — int64 (0 or 1)
The changes from ... added MongoDB version ... to the list of other versions to download; however, the platform argument for the ... series has been changed from ... to windows. The changes from ... effectively need to be backported to the ... branch.
0
There's an extra "in" on this page: "login in with the user administrator's credentials and create additional users". I would submit a pull request on GitHub and change it, and I'm still happy to do that, but I can't find in your style guide whether you prefer "login" or "log in". Thanks.
1
The bydistro and byproject flags lump all failed statuses together, e.g. systemfailed and systemtimedout.
0
catchup_takeover_two_nodes_ahead.js disables data replication on some nodes and expects the other nodes to sync from the primary successfully. With chaining, the running secondaries may sync from the disabled secondaries, so the test timed out while waiting for writes to get replicated.
0
Action dropAllRolesFromDatabase has a few problems, one of them a big problem: the command seems to produce no entry in the audit logs. The command is called dropRolesFromDatabase, without an "all"; I think the "all" is clearer, as dropRolesFromDatabase suggests that one might drop selected roles. The spec for the text message is "dropped all roles from ..."; there shouldn't be a "role" in this, as there is no single role involved.
1
Hello. My replica set environment has been offline and is now too far behind to catch up; I will need to resync. I perform a full resync, but it always crashes when building an index. I have ... docs in my collection, and the crash appears on the second index's build. I tried to increase the oplog size and to run the resync during low primary activity, with no success. Is it possible to build the index in the background? Does someone have an idea or solution? Sorry for my English, I try to improve it. Kind regards.
1
compile and dbtest work, but there's a failure to find the debugsymbols file. See:
{noformat}
tests succeeded
mv mongodb...src.zip dist.src.zip
mv mongodb....zip mongodb-binaries.tgz
mv debugsymbols....zip mongo-debugsymbols.tgz || true
mv: cannot stat 'debugsymbols....zip': No such file or directory
running command 'archive.targz_pack' step of ...
{noformat}
This is holding ...
1
The CollectionImpl::validate logic does not appear to need access to any of the private internals of the CollectionImpl class; the code was probably just parked there at some point. It should be moved out into a separate file (probably just a namespace, not a new class) with well-documented functions.
0
{code:python}
import pickle
import pymongo

db = pymongo.MongoClient().demo
db.drop_collection("demo")
db.create_collection("demo", validator={"$jsonSchema": {"required": [...]}})
try:
    db.demo.insert_many([...])
except Exception as e:
    exc = e
exc
{code}
{noformat}
pymongo.errors.BulkWriteError: batch op errors occurred, full error: {'writeErrors': [...], 'writeConcernErrors': [...], 'nInserted': ..., 'nUpserted': ..., 'nMatched': ..., 'nModified': ..., 'nRemoved': ..., 'upserted': [...]}
{noformat}
{code:python}
pickle.loads(pickle.dumps(exc))
{code}
{noformat}
AttributeError                            Traceback (most recent call last)
... in <module>
      pickle.loads(pickle.dumps(exc))
... in _unpickle_exception(func, args, cause, tb)
    def _unpickle_exception(func, args, cause, tb):
        inst = func(*args)
        inst.__cause__ = cause
        inst.__traceback__ = tb
... in __init__(self, results)
    def __init__(self, results):
        super(BulkWriteError, self).__init__("batch op errors occurred", ..., results)
... in __init__(self, error, code, details, max_wire_version)
        error_labels = None
        if details is not None:
            error_labels = details.get("errorLabels")
        super(OperationFailure, self).__init__(
            _format_detailed_error(error, details), error_labels=error_labels)
AttributeError: 'str' object has no attribute 'get'
{noformat}
1
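A self-contained PyMongo sketch of the repro above, with the report's elided values replaced by hypothetical ones (a local mongod, a required field "a", and the offending document are all assumptions):
{code:python}
import pickle

import pymongo

db = pymongo.MongoClient().demo
db.drop_collection("demo")
# "a" is a hypothetical required field standing in for the elided schema.
db.create_collection("demo", validator={"$jsonSchema": {"required": ["a"]}})
try:
    db.demo.insert_many([{"b": 1}])  # fails validation, raising BulkWriteError
except Exception as e:
    exc = e

# In the affected driver versions this round-trip raises
# AttributeError: 'str' object has no attribute 'get'
pickle.loads(pickle.dumps(exc))
{code}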
I was able to run a patch build based on ... successfully, but not ... It isn't clear to me why the src/dsi directory isn't empty; the compile task failed after being restarted. It may be worth noting that the restarted task ran on the same host it originally did; however, that host was able to run other sys-perf compile tasks successfully.

Task logs:
{noformat}
running command 'git.get_project' step of ...
HEAD is now at ... move the DistLockManager to only be available on mongod
(the same "HEAD is now at ... move the DistLockManager to only be available on mongod" line repeats for each retry)
after ... retries, operation failed: problem with git command: error waiting on process: exit status ...
command failed: after ... retries, operation failed: problem with git command: error waiting on process: exit status ...
task completed - failure
{noformat}

Agent logs:
{noformat}
HEAD is now at ... add config option and warning when password appears on command line
set -o errexit
git clone ... src/dsi
fatal: destination path 'src/dsi' already exists and is not an empty directory
{noformat}
0
This is to avoid repeated code and to make it easier for us to write tests.
0
Now that the ... version scheme has been finalized, we should update the server FCV constants from ... to ...
0
{panel:title=Issue status as of May ...}
Issue summary: While a background index build is in progress, document updates modifying fields contained in the index specification may, under specific circumstances, cause mismatched index entries to appear. This bug may affect background-built indexes only, and only in combination with update operations.

User impact: This bug may affect the behavior of queries that use an affected index, which may return incorrect results. Symptoms that indicate a background-built index is affected by this bug include:
- find queries covered by the index may return documents that don't match the query predicate, display documents that no longer exist in the collection, or display more than one document with the same _id field
- count commands may return incorrect counts
- remove commands may remove documents that don't match the query predicate

Operations that do not use affected indexes are not impacted by this bug. Note that collection data consistency is unaffected by this bug; only index entries may be affected.

Workarounds: There are no workarounds for this issue. While not all deployments using background-built indexes will be impacted, all users running the affected configurations should upgrade to a version containing the fix and rebuild all background-built indexes to make sure they're not impacted by this bug.

Affected versions:
- MongoDB ... with the WiredTiger storage engine: while a background index build is in progress, any document updates modifying fields contained in the index specification that happen at the same time as the background index build is reading those same documents will result in mismatched index entries. Technical details about this bug can be found in ... and ...
- MongoDB ... with the ... and WiredTiger storage engines: while a background index build on a set of fields is in progress, any document updates modifying fields contained in the index specification may result in mismatched index entries if the background index build has already processed the documents to be updated, or if such documents are being processed by the index build at the same time. Technical details about this bug can be found in ... and ...

Fix version: The fix is included in the ... production release. A fix for MongoDB ... for users of the WiredTiger engine is included in the ... production release. After upgrading to one of these versions, all background-built indexes need to be rebuilt to avoid this issue.
{panel}

Original description: After doing a background index build, it's possible for there to be more index keys than documents in a compound, non-multikey index:
{code:javascript}
db.index_multi.find(...).count()
{code}
This is caused by multiple index entries pointing to the same document:
{code:javascript}
db.index_multi.find({_id: ...})  // _id ..., RecordId(...)
db.index_multi.find({_id: ...})  // _id ..., RecordId(...); _id ..., RecordId(...)
{code}
Index spec:
{code:javascript}
{ "v" : ..., "key" : { ... }, "name" : "...", "ns" : "test.index_multi", "background" : true }
{code}
Here is a summary of affected versions and storage engines, where "yes" means we have been able to reproduce the index corruption: yes ...
1
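Where an affected background-built index is suspected, one way to check a collection is full validation, which cross-checks documents against index entries. A hedged PyMongo sketch (collection name taken from the description above; validate output fields vary by server version):
{code:python}
from pymongo import MongoClient

db = MongoClient().test
result = db.command("validate", "index_multi", full=True)
print(result["valid"])          # False if mismatched index entries were found
print(result.get("errors", []))
{code}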
Hi, the Intel team would like to be able to use buildUpdateUnlessChangedInBackgroundQuery from crud-store.js in order to get the updated fields when the user updates a document in Data Explorer. They should be exposed through hadron-document, as we would like to get access to them through hadron-document.
0
Currently we use rollback_test.js (... nodes, PSA) and rollback_test_deluxe.js (... nodes, PSSAA, for double rollbacks) for testing our rollback algorithm. Unfortunately, neither of these test fixtures will work with prepared transactions, because we cannot run prepared transactions on replica sets with arbiters. Since we will likely want more test coverage of rollback with prepared transactions, we should make a new test fixture that uses a PSS architecture. We can achieve something similar to the PSA architecture by lowering one secondary's priority and periodically stopping/restarting replication on that node.
0
Testing more with ..., I found a problem. Currently in the ... branch we skip unstable entries after a downgrade; it's this code in bt_page.c:
{code}
static inline bool
__unstable_skip(WT_SESSION_IMPL *session, const WT_PAGE_HEADER *dsk, WT_CELL_UNPACK *unpack)
{
    /*
     * We should never see a prepared cell: it implies an unclean shutdown followed by a
     * downgrade (clean shutdown rolls back any prepared cells). Complain and ignore the row.
     */
    if (F_ISSET(unpack, WT_CELL_UNPACK_PREPARE)) {
        __wt_err(session, EINVAL, "unexpected prepared cell found, ignored");
        return (true);
    }

    /*
     * Skip unstable entries after downgrade to releases without validity windows, and from
     * previous wiredtiger_open connections.
     */
    return ((unpack->stop_ts != WT_TS_MAX || unpack->stop_txn != WT_TXN_MAX) &&
        dsk->write_gen ... __wt_process.page_version_ts);
}
{code}
However, this means that if a page has nothing but unstable entries on it, it will be read in as an empty page. The verify code (and perhaps the runtime code, for all I know) isn't prepared to accept pages that have no entries on them. The verify failure looks like this:
{noformat}
[...] file:wt.wt, WT_SESSION.verify: __verify_row_int_key_order, ...: v->max_addr.size ...
[...] file:wt.wt, WT_SESSION.verify: __wt_abort, ...: aborting WiredTiger library
Thread ... (LWP ...) received signal SIGABRT, Aborted.
(gdb) where
#...  raise () from ...
#...  abort () from ...
#...  __wt_abort (session=...) at ...
#...  __verify_row_int_key_order (session=..., parent=..., ref=..., vs=...) at ...
#...  __verify_tree (session=..., ref=..., addr_unpack=..., vs=...) at ...
#...  __verify_tree (session=..., ref=..., addr_unpack=..., vs=...) at ...
#...  __wt_verify (session=..., cfg=...) at ...
{noformat}
What's happening is the verify code assumes there will always be a leaf page verified before an internal page is verified (because the walk is depth-first), and that page will have a key on it. That continues to be true in ..., but it's possible for ... to read in the leaf page and not find any entries, so there's no key, and it asserts. I'm not seeing an obvious solution, and I'm pretty sure we're not prepared to assert the ... release can handle such empty pages.
0
Automation Agent changelog, version ...:
- Support for performing a restore via automation agents.
- Support for rolling index builds.
- Send error codes in log messages.
- Support for configuring WiredTiger encrypted storage for MongoDB ...

Monitoring Agent changelog, version ...:
- Support for MongoDB ... config servers as replica sets.
1
What problem are you facing? My query is:
{code}
[
  { $match: { group: { $ne: null } } },
  { $group: { _id: "$group", amount: { $sum: "$quantity" }, count: { $sum: ... } } },
  { $limit: ... }
]
{code}
Result in the shell:
{noformat}
{ _id: "fruit", amount: ..., count: ... }
{ _id: "wjdbj", amount: ..., count: ... }
{ _id: "wer", amount: ..., count: ... }
{noformat}
Result with the Node.js driver:
{noformat}
MongoServerError: { message: "the match filter must be an expression in an object", stack: "MongoServerError: the match filter must be an expression in an object\n ..." }
{noformat}
This gives the proper result in the mongo shell, but not with the Node.js driver. What driver and relevant dependency versions are you using? The MongoDB version installed is ...; npm package mongodb v... Steps to reproduce: ...
1
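For reference, a PyMongo sketch of the pipeline as it is presumably meant to be sent: each stage is its own document, and the $match filter is an expression object. The collection name and the $sum/$limit arguments are hypothetical stand-ins for the elided values:
{code:python}
from pymongo import MongoClient

coll = MongoClient().test.fruits
pipeline = [
    {"$match": {"group": {"$ne": None}}},
    {"$group": {"_id": "$group",
                "amount": {"$sum": "$quantity"},
                "count": {"$sum": 1}}},
    {"$limit": 10},
]
for doc in coll.aggregate(pipeline):
    print(doc)
{code}
The server error in the report is typically produced when a $match stage is handed something other than a single filter document, so checking how the driver call assembles the stages is a reasonable first step.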
Currently the source tarball is produced as part of the source_push.py manual build step; it is a simple git archive of the current checkout. We should move the responsibility for producing the source archive into SCons, so that SCons can adorn the source archive with pre-interpolated versioning information (e.g. version.cpp). Additionally, the build system should be updated so that if it detects pre-interpolated versions of these files, it does not need to regenerate the interpolated files. This may require moving some items out of buildinfo.cpp (which should reflect the current build, and therefore should not be pre-interpolated) and into version.cpp.
0
The mongo shell command history is no longer persisted. No history in ...:
{noformat}
➜  mongo git:(master) ✗ ./mongo --nodb
MongoDB shell version: ...
> print(...)
...
> ^C bye
➜  mongo git:(master) ✗ more ~/.dbshell
{noformat}
Correct behavior in ...:
{noformat}
➜  mongo git:(master) ✗ ./mongo --nodb
MongoDB shell version: ...
> print(...)
...
> ^C bye
➜  mongo git:(master) ✗ more ~/.dbshell
print(...)
{noformat}
Possibly related to ...
1
This is the overarching task to implement a critical-section-like mechanism for participants in a DDL operation, to block CRUD and DDL access to a collection while it is undergoing DDL changes.
0
I would like to be able to connect to an authenticated replica set with the following command, when the _mongodb._tcp.mycluster... SRV record exists and a TXT record exists containing authSource=admin&replicaSet=mycluster:
{noformat}
mongo "mongodb+srv://mycluster.../test" --username cory --password
{noformat}
Using this command against the ... shell fails, because the test database is used as the auth source. This occurs because the URI parsing ignores the authSource in the TXT record when the username is not also specified in the URI. If I were to move the username to the URI, then the shell no longer prompts for a password and does not authenticate properly, i.e.:
{noformat}
mongo "mongodb+srv://mycluster.../test?username=cory" --password
{noformat}
The current workaround is to specify --authenticationDatabase admin on the command line instead of using the TXT record. Ideally, the first example would work, and the shell would use the authSource from the URI (via the TXT record) even though the username is specified on the command line and not explicitly in the URI. The second example also seems acceptable, but less consistent.
1
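A hedged PyMongo sketch of the intended behavior (host and credentials are hypothetical; mongodb+srv requires the dnspython package so the driver can resolve the SRV and TXT records):
{code:python}
from pymongo import MongoClient

# The TXT record supplies options such as authSource=admin&replicaSet=mycluster.
client = MongoClient(
    "mongodb+srv://mycluster.example.com/test",
    username="cory",
    password="secret",
)
print(client.admin.command("ping"))
{code}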
Currently the migrate clone phase uses upsert calls for incoming documents. This is inefficient and also unnecessary, because just before this stage we run the range deleter to clean up the chunk range.
0
The docs for maxScan imply it has the same function as cursor.limit(). In practice, it looks like maxScan is a hard limit on how many docs MongoDB will churn through to get results, whereas cursor.limit() will limit the output of the query no matter how many documents are scanned.
1
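A sketch of the distinction, assuming PyMongo 3.x against a server old enough to still support maxScan (it was removed in later releases); the collection and field names are hypothetical:
{code:python}
from pymongo import MongoClient

coll = MongoClient().test.items
# limit() caps the number of documents *returned*: always at most 5 results.
first_five = list(coll.find({"flag": True}).limit(5))
# max_scan() caps the number of documents *examined*: the query may return
# fewer results than actually match once 100 documents have been scanned.
scanned_hundred = list(coll.find({"flag": True}).max_scan(100))
{code}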
We recently upgraded from mongo-java-driver ... to ... and started getting DuplicateKey exceptions in a scenario where we do a remove for a key followed by an insert. All of our operations use WriteConcern.SAFE, so they should be serialized and the DuplicateKey exception should be impossible. I dug into the code and found the following change in ...:
{code}
// before
if (w instanceof Integer && ((Integer) w) ... || (w instanceof String && w != null)) ...
// after
if (w instanceof Integer && ((Integer) w) ... || w instanceof String) command.put("w", w);
{code}
It looks to me like the driver is not sending w with the getLastError command when w is ... (which corresponds to WriteConcern.SAFE or WriteConcern.ACKNOWLEDGED), but I believe it should. If it doesn't send w, mongod does not wait for the previous operation to finish, so we are effectively getting WriteConcern.NORMAL instead.
1
Expected result: the statement db.command(collstats: "c") should return ints for all numeric keys. Actual result: the statement db.command(collstats: "c") returns ... for the key "ok" and ... for the key "paddingFactor". This does not match the docs.
0
Collection.createIndexes returns the wrong value in ... Instead of returning the index spec from the server, it only returns the index name as a string. I'm filing this as a bug because the createIndexes function signature says it should return a Document. With ... on a standalone:
{code:java}
await c.db('test').collection('coll').createIndexes(...)
// { createdCollectionAutomatically: false, numIndexesBefore: ..., numIndexesAfter: ..., ok: ... }
{code}
With ... on a standalone:
{code:java}
await c.db('test').collection('coll').createIndexes(...)
// '...'
{code}
1
{panel:title=Issue status as of Jul ...}
Summary: The query planner in MongoDB contains a mechanism called "explode for sort" that allows for efficient non-blocking sorts using indexes, for queries that are unions of point intervals. The explode-for-sort mechanism creates a number of individual index scan stages that can later be combined with a merge sort to yield the final sorted result.

The current implementation of the query planner limits the number of such index scan stages to perform a merge sort over to ..., as a hardcoded value. This limit can be reached quickly and needs to be increased.

Example: for an index on ... and a query of the type
{code}
db.coll.find({a: {$in: [...]}, b: {$in: [...]}}).sort({c: ...})
{code}
the query planner would create ... index scan stages. This number is above the threshold of ..., so the query would not be eligible for the explode-for-sort optimization and could therefore not use the index to sort; instead it would execute an in-memory sort.

User impact: Users with sorted $in queries whose arrays lead to more than ... index scan stages might see significant performance impact due to the blocking sort.

Workarounds: None.

Affected versions: All production release versions since ... are affected by this issue.

Fix version: The fix is included in the ... production release.

Resolution details: The maximum number of index scan stages eligible for the explode-for-sort optimization has been increased from ... to ..., and additionally the constant has been made a query knob, exposed in ...
{panel}

Original description: I have these shard keys ... Observed that a query of the form {a: {$in: [...]}, _id: {$lt: value}}, sorted by _id, picks the _id index when there are no results that match, but of course where there are results the (a, _id) index is much better. Edit: the long $in list happens to have ... elements, just above the threshold. Example:
{noformat}
db.content.find({a: {$in: [...]}, _id: {$lt: ...}}) ...
"cursor" : "BtreeCursor _id_ reverse", "isMultiKey" : false, "n" : ..., "nscannedObjects" : ..., "nscanned" : ..., "nscannedObjectsAllPlans" : ..., "nscannedAllPlans" : ..., "scanAndOrder" : false, "indexOnly" : false, "nYields" : ..., "nChunkSkips" : ..., "millis" : ..., "indexBounds" : { "_id" : [...] },
"allPlans" : [
  { "cursor" : "BtreeCursor _id_ reverse", "isMultiKey" : false, "n" : ..., "nscannedObjects" : ..., "nscanned" : ..., "scanAndOrder" : false, "indexOnly" : false, "nChunkSkips" : ..., "indexBounds" : { "_id" : [...] } },
  { "cursor" : "BtreeCursor ... reverse", "isMultiKey" : false, "n" : ..., "nscannedObjects" : ..., "nscanned" : ..., "scanAndOrder" : true, "indexOnly" : false, "nChunkSkips" : ..., "indexBounds" : { "a" : [...], "_id" : [...] } }
], "server" : ..., "filterSet" : false,
"stats" : { "type" : "FETCH", "works" : ..., "yields" : ..., "unyields" : ..., "invalidates" : ..., "advanced" : ..., "needTime" : ..., "needFetch" : ..., "isEOF" : ..., "alreadyHasObj" : ..., "forcedFetches" : ..., "matchTested" : ..., "children" : [ { "type" : "IXSCAN", "works" : ..., "yields" : ..., "unyields" : ..., "invalidates" : ..., "advanced" : ..., "needTime" : ..., "needFetch" : ..., "isEOF" : ..., "keyPattern" : "{ _id: ... }", "boundsVerbose" : ..., "field" : ..., "isMultiKey" : ..., "yieldMovedCursor" : ..., "dupsTested" : ..., "dupsDropped" : ..., "seenInvalidated" : ..., "matchTested" : ..., "keysExamined" : ..., "children" : [] } ] }
{noformat}
Same query with _id changed to match more:
{noformat}
db.content.find({a: {$in: [...]}, _id: {$lt: ...}})
"cursor" : "BtreeCursor ... reverse", "isMultiKey" : false, "n" : ..., "nscannedObjects" : ..., "nscanned" : ..., "nscannedObjectsAllPlans" : ..., "nscannedAllPlans" : ..., "scanAndOrder" : true, "indexOnly" : false, "nYields" : ..., "nChunkSkips" : ..., "millis" : ..., "indexBounds" : { "a" : [...], "_id" : [...] },
"allPlans" : [
  { "cursor" : "BtreeCursor ... reverse", "isMultiKey" : false, "n" : ..., "nscannedObjects" : ..., "nscanned" : ..., "scanAndOrder" : true, "indexOnly" : false, "nChunkSkips" : ..., "indexBounds" : { "a" : [...], "_id" : [...] } },
  { "cursor" : "BtreeCursor _id_ reverse", "isMultiKey" : false, "n" : ..., "nscannedObjects" : ..., "nscanned" : ..., "scanAndOrder" : false, "indexOnly" : false, "nChunkSkips" : ..., "indexBounds" : { "_id" : [...] } }
], "server" : ..., "filterSet" : false,
"stats" : { "type" : "SORT", "works" : ..., "yields" : ..., "unyields" : ..., "invalidates" : ..., "advanced" : ..., "needTime" : ..., "needFetch" : ..., "isEOF" : ..., "forcedFetches" : ..., "memUsage" : ..., "memLimit" : ..., "children" : [ { "type" : "KEEP_MUTATIONS", ..., "children" : [ { "type" : "FETCH", ..., "alreadyHasObj" : ..., "forcedFetches" : ..., "matchTested" : ..., "children" : [ { "type" : "IXSCAN", ..., "keyPattern" : "{ a: ..., _id: ... }", "boundsVerbose" : ..., "isMultiKey" : ..., "keysExamined" : ..., "children" : [] } ] } ] } ] }
{noformat}
Noticed this when queries were slow, and it turned out it was using the _id plan; profiling showed that.
0
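A PyMongo sketch of a query shape that is eligible for explode-for-sort (names and values hypothetical): a union of point intervals on the leading index field, sorted by a trailing index field. When the number of generated index scans exceeds the planner's threshold, the winning plan falls back to an in-memory SORT stage instead of a merge sort over index scans:
{code:python}
from pymongo import MongoClient

coll = MongoClient().test.content
coll.create_index([("a", 1), ("_id", 1)])
plan = (coll.find({"a": {"$in": [1, 2, 3]}, "_id": {"$lt": 100}})
            .sort("_id", 1)
            .explain())
# On modern servers, look for SORT vs. SORT_MERGE in the winning plan.
print(plan["queryPlanner"]["winningPlan"])
{code}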
Replacing InitialSyncSharedData with our own equivalent would likely be helpful towards implementing the retry logic for the tenant cloners.
0
To whom this may concern: we followed the build process, but instead of getting primary and replica sets, we are getting all primary sets that cannot talk to each other. We tried both DNS and IPs and are getting the same result. Please respond to developers@xen.com or this thread. Thanks, Xen.
1
For release on Wednesday, May ... Version ... released:
- Delete MongoDB binaries on disk that are no longer used by any managed process.
- Fix for management of Monitoring and Backup Agents by Automation Agents on Windows.
- Validate upfront that MongoDB processes are running as the same user as the Automation Agent.
1
moveChunk takes a secondaryThrottle: true param. The balancer setting can have a secondaryThrottle: true setting.
1
Since there might be many operations in the same transaction, the oplog batcher needs to count in-transaction operations towards the batch size for the "commit" of unprepared transactions, to keep the batch size of actually applied operations more accurate. In contrast, "prepare" and "commit" for prepared transactions are always applied in their own batches. We will count the number of operations by keeping the total number of operations in the "commitTransaction" command; the count doesn't include the commit itself. With the new "count" field, the "o" field of a commitTransaction oplog entry looks like:
{noformat}
{ commitTransaction: ..., prepared: false, count: ... }
{noformat}
Operations in the same transaction must be applied in the same batch, even if the transaction size exceeds the current batch size. When this happens, the transaction can be scheduled into the next batch if it's not the first operation in the batch.
0
There is a failing test case in ... where a cursor is opened on the most recent WiredTiger checkpoint, and that cursor is not returning the expected content. We need to ensure that cursors opened on a checkpoint return the correct data.
0
We are removing all optional configuration options from the configuration file, to avoid conflicts on package updates. These options therefore need to be documented outside of the configuration file. Please backport to MMS OnPrem ... for consistency.
1
format-stress-ppc-zseries-test failed on Ubuntu ... (ppc). Host: ..., wiredtiger. Diff: Merge branch 'develop' into ..., Jun ... UTC. Evergreen subscription: ...; Evergreen event: ...; task logs: ...
0
"For queries that include a sort operation without an index, the server must load all the documents in memory to perform the sort and will return all documents in the first batch." This sounds wrong.
1
The following query is eligible for a whole-IXSCAN solution on any index prefixed by "a":
{code}
db.c.find(..., {b: 1, _id: ...}) ...
{code}
However, because we break in this block as soon as any index which provides the sort is available, we miss out on covered plans. That is, we only consider the first whole-IXSCAN solution we see. So, for example, if I were to run:
{code}
db.c.drop()
db.c.createIndex({a: ...})
db.c.createIndex({a: ..., b: ...})
db.c.find(..., {b: 1, _id: ...}) ...
{code}
we would get a non-covered plan, because the "a" index comes first in the list of indexes in the catalog. But switch the order we create the indexes in:
{code}
db.c.drop()
db.c.createIndex({a: ..., b: ...})
db.c.createIndex({a: ...})
db.c.find(..., {b: 1, _id: ...}) ...
{code}
and now we have a covered plan, because the (a, b) index came first when looking for a whole-IXSCAN solution. Note that this is a separate issue from ..., which is a problem when using a whole-IXSCAN solution while the filter is non-empty; this ticket is relevant for cases where the filter is empty. Ideas to fix this:
- Delete that break statement (and the similar one for reverse index scans). This will cause both plans to go through the multi-planner and plan ranker; since both plans will have the same output/work ratio, the covered one will be chosen by the plan ranker.
- If we want the planner to only ever generate one whole-IXSCAN solution, we can have it generate all of the possible ones and then see if any are covered, picking that one. This would save the work of multi-planning (which is basically useless for this case, since there's no filter and all of the indexes are equally selective), but it would add yet another special code path to the planner.
0
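Since the exact query in the report is elided, here is one hedged PyMongo shape with an empty filter where coverage depends on index order: a projection of b with a sort on a can be answered by a whole index scan of either index, but only { a: 1, b: 1 } covers it:
{code:python}
from pymongo import MongoClient

c = MongoClient().test.c
c.drop()
c.create_index([("a", 1), ("b", 1)])  # created first: can cover the projection
c.create_index([("a", 1)])            # provides the sort but is not covering
plan = c.find({}, {"b": 1, "_id": 0}).sort("a", 1).explain()
# Covered: PROJECTION over IXSCAN with no FETCH stage in between.
print(plan["queryPlanner"]["winningPlan"])
{code}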
We want to track the total number of transactions started since the last server startup. To do this, we should add a totalStarted counter field to ServerTransactionsMetrics that tracks this metric; we will increment this counter every time a new multi-document transaction starts. Additionally, we will need to add a totalStarted field to the transactions_stats.idl class, so that we can serialize this counter to the serverStatus output object.
0
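A hedged sketch of reading the proposed counter once it exists, using the field name proposed in this ticket (the serverStatus section name is an assumption):
{code:python}
from pymongo import MongoClient

status = MongoClient().admin.command("serverStatus")
print(status["transactions"]["totalStarted"])
{code}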
useradd and groupadd are used in the RPMs, but these binaries aren't listed as Requires for the RPM. This will likely only manifest when this RPM is being added at actual installation time, as opposed to being added as a post-install kickstart item or after the OS is installed.
1
I have a secondary up and running. Running the following code:
{code}
try {
    CommandResult res = db.command(new BasicDBObject("ismaster", ...), ReadPreference.secondaryPreferred());
} catch (Exception e) {
    getLogger().log(Level.INFO, e.getMessage(), e);
}
{code}
This errors with the following:
{code}
INFO: can't find a master
com.mongodb.MongoException: can't find a master
    at ...
    at ...
{code}
It also fails with ReadPreference.primaryPreferred() and ReadPreference.secondaryPreferred(), but it works fine using slaveOk:
{code}
CommandResult res = node.getServer().db.command(new BasicDBObject("ismaster", ...), Bytes.QUERYOPTION_SLAVEOK);
{code}
0
You cannot use a command within a function definition with the variants field. As a workaround, the variants field does still work for regular commands or for function references. That is:
{code}
- func: "generate and upload coverage"
  variants:
    - ...
{code}
will still work.
0
Starting in ..., all database metadata (indexes, collections) must be accessed via the getIndexes and getCollection helpers rather than reaching directly into the system collections. The mgo driver will do the right thing automatically. See ...
1
In the documentation at ..., one of the valid types is "decimal", but this isn't documented.
0
The Sphinx internal link identifiers, like dbcmd-mapreduce, are showing up in the title attributes of <a> tags, e.g. {{behavior, including the mapReduce, group, and ...}}. The title attribute should be the value of the short title of the page, or the value of the text of the target link, but not the id of the link.
0
Example failure: OperationException ... note from execCommand: { ok: ..., errmsg: "not master" }, thrown in the test body. Example failure: { code: ..., errmsg: "not master anymore while waiting for replication - this most likely means that a step down occurred while waiting for replication" }, thrown in the test body. Example failure: mo... followed by an SEH exception.
1
In certain scenarios, a replica set with priorities can end up in a state where a primary cannot be elected anymore until some election-causing event occurs. I.e., for data nodes A, B, C and arbiters D, E (B and D being on the same machine), the following sequence of events caused this to occur (priority ... for A and C, priority ... for B):

||timeline||A state||B state||C state||comments||
|t1|PRIMARY|SECONDARY|SECONDARY| |
|t2|not reachable|PRIMARY|not reachable|A and C not reachable from E(arb)/B; B selected primary|
|t3|RECOVERING|SECONDARY|SECONDARY| |
|t4|PRIMARY|SECONDARY|SECONDARY|B stepped down because of lower priority; E(arb) not able to see any primary|
|t5|not reachable|SECONDARY|not reachable|A, C not reachable from B and D(arb)|
|t6|not reachable|PRIMARY|not reachable|B elected primary, since A was not reachable from B, D(arb), E(arb)|
|t7|SECONDARY|PRIMARY|SECONDARY|A relinquished primary since B was more recently elected primary; A syncing to C|
|t8|ROLLBACK|PRIMARY|SECONDARY| |
|t9|RECOVERING|SECONDARY|SECONDARY|A, while still recovering, steps down B, which has lower priority and is only seconds ahead of C|
|t10|SECONDARY|SECONDARY|SECONDARY|A, C not electing because they are not freshest (which implies, very likely, that B has the latest optime, since no rollback was seen when it was stepped down by A); B does not elect itself, saying E(arb) will veto for lower priority; B is ahead of C by a few seconds|
|t11 (xx)|down|SECONDARY|PRIMARY|shutting down A causes a sync target change for C, a rollback, followed by a fresh election in the replica set|
0
For release on ...

Automation Agent changelog, version ...:
- Fix issue with rolling storage engine upgrades for authenticated replica sets that include an arbiter.
- Improved handling of rolling operations for replica sets that contain more than one ...
- Fix issue in which the Automation Agent could not repair users that were imported into an existing deployment.
- Ensure that credentials used during an "import existing" job are not cached or reused.

Monitoring Agent changelog, version ...:
- Ensure that profiler data collection is not impacted by clock skew.
- Stop collecting database-specific recordStats information.

Backup Agent changelog, version ...:
- Minor optimization to explicitly set Content-Type on HTTP requests.
1
Order of completion and status:
- compile unscheduled, sharding unscheduled
- compile passes, sharding fails
- compile and sharding are now scheduled
- compile passes
- compile is now unscheduled
- sharding never runs
0
Hi, we are getting an issue in the user-creation call of the Ruby driver; to be exact, the issue occurs in this particular line ... Error thrown:
{noformat}
ERROR: undefined method `...' for ...
{noformat}
Stack trace:
{noformat}
I, [...]  INFO -- : Creating user for db ...
I, [...]  INFO -- : Backtrace:
block in update_cluster_time
synchronize
update_cluster_time
login
block in authenticate
handle_auth_failure
authenticate
connect
ensure_connected
write
deliver
block in dispatch
publish_command
dispatch
block in dispatch_message
with_connection
with_connection
dispatch_message
execute
execute
block in create
with_session
with_session
create
{noformat}
1
... requires drivers to make a distinction between command error responses returning an error code and those not providing one. While implementing this, I noticed that error code ... is automatically changed to MONGOC_ERROR_QUERY_FAILURE. Quoting from the pull request (link to comment):
{quote}
_mongoc_cmd_check_ok and _mongoc_cmd_check_ok_no_wce rely on _parse_error_reply to extract message and code information from a bson_t reply. If no code is found in the reply document, a code of ... is returned. The two check methods then take this information and build a bson_error_t of it, but only after changing the error code to MONGOC_ERROR_QUERY_FAILURE if it was ..., essentially defaulting to that error code. _mongoc_cmd_check_ok_no_wce and _mongoc_cmd_check_ok should not change the error code; instead, this logic should be moved to wherever MONGOC_ERROR_QUERY_FAILURE is needed.
{quote}
... pointed out that other code may assume an error code of ... to mean that everything was successful, so this will have to be covered. A viable strategy may be to expand bson_error_t to expose whether an error code was given. The code field in that struct is declared as ..., so we can't use ... to show that no code was provided. Adding a has_code field to bson_error_t would work, but this may cause unwanted side effects in public APIs.
0
I discovered this problem while trying to solve a problem with a Nagios MongoDB monitoring plugin that was returning "CRITICAL: general MongoDB error: codec can't decode bytes in position ...: illegal encoding" while checking several MongoDB statistics from the serverStatus command. See the attached output from serverStatus using a browser. First, note the interesting (and large) array of fields in locks and recordStats; the documentation doesn't explain that. Note near the bottom of each field (alphabetically) there are several keys that appear to be binary and possibly badly encoded. I'm supposing that these badly formed strings are breaking the Python string codec in the monitoring plugin. Restarting the MongoDB instance resolves the issue by removing nearly all of the locks and recordStats.
0
This is such a letdown; I don't know why you'd do this. In the section "Install MongoDB manually" you create fantastic steps, and then, when you have elevated everyone's hopes and lifted the expectation level, you bring them crashing down with an indecipherable step which talks about editing a PATH or creating symbolic links. How is anyone supposed to know how to do that? So I've done steps ... and ..., but now I need to spend time (hours perhaps, or days) trying to figure out how to do step ... Very disappointing.
1
{code}
Fri Apr ... [conn...] foo.a  Assertion failure ... la ... src/mongo/db/btree.h ...
Fri Apr ... [conn...] update foo.a  query: { _id: ... } update: { $set: { x: ... } } exception: assertion ...
{code}
1
We have been running into a pretty bad bug where we noticed some objects were failing to be inserted into Mongo. We were getting the following stack trace:
{noformat}
System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at MongoDB.Bson.IO.BsonWriter.CheckElementName(String name)
   at MongoDB.Bson.IO.BsonWriter.WriteName(String name)
   at ...(BsonWriter ..., Type nominalType, Object value, IBsonSerializationOptions options)
   at MongoDB.Bson.Serialization.BsonClassMapSerializer.SerializeMember(BsonWriter bsonWriter, Object obj, BsonMemberMap memberMap)
   at MongoDB.Bson.Serialization.BsonClassMapSerializer.Serialize(BsonWriter bsonWriter, Type nominalType, Object value, IBsonSerializationOptions options)
   at MongoDB.Bson.Serialization.BsonClassMapSerializer.SerializeMember(BsonWriter bsonWriter, Object obj, BsonMemberMap memberMap)
   at MongoDB.Bson.Serialization.BsonClassMapSerializer.Serialize(BsonWriter bsonWriter, Type nominalType, Object value, IBsonSerializationOptions options)
   at MongoDB.Driver.Internal.MongoInsertMessage.AddDocument(BsonBuffer buffer, Type nominalType, Object document)
   at MongoDB.Driver.Operations.InsertOperation.Execute(MongoConnection connection)
   at MongoDB.Driver.MongoCollection.InsertBatch(Type nominalType, IEnumerable documents, MongoInsertOptions options)
   at MongoDB.Driver.MongoCollection.Insert(Type nominalType, Object document, MongoInsertOptions options)
{noformat}
When researching this issue, we came across ..., which seemed like it was the issue, but we were still experiencing it. Upon looking at the code, we tracked this down to CheckElementName: name is not checked for an empty string, just null; it's just trying to access the first character, of which there are none. The following unit test reproduces this issue in the smallest amount of code possible:
{code}
public void TestEmptyDictionaryKeySerialization()
{
    var dictionary = new Dictionary<string, string> { { "k", "v" }, { "", "emptyKey" } };
    using (var buffer = new BsonBuffer())
    using (var bsonWriter = BsonWriter.Create(buffer, new BsonBinaryWriterSettings { CheckElementNames = true }))
    {
        var serializer = new MongoDB.Bson.Serialization.Serializers.DictionarySerializer<string, string>();
        serializer.Serialize(bsonWriter, dictionary.GetType(), dictionary, serializer.DefaultSerializationOptions);
    }
}
{code}
1
The final phase of an index build currently misinterprets the replica set mode as a primary/standalone when the server is restarted in recoverFromOplogAsStandalone maintenance mode. In this startup mode, the index build should run as though it is applying the oplog on a secondary or during recovery.

Old description (old title: "IndexCoordinator startup does not expect recoverFromOplogAsStandalone to be turned on"):

In creating a test for a different bug, I got the following invariant:
{noformat}
... REPL ... command ...
... INDEX ... Index build: inserted ... keys from external sorter into ...
... STORAGE ... Index build: waiting for next action before completing final ...
... STORAGE ... Index build: received commit ...
... STORAGE ... Index build: committing from oplog ...
... Invariant failure, attr.expr: isMaster || replState->indexBuildState.isCommitPrepared(), msg: Index build: ... index build state: prepare ...
... after invariant failure
... CONTROL ... Fatal message, attr.message: "Got signal: ... (Aborted)."
... CONTROL ... smp Mon May ... UTC ...
... CONTROL ... (backtrace lines)
mongo program was not running at ..., process ended with exit code ...
{noformat}
0
At present, SlotBasedStageBuilder's constructor has some logic that searches the QuerySolutionNode tree to retrieve some information from the CollectionScanNode (if one exists) prior to SlotBasedStageBuilder::build() being called. In the future, if we need to gather other information from the QuerySolutionNode tree prior to build() being called, we should consider adding a formal analysis pass to SlotBasedStageBuilder.
0
I do not have this problem with driver ... I've upgraded to the ... driver series twice, with the same result: "timed out waiting on socket read" for queries on large collections. Interestingly, I did not have this problem running in my development environment on my Mac. This is blocking me from moving up to ..., and therefore from moving to MongoDB ... There is a thread on the mongodb-user group to which I recently added comments.
1
Instead of unconditionally establishing a snapshot right away when beginning a UnitOfWork, only do this when acquiring a new optime. This keeps the transaction lifetime as short as possible and should fix ..., while still avoiding starting transactions inside of the getNextOpTimes mutex.
1
I initialize the client as so: mongoClient = new MongoClient(new ...). I then do:
{code}
mongoClient.open(function(err, mongoClient) {
  var ...
  console.log(result); // is true even though the password is wrong
  if (err) {
    // never arrives here
  } else {
    // always says authentication is correct, for a valid user with an
    // incorrect password, even though I know it is wrong
  }
});
{code}
Background: a user admin has been created, and I am trying to log in a user to a non-admin database; I created the user on that database and assigned a username and password. I can log in with the right username and password via the command line. My package.json says mongodb ...
1
PyMongo's GridOut class has an _id property which MotorGridOut and AsyncIOMotorGridOut forgot to wrap.
1
This is an attempt to guard against calling getters on temporaries, a harmless thing to do that can't be prevented by this technique anyway. Getting rid of this stuff will simplify the IDL generator and give us generated classes with behavior that is less surprising. Thx for the link.
0
It looks like createTaskDirectory is missing a call to taskConfig.Expansions.Put("workdir", newDir). This means that taskConfig.Expansions.Get("workdir") will still refer to agt.taskConfig.Distro.WorkDir instead of the unique subdirectory that was created.
1
The crux of the problem occurs when I try to add the user using a localhost connection to mongos. I get:
{noformat}
mongos> db.addUser("zz...", ...)
{ "user" : "z...", "readOnly" : false, "pwd" : "...", "_id" : ... }
Sep ... uncaught exception: couldn't add user: SyncClusterConnection::insert prepare failed: { errmsg: "need to login", ok: ... } ... { errmsg: "need to login", ok: ... } ... { errmsg: "need to login", ok: ... }
{noformat}
At this point we are running with security mode enabled (e.g. the keyFile switch) but have not yet added any admin users anywhere. Here are some key details:
- This happened with MongoDB for Windows, versions ... and ...
- Our topology: we have three servers, each running a mongos, a config server, and a mongod replica member (mongos + mongod config + mongod replica, times three). All replica instances belong to a single replica set, rs...
- This error occurs with or without a shard first being added.
- The problem only seems to surface when the servers are actually separate physical machines or separate virtual machine instances; in other words, if all three sets of processes run on one machine, then this error is not encountered.

Reproducing the problem: we have built a set of command-line batch scripts for Windows that replicate the problem (see the attachments). The scripts just need to be copied next to your mongo binaries. For example, here's how we run it, following the prompts at the command line: execute runA.bat, execute runB.bat, execute runC.bat. runA.bat is identical to runB and runC, except that it will also create a mongos process and attempt to add a user. If you will be trying it out, be sure to update folder paths, IP addresses, and ports as appropriate for your environment by modifying all three .bat scripts and the initShardABC.js script. For comparison purposes, if you just run runShardInfrastructureWithAuth.bat, it will run everything on a single machine, and you will not encounter this error.
1
When logging messages to a file, MongoDB summarizes long log messages rather than truncating, because the end of the message often contains valuable information, such as query execution time. MongoDB should do the same when writing to syslog, but use a maximum line length appropriate for syslog.

Original description: When a long line is logged, it is truncated to ensure that the logs don't bloat. However, syslog also truncates long lines, and has a lower max length, I believe; as a result, the ms duration is missing. This line in ... should either be configurable or should be syslog-aware.
0
We are getting the "no block given (yield)" error when trying to retrieve a field called "building" in our Contact model. We are able to access all the fields but not this field. We are using Mongoid to read data from MongoDB. Please let me know if this is a bug in MongoDB or Mongoid.
1
Consider a node initial syncing from a primary. The initial syncing node, before the oplog application phase, has the following collection-to-UUID mappings:
{noformat}
test.foo -> does not exist
test.bar -> UUID ...
test.shardedColl -> UUID ...
{noformat}
It also happens to be that this is the correct goal state. Now consider the sequence of oplog entries that are played as part of the oplog application step of initial sync:
{noformat}
{ ts: ..., op: "c", ns: "test.$cmd", ui: ..., wall: ..., o: { renameCollection: "test.bar", to: "test.shardedColl", stayTemp: false, dropTarget: true } }
{ ts: ..., op: "c", ns: "test.$cmd", ui: ..., wall: ..., o: { renameCollection: "test.foo", to: "test.bar", stayTemp: false, dropTarget: true } }
{ ts: ..., op: "c", ns: "test.$cmd", ui: ..., wall: ..., o: { create: "foo", idIndex: { v: ..., key: { _id: ... }, name: "_id_", ns: "test.foo" } } }
{ ts: ..., op: "c", ns: "test.$cmd", ui: ..., wall: ..., o: { renameCollection: "test.bar", to: "test.shardedColl", stayTemp: false, dropTarget: ... } }
{ ts: ..., op: "c", ns: "test.$cmd", ui: ..., wall: ..., o: { renameCollection: "test.foo", to: "test.bar", stayTemp: false, dropTarget: true } }
{noformat}
Those operations result in the initial syncing node only having test.shardedColl, while missing the expected test.bar collection. I can make statements about behavior changes that fix this specific sequence, but I can't speak to their correctness globally. I do have two observations that might illuminate what's going wrong. First, ... notes that when collections have UUIDs, a replicated renameCollection's dropTarget should never be true: it should be changed to false if the target did not exist, or it should be the UUID of the collection that was dropped. Second, and perhaps more subtly, in this sequence (which is true when using applyOps, as well as what was observed on the initial syncing node), the creation of test.foo fails because the UUID already exists, from the collection that was dropped as the target of the previous rename:
{noformat}
I COMMAND ... CMD: create test.foo ... existing collection with conflicting UUID ... is in a drop-pending state ...
{noformat}
One clarification about the reproduction script: the first two oplog entries are simply to recreate the state of the initial syncing node before it began oplog application. Had a primary produced that sequence of oplog entries (including the initial creates), it would be a very different problem.
0
Normally, when a measurement field x contains an array, the control.min.x field on the bucket contains the min for each position in the array:
{code}
db.events.find()
// { x: [...] }, { x: [...] }
db.system.buckets.events.find()
// { control: { min: { x: [...] }, max: { x: [...] } } }
{code}
But if x also contains non-arrays, then control.min.x or control.max.x will be a non-array:
{code}
db.events.find()
// { x: ... }, { x: [...] }
db.system.buckets.events.find()
// { control: { min: { x: ... }, max: { x: ... } } }
{code}
The predicate pushdowns don't account for this, so multikey queries can incorrectly exclude this bucket:
{code}
db.events.find({x: {$lt: ...}})
// no results, because internally:
db.system.buckets.events.find({$expr: {$lt: [...]}})
{code}
A similar thing can happen if x is a mixture of objects and non-objects:
{code}
db.events.find()
// { time: ..., x: ..., _id: ... }, { time: ..., x: { y: ... }, _id: ... }
db.events.find({"x.y": {$gt: ...}})
// no results
{code}
This happens because, although control.max.x is the max of x, control.max.x.y is not the max of x.y: control.max.x.y is missing, but missing < ISODate(...).
1
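A PyMongo sketch of the mixed scalar/array setup described above (collection and field names hypothetical; requires a server version with time-series collections):
{code:python}
from datetime import datetime

from pymongo import MongoClient

db = MongoClient().test
db.drop_collection("events")
db.create_collection("events", timeseries={"timeField": "time"})
db.events.insert_many([
    {"time": datetime.utcnow(), "x": 5},        # scalar measurement
    {"time": datetime.utcnow(), "x": [1, 10]},  # array measurement, same bucket
])
# With the bug, the pushed-down bucket predicate can skip the bucket entirely,
# so this may miss the matching array element.
print(list(db.events.find({"x": {"$lt": 2}})))
{code}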
What problem are you facing? The callback is called twice when an error occurs when using insertOne (I suspect it'll affect all collection APIs). What driver and relevant dependency versions are you using? mongodb driver ..., database version ... Steps to reproduce: call insertOne after the connection has been closed, to force an error; notice that the callback is executed twice. I believe the error is here, drivers/node-mongodb-native/collection.js, line ...:
{code:java}
} catch (error) {
  // Collection operation may throw because of max bson size, catch it here
  // See ...
  if (typeof callback === 'function') {
    callback(error); // <-- this should return
  } else {
    this.conn.emit('operation-end', { id: opId, modelName: this.modelName, collectionName: this.name, method: i, error: error });
  }
  if (typeof lastArg === 'function') {
    lastArg(error); // <-- this calls the cb again
  } else {
    throw error;
  }
}
{code}
Higher up the file (line ...), you can see that the callback already calls lastArg:
{code:java}
callback = function collectionOperationCallback(err, res) {
  if (err != null) {
    this.conn.emit('operation-end', { id: opId, modelName: this.modelName, collectionName: this.name, method: i, error: err });
  } else {
    this.conn.emit('operation-end', { id: opId, modelName: this.modelName, collectionName: this.name, method: i, result: res });
  }
  return lastArg.apply(this, arguments); // <-- already calls lastArg
};
{code}
1
Accumulate features into draft release notes; revise as features are added/removed.
0
Hi, there seems to be a typo here: "update exiting deployment". I'm sure that you had wanted to say "update existing deployment".
0
It should follow what pail does and export environment variables instead, and remove the AWS ..., since we can't guarantee that other projects will clean up after themselves.
0
I accidentally tried to import a ... document and got an error which was hard to decipher. The server log did show something more meaningful, so even pointing me there would be nice.
{noformat}
mongoimport --verbose --collection spot ...
filesize: ... bytes
using fields: ...
connected to: localhost
ns: test.spot
connected to node type: standalone
using write concern: { j: false, fsync: false }
using write concern: { j: false, fsync: false }
test.spot ...
test.spot ... error inserting documents: write tcp ...: write: broken pipe
test.spot ... imported ... document

tail /usr/local/var/log/mongodb/mongo.log
I NETWORK  ... end connection ... (... connections now open)
I NETWORK  ... connection accepted from ... (... connection now open)
I NETWORK  ... recv(): message len ... is invalid. Min ... Max: ...
(the end connection / connection accepted / invalid message len lines repeat several times)
{noformat}
0
johnliu: running "evergreen host list all" fails with: runtime error: invalid memory address or nil pointer dereference.
0
In one way, it is a nice feature to create a db or collection without any prior error. Is there any on/off switch or strict mode that will stop this feature? Because below is the situation that puts my production database into problems.
1
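A sketch of the behavior in question: databases and collections spring into existence on first write, so any guard has to live in the application (names hypothetical; as far as I know there is no server-side strict mode for this):
{code:python}
from pymongo import MongoClient

client = MongoClient()
db = client["prod"]  # nothing created yet
if "orders" not in db.list_collection_names():
    raise RuntimeError("refusing to write: collection does not exist")
db["orders"].insert_one({"sku": "a1"})  # this write would otherwise create both
{code}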
Do we still need to remove the size method from Ruby's Symbol class, or is there a way to contain this removal to codebases running Ruby ...? I came across this issue while trying to call size on a Symbol in the Rails console (Ruby ..., Rails ...).
0
{panel:title=Issue status as of Nov ...}
Issue description and impact: Matching by type on a looked-up field after a $lookup stage in an aggregation operation leads to incorrect results, because the match on $type is pushed into the $lookup without removing the reference to the new field created by $lookup. This failure is unique to the $type operator. For example, the following operation matches on lookedUp.name after a $lookup into the lookedUp field:
{noformat}
{ $lookup: { from: "second", localField: "field", foreignField: "foreignField", as: "lookedUp" } },
{ $unwind: "$lookedUp" },
{ $match: { "lookedUp.name": { $type: "string" } } }
{noformat}
MongoDB incorrectly optimizes the aggregation pipeline by attempting to match on lookedUp.name in the second collection, where lookedUp.name does not exist. The correct behavior is for the optimized $lookup stage to match on "name" instead.

Diagnosis and affected versions: All versions since ... are affected by this issue. Running the explain method will help identify the incorrect matching stage:
{noformat}
db.first.explain().aggregate([{ $lookup: { from: "second", localField: "field", foreignField: "foreignField", as: "lookedUp" } }, { $unwind: "$lookedUp" }, { $match: { "lookedUp.name": { $type: "string" } } }])
{noformat}
If the query optimizer moves the $match stage into the $lookup stage while still referencing the field created by $lookup, the operation is impacted, for example:
{noformat}
$lookup: { from: "second", as: "lookedUp", localField: "field", foreignField: "foreignField", unwinding: { preserveNullAndEmptyArrays: false }, matching: { "lookedUp.name": { $type: [...] } } }
{noformat}
where
{noformat}
matching: { "lookedUp.name": { $type: [...] } }
{noformat}
should be
{noformat}
matching: { "name": { $type: [...] } }
{noformat}
Remediation and workarounds: This fix will be included in ... To work around this issue, you can add a new field (newField) that represents lookedUp.name, run the $match on newField, then $unset newField:
{noformat}
db.first.aggregate([{ $lookup: { from: "second", localField: "field", foreignField: "foreignField", as: "lookedUp" } }, { $unwind: "$lookedUp" }, { $addFields: { newField: "$lookedUp.name" } }, { $match: { newField: { $type: "string" } } }, { $unset: "newField" }])
{noformat}
{panel}

Original description: A $lookup stage followed by a $match on the $type of a looked-up field does not work; it seems to be using the wrong field name when the $match gets pushed inside the $lookup stage. Steps to reproduce: create two collections as follows: collection "first" contains { field: "value" }; collection "second" contains { foreignField: "value", name: "Thomas" }. Run the following aggregation on collection "first":
{code:java}
db.first.aggregate([{ $lookup: { from: "second", localField: "field", foreignField: "foreignField", as: "lookedUp" } }, { $unwind: "$lookedUp" }, { $match: { "lookedUp.name": { $type: "string" } } }])
{code}
In the explain you can see that the $match gets pushed into the $lookup stage but uses the original field name lookedUp.name, which doesn't exist in the remote collection:
{code:java}
$lookup: { from: "second", as: "lookedUp", localField: "field", foreignField: "foreignField", unwinding: { preserveNullAndEmptyArrays: false }, matching: { "lookedUp.name": { $type: [...] } } }
{code}
If a non-$type query is used (e.g. an equality match), the field instead is just "name", which is the correct behavior:
{code:java}
$lookup: { from: "second", as: "lookedUp", localField: "field", foreignField: "foreignField", unwinding: { preserveNullAndEmptyArrays: false }, matching: { name: { $eq: "Thomas" } } }
{code}
Expected results: the aggregation returns the document, because the field lookedUp.name is a string. Actual results: the aggregation returns no documents. — Hat tip to ..., who discovered this bug.
1
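The documented workaround, as a PyMongo pipeline using the ticket's collection names ($addFields/$match/$unset keeps the $type match from being pushed into the $lookup):
{code:python}
from pymongo import MongoClient

db = MongoClient().test
docs = db.first.aggregate([
    {"$lookup": {"from": "second", "localField": "field",
                 "foreignField": "foreignField", "as": "lookedUp"}},
    {"$unwind": "$lookedUp"},
    {"$addFields": {"newField": "$lookedUp.name"}},
    {"$match": {"newField": {"$type": "string"}}},
    {"$unset": "newField"},
])
print(list(docs))
{code}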
{noformat}
$ curl -O ... distribute_setup.py
$ sudo ln -s /usr/local/bin/...
$ python
Python ... (..., May ...) on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pymongo
Traceback (most recent call last):
  File "...", line ..., in <module>
  File "...", line ..., in <module>
    from pymongo.connection import Connection
  File "...", line ..., in <module>
    from pymongo.mongo_client import MongoClient
  File "...", line ..., in <module>
    from ... import b
  File "...", line ..., in <module>
    from ...codec import ...
ImportError: No module named codec
{noformat}
1
Scenario: a sharded system has uneven chunk distribution. The user removes a shard (setting it to drain) and adds new shards. The balancer prioritizes moving chunks to reach equilibrium before draining the removed shard, and this priority is not configurable. Two alternative solutions:
- Stop the balancer; write a script to manually move chunks off of the draining shard using the moveChunk command; restart the balancer.
- Set the maxSize on the new shards to some low value so they cannot accept more chunks, which should stop the balancing operation and allow draining to continue (with chunks likely moved to the "full" shards). Restore the original maxSize value afterwards. When changing maxSize in the config database, you probably need to run the flushRouterConfig command to refresh the mongos versions; versions of mongos prior to ... won't support this, so you'd need to restart mongos.
1
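A hedged PyMongo sketch of the first workaround (shard names and namespace are hypothetical, and on recent server versions the chunk metadata is keyed differently, so treat this as an outline only):
{code:python}
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # connect to a mongos
client.admin.command("balancerStop")
try:
    # Move every chunk currently on the draining shard somewhere else.
    for chunk in client.config.chunks.find({"shard": "drainingShard"}):
        client.admin.command("moveChunk", "db.coll",
                             bounds=[chunk["min"], chunk["max"]],
                             to="newShard0")
finally:
    client.admin.command("balancerStart")
{code}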
{panel:title=Issue status as of Dec ...}
Issue description and impact: This issue causes incorrect checkpoint metadata to sometimes be recorded by MongoDB versions ... and ... Starting in versions ... and ..., WiredTiger uses that incorrect metadata at startup, which can lead to data corruption. Upgrading directly to any MongoDB version ... or ... from MongoDB versions ... and ... can leave data in an inconsistent state. This ticket currently tracks the implementation of a safe direct upgrade path to a future version of MongoDB; this fix is included starting in MongoDB versions ... and ...

Diagnosis: This issue can cause a duplicate key error on startup that prevents the node from starting. However, nodes can also start successfully and still be impacted. If a node starts successfully, it may still have been impacted by:
- Data inconsistency: within documents, specific field values may not correctly reflect writes that were acknowledged to the application prior to the shutdown time, and documents may still exist which should have been deleted.
- Incomplete query results: lost or inaccurate index entries may cause incomplete query results for queries that use impacted indexes.
- Missing documents: documents may be lost on impacted nodes.

Impact on a node that starts successfully can be checked by running the validate command. The output from validate reveals the impact by reporting on inconsistencies found between documents and indexes, in the form of extra index entries (including duplicate entries in unique indexes) or missing index entries.

Remediation and workarounds: For clusters still on versions ... and ..., it is possible to avoid this issue by following a safe upgrade path from versions ... and ..., by downgrading to ... first. Reference the following list to consider our recommended response to this issue:
- Clusters on versions ... and ... are safe to upgrade to ... or ..., but should upgrade to the recommended versions ... or ...
- Clusters on versions ... or ... should downgrade to ..., and then upgrade to versions ... or ...
- Clusters running versions ... can and should upgrade to ... or ... Be aware that ... affects versions ... and requires its own remediation.

For clusters that have already upgraded to ... from versions ... and ...: if you previously followed the remediation steps for ... and detected corruption, you will have remediated any corruption that occurred as part of this bug. If you have not validated all collections since upgrading to ... from ... or ..., we recommend validating all collections. If corruption is detected, data can be recovered from other nodes in the replica set; this may be operationally intensive, and we're working on simplifying this process as a top priority.
{panel}
1
Replica set with ... members. While upgrading a secondary to ..., it crashes when trying to sync with the master after restarts. Log file attached.
1
Right now, write concern is achieved in the C driver via calls to getLastError. Instead, we should add the ability to pass the write concern directly to methods which write.
1
The C# driver uses { _csharpnull: true } to represent a BsonNull property whose value is C# null. For example:
{code}
public class C {
    public BsonNull A;
    public BsonNull B;
}
var c = new C { A = BsonNull.Value, B = null };
Console.WriteLine(c.ToJson());
{code}
results in:
{code}
{ "A" : null, "B" : { "_csharpnull" : true } }
{code}
The C# driver needs to use some other representation for BsonNull values of C# null, because this representation either causes replication to halt (prior to server ...) or the secondaries to crash (in server ...). This is also a server bug, in the sense that no value that a client driver provides should be allowed to halt replication or crash secondaries; if the server considers this representation to be invalid, then it should have been rejected outright by the primary.
0
Currently MongoDB will rebalance chunks on inserts, but in some cases the collection key is a numeric sequence, which makes all of the first X amount of records go to one server and the second batch to the other. This is OK as long as the load on that collection is not high, but if the collection usage is high, it means that manual sharding must be invoked by the admin. I think that rebalancing a collection based on its load should also be added to the balancer: when a collection chunk has a high load of queries, the chunk should be split and moved to the other server(s).
0
As seen here, we should encourage usage of newer MongoDB versions by showing the ...-style startup options and linking to the legacy-style ones.
0
Normal synchronization between the test thread and the executor doesn't consider the DB worker thread, so we should synchronize it explicitly in tests. Otherwise, a second, very soon election in the test will change the state the DB worker expects in _writeLastVoteForMyElection.
0
Sometimes we just change jstests, or want to run a test repeatedly on a given variant. It would be very useful to add an option to use the existing binaries for the base commit (or build them if they don't exist). It would be very nice if there were automatic rules to detect that changes only in the jstests dir should turn this on by default.
0
As part of the work on ..., we made snapshot names be a simple monotonically increasing counter, rather than using names derived from optimes. This made it possible to establish a happens-before/after relationship between snapshots and events that do not change the optime, including background index builds on secondaries and reIndex and repair operations. This had the downside of introducing an additional notion of time that had to be tracked to ensure that w:majority could fulfil its full contract. We now think these operations can simply use an optime "now", and request the creation of a new optime (an "n" oplog entry), using a back channel such as heartbeats, to handle a steady-state idle system where newer writes are not coming in. This will need to use an optime "now" both as the last optime for the client and as the minimum optimes for collections and indexes. This should enable us to undo the change from ..., which was a temporary solution to this problem.
0
I've attached a screenshot of the main patch build view. Right now, if I want to find out whether migration_failure.js has run, I have to individually click on each suite run and then search. This is made more difficult by the fact that the test runs in multiple suites, and that individual suites are broken up into sub-suites, so even if I know which one I care about, I have to click through ... of them to find the test. It'd be really nice if, in addition to being able to search by task and by variant, we could search by test name; that's usually what I care about anyways.
0
There has been a new ... release; we should consider upgrading the built-in timezone rules, and potentially backporting. We have already published the new files to our website (linked from the docs); the linked file there is just called "latest", but I verified it is named ... after downloading.
0
If you visit ..., you can see the docs found that return a ... Edit: there are many other items that are not redirecting properly, for example ...
1
This field is only used by the UI and should be removed from the DB structs. Also add a comment by the BuildVariantDisplayName field indicating that it is used by an aggregation and is not stored in the DB.
0
This page is missing docs on the maxTimeMS option for find and modify. The option is mentioned in the CRUD spec and linked to the above page, so I think that's where it should go. Thanks.
0
Presently the driver only uses a single connection, which is insufficient for server-side Mongo. We need to wrap libmongoc's client pool in order to realize the full power of the driver.
0
Identical code for different branches: the condition is redundant, as the same code is executed regardless of the identical_branches ternary expression. On the condition mongo::gFeatureFlagChangeStreamsOptimization.isEnabledAndIgnoreFCV(), the ternary has identical then and else expressions. Should one of the expressions be modified, or the entire ternary expression be replaced?
0
Should this page link to the connection string spec?
0
The name of this storage engine has changed.
1
In multiple sections of this page we have: "The localhost interface is only available since no users have been created for the deployment. The localhost interface closes after the creation of the first user." This should be: "The localhost exception is only available since no users have been created for the deployment. The localhost exception closes after the creation of the first user." Also, on this page, we don't need to move to the config db for stopping the balancer.
0
We will migrate ... to a new ... account; we need to update the credentials for these projects.
0
If ... is called with an invalid sequence (e.g. one that begins with ...), it doesn't notice that ... and ... are returning nil; it loops forever on the invalid character, appending the escape sequence for nil to the output string, until it fails to realloc the output buffer and aborts.
0