text_clean: string (lengths 10 to 26.2k)
label: int64 (0 or 1)
userFlags is not provided by the storage engine; we should not rely upon its existence.
1
This has the same issue that the Cedar root CA had when its certificate expired, because the gRPC service certificate does not rotate. This is likely because the cert-rotation Amboy job was removed during the CA outage in August.
0
When creating a ScopedDbConnection with a replica-set initialiser, there is an assertion; see ….
0
The original issue was filed at …; it looks like … was put in place for what I had filed, but the issue still occurs. Here's a simple Mongoid spec you can paste in to illustrate the problem:

{code}
Mongo::Logger.logger = Logger.new(STDOUT)

default = Mongoid.client(:default)
puts default.options

Band.with(write: { w: … }) do
  Band.where(_id: BSON::ObjectId.new).update_one('$set' => { foo: 'bar' })
end

default = Mongoid.client(:default)
puts …

Band.collection.find(_id: BSON::ObjectId.new).update_one('$set' => { baz: 'qux' })
{code}

This produces the logs:

{code}
… database=mongoid_test user=mongoid_user password=password auth_source=admin …
D, … DEBUG -- : MONGODB | … | mongoid_test.update | STARTED | { "update" => "bands", "updates" => […], "ordered" => true, … }
D, … DEBUG -- : MONGODB | … | mongoid_test.update | SUCCEEDED | …
… database=mongoid_test user=mongoid_user password=password auth_source=admin …
D, … DEBUG -- : MONGODB | … | mongoid_test.update | STARTED | { "update" => "bands", "updates" => […], "ordered" => true, … }
D, … DEBUG -- : MONGODB | … | mongoid_test.update | SUCCEEDED | …
{code}

You can see from the logs that the write concern of … remains on the second query, and that re-pulling the client has its write concern affected and set to ….
1
I filled a collection with … entries, but when I wanted to read them with db.collection('persons').each(function(err, entry) { … }), the program crashed at the last element because it was null. After some tests, I realized that the each loop called itself … times. I fixed this by replacing `return callback(err)` in utils.js with `return true`, but I am worried that it broke something else. It seems to be the same problem as here: ….
1
We need to pick up the relevant changes that happened in the server master branch from the point when the … branch diverged up until when … is released. Those changes should flow into the C driver legacy branch as appropriate.
1
Hi, I have many open FDs held by the mongod process: `lsof | grep mongod | grep … | wc -l` → …. Over time the number of FDs grows to …: `lsof | grep mongo | grep … | wc -l` shows mongod has the same file open … times. What causes this behavior? I use the WT engine, version ….
0
Users of the go.mongodb.org/mongo-driver Golang package are starting to see Snyk vulnerability alerts due to the dependency … (Snyk link: …). Per the Snyk vulnerability, version … of the package appears to be fixed. Please deliver a new version of the mongo-driver package that avoids this vulnerability.
1
I believe the nInserted value should be … rather than …, since the first insert for an _id value of … succeeded. Also, I ran the same command on my local system and it has a value of …, as expected. Thanks.
0
Incorrect installation instructions at …: the filename is "enterprise", not "subscription".
1
This would allow non-standard compilers to build the server when there are warnings that are desirable to ignore. Specifically, it would help with "typedef 'YearType' locally defined but not used" with GCC …, and "type attributes are honored only at type definition" with GCC …. Other cases will also arise, as new compiler versions introduce new warnings.
0
A race causes a failure under ASAN: a worker thread was still running after the SharedSemiFuture dies. Fixes ….
0
The oldest timestamp is the time to which the storage engine maintains history: it can service all reads with read_timestamp >= oldest_timestamp. The commit point (committed snapshot) in replication is the timestamp which a majority of voting nodes have durably replicated, used to service "majority" reads — reads of data that cannot be rolled back. Replication advances the commit point, then updates the stable timestamp. It's a subtle detail that updating the stable timestamp internally updates the oldest timestamp to the same value. However, there are conditions where ReplicationCoordinatorImpl::_updateCommittedSnapshot_inlock does not, in fact, move the commit point forward. This inaction is not captured in the return value, and the calling function unconditionally follows by setting the stable timestamp. This leaves the server in a state where a majority read would fail — the server is no longer keeping enough history to satisfy a read at the commit point. Notably, the disableSnapshotting failpoint can cause a consumer test (read_committed_on_secondary.js) to fail. It's unclear whether the contract of setStableTimestamp should explicitly state that the value may not be set ahead of the commit point, or whether the storage engine should consider exposing to steady-state replication a way to advance the oldest timestamp, where this relationship must instead be enforced.
0
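A minimal C++ sketch of the invariant at stake in the entry above; the names and single-threaded model are illustrative assumptions, not the server's actual replication-coordinator API:

{code:cpp}
// Hypothetical model: setting the stable timestamp implicitly discards
// history older than it (i.e. advances the oldest timestamp), so it must
// never be set ahead of the commit point.
struct Timestamps {
    long long oldest = 0;  // earliest history the storage engine retains
    long long stable = 0;  // stable/recovery timestamp
    long long commit = 0;  // majority commit point

    bool advanceCommitPoint(long long t) {
        if (t <= commit)
            return false;  // the unadvertised "no-op" case the bug misses
        commit = t;
        return true;
    }

    void setStableTimestamp(long long t) {
        stable = t;
        oldest = t;  // the subtle side effect: history before t is gone
    }
};

// Guarded usage: move the stable timestamp only when the commit point
// actually advanced, so a majority read at the commit point keeps history.
void onMajorityReplicated(Timestamps& ts, long long t) {
    if (ts.advanceCommitPoint(t))
        ts.setStableTimestamp(ts.commit);
}
{code}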
In order to measure the cost of a full end-to-end build, add the total price of a given task to documents in the Evergreen tasks collection. If hosts vary their bids given the spot-instance pricing, the total price should reflect each hour's pricing on a given host.
0
Today the mongos router process in our staging environment started crashing repeatedly. If we do not attempt to open any mongo connections via it, it will stay up, but when you try to connect to mongo via it, the connection hangs (e.g. with the mongo CLI you never reach the prompt), and if you terminate the connection (e.g. Ctrl-C the mongo CLI), the mongos immediately crashes with the following messages being logged:

{noformat}
I NETWORK  connection accepted from … (… connections now open)
I NETWORK  received client metadata from …: application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "…" }, os: { type: "Linux", name: "CentOS Linux release … (Core)", architecture: "…", version: "Kernel …" }
I -        operation was interrupted because a client disconnected
F -        terminate() called. No exception is active
----- BEGIN BACKTRACE -----
{ backtrace: …, processInfo: { mongodbVersion: "…", gitVersion: "…", compiledModules: [], uname: { sysname: "Linux", release: "…", version: "… SMP Tue Jun … UTC …", machine: "…" }, somap: … }
 mongos(…) … (repeated mongos frames)
----- END BACKTRACE -----
I CONTROL  ***** SERVER RESTARTED *****
I CONTROL  Automatically disabling TLS …; to force-enable TLS … specify --sslDisabledProtocols 'none'
I SHARDING mongos version v…
I CONTROL  db version v…
I CONTROL  git version: …
I CONTROL  OpenSSL version: OpenSSL … Feb …
I CONTROL  allocator: tcmalloc
I CONTROL  modules: none
I CONTROL  build environment: distmod …, distarch …, target_arch …
I CONTROL  options: { config: "/etc/mongos.conf", net: { bindIp: "…", port: … }, processManagement: { pidFilePath: "/var/run/mongodb/mongos.pid" }, security: { keyFile: "/var/run/mongodb/keyfile" }, sharding: { configDB: "…" }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongos.log" } }
I NETWORK  Starting new replica set monitor for …
I CONNPOOL Connecting to …
I SHARDING Creating distributed lock ping thread for process … (sleeping for …)
I NETWORK  Confirmed replica set for stagemetricconfig is …
I SHARDING Updating sharding state with confirmed set …
I SHARDING Received reply from config server node (unknown) indicating config server optime term has increased; previous optime { ts: …, t: … }, now { ts: …, t: … }
W SHARDING Pinging failed for distributed lock pinger … :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
I FTDC     Initializing full-time diagnostic data capture with directory '/var/log/mongodb/mongos.diagnostic.data'
I NETWORK  Listening on …
I NETWORK  Listening on …
I NETWORK  Listening on …
I NETWORK  Waiting for connections on port …
I SH_REFR  Refresh for database config from version … to version … (uuid …, lastmod …) took … ms
I SH_REFR  Refresh for collection config.system.sessions to version … took … ms
I FTDC     Unclean full-time diagnostic data capture shutdown detected; found interim file, some metrics may have been lost (OK)
I CONNPOOL Ending idle connection to host … because the pool meets constraints; … connections to that host remain open
I CONNPOOL Ending idle connection to host … because the pool meets constraints; … connections to that host remain open
I CONNPOOL Ending idle connection to host … because the pool meets constraints; … connections to that host remain open
I CONNPOOL Connecting to …
{noformat}
1
Have to make sure the slave is up before any ops start on the master.
1
In ConnectionPool::dropConnections (here: …), we use a range-based for loop to iterate through the ConnectionPool's _pools member, a stdx::unordered_map. Within the loop, we call ConnectionPool::SpecificPool::triggerShutdown on each specific pool we find in the _pools map. This member function erases that specific pool from the parent ConnectionPool's _pools member (here: …). According to the std::unordered_map documentation for the erase overload set, "References and iterators to the erased elements are invalidated. Other iterators and references are not invalidated." While the Abseil documentation is a bit unclear, I think the same applies to the absl::node_hash_map we're actually using, and in any case we should probably live by the standard-library documentation for stdx types. This means that once the specific pool element is erased in triggerShutdown, any iterators to it are invalidated, which implies that when the range-based for loop attempts to advance to the next element in the map by calling next on the iterator, it will be using/dereferencing an invalid iterator. Our current version of Abseil seems to leave this iterator in a valid state long enough for the for-loop to advance, but this is not guaranteed. When we attempted an upgrade to the newest version of Abseil (see …), in at least one test we fail an internal assert inside Abseil when attempting to advance the iterator in this range-based for loop, suggesting that the iterator the loop is attempting to advance is invalid.
0
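A self-contained C++ sketch of the iterator-invalidation pattern described in the entry above; the names are hypothetical, with a bare erase standing in for what triggerShutdown does:

{code:cpp}
#include <string>
#include <unordered_map>

struct Pool {};

void dropConnections(std::unordered_map<std::string, Pool>& pools) {
    // BROKEN (mirrors the bug): erasing the element being visited
    // invalidates the range-based for loop's hidden iterator, so the
    // subsequent advance is undefined behavior:
    //
    //   for (auto& entry : pools)
    //       pools.erase(entry.first);  // what triggerShutdown effectively does
    //
    // SAFE: advance the iterator before the current element disappears.
    for (auto it = pools.begin(); it != pools.end();) {
        auto current = it++;   // ++ happens while 'current' is still valid
        pools.erase(current);  // erasing 'current' cannot invalidate 'it'
    }
}

int main() {
    std::unordered_map<std::string, Pool> pools{{"hostA", {}}, {"hostB", {}}};
    dropConnections(pools);
    return pools.empty() ? 0 : 1;
}
{code}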
unittest … failed on OS X (WiredTiger develop). Commit diff: "Insert one key per transaction to make … pass on macOS" (Feb …, … UTC). Evergreen subscription; Evergreen event; task logs: ….
0
Starting with a new database:

{noformat}
> db.cstats.ensureIndex({ date: 1 }, { expireAfterSeconds: … })
> db.cstats.insert({ name: …, date: date })
> db.cstats.find()
{ "_id" : …, "name" : …, "date" : Fri Oct … MST }
{noformat}

This doc never gets deleted.
1
Our Amazon tutorial suggests using … as the filesystem for MongoDB, even though we usually recommend XFS:

{noformat}
sudo … /dev/xvdf
sudo … /dev/xvdg
sudo … /dev/xvdh
{noformat}
0
ConfirmSocketSetOptionOnResetConnections may have a race in it, wherein an ASIO socket's connect call may not return by the time the ASIO acceptor on the server end returns from accept in ConfirmSocketSetOptionOnResetConnections. This allows the server side to close the connection before connect finishes, which causes connect to fail and throw, failing the test. Add a synchronization mechanism to ensure that the onAccept callback associated with the acceptor doesn't run until connect has fully completed.
0
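A minimal sketch of one possible synchronization for the fix described above, using std::promise in place of the test's actual ASIO machinery (all names are hypothetical):

{code:cpp}
#include <future>
#include <thread>

int main() {
    std::promise<void> connectFinished;
    std::future<void> gate = connectFinished.get_future();

    // Server side: the onAccept callback must not reset/close the new
    // connection until the client's connect() has fully returned.
    std::thread acceptor([&gate] {
        gate.wait();
        // ... now safe to run the reset-connection logic under test ...
    });

    // Client side: perform the blocking connect, then open the gate.
    // socket.connect(endpoint);  // the real test's ASIO call would go here
    connectFinished.set_value();

    acceptor.join();
}
{code}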
When doing a dropDatabase using … on versions from …, the relevant folders which contain the data (when using directoryPerDB: true) are also removed. However, when performing the same dropDatabase using the WiredTiger engine and directoryPerDB: true, the directories are left intact and only the related files are removed. Note: only tested with directoryForIndexes: true in ….
0
The spec says that drivers should only consider the body when sizing batches. For example, that means that for delete ({ delete: <collection>, deletes: [ { q: …, limit: … }, … ] }), only the size of the query counts toward the size of the batch. The problem is that if you do max-ops-per-batch deletes in a batch, the overhead that doesn't count toward the max batch size adds up to more than the difference between … and BSONObjMaxInternalSize. This leads to an assert on the server for what is specified to be a valid operation. The assert is in a place where the server decides that there is irreparable network corruption and closes the connection. I spoke with …, and we think the best solution would be to change the write-commands spec to say that either the entire command object with all overhead must fit under …, or it must only have a single op in the operations array.
1
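Rough arithmetic illustrating the overflow; the constants are assumptions standing in for the values elided above (a 16 MB user limit, a 16 KB internal cushion, a 1000-op batch cap):

{code:cpp}
#include <cstdio>

int main() {
    const long long kMaxUserSize = 16LL * 1024 * 1024;            // assumed BSONObjMaxUserSize
    const long long kMaxInternalSize = kMaxUserSize + 16 * 1024;  // assumed internal cushion
    const int kMaxOpsPerBatch = 1000;                             // assumed batch cap

    // If only the op bodies are counted against kMaxUserSize, each op can
    // still carry uncounted envelope overhead (field names, limit, etc.).
    // Just 17 bytes of uncounted overhead per op already busts the cushion:
    const long long overheadPerOp = 17;
    const long long total = kMaxUserSize + overheadPerOp * kMaxOpsPerBatch;
    std::printf("total=%lld internalMax=%lld overflow=%s\n", total, kMaxInternalSize,
                total > kMaxInternalSize ? "yes" : "no");
    return 0;
}
{code}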
Collection: { empid, name, education }. Now, how does one get the first two schools for an employee? TIA.
1
… causes failing tests due to brokenness with sharding.
1
Inserting data using MRI … is incorrect. It doesn't fail, but viewing the data with the mongo console shows that all values are null. Even weirder, I can mongodump the collection and bsondump to JSON format, and the data is correct, even though the mongo console told me it was all null. Comparing the output of bsondump in debug mode shows that something is very wrong. The following gist shows that this works okay with MRI … and fails with …: …. Confirmed this is related to the C extension: removing the C extension and doing a pure-Ruby insert from … works as expected when viewed through the mongo console. The pure-Ruby insert shows the number directly; using MRI … and the C extension lists all integers as NumberLong(x), so the C extension that does work has slightly different behavior than the C extension on Linux/OSX. I was only able to reproduce this on Windows; I could not reproduce on Unix.
1
I have a replica set with the primary running MongoDB …, a secondary running …, and an arbiter. The secondary just crashed and cannot restart due to corruption. Here is the error log:

{noformat}
Fri Nov … [repl writer worker …] Assertion: BSONObj size … is invalid. Size must be between … and …. First element: EOO /usr/bin/mongod(…) /usr/bin/mongod(…)
Fri Nov … [repl writer worker …] Assertion: BSONObj size … is invalid. Size must be between … and …. First element: EOO /usr/bin/mongod(…) /usr/bin/mongod(…)
Fri Nov … ERROR: writer worker caught exception: BSONObj size … is invalid. Size must be between … and …. First element: EOO on: { ts: Timestamp(…), h: …, v: …, op: "u", ns: "webdoc_production.digest_mails", _id: …, o: { $set: { sent_at: new Date(…) } } }
Fri Nov … Fatal Assertion … /usr/bin/mongod(…)
Fri Nov … aborting after fassert() failure
Fri Nov … Got signal: … (Aborted).
Fri Nov … Backtrace: …
{noformat}
1
This is confusing. The MongoDB doc on "acknowledged" write concern leads with: "With a receipt acknowledged write concern, the mongod confirms the receipt of the write operation." This implies the acknowledgement is just of receipt of the write; it does not imply that the client waits for success to continue. However, in the caption of an illustrative diagram, the docs say "The client waits for acknowledgment of success or exception." That is much clearer, and somewhat contradictory (i.e. the acknowledgement is of success/failure, not simply of receipt of the write).
1
Running the PyMongo unittest suite leads to a hang, with some op holding the global write lock. The server responds to currentOp and rs.status, but any operation requiring a lock hangs indefinitely. The point in the suite where we reach this hang is unpredictable, but it is most commonly in tests with … or … Python threads. Server … never hangs; … almost always hangs at some point in PyMongo's test run (a successful run lasts about … minutes and executes about … tests). Interestingly, … doesn't hang when started with --fork. Example ops holding the write lock at the point where … hangs, in one test run:

{code}
{ opid: …, active: true, secs_running: …, op: "query", ns: "pymongo_test", query: { create: "test", capped: true, size: … }, client: "…", desc: "…", threadId: "…", connectionId: …, locks: { "^": "w", "^pymongo_test": "W" }, waitingForLock: false, numYields: …, lockStats: { timeLockedMicros: …, timeAcquiringMicros: …, r: …, w: … } }
{code}

In another:

{code}
{ opid: …, active: true, secs_running: …, op: "insert", ns: "pymongo-pooling-tests.unique", insert: …, client: "…", desc: "…", threadId: "…", connectionId: …, locks: { "^": "w", "^pymongo-pooling-tests": "W" }, waitingForLock: false, msg: "index: … external sort …", numYields: …, lockStats: { timeLockedMicros: …, timeAcquiringMicros: …, r: …, w: … } }
{code}

Backtraces from the latter run: …
1
Description: The mongoperf tool will be removed; it should not be mentioned in documentation.
Scope: Remove from … or remove: extracts-run-from-cmd, list-mongodb-enterprise-packages.rst, list-mongodb-org-packages.rst, options-mongoperf.yaml, steps-install-mongodb-on-windows-unattended.yaml, program.txt. Add a blurb to the compatibility notes; add a redirect.
Impact to other docs outside of this product: None, except tutorials whose content will be overhauled and then not impacted.
MVP (work and date): …
Resources (e.g. scope docs, InVision): …
Ticket description: The mongoperf tool is no longer useful; remove it.
0
We should add Cypress tests to the CI tests in the Evergreen repo, so we can make sure we don't accidentally break Spruce with any changes in Evergreen.
0
The database crashed. I upgraded … and it was running for a few minutes:

{noformat}
Thu Aug … Got signal: 11 (Segmentation fault).
Thu Aug … Last op: { opid: …, active: …, secsRunning: …, op: "update", ns: "lindexsyndfeeds", query: { _id: BinData(…) }, inLock: …, client: … }
Thu Aug … Backtrace: /home/lindex/bin/mongodb/bin/mongod(…) (repeated frames)
Thu Aug … dbexit:
Thu Aug … connection accepted from …
Aug … MessagingPort recv() error …
Aug … end connection …
Aug … MessagingPort recv() error …
Aug … end connection …
Aug … MessagingPort recv() error …
Aug … end connection …
Aug … listener on port … aborted
Thu Aug … closeAllFiles() finished
Thu Aug … dbexit: really exiting now
{noformat}
1
When a server already runs a MongoDB instance on the default port and the default hostname-determined IP address, initializing another instance with the same port but a different IP address fails. We have a server with interfaces … and …; the server hostname resolves to …, which hosts a node of a replica set called A. I want a node of another replica set, called B, to be hosted on …. The A RS node has … and the B RS node has …. After initiating RS B on the node using rs.initiate(), I get "replSet exception loading our local replset configuration object: … non-matching repl set name in _id field; check --replSet command line", which is obviously wrong, since I made sure the node's data directory was clean beforehand. I spent two hours trying to diagnose the problem; the solution was subtle: I had to change the port from the default one to another one, and it worked. It now appears that the RS code tries to do some magic using host names and IPs, connects to the wrong IP address, determines the wrong replica set name, and then bails out. This was very frustrating, and I hope you will fix this soon. Also, the message might disclose a little more information, for example what replica set name was encountered instead of the expected one; I think I've seen a case asking for that, but can't find it now. Also, I believe that connecting to a wrong machine might lead to disastrous events in the case of same-named but different replica sets. And while you're at it, please fix the "mongo host" vs "mongo host.zone" problem: if there are no dots in the name, the shell always tries the local host instead of the host relative to the search order specified in resolv.conf. I think it's the same problem.
1
Using buildIndexes: false will fail the new initial sync (using the DataReplicator). The hang in … has been resolved in … already, due to the mishandling of the buildIndexes: false error. Old description: If I initialize a replica set with one member having buildIndexes set to false, that member gets stuck in …. This bug appears in … but did not appear in ….
1
A Visual Studio call stack at the crash:

{noformat}
ucrtbased.dll!abort()    Unknown    Non-user code. Symbols loaded.
…                        C          Non-user code. Symbols loaded.   (this frame repeated ×15)
…                        Unknown    No symbols loaded.
{noformat}
1
{code}
Mon Jul … [conn…] CMD: drop …
… Jul … Got signal: 11 (Segmentation fault).
Mon Jul … Invalid access at address … from thread …
Mon Jul … Backtrace: …
{code}
1
ObjTracker::track fails to insert the tracked object into the set of tracked objects. This is causing memory to leak when script execution completes but the GC doesn't run.
1
Start two mongod instances on the same box:

{noformat}
mongod --dbpath /data/… --port … --replSet … --logpath … -vvvvvv
mongod --dbpath /data/… --port … --replSet … --logpath … -vvvvvv
{noformat}

Connect using the mongo shell and call rs.initiate as follows:

{noformat}
var config = { _id: …, members: [ … ] };
rs.initiate(config);
{noformat}

One server will crash with SIG…; see ….
0
bq. When adding a user to multiple databases, use unique username-and-password combinations for each database.

While this sentence is true in itself, it may lead to the conclusion that a user who needs to access different databases has to be added to each database individually. However, it's recommended to add such a user only once, giving him/her multiple roles for all the databases he/she needs to access. For example:

{code}
{
  _id: "home.kari",
  user: "kari",
  db: "home",
  credentials: { "MONGODB-CR": … },
  roles: [
    { role: "read", db: "home" },
    { role: "readWrite", db: "test" },
    { role: "appUser", db: "myApp" }
  ],
  customData: { zipCode: … }
}
{code}

This user has been added only once, to the home database, having access to the three databases home, test, and myApp. See ….
1
Currently the … route is limited to superusers. It would be nice if it could be extended to project admins. Part of our release process involves using the project UI to update the branch_name field; it would be a huge help to us if it were possible to do this through the API as a project admin.
0
Currently the multi-versioned routing table is only used when selecting a global read timestamp, to verify that the set of shards used to compute the read timestamp matches the set that would be targeted at that timestamp. Now that mongos supports multi-statement transactions, subsequent statements should route their requests using the multi-versioned routing table corresponding to the already-selected read timestamp.
0
A data set of … docs in one collection is being updated to add data to each document. Every few million records processed (mixed reads and writes, many fewer writes), mongos becomes unresponsive. This number has come down from … gradually, until it now locks after only a few million. Queries against each of the shard and config mongod instances show them to be responsive to requests. Queries against mongos hang indefinitely, as does db.stats(). The attached gdb stack trace shows many threads in mongo::getShardsForQuery waiting to obtain mongo::RWLock ….
1
AC: Update Windows … and see if it fixes mongocrypt and Kerberos.
1
Requires an update for the filesystem store option.
1
An interface must exist to visit an expression node — that is, determine its dynamic type and call into a provided external interface.
0
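A hedged C++ sketch of such a visitor-style interface; the node types are illustrative, not the actual expression classes:

{code:cpp}
struct AddExpr;
struct ConstExpr;

// The externally provided interface: one visit overload per dynamic type.
struct ExpressionVisitor {
    virtual ~ExpressionVisitor() = default;
    virtual void visit(const AddExpr&) = 0;
    virtual void visit(const ConstExpr&) = 0;
};

// Each node reveals its dynamic type by calling back into the visitor.
struct Expression {
    virtual ~Expression() = default;
    virtual void accept(ExpressionVisitor& v) const = 0;
};

struct AddExpr : Expression {
    void accept(ExpressionVisitor& v) const override { v.visit(*this); }
};

struct ConstExpr : Expression {
    void accept(ExpressionVisitor& v) const override { v.visit(*this); }
};
{code}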
Change most references from master to …; remove the wtdevelop variants from the commit.
1
It doesn't seem that this returns a collection name as indicated; it takes a name as a parameter and returns an object. I'm wondering if db.hello.find is the same as db.getCollection('hello').find.
0
With the new behavior of using minvalid to determine whether an initial sync has been done and regular replication should start, there are some significant problems. Cases: upgrading a set without a minvalid collection sends all nodes into … if no primary/secondary is up; otherwise all nodes drop/resync all data. Variations of this will cause the initial sync to fail, or elect a primary because of the oplog-stale rules. Removing the minvalid collection causes a full resync of … of data/indexes, or a host going offline in … until one can be done. The upgrade case is bad, since we had no need for the minvalid collection, and it was not maintained nor guaranteed, especially on the primary, or if replicas were seeded with a copy of the files without it.
1
Opening in response to the single-shard transaction behavior described in … and …:

bq. Single-shard transactions don't send coordinateCommit, and the current behavior is that attempting to recover a single-shard transaction commit will make it wait for the coordinator to time out and then check for the commit state of the transaction locally.

It would be a great improvement if recovering the state of a committed/aborted single-shard transaction did not block for … seconds or transactionLifetimeLimitSeconds. cc ….
0
Under heavy CPU load, our web application gets stuck: all threads are blocked, and after … seconds these errors are added to the log on EntityController.OnException:

System.TimeoutException: A timeout occured after …ms selecting a server using CompositeServerSelector{ Selectors = WritableServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = … } }. Client view of cluster state is { ClusterId: …, Type: Sharded, State: Disconnected, Servers: [ … ] }.

As you can see, all nodes are disconnected. This is our connection string: …
1
There should be a way to name and manage volumes, so you know what you're attaching to a host. Maybe another table in the UI.
0
Three times in the last … hours, all our mongos were deadlocked on serving requests to a sharded collection. Using an out-of-prod mongos, … to a sharded collection was blocked too. During the lock it was possible to use db.currentOp() and show collections on the sharded database. Restarting all mongos was the only way to get out of this. In attachment: a cleaned log of one mongos during the last failure.
1
In the usage section, the sample configuration file shows "authentication" as an option; this should be "authorization". In the usage section, the sample configuration file shows "authenticationMechanism"; this should be "authenticationMechanisms". In the usage section, one example shows a "username" option; this is incorrect, it should be "user".
1
{noformat}
Sat Jun … [conn…] warning: too much data written uncommitted …
… Jun … [conn…] warning: too much data written uncommitted …
  (the same warning repeated eight more times)
/usr/local/mongodb/bin/mongod(…)
{noformat}
1
The document reference/system-defined-roles has outstanding questions:
* Describe the overall purpose of the dbAdmin role.
* Describe the overall purpose of the dbOwner role.
* Should we document the mms-backup collection in the system-collections document?
* Should we document the following collections: system.new_users, system.backup_users, system.version?
0
Hi all, I suspect the increment and timestamp fields in struct b_timestamp are swapped. The output of this snippet:

{code:java}
auto before = bsoncxx::types::b_timestamp{…, …};
// before.timestamp = …, before.increment = …
bsoncxx::builder::basic::document builder;
builder.append(bsoncxx::builder::basic::kvp("ts", before));
auto vv = builder.extract();
std::cout << bsoncxx::to_json(vv) << std::endl;
auto after = vv.view()["ts"].get_timestamp();
std::cout << "before: …, after: …" << std::endl;
std::cout << "t " << after.timestamp << " i " << after.increment << std::endl;
{code}

is

{noformat}
{ "ts" : { "$timestamp" : { "t" : …, "i" : … } } }
t … i …
{noformat}

I verified it happens with mongocxx …, but it should be the same with the current version of mongocxx. It seems to depend on get_timestamp()'s return statement:

{code:java}
types::b_timestamp element::get_timestamp() const {
    BSONCXX_TYPE_CHECK(k_timestamp);
    BSONCXX_CITER;
    uint32_t timestamp, increment;
    bson_iter_timestamp(&iter, &timestamp, &increment);
    return types::b_timestamp{timestamp, increment};
}
{code}

because b_timestamp is defined:

{code:java}
struct BSONCXX_API b_timestamp {
    static constexpr auto type_id = type::k_timestamp;
    uint32_t increment;
    uint32_t timestamp;
};
{code}
0
I am executing a query to get counts for a status, and I have run into some performance problems. I got the mongo driver logging to work, but now I would like to execute an explain on the result. Executing raw_messages.successfully_processed.supported_host.count within Mongoid results in the following Ruby driver output:

{noformat}
MONGODB weshop_staging['supported_hosts'].find(…)
MONGODB weshop_staging['$cmd'].find({ "count" => "email_messages", "query" => { "status" => "successfully_processed", "host" => { "$in" => […] } }, "fields" => nil })
{noformat}

I would like to take the last query and execute an explain on it, either using the Ruby driver or within the mongo console. I would rather just stick with the Ruby console, as I wouldn't have to convert from Ruby syntax to JavaScript. Thx, KarlH
0
The admin.system.transactions collection will be maintained outside of the regular replication mechanism, in order to ensure atomicity with respect to the respective writes. For this reason, its writes should be excluded from generating separate oplog entries.
0
It's defined as the hostname and port of the backup daemon, which is not correct; it's the hostname and port of the backup HTTP service. It says you must set mms.backupCentralUrl even if you are only using Ops Manager Monitoring and not Ops Manager Backup; this is in contradiction with the notes in the actual file, which say:

{noformat}
Note that changing the server's backup port requires updating both the mms.backupCentralUrl here as well as the backup.basePort property in conf-mms.conf. Default port is …. Required only if using backup, e.g. …
{noformat}

I think the docs are correct in this regard, and you do need to set it.
0
Implement throughput messages, as well as size measurements of logging ability, for various grip senders. Look at the mongodb-go-driver benchmarking tool/framework and see if you can steal some/all of that for Evergreen integration. This will require some change to the Evergreen configuration and the makefile, which can be spun off onto its own ticket if the scope grows too much.
0
QueryConditionList.In(IEnumerable values): it calls if (values.Contains(null)) with a null argument. At the same time, in bool Contains(BsonValue value) there is: if (value == null) throw new ArgumentNullException("value"). Thus, .In seems to be always throwing an exception in ….
1
A find with a sort on a … mongos may return an empty result set:

{noformat}
mongos> db.c.insert({…})
WriteResult({ "nInserted" : 1 })
mongos> db.c.find()
{ "_id" : …, "x" : … }
mongos> db.createView("cView", "c", […])
{ "ok" : 1 }
mongos> db.cView.find(…).sort(…)
mongos>
{noformat}

There's a simple workaround, which is to use the aggregation pipeline:

{noformat}
mongos> db.cView.aggregate([…])
{ "_id" : …, "x" : … }
{noformat}
0
Windows builds are going purple for the mongo-c-driver project, unrelated to any code or config changes in the project. As a test, I simply added a newline to the NEWS file, but the Windows builds are purple in that patch build for the C driver: …. It's not a timeout; it's been running a little less than … minutes, and the log output stops in the middle: …. I'll try again and see how it fares this morning.
1
Requiring building against and using OpenSSL … was deemed not kosher, and the PHP and HHVM drivers would have to vendor in the required code to make … work when HHVM/PHP was installed without ext/openssl and OpenSSL … was otherwise not available.
1
We are experiencing this error as we attempt to write to a MongoDB Enterprise version …. We saw from issue … that we needed to upgrade the driver and BSON package; we performed that upgrade and continue to get the response clipped below. More information: the Node server is in an AWS instance running the latest AMI Linux; Mongo is hosted in the same AWS region on … as well.

{code}
error: message: "key $clusterTime must not start with …", name: "Error", stack: "Error: key $clusterTime must not start with …
    at Query.toBin (…)
    at serializeCommands (…)
    at Pool.write (…)
    at executeWrite (…)
    at WireProtocol.insert (…)
    at Server.insert (…)
    at executeWriteOperation (…)
    at ReplSet.insert (…)
    at ReplSet.insert (…)
    at insertDocuments (…)
    at insertOne (…)
    at …
    at new Promise (…)
    at executeOperation (…)
    at Collection.insertOne (…)
    at … (authentication.log)"
{code}
1
MongoDB lacks an icon in Add/Remove Programs. David Kolub found this.
0
We tag each query for profiling using addSpecial and $comment: db.test.find({ field: value }).addSpecial("$comment", …). These show up correctly in the logs and db.currentOp() when using collection.find methods, but any flags added via addSpecial are not used by the collection's getCount method. We use the count method to display a query's total count: db.test.find({ field: value }).count() vs db.test.find({ field: value }).addSpecial("$comment", …).count(). The count disregards any special flags, so profiling/tracing/db.currentOp() do not show our unique tag for that query. Based on the source code, the DBCollection.getCount method does not accept any special-flags parameters, so it looks like a core server issue. See the attached console session, where a find().addSpecial() shows the comment via db.currentOp(), but it is not displayed when I issue a find().addSpecial().count().
1
The movePrimary command does a check that the host name of the shard exists in the config.shards collection, but we got the host name by looking up the shard identifier, which means the shard must exist. This check is left over from when movePrimary took the target shard by hostname instead of shard name; now it is redundant with the other checks in place for shard existence.
0
We shouldn't be calling isMaster during the DBClientConnection::connect method. We should populate server-version-specific values like maxBsonObjectSize upon the first request instead.
1
I updated my MongoDB driver in Node to …, and I noticed that the database.on function has been removed. What is the alternative function for this? I used it to listen for errors in my database connection:

{code}
this.database.on('error', () => { delete this.connectionPromise; });
this.database.on('close', () => { delete this.connectionPromise; });
{code}
1
Tests have been failing recently with OOM-killer issues. Some of the tasks start up their own ShardingTest and ReplSetTest harnesses; this puts excessive memory pressure on the test host. This is a temporary measure until we shuffle the tests around.
0
We're using Mongo … via Spring Data, and everything else seems to be working A-OK except one geo query. We're trying to run a bounding-box query and sort it so that we only get the newest records. For some reason, if we don't specify a sort, all of the coordinates will be in the same very small area; I assume that Query query = new Query(where("loc").within(searchBox)) is sorting the results by distance. Then, if we specify the sort object, like query.with(new Sort(Sort.Direction.ASC, "another")), the query will be slowest of the slow. We've created a compound index for that query. Could someone please help?
1
… seems to implicate …:

{noformat}
I CONTROL  [initandlisten] MongoDB starting : …
I CONTROL  [initandlisten] ** NOTE: This is a development version (…) of MongoDB.
I CONTROL  [initandlisten] **       Not recommended for production.
I CONTROL  [initandlisten] targetMinOS: Windows Server …
I CONTROL  [initandlisten] db version v…
I CONTROL  [initandlisten] git version: …
I CONTROL  [initandlisten] OpenSSL version: OpenSSL … Jun …
I CONTROL  [initandlisten] build info: Windows …, service pack: Service Pack …
I CONTROL  [initandlisten] allocator: system
I CONTROL  [initandlisten] options: { net: { port: … }, security: { keyFile: "…" }, storage: { dbPath: "…" } }
I STORAGE  [initandlisten] exception in initAndListen: … Error: dbpath (…) does not exist; create this directory or give an existing directory in --dbpath (see …), terminating
I -        [initandlisten] Invariant failure _storageEngine src/mongo/db/operation_context_impl.cpp …
I CONTROL  [initandlisten] mongod.exe … mongo::printStackTrace …
I CONTROL  [initandlisten] mongod.exe … mongo::logContext …
I CONTROL  [initandlisten] mongod.exe … mongo::invariantFailed …
I CONTROL  [initandlisten] mongod.exe … mongo::OperationContextImpl …
I CONTROL  [initandlisten] mongod.exe … mongo::exitCleanly …
I CONTROL  [initandlisten] mongod.exe … mongoDbMain …
I CONTROL  [initandlisten] mongod.exe … wmain …
I CONTROL  [initandlisten] mongod.exe … __tmainCRTStartup …
I CONTROL  [initandlisten] … BaseThreadInitThunk …
{noformat}
1
The relevant commit from … is …. This error condition was not stated in the original spec change that introduced the hint option for update operations in …. … also adds rationale for why we do this for OP_MSG and OP_UPDATE but not OP_QUERY (i.e. the update command before OP_MSG). If drivers have not yet implemented …, they should consider doing this alongside that issue.
0
There are a handful of cases where we are supposed to error per the driver spec but don't. We need to update the test JSON with the missing tests, and new ones when added.
0
The draft release notes for … are returning "file not found" instead of redirecting to …. The expected location of the release notes, …, redirects to …. It looks like the release notes can only be found at ….
1
I am surprised to see that ordering by date is not working properly on the latest C# driver, no matter what I try; I'm pretty sure this was working before. Here are some examples where dates are not ordered properly and come out of sequence:

{code}
var row = mongoCollection.AsQueryable()
    .OrderBy(p => p.MyDateTimeValue)
    .OrderBy(p => p.OtherId)
    .Where(p => p.MyId == myData.MyId)
    .FirstOrDefault();

var row = (from p in mongoCollection.AsQueryable()
           where p.MyId == myData.MyId
           orderby p.MyDateTimeValue ascending
           orderby p.OtherId ascending
           select p).FirstOrDefault();

row = (from p in mongoCollection.AsQueryable()
       where p.MyId == myData.MyId && p.OtherId == row.OtherId
       orderby p.MyDateTimeValue ascending
       orderby p.OtherId ascending
       select p).FirstOrDefault();
{code}

None of these work; I get dates out of sequence, as shown in the attachment. I even tried adding an index on the date column in the server; it made no difference.
1
In order for geoNear to avoid scanning the same index cells multiple times, it needs to keep track of a union of the cells and be able to take the difference between two unions. Functions need to be added to … and … to be used in geoNear.
0
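For illustration only, the two operations modeled over std::set in C++ (the real code would presumably operate on S2 cell unions):

{code:cpp}
#include <algorithm>
#include <iterator>
#include <set>

using CellId = long long;  // stand-in for an index cell identifier
using CellUnion = std::set<CellId>;

// Accumulate the cells visited so far.
CellUnion unionOf(const CellUnion& a, const CellUnion& b) {
    CellUnion out;
    std::set_union(a.begin(), a.end(), b.begin(), b.end(),
                   std::inserter(out, out.begin()));
    return out;
}

// Cells in 'a' not already covered by 'b' — what geoNear needs so a new
// search interval never rescans cells visited on a previous pass.
CellUnion differenceOf(const CellUnion& a, const CellUnion& b) {
    CellUnion out;
    std::set_difference(a.begin(), a.end(), b.begin(), b.end(),
                        std::inserter(out, out.begin()));
    return out;
}
{code}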
Revised: There are many connection options supported at the private level, but the connection string and client object should support only two, for consistency with other drivers: maxPoolSize (maximum number of non-monitor simultaneous connections per host) and maxIdleTimeMS (time after which idle connections are closed; if set, then after an activity spike, up to maxPoolSize idle connections will drain away).

Original:
{quote}
I don't see how connection pool options in the connection string (which appear to be non-standard anyway) are wired up to affect the actual connection pooling. We should investigate to confirm whether this is working or broken, and if broken, develop a plan to fix it. We don't necessarily need to fix problems for the alpha release, but we do need to be able to describe known issues to address in the beta.
{quote}
0
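For illustration, a connection string exercising just those two options (the host and values are placeholders, not recommendations):

{noformat}
mongodb://db.example.com:27017/?maxPoolSize=100&maxIdleTimeMS=60000
{noformat}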
Hello support team. Alright, so we are in a bit of a pickle right now and need to escalate resolution ASAP. Very recently we received a MongoDB system alert notification that required us to upgrade our customers from … up to …, due to a data-loss issue that occurs when MongoDB is not properly shut down. On our side, we packaged up an upgrade of our platform and created an organization-wide motion to get our customer base upgraded. As we were set to start executing against our internal program, we received yet another MongoDB system alert bulletin telling us not to upgrade. Unfortunately, we couldn't stop our upgrade program in time and have had a couple of customers hit this upgrade issue. Per the most recent alert bulletin (linked above), there is a call-out to create a support case, which we are doing. We need an update on the timing of this critical bug fix, and if there is a workaround available, we would like it communicated to us ASAP. The nature of these back-to-back critical bugs is very costly to our business, and the perception of these issues is becoming more impactful to our customers and is spilling over to build a negative perception of MongoDB as well. Please advise.
1
It looks like it's possible to hint an index that has one collation defined but specify a different collation on the query itself. It should probably be an error, similar to the requests in …, but I can also see this being a warning, since it's possible to conceive of a query that uses the default collation but wants to use an existing different-collation index for filtering, in a scenario where this does not impact the results.
0
Summary: If you have added a new secondary running … to a replica set, and it performed an initial sync, then document writes applied to that secondary during the "fast index" phase will be applied, but secondary index updates will be ignored. As a result, queries against that instance may miss the document updates. Note that this issue does not apply if you have upgraded an existing …. Impact: On secondary nodes the data is intact and properly replicated; however, the secondary indexes may be corrupt. This will affect your applications if you are doing slaveOk queries (i.e. queries against this secondary node) or if this secondary becomes the primary node of the replica set. Workarounds: If you have a secondary node which performed an initial sync, you need to either run mongod --repair on this secondary (for the repair procedure, see …) or do a new initial sync with … or … on this secondary (for the resync procedure, see …). Patches: MongoDB … will address only this problem, by disabling the fast initial-sync code introduced in ….
1
Hybrid index builds are the default in …, and we can remove the feature flag, which will greatly simplify the code in MultiIndexBlock.
0
Right now some of our buildvariants pull down the latest MongoDB distributions, but some are fixed at …. Change the download URLs to use the "latest" keyword, which should keep us up-to-date now and moving forward. Ideally, all buildvariants should run against both versions, … as well as ….
1
Both ShardRegistry and CatalogCache lookups trigger an _exhaustiveFindOnConfig that has a default timeout of …. Consider the following scenario:

1. A mongos accesses its ShardRegistry, producing a cache miss on the underlying ReadThroughCache.
2. A ShardRegistry lookup targeting the nearest config replica set node is started.
3. Communication with that specific config replica set node is lost due to a network partition.
4. The RSM marks the host as failed.
5. All subsequent requests that hit the same mongos, require access to the ShardRegistry, and arrive before the current lookup times out will try to join the ongoing ShardRegistry lookup started in step 2.
6. All those requests will fail with NetworkInterfaceExceededTimeLimit as soon as the original lookup times out.

In practice, even if we have more than one config server replica set node, and even if we are using readPreference: nearest to fetch data from them, if we lose communication to one of them, it can happen that the mongos will not be able to serve any request for up to … secs. The same reasoning can be applied to the CatalogCache, because it also builds on top of the ReadThroughCache and implements its lookups through the same _exhaustiveFindOnConfig.
0
The following WT configuration, passed into wiredtiger_open, can be used to encourage more interesting races. It's unclear how much of a perf impact this has, so we may need to experiment to find the right set of tests that can take advantage of it:

{noformat}
timing_stress_for_test=[…]
{noformat}
0
There's a separate ticket for the macOS StorageMixin failures. The errors for this ticket look like just a couple of tests failing; they can be viewed here: ….
1
The new test added in … for metrics.repl.network.oplogGetMoresProcessed requires document locking. This is because the test blocks oplog fetching on a failpoint while trying to do oplog writes; under ephemeralForTest, the oplog writes acquiring the X lock would conflict with the oplog reads holding the S lock that's blocked on a failpoint. Normally that's not a problem, because oplog writes should take MODE_IX locks and oplog reads should take MODE_IS when the storage engine supports document-level locking.
0
This can be a simplified version of the host-termination job. It should include a termination reason, for event-logging purposes.
0
Reserve this code in case we need to use it to simplify any cases where we need to re-establish the stream on all shards, such as when an unsharded collection becomes sharded, by pushing the retry behavior up to the drivers.
0
A new defect has been detected and assigned to … in Coverity Connect. The defect was flagged by the checker COPY_PASTE_ERROR in file src/mongo/s/shard_key_pattern.cpp, function mongo::flattenBounds(const mongo::IndexBounds&, …) const. This ticket was created by renctan.
0
Description of drivers ticket: A common complaint from our support team is that they don't know how to get debugging information out of drivers. Some drivers provide debug logging for this purpose; others do not. All drivers implement support for publishing events useful for driver and application debugging: application performance monitoring, connection monitoring, and server discovery and monitoring. The goal of this project is to define a default set of event listeners, using the events from these three specs, that behave the same and provide the same messages in every driver. It should be trivial for users to enable and disable these listeners, providing TSEs, CEs, and users an easy way to get debugging information out of our drivers. Along with the event listeners, each driver should provide easily discoverable documentation on how to enable these listeners and what features they provide. The key idea here is to provide information and logging messages that are consistent across drivers, making it easier to support all drivers and far easier to document how to debug specific issues. See … for updated details.
1
Querying on an indexed field which contains Infinity and -Infinity does not produce expected results when using range predicates. Note: the upper bound on the second query is equal to the max double value, which compares as less than Infinity. If you do an equality search, it will use the index just fine. Here's an example with explain output:

{noformat}
> t.save({ number: Infinity })
> t.find({ number: Infinity }).explain()
{
    "cursor" : "BtreeCursor number_1",
    "isMultiKey" : false,
    "n" : …,
    "nscannedObjects" : …,
    "nscanned" : …,
    "nscannedObjectsAllPlans" : …,
    "nscannedAllPlans" : …,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : …,
    "nChunkSkips" : …,
    "millis" : …,
    "indexBounds" : { "number" : [ [ Infinity, Infinity ] ] },
    "server" : "…"
}
> t.find({ number: { $gte: … } }).explain()
{
    "cursor" : "BtreeCursor number_1",
    "isMultiKey" : false,
    "n" : …,
    "nscannedObjects" : …,
    "nscanned" : …,
    "nscannedObjectsAllPlans" : …,
    "nscannedAllPlans" : …,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : …,
    "nChunkSkips" : …,
    "millis" : …,
    "indexBounds" : { "number" : […] },
    "server" : "…"
}
{noformat}
0
… alludes to a skip option, but it isn't demonstrated in the code example.
0
getLastError is returning old/wrong/no data. The code that we are running works when not sharded. We have a shard setup: … shards, each with … servers in a replica set, … config servers, and mongos running on the application server. All queries use safe mode. After doing an update with a query that would match … documents, the following is returned via getLastError: { updatedExisting: true, …, lastOp: …, err: null, … }. No documents will have been updated, so … should be wrong, as well as updatedExisting. When using a query that does match a document and does successfully update, occasionally we will get the following from getLastError: { …, lastOp: …, err: null, writeback: …, updatedExisting: true, writebackGLE: { …, lastOp: …, err: null, … } } — … is sometimes the same, if that means anything.
1
See the "merging results" section of the bulk write spec.
1
See … for updated details.
0
Make the change in etc/pip/components/external_auth.req to fix builds on Linux ….
1
Hi, I am facing an issue related to the mongos process. In my setup, the mongos process is consuming a large amount of swap memory. Initially, when I start the mongos process, it consumes around … of swap memory; after that, I start provisioning data to the database. Some time later, I observed the mongos process consuming around … GB of swap memory; the swap memory configured on the Linux machine is …, and it consumes almost all of the swap memory configured in the setup. The configuration of my setup is as follows: created … replica sets, each having … members including an arbiter; created … config servers; created … mongos servers. Each mongod and mongos process runs on an individual Linux machine. Steps I followed to recover swap: stop the mongos process, then start the mongos process; after that, consumption of swap memory is reduced. Could you please suggest? Thanks & regards, KrishnaChaitanya
1
Create the feature flag for this project.
1