text_clean: string (lengths 10 to 26.2k) | label: int64 (values 0 or 1)
Driver tests designed to exercise the ResumableChangeStreamError label introduced in will need to use a failpoint to induce the desired exceptions. However, for getMore commands the existing generic failCommand failpoint runs too early in the command path, long before the getMore has checked out its cursor and set curop.originatingCommand. The error-label code therefore cannot determine that this cursor belongs to a change stream and will not attach the error label. We should add a new failGetMoreAfterCursorCheckout failpoint to support these tests.
0
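A rough sketch of how a driver test might drive the proposed failpoint, in the same way existing tests use failCommand via configureFailPoint. The failpoint name comes from this ticket and does not exist yet, the error code and data shape are made-up examples, and a test server started with enableTestCommands is assumed:
{code:python}
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# Arrange for the next getMore to fail only after the cursor has been
# checked out and curop.originatingCommand has been set.
client.admin.command(
    "configureFailPoint",
    "failGetMoreAfterCursorCheckout",
    mode={"times": 1},
    data={"errorCode": 280},  # hypothetical error code for illustration
)
try:
    with client.test.coll.watch() as stream:
        # The getMore issued here should now fail with an error that carries
        # the ResumableChangeStreamError label.
        stream.try_next()
finally:
    # Always disable the failpoint when the test is done.
    client.admin.command(
        "configureFailPoint", "failGetMoreAfterCursorCheckout", mode="off"
    )
{code}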
I have a MongoDB node setup with a primary, a secondary, and an arbiter. In this case, if the secondary node is down, the "replica acknowledged" write concern does not work: it tries to replicate data to the second node, but the arbiter node does not hold data, so the write is never replicated. As a result I am unable to read the data from the primary node, because the primary gets locked at the time of the write operation, which is never acknowledged by the secondary node.
1
Hello, I followed the steps to install/upgrade from but at the apt-get update step I got an error on two different servers: {code} W: Failed to fetch ... Unable to find expected entry in Release file (wrong sources.list entry or malformed file) E: Some index files failed to download. They have been ignored, or old ones used instead. {code} Can you help? Thanks.
1
version released fix for addresses issue deriving the default port for config servers which were started with the configsvr option but no port specified
1
currently i have a process where i am tailing a mongo operations log intermittently i have an issue where the process hangs and becomes unresponsive to kill signals or keyboard interrupts i believe the issue is occurring in the tailing of the mongo oplog below is my code i am using and the latest version of pymongo any idea would be appreciatednoformatfrom pymongo import mongoclientreadpreferencemongoreplicasetclientimport loggingclass mongohelper def initselfkwargs if replicaset in kwargs and kwargs selfclient mongoreplicasetclienthostkwargs portkwargs if port in kwargs and typekwargs is int else replicaset kwargs readpreferencereadpreferencesecondarypreferred tzawaretrue else selfclient mongoclienthostkwargs portkwargs tzawaretrue selfdb selfclient selfdbauthenticatekwargs kwargs sourcekwargs selflogger kwargs if logger in kwargs else logginggetloggername def collectionselfcollectionname return selfdbnoformatnoformatfrom pymongoerrors import autoreconnectfrom bson import timestampfrom time import sleepfrom collections import ordereddictimport loggingclass mongotailer schema ordereddict name pythontype string sqltype varchar length time pythontype integer sqltype integer ordinal pythontype integer sqltype integer def initselfkwargs selfclient kwargsclient selfsql kwargs selfcreatetable if reimport in kwargs and kwargs true selfcurrentts selfcurrentordinal selfgetcurrentoplogtime else selfcurrentts selfcurrentordinal selfgetstoredoplogtime selflogger kwargs if logger in kwargs else logginggetloggername def getcurrentoplogtimeself r if r return rtime rinc else raise notimplementedno oplog detected def getstoredoplogtimeself q select time ordinal from etlmongooplog r selfsqlfetchrecordsq if lenr if r is not none and r is not none return r r return selfgetcurrentoplogtime def createtableself query select from informationschematables where tablename etlmongooplog exists selfsqlfetchrecordsquery if lenexists selfsqlexecuteselfsqlgeneratecreatetableschemaselfschemanameetlmongooplogifexiststrue selfsqlexecuteinsert into etlmongooplog nametimeordinal values s s s def updateoplogtimeself ts query update etlmongooplog set time s ordinal s selfsqlexecutequery def tailself instances selfinstances instances query ts gt timestamptimestampselfcurrentts selfcurrentordinal cursor selfclientoplogrsfindquerytailabletrue timeoutfalsesortnatural try while cursoralive for doc in cursor selfhandleoplogdoc except autoreconnect as e selfloggerwarnlost connection to mongo finally if cursor cursorclose def handleoplogself doc if doc n selfloggerinfosystem message in replica set strdoc elif doc in selfinstances selfloggerinfohandling an oplog event for doc selfhandleknownoplogdoc else args ns strdoc doc strdoc selfloggerwarnunknown namespace for oplog event extra args selfupdateoplogtimedoc def handleknownoplogselfdoc if doc i selfhandleinsertdoc elif doc u selfhandleupdatedoc elif doc d selfhandledeletedoc def handleinsertself doc selfloggerdebughandling an insert document selfinstancesinsertdoc def handleupdateself doc selfloggerdebughandling an update on document selfinstancesupdatedoc def handledeleteself doc selfloggerdebughandling a delete on document selfinstancesdeletedoc strdocnoformat
1
The GeoJSON spec says that positions should be decimal values, and the Java driver correctly expects a BsonDouble. However, mongod allows one to index geometries with integer positions when a geospatial index is defined. I am not convinced that it should be okay to deserialize integral positions, but assuming one has them indexed in Mongo, they won't be able to use codecs to read them.
0
According to discussion at , it seems as if the server dispatches OP_COMMAND requests based on the wire protocol command name field, but the network layer never verifies that the command name field matches the first element of the command request. This can yield confusing error messages in command execution, as exhibited by the thread above, and can possibly trip an assertion if any command implementations assert that the first field of the command request object is the expected command name.
0
new to mongodb i have already install successfully in my pc but when i run my code as follows this error happened the console outputs the invalid bson what should i do to solve it codec int main bsoncxxdocument streamdoc streamdoc id name opendocument first john last backus closedocument contribs openarray fortran algol backusnaur form fp closearray awards openarray opendocument award ww mcdowell award year by ieee computer society closedocument opendocument award draper prize year by 你好 closedocument closearray bsoncxxview view streamdocview stdcout bsoncxxtojsonview stdendl return code
0
Typed-command conversion of the authenticate command inadvertently swapped the user and db fields, resulting in replies like:
{noformat}
$external> db.runCommand({authenticate: ..., mechanism: ...})
{ dbname: "OU=widgets,O=stuff Inc,C=US,ST=New York,L=New York City,CN=widget:bob", user: "$external", ok: ... }
{noformat}
This happens here:
{noformat}
return AuthenticateReply(session.getUserName().toString(), session.getDatabase().toString());
{noformat}
This initializes the reply through two string args to the constructor which, inobviously, are passed in the wrong order (db comes first). We can fix this with a swap:
{noformat}
return AuthenticateReply(session.getDatabase().toString(), session.getUserName().toString());
{noformat}
But a more durable fix, which doesn't rely on a generated constructor signature, would be to construct by parts:
{noformat}
AuthenticateReply reply;
reply.setUser(session.getUserName());
reply.setDb(session.getDatabase());
return reply;
{noformat}
This way there's no ambiguity or hard-to-spot ordering issues.
0
We introduced an improved error API version in , but the flawed old version is the default for backwards compatibility in the C driver. Switch the default to the improved version.
0
OS X debug / FreeBSD: decide how to handle these failures, either by improving the machines or by increasing the timeout on the failing tests.
1
Inclusion of a digest of the full username in the logical session id, in addition to the current GUID, is necessary to fully disambiguate logical sessions in degraded clusters, when the authoritative record for a session is unreachable. Semantics for the uid are as follows. Session creation via startSession: sessions can only be created with one and only one user authenticated; the composite key is created from a GUID created on the spot as well as the digest of the currently authed username; only the session GUID is returned to the user. This prevents outside users from attempting to send back a value we'd have to check; it is preferable to decorate the GUID with the user digest per command rather than having to check a value the user might send. Session use for a command: sessions are passed via the lsid top-level field in any command; sessions are only meaningful for commands which requireAuth, and for commands which don't require auth we strip session information from the command at parse time; session ids are passed as an object which can optionally include the username digest; it is illegal to pass the username digest unless the currently authed user has the impersonate privilege (the system user does). This enables sessions on shard servers via mongos.
0
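For illustration, a small PyMongo sketch showing that the driver-visible session id is just the GUID described above; the user digest would be added server-side and never round-trips through the client (connection details are placeholders):
{code:python}
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# Start a logical session; the driver receives only the session GUID.
with client.start_session() as session:
    print(session.session_id)  # e.g. {'id': Binary(b'...', 4)}, a UUID with no user digest

    # The session is attached to commands via the top-level lsid field.
    client.test.coll.find_one({}, session=session)
{code}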
We need to add CollatorInterface::getComparisonKey() in order to support index key generation and index bounds building.
0
There have been two occasions where creating an ECS container and then waiting for it to come up errored out because the container was missing. In such a case, the container tester script (container_tester.py) should try to create a new container.
0
makechecktest failed on os x wiredtiger developcommit diff fill content for the cursor subpage in the architecture guide jan utcevergreen subscription evergreen event task logs
0
The call to restart() waits to connect to the restarted node by default, but in this case the node is expected to crash after starting up, so the call to restart() is racy and can fail if the node crashes before restart() is able to connect to it. The test should pass {waitForConnect: false} as the options object to this call to restart().
0
timegm() is not a POSIX-compliant function and thus fails to compile on some variants of Solaris; see for more details.
1
When a document fails validation because the number of array elements did not match a JSON Schema maxItems/minItems constraint, the message doesn't return the length of the considered array. This may be inconvenient for large arrays, as the user has to count the items in consideredValue to understand whether the count was off by one or by more. Meanwhile, on violation of maxProperties/minProperties the error does contain the number of properties.
{code:java}
v = {$jsonSchema: {properties: {myArray: {bsonType: "array", maxItems: ...}}, maxProperties: ...}}
db.createCollection("c", {validator: v})
db.c.insert({myArray: [...], key: value})
{code}
{noformat}
failingDocumentId: ..., details: {
  operatorName: "$jsonSchema",
  schemaRulesNotSatisfied: [
    {operatorName: "properties", propertiesNotSatisfied: [
      {propertyName: "myArray", details: [
        {operatorName: "maxItems", specifiedAs: {maxItems: ...},
         reason: "array did not match specified length", consideredValue: [...]}
      ]}
    ]},
    {operatorName: "minProperties", specifiedAs: {minProperties: ...},
     reason: "specified number of properties was not satisfied", numberOfProperties: ...}
  ]
}
{noformat}
0
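A small PyMongo sketch of the reproduction against a local test database; the maxItems limit of 2 is an arbitrary illustrative value:
{code:python}
from pymongo import MongoClient
from pymongo.errors import WriteError

client = MongoClient("mongodb://localhost:27017")
db = client.test

db.drop_collection("c")
db.create_collection(
    "c",
    validator={"$jsonSchema": {"properties": {"myArray": {"bsonType": "array", "maxItems": 2}}}},
)

try:
    db.c.insert_one({"myArray": [1, 2, 3]})  # one element over the illustrative limit
except WriteError as exc:
    # On recent servers the details include errInfo with the schemaRulesNotSatisfied
    # breakdown discussed above: consideredValue is present, but no array length.
    print(exc.details)
{code}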
mongomirror suffers from the same problem as in mongodump may as well to fix we need to do a forward scan query on the oplog to ensure that all previous operations are committed like this
0
What we want to do: upgrade all indexes to version in our replica set. How we do this: we start with one secondary, shut it down, change the port and remove the replSet parameter, and start it with the repair command to reindex all indexes; after the repair, we reset the configuration and connect it back to the replica set, then wait until the slave is caught up before proceeding with the next secondary. What is the problem: the secondary is not able to catch up with the master. It has a single process running with high CPU usage and almost idle I/O (CPU bound), and it falls slowly more and more behind; all hosts have the same hardware. What we suspect: we have a database which has some indexes with the old version and some with the new. If a secondary upgrades the indexes, it has all indexes on the latest version, and this locks the replay/resync of the oplog from the master, which still has the mixed-version indexes. We downgraded the indexes again with an older mongod binary; after this was finished we connected the secondary to the replica set again, and it replayed the oplog without a problem and is now in sync again. All hosts have the mongod binary version . Attached: iostat and mongostat output; the host with the problem is . The mongod log shows no error message, just a recurring message about the cursor: Wed Nov ... repl: old cursor isDead, will initiate a new one. Regards, Steffen
1
goal is to migrate away from kill cursor op code because it does not have the namespace the new command should require the namespace or an array of namespaceid pairs so auth and locking can be improved
0
metatrackapply metatrackunroll have unnecessarily complex error handling
0
as of search will become an alias for searchbeta and is used everywhere except internally when communicating with a mongot where searchbeta is still used replace this usage of the searchbeta mongot command with the search mongot command
0
We have Mongo users and authentication databases whose names look like numbers with a leading zero. We see in the Mongo log that Compass translates the authentication database string into a number (in the example, the string becomes a number), so the authentication fails. Example of a log row with the error: I ACCESS ... authentication failed for ... on ... from client ...; UserNotFound: user not found. The authentication database is not the numeric value, but the string is converted to a number.
1
This is somewhat confusing: although packages are the preferred installation method, for Linux systems without supported packages see the following guide. This table lists MongoDB distributions by platform and version. We recommend using these binary distributions, but there are also packages available for various package managers.
1
it would be nice to know what parts of the driver are being adequately tested by the tests that we have includedthis is going to involve extending the cmake setup to pass any gcov state down to the externalprojectadd for the client library and probably doing something to capture the output files
1
currently the least significant byte is timestamp and the second least counter but because the counter is always incremented we essentially get a byte change for both an update in timestamp or counter if we rearrange to have the counter in the least significant byte we should be able to get a smaller byte delta when only the counter is updated
0
The version of kill_procs on macOS has been demonstrated to hang for days, specifically the ps call. When I logged into the affected systems, running ps from the command line also hung. We should either figure out why this is hanging and fix it, or at least add a timeout.
0
tldr monitoringonly sockets must not send scram mechanism negotiation in ismaster monitoringonly sockets must not authenticate at all nonmonitoring sockets eg connection pool or singlethreaded client do a normal handshake and authenticate if there are credentials an authentication error on a socket must close all and only nonmonitoring sockets to the same server possible backward breaking change some drivers were resetting a servers topology description to unknown on an authentication error and should stop doing so this means the topology will always be correct even when authentication fails it will no longer be possible for authentication errors to be masked as server selection errors detailed changes
0
Here's some data:
{code}
db.scratch.find()
{ "_id" : ..., "a" : "baz" }
{ "_id" : ..., "a" : "foo", "b" : "bar" }
{code}
Here's a pipeline; the desired effect is to concat a and b if I have both fields, or just use a if b isn't present:
{code}
db.scratch.aggregate({ $project: { ab: { $concat: [ "$a", "$b" ] } } })
{ "_id" : ..., "ab" : null }
{ "_id" : ..., "ab" : "foobar" }
{code}
But when b is not present and I want a, I get null.
0
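A sketch of the usual workaround, wrapping the possibly-missing field in $ifNull so $concat falls back to the a value alone; collection and field names follow the example above, and any local test database will do:
{code:python}
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
scratch = client.test.scratch

pipeline = [
    {"$project": {
        # Substitute an empty string when b is missing so $concat does not return null.
        "ab": {"$concat": ["$a", {"$ifNull": ["$b", ""]}]},
    }}
]
for doc in scratch.aggregate(pipeline):
    print(doc)
{code}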
see releasestring on
1
i feel like this ssl page needs some expansion perhaps into one all encompassing how do i configure mms and sll tutorial pagethere are various points for ssl as discussed with cailin via https for connection to mms serveralthough any sensible organization will ignore that and just put an https proxy in front of mms ssl for mongodb connections between agents and the monitoredbackedup mongodsand also settingswould make sense to merge all these various pages somewherecc
1
this may be related to the changes in codepython error testcertsslvalidationhostnamematching testssltestssl traceback most recent call last file line in wrap return fargs kwargs file line in testcertsslvalidationhostnamematching file line in connected clientadmincommandismaster force connection file line in command with clientsocketforreadsreadpreference as sockinfo slaveok file line in enter return selfgennext file line in socketforreads with selfgetsocketreadpreference as sockinfo file line in enter return selfgennext file line in getsocket server selfgettopologyselectserverselector file line in selectserver address file line in selectservers selferrormessageselector serverselectiontimeouterror ssl handshake failed verify failed code
0
this idea came up in a discussion with and can we improve compact efficiency by having the block manager identify the blocks that need to be moved this way the upper layer wouldnt need to walk the btree looking for blocks to move the block manager has the block allocation information so it knows which blocks are live and which are free the idea is that the higher layer ie above the block manager would ask the block manager what blocks would be helpful to move in response the block manager could suggest blocks to move ie those with the highest offsets in the allocated extent list and pass back the address cookies to the higher layer the higher layer would figure out what the block is possibly by cracking the address cookie or by reading the block and examining its contents then it would write the block to a different address there is synergy here with ideally we can just tell the block manager to move a block without unpacking and reconciling it one challenge is overflow values there isnt information in those blocks to identify the leaf page that points to the overflow since this is a rare case esp in mongodb we could just ignore those blocks likewise if we find other corner cases where it is hard or impossible to figure out what a block is it wont violate correctness to leave it where it is a scheme like this could also be useful when we implement garbage collection for tiered storage in that case we have the same problem as compact we have an object with a small number of blocks of live data we want to identify those blocks and rewrite them so they will be allocated elsewhere
0
with mongodb set a short sockettimeoutms like and begin a tailable query with awaitdata after mongoccursornext times out do any subsequent operation expected the socket has been closed so the next operation opens a new socket and proceeds normally actual the socket used for the query is still open so it reads the query response instead of the current operations response the response will be like code cursor nextbatch id ns dbtesttestcapped ok code the bug is in the getmore command path with mongodb wire protocol version but not in the legacy opgetmore path
1
mms server changelog released support for archive restores targz for databases whose filenames exceed characters api skip missed points in metrics data instead of returning empty data api return correct number of data points when querying metric data with the period option backup agent update to agent released with onprem use notimeout cursors to work around
1
Hi. In authentication, db.addUser was used to add a user as well as to change the password for existing users, but it looks like you get a duplicate key error in if you try to use addUser to change a password. I think the new command is db.changeUserPassword, but I don't see any documentation on it.
1
I can no longer use the "set RDP password" form to set passwords on the Windows spawn hosts. The form appears to have been submitted successfully, but the password is never changed; I have to log in to the server over SSH and reset it manually.
0
Using MongoDB , every time I connected to the server within a function I did this: boost::scoped_ptr conn(mongo::...::getScopedDbConnection("localhost")); then I would use conn->get(), and when done, conn->done(). In I am doing this within each function: mongocxx::instance inst; mongocxx::client conn(mongocxx::uri{}); This is apparently wrong to do every time I want to make a connection. The first time it works perfectly: the server window outputs "connection accepted from ... connections now open" and dutifully ends the connection at the end of the function block ("end connection ... connection now open"; the other connection is the mongo shell). But when I repeat this in a subsequent block and try a command, it throws an exception: "No suitable servers found (serverSelectionTryOnce set): generic server error". So am I supposed to create the instance/connection once and pass it to every function I write, since globals are evil?
0
There's only one implementation, so there's no need for an interface.
0
The issue is that I do not want the _id column to be included in analysis in data analytics. What is the solution for hiding it? If this column is in the collection then the data quality score will be different. I am working on IBM analytics and certifying MongoDB; please help with this.
1
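One common way to keep _id out of downstream analysis is to exclude it in the query projection; a minimal PyMongo sketch, with placeholder database and collection names:
{code:python}
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.mydb.mycollection  # placeholder names

# Project away _id so exported documents contain only the analytic fields.
for doc in coll.find({}, {"_id": 0}):
    print(doc)
{code}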
i cant start any replica secondaries server restarted sun sep mongodb starting dbpathebsmongodb sep db version pdfile version sep git version sep build info linux smp fri nov est sep options config ebsmongolatestbinmongoconf dbpath ebsmongodb directoryperdb true fork true logappend true logpath mntmongomongolog nojournal true port quiet true replset rest true sun sep unable to check for journal files due to boostbasicdirectoryiterator constructor no such file or directory ebsmongodbjournalsun sep admin web console waiting for connections on port sep waiting for connections on port sep replset i am sep replset sep replset member is upsun sep replset member is now in state arbitersun sep replset member is upsun sep replset member is now in state primarysun sep replset still syncing not yet to minvalid optime sep replset syncing to sep replset still syncing not yet to minvalid optime sep replset setting oplog notifier to sep command admincmd command replsetgetstatus sep command admincmd command replsetgetstatus sep command admincmd command replsetgetstatus sep command admincmd command replsetgetstatus sep command admincmd command replsetgetstatus sep command admincmd command replsetgetstatus sep command admincmd command replsetgetstatus sep command admincmd command replsetgetstatus sep command admincmd command replsetgetstatus sep assertion bsonobj size first element eoo ebsmongolatestbinmongod ebsmongolatestbinmongod ebsmongolatestbinmongod sun sep error writer worker caught exception invalid bsonobj size first element eoo on ts timestamp h op i ns gerawrawhits o id types ttconvert adid uid trackersused ip agent windows nt khtml like gecko platform windows version referer string string string cls hit query dt new type v gen null browser chrome sun sep fatal assertion ebsmongolatestbinmongod sun sep aborting after fassert failuresun sep got signal abortedsun sep backtrace ebsmongolatestbinmongod
1
This command:
{code}
client.DropDatabaseAsync(dbName).Wait();
{code}
doesn't drop the database. I can drop it from the shell just fine, and the following works just fine:
{code}
var command = new BsonDocument { { "dropDatabase", 1 } };
db.RunCommandAsync<BsonDocument>(command).Wait();
{code}
This leads me to believe there's a problem with the DropDatabaseAsync method.
1
add a uuid collectionuuid member to keep track of the collections id
0
workload uses threads each of which repeatedly creates a new document then grows an array in the document and does some other updates document grows to kb the test script lets this workload run for a random time from seconds then abruptly kills the mongod processes repeating this cycle of running the workload then killing the mongod until a mongod fails to recover during startup recovery will frequently fail with a bson bad type error usually within a few iterations this is a recurrence in of a problem that was thought to have been fixed by and is using the same test that led to that ticket
1
Hi, I sometimes get the following error. This blocks the whole system; after I restart the mongod service it works well again. This was not the first time I got this error: {code} uncaught exception: getlasterror failed: { shards: [...], ok: ..., errmsg: "could not get last error from a shard ... caused by ... socket exception for ..." } {code}
1
A quick grep for cursorType reveals that it's only mentioned in options/find*, never in any of the implementation files, so we just don't propagate it and there's no way to ask for a tailable or await cursor right now. Mentioned in the mongodb-dev group at .
1
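For comparison, this is what requesting a tailable await cursor looks like through PyMongo, which is the behavior the cursorType option is meant to expose; the collection name is a placeholder and the collection must be capped:
{code:python}
from pymongo import MongoClient
from pymongo.cursor import CursorType

client = MongoClient("mongodb://localhost:27017")
db = client.test

# Tailable cursors only work on capped collections; this one is a placeholder.
if "events" not in db.list_collection_names():
    db.create_collection("events", capped=True, size=1024 * 1024)

cursor = db.events.find(cursor_type=CursorType.TAILABLE_AWAIT)
for doc in cursor:  # blocks awaiting new documents while the cursor stays alive
    print(doc)
{code}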
updating compass editions matrix at for view and optimize query performance with visual explain plans compass readonly is available add a checkmark add a line for embedded shell support add checkmark for stable and isolated but not readonly
0
we should clarify the following two points get the fact that within no longer requires an index into the release notes and the reference page ensure that the geowithin rename of within works in the agg framework leaving the reference in the release notes to within for the moment
1
A bunch of Cypress tests broke over the weekend; there is no reason or commit that I can associate with the failures.
1
Hello. Looking at the npm version history, you rolled back latest from version , but that version now gives me this Mongo error: MongoError: seed list contains no mongos proxies; replicaset connections require the parameter replicaSet to be supplied in the URI or options object: mongodb://server:port/db?replicaSet=name. Version works well, and maybe there is also a problem with the rollback of latest on .
1
{code}
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local
make
{code}
produces
{code}
Building CXX object src/mongocxx/CMakeFiles/mongocxx.dir/bulk_write.cpp.o
Building CXX object src/mongocxx/CMakeFiles/mongocxx.dir/client.cpp.o
In file included from mongo-cxx-driver/src/mongocxx/private/ssl.hpp:
In function 'mongoc_ssl_opt_t mongocxx::make_ssl_opts(const mongocxx::ssl&)':
error: 'const value_type' has no member named 'c_str'   out.pem_file = ssl_opts.pem_file.c_str();
error: 'const value_type' has no member named 'c_str'   out.pem_pwd = ssl_opts.pem_password.c_str();
error: 'const value_type' has no member named 'c_str'   out.ca_file = ssl_opts.ca_file.c_str();
error: 'const value_type' has no member named 'c_str'   out.ca_dir = ssl_opts.ca_dir.c_str();
error: 'const value_type' has no member named 'c_str'   out.crl_file = ssl_opts.crl_file.c_str();
recipe for target 'src/mongocxx/CMakeFiles/mongocxx.dir/client.cpp.o' failed
{code}
1
code pymongo docstrings with code blocks that need update motormotorasyncioasynciomotorclientunlock motormotorasyncioasynciomotorcollectionreindex motormotorasyncioasynciomotorcursorwhere motormotorasyncioasynciomotorgridoutcursorwhere motormotortornadomotorclientunlock motormotortornadomotorcollectionreindex motormotortornadomotorcursorwhere motormotortornadomotorgridoutcursorwhere code
0
these tests should instead have an entry against lastlts in backportsrequiredformultiversiontestsyml in the current release
1
The recordPreImage feature added in requires the FCV to be set to at startup. I am able to set the flag on a server, but it is not validated properly at startup, and the server fails to start.
1
For a given document where I have a unique sub-document which I want to update:
{code}
a b c d d c d d
{code}
For this document you cannot update d to d. Proposed change: allow updating using place markers as to the matched location, rather than $elemMatch (note: this is the proposed change; better ways could be provided):
{code}
db.coll.update( a b c )
{code}
Please see: $elemMatch or any other operator doesn't work here; even if I have the unique path to my sub-document which matches a single element, I cannot update it. This is the biggest feature gap.
1
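For reference, on modern servers (3.6+) a filtered positional update can target one element of a nested array. A PyMongo sketch under an assumed document shape, since the original example's structure was garbled:
{code:python}
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.test.coll

# Assumed shape for illustration: {"a": {"b": {"c": [{"d": 1}, {"d": 2}]}}}
coll.insert_one({"a": {"b": {"c": [{"d": 1}, {"d": 2}]}}})

# Update only the array element whose d equals 2, using an arrayFilters placeholder.
coll.update_one(
    {"a.b.c.d": 2},
    {"$set": {"a.b.c.$[el].d": 3}},
    array_filters=[{"el.d": 2}],
)
print(coll.find_one())
{code}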
According to the mongos manual, db.collection.update with multi: true always broadcasts to all shards ("multi-update operations are always broadcast operations"), while the db.collection.update manual says it always needs the shard key for a sharded collection ("sharded collections: all update operations for a sharded collection must include the shard key or the _id field in the query specification; update operations without the shard key or the _id field return an error"). I think the two documents' explanations conflict: why does a multi-update need a shard key condition even though it cannot be a targeted query? And according to the mongos manual (above link), updateMany and deleteMany can be targeted queries if a full shard key condition is provided, so why can't a multi-update be a targeted query? Is there any difference between updateMany and db.collection.update with multi: true?
0
need to hook up new wt journaling feature so that logop can make use of it on the primary this way we can allow secondaries to read the oplog without needing to wait for such entries to be journaled first
0
the changes from as part of backporting to the branch introduce a bug where test suites using the interfacefixture class ie test suites which start their own mongodb deployment raise a notimplementederror exception in their getdriverconnectionurl method this exception is then caught and then signals to the other resmokepy job threads they should stop running however because no test was ever started ie testreportstarttest is where the exception was raised resmokepy exits with a return code of after all of this is done in particular testreportwassuccessful returns true if no tests have been run noformat encountered an error during test execution traceback most recent call last file line in call selfrunqueue interruptflag file line in run selfexecutetesttest file line in executetest testselfreport file line in call return selfrunargs kwds file line in run resultstarttestself file line in starttest command testascommand file line in ascommand return selfmakeprocessascommand file line in makeprocess connectionstringselffixturegetdriverconnectionurl file line in getdriverconnectionurl getdriverconnectionurl must be implemented by fixture subclasses notimplementederror getdriverconnectionurl must be implemented by fixture subclasses noformat
1
Problem statement/rationale: when trying to connect to a remote MongoDB using Compass, it always tries to connect to localhost. Steps to reproduce (could an engineer replicate the issue you're reporting?): enter the values for the hostname, username, and password fields; set the replica set name and read preference as primary; note that if a local instance of MongoDB is running, it connects and wrongly shows that database, since the details were for the remote machine; observe that if the local instance of MongoDB is stopped, you receive an "unable to connect to database" error. Expected results (what do you expect to happen?): connect to the remote using the details provided, or show an error. Actual results (what do you observe is happening?): connects to localhost if it is running, or shows that it is unable to connect to localhost, even though the details provided in the connection string are for the remote. Additional notes (additional information that may be useful to include): it connects to the remote without authentication or a replica set.
1
It is silly to compile regexen in a driver language only to deconstruct them for Mongo. This also makes it easier to represent a query in pure JSON, which is useful in my app. Please make options optional.
0
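For illustration, a query written as plain JSON-style documents rather than a compiled regex object; PyMongo accepts either form, and the $options field can simply be omitted (collection and field names are placeholders):
{code:python}
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.test.people  # placeholder collection

# Pure-JSON query: the pattern is a string, no compiled regex object needed,
# and $options is optional.
query = {"name": {"$regex": "^mick"}}
case_insensitive = {"name": {"$regex": "^mick", "$options": "i"}}

print(coll.count_documents(query))
print(coll.count_documents(case_insensitive))
{code}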
weve come across this issue several time in our unit tests a failed assertion or any other exception thrown never reaches the mochanode exception handlers if it is causedthrown within certain mongoose callbacksheres an example of a mocha test that never returnsmongoose requiremongoosemodel mongoosemodelmodel new mongooseschemamongooseconnectmongodblocalhostdatabaseithanging test done modelaggregate simulating a failed assertion throw new errorsome failed assertion done the exception is handled by baseprototypecallhandler and is never rethrown again so mochanode and before that mongooses utilstick are unable to catch itthis doesnt happen for all queries if find or create are used above the exception is first caught by mongooses utilstick and then rethrown so mocha is able to handle them in the endany thoughts on this
0
there is a case when reconciliation is generating an ondisk page with associated lookaside entries from an inmemory page that has a set of modify updates where the wrong visibility check is applied which can lead to the wrong version of data being saved we need to fix the visibility check
1
This page ("Install a Simple Test Ops Manager Installation", here) should include a note to install the OS prerequisites. If you don't do this it leads to errors; specifically, I ran into problems with queryable snapshots, and extensive troubleshooting revealed a missing library (libnetsnmpmibs) that was resolved by installing these dependencies. There's a note in step for ulimit settings; maybe just add another yellow box to install OS dependencies. We have the dependencies subsequently here. Another issue is that the page up top says "this procedure explains how to activate the backup feature", but it doesn't, not as far as I can tell. These omissions appear in all versions of the docs.
0
introduced some optimizations to improve load times related to displayexecution task queries we might be able to improve the performance further if additional state is tracked for execution tasks eg to filter them out of queries currently this patch takes seconds to load for me
0
It seems setting a on a cursor will cause it to return null on next(), even if there is a document to return. If you call hasNext() before calling next(), it will return true for hasNext(). Steps to recreate:
{code:java}
node --experimental-repl-await
const m = require('mongodb');
const c = await ...({ useUnifiedTopology: true }); // or wherever you have a db
await c.db('test').collection('coll').insertMany([...]);
const cursor = c.db('test').collection('coll').find()...;
await cursor.hasNext(); // true
await ...; // true
await cursor.next(); // x
await ...; // this returns null, it should return the doc
{code}
1
Hi, a small correction: it's descending, not ascending, in "Sort Query Results" in the Python version. Corrected in capitals in the paragraph below: "For example, the following operation returns all documents in the restaurants collection, sorted first by the borough field in ascending order, and then, within each borough, by the address.zipcode field in DESCENDING order."
0
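The corresponding PyMongo call, for reference; the database name here is assumed to be test:
{code:python}
import pymongo
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
restaurants = client.test.restaurants

# Sort by borough ascending, then by address.zipcode descending within each borough.
cursor = restaurants.find().sort([
    ("borough", pymongo.ASCENDING),
    ("address.zipcode", pymongo.DESCENDING),
])
for doc in cursor:
    print(doc["borough"], doc["address"]["zipcode"])
{code}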
I was recently trying to determine whether or not I could calculate a sum during an aggregation $group stage based on certain conditions. However, neither the $group nor the $cond documentation (or $sum, for that matter) makes any reference to the fact that this is possible. I think it may be helpful to include an example, or at least to reference that this is in fact possible.
0
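A small PyMongo sketch of the pattern in question, a conditional $sum inside $group; the collection, field names, and threshold are made up for illustration:
{code:python}
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client.test.orders  # placeholder collection

pipeline = [
    {"$group": {
        "_id": "$status",
        # Only add qty to the total when it is at least 10; otherwise contribute 0.
        "large_qty_total": {"$sum": {"$cond": [{"$gte": ["$qty", 10]}, "$qty", 0]}},
    }}
]
for doc in orders.aggregate(pipeline):
    print(doc)
{code}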
hi the upgrade guide for ops manager should be improved to cover upgrading installations where there are dedicated backup daemon hosts running ops manager or in particular there is a step missing before step starting the mongodbmms service on each machine will use the default confmmsproperties file that includes the parameter code code the service mongodbmms would not start on all the hosts that are not running an application database on the same machine port code service mongodbmms start starting preflight checks failure to connect to configured mongo instance configloadbalancefalse encryptedcredentialsfalse sslfalse dbnames error serverused ok errmsg not authorized on admin to execute command listdatabases code preflight checks failed service can not start code if there is a different mongodb database running on that is not the application database this can be possible for any reason for example it could be the blockstore database the result would be even worse the ops manager application databases will be generated like on a fresh install on the wrong application database i believe that there should be a step between step and step that includes instruction on how to configure correctly the mongomongouri parameter with the correct value before starting the mongodbmms service from version for the first time kind regards emilio
1
Atlas will be disabling support for soon. Drivers need to document how to get support for TLS on a user's OS of choice, and document where it's just not possible. The most important OSes in this case are macOS, Windows, and Linux, though the directions for Linux likely apply to all non-macOS Unix flavors. For example, the Python interpreters Apple ships on macOS older than are built against an OpenSSL which doesn't support anything better than , so the Python driver TLS docs will recommend installing a Python version from python.org to work around the problem. The Ruby driver has a similar problem and might recommend installing Ruby from Homebrew or something similar.
0
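A quick way to check, from Python, which OpenSSL the interpreter is linked against and whether it can speak TLS 1.2; the attributes below are standard-library ssl module facts, not MongoDB APIs:
{code:python}
import ssl

# The OpenSSL the interpreter was built against; old Apple-shipped Pythons
# report an OpenSSL here that cannot negotiate TLS 1.2.
print(ssl.OPENSSL_VERSION)

# True when the linked OpenSSL supports TLS 1.2 (the flag exists on Python 3.7+).
print(getattr(ssl, "HAS_TLSv1_2", "unknown"))
{code}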
There are quite a few flaky tests showing up these days; we should look into what the flaky tests are and how we can fix them.
0
paneltitleissue summary as of august issue description and impact for collections that have additional unique indexes apart from the default id index introduced a regression which may result in documents being inserted that violate those unique indexes’ uniqueness constraint these documents will be replicated successfully from the primary to the secondaries if this bug is exercised and multiple documents exist in a collection violating a unique index constraint subsequent delete operations using the affected unique index will only modify half rounding up of the affected documents being targeted per execution this is a result of internal optimizations that rely on uniqueness query and update operations are not affected and will return all targeted documents diagnosis and remediation this issue affects mongodb and this issue is resolved in and deployments on the affected versions that rely on unique indexes apart from the id index should be upgraded to mongodb or as soon as possible after upgrading to a version that is not impacted by this bug users can determine whether they have been impacted by using the validate command to validate all collections or by running the attached script finduniquenessviolationsjs this script iterates through every database and collection in the cluster looking for unique indexes that are not the id index for each unique index that it finds it will perform an operation to list each index key value that is incorrectly duplicated the ids of each document with that key value as this script will potentially perform multiple index scans we would recommend issuing it against a secondary to minimize production impact running the script here is an example invocation of the script you may also use the legacy mongo shell which will output results to resultstxt in the current directory noformat mongosh readpreferencesecondary username admin authenticationdatabase admin finduniquenessviolationjs tee resultstxt searching node for any documents that violate a unique index searching for duplicates in testc that has unique indexes found documents in testc index with duplicate values with key found documents that violate a unique index affecting collections in databases result json noformat you can inspect the affected documents by querying on the provided ids depending on the results and application logic it may be safe to remove the duplicated documents otherwise more involved reconciliation may be required for example in this case reviewing the affected documents we can see that they all match noformat dbcfindor id t i x x id t i x x id t i x x id t i x x noformat therefore based on our knowledge of the application we can safely remove all but one using the id noformat dbcremoveor writeresult nremoved dbcfindor id t i x x noformat additional option specifying namespaces to query after running the script you may notice that databases and namespaces have been skipped for reasons such as not being authorized to read a collection noformat we were unable to access these locations databases namespaces noformat you may want to run the script only against namespaces that have been skipped you can do this by modifying the script and providing an array of namespaces with the format ‘databasenamecollectionname’ in the namespace variable namespaces containing the admin local and config databases are unlikely to contain duplicate documents and may be ignored for all other namespaces verify that the user running the script has sufficient permissions to read the namespace 
codejavascripttitlefinduniquenessviolationjs snippet populate this array with specific namespaces to scan only them for duplicates using the format databasenamecollectionname namespaces code additional option automatic cleanup if you are absolutely certain that inserted documents will be materially similar this script can be leveraged to delete all but either the newest or oldest of each set of duplicates this option is disabled default and is only suitable if the contents of the duplicate documents are materially similar for your usecase warning use this facility only with extreme care as documents targeted by the script will be permanently deleted and does not back up or output the contents of those documents to use this script to clean up duplicate documents without regard for applicationspecific logic uncomment the declaration of cleanuptype in the script and set that variable to either deleteoldest or deletenewest panel this ticket will track the reverts of which was released in and
1
the distlock pinger uses the word failed in its messages which makes it hard to search for actual failures in the sharding unittests because the lock pinger always fails there since this word doesnt contribute too much to the message it should be replaced
0
json validation for my db updateupsert operations keeps failing updates are only successful when i remove the dbcreatecollection validator option parameter but schema validation is crucial for this usecase so development is currently at a halt and days have been lost as a result schema codejavascript additionalproperties false bsontype object description email data point properties id bsontype objectid address bsontype string maxlength pattern www description required email address contacttypes bsontype array description required a set of unique string values items bsontype string enum emailmap e evalue uniqueitems true datecreated bsontype long description required contact email creation timestamp in ms since epoch isprimary bsontype bool lastupdated bsontype long description contact email recent update timestamp in ms since epoch required code shell call mongodb enterprise dbemailsfindoneandupdate address adfadfslkdsffkcouk contacttypes isprimary true set address adfadfslkdsffkcouk contacttypes isprimary true datecreated returnnewdocument true upsert true result e query uncaught exception error findandmodifyfailed failed operationtime ok errmsg document failed validation code codename documentvalidationfailure clustertime clustertime signature hash keyid thanks in advance for looking into this issue
1
currently the mongoutilnetnetwork library is the dependency needed to satisfy both dependencies on the core messages types like message and opmsg and dependencies on the sslmanager and related types however some subsystems may have no need to depend on ssl related code but still want to manipulate message types this is primarily in furtherance of reducing the dependency set for the embedded library which needs to know about message but shouldnt need anything ssl related
0
Description: in the step "Start SNMP" here, instead of "sudo systemctl start mongod" it should be "sudo systemctl start snmpd", and instead of "sudo service mongod start" it should be "sudo service snmpd start". Scope of changes: impact to other docs. MVP (work and date): resources (scope or design docs, InVision, etc.).
0
after bumping to latest patch im getting this error from some of my tests failureerror pricingdiscount wherecreatedatlt active scope defined as whereactive true pluckid maptos nomethoderror undefined method for utctime block levels in merge merge block in merge eachpair merge merge merge block levels in definescopemethod block in methodmissing withscope methodmissing it looks like it was introduced with this commit which was a patch for
1
sorry for filing it in the wrong project there were no obviously applicable projects continuous integration perhaps is down perhaps you could consider monitoring it and add some selfrecovery as i am sure many of your users use thiso
1
our current understanding of the degraded mode for this operation is that it causes events to be processed which means theyll never be sent when we restore service we might want to do this by having different flags for each behavior
0
Hello, I believe you have an error in your documentation on this page. It reads "MongoDB can only use one index to support any given operation", but this is no longer the case as of version , as stated on this page of the documentation.
0
please backport to branch and release todaymonitoring agent with mms onprem fix for race condition which can cause high cpu load when connecting to a replica set member which is unreachable backup agent with mms onprem critical bug fix for backing up mongodb deployments that include user definitions the systemversion and systemrole collections from the admin database are now included in the backuponprem critical bug fix for backing up mongodb deployments that include user definitions the systemversion and systemrole collections from the admin database are now included in the backup disable mongodb for insertonly mms backup collections speed optimization for mms backup http pull restores fix for ldap integration now passes full dn correctly when authenticating the user
1
executing a dbeval segfaults the server weve been unable to replicate this on on a linux machine and it seems to be related somehow to old data as the same operation on a completely clean mongod install doesnt segfaultthe log mentions mapheatmap and reduceheatmap functions which dont appear in our code or collections as far as we can tell however they show up via mongodumptue nov connection accepted from connections now opentue nov connection accepted from connections now opentue nov end connection connections now opentue nov connection accepted from connections now opentue nov connection accepted from connections now opentue nov end connection connections now opentue nov connection accepted from connections now opentue nov syntaxerror unexpected end of inputtue nov unable to load stored javascript function mapheatmap syntaxerror unexpected end of inputtue nov syntaxerror unexpected end of inputtue nov unable to load stored javascript function reduceheatmap syntaxerror unexpected end of inputtue nov invalid access at address from thread nov got signal segmentation fault nov backtrace mongod mongod mongod libsystemplatformdylib sigtramp mongod mongod mongod mongod mongod mongod mongod mongod mongod mongod mongod mongod mongod mongod mongod dbsystemjsfind results infollowprimary dbsystemjsfind id debug value function p printp tue nov javascript execution failed syntaxerror unexpected end of inputerror javascript execution failed syntaxerror unexpected end of inputfollowprimarysee the attached jstargz for mongodumps of the systemjs collection in questionattempting to remove the functions in question from the collection failed it seems that the syntax errors prevent mongo from doing anything with them and then eval just trashes the whole daemon when it tries to interpret them
1
we have experienced failure processing signals in cedar when getting time series with null values in them golang bson driver does not know how to convert these to floats as a dag engineer id like to coalesce these nulls to such that i can process signals ac null coalesce to statistic values to during performance data aggregation
0
today i upgraded my nuget packages to version iam using a filterdefinition on gridfsfileinfo like this code var filter buildersfiltereqx xmetadatacontainerid var results await thisstoragefindfiltertolistasync code before this upgrade this worked now iam getting an error code systemmethodaccessexception attempt by method mongodbdrivergridfsgridfsfindoptions systemthreadingcancellationtoken to access method failed code
1
during the development cycle we added tests that specifically tested fcv upgradedowngrade behavior from to many but not all of these tests were marked with a todo to indicate that they should be removed after branching since these tests failed when we attempted to upgrade to they required manual intervention from the server teams that owned the tests since these tests should always be removed when introducing new fcv constants to the server we should create a directory that holds all of these tests for example jstestsmultiversiontargeteddowngradetests this will let us easily delete all of these tests as part of the upgrade process without needing to coordinate with server teams to ensure that engineers are adding their tests correctly we will require developers to label every upgradedowngrade test as “generic” or “targeted” in addition we can place multiversion tests into one of two directories inside jstestsmultiversion one for targeted tests and another for generic tests we should do an audit of current multiversion tests and refactor them so that they live in the correct directory targeted multiversion tests can be removed immediately after branch cut since we’re officially on the next development cycle at that point
1
there are some outdated comments in logging particularly logslotc that should be cleaned up
0
every few days we see an osx static host that ran a task which finished but that task is still set in the running task field preventing new tasks from running on the host
0
hasNext() doesn't work on a cursor object.
0
version released fixes issue configuring the windows firewall if the windows firewall is disabled
1
I tried to find a statement on blank characters or other white space anywhere in field names, and didn't find any. For example, should I be able to insert or query a document like this: { "first name": "mickey", "last name": "mouse" }?
0
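For what it's worth, a quick PyMongo check that a field name containing a space round-trips on insert and query; a local test collection is assumed:
{code:python}
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.test.names

# Field names with embedded spaces are legal BSON keys.
coll.insert_one({"first name": "mickey", "last name": "mouse"})
print(coll.find_one({"first name": "mickey"}))
{code}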
the libbson and the mongodb c driver is ok mongocxxdriverbuild got an error erroroutput the cxx compiler identification is gnu check for working cxx compiler usrbinc check for working cxx compiler usrbinc works detecting cxx compiler abi info detecting cxx compiler abi info done detecting cxx compile features detecting cxx compile features done no build type selected default is release the c compiler identification is gnu check for working c compiler usrbincc check for working c compiler usrbincc works detecting c compiler abi info detecting c compiler abi info done detecting c compile features detecting c compile features done checking for module cmake error at message a required package was not found call stack most recent call first pkgcheckmodulesinternal pkgcheckmodules findpackage configuring incomplete errors occurred see also end
1
When I make changes to a distro config, the "Save Configuration" button stays grayed out, so I'm not able to save the changes.
1
we currently allow and larger
1
gist stack trace modelclass lecture we store in our own collection instead of inheriting include mongoiddocument include mongoidtimestamps name of lectureclass field name type string default description of the class field description type string default who owns this classlecture customers can have thier own classlecture field customer type string default aztecaztec pre test required for class field requirepretest type boolean default false enable pre test onoff field enablepretest type boolean default true enable practice test field enablepracticetest type boolean default false enable practice test field enableessaytest type boolean default false embedsone pretestconfig classname automatictest inverseof testable embedsone posttestconfig classname automatictest inverseof testable use customer or automatic tests embedsone pretestcustom classname manualtest inverseof testable embedsone posttestcustom classname manualtest inverseof testable practice test is always manula embedsone practicetest classname manualtest inverseof testable essay test embedsone essaytest classname manualtest inverseof testable test types field manualpretest type boolean default false field manualposttest type boolean default false enable drilling field enabledrill type boolean default true unit of content hasandbelongstomany units index true licenses hasandbelongstomany license index trueendclass automatictest include mongoiddocument include mongoiddynamic number of questions per problem sets available field numberofquestions type integer default percentage score of the test field passingscore type integer default enable timer field enabletimer type boolean default false total time allow for test in minutes field testtime type integer default points per question field pointsperquestion type integer default field allowessay type boolean default false do we need a calculator field calculator type boolean default false what calculator do we need field calculatortype type string default gedendclass manualtest automatictest problems sets to use field problemsetid type bsonobjectid random support field randomize type boolean embeddedin testable polymorphic trueend
1
problemduring a mongoperf run execution hangs on updatefieldatoffsetattached the logs and gstach output from the mongod config node replset on same host each mongod pinned to cores workload mongoperf pinned to other cores
1
we missed because of this
1
MongoDB version , Windows Server , server instances, sharding ( shards), replica set size . MongoDB is readable but not fully writable. For example, issuing a command as below:
{code}
db.collection.update({active: true}, {$set: {field: value}}, false, true)
{code}
The command will update only one record, and then I receive an error message simply saying "loc is null". I have also observed frequent log entries in mongos.log:
{code}
Wed Apr ... warning: distributed lock pinger '...' detected an exception while pinging, caused by: update not consistent  ns: config.lockpings  query: { _id: ... }  update: { $set: { ping: new ... } }  { err: "loc is null", connectionId: ..., waited: ..., ok: ... }  { updatedExisting: true, n: ..., lastOp: Timestamp ..., connectionId: ..., waited: ..., err: null, ok: ... }
{code}
1
We assumed that a connection that was being destroyed could not also be in use. I triggered this invariant by starting a mongod with SSL required and attempting to connect to it without using SSL:
{code}
mongod --dbpath data/db --sslMode requireSSL --sslPEMKeyFile /etc/ssl/mongodb.pem
{code}
{code}
mongo
{code}
{code}
mongod mongod mongod mongod mongod mongod
{code}
1
traditionally retryable findandmodify calls reconstruct a response to a retry by writing the returned document to the oplog separate from the updatedelete the findandmodify performed offers a second option the document being returned can be written to a separate image collection however for features such as tenant migrations and resharding these images are communicated via the oplog as opposed to selectively copying that collection to accomplish that we reserve two oplog timestamps when recording an image as part of a findandmodify this allows us to add an aggregation stage that can seamlessly insert the image into the oplog and not worry about choosing a timestamp for regular update findandmodifys were correctly only reserving two optimes when we intend to store an image however deletes are done unconditionally ie regular retryable deletes dont record a preimage but do reserve two optimes this has been identified as a perf regression a sample patch that corrects the perf regression noformatdiff git asrcmongodbcatalogcollectionimplcpp bsrcmongodbcatalogcollectionimplcpp index asrcmongodbcatalogcollectionimplcpp bsrcmongodbcatalogcollectionimplcpp void collectionimpldeletedocumentoperationcontext opctx cannot remove from a capped collection const auto oplogslot reserveoplogslotsforretryablefindandmodifyopctx boostoptional oplogslot boostnone if storedeleteddoc collectionon oplogslot reserveoplogslotsforretryablefindandmodifyopctx opobserveroplogdeleteentryargs deleteargs nullptr frommigrate getrecordpreimages oplogslot oplogslot boostnone noformat
1
this ticket is a piece of the work for
0