text_clean: string, lengths 10 to 26.2k
label: int64, values 0 or 1
{code:cpp}
// src/mongo/db/instance.cpp
const NamespaceString nsString(ns);
uassert(..., str::stream() << "invalid ns [" << ns << "]", nsString.isValid());
...
Status status = Status::OK();
if (CursorManager::getGlobalCursorManager()->ownsCursorId(cursorid)) {
    // TODO: implement auth check for global cursors
} else {
    status = txn->getClient()->getAuthorizationSession()->checkAuthForGetMore(nsString, cursorid);
}

// src/mongo/db/query/find.cpp
else {
    // Check for spoofing of the ns such that it does not match the one originally
    // there for the cursor.
    if (globalCursorManager->ownsCursorId(cursorid)) {
        // TODO: implement auth check for global cursors
    } else {
        // auth error: str::equals(ns, cc->ns().c_str())
    }
}
...
isCursorAuthorized = true;
{code}
1
is not affected noformat thread tid name mongod stop reason breakpoint frame frame mongodmongoskipthis at frame mongodmongoanonymous namespacereadtypereader invertedfalse frame mongodmongoanonymous namespacereaddecimalcontinuationreader invertedfalse numdecim frame mongodmongoanonymous reader typebits frame ▒ ordbits typebits at keys frame mongodmongoanonymous namespacecurrthis partskkeyandloc c frame mongodmongoanonymous namespaceseekthis key frame mongodmongoanonymous namespacetraverseindexthis iam frame mongodmongovalidatethis txn levelkvalidatefull results frame mongodmongorunthis txn ▒▒▒ cmdobj frame mongodmongorunthis txn request replybui frame mongodmongoexeccommandtxn command request frame mongodmongoruncommandstxn request replybuilder frame mongodmongoanonymous namespacereceivedrpctxn client frame mongodmongoassembleresponsetxn m dbresponse remo frame mongodmongosessionloopthis session a frame mongodoperatorclosure session at serviceentrypointmongodcpp frame mongodstdfunctionhandlervoidmongosession mongostartsessionmon frame mongodstdfunctionoperatorthis frame mongodmongoanonymous namespacerunfuncptr at frame frame at noformat
1
very few users restrict outbound traffic for their monitoring and backup agents. please update the copy to make it clear that this applies only to agent traffic, and only to customers that restrict outbound traffic. again, restricting outbound traffic is extremely unusual
1
the list of commits that we would like to backport: speed up collscan for oplog; schedule compactions to clean up tombstones; don't let iterations spill to the next prefix; compact oplog every … minutes; fix calculation of maxprefix; speed up RocksIndexBase::getSpaceUsedBytes and RocksEngine::getIdentSize; enable block cache size to be changed during runtime; optimize insertions into capped collections; for each prefix, add a key to rocksdb; add support for displaying thread status; change the default compaction to dynamic leveled; fix background index build concurrency
0
the filter from documentsourcechangestreambuildmatchfilter needs to discard applyops commands with a true prepared field and accept commit commands for relevant namespaces the former can be implemented by modifying gettxnapplyopsfilter
0
for example: {code}_id: ..., c: [{name: "case", age: ...}], createTime: ..., isMale: false, group: ...{code} I want to update this document. My query object is {code}{"c.age": ...}{code}, and the update is {code}{"c.name": ..., "c.age": ..., createTime: <date>, isMale: false, group: ...}{code}. I didn't update successfully, but when I use {code}{"c.name": "case", "c.age": ...}{code} it works. Why can't I use only the age condition?
1
in order to test the use checkpoint cursors for background validation the unit test infrastructure must support using the wt storage engine which is the only storage engine currently supporting checkpoints
0
testformat took a segfault on the ppc with this stack noformat noformat
0
our current packaging system has a bindip parameter that is set to localhost this effectively means that anyone who upgrades will have their sharding and replication broken because they will only be listening on localhost this should be commented out in the configuration when upgradingthis was changed in and should be reflected in the release notes please follow up with the packaging team on the future of this but for now we should warn and guide people appropriately in both the upgrade page and release notes
1
hi team i am using mongodb in net development where i have string in a collection but it is throwing an error cannot deserialize a string from bsontype undefined please provide me the resolution asap
1
write a new variables file called etc/scons/developer_versioning.vars that sets MONGO_VERSION using the latest git tag and MONGO_GIT_HASH with some string constant. it may look something like this:
{noformat}
import os
import subprocess

def shortDescribe():
    # Derive the version string from the most recent git tag.
    with open(os.devnull, "r+") as devnull:
        proc = subprocess.Popen("git describe", stdout=subprocess.PIPE,
                                stderr=devnull, stdin=devnull, shell=True)
        return proc.communicate()[0].strip()

MONGO_GIT_HASH = "unknown"
MONGO_VERSION = shortDescribe()
{noformat}
0
one of the faq entries is: {noformat}Can I take snapshots more or less often than every six hours? Yes, see Activate MMS Backup for a Replica Set and Activate MMS Backup for a Sharded Cluster for details.{noformat} afaik this is not actually true, and users cannot take snapshots more often than every six hours
1
this page says replica sets provide strict consistency; other parts of the site use the term eventual consistency. please correct/clarify
1
this was encountered while debugging a memory leak in and can be reproduced in the shell or server w dbevalcodevar mongo new mongofor var i i var res mongofindtesta bnew
0
original description om cm api docs for enabledisable alert has delete command in example instead of patch referring to this om link code code and this cm link code code the resource section at the top says to specify patch but the example at the bottom has delete the latter causes the alert to be deleted note atlas docs are correct code code description scope of changes files that need work and how much impact to other docs outside of this product mvp work and date resources eg scope docs invision
0
its not possible to connect to a running mongod instance on a mac even the provided test example fails with the following message fail testdocumentationexamples error trace error received unexpected error topology is closed fail fail commandlinearguments
1
there is a dead link on this page the line below contains the dead linksee for a full example
1
this paragraph needs special emphasis likely in red before you install ops manager you must deploy the supporting databases first these are called backing databases ops manager cannot deploy or manage these databases these databases include the ops manager application database and the backup database the bolded section is important and many of our customers do this without realizing its a bad thing
1
code filter options query new mongodbdriverqueryfilter options vardumpoptions code the sort option is changed to a stdclass instance code options filter options query new mongodbdriverqueryfilter options vardumpoptions code the above results in noformat script freeing bytes total memory leaks detected noformat dumping the sort option from after a second query construction results in its value being displayed as which indicates some corruption lastly its possible to invoke a segfault by executing one of these queries see segfaultphp
1
backup agent version released logging improvements
1
we force config stepdowns in this test already and a background thread stepping nodes down as well can change the expected behavior
0
as of libmongoc and mongocwriteresultts old writeconcernerror bson document field has been replaced with a writeconcernerrors bson array handling for this needs to be updated for now writeresultgetwriteconcernerror can remain asis and simply return the first write concern error which is compliant with the bulk api spec as discussed in
1
new to mongo and going through online docs ver currentappears cursorinfo has been deprecated since the documents were published replaced by serverstatus perhaps synch up the instructions with the code
1
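The row above refers to cursorInfo being superseded by serverStatus. A minimal shell sketch of the replacement, assuming a server where cursor statistics are reported under metrics.cursor (field layout may differ slightly between versions):
{code:javascript}
// cursorInfo has been deprecated/removed; cursor statistics now live in serverStatus.
var status = db.serverStatus();
printjson(status.metrics.cursor);
// e.g. { "timedOut": ..., "open": { "noTimeout": ..., "pinned": ..., "total": ... } }
{code}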
for some reason the new Manager() call will cause a memory leak if empty $options and $driverOptions are passed: $this->manager = new Manager($uri, $options, $driverOptions); breaks, while $this->manager = new Manager($uri); works
1
javalangnoclassdeffounderror commongodbutilthreadutil at at at at at at at at at
1
the mongo shell prompt should tell one more about context when connected. for example, if connected to a mongod that is a member of a replica set, the prompt should tell you the server's state, something like "primary" or "secondary" or "arbiter". if connecting from the shell to an entire set, it will be important that the user know which node is currently active, so this could be something where myset is the set name, host the name to which we are connected, and primary the state. likewise with sharding, similar things: if connected to a mongos we should somehow indicate that (maybe "s"); if connected to a mongod in a particular shard, indicate that; if connected to a config server, indicate that (not for now, but useful later). also we can make this configurable: we can have a prompt() js function that returns a string, with the above being a default
0
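A rough sketch of what such a default prompt() could look like, assuming a connection where replSetGetStatus is available; this is illustrative only, not the proposed implementation:
{code:javascript}
// Example custom prompt for ~/.mongorc.js: show "<setName>:<state>> " when connected
// to a replica set member, and fall back to the host name otherwise.
prompt = function () {
    try {
        var status = db.adminCommand({replSetGetStatus: 1});
        var me = status.members.filter(function (m) { return m.self; })[0];
        return status.set + ":" + me.stateStr.toLowerCase() + "> ";
    } catch (e) {
        // Not a replica set member (or no permission): plain host prompt.
        return db.getMongo().host + "> ";
    }
};
{code}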
to use a transaction, the driver requires creating an inline function for the SessionContext. while this is fine for many cases, it doesn't work for all use cases:
{code:go}
session, err := client.StartSession()
if err != nil {
    t.Fatal(err)
}
if err = session.StartTransaction(); err != nil {
    t.Fatal(err)
}
if err = mongo.WithSession(ctx, session, func(sc mongo.SessionContext) error {
    result, err := collection.UpdateOne(sc, bson.M{"_id": id}, update)
    if err != nil {
        t.Fatal(err)
    }
    if result.MatchedCount != ... || result.ModifiedCount != ... { // expected counts elided in the original
        t.Fatal("replace failed, expected ... but got ", result.MatchedCount)
    }
    if err = session.CommitTransaction(sc); err != nil {
        t.Fatal(err)
    }
    return nil
}); err != nil {
    t.Fatal(err)
}
session.EndSession(ctx)
{code}
in our use case we try to run all unit tests within their own transaction so that they can be rolled back and run independently of other tests running at the same time, as go tests run in parallel. for this we run the unit tests using the testify suite, where the transaction is started in SetupTest and aborted in TeardownTest, to avoid many duplicate lines of transaction-related code in the actual tests
0
in some circumstances making a second change to a configuration while it is trying to deploy the first change is not advisable therefore we tried to make this something that users would not do by accidenthowever it seems that users have trouble figuring out to make a change in the case where they need to correct their deployment configuration to make a change the user needs click the edit configuration make the change
1
it's safe to run the $setWindowFields stage on each shard independently as long as each partition lives on a single shard. that means the shard key must be at least as coarse as the partitionBy expression: the shard key must be constant within each partition. for example, given this query {code}{$setWindowFields: {partitionBy: {state: "$state", city: "$city"}, ...}}{code} this is safe to push down if the shard key is any of these: {state, city}, {city, state}, {state}, {city}. in all these cases the documents that the partitionBy groups together live on the same shard. some examples of shard keys that wouldn't allow this: {state, city, _id}, {_id}, {country, state, city}. the analysis we'd need seems similar to pushing down $match past $setWindowFields. one problem is that we convert $setWindowFields with a partitionBy to a $sort + $_internalSetWindowFields before optimization; pushing down the $sort wouldn't be valid. maybe we should somehow delay introducing the $sort until after we've done some optimization. maybe we can do something like where we push a $lookup down through the merge-sort part of a sharded sort
0
hi team we are using mongo db version and c driver at starting applications runs fine with no issues after hr we encounter below issue socketexception handling request closing client connection socket exception serve after that issues noting happens application crashes
1
some tests, like jstests/change_streams/lookup_pit_pre_and_post_image.js, are tagged as multiversion_incompatible. make them multiversion compatible and add requires_fcv tags
0
resumable index builds currently wait indefinitely for the majority commit point before starting the collection scan the underlying replicationcoordinator function supports an optional deadline to limit the waiting period and it would be useful to extend resumable index builds with a configurable deadline for the majority wait with a reasonably short default duration this also serves as a workaround for
0
if all multiple config databases become unavailable the cluster can become inoperable should probably be if all config databases become unavailable the cluster can become inoperable
1
we had a replica set crashed all nodes at the same timehere are some log fragments showing the segfaultnode sep query datamodeldatamodel query id locksmicros sep replset member is upwed sep replset member is now in state secondarywed sep info dfmfindall extent was empty skipping ahead nslocalreplsetminvalidwed sep replset member is upwed sep replset member is now in state secondarywed sep replset warning caught unexpected exception in electselfwed sep invalid access at address from thread wed sep got signal segmentation faultwed sep backtrace codenode sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep recv message len is too sep end connection connections now openwed sep connection accepted from connections now openwed sep recv message len is too sep end connection connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep assertion bsonobj size first element ugg wed sep recv message len is too sep end connection connections now open usrbinmongod usrbinmongod wed sep assertionexception handling request closing client connection invalid bsonobj size first element ugg sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep assertion bsonobj size first element wed sep recv message len is too sep end connection connections now openwed sep end connection connections now openwed sep end connection connections now open usrbinmongod usrbinmongod wed sep assertionexception handling request closing client connection invalid bsonobj size first element sep recv message len is too sep end connection connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep invalid access at address from thread sep got signal segmentation faultwed sep backtrace usrbinmongod codenode sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep end connection connections now openwed sep end connection connections now openwed sep end connection connections now openwed sep connection accepted from connections now openwed sep invalid access at address from thread sep got signal segmentation faultwed sep backtrace codenode sep connection accepted from connections now openwed sep recv message len is too sep end connection connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep recv message len is too sep end connection connections now openwed sep assertion 
error next object larger than space left in messagewed sep recv message len is too wed sep end connection connections now openwed sep connection accepted from connections now openwed sep connection accepted from connections now openwed sep recv message len is too sep end connection connections now openwed sep recv message len is too sep end connection connections now openwed sep backtrace usrbinmongod code
1
adding locations on the boundaries of a geoindexed collection causes no problems on max values but min values throw ugly assertation errors should probably be either allowed or checked with a more useful exception thrownfri jan testborders assertion failure in optmongobinmongod fri jan testborders assertion failure in optmongobinmongod fri jan insert testborders exception assertion
0
give a client connected with the options coderuby connect sharded read mode primarypreferred code when sending commands like clientcommandrolesinfo it also sends a queryoptions option along which is not supported when issuing a command like that the following mongooperationfailure is sent back code queryoptions is not a valid argument to rolesinfo code im not entirely sure this has anything to do with sharded setups actually it might be more general than that
0
i try to implement c code that get about document in a collection has field types include int double string it take more than seconds to count duration time i seperate code into sections the first invokes dbclientconnectionquery to get all from collection and return vector of bsonobjs it take seconds the last iterate that vector and parses each of bsonobj to a c object using bsonobjgetfieldint or double or string and more than seconds please consider and give me some advices to improve it thanks you
0
having a collection with elements with a tags array, where each element in the array is {k: ..., v: ...}, and with an index {"tags.k": 1, "tags.v": 1}: a query of the form db.coll.find({tags: {$elemMatch: {k: "somekey", v: "somevalue"}}}) correctly makes use of the index. however, a query like db.coll.find({tags: {$all: [...]}}) does not make use of the index and produces a full scan. note: the use of $all is to allow search matching on more than one tag
0
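A hypothetical shell repro of the behavior described above (collection name, keys, and values are invented); whether the $all form still produces a collection scan depends on the server version:
{code:javascript}
db.coll.drop();
db.coll.insert({tags: [{k: "color", v: "red"}, {k: "size", v: "small"}]});
db.coll.createIndex({"tags.k": 1, "tags.v": 1});

// Uses the compound index:
db.coll.find({tags: {$elemMatch: {k: "color", v: "red"}}}).explain();

// Reported to ignore the index and fall back to a full scan:
db.coll.find({tags: {$all: [{$elemMatch: {k: "color", v: "red"}},
                            {$elemMatch: {k: "size", v: "small"}}]}}).explain();
{code}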
currently initial sync ignores any inprogress phase index builds only if it sees commitindexbuild oplog entry during the initial sync oplog replay or during secondary oplog application the index build gets started and committed now the problem is that for two phase index builds with commit quorum on the primary can commit the index build only if my number of ready to commit votes satisfy commit quorum value lets assume my commit quorum is all and its a node replica set to be noted in our todays world initial syncing nodes can vote as a result so consider the below scenario node a is primary node b starts performing initial sync primarynode a starts the index build for collection foobar and votes for itself node b ignores buildinginprogress during initial sync node b finishes initial sync since index build has received only one vote all so it cant commit the index build and keeps waiting for the votes from secondarynode b forevercolor
0
you can see the failure history here but this query which i would expect to return a bfg ticket every day or so instead only has one ticket
1
we need to make the page much more simple and focused on concrete aspects of backup solutions rather than abstract concepts relevant to backups while the current conceptual information is largely correct its difficult for readers to attach that information to their applications and deployments and a revision of this document should make this more clearthe structure of the page should be backup methods mms backup file system snapshots mongodump oplogmongorestore oplogreplay mongodumpmongorestore each section should discuss requirements usecases considerations benefits restrictions when used with standalonesreplica setssharded clusters as appropriate and link to the appropriate tutorial or tutorials for the optional additional information as needed its possible that we wont need this section but it might be required to cover some of the general details about backing up a sharded cluster
1
throughout the codebase we use magic numbers, sometimes without defining them (uuid length, markings/algorithms for subtype). let's replace them with meaningful macros or variables to improve readability
0
description paneltitledownstream change added new functionality dateadd and datesubtract query expressions panel description of linked ticket implementing these expressions in sbe will be handled under a separate ticket scope of changes impact to other docs mvp work and date resources scope or design docs invision etc
0
mongoc never reports writeconcernerror from the write commands
1
for pagerduty only alerts that require acknowledgement can use this delivery method if the alert is informational only such as primary changing in a replica set pagerduty will not be selectable on the alerts configuration page
1
use of the system.indexes collection is deprecated and generally not supported by storage engines. as such, invocations of raw getIndexes should be removed from the tests
1
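An illustration of the distinction the row above is about, using example namespaces only (the exact test helpers being removed are not shown here): reading system.indexes directly is the deprecated path, while the listIndexes-backed helpers are the supported one.
{code:javascript}
// Deprecated: querying the raw system.indexes collection, which modern storage engines do not support.
db.getSiblingDB("test").system.indexes.find({ns: "test.coll"});

// Supported: the listIndexes command, or the shell helper built on top of it.
db.getSiblingDB("test").runCommand({listIndexes: "coll"});
db.getSiblingDB("test").coll.getIndexes();
{code}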
hi i have a weird bug with a mongodb one of my collections seems corrupted i am hosted on atlas and they referred me here when i try to do operations on this one collection it just hangs forever no code change on my end i even tried older code to see if it could change anything but its still the same i use the mongodb driver for nodejs the system works in production for every collections with no issue but when we query our session collection find findoneandupdate etc it just hangs until the timeout is reached and there are no errors i tried to drop the collection but the bug comes right back also i tried to backup the db on another cluster and the same thing happens do you have any idea what could be the issue thank you
0
itll be nice to store the creation timestamp of database collection and index in metadata and make it easy to retrieve via commands like dbstats dbcollectionstats indexstats
0
i have the following following collectioncode dbystestaggregate result id unitstatus espws id unitstatus esrun id ok try to run an aggregate to match a particular unitstatus and project every record basically with just the id and unitstatus not other fieldsnote im using aggregate because this is part of a more complicated aggregate i know this can be done with a findi expected to have only document as a result but i get everything am i doing something wrong or is this a bugcode dbystestaggregate match unitstatus exists true nin project id result id unitstatus espws id unitstatus esrun id ok
0
for release on automation agent changelog version fix behavior of rolling upgrades when one or more secondaries has significant replication lag ensure that a secondary has always fully caught up before upgrading the primary fix creation of users imported from one dpeloyment item and then applied to a new sharded cluster add small sleep time during autoupgrade process monitoring agent changelog version update documentation and settings urls to cloudmongodbcom backup agent changelog version update documentation and settings urls to cloudmongodbcom support for backing up only selected namespaces not exposed yet in user interface
1
under the behaviour section the result for example range is this is not consistent with the documentation which states range generates the sequence from the specified starting number by successively incrementing the starting number by the specified step value up to but not including the end point the correct value should be
1
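The concrete numbers in the row above were lost, so the following is an independent illustration of the documented semantics: $range excludes the end point, so the sequence stops before reaching it.
{code:javascript}
// Any collection with at least one document will do; the values here are invented.
db.rangedemo.insert({});
db.rangedemo.aggregate([{$project: {_id: 0, seq: {$range: [0, 10, 2]}}}]);
// => { "seq" : [ 0, 2, 4, 6, 8 ] }   (10 itself is not included)
{code}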
hiwe have following setupmongos server shards are journal enabled become nonresponsive logs show error too many opens filesat same time mongos also become nonresponsive the resident memory utilization of was are the logs of that timewed aug listener accept returns too many open fileswed aug error out of file descriptors waiting one second before trying to accept more connectionswed aug listener accept returns too many open fileswed aug error out of file descriptors waiting one second before trying to accept more connectionswed aug listener accept returns too many open fileswed aug error out of file descriptors waiting one second before trying to accept more connectionswed aug listener accept returns too many open fileswed aug accessing for the first timewed aug couldnt open too many open fileswed aug user assertion map file memorywed aug accessing for the first timewed aug command admincmd command listdatabases exception cant map file memory aug socket recv conn closed aug socketexception remote error socket exception server wed aug couldnt open too many open fileswed aug user assertion map file memorywed aug command admincmd command listdatabases exception cant map file memory aug socket recv conn closed let us know of any solutionthanksjitendra verma
1
the runner has hung during several recent deploys and we had to kill it to continue the deploy we should figure out why it is hanging
0
noted that users habitually type description or d when they use the evergreen patch command although the message and m commands better match the effect of this flag—setting the git commit message—it would be more natural if users could use the argument they are used to from the evergreen patch command
0
i have a two shard setup on aws i went ahead and added a new shard however the balancer is not able to move chunks to the new shards i see a bunch of errors in the log migration already in progressthu nov ns chatchatsdevelopment going to move id chatchatsdevelopmenttargetpidminkey lastmod timestamp lastmodepoch ns chatchatsdevelopment min targetpid minkey max targetpid shard from to tag thu nov connection accepted from connections now openthu nov end connection connections now openthu nov ns chatchatsstaging going to move id chatchatsstagingtargetpidminkey lastmod timestamp lastmodepoch ns chatchatsstaging min targetpid minkey max targetpid shard from to tag thu nov moving chunk ns chatchatsproduction moving nschatchatsproductionshard targetpid max targetpid nov connection accepted from connections now openthu nov end connection connections now openthu nov connection accepted from connections now openthu nov end connection connections now openthu nov connection accepted from connections now openthu nov end connection connections now openthu nov movechunk result ok errmsg migration already in progress thu nov balancer move failed ok errmsg migration already in progress from to chunk min targetpid max targetpid thu nov moving chunk ns chatchatsdevelopment moving nschatchatsdevelopmentshard targetpid minkey max targetpid nov connection accepted from connections now openthu nov end connection connections now openthu nov connection accepted from connections now openthu nov end connection connections now openthu nov movechunk result ok errmsg migration already in progress thu nov balancer move failed ok errmsg migration already in progress from to chunk min targetpid minkey max targetpid thu nov moving chunk ns chatchatsstaging moving nschatchatsstagingshard targetpid minkey max targetpid nov movechunk result ok errmsg migration already in progress thu nov balancer move failed ok errmsg migration already in progress from to chunk min targetpid minkey max targetpid here is the output of locks collectionmongos dblocksfind id configupgrade process state ts when who why upgrading config database to new format id balancer process state ts when who why doing balance round id chatchatsproduction process state ts when who why migrate targetpid id chatchatsdevelopment process state ts when who why split targetpid id chatchatsstaging process state ts when who why split targetpid
1
the mongo driver does not seem to free used buffers while/after uploading a file. i tried with files and it seems to keep double the file size in memory. i use the following code for my tests:
{code:javascript}
'use strict';
const mongodb = require('mongodb');
const GridFSBucket = mongodb.GridFSBucket;
const MongoClient = mongodb.MongoClient;
const fs = require('fs');
let db;
// ... (connect and assign db) ...
const bucket = new GridFSBucket(db);
const readStream = fs.createReadStream('bigfile.dat');
const uploadStream = bucket.openUploadStream('bigfile.dat');
uploadStream.once('finish', function () {
    console.log(uploadStream.id);
});
readStream.pipe(uploadStream);
{code}
i do not close db, to keep the process open. is there something wrong with the test? regards
1
i believe that mongogridfsdownload overloads should allow ignoring sum altogether not hashing at all for performance reasons this of course should be an optout flag with default still validating sums a little backstory while there is a gridfs module for nginx we would prefer using net backend to nginx wcaching for now since nginxgridfs is a bit limited in functionality we require and is based off the mongodb c driver that is very limited in functionality also so we cant easily extend the module for our needs it turned out that calculcations in net takes most of the cpu time for our gridfs server and we would not like to hack around the api or access collections directly to skip computations to emphasize that this is not recommended maybe the there should be another method instead of a flag for example mongogridfsdownloadunsafe
0
mapreduce when run on db with version does not produce any result both with inline and output collection i tried with c driver on version the mapreduce does not work on version everything works fine heres a simple snippet to use to reproduce the bug codec example of input documents id timestamp value date public class program private const string mapjs function mapf const key thisdategetfullyear const valueperyear total emitkey valueperyear private const string reducejs function reducefyear values let sum valuesforeachv sum vtotal return total numberintsum public static void main string mongoconnectionstring myconnectionstring mongourl mongourl mongourlcreatemongoconnectionstring mongoclient client new mongoclientmongoconnectionstring imongodatabase db clientgetdatabasenydatabasename imongocollection collection dbgetcollectiondocinput bsonjavascript map new bsonjavascriptmapjs bsonjavascript reduce new bsonjavascriptreducejs filterdefinitionbuilder filterbuilder new filterdefinitionbuilder filterdefinition filter filterbuilderempty mapreduceoptions options new mapreduceoptions filter filter maxtime outputoptions mapreduceoutputoptionsreduceresult nonatomic true verbose true try collectionmapreducemap reduce optionstolist catch exception ex consolewritelineexception occurred exmessage code
1
encountered an error when trying to use changestream when running error mongoservererror cursor session id is not the same as the operation contexts session id none when running over mongodb nodejs driver this issue disapear my environment nodejs mongodb community server mongodb driver nodejs replset nodes mongodconf codejava storage dbpath varlibmongodb journal enabled true systemlog destination file logappend true path varlogmongodbmongodlog net port bindip processmanagement timezoneinfo usrsharezoneinfo security authorization enabled keyfile varlibmongodbpkikeyfile operationprofiling mode slowop slowopthresholdms replication replsetname code my code codejava const mongoclient requiremongodb user encodeuricomponentprocessenvdbuser pass encodeuricomponentprocessenvdbpass host processenvdbhost port processenvdbport uri options usenewurlparser true useunifiedtopology true client new mongoclienturi options clientconnect const db clientdbflexograv collection dbcollectionjobs pipeline changestream collectionwatchpipeline changestreamonchange change consolelog change change changestreamonerror error consolelog error error code
1
this variable is possibly used without initialization detected by thread conditional jump or move depends on uninitialised at mongopostprocesscollectionmongocurop mongoprogressmeterholder by mongorunstdstring const mongobsonobj int stdstring mongobsonobjbuilder bool by mongoexeccommandmongocommand stdstring const mongobsonobj int mongobsonobjbuilder bool by mongoexeccommandmongocommand mongoclient int char const mongobsonobj mongobsonobjbuilder bool by mongoruncommandschar const mongobsonobj mongobufbuilder mongobsonobjbuilder bool int by mongoruncommandschar const mongobsonobj mongocurop mongobufbuilder mongobsonobjbuilder bool int by mongorunquerymongomessage mongoquerymessage mongocurop mongomessage by mongoreceivedquerymongoclient mongodbresponse mongomessage by mongoassembleresponsemongomessage mongodbresponse mongohostandport const by mongoprocessmongomessage mongoabstractmessagingport mongolasterror by mongothreadrunmongomessagingport by startthread in code
0
html drivers ticket description script target if you can read this text the script has failed get functiondata var description datafinddescriptionval langscripttargethtmldescription html
0
version builds fine see qa monitoring trying to update to segfault during test suite code results status pass testfile testsuiteversioncmp seed start end elapsed status pass testfile arraybasic seed start end elapsed status pass testfile asyncismaster seed start end elapsed status pass testfile asyncismasterssl seed start end elapsed status pass testfile bufferbasic seed start end elapsed status skip testfile clientauthenticate status skip testfile clientauthenticatefailure status skip testfile clientauthenticatetimeout error calling ismaster no suitable servers found serverselectiontimeoutms timed out uri binsh line aborted core dumped testprog nofork f testresultsjson make error code full buildlog
1
places of note everywhere where we check fcv before verifying if a stale config exception has a shard id places where we check fcv before throwing a shardinvalidatedfortargeting error noted files not exhaustive
0
results are inconsistentcodejs with insert the empty field is created dbfooinsert not allowedwriteresult ninserted dbfoofind id not allowed dbfoodroptrue replacestyle upsert allows field names dbfoofind id dbfoodroptrue dbfooupdate not allowed upserttruewriteresult nmatched nupserted nmodified id dbfoofind id not allowed other upserts dont allow empty fields dbfoodroptrue dbfooupdate not allowed upserttruewriteresult nmatched nupserted nmodified writeerror code errmsg an empty update path is not valid dbfoodropfalse dbfooupdate upserttruewriteresult nmatched nupserted nmodified writeerror code errmsg an empty update path is not valid code
0
hi all i have installed mongo db on amazon server i am trying to connect the db from my local java standalone program but it throwing exception i know there is a firewall issue on my server i am new this amazon and mongo db could anybody help me to resolve this issuehere is error i am getting when i run the program from my local machinenov am commongodbdbtcpconnector initdirectconnectionwarning exception executing ismaster command on couldnt connect to bcjavanetsockettimeoutexception connect timed out at at at at at at at at at at at at at at is my java programpublic static void mainstring args throws unknownhostexception mongo m new try mgetdbadmincommandping catch mongoexceptionnetwork e you should get this exception if the server is unavailable eprintstacktrace note i can able to connect mongo db from my terminal using ssh i keypair userservernamecould you please tell where to change the firewall setting in amazon serveris there anything has to do in my local machinethanksrengith manickam
1
autocompletion is currently not working with the browser-repl. i think it's failing to create the autocompleter with the server version; it's probably trying to do something that only the cli-repl needs or can do
1
typo: "use this tutorial to install mongodb on a windows systems" has a redundant "s"
0
i checked online but i didnt find a way of adding the ip to every log is there a way or a config to do it also i want to set ttl but maintain a backup of the data before deletionas well do you have a recommendation if we run another delayed member of a replica set with maximum compression will that backup replica have the backup data or will the ttl delete the data from that backup example main replica sets have data with year ttl a backup replica will be running with a one year delay will the backup member have
0
if an object is returned in the shell from a query (or any other way from the server) and is then modified, the result will be that the _id field will be ignored when being sent to a mongodb server in the bson encoding process. this is a bug in the javascript shell/client, but does not affect the server other than the client not sending the _id field. it is possible to cause this behavior with javascript on the server using eval (db.eval) or mapreduce in the reduce phase, if the first document in the array is modified, which is not a normal usage pattern. orig description: there's a regression in the shell where if i write an object with my own _id, the shell ignores it and creates its own ObjectId. i found this out during a data migration today when my previously working js function destroyed an entire table's worth of data
1
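A hypothetical repro of the regression described above (collection and fields invented); on an affected shell build, save() would insert a new document with a generated ObjectId instead of updating in place, because _id was dropped during BSON encoding.
{code:javascript}
db.things.insert({_id: "fixed-id", counter: 0});
var doc = db.things.findOne({_id: "fixed-id"});
doc.counter += 1;
db.things.save(doc);   // expected: update the existing document, keeping _id "fixed-id"
db.things.find();      // on an affected build, a second document with a new ObjectId appears
{code}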
while implementing the failpoint code in i add a temporary fix to test format that would pass the timing stress configuration options into the second call it does to wiredtigeropen this is required as the open config generation is performed in createdatabase and not wtsopen as such any configuration that isnt persisted in the databases baseconfig file would be lost between open calls this included the timing stress configuration the work in this ticket is to create a common function for generating open config call it from all points where test format opens a database connection remove the code added in in place of the new functionality
0
i have downloaded this installable from the link that you had specified in the email below • redhat centos the os installed on my production servers is cat etcredhatreleasered hat enterprise linux server release santiago please confirm if i have downloaded the correct installable and if it will be compatible with the os version
1
there are no examples for mongodb; after the colons i see nothing, although all the examples for sql are there
1
server logcodethu jun connection accepted from connections now openthu jun authenticate db wse authenticate user uuapp nonce xxxxxxxxx key yyyyyyyyyyyyyy thu jun build index id thu jun getmore cursorid not found localoplogrs jun build index done scanned total records secsthu jun localoplogrs assertion failure firstempty srcmongoutilunorderedfastkeytableinternalh usrbinmongod thu jun assertion size is invalid size must be between and first element eoothu jun invalid access at address from thread thu jun got signal segmentation usrbinmongod thu jun backtrace code
1
this unit tests fails with mongodbdrivermongocommandexception command failed exception chunks out of order response errmsg exception chunks out of order code ok the server complains that fri sep should have chunk id filesid n data bindata cnextsafe id filesid n data bindata fri sep should have chunk id filesid n data bindata cnextsafe id filesid n data bindata fri sep should have chunk id filesid n data bindata cnextsafe id filesid n data bindata note i i changed maxconnectionpoolsize in mongodbserversettings to meaning only a single connection can be alive at once this error does not occurid also like to point out that using threadpool in the c connection manager seems like a bad idea seeing as this threadpool is static and shared with everything else in the system allowing any code in the system to essentially lock out the waitcallbacks
1
paneltitleissue status as of october summaryduring a chunk migration if one of the documents in the chunk has a size in the range of and bytes inclusive then some documents in that chunk may be lost during the migration processuser impactdocuments which are not migrated from the chunk are lost and need to be reinserted into the collectionmongodb maintains a backup of every document involved in a chunk migration in a movechunk directory it is possible to examine this directory programmatically to find documents migrated within the document size in questionmongodb has this option off by defaultsolutionmongod needs to ensure it always sends at least one doc until the batches are done for that chunkworkaroundsif there are very large documents in your cluster you should disable the balancer until upgrading see if document loss is suspected locate the movechunk directory on the master replica of the donor shard at the time of the migration the lost documents can be reinserted from that backup or your own regular backupspatchesmongodb and will address this problem downloads for the release candidates will be available at within hourspanel
1
i want to make an index for million of rows but everytime i try it give me some error row id userid inreply ts lang es fonum frnum crts geolng geolat text rt realmadrid hoy se celebra el sorteo de la fase de grupos de la champions league realmadrid keywords realmadrid celebra sorteo fase grupos champions league realmadrid index i use debian squeeze and mongodb from apt rootxxxx aptcache show package version architecture uname a linux xxxx smp fri apr utc gnulinux free m total used free shared buffers cached mem bufferscache swap mongodblog mon aug finishing map mon aug finishing map mon aug external sort used files in secs mon aug couldnt open varlibmongodbtmpesort too many open files mon aug assertion failed usrbin lsei usrbinmongod usrbin usrbin usrbin addre mon aug terminate called printing stack usrbin lsei usrbinmongod usrbin usrbin usrbin addre mon aug got signal aborted mon aug backtrace usrbin lsei usrbinmongod usrbin usrbin usrbin addre mon aug dbexit mon aug shutdown going to close listening sockets mon aug closing listening socket
1
show an example too so it becomes easier to run the query
0
as followon work from we have recently enabled tcmalloc in our evergreen testing this required a local tcmalloc library to be built per machine on a local development machine we would need to export ldlibrarypath to the path of the local tcmalloc library once before compiling wiredtiger binaries however evergreen works a little differently evergreen copies over wiredtiger binaries into different machines and starts to perform testing on them the problem appears because we export the ldlibrarypath to the local tcmalloc library in a different machine we are testing on because the path of the tcmalloc library installed is now different on another machine cant find the location of the library therefore this forced us to update each variants ldlibrarypath to the location of the built tcmalloc when running each test this ticket aims to remove the repetitive setting of ldlibrarypath environment variable through manually patching the wiredtiger libraries elf files post compilation the idea is to patch the tcmalloc path within the wiredtiger libraries into a relative path both patchelf and installnametool change darwin machines commands can be used for this purpose note patchelf is not installed in the evergreen machines and will need to consult build team to install the command
0
it should have metrics for the following coordinator donor recipient number of resharding operations that succeed coordinator donor recipient number of resharding operations that fail with an unrecoverable error coordinator donor recipient number of resharding operations that are canceled by the user donor number of write commands rejected during critical section donor number of write commands were successful after being queued recipient number of documents cloned recipient bytes cloned recipient number of oplog entries applied
0
since the topology package will be migrated first it needs to function with both the driver and driverlegacy packages to enable this we need a secondary connection type that implements the legacy functionality this type is called legacyconnection and it handles the glue code necessary for driverlegacy to continue functioning the type should be implemented as outlined above the connectionlegacy method of server returns this type although the actual return type remains a connectionconnection
0
there are multiple warning thrown for extstoragesourceslocalstorelocalstorec the message is same the system resource will not be reclaimed and reused reducing the future availability of the resource all warnings are originating from resources allocated in localstore and can be fixed in a single pr
0
the transportlayerlegacyendallsessions function takes a lock sessionsmutex this lock is also taken by the transportlayerlegacydestroy method which is called indirectly by the transportlayerlegacylegacysession destructor within endallsessions the sessions list of weak pointers is onebyone promoted by taking a shared pointer to it then processed then the shared pointer is discarded this leads to a pair of difficult to reproduce race conditions on endallsessions the failing case in where the shared pointer is created making the refcount of its object at least other threads dispose of their shared pointers to this object leaving only the shared pointer which was promoted from the weak pointer behind that shared pointer will go out of scope at the end of the loop iteration processing it thus invoking the destructor that destructor will indirectly call transportlayerlegacydestroy which will attempt to take the lock recursively taking a lock in cs mutex class is undefined behavior typical implementations will either deadlock or throw stdsystemerror as encouraged by the standard but this is nonnormative behavior thus the attempt to take the lock in the destroy function will throw an exception which is the precise observed behavior in an entry in the weak pointer list has expired due to the last true pointers to it being destroyed promoting the weak pointer will fail giving a nullptr value for the sharedptr in this case the code skips over empty promoted pointers thus having no failing actions
1
in certain heavily overloaded environments it is possible to have socket communications undergo timeouts while there are sockettimeout options available in the python driver setting these very high has downsides for socket availability and recycling etc to avoid these situations normal python socket communications allow for an option of setting a socket option of socketkeepalive which results in the sending of keepalive packets that reset the timeout timers on the server side this is a common practice in heavily loaded applications that transmit large amounts of data and where the cpu of the client may be overloaded for various reasonssetting this option is relatively trivial enabling this option involves passing the option through from mongoclient instantiation base behavior can be unaffected since it is an option that only is active when explicitly setfiles affected should be mongoclientpy pass option thru poolpy actually set socket option commonpy validators
0
we need to update before the ip address change to make sure users are aware and can proactively make changes on their systems
1
on the redhat install page we dont list how to add the repo to your system the equivalent of this section from the docs but what we used to have for was correct vs the docs which are specific to but also mention correct information for
1
im using a time field to keep track of our accounts periods of consumption monthly storagetransfer quotas after upgrading mongoid from sha to i noticed that our specs started randomly failingboiling it down a bit it looks like the time value that i set the field to is not consistent with what actually ends up in the databasereproduce withcoderuby class account include mongoiddocument field periodstartedat type time hasmany consumptionperiods dependent destroy validate false def currentconsumption consumptionperiodsfindorcreatebystartedat periodstartedat end end class consumptionperiod include mongoiddocument belongsto account field startedat type time end accountdestroyall do i print account accountcreateperiodstartedat timenowutc createdconsumption accountcurrentconsumption accountreload changes the value of periodstartedat sometimes causing the consumption reference to break unless createdconsumption accountcurrentconsumption puts error expected createdconsumption but was accountcurrentconsumption end end codeive noticed other issues on github relating to time being slightly different in rubyland vs on the database but as this has worked for us for years using mongoid i expect it can be fixed
1
on and potentially other pages it assumes that the url and filename for mongodb is this has changed to the change to remove ssl will continue moving forward per this affects not only the curl command on this page but also the cp command a few lines down
1
instaed of accessing configured pid file at is trying accessvarrunmongomongodpidcodecat etcmongodconf mongoconfwhere to fork and run in backgroundfork trueport location of pidfilepidfilepath start service hangs for minutes and failscode service mongod startstarting mongod via systemctljob failed see system journal and systemctl status for details codemongo is running after failurecode ps ef grep mongomongod usrbinmongod f etcmongodconfroot grep colorauto mongocodelog infocodeapr localhost mongod starting mongod tue apr localhost mongod tue apr warning servers dont have journaling enabled by default please use journal if you want durabilityapr localhost mongod tue apr localhost mongod about to fork child process waiting until server is ready for connectionsapr localhost mongod forked process localhost mongod all output going to localhost mongod child process started successfully parent exitingapr localhost mongod apr localhost systemd pid file varrunmongomongodpid not readable yet after startcode
1
just tried to run noformat aggregate bigarray pipeline cursor noformat and got error noformat sort exceeded memory limit of bytes but did not opt in to external sorting aborting operation pass allowdiskusetrue to opt in noformat its not clear why aggregation sorting is involved in sample if thats expected it needs to be prominently documented but if it shouldnt be using sort then this is a bug
0
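A sketch of the workaround the error text above suggests, with a placeholder collection name and sample size: when $sample falls back to a random-sort plan, the blocking sort is subject to the in-memory limit, and allowDiskUse lets it spill to disk.
{code:javascript}
db.bigarray.aggregate(
    [{$sample: {size: 1000}}],
    {allowDiskUse: true}
);
{code}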
description the backwards incompatible features section in should include the update operation changes in the example below you can see that setting fcv to impacts the way data is stored when adding new fields to a document via an update statement noformat dbcolldrop true dbadmincommand setfeaturecompatibilityversion ok dbcollinsert id x writeresult ninserted dbcollupdateid set b a writeresult nmatched nupserted nmodified dbcollfind id x b a noformat note when fcv is set to fields are added in the order they were presented noformat dbadmincommand setfeaturecompatibilityversion ok dbcollupdateid set d c writeresult nmatched nupserted nmodified dbcollfind id x b a c d noformat note when fcv is set to fields are added in lexicographic order scope of changes impact to other docs mvp work and date resources scope or design docs invision etc
0
weve seen this problem a lot twitter compass mailing list atlas intercom looks like the problem does not occur with the isolated edition is it possible that on plugin activation we do something with the network eg in compassmetrics that might cause this behavior
1
in … and in master it's no longer possible to insert documents, no matter whether it's json mode or field-by-field mode
1
hellowe were impacted by the bugs resolved in version we upgraded this morning to the new driver and after the production push we get a lot of errors coming from mongodriver which crashes preventing web request to be fulfilled thus having a high rate of http status code errorsour application is hosted on heroku where there were also no issue at the momentwe are using a mongohq masterslave replication set running mongodb which run well there were no failover that could cause the issuei cannot say whether heroku platform or mongohq could have broke the db connection for some reasonon mongohq the current master is being the slave there is also another slave and an arbitrer but they are not used in the connection stringthe connection string any case here are the exceptions we getcodejavacaused by commongodbmongoexceptionnetwork read operation to server failed on database guestful at at at at at at at at source at at at at at groovylangmetamethoddomethodinvokecallunknown source at at source at at at at at at at at at source at at at at at at at at at at source at at comguestfulbackendrestservicerestauranthelperfindonecallunknown source at at comguestfulbackendrestrestaurantavailabilityresourcegetavailabilitiescallcurrentunknown source at at source at at at at at at at at at at at common frames omitted caused by javaioeofexception null at at at at at at at at at common frames omittedcodeandcodejavacommongodbmongoexceptionnetwork operation on server failed at at at at at at at at source at at at at at groovylangmetamethoddomethodinvokecallunknown source at at source at at at at at at at at at source at at at at at at at at at at comguestfulbackendservicedbgcollectionupdatedbgroovy at source at at source at at at source at at at at at at at at at comoveatajinframeworkasyncguavaeventhandlerhandleeventcallunknown source at at javalangrunnableruncallunknown source at at at at caused by javaioeofexception null at at at at at at at common frames omittedcodethere are very frequent compared to before driver version and cause requests to fail
1
use of uninitialized value on the stack len fix is included hope to see this thread conditional jump or move depends on uninitialised at mongoreadresponse by mongocursoropquery by mongocursornext by mongofindone by mongoruncommand by mongosimpleintcommand by mongocheckismaster by mongoclient by hawkmongopersistentconnect by hawkmongopopulatelist by modulessensorreload by startthread in uninitialised value was created by a stack at mongoreadresponse codestatic int mongoreadresponse mongo conn mongoreply reply mongoheader head header from network mongoreplyfields fields header from network mongoreply out native endian unsigned int len int res mongoenvreadsocket conn head sizeof head mongoenvreadsocket conn fields sizeof fields len headlen fixed codestatic int mongoreadresponse mongo conn mongoreply reply mongoheader head header from network mongoreplyfields fields header from network mongoreply out native endian unsigned int int res mongoenvreadsocket conn head sizeof head mongoenvreadsocket conn fields sizeof fields len headlen srcmongoc srcmongocfix mongoheader head header from network mongoreplyfields fields header from network mongoreply out native endian unsigned int len unsigned int int res mongoenvreadsocket conn head sizeof head
1
hi we had indicate previously in that we re having trouble upgrading today we have tried upgrade again from to and left it so it can start up it took about hours to start noformat i network admin web console waiting for connections on port i repl did not find local voted for document at startup nomatchingdocument did not find replica set lastvote document in localreplsetelection i network starting hostname canonicalization worker noformat unfortunately for us looks like mongo decided to disregard all the date in the oplog and cannot sync as its to stale noformat i repl syncing from w repl we are too stale to use as a sync source i repl syncing from i repl could not find member to sync from e repl too stale to catch up entering maintenance mode i repl our last optime term timestamp aug i repl oldest available is term timestamp aug i repl see i repl going into maintenance mode with other maintenance mode tasks in progress noformat to be perfectly clear this node was warmed up and in production without issue before we attempted this upgrade oplog db size was very big as well noformat dbgetreplicationinfo logsizemb usedmb timediff timediffhours tfirst sun aug utc tlast sun aug utc now wed aug utc noformat as you can see mongo decided somehow that oplog have to be cleared in comparison see below same info from other replica member noformat floowprimary dbgetreplicationinfo logsizemb usedmb timediff timediffhours tfirst sun aug utc tlast wed aug utc now wed aug utc noformat i can provide log from that period but there is nothing indicating any unusual behaviour no errors noformat i repl syncing from w repl we are too stale to use as a sync source i repl syncing from w repl we are too stale to use as a sync source i repl could not find member to sync from e repl too stale to catch up entering maintenance mode i repl our last optime term timestamp aug i repl oldest available is term timestamp aug noformat in current state we cannot reliably upgrade our database to as this results in desync
1
your repo url is not available anymore baseurl
0
initial sync doesnt survive sync sources state transitions in or older under this tests settings i think it is always possible for a node to step down while another node is still syncing from it this is because when at least two nodes are available primary could step down if the other node has higher priority and if the third node is still in initial sync then the initial sync will fail a potential fix would be to set a higher numinitialsyncattempts currently set to for this test and i think this could happen with too
0
the pyopenssl transitive dependency got updated last week which broke windows we should pin it to which is the same version used and pinned by external auth tests
0
found through inspection maxstalenessms is used for selecting replica set members but isnt sent to mongos fix that and implement these tests
1
i am getting this issue with latest c driver version frequently while i run even simple query on a large collectionhowever this does not generate in all cases and it works sometimes but fails sometimes which is really frustratingthe mongo server is on amazon ubuntu instance and net app is on another servertried sevres fixes like connectiontimeoutsockettimeout etc but not able to fix thisunable to connect to the server at mongodbdriverinternaldirectmongoserverproxychooseserverinstancereadpreference readpreference in cprojectsmongocsharpdrivermongodbdrivercommunicationproxiesdirectmongoserverproxycsline at mongodbdrivermongoserveracquireconnectionreadpreference readpreference in cprojectsmongocsharpdrivermongodbdrivermongoservercsline at mongodbdrivermongocollectionupdateimongoquery query imongoupdate update mongoupdateoptions options in cprojectsmongocsharpdrivermongodbdrivermongocollectioncsline help asap
1