text_clean: string (lengths 8 to 6.57k)
label: int64 (0 or 1)
uninitctor
0
the page says that we can enable chinese language support by getting an rlp license but when i follow the instructions to get the license the people at replied back quote unfortunately per a management decision we are no longer licensing the rlp c platform to new customers this is an older technology that we have decided as an organization to sunset quote so is there any other way to solve this
1
hello looking at npm version history you rolled back latest from the version but the version now gives me this mongo error mongoerror seed list contains no mongos proxies replicaset connections requires the parameter replicaset to be supplied in the uri or options object mongodbserverportdbreplicasetname works fine and also maybe there is a problem with the rollback of latest on
1
original scope of changes files that need work and how impact to other docs outside of this mvp work and resources eg scope docs invision
1
somewhere around the evergreen agent binary name changed from main to evergreen this will prevent inadvertently killing new agent processes
1
this field has been unused for at least a year
0
there is a recursive call in retryentiretransaction in txnoverridejs if we have to retry the transaction there we pass the old txnnumber so the log message has the incorrect txnnumber this can be confusing when debugging because it seems like the txnnumber is never progressing even though it says the transaction is being retried here is an example from a recent bf
0
when i execute resmokepy run help it wants to show me its options default values but it shows me default instead for example noformat dryrun mode instead of running the tests outputs the tests that would be run if modetests defaults to modedefault noformat
0
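A plausible illustration for the row above, hedged because the actual resmoke.py option definitions are not shown here: argparse interpolates `%(default)s` into help text, so defining options this way renders the real default value in `--help` output rather than a placeholder. The option name and default below are made up for illustration.

```python
import argparse

parser = argparse.ArgumentParser()
# argparse substitutes %(default)s with the option's default when rendering help,
# so the rendered text reads "(defaults to off)" instead of a literal placeholder.
parser.add_argument(
    "--dryRun",
    dest="dry_run",
    default="off",
    help="Instead of running the tests, outputs the tests that would be run "
         "(defaults to %(default)s).",
)
print(parser.format_help())
```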
we are seeing unusual failures in the following test testunpinfornontransactionoperation eg it should be extremely rare for this test to fail as the probability of selecting the same mongos after server selections is extremely low
0
create a graphql directive that handles checking if a user has a specific permission defined in the directive before allowing the request to go through
0
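For the permission-directive row above, here is a minimal resolver-level sketch in Python; the decorator name, context keys, and permission string are assumptions, and a real implementation would hook the check into the GraphQL library's directive/schema-visitor machinery rather than a plain decorator.

```python
from functools import wraps

def has_permission(permission: str):
    """Guard a resolver so it only runs when the requesting user holds `permission`."""
    def decorator(resolver):
        @wraps(resolver)
        def wrapped(obj, info, **kwargs):
            user = info.context.get("user")  # assumed shape of the request context
            if user is None or permission not in getattr(user, "permissions", ()):
                raise PermissionError(f"missing permission: {permission}")
            return resolver(obj, info, **kwargs)
        return wrapped
    return decorator

@has_permission("orders:read")
def resolve_orders(obj, info, **kwargs):
    # hypothetical resolver; the db handle on the context is an assumption
    return info.context["db"].orders.find()
```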
procsysfs collectors for procsysfsfilenr and procsysfsfilemax would be helpful for troubleshooting issues associated with running out of available file handles
0
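The /proc interfaces named in the row above are plain text files, so a collector can read them directly; a minimal sketch, with field names following the kernel's documented layout for file-nr.

```python
def read_file_handle_stats():
    # /proc/sys/fs/file-nr holds three numbers: allocated file handles,
    # allocated-but-unused handles, and the system-wide maximum.
    with open("/proc/sys/fs/file-nr") as f:
        allocated, unused, maximum = (int(x) for x in f.read().split())
    # /proc/sys/fs/file-max holds the maximum on its own.
    with open("/proc/sys/fs/file-max") as f:
        file_max = int(f.read().strip())
    return {"fs.file-nr": (allocated, unused, maximum), "fs.file-max": file_max}

print(read_file_handle_stats())
```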
hi there downloading through curl seems broken curl o its an xml page i downloaded from downloads page so its ok but link on doc is wrong
1
compatibilitytestfornewerreleases failed on compatibility tests host project wiredtiger develop commit diff rework tiered storage reconfigure rework tiered storage reconfigure call tiered storage destroy before shutting down transactions remove isolation modification on internal tiered thread may utc evergreen subscription evergreen event task logs
0
we currently use set e at the end of the activatevenv which can cause the shell script to never fail if the shell never sets its own errexit value afterwards most functions dont
1
this will eliminate the cumbersome need to use a thread just to schedule a shardregistry lookup when incorporating rsm changes ie instead of returning cacheacquire getdata can return cacheacquireasync and no longer need to take an opctx the vectorclock can be obtained via service instead most callers will then just call getopctx on the result from getdata but callers such as the above which are just scheduling a lookup and arent interested in the result dont need to do that
0
several times ive failed to update motors example code when changing the api or ive updated examples incorrectly examples should be doctests so i can test them
0
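The row above asks for examples that double as tests; Python's doctest module does exactly that by executing the interactive-style snippets embedded in docstrings. A generic sketch, not Motor's actual example code:

```python
def add_padding(name: str) -> str:
    """Pad a collection name for display.

    >>> add_padding("fruit")
    '[fruit]'
    """
    return f"[{name}]"

if __name__ == "__main__":
    import doctest
    # Reports any embedded example whose output has drifted from the docstring.
    doctest.testmod(verbose=False)
```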
mongod dbpath tue sep mongodb starting dbpath tue sep db version pdfile version sep git version nogitversion tue sep sys info linux smp mon aug edt sep waiting for connections on port sep web admin interface listening on port sep timeoutms not support for yet tue sep followed by many repeated connection accepted from entries interleaved with query program symbol lookups setmemaliasset gcc setmemaliasset gcc timevarpop gcc skipescapednewlines gcc globalbindingsp gcc csizeinbytes namd wrf multifiles soplex cactusadm pugharraygroupsize cactusadm cctkischeduletraverseghextensions calculix dvfill cactusadm cctkitimergettimeofdaystart startslice then fatal error in allocation failed process out of memory tue sep got signal aborted tue sep backtrace mongod repeated six times tue sep dbexit tue sep shutdown going to close listening sockets repeated four times sep shutdown going to flush oplog tue sep shutdown going to close sockets tue sep shutdown waiting for fs preallocator tue sep shutdown closing all files tue sep closeallfiles finished tue sep shutdown removing fs lock tue sep dbexit really exiting now error client client context should be null conn this occurs when i am running the following from a shell and a bunch of processes attempt to connect running mongo type reduce functionobj prev prevcsum objscore prevcount initial csum count dbscoresstats ns count size avgobjsize storagesize numextents nindexes lastextentsize paddingfactor flags totalindexsize indexsizes id ok a rather long group operation
1
if a mongos tries to connect with a different config string for example if hostnames changed from local to fully qualified names an exception will be logged showing the old new configuration string and mongos specified a different config database string codetue feb warning db exception when initializing on current connection state is state conn vinfo cursor none count done false retrynext false init false finish false errored false caused by could not initialize sharding on connection caused by mongos specified a different config database string stored vs given can be resolved either by restarting with the correct old config string or by restarting mongosmongods to reflect the new config string
1
after a couple of days lost troubleshooting a race condition on our end we determined mongodb has issues with concurrent findandmodify writes specifically we have business logic that acquires a lease think lock on a given object for a few minutes for processing the findandmodify sets the lease datetime to the present and the query makes sure its at least three minutes in the past we found mongodb acquires the same object and modifies it twice within a matter of milliseconds a toctou timeofcheck timeofuse condition we are able to reproduce consistently in a threaded environment with several queries going out at the same time we initially suspected the following issue was the culprit but it doesnt seem like it was fixed we still see findandmodify returning objects that were modified by a previous findandmodify statement and no longer fulfill the query predicate due to a concurrent update
1
now that the driver has switched over to scheduledexecutorservice for monitoring the cluster applications that use the driver no longer exit unless mongoclient is explicitly closed to fix this the driver needs to create the service with a thread factory that creates daemon threads
0
when i try to restore some data from i am getting unexplained errors like this wtsessionopencursor the process must exit and restart wtpanic wiredtiger library panic wt cursor failed wtpanic wiredtiger library panic or like this wiredtiger file wterror nonspecific wiredtiger error
0
in the scenario where the agg pipeline is noformat noformat we should be able to merge the query for into the query into as findbarfoo as a performance optimization
0
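The pipeline in the row above was stripped by the text cleaning, so the shapes below are assumptions, but the optimization it describes is the familiar one: adjacent $match stages (or a leading $match) can be coalesced into a single find-style predicate and answered by the initial collection/index scan.

```python
# Assumed pipeline shape: two $match stages that could be answered by one scan.
pipeline = [
    {"$match": {"bar": 1}},
    {"$match": {"foo": 1}},
]

# Coalesced predicate the optimizer could run instead, equivalent to
# db.collection.find({"bar": 1, "foo": 1}).
merged_filter = {"bar": 1, "foo": 1}
```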
the symbols files are now nested in a bin directory in order to be colocated with the executables however the extract debugsymbols function is only searching for debug dsym and pdb files in the toplevel of the mongodb directory when trying to move them to into the current working directory this issue is preventing the debugger from having the symbol files loaded when analyzing a process for hangs noformat running command shellexec in extract debugsymbols step of debug symbols are not created for every variant if then copyright c igor pavlov processing archive mongodebugsymbolstgz extracting extracting extracting noformat
1
commands that require sharding infrastructure to run will be executed only after gridsetshardinginitialized is called the problem is that by that time the shardingstate has not yet been initialized commands that trigger refreshes of the catalogcache will hit this invariant when trying to run forceprimarydatabaserefreshandwaitforreplication
0
detected by address sanitizer leak detector mci build of commit test was previously green at bisection results across git bisect badbisecting revisions left to test after this roughly steps update multikey metadata in a separate transaction
0
on and potentially other pages it assumes that the url and filename for mongodb is this has changed to the change to remove ssl will continue moving forward per this affects not only the curl command on this page but also the cp command a few lines down
1
see screenshot to reproduce start the shell with nodb type db
1
i have an issue when using out in compass while being connected to atlas datalake created an aggregation pipeline to store certain documents into a configured bucket but compass doesnt show the out stage the pipeline still works using compass and the mongo shell more details can be found in datallakehelp please let me know if you need additional input
1
im trying to set up a sharded mongodb instance with on freebsd without success mongod crashes when mongos try to connect to it the same setup works with just fine
0
introduced tassert which should be better suited for most invariant statements scattered throughout the qo codebase this ticket tracks the work to take a pass through and update where appropriate
0
i ran the following code shshardcollection“dcuuserdata id hashed code and i simply cannot shard based on that key why this is a new production server we are setting up in aws noformat mongos shshardcollection“dcuuserdata id hashed e query syntaxerror unexpected token illegal mongos shshardcollection “dcuuserdata id hashed e query syntaxerror unexpected token illegal mongos shshardcollection “dcuuserdata id hashed e query syntaxerror unexpected token illegal noformat
1
i am trying to run the following command on my linux machine before building my binary for the code that is shown below the command i ran go install linkshared buildmodeshared gomongodborgmongodriverbson gomongodborgmongodriverevent gomongodborgmongodrivermongooptions since go i am not able to run the above command as i always get the following error i cant seem to understand what this error actually means and i am stuck at this error gomongodborgmongodrivervendorgithubcomklauspostcompresszstdinternalxxhash gomongodborgmongodrivervendorgithubcomklauspostcompresszstdinternalxxhashasm when dynamic linking is clobbered by a global variable access and is used here addq axasm assembly failed the code i want to build is attached here
1
documentation says is the current point release this is no longer true as has been released
1
ive hit this invariant a few times while running multiple tests in parallel the tests were run against the storage in raise at in mongomongobreakpoint at in mongobreakpoint at in mongofassertfailed at in mongolockerimpllockcomplete this resid checkdeadlockfalse at in mongolockerimpllock this resid modemongomodex checkdeadlockfalse at in mongocollectionlock this lockstate ns modemongomodeix at in mongolockandcheckimpl this result intentlocktrue at in mongolockandcheck this result at in mongoinsertone state result at in mongoexeconeinsert this state error at in mongoexecinserts this request errors at in mongobulkexecute this request upsertedids errors at in mongoexecutebatch this request response at in mongorun this txn dbname cmdobj errmsg result fromreplfalse at in mongoexeccommand txn c dbname cmdobj errmsg result fromreplfalse at in mongoexeccommand txn c cmdns cmdobj result fromreplfalse at in mongoruncommands txn ns cmdobj b anobjbuilder fromreplfalse at in mongoruncommands txn ns jsobj curop b anobjbuilder fromreplfalse at in mongorunquery txn m q curop result fromdbdirectclientfalse at in mongoreceivedquery txn c dbresponse m fromdbdirectclientfalse at in mongoassembleresponse txn m dbresponse remote fromdbdirectclientfalse at in mongoprocess this m port le at in mongohandleincomingmsg arg at in startthread arg at in clone at output noformat lockerid encountered an unexpected deadlock the server will shut down in order to prevent becoming unresponsive the cycle of deadlocked threads is locker waits for resource collection held by locker waits for resource collection held by and so on with the same entry repeated for dozens of lockers forming the cycle noformat version
1
collection name lengths were originally derived from the implementation of and restricted to characters long with the removal of in we can remove this restriction
0
there is a todo in the codebase referencing a resolved ticket which is assigned to youplease follow this link to see the lines of code referencing this resolved ticketthe next steps for this ticket are to either remove the outdated todo or follow the steps in the todo if it is correct if the latter please update the summary and description of this ticket to represent the work youre actually doing
0
please take a look at the comments in that link to the commits that were made to that encompass the same work thats needed to be done here ac update the binversion in serverjs to update the binary version constants defined in multiversionconstantspy the fcv constants in this file can be left for work to be done as a part of ensure no new redness was introduced to the enterprise rhel implicit multiversion build variant
1
the test had been disabled because it is failing in multiversion suites since the fcv constants have been upgraded
0
it fails with the following error cannot decode javascript into a string type attached is a sample bson file
1
step appears twice in the first step connect to the mongodb server the example should read mongo username samdbaexamplecom password authenticationmechanism plain authenticationdatabase external host port authenticationmechanism not authenticationmechanisms
0
value returned from a function is not checked for errors before being used defect staticc checker checkedreturn subcategory none file srcmongodbstoragestorageenginelockfileposixcpp function mongowritepid srcmongodbstoragestorageenginelockfileposixcpp line calling fsync without checking return value as is done elsewhere out of times fsynclockfilehandlefd code
0
currently libbson is in a separate repository from libmongoc libmongoc includes it as a git submodule this causes various problems its difficult to build libmongoc with a bundled checkout of libbson using cmake its confusing to develop new libbson features that libmongoc depends on common logic in the test suite and build systems must be duplicated across the two repositories and inevitably the common logic diverges replace the git submodule in libmongoc with the contents of the libbson repository and retire the libbson repository do this as a fake author to avoid biasing the github contributor stats merge the two test suites into one no more testlibbson executable testlibmongoc should now run all the tests for both libraries factor the common build logic if we havent yet deleted the autotools scripts dont bother factoring the duplicated autotools logic only do so for cmake scripts continue to ship libbson and libmongoc as distinct libraries only the repositories are merged
0
noformat thread thread lwp in wtreadlock in wtsessionlockdhandle in wtsessiongetbtree in wtconnbtreeapply in wtcurstatinit in wtcurstatcolgroupinit in wtcurstatinit in wtcurstatopen in wtcurstattableinit in wtcurstatinit in wtcurstatopen in wtopencursor in in mongoexporttabletobsonwtsession stdbasicstring stdallocator const stdbasicstring stdallocator const mongobsonobjbuilder in mongoappendcustomstatsmongooperationcontext mongobsonobjbuilder double const in mongostoragesizemongooperationcontext mongobsonobjbuilder int const in mongosizeondiskmongooperationcontext const in mongorunmongooperationcontext stdbasicstring stdallocator const mongobsonobj int stdbasicstring stdallocator mongobsonobjbuilder bool in mongoexeccommandmongooperationcontext mongocommand stdbasicstring stdallocator const mongobsonobj int stdbasicstring stdallocator mongobsonobjbuilder bool in mongoexeccommandmongooperationcontext mongocommand int char const mongobsonobj mongobsonobjbuilder bool in mongoruncommandsmongooperationcontext char const mongobsonobj mongobufbuilder mongobsonobjbuilder bool int in mongonewrunquerymongooperationcontext mongomessage mongoquerymessage mongocurop mongomessage bool in mongoassembleresponsemongooperationcontext mongomessage mongodbresponse mongohostandport const bool in mongoprocessmongomessage mongoabstractmessagingport mongolasterror in mongohandleincomingmsgvoid noformatandnoformat thread thread lwp in llllockwait from in from in pthreadmutexlock from in wtevictfileexclusiveon in wtcacheop in wtcheckpointclose in wtconnbtreesyncandclose in wtconndhandlecloseall in wtschemadrop in wtschemadrop in in mongodropmongostringdata const in mongodropidentmongooperationcontext mongostringdata const in mongocommit in mongocommit in mongocommit in mongorunmongooperationcontext stdbasicstring stdallocator const mongobsonobj int stdbasicstring stdallocator mongobsonobjbuilder bool in mongoexeccommandmongooperationcontext mongocommand stdbasicstring stdallocator const mongobsonobj int stdbasicstring stdallocator mongobsonobjbuilder bool in mongoexeccommandmongooperationcontext mongocommand int char const mongobsonobj mongobsonobjbuilder bool in mongoruncommandsmongooperationcontext char const mongobsonobj mongobufbuilder mongobsonobjbuilder bool int in mongonewrunquerymongooperationcontext mongomessage mongoquerymessage mongocurop mongomessage bool in mongoassembleresponsemongooperationcontext mongomessage mongodbresponse mongohostandport const bool in mongoprocessmongomessage mongoabstractmessagingport mongolasterror in mongohandleincomingmsgvoid noformat
1
computeoperationtime the replication lastcommittedoptime for majority reads or the lastappliedoptime otherwise and we have this dassert that makes sure that neither of them should be beyond the cluster time we have a dassert in replicationcoordinatorimplsetmylastappliedoptimeandwalltime to assert that the lastappliedoptime should never advance beyond the cluster time however we do not have a corresponding dassert for the lastcommittedoptime normally for internal communications between replset members we have the logicaltimemetadatahook to parse cluster time metadata and advance the cluster time on receiving a network message for example when a node receives a heartbeat response it parses the cluster time metadata as part of the network interface hooks before handing it off to repl to process the heartbeat response and when the node processes the heartbeat it could advance the commit point on hearing a more recent commit point so the assumption that the commit point is never ahead of the cluster time is normally correct because we parse the cluster time metadata first however if a heartbeat response comes from an arbiter it could contain a more recent commit point without cluster time metadata simply because logical clock is disabled for arbiters so theoretically the following could happen secondarys current knowledge of the cluster time is secondary receives a heartbeat response without cluster time metadata from an arbiter that has a commit point secondary processes the heartbeat and advances its commit point lastcommitted to a majority read on the secondary returns operation time from computeoperationtime dassert is hit because the secondarys logical clock is being the operation time the fact that computeoperationtime returns lastcommitted for majority reads seems weird because for majority reads they dont actually read at lastcommitted but the committed snapshot and repl guarantees that the committed snapshot is never ahead of the lastapplied whereas lastcommitted could so i think it is more correct for computeoperationtime to return the committed snapshot for majority reads this ticket should investigate whether what mentioned above could actually happen with arbiters no matter which time we decide computeoperationtime should return for majority reads we should add a corresponding dassert in repl like we did for the lastapplied to enforce that assumption and we should have a targeted test for it
0
when interrupted in between trying to persist new chunk changes or when the changes gets partially rolled back when shard tries to persist new updates to configcache it sets the refreshing flag deletes the current chunk document and then inserts the newer version and finally unsets the reloading flag after it finishes processing the updated chunks however if the mongod crashed in the middle or some of these writes get partially rolled back such that insert was rolled back but not the delete the configcache will now be in an inconsistent state
0
version released always send hardware metrics in association with the fqdn of the server rather than with any defined aliases
1
i just wanted to point out a new driver for delphi optimized for mongodb it can store documents in variants with latebinding for the properties and handle all bson types it also has full json support with mongodb extended syntax and has been optimized for speed from the ground up see there is also an orm layer available and clientserver restful access via json the resulting benchmark is awesome when compared to regular sql engines mongodb rocks
1
ocsp validation is a vague phrase that doesnt capture the fact that the setparameter is used for stapling only
0
on this page i would expect there to be some direction on installing a monitoring agent but there is nothing or an automation agent which would install a monitoring agent ditto on all of the install pages if we just direct to a separate page on installation of agents and their behavior that would be fine
1
problem description steps to reproduce connect to any of favorites connections go to favorites list and check that last connection time of just connected item has been updated to present select that connection in the sidebar and open edit favorite screen for it dont do any changes and save expected results last connection time is not changed if name or color of favorite connection has been updated actual results last connection time of just updated favorite connection has changed to never additional notes
0
action dropallrolesfromdatabase has a few problems one of them a big problem the command seems to produce no entry in audit logs the command is called droprolesfromdatabase without an all i think the all is clearer since droprolesfromdatabase suggests that one might drop only selected roles the spec for the text message is dropped all roles from and there shouldnt be a role field in this event since there is no single role involved
1
ticket work for latest if have to skip tests and look at any racy tests
0
api change from wt api which will make setting turn off sweep of handles but which wont negatively impact lsm which should continue to close obsolete trees see
0
titlefilename changes although sharding makes sense at a higher level as well as sharding internals we should use sharded clusters where more shardingadmin content can be divided into separate from the sharding fundamentals page can be moved a shard keys can be relocated to a new page and act as an intro paragraph for the other shard key material b mongo and querying section also can be relocated to new page c requirements can go to the admin
0
from rfc the usage of the field is deprecated but permitted see end of chapter subject from quote conforming implementations generating new certificates with electronic mail addresses must use the in the subject alternative name extension section to describe such identities simultaneous inclusion of the emailaddress attribute in the subject distinguished name to support legacy implementations is deprecated but permitted quote right now emailaddress presence breaks auth
0
i am trying to implement c code that gets about documents in a collection whose field types include int double string it takes more than seconds to count the duration time i separate the code into sections the first invokes dbclientconnectionquery to get all from the collection and return a vector of bsonobjs it takes seconds the last iterates that vector and parses each bsonobj into a c object using bsonobjgetfield int or double or string and takes more than seconds please consider and give me some advice to improve it thank you
0
my os is ubuntu server and my mongodb version is i make a stress test with the new storage engine wiredtiger but i find the memory usage never reduce finally killed by kernel because of out of memory noformat localhost kernel out of memory kill process mongod score or sacrifice child noformat here is the dbserverstatus noformat dbserverstatus host rtdstest version process mongod pid uptime uptimemillis uptimeestimate localtime asserts regular warning msg user rollovers connections current available totalcreated cursors note deprecated use server status metrics clientcursorssize totalopen pinned totalnotimeout timedout extrainfo note fields vary by platform heapusagebytes pagefaults globallock totaltime currentqueue total readers writers activeclients total readers writers locks global acquirecount r w w acquirecount r w database acquirecount r w r w acquirewaitcount r w w timeacquiringmicros r w w collection acquirecount r w network bytesin bytesout numrequests opcounters insert query update delete getmore command opcountersrepl insert query update delete getmore command storageengine name wiredtiger wiredtiger uri statistics lsm sleep for lsm checkpoint throttle sleep for lsm merge throttle rows merged in an lsm tree application work units currently queued merge work units currently queued tree queue hit maximum switch work units currently queued tree maintenance operations scheduled tree maintenance operations discarded tree maintenance operations executed async number of allocation state races number of operation slots viewed for allocation current work queue length number of flush calls number of times operation allocation failed maximum work queue length number of times worker found no work total allocations total compact calls total insert calls total remove calls total search calls total update calls blockmanager mapped bytes read bytes read bytes written mapped blocks read blocks preloaded blocks read blocks written cache tracked dirty bytes in the cache bytes currently in the cache maximum bytes configured bytes read into cache bytes written from cache pages evicted by application threads checkpoint blocked page eviction unmodified pages evicted page split during eviction deepened the tree modified pages evicted pages selected for eviction unable to be evicted pages evicted because they exceeded the inmemory maximum pages evicted because they had chains of deleted items failed eviction of pages that exceeded the inmemory maximum hazard pointer blocked page eviction internal pages evicted maximum page size at eviction eviction server candidate queue empty when topping up eviction server candidate queue not empty when topping up eviction server evicting pages eviction server populating queue but not evicting pages eviction server unable to reach eviction goal pages split during eviction pages walked for eviction eviction worker thread evicting pages inmemory page splits percentage overhead tracked dirty pages in the cache pages currently held in the cache pages read into cache pages written from cache connection pthread mutex condition wait calls files currently open memory allocations memory frees memory reallocations total read ios pthread mutex shared lock readlock calls pthread mutex shared lock writelock calls total write ios cursor cursor create calls cursor insert calls cursor next calls cursor prev calls cursor remove calls cursor reset calls cursor search calls cursor search near calls cursor update calls datahandle connection dhandles swept connection candidate referenced 
connection sweeps connection timeofdeath sets session dhandles swept session sweep attempts log log buffer size increases total log buffer size log bytes of payload data log bytes written yields waiting for previous log file close total size of compressed records total inmemory size of compressed records log records too small to compress log records not compressed log records compressed maximum log file size preallocated log files prepared number of preallocated log files to create preallocated log files used log read operations records processed by log scan log scan records requiring two reads log scan operations consolidated slot closures logging bytes consolidated consolidated slot joins consolidated slot join races slots selected for switching that were unavailable record size exceeded maximum failed to find a slot large enough for record consolidated slot join transitions log sync operations log write operations reconciliation page reconciliation calls page reconciliation calls for eviction split bytes currently awaiting free split objects currently awaiting free session open cursor count open session count threadyield page acquire busy blocked page acquire eviction blocked page acquire locked blocked page acquire read blocked page acquire time sleeping usecs transaction transaction begins transaction checkpoints transaction checkpoint currently running transaction checkpoint max time msecs transaction checkpoint min time msecs transaction checkpoint most recent time msecs transaction checkpoint total time msecs transactions committed transaction failures due to cache overflow transaction range of ids currently pinned transactions rolled back concurrenttransactions write out available totaltickets read out available totaltickets writebacksqueued false mem bits resident virtual supported true mapped mappedwithjournal metrics commands buildinfo failed total createindexes failed total dbstats failed total getlog failed total getnonce failed total ismaster failed total listcollections failed total listdatabases failed total ping failed total replsetgetstatus failed total serverstatus failed total top failed total whatsmyuri failed total cursor timedout open notimeout pinned total document deleted inserted returned updated getlasterror wtime num totalmillis wtimeouts operation fastmod idhack scanandorder writeconflicts queryexecutor scanned scannedobjects record moves repl apply batches num totalmillis ops buffer count maxsizebytes sizebytes network bytes getmores num totalmillis ops readerscreated preload docs num totalmillis indexes num totalmillis storage freelist search bucketexhausted requests scanned ttl deleteddocuments passes ok noformat
1
when i added asyncio support i switched from the tornadospecific futuresetexcinfo to the compatible futuresetexception to propagate errors see if theres a compatible way to preserve the error traceback with both tornado and asyncio
0
since in safe reconfig we always compare config term and version we need to audit all places where config version is used and see if they should be updated
0
kris mentioned that uri does not have precedence which is not consistent with the spec recommendation the spec rationale for precedence order specifically notes that its a recommendation not a requirement for back compatibility reasons as a new driver the go driver should honor the spec recommendation to be consistent with compliant drivers this will help tse provide consistent answers to user questions
0
it looks like createtaskdirectory is missing a call to taskconfigexpansionsputworkdir newdir this means that the taskconfigexpansionsgetworkdir will still refer to agttaskconfigdistroworkdir instead of the unique subdirectory that was created
1
when compiling mongoc and mongocxx i encountered an issue with the target bsoncxxtesting im using the release with and otherwise it cannot compile since im using gcc heres the console message code built target bsoncxx built target bsoncxxtesting linking cxx executable testbsonexe followed by eight cmakefiles testbsondir objects bsonbuildercppobj bsonbuildercpp text undefined reference to errors then error ld returned exit status make error make error make error code i couldnt find a way to fix it but all the other targets worked fine
1
we missed because of this
1
completely clean data start mongo without auth add admin and a local user use local dbadduser repl restart mongod with auth and master and get the following mon mar mongo db starting pid port dbpath varlibmongodb master slave mar db version pdfile version mar git version nogitversion mon mar sys info linux promethium smp thu jan utc mar waiting for connections on port mar assertion for db lock type usrbinmongod usrbinmongodmain usrbinmongod mon mar exception in initandlisten std unauthorized for db lock type terminating mon mar dbexit mon mar shutdown going to close listening sockets mon mar shutdown going to flush oplog mon mar shutdown going to close sockets mon mar shutdown waiting for fs preallocator mon mar shutdown closing all files mon mar closeallfiles finished mon mar shutdown removing fs lock mon mar dbexit really exiting now starting without auth is fine
1
the transactionlifetimelimitseconds server parameter indicates how long a transaction may run before it is aborted by a background thread for tests we set it to an extremely high value however fixtures started by tests in the nopassthrough suite have the default value of seconds this seems strange though since it means that each test which uses transactions is responsible for setting the parameter manually its easy to forget to do this like in this test i think the test fixtures should be modified so that this flag is set to a higher value or at least the test linked to above should use the higher value
0
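For the row above, raising the parameter on an already-running fixture is a single admin command; a sketch using PyMongo, where the URI and the chosen value are placeholders rather than what the test fixture actually uses.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder fixture address

# Give test transactions a full day before the background reaper aborts them.
client.admin.command("setParameter", 1, transactionLifetimeLimitSeconds=86400)

# Read the value back to confirm it took effect.
print(client.admin.command("getParameter", 1, transactionLifetimeLimitSeconds=1))
```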
mongodb segfaults expecting mongodblog to exist during logrotate
1
description low level bson helpers in some drivers for example the bson writer in the java driver can be abused to create documents with duplicate key names for example noformat foo bar foo bim foo bam noformat the bson spec is silent on the issue of duplicate key names the servers behavior when confronted with duplicate key names is undefined since drivers generally use a map to represent a bson document only one of the keys will survive decoding and reinsertion of such a document a good place to add this might be but the text should really be a warning call to action scope of changes impact to other docs mvp work and date resources scope or design docs invision etc
0
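The duplicate-key behavior in the row above is easiest to see with hand-built BSON, since map-based APIs cannot express duplicate keys; a sketch that assembles the wire bytes directly and decodes them with PyMongo's bson package (assumed installed, `bson.decode` is available in PyMongo 3.9+).

```python
import struct
import bson  # ships with PyMongo

def string_element(key: str, value: str) -> bytes:
    # 0x02 = UTF-8 string element: type byte, cstring key, int32 length, value bytes, NUL.
    encoded = value.encode("utf-8") + b"\x00"
    return b"\x02" + key.encode("utf-8") + b"\x00" + struct.pack("<i", len(encoded)) + encoded

elements = b"".join(string_element("foo", v) for v in ("bar", "bim", "bam"))
# Total document length covers the length prefix itself, the elements, and the trailing NUL.
doc = struct.pack("<i", 4 + len(elements) + 1) + elements + b"\x00"

# A dict-based decoder keeps only one of the duplicate keys (the last one wins here).
print(bson.decode(doc))  # {'foo': 'bam'}
```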
we do not test validate upgrades from ops manager customers may upgrade from directly to but not from
1
this visibility check here ignores prepare written to the disk codejava check conflict against the on page value if there is no update on the update chain except aborted updates otherwise we would have either already detected a conflict if we saw an uncommitted update or determined that it would be safe to write if we saw a committed update if rollback upd null cbt null btreecolfix cbtins null wtreadcelltimewindowcbt cbtref tw if twstoptxn wttxnmax twstopts wttsmax rollback wttxnvisiblesession twstoptxn twstopts else rollback wttxnvisiblesession twstarttxn twstartts code
0
for the latest stable release of mongodb use the following repository file namemongodb repository baseurl pycurl error the requested url returned error not found
0
found through inspection maxstalenessms is used for selecting replica set members but isnt sent to mongos fix that and implement these tests
1
panel title issue status as of april issue summary in the geonear command returned the distance in radians for legacy coordinate pairs in this behavior changed and the geonear command returned the result in meters this behavior change was unintentional and needs to be reverted user impact users that have their data stored in legacy coordinate pairs rather than in geojson format and who use the geonear command get the results back in an unexpected unit which may break their applications workarounds a clientside rescaling of the result back from meters to radians can be used as a workaround resolution revert the behavior change and return the result in radians again if the coordinates are stored in legacy format affected versions version was affected by this bug patches the patch is included in the production release panel original description it seems that the distance metric for handling legacy points has changed from mongodb to the distance in geonear queries for legacy points was returned as radians since it is now returned in meters since this behaviour is not listed in the incompatibility changes list and could be seen as a breaking change i suppose that this is a bug see also here
0
im running my rubyrails app using jruby and ive recently upgraded mongodb from to and subsequently updated bson to and the mongo ruby driver to however my tests are now failing the error im getting is from the bson gem specifically bsonrb line where its trying to load the gem appears to be looking for jars within but they do not exist as they did with i looked at the mongorubydriver source and see it now includes the jars however when i download install the mongorubydriver it does not include the ext directory meaning does not contain the ext directory i believe the mongorubydriver gem needs to be fixed to include the jars and the bson gem needs to be fixed to look for the jars contained within the mongorubydriver gem once its been fixed
1
i have a favorite connection to my atlas free tier for some reason it includes the password as undefined which results in the connection timing out rather than failing or prompting for passwd ive attached a screenshot heres the url that was autopopulated if i go to the fill in connection fields individually screen it will render as if i fill in the password correctly then toggle back to the shortcut screen the password is correct
0
there are use cases where a generator is more convenient than passing a list of bulk write operations for example when performing a large number of operation where creating the list would balloon memory usage bulkwrite should be able to accept a generator without inflating the entire thing to a list this would also allow insertmany to support generators without inflating as well which is requested in some open questions what should be the behavior of unordered bulkwrite where the generator flips flops between request types insert update insert etc i think it would be a performance regression to send each operation individually but sorting the operations from the generator without ballooning memory usage will be tricky
0
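A sketch of what the row above asks for, in PyMongo terms: today the write models still have to be materialized with list(), and the proposal is for bulk_write to consume the generator lazily. The collection name and document shape are placeholders.

```python
from pymongo import InsertOne, MongoClient

client = MongoClient()  # placeholder connection
coll = client.test.items

def requests():
    # Yield write models lazily instead of building a large list up front.
    for i in range(1000):
        yield InsertOne({"_id": i, "payload": "x" * 32})

# Current API: the generator must be inflated to a list before submission;
# the ticket proposes accepting requests() directly without the list() call.
result = coll.bulk_write(list(requests()), ordered=False)
print(result.inserted_count)
```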
it would be nice to have all release instructions located in the repo we should also consider changing the process to require prs when merging to version branches
0
cursorsettypemap and mongodbbsontophp currently proceed with bson conversion even if phpphongobsontypemaptostate throws an exception ideally they should check if phpphongobsontypemaptostate fails and return early
0
ive seen users add multiple arbiters just to be safe i think the idea is that if one is good then more is better it often results in returning the set to an even number of members which defeats the purpose of the arbiter unless there is a good reason to allow more than one we should disallow it to keep users from shooting themselves in the foot
0
in order to support listing collections of databases with a large number of collections the listcollections command that was introduced in will need to be able to return a cursor otherwise it will run into the document size limit more details to follow once the changes needed for are specified update specs enumerate collections enumerate indexes
1
the reshardingdonoroplogiterator intentionally doesnt return the opn final resharding oplog entry not returning the final resharding oplog entry ensures the recipient when resuming would be guaranteed to come across the oplog entry again in its local oplog buffer and realize it had already finished applying all of the oplog entries however the current behavior leads the number of oplog entries fetched to always be greater than the number of oplog entries applied even when resharding completes successfully this is due to how the opn final resharding oplog entries are accounted for in the number of oplog entries fetched but are not accounted for in the number of oplog entries applied incrementing the inmemory counter by for the number of oplog entries applied when reshardingoplogappliercurrentbatchtoapply is empty would correct this discrepancy the ondisk value for numentriesapplied in the progressapplier collection must not be incremented by
0
add a config option to the tools that lets users specify values for password uri and sslpemkeypassword in a configuration file see linked writing ticket for detailed design
0
the thread usage monitor sends a signal every minutes to any running python process it should only send it to resmokepy but that is also problematic as the signal is propagated to the subprocesses we will remove this logic for now
1
since got merged into collectionvalidatorfeaturecompatibilityversionjs has started failing the test expects start up of a laststable mongod to fail when there is a collection validator using new query features present we should change the test to instead check that inserts fail
0
detail is here table imbmicroblog code public class imbmicroblog public guid id get set public guid createuserid get set public list commentlist get set public bool isaudit get set public string content get set code table imbcomment code public class imbcomment public guid id get set public guid createuserid get set public bool isaudit get set public string content get set code when i want to match nested imbcomments columns isaudit true and slice them in c like this code collectionfind queryand queryeqid microblogid queryeqcommentlistisaudit true setfields fieldsslicecommentlist skip limit tolist commentlistreplylistcount code but it matches all rows including isaudit false i dont know how to query the nested items hope you can help me thanks very much
0
i want to test the fragmentation and replication set functions under mongodb the config server has three ports i also tested what will happen when the config server goes down after the test is complete i start three config servers when i start mongos like this mongos configdb port i got this error wed sep mongos version starting help for usage wed sep git version sep build info linux smp fri nov est sep options configdb port wed sep warning config servers and differ wed sep warning config servers and differ wed sep warning config servers and differ wed sep warning config servers and differ wed sep error could not verify that config servers are in sync caused by config servers and differ chunks chunks databases sep configserver connection startup check failed please tell me how to solve it thanks
1
panel title epic summary as part of the atlas planned maintenance initiative the drivers will need to become more intelligent so that we as a platform can notice key indicators of issues in new releases we also want to take note of changes to essential metrics that occur during maintenance to accomplish that wed need to identify what metrics and parameters we may want to collect as the driver protocol to report metrics to the mongod report those metrics in serverstatus drivers to focus on node python and java desirable metrics should be relevant client metrics that we do not already have available either from server or on atlas side lead tbd author rachelle palmer pocs spec update panel
0
there is a common need to access the type name or class name of a zval for exception messages this involves a ztype check in a ternary followed by an access of the class name string or zendgettypebyconst for nonobject types turning this into a macro would clean up some code
0
i cant compile my user code with the new cxxdriver downloaded from because of include pchhthere should be a relative path to pchh in all cxxdriver header file otherwise i would have to add usrlocalincludemongo to the include search path which is a bad option in my optionnoformatusrlocalincludemongo grep r include pchh clientdbclienthinclude pchhclientdbclientinterfacehinclude pchhclientdbclientcursorhinclude pchhclientdbclientrshinclude pchhclientdistlockhinclude pchhscursorshinclude pchhschunkhinclude pchhsdlogichinclude pchhsclientinfohinclude mongopchhsbalancehinclude pchhsdwritebackhinclude pchhsinterruptstatusmongoshinclude pchhsutilhinclude mongopchhswritebacklistenerhinclude pchhsshardhinclude pchhsrequesthinclude pchhsstatshinclude pchhsstrategyhinclude pchhsdchunkmanagerhinclude pchhdbdbhinclude pchhdbclienthinclude pchhdbcursorhinclude pchhdbcmdlinehinclude pchhdbinterruptstatusmongodhinclude pchhdbclientcommonhinclude pchhdbindexkeyhinclude pchhdbopsqueryhinclude pchhdbopsupdatehinclude pchhdbopsupdateinternalhinclude pchhdbopsdeletehinclude pchhdbbtreehinclude pchhdbjsobjhinclude pchhdbreplblockhinclude pchhdbmodulehinclude pchhdbextsorthinclude pchhdbinterruptstatushinclude pchhdbindexhinclude pchhdbstatscountershinclude pchhdbstatssnapshotshinclude pchhdbdbhelpershinclude pchhdbintrospecthinclude pchhdbnamespacedetailshinclude pchhdbclientcursorhinclude pchhdbhasherhinclude pchhdbprojectionhinclude pchhdbnamespacehinclude pchhutiltracehinclude pchhutilnetsockhinclude pchhutilnetmessageserverhinclude pchhutilnetminiwebserverhinclude pchhutilnethttpclienthinclude pchhutilfileallocatorhinclude pchhutilhashtabhinclude pchhutilqueuehinclude pchhutilwinutilh include pchhutilchecksumhinclude pchhutilbsonutilhinclude pchhutiliteratorhinclude pchhutillruishmaphinclude pchhutilstringwriterhinclude pchhnoformat
0
description under the url section heading kerberos gssapisspi replace to connect using the authentication mechanism with to connect using the gssapi authentication mechanism replace mongoclientconnectfmongodbssauthmechanismgssapigssapiservicenamemongodb urlencodedprincipal server functionerr client with mongoclientconnectfmongodbssauthmechanismgssapigssapiservicenamemongodb urlencodedprincipal server functionerr client scope of impact to other mvp work and resources scope or design docs invision etc
0
in this lab of mongodb basics we ask students to locate the city of cancun mexico and drag a radius around the coast to find geopositional data which should look like this image however in the latest version of compass or of the mapbox plugin this label is not getting shown in the map see attached image to reproduce this we can open compass connect to the course hostname username password replica set locate the shipsshipwrecks collection and run the following query in the schema tab coordinates geowithin centersphere
1
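The query at the end of the row above was partially stripped by the cleaning; a reconstructed sketch of a $geoWithin/$centerSphere query against the shipwrecks collection, where the connection details, coordinates, and radius are assumptions ($centerSphere takes its radius in radians).

```python
from pymongo import MongoClient

client = MongoClient()  # placeholder for the course cluster URI
shipwrecks = client.ships.shipwrecks

cancun = [-86.85, 21.16]        # [longitude, latitude], approximate position of Cancun
radius_radians = 9.0 / 3963.2   # roughly 9 miles converted to radians (miles / earth radius in miles)
query = {"coordinates": {"$geoWithin": {"$centerSphere": [cancun, radius_radians]}}}

for doc in shipwrecks.find(query).limit(5):
    print(doc.get("name"), doc["coordinates"])
```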
we turned off a number of asserts in objectmodreadonlytests to catch exceptions because were changing how exceptions bubble up through invoke switch those over to verifying asserts
0
program terminated with signal segmentation fault in gdb bt in in mongocclusternodedestroy in mongocclusterdestroy in mongocclientdestroy in mongocclientpoolpush
1
after upgrading mongodb from to we started encountering mongo ioerror after some research we found that it happened each time the primary member in the replicaset was restarted and elected as primary again it was reproduced with code ruby client mongoclientnew ssl true sslverify false sslcert tmpmongodbsslclientcrt sslkey tmpmongodbsslclientkey sslcacert tmpmongodbsslmongodbpem read mode primary write w database test authsource admin user user password password client clientusetest clientinsertone name john stop primary member wait for another member to be elected as primary clientinsertone name paul start exprimary and wait for this member to be elected as primary again clientfind name john count code and we are getting this error each time code none failureerror expect clientfind name john count to eq mongo ioerror rescue in handleerrors handleerrors read deserialize block in read ensureconnected read deliver block in dispatch publishcommand dispatch code
1
as described in after the next major release the plans to maintain are up in the air the next major release plans to drop support for which we may not want to drop support for lets investigate alternatives like google test or decide if is still okay with these downsides
0
the above page has a link to install and update the monitoring agent when you click on this link it takes you back to the same page with the following warning quote you were redirected from a different version of the documentation click here to go back quote
1
while dropping a collection it should abort any inprogress index builds aborting the index will produce an abortindexbuilds oplog entry which suffices for inprogress index builds when dropping the collection secondaries should not abort inprogress index builds but instead wait for the abortindexbuilds oplog entry
0
hi we have the following configuration of mongodb journal enabled on we have the following servers mongod config server mongos on we have the following servers mongod config server we ran the dropdatabase command from mongos and got the below error mongos dbdropdatabase assertion dbclientbase transport error query dropdatabase assertioncode errmsg db assertion failure ok dropped the database only from was not deleted from to again ran the dropdatabase command from mongos and get ok mongos dbdropdatabase dropped ok but the database was not deleted from to got the below log in mongos couldnt find database in config db i have the following how can we drop the database in the above what does couldnt find database in config db mean any other way to remove database files apart from dropdatabase please help me to resolve this issue trjrv
1
the newly introduced optimized assembly routines is missing the required notegnustack section leading to the stack of any executable that includes the generated object file having an executable stack this can easily be seen by building mongodb with ldgold and adding linkflagswlwarnexecstack to the build noformat linking buildcachedmongodbopsupdatedrivertest usrbinldgold warning missing notegnustack section implies executable stack noformat note that this affects all target architectures using the gnu toolchain not just the build because the file is unconditionally compiled in but its contents are ifdefed out except on note that these changes were also backported to the branch so this will need to be fixed on both branches
1
panel bgcolor fafbfc summary the serverless test suite is currently run directly against atlas proxies as this allows drivers to run certain tests against a single proxy if required eg if a failpoint needs to be tripped and to target failpoints to individual proxies eg for the mongos pinning tests however in the near future the proxies will be moving behind a load balancer which makes both of these impossible the existing load balancer tests avoid this issue by spinning up two different load balancers one that fronts a single mongos and one that fronts multiple and conditionally using each as necessary we could consider doing something similar for serverless though it would require changes on the atlas side motivation without a solution to this problem drivers testing of serverless will be limited does this ticket have a required timeline what is it all serverless instances will be moving behind load balancers in month is this ticket only for tests yes panel notes for language authors drivers that already implemented serverless testing before load balancing must opt in to testing load balanced serverless instances pass loadbalancedon as an environment variable to evergreenserverlesscreateinstancesh to create a load balanced aws backed instance use the new expansions singleatlasproxyserverlessuri and multiatlasproxyserverlessuri in place of singlelbmongosuri and multilbmongosuri in load balancer tests see the go poc for an example of updating existing serverless resync spec tests to account for new serverless forbid as part of this commit also resync spec test to account for one more serverless forbid as part of this commit
0
it looks like the testselection service is returning most of the data as a string instead of json this will be annoying to consumers of the service and should be changed see here for an example as a testselection consumer i want all the data returned as json so that i can easily parse it ac all endpoints return their data as json tests are written to ensure this is the case
0
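For the row above, the acceptance criterion is simply that endpoints serialize structured data rather than returning its string repr; a framework-agnostic sketch in which the payload fields and endpoint shape are invented for illustration.

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class TestSelection:
    tests: list
    strategy: str

def make_response(selection: TestSelection) -> str:
    # json.dumps of a plain dict, not str(selection), so consumers can json.loads it.
    return json.dumps(asdict(selection))

body = make_response(TestSelection(tests=["jstests/core/find1.js"], strategy="failure_rate"))
assert json.loads(body)["strategy"] == "failure_rate"
```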