| text_clean (string, 10–26.2k chars) | label (int64: 0 or 1) |
|---|---|
show ninserted show dbafind id a id a type it for show dbadmincommandlistcollections collections ok use localswitched to db dbadmincommandlistcollections collections ok show dboplogrsstats ns localoplogrs count size avgobjsize storagesize nindexes capped true max maxsize wiredtiger uri lsm code | 1 |
when a user executes applyops with an update on the primary this is treated as an upsert by default therefore when the applyops oplog entry is replayed on the secondary during initial sync it should also be treated as an upsert however it is not fetching missing documents masks this bug for now see after this change we no longer fetch missing documents when applying an applyops oplog entry during initial sync and initial sync still succeeds | 0 |
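The row above turns on the difference between a plain update (a no-op when the target document is missing) and an upsert (an insert on miss). A minimal in-memory sketch of that difference, with entirely hypothetical names and a `Map` standing in for a collection — not the server's oplog-application code:

```javascript
// Sketch: a plain update silently drops the write when the document is
// missing, while an upsert inserts it. Replaying an applyOps update entry
// without upsert semantics can therefore lose a write the primary applied.
function applyUpdate(store, id, fields, { upsert = false } = {}) {
  const existing = store.get(id);
  if (!existing) {
    if (!upsert) return { matched: 0, upserted: false }; // write lost
    store.set(id, { _id: id, ...fields });               // insert on miss
    return { matched: 0, upserted: true };
  }
  store.set(id, { ...existing, ...fields });             // normal update
  return { matched: 1, upserted: false };
}

const store = new Map();
const asUpdate = applyUpdate(store, 1, { a: 1 });                    // no-op
const asUpsert = applyUpdate(store, 1, { a: 1 }, { upsert: true });  // insert
```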
the current c driver side implementation of compression only works with opmessage which in turn was introduced in server however compression itself was introduced in server the c driver currently has issues when compression is configured and the server is version one issue is that we dont compress opquery messages sent to the server another issue is that we dont always handle opcompressed replies from the server correctly in particular when raising commandstartedevent | 0 |
you can reproduce yourself connect here go to collection aggregation tab try to import the following pipeline with the out at the end the same pipeline without the out doesnt cause the problem codejava group id country country date date uids addtoset uid addtoset addtoset countrycodes addtoset countrycode combinednames addtoset combinedname population first population confirmed sum confirmed deaths sum deaths recovered push recovered states push state project id country idcountry date iddate uids cond eq null remove cond eq null remove countrycodes cond eq countrycodes null remove countrycodes combinednames cond eq combinednames null remove combinednames population cond eq population null remove population confirmed deaths recovered cond eq recovered remove sum recovered states cond eq states remove states out countriessummarytemp code | 1 |
this is present both in and in master but not in or earlier for each wait which is more than a second mongodb will report the sum of arithmetic progression of all msec increments of the wait so basically we report approximately a square of the actual wait time it happens on this line for lock waits we wake up every milliseconds to check for deadlock and to also update the wait time counters so that if a thread is blocked for a long time the currentop statistics will reflect that the bug is in that we do not reset the last sample timestamp and so the time keeps accumulating the workaround until we fix it would be to take the square root of the wait time reported in the lock info | 0 |
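The accumulation described above — time measured from the original start on every sampling tick instead of from the last sample — sums an arithmetic progression, so the reported total grows roughly as the square of the real wait. A small sketch of the buggy accounting and the reset-the-timestamp fix (all names hypothetical, not the actual lock-manager code):

```javascript
// A blocked thread "wakes" once per tick. The buggy branch attributes
// (now - originalStart) to the counter each tick, giving 1+2+...+n ~ n^2/2;
// resetting the last-sample timestamp each tick yields the true wait n.
function reportedWait(ticks, { resetLastSample }) {
  let lastSample = 0;
  let reported = 0;
  for (let now = 1; now <= ticks; now++) {
    reported += now - lastSample;           // time attributed this tick
    if (resetLastSample) lastSample = now;  // the fix
  }
  return reported;
}

const buggy = reportedWait(100, { resetLastSample: false }); // 1+2+...+100
const fixed = reportedWait(100, { resetLastSample: true });  // 100 ticks
```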
burnintestspy has a function called findlastactivatedtask that gets called here but findlastactivatedtask can return none producing noformat python buildscriptsburnintestspy branchmaster testlistoutfilejstestsnewtestsjson noexec burninargs comparing current branch against none traceback most recent call last file buildscriptsburnintestspy line in main file buildscriptsburnintestspy line in main valuescheckevergreen file buildscriptsburnintestspy line in findchangedtests revisions callosplitlines typeerror cannot concatenate str and nonetype objects command failed exit status task completed failure noformat this breaks the compile task | 1 |
see for example at this moment it says uptime is hour even though the host only ran for its calculating uptime as now starttime regardless of the current state of the host which is it should instead be now starttime when the host is still running but terminationtime starttime for all other host states | 0 |
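The fix the row proposes can be sketched in a few lines — uptime for a running host is `now - startTime`, while any other state should use `terminationTime - startTime`. Field and status names here are hypothetical, not the product's schema:

```javascript
// Sketch of state-aware uptime: a terminated host's uptime must stop
// growing, so it is measured against terminationTime, not the clock.
function uptimeMs(host, now = Date.now()) {
  if (host.status === 'running') return now - host.startTime;
  return host.terminationTime - host.startTime;
}

const now = 10_000_000;
const running = uptimeMs({ status: 'running', startTime: 1_000_000 }, now);
const stopped = uptimeMs(
  { status: 'terminated', startTime: 1_000_000, terminationTime: 2_000_000 },
  now
);
```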
delayms in configsettings id balancer | 0 |
automation agent changelog released now built using go when importing a process that uses a password for the pemkeyfile import it without making the user reenter it significant performance improves for state gathering especially for larger sharded clusters add a configurable timeout always attempt to step down replica set member nodes before shutting down monitoring agent changelog version released support for high resolution monitoring for cloud manager premium plans support for multiple active monitoring agents explicit monitoring of arbiters | 0 |
i have tried to start a secondary and failed to get the replication started then i updated the secondary to and it also fails to replicatewhat i see is that the optime for the secondary is stuck at the time of the backup the steps from booting to secondary look like they happen over time then the server just never gets caught up to the optime of the primary | 1 |
makes the tarball unbuildable | 0 |
loops back to the install mongodb landing page doesnt load suse tutorial | 1 |
running the incremental backup csuite test testcsuiteincrbackup produces occasional checksum mismatches running at the head of the wt branch the problem occurs when the test reads a backup and gets a block of zeros instead of the data it expected this suggests that a required block didnt get copied to the backup codejava testincrbackup s seed wtcursorinsert wtblockreadoff read checksum error for block at offset block header checksum of doesnt match expected checksum of wtcursorinsert wtbmcorruptdump chunk of code the error occurs while replaying the log when opening the backup codejavagdb bt giraise at in giabort at in wtabort sessionsessionentry at in wtpanic sessionsessionentry at in wtblockreadoff sessionsessionentry blockblockentry bufbufentry at in wtbmread bm session buf addr addrsize at in wtbtread sessionsessionentry bufbufentry addr at in pageread sessionsessionentry refrefentry at in wtpageinfunc sessionsessionentry refrefentry funcfuncentry wtrowsearch at in wtpageswapfunc func func want held session at wtrowsearch cbtcbtentry srchkeysrchkeyentry insertinsertentrytrue leafleafentry leafsafeleafsafeentryfalse leaffoundpleaffoundpentry at in cursorrowsearch leaffoundp leaf inserttrue cbt at wtbtcurinsert cbtcbtentry at in curfileinsert cursor at in txnopapply rrentry lsnplsnpentry ppppentry end endentry a at in txncommitapply end pp lsnp r at txnlogrecover sessionsessionentry logrec lsnplsnpentry nextlsnpnextlsnpentry cookiecookieentry at in wtlogscan session lsnplsnpentry funcfuncentry cookiecookieentry at in wttxnrecover session sessionentry at in wtconnectionworkers sessionsessionentry cfgcfgentry at in wiredtigeropen home homeentry wttestincrbackupcheck eventhandlereventhandlerentry configconfigentry connectionpconnectionpentry at in checkbackup backuphomebackuphomeentry wttestincrbackupbackup backupcheckbackupcheckentry wttestincrbackupcheck tinfo tinfo at in main argc argv at code this isnt too hard to repeat i ran testincrbackup times and hit three failures the other problem seeds were codejava testincrbackup s testincrbackup s code this seems to have been introduced in the backports of and at the previous commit the backport of and friends i dont see the problem i also dont see this problem in runs in develop or the branch so i expect that something went wrong with the backport to | 1 |
see pinning oldest timestamp in the design doc test that d can advance to pinning and r can advance to pinned states create opobservers on d and r that pin the wt oldest timestamp test that its actually pinned | 0 |
currently when a readwrite error occurs we request an immediate heartbeat this is incorrect rather we should be clearing the connection pool | 1 |
currently the interactions between event subscription and client instantiation and duplication result in some interesting behavior clientwith may reuse or duplicate the cluster in the returned client if the cluster is reused so are event subscriptions and attaching events to the new client also adds them to the old client if a new cluster is created it does not inherit any of the event subscribers on the original client and instead gets the default subscriber set if an application explicitly removed those subscribers from the original client they would reappear in the new client global event subscribers are copied into perclient event subscribers upon client instantiation therefore adding global event subscribers when clients are already instantiated is a silent noop these subscribers will not be receiving any events for existing clients sdamproc is currently only applied to the client on which it is specified not to derived clients via dupwith to be consistent with point above we should think about how to handle the above cases in a sensible and consistent manner | 0 |
possibly related to but without crashon a system i start mongo on existing data which was created by mongodb which never caused problemsdatalabsbinmongod dbpath datalabsdb nohttpinterface journal port i do end up in an exception even in db shell dbstats assertion createprivatemap failed look in log for error assertioncode errmsg db assertion failure ok show dbstue may uncaught exception listdatabases failed errmsg exception createprivatemap failed look in log for error code ok exception taken from command line istue may mongodb starting dbpathdatalabsdb may db version pdfile version may git version may build sys info linux smp fri nov est may journal dirdatalabsdbjournaltue may recover no journal files present no recovery neededtue may waiting for connections on port may connection accepted from may error mmap private failed with out of memory bit buildtue may assertion failed look in log for error datalabsbinmongod datalabsbinmongodthreadproxy more informationfree ltm total used free shared buffers cachedmem bufferscache acore file size blocks c seg size kbytes d unlimitedscheduling priority e size blocks f unlimitedpending signals i locked memory kbytes l memory size kbytes m unlimitedopen files n size bytes p message queues bytes q priority r size kbytes s time seconds t unlimitedmax user processes u memory kbytes v unlimitedfile locks x unlimited | 1 |
in we implemented the downgrade process of the configcachecollection entries to since this version is not longer supported we can get rid of that code | 0 |
following the comment from with durable history a btree with reconciliation splitting the pages at performs terribly as compared to before durable history this ticket will investigate the reasons behind the performance variations and this might help gain a lot of performance back for durable history the sampled wtperf workload is updateonlybtree | 0 |
this code now errors out on the mongoid version i ran this code on where it works but it errors out on and coderubyclass testmodel include mongoiddocument embedsmany othermodels classname othermodel storeas x end class othermodel include mongoiddocument embeddedin testmodel end m testmodelcreate mothermodelscreate mdup code it errors with nomethoderror undefined method eachwithindex for nilnilclass from block in processlocalizedattributes because the code at errors because it does not use the storeas value to find the attribute | 1 |
function works fine in compass but not in node created example pipeline with function inside compass → works fine exporting this pipeline to node throws error pipeline in attachment | 1 |
theres no way to serialize to an existing builder without adding an extra container addressrestrictions serializers could stand to be improved as well | 0 |
the error happens when viewing these pages | 0 |
looks like checkcloseddurationmsec isnt used anywhere is this a deliberate change in | 0 |
as part of supporting point in time restores for deployments on cloud backup there is an intermediary step to restart a node as a single node replica set on an ephemeral port one that is not included in the replica set config document the reason why the node is started on an ephemeral port is to avoid the noop that occurs on stepup during the election this ticket was created to ensure that a server change does not break this behavior as well as to potentially discuss alternative solutions text below taken from comment thread with siyuan some other ideas update its hostport so that it cannot talk to itself the node will be in removed state after recovery update its config to include another nonexistent node so itll run for election infinitely but never succeeds make its priority not sure if this works but this is the most elegant way without significant change set a very high minvalid it cannot reach in recovery so that this node will stay in recovering state add a new command to truncate oplog in standalone mode but backward compatibility will be an issue | 0 |
i click on current and get an options pane which allows me to select or for the version whichever version i select i still get the docs | 1 |
similar to passing invalid options to mongos doesnt throw an error but instead ignores the invalid option for example this command should throw an error but does not codejava dbcollcountrandom truecode | 0 |
also may occur during update havent dug inwhen a deadlock exception is thrown in the deletestage the stage is not restored before reusing via deletestagerestorestage this can allow a primary stepdown to go undetected or undetected cursor invalidation and is generally undefined behavior also the deletestage does not store the working set member wsm when the problem occurs so on resume the child stage will return the next member | 1 |
building an application with the shared client leaves all the symbols from oidcpp unresolved not sure where mongojstime is defined but that is also unresolved for my build i am using the git tag for but the same also happens for the legacy branchwhen linking with the static library i do not see this behaviour | 1 |
as a result of work on weve run the evergreen test suites with the race detector enabled which has detected a number of errors in the following packages agent plugin pluginbuiltingit | 0 |
the new tool mongoreplay offers a superset of the functionality offered by mongosniff starting in mongosniff will no longer be shipped and no longer supported | 1 |
they conflict with the upcoming support for sessions in mongodb applications that need to support multiple concurrent users can use multiple instances of mongoclient instead with the removal of these methods we can also remove a lot of slow code related to checking connections out the pool which should provide a nice performance boost for all applications regardless of the use of authentication | 0 |
its fairly straight forward to set up both participants of a handshake inside of a unit test these participants can then be executed against each other and improve coverage of our native scram implementation outside of the jstests | 0 |
run poplar upload as a system task so it does not cause a task to fail | 0 |
the mongod server had a segmentation fault while running overnight two nights in a row i dont know if this is significant but the virtual machine mongo was running on had very little free memory we doubled the memory on the virtual machine and the segmentation fault has not reappearedthe following was in varlogmongomongodlognoformattue apr remove mwsrmvirtualmachines query pluginid simulation apr invalid access at address tue apr got signal segmentation faulttue apr backtrace usrbinmongod logstreamget called in uninitialized statetue apr error clientclient context should be null but is not clientconnlogstreamget called in uninitialized statetue apr error clientshutdown not called connnoformat | 0 |
following configurations constantly fail on evergreen apiversionrequired standalone auth ssl apiversionrequired standalone noauth nossl this should be fixed before release | 1 |
need a nonracy openifdatabaseexists context this causes issues when we drop a database while migrations are still waiting for cleanup | 0 |
the following features are being removed in server version storage engine eval all tests that depend on any of these features should be skipped when run against server version | 0 |
when a notification setting is set to none for build break notifications the setting is not saved and the attached error is shown credit | 0 |
we want to restrict transactions to only run on primaries the way accomplished that was by restricting them to only work if the readpreference provided was primary drivers however may send primarypreferred when connected to a primary via a direct connection we should undo the server changes from and instead change the checkcanservereadsfor method in replicationcoordinator to return false if the node is not currently primary and were in a transaction | 0 |
on start up with the argument setparameter authenticationmechansmsgssapi i receive the errorsevere failed global initialization badvalue no mechanism available couldnt find mech gssapithen mongod shuts down | 1 |
the old ninja module had separate messaging verbs for compiling linking and installing the new ninja module only seems to have generating building install can we make install be installing to be parallel with the other messages also can we bring back the differentiation between running a compile task and running a link task | 0 |
support minimum cache sizes in format format runs tests with tiny caches which is a good thing but theres a problem were not going to fix in the current release where a tree is entirely filled with internal pages that cant be flushed add a minimum cache size to formats configuration so we can set a bottom limit on the stress test cache size to avoid the problem and unblock testing | 0 |
rectxnread code order cleanup we clear the chosen update based on birthmarks at a strange point in the code literally inbetween another test and the exercise of the test result and its confusing | 0 |
what problem are you facing callback is called twice when an error occurs when using insertone i suspect itll affect all collection apis what driver and relevant dependency versions are you using mongodb driver database version steps to reproduce call insertone after connection has been closed to force an error notice that the callback is executed twice i believe the error is here driversnodemongodbnativecollectionjs line codejava catch error collection operation may throw because of max bson size catch it here see if typeof callback function callbackerror this should return else thisconnemitoperationend id opid modelname thismodelname collectionname thisname method i error error if typeof lastarg function lastargerror this calls the cb again else throw error code higher up the file line you can see that the callback already calls lastarg codejava callback function collectionoperationcallbackerr res if err null thisconnemitoperationend id opid modelname thismodelname collectionname thisname method i error err else thisconnemitoperationend id opid modelname thismodelname collectionname thisname method i result res return lastargapplythis arguments already calls lastarg code | 1 |
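The row above describes a callback being invoked twice: the operation wrapper calls the user callback, and a surrounding `catch` (meant for BSON-size errors) calls it again when the callback itself throws. A stripped-down reproduction of that pattern and the usual call-once guard, with hypothetical names rather than the driver's actual code:

```javascript
// Buggy shape: if the callback throws, the catch around the operation
// invokes the same callback a second time.
function buggyOp(callback) {
  try {
    callback(new Error('insert failed')); // first invocation (reports error)
  } catch (err) {
    callback(err);                        // second invocation: the bug
  }
}

let calls = 0;
buggyOp((err) => {
  calls += 1;
  if (calls === 1) throw err; // user callback rethrows, triggering the catch
});

// Fixed shape: guard the callback so it can only ever fire once.
function fixedOp(callback) {
  let called = false;
  const once = (err) => {
    if (called) return; // already delivered; swallow the duplicate
    called = true;
    callback(err);
  };
  try {
    once(new Error('insert failed'));
  } catch (err) {
    once(err); // no-op: the callback already ran
  }
}

let fixedCalls = 0;
fixedOp(() => {
  fixedCalls += 1;
  throw new Error('callback threw');
});
```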
im in the process of upgrading from mongo gem to in my setup i had this code that was working fine code works fine no matter how many times you call it collupdate id p ne pricevat d ne dt set p pricevat d dt push h d dt p pricevat upsert true code in the process of upgrading i transformed this query to what i think that would be the equivalent compatible with code collfindid p ne pricevat d ne dt updateoneset p pricevat d dt push h d dt p pricevat upsert true code this code runs the time creates the document and the subsequent times throws the following error code mongo duplicate key error index pricehistoryproductsid dup key code it seems that upsert is not taken into account and the operation tries to create the document instead of updating it | 1 |
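The duplicate-key error in the row above follows from upsert mechanics: when the filter contains `$ne` conditions that the existing document fails (because the values already match), the filter matches nothing, so the upsert attempts an insert with the `_id` from the filter and collides with the existing document. A toy in-memory model of that mechanism — hypothetical names, a `Map` in place of a collection, and only `$eq`/`$ne` filter support:

```javascript
// Sketch of upsert-by-filter: no match + upsert => insert, which can hit a
// duplicate _id when a document excluded by $ne conditions already exists.
function updateOne(store, filter, update, { upsert = false } = {}) {
  const doc = store.get(filter._id);
  const matches = Boolean(doc) && Object.entries(filter).every(([key, cond]) => {
    if (key === '_id') return true;
    if (cond && typeof cond === 'object' && '$ne' in cond) return doc[key] !== cond.$ne;
    return doc[key] === cond;
  });
  if (matches) {
    Object.assign(doc, update.$set);
    return { modifiedCount: 1 };
  }
  if (!upsert) return { modifiedCount: 0 };
  if (store.has(filter._id)) throw new Error('E11000 duplicate key error');
  store.set(filter._id, { _id: filter._id, ...update.$set });
  return { upsertedCount: 1 };
}

const store = new Map();
// First call: no document, filter matches nothing, upsert inserts.
const first = updateOne(store, { _id: 1, p: { $ne: 10 } }, { $set: { p: 10 } }, { upsert: true });
// Second identical call: doc exists but p === 10, so the $ne filter fails,
// the upsert tries to insert, and the _id collides.
let err = null;
try {
  updateOne(store, { _id: 1, p: { $ne: 10 } }, { $set: { p: 10 } }, { upsert: true });
} catch (e) {
  err = e;
}
```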
this is from the test is failing because a transactional read was blocked by tenant migration blocker and when if was unblocked the oldest timestamp was already moved forward by the wired tiger thus read had to fail after migration was aborted so this really happens by design here the read fail errmsg read timestamp is older than the oldest available timestamp here is the evidence that wt moved the oldest timestamp recovery shutdown timestampsattrstable data my opinion we should simply fix the tenantmigrationconcurrentreadsjs test here to allow the transienttransactionerror with oldest timestamp message after all we will fail the read anyway if the tenant migration will succeed common case im leaving this as a new bug for someone to agree or disagree with my conclusion if we keep it as a normal condition the fix is lines in the test to some extend my question is about migration design does it make sense to block all reads for undetermined time if they will likely fail anyway i understand that the user wont have much option than to retry anyway but the retry may complete faster because if the new read will have newer timestamp it will not block | 0 |
problem we from switzerland have signs like ö ü ö and those chars are also stored in our fields in our mongodbin our application all works fine with that special chars while reading from db and saving everything works finebut not in csv exportsim exporting my docs successfull to csv over cli withmongoexport host xymongohqcom port username xy password xy db appxy collection moment csv out usersmillienprojectsamagexportcontactscsv fields nameemailiptextcreatedatisvisiblein the exported csv every special sign öäü and others are replaced with not readable chars likeü fìnfseems to be an encoding problemcan you help us is there a way to set encoding of exports i didnt find somethingthanks and best regards from switzerland | 0 |
what problem are you facing i have multiple queries running parallel without using await let dbcollectioninvoicesaggregatetoarray returns promise let dbcollectioninvoicesaggregatetoarray returns promise then i use await on the result when all promises have finished let result await promiseallsettled prior to version this worked perfectly now i have tested with all versions after and the output of nettotal is zero or sometimes a smaller amount of expected value driver and relevant dependency versions are you usingcolor is workingcolor steps to reproduce invoices let dbcollectioninvoicesaggregatetoarray returns promise let dbcollectioninvoicesaggregatetoarray returns promise then i use await on the result when all promises have finished let result await promiseallsettled result of is result of is | 1 |
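The row above fires several driver queries without awaiting them and then gathers the results with `Promise.allSettled`. A self-contained sketch of that fan-out/gather pattern, with resolved/rejected promises standing in for `aggregate().toArray()` calls (the data and names are made up):

```javascript
// Each "query" runs independently; Promise.allSettled waits for every
// outcome and never short-circuits on the first rejection, so partial
// failures are visible per-query rather than lost.
async function runParallelQueries() {
  const invoicesA = Promise.resolve([{ netTotal: 100 }]); // stand-in for a driver call
  const invoicesB = Promise.reject(new Error('query failed'));
  const settled = await Promise.allSettled([invoicesA, invoicesB]);
  return settled.map((r) => r.status);
}
```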
our application opens a long running transaction that pulls records out of mongogenerating the report out of the application can take many minutes depending on the size of the reportsif a user just exits the grails application without waiting for completion the collection being queried gets stuck and mongodb experiences significant slowdownthere is no indication in the mongo log that anything is awry | 1 |
change the format from last communication jul am to last communication a month ago | 0 |
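A minimal sketch of the "a month ago" style formatting the row requests. The bucket thresholds here are illustrative, not the product's actual rules:

```javascript
// Convert an absolute timestamp into a coarse relative phrase.
function relativeTime(thenMs, nowMs = Date.now()) {
  const s = Math.floor((nowMs - thenMs) / 1000);
  if (s < 60) return 'a few seconds ago';
  if (s < 3600) return `${Math.floor(s / 60)} minutes ago`;
  if (s < 86400) return `${Math.floor(s / 3600)} hours ago`;
  if (s < 30 * 86400) return `${Math.floor(s / 86400)} days ago`;
  if (s < 365 * 86400) {
    const months = Math.floor(s / (30 * 86400));
    return months === 1 ? 'a month ago' : `${months} months ago`;
  }
  return `${Math.floor(s / (365 * 86400))} years ago`;
}
```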
this is a problem that extends to all functions in the aggregation framework both those that expect positional arguments in the form of arrays and those that expect named arguments in the form of objects fixing this issue will be fairly labor intensive and it is not a critical improvement it just limits the amount of partial redundancy elimination we can do when optimizing pipelines code dbfooaggregateproject out let vars x in add x e query error command failed ok errmsg add only supports numeric or date types not array code codename aggregate failed code this occurs because the parse methods for the individual expressions check for arrays and objectsspecific object keys statically before the pipeline runs so addressing this will also lose us some of our static checking capabilities | 0 |
after running js to atomically update a number of documents document data was moved and json elements reordered i think this is expected when this happened the class element appeared before the id element this seemed to impacting the ability to update existing records updates were silently failing with no update to the documentheres a snippet of the jsoncode class comriskchecknowdatafactorymodelpersonimplpersonwatchimpl id active true createdat shardk value date updatedat codework around was to rename the collection read through the renamed collection writing everything back the original collectionis there a known bug that would prevent proper update behavior when id is not the first element in the document | 0 |
calling mongoccollectionfind with batchsize of and a limit of will set the wireprotocol numbertoreturn to setting a batchsize it also seems to be passed in as a negative number to numbertoreturn for unknown reasonthis does not work if the collection has plenty big collections mongodb will only send it data fitting into the initial reply and then ignore everything after thatthis is why all drivers set numbertoreturn to or positive number not a negative number by default | 1 |
im trying to restore a database with collections using mongorestore which was taken as a backup from mongo i used the following command mongorestore u p d im using batchsize to try to make it work im getting an insertion error after collections it almost runs for almost an hour and half perfectly when i get this error failed restore error error restoring from insertion error eof | 1 |
the test jstestsmultiversionminorversiondowngradereplsetjs fails when running the replica set with wiredtiger noformat i assertion type for field ns i control begin backtrace backtraceprocessinfo mongodbversion gitversion compiledmodules uname sysname darwin release version darwin kernel version tue apr pdt machine somap end backtrace noformat | 0 |
at the moment it is possible for queries on the metadata table to block waiting for space in cache that can lead to hangs since some paths that query the metadata hold locks in the system ive seen a case of this with lsm trees where a thread is doing a checkpoint and has the checkpoint lock but is waiting for the table lock while another session is opening a table for the first time which holds the table lock and then is waiting on space in cache space never becomes available and the system hangs call stack of the cursor open operation code thread thread lwp in from in wtcondwaitsignal in wtcacheevictionworker in in wtbtcursearch in curfilesearch in wtschemaopentable in wtschemagettable in wtcurtableopen in sessionopencursorint in sessionopencursor in mongogetcursorstdbasicstring stdallocator const unsigned long bool in mongowiredtigercursorstdbasicstring stdallocator const unsigned long bool mongooperationcontext in mongofindrecordmongooperationcontext mongorecordid const mongorecorddata const in mongofindentrymongooperationcontext mongostringdata mongorecordid const in mongogetmetadatamongooperationcontext mongostringdata in mongogetmetadatamongooperationcontext const in mongogetallindexesmongooperationcontext stdvector stdallocator stdallocator stdallocator const in mongogetttlindexesfordbmongooperationcontext stdbasicstring stdallocator const stdvector in mongodottlpass in mongorun in mongojobbody in executenativethreadroutine in startthread from in clone from code call stack of the checkpoint code thread thread lwp in llllockwait from in from in pthreadmutexlock from in wtspinlocktrack in txncheckpoint in wttxncheckpoint in sessioncheckpoint in mongowaituntildurablebool in mongoflushallfilesbool in mongodorealwork in mongorun in mongojobbody in executenativethreadroutine in startthread from in clone from code i think we should either add a check in wtcacheevictioncheck for the btree being the metadata table or always set the wtsessionnoeviction flag on sessions when they are using the metadata the latter is likely to require more invasive code changes | 0 |
the changes from had reintroduced the testdataforcevalidationwithfeaturecompatibilityversion option so that the jstestfuzz mutational fuzzer could upgrade the feature compatibility version before running the validate command this is useful because certain versions of mongodb have the validate command intentionally fail in lastlts fcv to allow users to be completely confident about downgrading it appears we forgot to change in the jstestfuzzyml test suites during the release cycle observe that both the and branches are currently forcing the fcv to instead the branch should be forcing the feature compatibility version to this hasnt been an issue because there isnt a difference in validate behavior on and thatll likely be why the skipfcv concept from as part of is safe on the branch however this in turn led us to change in as part of when it should really be on the master branch now | 0 |
running a replica set on amazon boxes an arbiter all in amazons datacenter this is in production doing insertssec on the primary im seeing this over and over in the secondarys log and it wont catch up to the primarynoformattue sep warning too much data written uncommitted ebsmongolatestbinmongod tue sep warning too much data written uncommitted ebsmongolatestbinmongod tue sep warning too much data written uncommitted … sep flushing mmaps took for filestue sep assertion not master or secondary cannot currently read from this replset member nsgerawsystemindexes query expireafterseconds exists true tue sep problem detected during query over gerawsystemindexes err not master or secondary cannot currently read from this replset member code tue sep error error processing ttl for db geraw invalid parameter expected an object tue sep warning assertion failure a srcmongoutilalignedbuildercpp ebsmongolatestbinmongod ebsmongolatestbinmongod noformat | 1 |
while querying using mongocursor the date field is returned as an integer but when i use dbobject it seems to be returned correctly see attached screenshot when i query using the mongocursor codejava date date code when i query using the dbcursor codejava date date code the method getdbstring from the type mongo is deprecated | 1
in particular totalindexsize which works by getting datasize for each index and adding them to seems like this gets implicitly converted to a string append w the new numberlong type see this exampleinstead of ending up with we get | 0 |
on change it updates the url using a debounce | 0 |
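the row above mentions updating the url on change via a debounce. a minimal sketch of the debounce pattern it refers to, written here in python with an injectable clock so the behavior is deterministic; the `Debouncer` name and api are illustrative, not from the original ticket:

```python
import time

class Debouncer:
    """Call `fn` only after `wait` seconds have passed without a new call.

    `clock` is injectable so the quiet-period logic can be tested
    deterministically with a fake time source.
    """
    def __init__(self, fn, wait, clock=time.monotonic):
        self.fn = fn
        self.wait = wait
        self.clock = clock
        self.last_call = None
        self.pending_args = None

    def call(self, *args):
        # record the latest arguments; the timer restarts on every call
        self.last_call = self.clock()
        self.pending_args = args

    def flush(self):
        # invoke the wrapped function only if the quiet period has elapsed
        if self.pending_args is not None and self.clock() - self.last_call >= self.wait:
            args, self.pending_args = self.pending_args, None
            self.fn(*args)
```

in a ui, `flush` would be driven by a timer; only the most recent value is applied, which is the point of debouncing rapid url updates.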
so this is the predicament i know the embedsmany allows you to embed documents into a parent document this makes sorting by relations easier however i cannot have this because my objects can be isolated entities in other words a contact cannot embedmany leads because a lead can exist independently of a contact so the relationship i created is a hasone class contact include mongoiddocument hasone lead field referencenumber type string end class lead include mongoiddocument belongsto contact end it works fine until i need to sort leads by contact reference number then i do not know how this can be done in activerecord you can do this leadjoinscontactorderreferencenumber asc leadjoinscontactorderreferencenumber desc but there is no way to sort by relations in mongoid contactids contactorderreferencenumber ascpluck id leadincontactid contactids the above doesnt work any idea how to improve this problem | 0 |
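the workaround sketched in the row above — fetch the ordered contact ids first, then order the leads by them — can be written generically. a hypothetical in-memory python sketch of that two-step sort (field and key names are illustrative stand-ins for the ticket's `reference_number` / `contact_id`):

```python
def sort_by_relation(leads, contacts, key="reference_number", reverse=False):
    """Sort `leads` by a field on their related contact.

    A client-side stand-in for the missing cross-document sort: build a
    lookup from contact id to the sort key, then sort leads through it.
    """
    ref_by_id = {c["id"]: c[key] for c in contacts}
    return sorted(leads, key=lambda lead: ref_by_id[lead["contact_id"]],
                  reverse=reverse)
```

this trades a join for one extra lookup table in memory, which is usually acceptable for admin-style listings.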
create generatesslcert operation for rundsi that uses expansionsyml instead of catting to a config file this will benefit us to have more flexibility in the future if we ever want to change the shell script and move it to a new location by avoiding servers evergreenyml from having the full hardcoded path | 0 |
hello we are trying to deploy our app on aws we are running a replica set configuration with data nodes and arbiter same configuration works fine on local environment but throws the following exception on aws commongodbmongoexception cant find a master this is the config of the replica set primary rsconfig id sparkreplicaset version members id host id host id host arbiteronly true i have installed mongo shell on the app server tomcat host and i am able to connect to primary via the shell that rules out networking issues the code which gets the connection string is as follows list addrs getmongoaddress m new mongoaddrs morphia new morphiaaddrs list is created using what are we missing in all this same code works on node replica set on local machine here is config from my local setup primary rsconfig id sparkdb version members id host id host id host thanks sankate | 1
tutorial should cover configuring mongodb native ldap for authentication via active directory configuring mongodb for authorization via active directory | 0 |
this results in some odd behaviour hostnames rs pymongomongoreplicasetclienthostnames replicasetmjbtest print rsdatabasenames force it to block until primary is initialized it seems to have some problem connecting to the server not sure why but it seems to fall back to localhost for subsequent attempts if i strace it and grep connect it looks like this noformat safamilyaflocal sunpathvarrunnscdsocket enoent no such file or directory safamilyaflocal sunpathvarrunnscdsocket enoent no such file or directory safamilyafinet safamilyafinet einprogress operation now in progress safamilyafinet einprogress operation now in progress safamilyafinet einprogress operation now in progress safamilyafinet einprogress operation now in progress safamilyafinet einprogress operation now in progress safamilyafinet einprogress operation now in progress safamilyafinet einprogress operation now in progress safamilyafinet einprogress operation now in progress safamilyafinet einprogress operation now in progress safamilyafinet einprogress operation now in progress noformat where is the masked address of my aws instance why is it trying to connect to localhost on subsequent attempts thats definitely not what was requested noformat traceback most recent call last file pymongomonitorsingle line in print rsdatabasenames force it to block until primary is initialized file line in databasenames readpreferencereadpreferenceprimary file line in command with clientsocketforreadsreadpreference as sockinfo slaveok file line in enter return selfgennext file line in socketforreads with selfgetsocketreadpreference as sockinfo file line in enter return selfgennext file line in getsocket server selfgettopologyselectserverselector file line in selectserver address file line in selectservers selferrormessageselector pymongoerrorsserverselectiontimeouterror could not reach any servers in replica set is configured with internal hostnames or ips noformat | 0 |
mongorestore crashes importing a database dump with a large number of collections failed restore error databasecollection error creating indexes for databasecollection createindex error exception too many open files as a consequence the database is not imported completely and the server daemon crashes elevating the number of open files possible for ubuntu to the maximum accepted value it still crashes just at a later stage than before i think it is definitely a bug not a feature if the program runs out of filehandles when already set to a maximum of it should really free some file resources on the way also since restoring a database is not that much of a timecritical application in most cases i could not think of any justification for such an excessive use of resources that makes it unusable for large databases with the current version i could not restore my database which is rather bad best maik | 1 |
this is especially helpful for sharding since were dealing with many coordinating processes and if the logs arent ordered by process timestamp the burden is on the reader to impose an ordering on what essentially becomes a jumble of lines i actually already have a python script that does this just have not found time to integrate it into lobster myself keywords for search sort order reorder | 0 |
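the row above asks for merging logs from many coordinating processes into one stream ordered by process timestamp. since each per-process log is already sorted by its own timestamps, a k-way merge does this in a single pass; a sketch (assuming `(timestamp, line)` tuples — the actual script mentioned in the ticket is not shown here):

```python
import heapq

def merge_logs(*streams):
    """Merge several per-process log streams, each already sorted by its
    own timestamps, into one stream ordered by timestamp.

    Uses a heap-based k-way merge, so memory stays proportional to the
    number of streams rather than the total number of lines.
    """
    return list(heapq.merge(*streams, key=lambda entry: entry[0]))
```

for a sharded-cluster test log, each stream would be the lines of one mongod/mongos, and the merged output imposes the global ordering the reader otherwise has to reconstruct by hand.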
i am getting not master error while connecting to the replicaset with readpreference secondarypreferred the error is thrown while using the drop collection and dbcommand method | 0 |
im working on a dns management api and i ran into a strange issue with polymorphic embedding i have these models coderuby class dnszone include mongoiddocument embedsmany rrsets classname dnsrrset inverseof zone embedsone soa classname dnsrecord as container class dnsrrset include mongoiddocument embeddedin zone classname dnszone inverseof rrsets embedsmany records classname dnsrecord as container class dnsrecord include mongoiddocument embeddedin container polymorphic true code so i have one record embedded in the zone and many records embedded in rrset when assigning the soa record to the zone everything works fine but when trying to replace it or just delete it i get this code nomethoderror undefined method records for from removechild code this records name is provided by metadataname which returns records instead of soa now the strange part in its code metadataname just returns metadataname but code zsoametadataname records zsoametadataname soa code and when adding puts metadatainspect in metadataname i get this code zsoametadataname mongoidmetadata autobuild false classname dnsrecord cyclic nil countercachefalse dependent nil inverseof nil key records macro embedsmany name records order nil polymorphic true relation mongoidmany setter records records puts zsoametadatainspect mongoidmetadata autobuild false classname dnsrecord cyclic nil countercachefalse dependent nil inverseof container key soa macro embedsone name soa order nil polymorphic true relation mongoidone setter soa code metadataname leads to the wrong relation where can this wrong metadata object come from | 0
during a stepdown the server logs look like this noformati index index build received interrupt was interrupted i index index build ignoring interrupt and continuing in i index index build was interrupted noformat these log messages are contradictory because we report the index build as failing immediately after saying it will continue in the background | 0 |
i have a mongodb collection like the following mongodb json id class comacoll bcoldate bname xx bfirstname blist ccolllist cid cname abc clist s n qq status false cid cname abc clist s n qq status false cid cname abc clist s n qq status false java pojo documentcollectiontest class test implements serializable private static final long serialversionuid id indexed private string id fieldbbcoll private bcoll bbcoll fieldccolllist private list ccolllist class bcoll implements serializable private string bname private string bfirstname private list blist class ccolllist implements serializable private string cid private string cname private list blist class clist implements serializable private string s private string private string private string java code spring data mongodb query test col mongooperationfindonequery testclass query when i try to iterate the clist object value for the cid and the mongodb collections list values are null but in mongodb the values exist as per the json and the list object is not null why does this issue occur in spring data mongodb version please let me know if this issue was identified earlier i really appreciate your help thanks and regards pandiyan rengasamy | 1
we added a clean node to our replica set yesterday it took about hours to sync everything seemed fine except when we turned on reads we saw terrible perf in looking there are no indexes on this node just the defaults for what did we miss the docs i have read dont mention anything about it please help also if you follow the instructions here with mongodb you corrupt your configuration and hose your database you cannot add a replica set node while mongod is running on that node we learned that the hard way fyi | 1
buildinfo was originally made to be a test only command in mongocryptd as the driver fle spec has changed it needs to be exposed so drivers can treat mongocryptd as a normal mongod | 0 |
hello these fields are missing from this documentation electiontime hope it helps maga | 1
the pretask hook will be defined by evergreen administrators in a text box on the distro configuration page this hook will run using the shellexec command before each task that is not in a running task group it is expected that this shell script will invoke a script placed on the machine by buildhostconfiguration it is also the responsibility of the caller to noop some of the time since this hook will run before every task it may be the case that this hook is more natural as a posttask hook since runpostgroupcommands already runs only when the following task is not in the same task group we will also explore this option | 0 |
this page is a top result in the mongodb backup search can we please add a short summary paragraph to the top so that those that arrive on that page can get a brief overview and be directed to create an account if they dont have one here is some suggested copy and feel free to edit as needed welcome to the documentation for mongodb management service mms engineered by the team who develops mongodb mms provides cloudbased monitoring backup and recovery for mongodb mms users can visualize database performance and set custom alerts when particular metrics are out of normal range mms is also the only continuous backup solution for mongodb providing pointintime recovery for replica sets and clusterwide snapshots of sharded systems you can create a free mms account at mmsmongodbcom please link the first instance of mongodb management service mms and mmsmongodbcom using the following link | 0
the getting full collection information section of the collection enumeration spec mentions that the output of the listcollections helper method must remain the same regardless of whether it is performed via a listcollections command or via a query over the legacy systemnamespaces collection in pre servers however since the release new fields were added to the listcollections command response eg info that would be omitted according to a strict reading of the collection enumeration spec this spec should be updated to relax this requirement and allow for future additive changes to the listcollections command response the relevant quote quote the returned result for each variant must be equivalent and each collection that is returned must use the field names name and options quote on a more general note the collection enumeration spec is currently listed as a draft despite having been written in we should update this spec and put it through the formal approval process | 0 |
when attempting to use clientside field level encryption by means of an aws kms i run into the error tls handshake failed routinescertificate verify failed the enterprise mongodb server i am connecting to is version and does not require an ssl configuration in the connection i have tracked my error down to an inability to set the ca file for the ssl connection to the aws kms in the file there is a getstream function whose variable sslopts of the type mongocssloptt is filled in with null values through the function mongocssloptgetdefault i was able to resolve my issue and load and unload encrypted fields successfully by compiling a version of the c driver in which i used mongocgetenv to pass in a string that i assigned to the cafile value of sslopts if there is a manner of configuring this ssl connection i have not found the documentation for it nor a code path that assigns values given by the user | 1 |
when you click on the modify a cluster tab on the left of the mongodb atlas documentation there is no link behind it previously there was an option to scale a cluster which had the following link it appears that the scale a cluster has been replaced with modify a cluster but without a corresponding link | 1
i need to match part of an id that was automatically generated on the server this is for an administration tool letting the administrator specify part of an id to make things more usable for them i dont care what the actual solution is but there is currently no way to do this without visiting every row from a client one example way would be using regex findid regexes do not work against object id | 0
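the row above notes that regexes cannot match against an objectid directly, so one client-side workaround is to compare fragments against the 24-character hex string form of each id. a pure-python sketch of that fallback (no driver involved; it assumes the ids have already been rendered as hex strings):

```python
def match_partial_id(docs, fragment):
    """Return docs whose hex id contains `fragment` (case-insensitive).

    Mirrors the admin-tool use case in the row: the operator types part
    of an id and the tool filters client-side, assuming each `_id` is a
    24-char hex string.
    """
    fragment = fragment.lower()
    return [d for d in docs if fragment in d["_id"].lower()]
```

this visits every row, which is exactly the cost the ticket complains about; newer servers offer aggregation-side string conversion of ids, but the sketch stays client-side to match the era of the ticket.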
a new defect has been detected and assigned to renctan in coverity connect the defect was flagged by checker pwuselesstypequalifieronreturntype in file srcmongostypechunkh function none and this ticket was created by | 0 |
when i try to export a collection to json or csv the export fails can reproduce it of the times with testusers in compassdatasets console error code mongodbcompassimportexportexport error running export pipeline mongoerror cursor session id is not the same as the operation contexts session id none at messagestreammessagehandler applicationsmongodb compass at messagestreamemit at processincomingdata applicationsmongodb compass at messagestreamwrite applicationsmongodb compass at dowrite at writeorbuffer at messagestreamwritablewrite at tlssocketondata at tlssocketemit at addchunk code a user also reported this other error when exporting to csv but i could not reproduce it code cannot run getmore on cursor which was created in session without an lsid code i imported the collection he was trying to export as teststocks in compassdatasets | 1 |
given i have the following code coderuby class user include mongoiddocument scope author whereauthor true end class article include mongoiddocument scope public wherepublic true scope authored authorids userauthorpluckid whereauthoridin authorids end code articleauthoredtoa is working as expected coderuby findusers filterauthortrue findarticles filterauthoridin code but articlepublicauthoredtoa is not working properly coderuby findarticles filterpublictrue authortrue findarticles filterpublictrue authoridin code it turns out that coderuby scope author whereauthor true code is called on article instead of user coderuby findarticles filterpublictrue authortrue code | 1 |
motor leaks one socket in each call to copydatabase the socket is closed by garbage collection but it continues to count toward maxpoolsize if maxpoolsize is finite the default is then the client hangs after maxpoolsize calls to copydatabase applies to motorclient and motorreplicasetclient motor users apparently dont call copydatabase much or this would have been reported earlier | 1
should mention that mongoid will use the drivers query cache if available driver version is or higher should also specify that x mongoid will use the drivers query cache if available x mongoid retains the interface for controlling the query cache onoff x mongoids turning the query cache onoff will affect the driver x mongoids own query cache implementation is retained for use with older driver versions but is deprecated | 0 |
several comments in the server codebase incorrectly had the word string changed to stdstring as part of see and look for changes in comments in many cases the edited comment is no longer correct because the actual argument or return value is a c string not a c stdstring or because the reference to string in the comment was used in its generic sense eg connection string | 0
we run the ssl tests on the asan builder but the underlying openssl build isnt instrumented so any leaks of openssl objects result in a very unhelpful stacktrace see itd be helpful to run those tests against a recent instrumented debug build of openssl on the asan builders so we can get clean stacktraces without having to build local copies of openssl | 0 |
when the replication lag is non zero was several hours in that specific occurrence we see what looks like gridfs corruption when reading data from secondary nodes noformat mongocxx expected file to have chunks but query to chunks collection only returned chunks a gridfs file being operated on was discovered to be corrupted noformat we dont update documents in gridfs we only create or delete them let me know if you need any other information | 0 |
this should be implemented under fromrawbuf method input parsing should divide this buffer into sign exponent significand | 0 |
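the row above describes a fromrawbuf method whose input parsing divides a buffer into sign, exponent, and significand. a sketch of that slicing for a 16-byte little-endian decimal128 value, handling only the simple (small-coefficient) case of the layout — the full combination-field rules are more involved, and the function name here is illustrative:

```python
def split_decimal128(buf):
    """Split a 16-byte little-endian decimal128 buffer into
    (sign, biased_exponent, significand).

    Sketch of the simple case only: when the two bits after the sign are
    not `11`, the exponent is the next 14 bits and the significand the
    remaining 113 bits.
    """
    assert len(buf) == 16
    bits = int.from_bytes(buf, "little")
    sign = bits >> 127                         # top bit is the sign
    assert (bits >> 125) & 0b11 != 0b11, "large-coefficient form not handled"
    biased_exponent = (bits >> 113) & 0x3FFF   # 14-bit exponent field
    significand = bits & ((1 << 113) - 1)      # low 113 bits
    return sign, biased_exponent, significand
```

the exponent comes out biased; a real implementation would subtract the format's bias and validate the combination field before trusting these slices.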
after we added the bug submission tab i cant close an issue because it prompts me for the mandatory driver version field i dont think the field requirements should apply to us internally only to external submitters if that is possible somehow | 1
hi we use mongo replicaset version on our production environments the replicaset works with nodes one primary and secondaries a few weeks ago we started to get alerts on high cpu consumption on the primary node at the beginning we thought that we should separate the load between the members so we configured our service to make reads from secondaries and write to the primary unfortunately we are still on the same point the cpu of the primary node is between even on idle times while the secondaries consume a maximum of it is important to mention that we create a db for each one of our customers with an average of indexes each we have attached a log of dbcurrentop mongostat and diagnostic metrics let me know if more information is required thanks roie | 0
specifically at quote no mms does not support see activate mms backup for a replica set and activate mms backup for a sharded cluster for more information on configuring backup snapshot schedules quote missing period at end of first sentence and activate mms backup for a sharded cluster should be a link but is currently just bolded text | 1
a number of build failures on windows systems seem to stem from file locking issues working theory our test harness assumes that files owned by a process usually mongod will be unlocked as soon as the process exits windows guarantees that the os will unlock files after process termination but makes no guarantees about when they will be unlocked on a loaded system it may not be immediate test failures we believe are due to our test framework assuming that the files are immediately unlocked on exit buildbot windows js suite failing between tests build build build build build build include code windowserror access is denied the process cannot access the file because it is being used by another process windows build access is denied one is different not a test framework error but rather a failure within a test buildbot windows unclean shutdown detected when mongod restarted test didnt wait until previous mongod was fully shut down | 0
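the fix the row above implies — wait for the os to release the files instead of assuming they unlock the instant the process exits — amounts to polling with a deadline. a generic sketch of that wait loop (names and defaults are illustrative, not from the actual test framework):

```python
import time

def retry_until(predicate, timeout=5.0, interval=0.1,
                clock=time.monotonic, sleep=time.sleep):
    """Poll `predicate` until it returns truthy or `timeout` elapses.

    The kind of wait a harness needs after process exit on windows:
    e.g. predicate = "can I delete/open the data files yet?".
    Clock and sleep are injectable for deterministic testing.
    """
    deadline = clock() + timeout
    while True:
        if predicate():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```

a harness would call this with a predicate that tries the locked operation (open for exclusive access, delete, rename) and treats the access-denied error as "not yet".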
steps to recreate codejava const bulk cdbtestcollectioncollinitializeunorderedbulkop for let i i i bulkinsertx const result await bulkexecute resultinsertedids returns undefined undefined code resultresultinsertedids has ids that are not undefined | 1 |
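one way to sidestep a driver result object that fails to report inserted ids, as in the row above, is to assign ids client-side before the bulk operation, so the ids are known regardless of what the result contains. a language-neutral python sketch of that pattern (uuid stands in for objectid; the helper name is illustrative):

```python
import uuid

def build_inserts(docs):
    """Assign an id to each document before inserting and return the ids.

    Documents that already carry an `_id` keep it; the caller then knows
    every inserted id without relying on the bulk result object.
    """
    for doc in docs:
        doc.setdefault("_id", uuid.uuid4().hex)
    return [doc["_id"] for doc in docs]
```

the driver-side bug still deserves fixing, but pre-assigned ids make the application robust either way, which is why many object mappers generate ids on the client.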
it appears that only the first entry in a seedlist is ever used for replica set topology discovery if the first entry is invalid the topology discover fails after seconds with timed out trying to select a server but if the first entry is valid and all subsequent entries are not everything works just fine except no attempts are ever made to connect to them codec include include include include static void runcommand void mongocclientt client mongoccollectiont collection bsonerrort error bsont command bsont reply char str client mongocclientnew collection mongocclientgetcollection client test test command bconnew ping if mongoccollectioncommandsimple collection command null reply error str bsonasjson reply null printf sn str bsonfree str else fprintf stderr failed to run command sn errormessage bsondestroy command bsondestroy reply mongoccollectiondestroy collection mongocclientdestroy client int main int argc char argv mongocinit runcommand mongoccleanup return code note the port of the first node in the seedlist is not a mongod noformat replicasetsecondary rsstatus set replicaset date mystate syncingto members id name health state statestr primary uptime optime optimedate lastheartbeat lastheartbeatrecv pingms electiontime electiondate configversion id name health state statestr secondary uptime optime optimedate syncingto configversion self true id name health state statestr arbiter uptime lastheartbeat lastheartbeatrecv pingms configversion ok noformat noformat replicasetsecondary rsconfig id replicaset version members id host arbiteronly false buildindexes true hidden false priority tags ordinal one dc pa slavedelay votes id host arbiteronly false buildindexes true hidden false priority tags ordinal two dc nyc slavedelay votes id host arbiteronly true buildindexes true hidden false priority tags slavedelay votes settings chainingallowed true heartbeattimeoutsecs getlasterrormodes getlasterrordefaults w wtimeout noformat noformat gcc o sdam sdamc pkgconfig cflags 
libs sdam trace mongoc entry trace mongoc exit trace cluster entry trace topologyscanner entry trace socket entry trace stream writev trace stream a d m i n c m d i s m a s t e r several hundred further lines of repeated ismaster handshake trace output elided
stream exit trace stream exit trace stream entry trace stream entry trace socket entry trace socket exit trace stream exit trace stream exit trace cluster entry trace cluster exit noformat | 1 |
hey as discussed at the meeting there was an inconsistency issue between hippo and phongo regarding what type stdclass or array should be the top level type we decided to make it consistent with what we already did for embedded documents and make every document come back as a stdclass object the type map should be changed to return toplevel documents as stdclass | 1
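The entry above is about a "type map" decision: top-level decoded documents should come back as objects (PHP stdClass), matching what embedded documents already do. As a hedged, language-neutral analogy (not the actual PHP driver API), Python's `json` module exposes the same idea through `object_hook`, which controls what type every decoded document becomes — applied uniformly to top-level and embedded documents alike:

```python
import json
from types import SimpleNamespace

raw = '{"name": "hippo", "nested": {"ok": true}}'

# default: every document decodes to a dict (array-like mapping)
as_dict = json.loads(raw)

# with a hook: every document, top-level and embedded, decodes to an object
as_obj = json.loads(raw, object_hook=lambda d: SimpleNamespace(**d))

assert as_dict["nested"]["ok"] is True
assert as_obj.nested.ok is True  # consistent object access at both levels
```

The consistency argument in the ticket mirrors this: mixing a dict at the top level with objects for embedded documents (or vice versa) forces callers to remember which access style applies at which depth.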
this test failed due to a node falling behind and getting stale the test should ensure this cant happen test failure failures noformat i repl syncing from i network connection accepted from connections now open i network end connection connections now open w repl we are too stale to use as a sync source i repl syncing from noformat | 0 |
in the update operators section i believe you have the descriptions of the minmax operators reversed the min operator should check if a value is greater than specified and the max operator should check if a value is less than specified | 1 |
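The entry above reports that the documentation descriptions of the min/max update operators were swapped. A minimal pure-Python sketch (a simulation, not the server implementation) of the correct semantics: min writes the specified value only when it is less than the current value, and max only when it is greater:

```python
def apply_update(doc, update):
    """Apply a minimal subset of MongoDB-style $min/$max update operators to a dict."""
    for field, value in update.get("$min", {}).items():
        # $min: set the field only if the specified value is LESS than the current one
        if field not in doc or value < doc[field]:
            doc[field] = value
    for field, value in update.get("$max", {}).items():
        # $max: set the field only if the specified value is GREATER than the current one
        if field not in doc or value > doc[field]:
            doc[field] = value
    return doc

doc = {"lowScore": 200, "highScore": 800}
apply_update(doc, {"$min": {"lowScore": 150}})   # 150 < 200, field updated
apply_update(doc, {"$max": {"highScore": 950}})  # 950 > 800, field updated
apply_update(doc, {"$min": {"lowScore": 999}})   # 999 > 150, no change
# doc is now {"lowScore": 150, "highScore": 950}
```

In other words, a correct description of min must compare against a *greater* current value, and max against a *lesser* one — reversing those comparisons is exactly the documentation bug the entry describes.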