text_clean: string, lengths 10 to 26.2k
label: int64, values 0 or 1
First occurred at ... Logs show error printing stack trace: at printStackTrace, at doassert, at Function.assert.eq, at testGridFS: ... are not equal: undefined. Failed to load /data/mci/git@github.com/mongodb/mongo.git/master/jstests/sharding/gridfs.js.
0
collection.find(query, fields).hint(...): in my logs I found that the hint was hint: geolocation, id, location.
1
Mon Mar ... replSet initial sync cloning db: admin. Mon Mar ... replSet initial sync query minValid. Mon Mar ... replSet initial oplog application from ..., starting at Mar ... to Mar ... Mar ... assertion failure ... database ... db/pdfile.h. Mar ... replSet initial sync failing: error applying oplog: assertion ... Mar ... replSet initial sync failed during applyOplog. Mon Mar ... replSet cleaning up ... Mon Mar ... replSet cleaning up ... Mon Mar ... replSet initial sync pending. Mon Mar ... replSet syncing to ... Mar ... replSet initial sync drop all databases. Mon Mar ... dropAllDatabasesExceptLocal.
1
Before durable history, we only evict truncated pages if the truncation is globally visible. With durable history, I think we can now evict non-globally-visible truncations, as we can reconcile the tombstones to the time window.
0
We have seen multiple MongoDB test failures with the following assert from __wt_rec_upd_select: {code:java}WT_ASSERT(session, vpack == NULL || vpack->type != WT_CELL_DEL || vpack->tw.prepare);{code} Our initial investigations revealed that the following scenario can reproduce this assert: commit an update and remove it in a single transaction; add a prepared update to the same key; evict the page which contains that particular key; rollback the prepared transaction; perform a checkpoint or evict the page. Debugging from gdb suggests that the function __wt_rec_upd_select is unable to properly restore the tombstone and the regular update from a single transaction from the history store.
1
The Ordering class in ordering.h is a good example of something that should be deprecated.
0
When getLastError is called with w, the number represents the total number of nodes that have the write. When getLastError returns a null err, however, the implementation returns just the replicatedTo members, omitting the primary, which also has the write. Aside from it being difficult to later track (if application logs have the full write result, there is no information about which node was primary and also has the write), it also feels unintuitive. I think it would be less confusing to the end user if it returned a structure with writtenTo that on success is greater than or equal to the w value, and on failure or timeout writtenTo.length will be shorter than w. Here's an example: {code:js}{ updatedExisting: true, n: ..., lastOp: { t: ..., i: ... }, connectionId: ..., wtimeout: true, waited: ..., replicatedTo: [...], err: "timeout", ok: ... }{code}
0
The mysqli extension generates a changelog for all classes in the extension using the following: {code:xml}changelog.listing.title changelog.listing.description{code} We should look into doing this.
0
mongoc_client_read_write_command_with_opts includes logic to inject the mongoc_client_t's read concern, read preference, and write concern if said options are not specified in the bson opts (not the same as the command document). Since PHPLIB currently passes in these options in the command document, a simple migration to mongoc_client_read_write_command_with_opts would see duplicate fields injected by libmongoc. Therefore, PHPC's executeCommand methods will need to extract these fields from command documents and supply them in the separate bson opts document. When combined with ..., this will ensure that only PHPC and libmongoc are responsible for injecting client-level options into commands. Lastly, it should be noted that mongoc_client_read_write_command_with_opts does not inspect a command document to determine whether it is a read or write command; the default behavior will see read concern and write concern always injected from the mongoc_client_t if not provided in the bson opts. Therefore, the PHP driver may need to implement some detection on its own to determine whether to call mongoc_client_read_command_with_opts or mongoc_client_write_command_with_opts on a per-command basis.
0
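The extraction step described above can be sketched generically. The following Python sketch is purely illustrative (it is not PHPC code, and the field list is an assumption based on the options the ticket names): it splits client-level fields out of a command document so they can be supplied as separate options rather than injected twice.

{code:python}
# Illustrative sketch only: split client-level fields out of a command
# document, mirroring the extraction the ticket asks PHPC to perform.
CLIENT_LEVEL_FIELDS = ("readConcern", "readPreference", "writeConcern")

def split_command(command: dict) -> tuple[dict, dict]:
    """Return (command minus client-level fields, extracted opts)."""
    opts = {f: command[f] for f in CLIENT_LEVEL_FIELDS if f in command}
    stripped = {k: v for k, v in command.items() if k not in opts}
    return stripped, opts

# e.g. split_command({"count": "c", "readConcern": {"level": "majority"}})
# -> ({"count": "c"}, {"readConcern": {"level": "majority"}})
{code}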
After ..., our failpoints documentation is out of date. On the internal wiki page, references to ... and MONGO_FAIL_POINT_BLOCK should be updated; those macros are gone now. A comment in curop_failpoint_helpers.h mentions MONGO_FAIL_POINT_PAUSE_WHILE_SET. Instructions for using fail points are spread across fail_point.h and fail_point_service.h.
0
I have a compound _id that has duplicated entries, as shown below: {code}{ _id: { from: ..., original: ... } }
{ _id: { from: ..., original: ... } }
{ _id: { from: ..., original: ... } }{code} The only difference is the order of the fields inside. Is this behaviour OK?
1
For immediate release: Automation Agent version ... released; support for rolling conversion to ... member auth; fixes for rolling index builds. Backup Agent version ... released; support for streaming initial syncs; support for MongoDB ... clusters with config server replica sets.
1
We have been unable to upgrade our Azure classic cloud services to Windows Server ... for a few months now. The matter is confusing because the same NuGet package works fine on a regular Windows Server VM. It turns out that the OS image of Windows Server ... used for Azure classic cloud services specifically turns off TLS ... and ..., and only has ... enabled. Once we turned on support for the previous versions, we were able to get Mongo connectivity back, and everything works. Now we have a startup script which enables the older TLS versions on Windows Server ... Please support TLS ... for customers so we can use it. This is .NET on C# on Azure, a classic ASP.NET Web API application.
1
What problem are you facing? I updated from v... to ..., and after deploying to an Azure Functions app, the functions crash on lib/core/index.js. Error trace: Result: Failure. Exception: Worker was unable to load function insertTimings: Error: Cannot find module 'require_optional'. Require stack: D:\home\site\wwwroot\node_modules\mongodb\lib\core\index.js; D:\home\site\wwwroot\node_modules\mongodb\index.js; D:\home\site\wwwroot\insertTimings\lib\controllers\timings.js; D:\home\site\wwwroot\insertTimings\index.js; D:\Program Files\...; D:\Program Files\... Stack: Error: Cannot find module 'require_optional'. Reverting to ... solved the issue. What driver and relevant dependency versions are you using? Node ... Steps to reproduce: it works locally on Windows, on Linux, and on AWS; the only way to reproduce is deploying an Azure Function app. My app is running Node ... on Windows; not sure if the same issue applies to Azure Linux apps.
1
Changes to metadata, such as setting validation and collMod operations in general, may leave the catalog cache in an inconsistent state when their transaction is aborted.
1
Or try, at least.
1
Hi, I just upgraded mongo-native-driver from ... to ... I need to batch insert small documents into MongoDB based on my project requirements. My code works fine with ..., but when I upgrade the driver to ..., I get this error: exceeded maximum write batch size of ... What should I do to make it work? Thanks.
0
The following line times out.
0
Comment on manual/reference/command/count.txt: I thought the count command supports the maxTimeMS option, but I don't see it documented.
0
It is no longer needed.
0
Hi, the Ops Manager installation guidelines in the online documentation for the Automation Agent tell you to use the latest version, which is the Cloud version. This is wrong, since you must use the binary versions provided with the Ops Manager package distribution. This means downloading those binaries from the Ops Manager server, as described in the Ops Manager web UI instructions. Automation Agents from Cloud Manager cannot be used with Ops Manager. It would also be nice to advertise this better, to avoid confusion with Cloud Manager agents, also for the Monitoring and Backup Agents. Thanks, Emilio Scalise.
1
Description: iterate over the results. Scope of changes: fix the header ref typo; check if we are referencing that header ref from other guides.
0
The mongod server does not daemonize correctly when using --fork: it only forks once and sets the session id. It really needs to fork an additional time and also close stdin/stdout/stderr. This is causing an issue for us when we do our backups of MongoDB. Our process: ssh root@hostname /etc/init.d/mongod ...; rsync the db; ssh root@hostname /etc/init.d/mongod start. This can also be replicated without using an init script by running a command like: ssh root@hostname /usr/bin/mongod --fork --logpath /var/log/mongo/mongod.log --dbpath /data/mongodb. This should return, but it does not: it hangs. I have verified that this can be fixed by having mongod close stdin/stdout/stderr if you use --fork. For reference: ...
0
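The fix the reporter describes (fork twice, call setsid, and close the standard streams) is the classic Unix daemonization recipe. Here is a minimal Python sketch of that recipe, for illustration only (it is not mongod's implementation):

{code:python}
import os
import sys

def daemonize():
    """Classic Unix double-fork daemonization, as described in the report."""
    if os.fork() > 0:      # first fork: the parent returns to the shell
        sys.exit(0)
    os.setsid()            # become session leader, detach from the tty
    if os.fork() > 0:      # second fork: the session leader exits, so the
        sys.exit(0)        # daemon can never reacquire a controlling tty
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):   # close stdin/stdout/stderr so ssh can return
        os.dup2(devnull, fd)
{code}

Without the second fork and the dup2 calls, an invocation like the ssh command above keeps the remote streams open, which matches the hang described.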
Typo: "to pass read operations to ... from clients".
1
Here is pseudo code: {code}Cat gor = new Cat { IsAlive = true };
mongo.GorCollection.Save(gor); // saving gor as a main-level document
Cage catCage = new Cage();
catCage.Animals.AddRange(new[] { gor });
mongo.CageCollection.Save(catCage); // adding gor to the Animals list of the cage as a subdocument
var retrieveGor = mongo.GorCollection.FirstOrDefault();
retrieveGor.IsAlive = false;
mongo.GorCollection.Save(retrieveGor);
// Well, we have changed IsAlive of gor to false and saved it back in the db,
// so if I retrieve the only cat in the cage then it must be dead.
bool isAliveShouldBeFalse = mongo.CageCollection.FirstOrDefault().Animals.FirstOrDefault().IsAlive;
// isAliveShouldBeFalse is not false, it's true{code} I have checked the db with MongoVue, and I see two documents with the same id that have different values. Here is the link to the screenshot: ...
1
The explain command documentation says that it provides information on the execution of the following commands: count, group, delete, and update. As of ..., it should also mention the find command. Note that the list of commands is mentioned in two places on the page.
0
The message was lost in ..., but it's needed for cloud operators to know the PID of the daemonized process.
1
How can we connect over SSH to a MongoDB server which is located in AWS and hosted on a Linux server? We are getting a timeout when trying to connect to the Mongo client. We are working in a .NET application using the C# driver and trying to connect via an SSH client. The following is a snippet from our code: {code}using (var client = new SshClient("mongodb", "user", "password"))
{
    client.Connect();
    var connectionString = "...";
    MongoClient mongoClient = new MongoClient(connectionString);
    IAsyncCursor<BsonDocument> databases = mongoClient.ListDatabases();
    client.Disconnect();
}{code}
1
The command to install the mongo shell is missing: sudo yum install -y ... Other than that, this is a perfect howto.
1
The helper should support: output as a cursor, explain, and read preference.
1
The secondary_reads_passthrough task on ... runs mongod jobs with a ...-node cluster, and occasionally the host runs out of memory, leading to one of the mongods being killed. The task should be moved to run on a ... distro instead of the ... it currently uses.
0
See this server ticket.
0
dropIndexes should abort in-progress index builds. This should be done after ... makes it possible to abort an index build. ... will enhance dropIndexes to receive an array of indexes as arguments. This task will only abort in-progress index builds if the user specifies all of the indexes that a single builder is building together; a createIndexes command can start multiple indexes building together on one builder, and we currently only have the granularity to abort all or none of the indexes on a builder. Lastly, dropIndexes will not write a dropIndexes oplog entry if aborting in-progress builds; aborting the index will produce an abortIndexBuilds oplog entry, which suffices. This allows rollback via refetch to know that an index was fully built prior to a dropIndexes oplog entry.
0
unit-test-zstd failed on RHEL ... host. Project: WiredTiger (develop). Commit diff: add an encryptor extension that uses the libsodium cryptography library.

Add an encryptor extension that uses the libsodium cryptography library. It should really be audited by a cryptographer before being used, but it is expected to be usable with, at worst, minor adjustments. It uses the ... construction from libsodium to encrypt and checksum blocks. It does not support retrieving keys from a key manager, there not being any obvious open-source choices that I'm aware of; this means that it can (for the time being, anyway) only be configured with secretkey and not keyid, which is perhaps unfortunate but better than nothing. Besides the encryptor itself, this changeset includes the following related changes: add the new extension to both the CMake and autotools builds; rework the encryption page in the documentation, adding the new encryptor and expanding on some of the other material, and also add some bits / make some improvements to the WT_ENCRYPTOR docs; in util_main.c, add a wt_explicit_zero function for zeroing memory that takes precautions against being removed by the compiler, and use it to clear copies of the secret key; zero and free the secret key, and the open config string (which contains the secret key when there is one), earlier; in nop_encrypt.c, since this is supposed to be a template for application developers to fill in, add a blank customize method (without a customize method you can't configure keys, so even though it's officially optional, it seems like the example should have one); add support for the new extension to test/format (note that ... doesn't exist: ... are for testing the config plumbing and not any particular extension, and ... needs to be able to munge the encrypted data and doesn't work with real encryption); add a new ... that checks the error paths in the new extension's customize method; add an example snippet for how to configure the new extension to ex_all.c, for use in the docs; add the encryptor directory to Doxyfile so it can be an example; add the new encryptor to the examples page in the documentation; add a bunch of spelling words; add some of the functions to the exception list in s_void. Like other extensions, it also includes the following change that is not related but is directly adjacent to a piece of the above: in the CMake build of test/format, pass the path to the zstd library with -D like the other extensions.

Some minor adjustments from a preliminary review. Document that WT's checksums can be disabled when using encryption: because any viable encryptor applies a cryptographically strong checksum, there's no need to add a separate, weaker checksum as well; document this in the encryptors page and in the checksum argument of WT_SESSION::create. Fix compiler warnings missed by accident. Initial changes from review. Also, I missed something: the change in wiredtiger.in about configuring checksums also needs to be in api_data.py, and it incurs another spelling word. Argue with clang-format to get rid of the hanging-indent comments; make a couple more comment adjustments; try again with the comment formatting: it seems that the header is required to use hanging indent by function.py, so in order to avoid the rest of the comments after it being reformatted with hanging indent by clang-format, move them inside the function body. This is maybe not optimal, but it at least isn't visually revolting and doesn't break the tree. Also add sodium_encrypt.c to the dist ext list so that all the checks are run on it. Split the cleanup path for secretkeyp in two; hopefully this avoids false positives from inadequately path-sensitive static analyzers.

Jul ... UTC. Evergreen subscription; Evergreen event; task logs: unit-test-zstd.
1
reIndex currently accumulates a list of indexes, deletes them, then reinserts them into the system index collection. We should not use the system index collection; where possible, we should use the reIndex command in the database.
1
On step up, launch a task to scan all config.rangeDeletions documents that don't have prepare: true, and submit them to the range deleter.
0
content-length ... Exception in thread "main" java.lang.IllegalArgumentException: tried to save too large of an object, max size ... at ... (remaining stack frames elided).
1
It is required that we call mongoc_init before using libmongoc in the driver. This is taken care of in the MongoClient initializer as of ... However, MongoMobile gets a libmongoc client directly from libmongoc-embedded before it ever uses MongoSwift. We should update that code so that before the embedded client is retrieved, we call mongoc_init.
0
I have attempted to install several Compass beta releases. Each time, when I launch the Compass beta version, it begins migrating my settings and then hangs on the attached screen with the wait indicator circling.
1
After creating a MongoClient and getting the database with {code}var client = new MongoClient(location);
var database = client.GetDatabase(databaseName);{code} the client automatically tries to connect to the server in the location string. If the server is present, everything works fine, but if the connection cannot be established, an uncaught task exception is raised: System.AggregateException: a task's exceptions were not observed, either by waiting on the task or accessing its Exception property; as a result, the unobserved exception was rethrown by the finalizer thread. System.TimeoutException: a timeout occurred after ... selecting a server using CompositeServerSelector { Selectors = ReadPreferenceServerSelector { ReadPreference = { Mode: Primary, TagSets: [] } }, LatencyLimitingServerSelector { AllowedLatencyRange = ... } }. Client view of cluster state is { ClusterId: ..., Type: Unknown, State: Disconnected, Servers: [{ ServerId: { ClusterId: ..., EndPoint: ... }, EndPoint: ..., State: Disconnected, Type: Unknown, HeartbeatException: MongoDB.Driver.MongoConnectionException: an exception occurred while opening a connection to the server; System.Net.Sockets.SocketException: no connection could be made because the target machine refused the connection }] }. It is caught by our UnobservedTaskExceptionEventHandler (TaskScheduler.UnobservedTaskException += UnobservedTaskExceptionEventHandler). Since we do not want to continue work after an unobserved task exception, in our system the normal behaviour is to shut down and restart the application. Is there a way to avoid the exception? Since the task cannot be reached from outside the driver, we do not see a way to catch it before the task gets out of reach and the exception is handed to the handler by the garbage collector. As a workaround, we modified the handler to not restart on TimeoutExceptions, but we would appreciate a different solution.
1
Ordered bulk write operations: the second insertOne _id should be ... instead of ..., in order for the error to occur as described in the explanations.
0
This test does an intentional shutdown in the middle of the test, so we need to skip this check.
0
Goals: replace all text using the updated terminology in all of MongoDB's product user interfaces (UIs); create new API endpoints and mark the old API endpoints as deprecated; completely remove deprecated API endpoints at a later date; modify the output of commands such as rs.status() and db.isMaster() to use the updated terminology; update all of the documentation that coincides with these changes. Commands and terminology, overview of terminology changes: "master" and "slave" -> "primary" and "secondary"; db.isMaster() -> db.hello(); "whitelist" and "blacklist": for the server -> allow list and deny list; for drivers -> allow list and deny list; for Kafka -> allow list and block list; for Atlas, "IP whitelist" -> "network access list"; for ADL, "white list" -> "access list" and "whitelisted" -> "allowed"; for Charts, "whitelist" -> "list" plus the description "do not include fields that may reveal sensitive data".
0
We set up a replica set a couple of weeks ago and were happily using it until about yesterday, when our monitoring alerted us to the fact that a secondary had become too stale to sync, and so we investigated. For whatever reason (I do not yet understand), the secondary had gotten so far behind that it was unable to keep up with replication. After looking in the docs, I attempted a couple of full resyncs, at first without restarting mongod, but when that process crashed I saw myself forced to start the server anew. Standard replication started: connections to the primary and arbiter were made, hands were shaken, and data started to be transferred, up until about ... of a collection, when the process stopped again. The inter-instance connectivity is capable of sustained loads, so the collection should have been resynced within two hours at the latest, but it was not. Long story short: a secondary became too stale to sync. Attempting to resync it by (a) keeping the local files did not work, but (b) removing all local and other db files and starting completely fresh did not work either. Changing the oplogSize to ... (it was ... before) and starting the process anew also yielded no results. The secondary will stay in RECOVERING state but never recover. I am hesitant to add another secondary at this point, as I believe it too would not be able to sync itself from the primary. Any thoughts on what is going on here?
0
See comments in ...
0
Create a series of functions used to convert option values to any given type.
0
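As a rough illustration of the idea (names are hypothetical, not the project's actual API), such converters can be kept in a table keyed by target type:

{code:python}
# Hypothetical sketch: coerce raw option values (usually strings)
# into a requested target type.
CONVERTERS = {
    int: int,
    float: float,
    bool: lambda v: str(v).strip().lower() in ("1", "true", "yes", "on"),
    str: str,
}

def convert_option(value, target_type):
    """Convert a raw option value to target_type, or raise ValueError."""
    try:
        converter = CONVERTERS[target_type]
    except KeyError:
        raise ValueError(f"no converter registered for {target_type!r}")
    return converter(value)

# e.g. convert_option("true", bool) -> True; convert_option("42", int) -> 42
{code}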
For cloud tests on Evergreen, we'd like to create custom reports that track the pass/fail rate of all tasks on a build variant, and the time taken for each. As such, we'd like a REST endpoint that only requires project and variant, can take an optional maxResults and sortBy, and returns information on the last maxResults builds. We need task-level detail, but not test-level.
0
We have one boolean that controls both of these behaviors. We upsert during steady-state replication, for rollback, and during startup recovery, though it may not be correct according to ... applyOps also decides if we should upsert based on its alwaysUpsert flag. We also use this flag for determining if we should fail renames or upgrades/downgrades, which is completely orthogonal.
1
Mongo crashed with this message in the log: {code}Fatal error ... in ... is no longer usable{code} Reporter: Alexander. Email: ...
1
We have a ...-node replica set hosted on Linux machines. Recently we added authentication and restarted the replica set; the following documentation was used: ... We use the following URI for connecting to Mongo: ... We changed that to the following: ... After this, the C# driver components are no longer able to retrieve data from Mongo, although the C++ components are working fine. We can also log in to the replica set from the Mongo console and query data. The following are errors from the log: assertion ... not authorized for query on xxx.bbb, ns: xxx.bbb, query: { query: { domain: "abc.com" } }; Unauthorized: not authorized on db to execute command ...
1
The change should occur here: ... This change was requested by our contact at Antithesis.
0
This breaking-changes page on the wiki is out of date. It is a necessary resource for people looking to upgrade away from ... to the more feature-rich ... release. We should audit the diff between ... and ... and update the breaking-changes page as needed.
1
When allowPartialResults is true, establishCursors (i.e., the find command) only swallows RetriableError and FailedToSatisfyReadPreference errors. However, it looks like AsyncResultsMerger (i.e., the getMore command) swallows all kinds of errors. It should be changed to swallow only retriable errors.
0
SNMP module initialization no longer occurs with the removal of the module mechanism, which had previously called init() on all module instances.
1
{code}ReplSetTest waiting for connection to ... to have an oplog built.
ReplSetTest waiting for connection to ... to have an oplog built.
Mon Jul ... exec error: timed out after ... tries: ReplSetTest waiting for connection to ... to have an oplog built. Failed to load ...{code}
1
Currently, step down kills all conflicting user operations and some internal operations that are marked killable using setSystemOperationKillable: write operations that take the global lock in IX and X modes; read operations that take the global lock in S mode; operations (read/write) that are blocked on a prepare conflict. Step down hangs due to the below three-way deadlock: 1) the chunk splitter thread (runAutosplit) performs a read holding the RSTL in IX mode and is blocked by a prepared transaction due to a prepare conflict; chunk splitter internal threads are not marked killable, so step down won't be able to kill/interrupt those internal read operations; 2) step down enqueues the RSTL lock in X mode and is blocked behind the chunk splitter internal thread; 3) the commitTransaction command is waiting to acquire the RSTL lock in IX mode but is blocked behind the step down thread.
0
Before starting post-GA UUID work, we'd like to clean up the last-minute code from ...
0
Using the Python CLI (Python ..., Feb ..., on darwin): {code}Type "help", "copyright", "credits" or "license" for more information.
>>> import pymongo
>>> conn = pymongo.Connection('localhost')
Traceback (most recent call last):
  File "...", line ..., in ...
  File "...", line ..., in __init__
  File "...", line ..., in __find_master
pymongo.errors.AutoReconnect: could not find master/primary
>>> conn = pymongo.Connection('localhost', ...)
>>> conn
Connection('localhost', ...){code} It throws an error if I don't specify a port; if I do specify a port, it works fine, but it should just use the default port.
0
When generating KeyString keys, we use the KeyString pooled builder, which places multiple temporary KeyStrings on the same larger memory block to avoid multiple small allocations. Each KeyString holds a reference to the underlying memory, which will not be freed until every KeyString using it has been freed. Normally this is fine, as these KeyStrings are passed to the sorter, which spills to disk when memory consumption reaches a certain threshold; since that process clears all temporary KeyString instances, all memory blocks should be freed. However, when there are multiple indexes being built at the same time, these pooled memory blocks can be shared between KeyString instances belonging to different indexes. If some indexes generate large keys and need to be flushed to disk often, the actual memory will not be freed while there is still an index build that hasn't needed to spill to disk.
1
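The retention pattern described above is easy to reproduce in miniature. The following Python sketch is only an analogy (it is not MongoDB code): small views pin one large shared block, and the block cannot be freed until every view is gone.

{code:python}
# Analogy only: memoryview slices share one big block, and the block
# stays allocated until the last slice referencing it is dropped.
block = bytearray(1024 * 1024)                 # one large pooled allocation
keys = [memoryview(block)[i:i + 16] for i in range(0, 64, 16)]

del block      # the name is gone, but the slices still pin the megabyte
keys.clear()   # only now can the underlying block actually be freed
{code}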
The action statement here is incorrect, presuming that the related message on the MMS console is correct. This is still displayed when the doc is followed. To remove the error, the iam:GetUser permission needs to be set, instead of iam:AccessKey. No error appears on the MMS console when iam:AccessKey is removed entirely, so there's a mismatch between the doc and the logic behind the error message.
1
The transaction spec says: bq. startTransaction should report an error if the driver can detect that transactions are not supported by the deployment. A deployment does not support transactions when the deployment does not support sessions, or maxWireVersion ..., or the topology type is Sharded ... However, the Java driver never implemented this check. When ... runs a transaction against MongoDB ..., it will get an error like this on the first command that goes to a new mongos: {code}{ ok: ..., errmsg: "cannot continue txnId ... for session ... with txnId ...", code: ..., codeName: "NoSuchTransaction", errorLabels: [...] }{code}
0
While doing some load testing, I got this exception from org.springframework.jms.listener.DefaultMessageListenerContainer: execution of the JMS message listener failed, and no ErrorHandler has been set. java.lang.IllegalArgumentException: can't serialize class java.math.BigInteger at ... (stack frames elided). The script generated the same data with different _id and created fields, and most of them were inserted into the database fine. This exception only happened sporadically.
1
In GridFS, many large companies name their buckets/collections according to their business model, such as ... Since the -c (collection) option is missing from mongofiles, it can't be used to put files into collections that are not named fs.chunks/fs.files. Since you can't create custom collection and bucket names with all the other tools, we would please like mongofiles.exe enhanced so it can support a customized data model.
1
Late last night we had some issues with mongos; unfortunately it's not clear what went wrong, and bouncing fixed it. About an hour later, we then had a massive spike in the number of connections from mongos to the primary. This then caused "too many open connections" to start flooding the primary's logs, and connection attempts throughout our application to consistently fail. In effect, our primary was dead. However, our primary was still telling all the secondaries that it was alive and well, so no failover happened. I think the health checks need to do more than they do: the primary can't just be alive, it must be alive and well, i.e., responding to queries and new connections.
0
When a MongoDB ... instance is created with ..., a field called "create" is added to the collection options for each collection. This extra field is now an error in ... due to ... As a result, a database created in ... fails to start with ... It does not matter if a user upgrades from ... Example failure: {noformat}I ... Fatal assertion ... InvalidOptions: The field 'create' is not a valid collection option. Options: { create: "startup_log", size: ..., capped: true } at ...
I ... aborting after fassert() failure
F ... Got signal: ... (Abort trap). ... begin backtrace ... libsystem_platform.dylib _sigtramp ... libsystem_c.dylib abort ... mongod main ... libdyld.dylib start ... end backtrace ... Abort{noformat}
1
This objToIndex BSONObj is pointing to a cursor which can be saved/restored and then no longer saves the data to which objToIndex points. Save/restore was exercised further down the stack because we were running a createIndex command on two indexes at once, so this loop further down the stack, with a reference to objToIndex, can hit a WCE, save/restore the cursor, and then keep trying to use the now-invalid objToIndex/doc BSONObj for the second index. Recommendations: call getOwned() to assign a copy in objToIndex, or update the objToIndex reference in the same lambda that restores the cursor.
1
I replaced BsonDefaults.GuidRepresentation = GuidRepresentation.Standard with BsonSerializer.RegisterSerializer(new GuidSerializer(GuidRepresentation.Standard)) to make my whole app use the Standard representation. But if I do Builders<...>.Filter.Eq("id", myGuid), the filter is translated with CSUUID in place of UUID. In a more complex case, how can I specify the Guid representation with this type of filter?
0
Version ... released: fix for a memory leak in the Automation Agent.
1
Maybe this?
0
The jstests/core/text_index_limits.js test creates a text index and then inserts some very large pieces of text. These operations can be very slow (... seconds in certain cases). In a ...-node linear chain of replica set nodes, the time it takes for a single operation to replicate to all nodes will be the sum of the time it takes to apply that operation on each node. Thus, in the secondary_reads_passthrough suite, these operations can be very slow to replicate, leading to timeouts in awaitReplication when doing consistency checks. It should be fine to blacklist this test from the secondary reads passthrough suite, since there shouldn't really be any specific interactions between text index limits and chain replication; we have plenty of other passthrough suites that will run this test against multi-node replica sets.
0
We are looking for MONGOC_HAVE_SSL, but we should be looking for MONGOC_ENABLE_SSL.
1
--verbose: MongoDB shell version ... Mar ... versionCmpTest passed. Fri Mar ... versionArrayTest passed. Connecting to test. Fri Mar ... creating new connection ... Mar ... BackgroundJob starting ... and it crashes. From the Event Viewer: faulting application name: mongo.exe, version ..., time stamp ...; faulting module name: ..., version ..., time stamp ...; exception code ...; fault offset ...; faulting process id ...; faulting application start time ...; faulting application path ...; module path ...; id ... Starting the shell with --eval "printjson(db.stats())": MongoDB shell version ..., connecting to test: { "db" : "test", "collections" : ..., "objects" : ..., "avgObjSize" : ..., "dataSize" : ..., "storageSize" : ..., "numExtents" : ..., "indexes" : ..., "indexSize" : ..., "fileSize" : ..., "ok" : ... }
1
Upgrade to the latest patch version of the bundled libbson and libmongoc to pull in some upstream bug fixes. No bump is needed for libmongocrypt, as ... remains the most recent patch release in ...
0
I've gotten this error many tens of times on ... Symptoms: the secondary clones dbs; the secondary builds indexes; immediately upon completion, the secondary resyncs again; my countenance darkens and my heart sinks into despair. Sometimes (maybe ... of the time) it actually finishes initial sync. Setup: replica set with ... data, biggest collection ... data, ... docs, with ... indexes each; primary, ... replicas trying to resync, arbiter; typically ... active connections; AWS servers spread across availability zones; initial sync takes ... days; the oplog is long (example below). It seems worse with larger replica sets with bigger data, and with one that includes a db with large compound indexes.

{code}oplog: db.printReplicationInfo()
configured oplog size: ...
log length start to end: ...
oplog first event time: Wed Mar ... UTC
oplog last event time: Mon Mar ... UTC
now: Mon Mar ... UTC{code}

The slightly anonymized log below contains the moment where the initial sync fails. The primary is ..., the secondary trying to sync is ... There is some traffic on the servers, which causes slow responses (connection spam is filtered out):

{code}Mon Mar ... index: external sort progress: ...
Mar ... couldn't connect to ...: couldn't connect to server ...
Mar ... replSet info ... is down (or slow to respond)
Mon Mar ... replSet member ... is now in state DOWN
Mon Mar ... replSet member ... is up
Mon Mar ... replSet member ... is now in state PRIMARY
Mon Mar ... index: external sort progress: ...
Mar ... replSet info heartbeat failed, retrying
(repeated external sort progress lines elided)
Mon Mar ... external sort used ... files in ... secs
Mon Mar ... index: btree bottom up progress: ...
Mon Mar ... done building bottom layer, going to commit
Mon Mar ... build index done, scanned ... total records, ... secs
Mon Mar ... build index cxxx.pxxx_day stat ...
Mon Mar ... build index cxxx.pxxx_day caxxx ...
Mon Mar ... build index cxxx.pxxx_day bucket ...
(further index builds elided)
Mon Mar ... replSet initial sync cloning indexes for clxxxx
Mon Mar ... socket say send() connection timed out ...
Mar ... replSet initial sync exception: socket exception, server ..., ... attempts remaining
Mon Mar ... DBClientCursor::init call() failed
Mon Mar ... replSet info heartbeat failed, retrying
Mon Mar ... replSet info ... is down (or slow to respond)
Mon Mar ... replSet member ... is now in state DOWN
Mon Mar ... replSet member ... is up
Mon Mar ... replSet member ... is now in state SECONDARY
Mon Mar ... replSet initial sync pending
Mon Mar ... replSet syncing to ...
Mar ... replSet initial sync drop all databases
Mon Mar ... dropAllDatabasesExceptLocal
Mar ... removeJournalFiles
Mon Mar ... DBClientCursor::init call() failed
(repeated heartbeat failures and DOWN/SECONDARY state changes elided)
Mon Mar ... removeJournalFiles{code}

What more can I share that would help debug this problem? Does it seem like initial sync is sensitive to high load?
1
I am getting the following error when the application is trying to do bulk writes: at MongoDB.Driver.Core.Connections.BinaryConnection.OpenConnectionHelper.FailedOpeningConnection(Exception wrappedException) at ... at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) (this pair of frames repeats many times) ... in helper.cs: line ... at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
1
{code}Wed Feb ... CMD: drop ...
Wed Feb ... invalid access at address ... from thread ...
Wed Feb ... got signal: ... (Segmentation fault)
Wed Feb ... backtrace: ...{code}
1
In the post-task command, the output is of the form "putting ... into path ... in bucket mciuploads". I've gotten feedback that it would be more useful to concatenate these together into the actual bucket URL, rather than having the user do that manually.
0
We may also want to add a compile variant for this.
1
Make modifications to the collection and database classes for sessions, namely getSession.
0
Currently, using explain reevaluates and invalidates the query plans for the query template/pattern. This should not be the default behavior; it causes explain to have unexpected side effects.
0
Just under ... header files fail to issue a #pragma once directive. We should automate the detection of this in our linting tools. {code}$ grep --include='*.h' -irL "pragma once" src/mongo
src/mongo/bson/mutable/mutable_bson_test_utils.h
src/mongo/transport/message_compressor_noop.h
src/mongo/transport/message_compressor_zlib.h
src/mongo/transport/message_compressor_snappy.h
src/mongo/util/table_formatter.h
src/mongo/util/quick_exit.h
src/mongo/util/version.h
src/mongo/util/options_parser/startup_options.h
src/mongo/util/dns_query_windows-impl.h
src/mongo/util/dns_query_android-impl.h
src/mongo/util/dns_query_posix-impl.h
src/mongo/shell/linenoise.h
src/mongo/shell/mk_wcwidth.h
src/mongo/platform/stack_locator.h
src/mongo/s/database_version_helpers.h
src/mongo/s/version_mongos.h
src/mongo/dbtests/framework.h
src/mongo/db/ftdc/ftdc_test.h
src/mongo/db/auth/restriction_mock.h
src/mongo/db/auth/impersonation_session.h
src/mongo/db/auth/user_cache_invalidator_job.h
src/mongo/db/catalog/drop_database.h
src/mongo/db/catalog/drop_indexes.h
src/mongo/db/catalog/catalog_control.h
src/mongo/db/catalog/capped_utils.h
src/mongo/db/catalog/coll_mod.h
src/mongo/db/catalog/drop_collection.h
src/mongo/db/catalog/rename_collection.h
src/mongo/db/catalog/create_collection.h
src/mongo/db/repl/do_txn.h
src/mongo/db/repl/apply_ops.h
src/mongo/db/repl/mock_repl_coord_server_fixture.h
src/mongo/db/storage/wiredtiger/wiredtiger_parameters.h
src/mongo/db/startup_warnings_mongod.h
src/mongo/db/startup_warnings_common.h
src/mongo/db/exec/and_common-inl.h
src/mongo/db/ops/insert.h
src/mongo/db/commands/copydb_start_commands.h
src/mongo/db/commands/kill_op_cmd_base.h
src/mongo/db/commands/shutdown.h
src/mongo/db/commands/kill_cursors_common.h
src/mongo/db/query/parsed_projection.h
src/mongo/db/query/find_common.h
src/mongo/db/query/get_executor.h
src/mongo/db/query/query_planner_test_lib.h
src/mongo/db/modules/subscription/src/snmp/snmp.h
src/mongo/db/modules/subscription/src/snmp/serverstatus_client.h
src/mongo/db/modules/subscription/src/rlp/rlp_language.h
src/mongo/db/modules/subscription/src/rlp/rlp_options.h
src/mongo/db/modules/subscription/src/queryable/queryable_wt_blockstore_fs.h
src/mongo/db/curop_failpoint_helpers.h
src/mongo/db/stats/fine_clock.h
src/mongo/db/field_parser-inl.h
src/mongo/executor/async_stream_common.h
src/mongo/executor/connection_pool_test_fixture.h
src/mongo/executor/test_network_connection_hook.h
src/mongo/client/native_sasl_client_session.h
src/mongo/client/cyrus_sasl_client_session.h
src/mongo/client/embedded/libmongodbcapi.h
src/mongo/client/embedded/embedded_transport_layer.h{code}
0
This ticket was split from ... Please see that ticket for a detailed description.
0
Several MCI builders have started failing the parallel suite. First failing builds are at hash ..., though there are several inactive commits preceding. Failing MCI builds: ...
0
Old CDN links break the admin and make it confusing for users. Needs updating to follow the new Bootstrap markup and CDN links.
1
We configured our MongoDB to forbid the listDatabases (i.e., mongoClient.getDatabaseNames()) feature for privacy reasons. I would like to check whether a database exists in MongoDB without using mongoClient.getDatabaseNames(). If I use mongoClient.getDB("mydb"), MongoDB creates a new DB instance, which can't help to check whether dbName exists. Can we have an API to check the existence of a database, like mongoClient.isDbExists(dbName)?
1
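As a workaround sketch (illustrative only; it assumes the user still has listCollections rights on the target database, and it uses pymongo rather than the Java API in the report), a database can be treated as existing once it holds at least one collection:

{code:python}
from pymongo import MongoClient

def db_exists(client: MongoClient, name: str) -> bool:
    """Heuristic existence check that avoids the forbidden listDatabases:
    a MongoDB database only materializes once it contains a collection."""
    return len(client[name].list_collection_names()) > 0

client = MongoClient("mongodb://localhost:27017")
print(db_exists(client, "mydb"))
{code}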
parallel (WT) failing: {code}I QUERY Error: assert.soon failed: function() { mh = m.getDB("test").runCommand("dbhash"); printjson(mh); sh = s.getDB("test").runCommand("dbhash"); printjson(sh); return ...; } at Error at doassert at Function.assert.soon at ... Failed to load c:\data\mci\shell\src\jstests\parallel\repl.js{code} Test failure.
1
In the ctor of GridFS, the following call on DBCollection.ensureIndex sets a wrong value, or rather the wrong type, for the option "unique", so that the index will not be found even if it exists: {code}_chunkCollection.ensureIndex(BasicDBObjectBuilder.start().add("files_id", 1).add("n", 1).get(),
                             BasicDBObjectBuilder.start().add("unique", 1).get());{code} The correct value would be true instead of 1: {code}_chunkCollection.ensureIndex(BasicDBObjectBuilder.start().add("files_id", 1).add("n", 1).get(),
                             BasicDBObjectBuilder.start().add("unique", true).get());{code}
0
The default ObjectId is increasing approximately sequentially. It would be nice to have an option in the config for a pseudorandom default setting for Mongo. For example, in GridFS, programmers will typically reverse the ObjectId to make it sort-of random, so that sharding doesn't create as many hotspots, especially if the environment is pre-ranged. Out of the box, it would be nice to be able to use the ObjectId in a sharded environment where the programmer doesn't have to worry about the shard key. Having an optional config setting to make the ObjectId pseudorandom would be nice: just reverse the time portion of the ObjectId, or reverse the ObjectId entirely. The correct thing to do is for the programmer to define the value of the ObjectId, but this doesn't always happen.
0
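The reversal trick mentioned above can be sketched in a few lines; this is illustrative only (it assumes pymongo's bson package, and the reversed string would be stored in place of a raw ObjectId):

{code:python}
from bson import ObjectId  # ships with pymongo

def reversed_object_id(oid=None):
    """Reverse the 24 hex chars of an ObjectId so the roughly-sequential
    timestamp prefix lands at the end, spreading inserts across shards."""
    oid = oid or ObjectId()
    return str(oid)[::-1]

# e.g. use the result as a shard-friendly _id string
print(reversed_object_id())
{code}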
There is a different behavior for a given query using inequality matching in combination with $or. There is some additional information on the consideration of $not as a top-level operator in ...
0
I found this error on ... Apr:
Wed Apr ... waiting till out of critical section
Wed Apr ... waiting till out of critical section
Wed Apr ... ERROR: moveChunk commit failed: version is ... instead of ...
Wed Apr ... ERROR: terminating
Wed Apr ... dbexit:
Wed Apr ... shutdown: going to close listening sockets...
Wed Apr ... closing listening socket ...
Wed Apr ... removing socket file ...
Wed Apr ... shutdown: going to flush diaglog...
Wed Apr ... shutdown: going to close sockets...
Wed Apr ... shutdown: waiting for fs preallocator...
Wed Apr ... shutdown: closing all files...
Wed Apr ... end connection ...
Wed Apr ... getmore local.oplog.rs ... ts gte new ... exception: interrupted at shutdown
Wed Apr ... SocketException in connThread, closing client connection
Wed Apr ... now exiting
Wed Apr ... dbexit: ... exiting immediately
Wed Apr ... invalid access at address ...
Wed Apr ... got signal: ... (Segmentation fault)
(the same "waiting till out of critical section" / "moveChunk commit failed" / shutdown sequence then repeats on the other nodes)
1
Original description: ... Description: ... Scope of changes: files that need work and how much; impact to other docs outside of this product; MVP work and date; resources (e.g., scope docs, InVision).
0
Monitoring Agent changelog: ... released; improved error handling on Windows. Backup Agent changelog: ... released; further enhancements to support backup of MongoDB ...
1
If the file size is ..., the while loop in GridFSUploadStreamImpl (line ...) passes and keeps the variable "len" at ..., while GridFSBucketImpl (line ...) needs the len value to be ... to stop the loop, so the loop will never end. I think it needs to throw an exception, or just upload an empty file.
0
Currently, atClusterTime is chosen in the find and aggregate paths and put directly into the readConcerns of the requests created by those commands. Instead, atClusterTime should be placed on the RouterSession during targeting and added to requests in TransactionParticipant::attachTxnFieldsIfNeeded. Snapshot-level readConcern should also be disallowed on mongos for commands not in a multi-statement transaction. mongos may try several different atClusterTime values until it finds one that all shards can provide a snapshot at; mongos should remember the first value that was successful, so every subsequent statement that targets a new shard can use it.
0
Follow-up to ... As part of this ticket, we should also remove the wait queue size limitation in the connection pool implementation.
0
Hello. About the example "Return average city population by state": I spent a lot of time figuring out why the query was so complex. I had to analyze the dataset to see that cities can have the same name in the same state; maybe it is obvious for Americans, but not really for foreign people like me. I think it should be mentioned somewhere that we need to group on state and cities because cities can have the same name in the same state; otherwise, the simple following request would be enough: db.collection.aggregate(...). Michel
1
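For context, the docs example under discussion groups twice: first collapsing duplicate (state, city) pairs, then averaging per state. A pymongo sketch of that shape (field names follow the MongoDB zip code sample dataset; treat the connection details as placeholders):

{code:python}
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").test

pipeline = [
    # First group: one total population per (state, city) pair, which is
    # why duplicate city names within a state matter.
    {"$group": {"_id": {"state": "$state", "city": "$city"},
                "pop": {"$sum": "$pop"}}},
    # Second group: average those per-city totals within each state.
    {"$group": {"_id": "$_id.state", "avgCityPop": {"$avg": "$pop"}}},
]
for doc in db.zipcodes.aggregate(pipeline):
    print(doc)
{code}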
I've been on forums talking with several people for the last few days, and no one has been able to give me an answer. Is there any way to reconfigure a replica set in the event the primary, as well as a majority of the servers, are down? For example, if a data center goes down, a procedure could be run to reconfigure and set a new primary with the remaining servers. I tried the Eval method on the database object, and I get this: Command 'eval' failed: not master (response: { "ok": ..., "errmsg": "not master" }). Note from execCommand ... I also tried this: {code}var reconfigCommand = new CommandDocument {
    { "replSetReconfig", new BsonDocument(configuration) },  // newConfig
    { "force", true }
};
var response = database.RunCommand(reconfigCommand);{code} and RunCommand throws the following MongoCommandException: Command 'replSetReconfig' failed: replSetReconfig command must be sent to the current replica set primary (response: { "ok": ..., "errmsg": "replSetReconfig command must be sent to the current replica set primary" }). From what I can tell in the documentation, force: true should allow reconfig against a non-primary. Any suggestions?
1
This build shows as green, but it didn't actually pass. Note that this also shows that we need to support GCC ...; I will file a separate ticket for that. Please note also what happens when CMake fails: after the compiler-minima change went in, even though the CMake step failed, it still tried to run the tests. We are ignoring failed exit statuses somehow.
1
In the ... release archive, phpinfo() returns: {noformat}libmongoc version ...
libbson version ...{noformat} Something seems wrong with the mongoc/libbson config generation; there, the sources definitely are ...
1
After upgrading from ... to ..., I started getting System.TimeoutExceptions a few minutes after starting my app. The timeouts are occurring after ..., which seems like it's a problem: System.TimeoutException: timed out waiting for a connection after ... at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquireConnectionHelper.EnteredPool(Boolean enteredPool) at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquireConnection(CancellationToken cancellationToken) at MongoDB.Driver.Core.Servers.ClusterableServer.GetChannel(CancellationToken cancellationToken) at ... Binding(CancellationToken cancellationToken) at ... The bug is that queries using the legacy API can fail to return a connection to the connection pool. The bug is triggered when the result set is large enough that the results are returned in multiple batches. Each time such a query is made, one connection fails to be returned to the connection pool, and once all the connections in the pool have been leaked, no further queries can be made, with the symptom being that a TimeoutException is thrown from AcquireConnection.
1