Columns:
  text_clean: string (lengths 22 to 6.54k)
  label: int64 (values 0 or 1)
The MongoDB server ran well for months without any problem. Recently, once it stored more data, an unhandled exception happens whenever there is a new connection; see the attachment.
1
Also remove the code within the comment, for example.
0
The docs have the following example: {{ReplSetConnection.new(..., :read => :secondary)}}. I created a local replica set with nodes listening on the ports listed above. Calling this will fail with a message "Mongo: cannot connect to a replica set using seeds". Changing it to {{Mongo::ReplSetConnection.new(..., :read => :secondary)}} does work correctly.
0
wed feb  /home/david/mongodb-latest/bin/mongos  db version, pdfile version, starting (--help for usage)
wed feb  git version
feb  sys info: linux smp fri nov est
feb  web admin interface listening on port
feb  about to contact config servers and shards
wed feb  waiting for connections on port
feb  couldn't unlink socket file: operation not permitted, skipping
wed feb  couldn't unlink socket file: operation not permitted, skipping
wed feb  updated set to
feb  updated set to
feb  config servers and shards contacted successfully
wed feb  balancer id started at
feb  creating WriteBackListener for
feb  creating WriteBackListener for
feb  MessagingPort recv(): connection reset by peer
feb  SocketException: remote error, socket exception
wed feb  assertion failure  client/dbclient_rs.cpp  /home/david/mongodb-latest/bin/mongos(thread_proxy)
wed feb  ScopedDbConnection: _conn != null
wed feb  AssertionException in process: assertion
feb  assertion failure  client/dbclient_rs.cpp  /home/david/mongodb-latest/bin/mongos(thread_proxy)
wed feb  ScopedDbConnection: _conn != null
wed feb  AssertionException in process: assertion
0
The genny commit queue got stuck because the version completion wasn't logged.
0
When you run a find() with more results, the text "has more" appears. The "it" command lets you access them, but it's hard to find any reference to this command, and if you don't know it exists, it's quite unlikely you'd know to look for it anyway. So it'd be a lot more useful if the prompt were changed to something like "has more: type 'it' to see them", or any other text which tells you how you can access the "more" without relying on memory or using Google first.
0
We will no longer pin the stable timestamp behind the oldest prepare timestamp, or behind the oldest prepare timestamp of a transaction whose corresponding commit/abort oplog entries have not been majority committed. That is, we will revert the earlier behavior.
0
Problem: during a mongoperf run, execution hangs on updateFieldAtOffset. Attached are the logs and gstack output from the mongod (config node, replset on the same host; each mongod pinned to cores, the mongoperf workload pinned to other cores).
1
Maybe I didn't understand the rollback part very well, but I think there is a mistake in the first and second possible rollback operation. Indeed, the second paragraph tells me to follow the steps below, but I have the impression that the steps are for the first paragraph, not the second. Best regards.
1
What problem are you facing? I updated from one driver version to the next, and after deploying to an Azure Functions app the functions crash on lib/core/index.js. Error trace:

Result: Failure. Exception: Worker was unable to load function insertTimings: Error: Cannot find module 'require_optional'. Require stack:
- D:\home\site\wwwroot\node_modules\mongodb\lib\core\index.js
- D:\home\site\wwwroot\node_modules\mongodb\index.js
- D:\home\site\wwwroot\insertTimings\lib\controllers\timings.js
- D:\home\site\wwwroot\insertTimings\index.js
- D:\Program Files
- D:\Program Files
Stack: Error: Cannot find module 'require_optional'

Reverting to the previous version solved the issue.

What driver and relevant dependency versions are you using? Node.

Steps to reproduce: it works locally on Windows, on Linux, and on AWS; the only way to reproduce is deploying an Azure Function app. My app is running Node on Windows; not sure if the same issue applies to Azure Linux apps.
1
There are no examples for MongoDB: after the colons I see nothing, although all the examples for SQL are there.
1
Uninitialized scalar field: the field will contain an arbitrary value left over from earlier computations. A scalar field is not initialized by the constructor. member_decl: class member declaration for _catalogEpoch. uninit_ctor: non-static class member _catalogEpoch is not initialized in this constructor nor in any functions that it calls.
0
Use of an uninitialized value on the stack (len); a fix is included, hope to see this addressed. Thread:

{noformat}
Conditional jump or move depends on uninitialised value(s)
   at mongo_read_response
   by mongo_cursor_op_query
   by mongo_cursor_next
   by mongo_find_one
   by mongo_run_command
   by mongo_simple_int_command
   by mongo_check_is_master
   by mongo_client
   by hawk_mongo_persistent_connect
   by hawk_mongo_populate_list
   by modules_sensor_reload
   by start_thread
 Uninitialised value was created by a stack allocation
   at mongo_read_response
{noformat}

Original code:

{code}
static int mongo_read_response( mongo *conn, mongo_reply **reply ) {
    mongo_header head;          /* header from network */
    mongo_reply_fields fields;  /* header from network */
    mongo_reply *out;           /* native endian */
    unsigned int len;
    int res;

    mongo_env_read_socket( conn, &head, sizeof( head ) );
    mongo_env_read_socket( conn, &fields, sizeof( fields ) );

    len = head.len;
{code}

Fixed (src/mongo.c), initializing len before use:

{code}
static int mongo_read_response( mongo *conn, mongo_reply **reply ) {
    mongo_header head;          /* header from network */
    mongo_reply_fields fields;  /* header from network */
    mongo_reply *out;           /* native endian */
    unsigned int len = 0;       /* fix: initialize */
    int res;

    mongo_env_read_socket( conn, &head, sizeof( head ) );
    mongo_env_read_socket( conn, &fields, sizeof( fields ) );

    len = head.len;
{code}
1
{code}
ReplSetTest awaitSynced=true
Thu Jul  assertion: not master or secondary; cannot currently read from this replSet member  ns: dbname.system.indexes  query: {}
Thu Jul  problem detected during query over dbname.system.indexes: { err: "not master or secondary; cannot currently read from this replSet member", code: ... }
{ err: "not master or secondary; cannot currently read from this replSet member", code: ... }
Jul  uncaught exception: error: failed to load
{code}
1
It would be nice if a DuplicateKeyException were thrown, and the duplicated property were returned by the exception object.
0
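The request above can be sketched as follows. This is a hypothetical illustration in Python (the class and function names are invented for the sketch, not an existing driver API): an exception type that carries the offending property, raised by a toy unique-index insert.

```python
# Hypothetical sketch of the requested API: a DuplicateKeyException
# that carries the duplicated property so callers can report it.
class DuplicateKeyException(Exception):
    def __init__(self, duplicated_property, value):
        super().__init__(
            f"duplicate key on property {duplicated_property!r}: {value!r}")
        self.duplicated_property = duplicated_property
        self.value = value


def insert_unique(index: dict, key_property: str, doc: dict):
    """Insert doc into a toy unique index keyed on key_property."""
    key = doc[key_property]
    if key in index:
        # The exception exposes exactly which property collided.
        raise DuplicateKeyException(key_property, key)
    index[key] = doc
```

A caller could then catch the exception and inspect `e.duplicated_property` instead of parsing the error message.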
A Go driver release is out with various bugfixes; we should re-vendor it and backport.
0
What problem are you facing? My query is:

[{ $match: { group: { $ne: null } } }, { $group: { _id: "$group", amount: { $sum: "$quantity" }, count: { $sum: ... } } }, { $limit: ... }]

Result in the shell:

{ _id: ..., fruit: ..., amount: ..., count: ... }
{ _id: "wjdbj", amount: ..., count: ... }
{ _id: "wer", amount: ..., count: ... }

Result with the Node.js driver:

MongoServerError: message: "the match filter must be an expression in an object", stack: "MongoServerError: the match filter must be an expression in an object\n ..."

It gives the proper result in the mongo shell but not with the Node.js driver.

What driver and relevant dependency versions are you using? The MongoDB version installed, and the npm package mongodb.

Steps to reproduce:
1
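The server error in the report above typically appears when the pipeline is passed in the wrong shape (e.g. a stage where a document is expected). A minimal Python sketch, with illustrative values where the original report's numbers were stripped, of building and sanity-checking such a pipeline client-side:

```python
# Sketch: each aggregation stage must be a single document (dict with
# one operator key), and the pipeline passed to aggregate() must be a
# list of such documents. Passing stages in the wrong shape triggers
# server errors like "the match filter must be an expression in an object".

def build_pipeline():
    match = {"$match": {"group": {"$ne": None}}}
    group = {"$group": {"_id": "$group",
                        "amount": {"$sum": "$quantity"},
                        "count": {"$sum": 1}}}   # 1 is illustrative
    limit = {"$limit": 10}                       # 10 is illustrative
    return [match, group, limit]


def validate_pipeline(pipeline):
    """Cheap client-side shape check before sending to the server."""
    return isinstance(pipeline, list) and all(
        isinstance(stage, dict) and len(stage) == 1 for stage in pipeline)
```

A driver call would then receive the list as a whole, e.g. `collection.aggregate(build_pipeline())`, never a bare stage.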
I found this error:

wed apr  waiting till out of critical section
wed apr  waiting till out of critical section
wed apr  waiting till out of critical section
wed apr  ERROR: moveChunk commit failed: version is instead of
apr  ERROR: terminating
wed apr  dbexit:
wed apr  shutdown: going to close listening sockets...
wed apr  closing listening socket
apr  closing listening socket
apr  closing listening socket
apr  closing listening socket
apr  removing socket file
apr  removing socket file
apr  shutdown: going to flush diaglog...
wed apr  shutdown: going to close sockets...
wed apr  shutdown: waiting for fs preallocator...
wed apr  shutdown: closing all files...
wed apr  end connection
apr  end connection
apr  end connection
apr  waiting till out of critical section
wed apr  end connection
apr  end connection
apr  getmore local.oplog.rs  getMore: { ts: { $gte: new ... } }  exception: interrupted at shutdown
apr  end connection
apr  SocketException in connThread, closing client connection
wed apr  now exiting
wed apr  dbexit: exiting immediately
wed apr  invalid access at address
wed apr  got signal: segmentation fault
wed apr  waiting till out of critical section
apr  waiting till out of critical section
wed apr  ERROR: moveChunk commit failed: version is instead of
apr  ERROR: terminating
wed apr  dbexit:
wed apr  shutdown: going to close listening sockets...
wed apr  closing listening socket
apr  waiting till out of critical section
wed apr  waiting till out of critical section
wed apr  ERROR: moveChunk commit failed: version is instead of
apr  ERROR: terminating
wed apr  dbexit:
wed apr  shutdown: going to close listening sockets...
apr  waiting till out of critical section
wed apr  waiting till out of critical section
wed apr  ERROR: moveChunk commit failed: version is instead of
apr  ERROR: terminating
wed apr  dbexit:
wed apr  shutdown: going to close listening sockets
1
I want to update around a few fields of my metadata without modifying the document inside the GridFS collection. In the Mongo C# driver I was using database.GridFS.SetMetadata for this. Could you please share an example or documentation for the same? Thanks in advance.
1
For example, both of the following should be valid:

var update = Update.Rename("x", "xx").Rename("y", "yy");
var update = Update.Rename("x", "xx").Set("y", ...);
0
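The builder chaining requested above boils down to merging operators into one update document. A minimal Python sketch of the underlying document shape (helper names are invented for illustration):

```python
# Sketch: chaining Rename(...) and Set(...) should accumulate into a
# single update document that combines the $rename and $set operators.

def rename(update: dict, old: str, new: str) -> dict:
    update.setdefault("$rename", {})[old] = new
    return update


def set_field(update: dict, field: str, value) -> dict:
    update.setdefault("$set", {})[field] = value
    return update


# Two renames chained together:
u1 = {}
rename(u1, "x", "xx")
rename(u1, "y", "yy")

# A rename chained with a set:
u2 = {}
rename(u2, "x", "xx")
set_field(u2, "y", 1)  # the value 1 is illustrative
```

Both chains produce one document, so the server applies all operators in a single update.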
I use the Python MongoClient to connect to the DB using an account generated via the deployment's admin "Users" > "Add user" page. It prompts an authentication failure. Is anything wrong? I just want to create an account to connect to the database; how do I do that?
1
The dbstats command only outputs a subset of the data that would be of interest to a DBA when diagnosing storage issues. The current fields output are: fileSize (amount of disk space used), dataSize (amount of space used by data; sum of dataSize for all collections), storageSize (sum of extents allocated for data for all collections), and indexSize (amount of space used by indexes). Additional fields of interest would be: indexStorageSize (sum of extents allocated for indexes for all collections), freeExtentSize (sum of extents on the free list), and freeRecordSize (sum of records on the record free list). Both indexStorageSize and freeRecordSize could also be added to the output of collStats. The net effect would be to much better determine the need to compact or repair a database or collection; it would also make the results of compact visible to users.
0
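The proposed db-level numbers above are just sums over per-collection stats. A small Python sketch of that aggregation (the field names follow the request; they are proposals, not existing dbstats output):

```python
# Sketch: roll per-collection stats up into the proposed db-level
# dbstats fields indexStorageSize and freeRecordSize.

def proposed_dbstats(collstats):
    """collstats: iterable of per-collection stats dicts."""
    out = {"indexStorageSize": 0, "freeRecordSize": 0}
    for s in collstats:
        out["indexStorageSize"] += s.get("indexStorageSize", 0)
        out["freeRecordSize"] += s.get("freeRecordSize", 0)
    return out
```

Running compact would then be visible as these sums shrinking between two calls.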
We require a specific driver version in our projects, but while installing it there are errors in the env.c file as below. Please let me know how we can fix this.

{noformat}
cc -o src/env.os -c -pedantic -Wall -ggdb -D_POSIX_SOURCE -DMONGO_HAVE_STDINT -fPIC -DMONGO_DLL_BUILD src/env.c
src/env.c: In function 'mongo_env_socket_connect':
error: storage size of 'ai_hints' isn't known
    struct addrinfo ai_hints;
warning: implicit declaration of function 'getaddrinfo'
    status = getaddrinfo( host, port_str, &ai_hints, &ai_list );
warning: implicit declaration of function 'gai_strerror'
    bson_errprintf( "getaddrinfo failed: %s", gai_strerror( status ) );
error: dereferencing pointer to incomplete type 'struct addrinfo'
    for ( ai_ptr = ai_list; ai_ptr != NULL; ai_ptr = ai_ptr->ai_next )
warning: implicit declaration of function 'freeaddrinfo'
    freeaddrinfo( ai_list );
warning: unused variable 'ai_hints'
    struct addrinfo ai_hints;
recipe for target 'src/env.os' failed
make: error
{noformat}
1
When doing $or queries, if indexes are used and no sort clause is specified, no results are returned. This happens, for example, when doing counts, which ignore sort clauses. Below I've created a small example that reproduces the bug.

First, I created a new database and put some documents there:

> use videostest
switched to db videostest
> db.videos.save({ tags: [...], title: ... })
> db.videos.save({ tags: [...], title: ... })
> db.videos.save({ tags: [...], title: ... })

I perform an $or query, searching by tags both in the $or clause and outside it:

> db.videos.find({ tags: { $all: [...] }, $or: [...] })
{ _id: ..., tags: ..., title: ... }
> db.videos.find({ tags: { $all: [...] }, $or: [...] })
{ _id: ..., tags: ..., title: ... }
> db.videos.find({ tags: { $all: [...] }, $or: [{ tags: { $all: [...] } }] })
{ _id: ..., tags: ..., title: ... }
{ _id: ..., tags: ..., title: ... }

Everything is fine here; the results are as expected. Now I create an index on tags:

> db.videos.ensureIndex({ tags: 1 })
> db.videos.find({ tags: { $all: [...] }, $or: [{ tags: { $all: [...] } }] })
> db.videos.find({ tags: { $all: [...] }, $or: [{ tags: { $all: [...] } }] }).sort({ title: 1 })
{ _id: ..., tags: ..., title: ... }
{ _id: ..., tags: ..., title: ... }

The first $or query, without sort, doesn't return any documents, but the second one, with a sort clause, returns correctly. I asked the server to explain the queries:

> db.videos.find({ tags: { $all: [...] }, $or: [{ tags: { $all: [...] } }] }).explain()
{ clauses: [
    { cursor: "BasicCursor", nscanned: ..., nscannedObjects: ..., n: ..., millis: ..., nYields: ..., nChunkSkips: ..., isMultiKey: false, indexOnly: false, indexBounds: {} },
    { cursor: "BasicCursor", nscanned: ..., nscannedObjects: ..., n: ..., millis: ..., nYields: ..., nChunkSkips: ..., isMultiKey: false, indexOnly: false, indexBounds: {} }
  ],
  nscanned: ..., nscannedObjects: ..., n: ..., millis: ... }
> db.videos.find({ tags: { $all: [...] }, $or: [{ tags: { $all: [...] } }] }).sort({ title: 1 }).explain()
{ cursor: "BtreeCursor ...", nscanned: ..., nscannedObjects: ..., n: ..., scanAndOrder: true, millis: ..., nYields: ..., nChunkSkips: ..., isMultiKey: true, indexOnly: false, indexBounds: { tags: [[...]] } }

The one without a sort clause uses two clauses, both of which don't use indexes, reporting the scanned objects; the one with a sort clause uses an index and reports the scanned objects, which is correct. Finally, I attempted to merge the outside tag filter into the $or query:

> db.videos.find({ $or: [{ tags: { $all: [...] } }] })
{ _id: ..., tags: ..., title: ... }
{ _id: ..., tags: ..., title: ... }
> db.videos.find({ $or: [{ tags: { $all: [...] } }] }).sort({ title: 1 })
{ _id: ..., tags: ..., title: ... }
{ _id: ..., tags: ..., title: ... }

It returns correctly even without a sort clause. Explaining the queries gives this:

> db.videos.find({ $or: [{ tags: { $all: [...] } }] }).explain()
{ cursor: "BtreeCursor ...", nscanned: ..., nscannedObjects: ..., n: ..., millis: ..., nYields: ..., nChunkSkips: ..., isMultiKey: true, indexOnly: false, indexBounds: { tags: [[...]] } }
> db.videos.find({ $or: [{ tags: { $all: [...] } }] }).sort({ title: 1 }).explain()
{ cursor: "BasicCursor", nscanned: ..., nscannedObjects: ..., n: ..., scanAndOrder: true, millis: ..., nYields: ..., nChunkSkips: ..., isMultiKey: false, indexOnly: false, indexBounds: {} }

Now it uses an index when the sort clause is not specified, but it doesn't use the index when sort is specified. Maybe the query planner chose not to use the index, but I really don't know why. Is this expected behavior for $or queries?
1
These features are planned for deprecation. I believe the intention is for the deprecation notice to go out with that release, in which case support for these features should be removed in the development branch.
0
{noformat}
Exception in thread "main" com.mongodb.MongoTimeoutException: timed out while waiting for a server that matches AnyServerSelector after ms
    at ...
{noformat}

{code:java}
public class MongoDBTest {
    public static void main(String[] args) throws IOException {
        System.out.println("entered mongoTest");
        // A MongoDB client with internal connection pooling; for most
        // applications you should have one MongoClient instance for the entire JVM.
        MongoClient mongoClient = new MongoClient(...);
        DB db = mongoClient.getDB("test");
        DBCollection collection = db.getCollection("downloads.meta");
        String filePath = ...;
        File file = new File(filePath);
        GridFS gridfs = new GridFS(db, "downloads");
        GridFSInputFile gfsFile = gridfs.createFile(file);
        gfsFile.save(); // it crashes here
        BasicDBObject info = new BasicDBObject();
        info.put("name", "dell");
        info.put("filename", ...);
        info.put("rawname", ...);
        info.put("rawpath", "caxd");
        collection.insert(info, WriteConcern.SAFE);
    }
}
{code}
1
{code}
Fri Apr  foo.a  Assertion failure  la  src/mongo/db/btree.h
Fri Apr  update foo.a  query: { _id: ... }  update: { $set: { x: ... } }  exception: assertion
{code}
1
Change the AutoGetCollectionForRead class to no longer take collection MODE_IS locks when flagged to do so, while still performing the same checks and providing the same read semantics. The flagging may be done through a special enum constant passed to the constructor, or a similar approach. The idea is that we can change individual call sites to use lock-free reads or not. This ticket probably can only be tested through a unit test or dbtest until at least the follow-up work is done.
0
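The flagging idea above can be sketched in a few lines. This is a hedged illustration in Python (the real class is C++; the enum and attribute names here are invented for the sketch): an enum constant passed at construction decides whether the lock is taken, while the rest of the setup is shared.

```python
# Sketch of the enum-flag approach for opting call sites into
# lock-free reads, one at a time.
import enum


class LockMode(enum.Enum):
    TAKE_LOCKS = 0   # default: acquire the MODE_IS collection lock
    LOCK_FREE = 1    # flagged: skip lock acquisition, same checks


class AutoGetCollectionForRead:
    def __init__(self, collection_name, mode=LockMode.TAKE_LOCKS):
        self.collection_name = collection_name
        # Only the lock acquisition differs between the two modes;
        # validity checks and read semantics stay identical.
        self.lock_taken = (mode is LockMode.TAKE_LOCKS)
```

Call sites migrate individually by passing `LockMode.LOCK_FREE`; untouched call sites keep the default behavior.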
Can't start after upgrading; I have to downgrade. Log:

{noformat}
I CONTROL  ***** SERVER RESTARTED *****
I CONTROL  MongoDB starting : dbpath=/var/lib/mongodb
I CONTROL  db version
I CONTROL  git version
I CONTROL  OpenSSL version: OpenSSL  Feb
I CONTROL  allocator: tcmalloc
I CONTROL  modules: none
I CONTROL  build environment:
I CONTROL      distmod:
I CONTROL      distarch:
I CONTROL      target_arch:
I CONTROL  options: { config: "/etc/mongod.conf", net: { bindIp, port }, processManagement: { fork: true }, security: { authorization: "enabled" }, setParameter: { enableLocalhostAuthBypass: "false" }, logLevel, storage: { dbPath: "/var/lib/mongodb", engine: "wiredTiger", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log" } }
D NETWORK  fd limit, max conn
I STORAGE  wiredtiger_open config:
D COMMAND  BackgroundJob starting: WTJournalFlusher
D STORAGE  starting WTJournalFlusher thread
E STORAGE  WiredTiger  WT_SESSION.create: table does not match existing configuration: Invalid argument
I -        Fatal Assertion
{noformat}
1
In order to adapt OplogEntry in IDL, the IDL compiler needs to support simply passing through BSONObj without parsing them:

{noformat}
object:
    bson_serialization_type: object
    description: "A BSONObj"
    cpp_type: "mongo::BSONObj"
{noformat}
0
In earlier versions, oplog entries are applied one at a time on each secondary; thus, on a restart after a shutdown, only a single op can be applied twice. In later versions, ops are applied in batches, so on a restart potentially all the ops in the last batch can be reapplied. There are certain sequences of ops that violate the idempotency guarantees needed to ensure proper replication. If we detect such a sequence, the secondary halts replication and shuts down, unable to come back online.
1
We track cursors by object location, so if an object is deleted we can advance. We weren't keeping these isolated by DB, so if a cursor is open at position X on DB A, and a delete on DB B removes an object at position X, a segfault can happen, because a cursor will be advanced to an illegal position. This is exacerbated by replication: once an oplog rolls over, every time we add an op we delete one, so it makes the odds of this higher.
1
In order to implement caching of LINQ queries, we need to compare two expressions to see if they are the same. This ticket represents the subtask of implementing an ExpressionComparer.
0
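The core idea of such a comparer can be illustrated outside of .NET. A hedged Python sketch under the assumption that two query expressions are "the same" for caching purposes when their syntax trees are structurally equal (the function name is invented; the real ExpressionComparer would walk C# expression trees instead of Python ASTs):

```python
# Sketch: compare two expressions by structural equality of their
# parsed syntax trees, ignoring irrelevant differences like whitespace.
import ast


def expressions_equal(src_a: str, src_b: str) -> bool:
    """True when both expression strings parse to identical ASTs."""
    tree_a = ast.parse(src_a, mode="eval")
    tree_b = ast.parse(src_b, mode="eval")
    # ast.dump produces a canonical textual form of the tree.
    return ast.dump(tree_a) == ast.dump(tree_b)
```

A cache could then key compiled queries on the canonical dump, so equivalent expressions hit the same entry.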
The release notes link to the WiredTiger docs; those are out of date. We are using code that's in line with a newer release. There are some pretty significant changes, in particular to terminology, that might really confuse people.
1
Failing to write to the database, only in a replica set configuration; reads are working fine. It also works fine in the previous version. Looks like it is failing to select the primary instance. Full stack trace:

client.insert_one (vpa, coolcool)
Mongo: no server is available matching preference, using ... and ...
  from select_server
  from next_primary
  from legacy_write_with_retry
  from write_with_retry
  from block in insert_one
  from with_session
  from insert_one
  from exit
0
getLastError returning old/wrong/no data. The code that we are running works when not sharded. We have a shard setup: each shard with servers in a replica set, config servers, and mongos running on the application server. All queries are using safe mode. After doing an update with a query that would match documents, the following is returned via getLastError: { updatedExisting: true, lastOp: ..., err: null }. No documents will have been updated, so n should be wrong, as well as updatedExisting. When using a query that does match a document and does successfully update, occasionally we will get the following from lastOp: { err: null, writeback: ..., updatedExisting: true, writebackGLE: { lastOp: ..., err: null } }. n is sometimes the same, if that means anything.
1
Profiling our use case showed a lot of time being spent, with call stacks as per the below, throwing exceptions while parsing JSON. What seems to be happening: we have a JSONCallback on which we call arrayDone(). That's not implemented by JSONCallback but by BasicBSONCallback, which implements arrayDone() by calling objectDone(). There's some attempt to skip the bulk of JSONCallback.objectDone() if the last object was an array, via the _lastArray member of JSONCallback, but that's flawed. If you have an array of objects, you'll get callbacks: arrayStart() (sets _lastArray true), objectStart() (sets _lastArray false), objectDone(), arrayDone(), objectDone() (_lastArray is false). Rather than this messing about with _lastArray, surely implementing arrayDone() as super.arrayDone(), so that BasicBSONCallback can maintain its stack state, would be simpler and more correct, and would avoid the problem below. Currently, when parsing an array of objects, we execute all of JSONCallback.objectDone() for the array, calling containsField() for each of the supported extended JSON attributes. The implementation in BasicBSONList is to attempt to parse its argument as an integer, which throws an exception. This is problematic, as creating exceptions is expensive; as it happens, this was many stack frames down, so each exception's fillInStackTrace() as it's created is quite expensive. An alternative fix is to make BasicBSONList.containsField() check the first character: if it's not in the digit range, then just return false, bypassing the exception codepath for this common case.

at java.lang.Throwable.fillInStackTrace(Native Method)
  - locked (a java.lang.NumberFormatException)
at ...
at ...
0
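The proposed fast-path above is easy to sketch. A hedged Python illustration (the function name mirrors the Java containsField, but this is not the actual library code): check the first character before attempting integer parsing, so the common lookup of a non-numeric field name never pays for a thrown and caught exception.

```python
# Sketch of the fix proposed for BasicBSONList.containsField: a
# list-like container only has integer keys, so a name whose first
# character is not a digit can never match, and we can return False
# without ever entering the exception-throwing int-parse path.
def contains_field(keys, name: str) -> bool:
    if not name or not name[0].isdigit():
        return False
    return int(name) in keys
```

With this guard, lookups such as `contains_field(keys, "$date")` during extended-JSON detection become a cheap character test instead of a NumberFormatException.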
Currently, an index build for an empty collection does the following steps. We reserve an oplog slot with a timestamp (say T1) and write the startIndexBuild oplog entry with that timestamp. We reserve another oplog slot with a timestamp (say T2) and use that to write the commitIndexBuild oplog entry; to be noted, as a result of this step, all future writes done to that WT recovery unit will use timestamp T2. Now we start the index build, which updates the catalog table to write the index entry with the "ready" field as false, using the commitIndexBuild oplog entry's timestamp. On index completion/success, we again update the catalog table to write the index entry with the "ready" field as true, using the commitIndexBuild oplog entry's timestamp. Now suppose my stable timestamp (checkpoint timestamp) is T1, i.e. the startIndexBuild oplog entry; then we would miss rebuilding the index while replaying the commitIndexBuild oplog entry during rollback and startup recovery. This is because, when the node tries to replay the commitIndexBuild oplog entry during rollback/startup recovery, it will fail with ErrorCodes::NoSuchKey, since it won't be able to find any corresponding running/in-progress index build. Also, when the commitIndexBuild oplog entry fails with ErrorCodes::NoSuchKey, we don't escalate it to the caller (the oplog applier); instead we suppress it, as ErrorCodes::NoSuchKey is an acceptable error code during oplog application.
0
This is probably a bug in Ruby; I opened an issue at RubyMine.
0
Hey, I am trying to run the following query:

{code}
db.items.find({ _id: ... })
{code}

The query is supposed to return one document with that _id, but for some reason it gives a strange result:

{code}
{ _id: ... }
{code}

It seems that something weird happens with the default _id index, but I can't recreate it. Let me know, guys, how I can help reproduce this issue or how I can solve it. Thank you.
1
In some workloads there are a lot of connections to one or many mongod instances which run as a shard. We then start getting these errors in dmesg: "TCP: request_sock_TCP: possible SYN flooding on port. Sending cookies. Check SNMP counters." followed by "TCP: request_sock_TCP: possible SYN flooding on port. Sending cookies. Check SNMP counters." After the workload is gone, the mongod instance responds very slowly to some very simple queries. If we restart the mongod instances, the problem goes away. We have increased somaxconn, the TCP SYN backlog, and TCP memory (rmem/wmem), but the issue is not fixed. Is this old ticket still related? Thank you.
0
When pressing Ctrl+F, I usually want to change the search criteria, so selecting the text would make that faster.
0
As a first step towards the perf dashboard POC.
0
Epic summary: brief overview of goal and deliverable. Motivation: based on our existing user research (data from surveys, etc.), we have concluded that what users want is for MongoDB to have better integrations with the tools that they use. Below is a collection of tickets that are specific to the PHP community, where we understand the following: the dominant IDEs for PHP developers are PhpStorm and VS Code; the dominant frameworks are Laravel and Symfony; most PHP developers use container technologies like Docker and Rocket. Therefore we will, as part of our strategy, include integrations with these items to the extent we are able. Cast of characters: engineering lead, document author, POCers, product owner (Rachelle Palmer), program manager, stakeholders. Slack channel. Product description panel.
0
Can't install on CentOS with yum.
1
Hi. With insertOne it is possible to insert documents, but no more than one with the same _id (specifying _id in the inserted doc). If I try once more, I get an HTTP error. Hope this helps.
0
We are trying to set up a recovery strategy from multiple masters to a single slave. The current implementation of "only" will allow only a single database per host to be replicated, but we need a way to replicate a small subset of databases (i.e., a few out of the full set of databases). I have tried a workaround, to no avail, which was adding multiple source records with different "only" parameters, but it does not seem to work.
1
An enterprise OSX TLS build was added, and the OpenSSL one was kept around for test purposes, but the push task wasn't removed. Testing variants should not be pushing binaries. We also have a number of variants using the task group which runs a push task; they should be fixed as well.
0
I have tried to start a secondary and failed to get the replication started. Then I updated the secondary and it also fails to replicate. What I see is that the optime for the secondary is stuck at the time of the backup. The steps from booting to secondary look like they happen over time; then the server just never gets caught up to the optime of the primary.
1
The config server can target an old shard primary that has yet to find out that there is a new shard primary for setShardVersion. The old shard primary can step down while executing the setShardVersion and continue executing setShardVersion as a secondary, as long as the stepdown occurs before here, because stepdown only kills operations that have taken a MODE_IX, MODE_S, or MODE_X lock. The old shard primary will then send _flushRoutingTableCacheUpdates to the new shard primary. If the new shard primary does not have an entry for the database in its CatalogCache, it will not schedule a collection refresh against the ShardServerCatalogCacheLoader, so the config.cache entries for the dropped collection will not get deleted. After the _flushRoutingTableCacheUpdates returns, the old shard primary (now secondary) will think its filtering table is up to date and will continue to accept reads for the dropped collection, even if the collection has been recreated elsewhere, which violates causal consistency.
0
Hello, I have a question about the behaviour of accepts_nested_attributes_for for has_many relations, specifically removing related objects on assignment. If allow_destroy is set and _destroy is passed, it seems that the intended behaviour for has_many is indeed to persist changes immediately; on the other hand, this behaviour is different for embeds_many. Most notably, assume the following validation on a Person model:

class Person
  include Mongoid::Document
  has_many :posts
  accepts_nested_attributes_for :posts, allow_destroy: true
  validates :posts, presence: true
end

Now it is possible to remove all posts from an existing person, resulting in it being invalid (since it has no posts), using person.update(posts_attributes: ...). The update will fail (return false), but the posts will be removed. It doesn't look like correct behaviour. Best, Michał.
0
{noformat}
Thread (LWP):
    in __wt_readlock
    in __wt_session_lock_dhandle
    in __wt_session_get_btree
    in __wt_conn_btree_apply
    in __wt_curstat_init
    in __wt_curstat_colgroup_init
    in __wt_curstat_init
    in __wt_curstat_open
    in __wt_curstat_table_init
    in __wt_curstat_init
    in __wt_curstat_open
    in __wt_open_cursor
    in mongo::exportTableToBSON(WT_SESSION*, std::basic_string<...> const&, std::basic_string<...> const&, mongo::BSONObjBuilder*)
    in mongo::appendCustomStats(mongo::OperationContext*, mongo::BSONObjBuilder*, double) const
    in mongo::storageSize(mongo::OperationContext*, mongo::BSONObjBuilder*, int) const
    in mongo::sizeOnDisk(mongo::OperationContext*) const
    in mongo::run(mongo::OperationContext*, std::basic_string<...> const&, mongo::BSONObj&, int, std::basic_string<...>&, mongo::BSONObjBuilder&, bool)
    in mongo::_execCommand(mongo::OperationContext*, mongo::Command*, std::basic_string<...> const&, mongo::BSONObj&, int, std::basic_string<...>&, mongo::BSONObjBuilder&, bool)
    in mongo::execCommand(mongo::OperationContext*, mongo::Command*, int, char const*, mongo::BSONObj&, mongo::BSONObjBuilder&, bool)
    in mongo::_runCommands(mongo::OperationContext*, char const*, mongo::BSONObj&, mongo::BufBuilder&, mongo::BSONObjBuilder&, bool, int)
    in mongo::newRunQuery(mongo::OperationContext*, mongo::Message&, mongo::QueryMessage&, mongo::CurOp&, mongo::Message&, bool)
    in mongo::assembleResponse(mongo::OperationContext*, mongo::Message&, mongo::DbResponse&, mongo::HostAndPort const&, bool)
    in mongo::process(mongo::Message&, mongo::AbstractMessagingPort*, mongo::LastError*)
    in mongo::handleIncomingMsg(void*)
{noformat}

and

{noformat}
Thread (LWP):
    in __lll_lock_wait
    in pthread_mutex_lock
    in __wt_evict_file_exclusive_on
    in __wt_cache_op
    in __wt_checkpoint_close
    in __wt_conn_btree_sync_and_close
    in __wt_conn_dhandle_close_all
    in __wt_schema_drop
    in __wt_schema_drop
    in mongo::drop(mongo::StringData const&)
    in mongo::dropIdent(mongo::OperationContext*, mongo::StringData const&)
    in mongo::commit()
    in mongo::commit()
    in mongo::commit()
    in mongo::run(mongo::OperationContext*, std::basic_string<...> const&, mongo::BSONObj&, int, std::basic_string<...>&, mongo::BSONObjBuilder&, bool)
    in mongo::_execCommand(mongo::OperationContext*, mongo::Command*, std::basic_string<...> const&, mongo::BSONObj&, int, std::basic_string<...>&, mongo::BSONObjBuilder&, bool)
    in mongo::execCommand(mongo::OperationContext*, mongo::Command*, int, char const*, mongo::BSONObj&, mongo::BSONObjBuilder&, bool)
    in mongo::_runCommands(mongo::OperationContext*, char const*, mongo::BSONObj&, mongo::BufBuilder&, mongo::BSONObjBuilder&, bool, int)
    in mongo::newRunQuery(mongo::OperationContext*, mongo::Message&, mongo::QueryMessage&, mongo::CurOp&, mongo::Message&, bool)
    in mongo::assembleResponse(mongo::OperationContext*, mongo::Message&, mongo::DbResponse&, mongo::HostAndPort const&, bool)
    in mongo::process(mongo::Message&, mongo::AbstractMessagingPort*, mongo::LastError*)
    in mongo::handleIncomingMsg(void*)
{noformat}
1
panel title: Issue status as of Feb

Issue description and impact: In the affected MongoDB version, WiredTiger fails to parse the desupported huffman_key option during table creation. This prevents initial syncs and mongorestores of collections created prior to that version, if any collections were created with a user-supplied WiredTiger configuration string. When support for Huffman encoding of keys was removed, collections created prior to that version still contained the huffman_key option that was provided at collection creation time. Attempts to reuse this option via initial sync and mongorestore with options trigger the bug.

Diagnosis and affected versions: This bug exists in the affected MongoDB version and affects collections created in any earlier version. Methods for doing this include explicit createCollection commands that specify a storageEngine.wiredTiger.configString value that includes huffman_key, and any collection creation performed while mongod was running with the wiredTigerCollectionConfigString parameter enabled. For these collections, an initial sync fails and logs a "collection clone failed" message with an "unknown configuration key: huffman_key: Invalid argument" error. mongorestore also fails with:

{code}
error running create command: BadValue: Invalid argument: wiredtiger_config_validate: config_check_search: unknown configuration key: 'huffman_key': Invalid argument
{code}

Remediation and workarounds: Upgrade to the fixed version to resync or mongorestore. It is not currently possible to change a user-supplied WiredTiger collection configuration string in place. If necessary to sync a node on the affected version, use the "sync by copying data files" method, or start the new node using the earlier version and upgrade it when the sync is complete. mongorestore can be used on the affected version with the --noOptionsRestore flag.

panel

Original description: As part of the removal work, the huffman_key encoding support is removed along with the configuration options to control it. Older versions of MongoDB wrote the huffman_key option to all the tables that are created in the WiredTiger metadata. Whenever these databases are upgraded to the newer versions, this leads to a problem in parsing the old configuration option that is no longer known.
1
In the MongoDB manual PDF I read, the HTTP links in that book are all wrong, because "manual-reference" has been changed to "manual". So I wish you could fix it. Thx.
1
I think repair just takes a big global X lock or something, and supposedly doesn't use much in terms of WUOW (WriteUnitOfWork), though I think it would need one somewhere. This needs to be explored more.
0
Spruce separates display tasks into individual execution tasks in the patch details view; in contrast, the legacy UI only exposes execution tasks through their display task. When a display task encounters a system failure due to an AWS host being terminated, Evergreen will restart it rather than mark the task as a system failure. In Spruce, because the execution tasks are broken out, they will appear as failed with a system failure. This implies that there's action required from the user to address the system failure, when in reality the task will restart and the failure should resolve on its own. A potential solution is to display these system-failed execution tasks as in progress, since for all intents and purposes the task has not yet completed. The task details page can provide further information on whether a task is truly in progress or slated to be restarted, for users interested in differentiating between those two states.
0
how do i call it i tried dbcollstats but get typeerror property collstats of object mydb is not a function
1
starting mongod community version results in an immediate illegal instruction error with intel xeon cpus
0
codecodevar mongodb functionerror db dbdropdatabasefunction dbcollectiontestinsert name val result function dbcollectiontestfindone name val functionerror doc consolelogjsonstringifydoc a result of the following code in mongodbcore
1
compass version continually prompts for the keychain password at startup even when always allow is selected
1
unless there is a good reason to not do this we should make this change to allow motors implementation of convenient transactions api to use only public pymongo apis it is easier to use an intransaction flag method than manipulating the txnstate enum we should remember to update motors pymongo dependency when this is done
0
with the introduction of config servers as replica sets shard hosts will checkpoint the latest config server optime after each metadata operation they perform and if necessary on startup will contact the config servers primary in order to recover the minoptime that they should be using this poses problems with restoring a host which was previously backed up with this recovery information because it will attempt to connect a potentially defunct config server for this reason we need to either have a separate argument which allows the recovery information to be ignored or we should start requiring shardsvr to be specified for all shard servers
0
in order to update unique index catalog data setfcv runs a special collmod for each unique index and waits for those to be majority committed the collmod command itself cannot run while background index builds are in progress on the same collection therefore the setfcv command will return an error on a primary or wait until all index builds are done blocking replication on a secondary we should investigate ways of mitigating this behavior
0
this issue has been repurposed to create regression tests for tailable cursor iteration original description quote while attempting to implement a regression test for that iterates on a tailable cursor i ran into a brick wall we obviously cannot use foreach for such iteration as rewinding is not permitted wrapping the cursor with iteratoriterator to get direct access to next valid and current methods essentially an api for hasnext and getnext also did not seem to work correctly and would be a convoluted solution even if it did immediately work quote
0
add a variant that runs the implicit multiversion tests with all feature flags itll be identical to the current implicit multiversion variant otherwise
0
we need to add support to the distros page cc
1
here is the code codetitlecborderstylesolid public class mainmongo public mongodbbsonobjectid id get set public class dbref mongodbdrivermongodbref public dbrefstring colname mongodbbsonbsonvalue id basecolname id public t value get return new mongodbdrivermongoclientmongodblocalhostgetservergetdatabasewikfetchdbrefasthis public class car mainmongo public datetime builddate get set public class player mainmongo public string name get set public list cars get set here is code that uses database var carcol mdbgetcollectioncar var playercol mdbgetcollectionplayer carcoldrop playercoldrop var ca new car builddate datetimenow carcolsaveca var pl new player name mohsen cars new listnew dbref new dbrefcar caid playercolsavepl var getcardbref playercolasqueryableoftypefirstordefaultcarsfirstordefault bool getcardbrefcollectionname null true bool getcardbrefdatabasename null true bool getcardbrefid null true code all values of dbref which has inherited from mongodbref are null
1
have attempted to install several compass beta releases each time when i launch the compass beta version it begins migrating my settings and then hangs on the attached screen with the wait indicator circling
1
attempting to save a document with results in a duplicate key error if a document already exists with that id i suspect it is not detecting the presence of and trying to do an insert instead of upsert demonstration requiremongodbdbconnectmongodblocalhosttest functionerr client clientcollectiontest function err collection collectionremove safe true function err result collectionsaveid name foo safe true function err result consolelogsaved result collectionsaveid name foo safe true function err result clientclose if err return consoleerrorerr else consolelogsaved result
0
tmp collections can be left behind by mr operations should be periodically be cleaning them when mongod is running
0
using the same code as shown here i noticed that after updating to the driver via nuget that all my unit tests were failing i tracked this down to the collectioninsertdocument was failing to give an objectid value even though the representation was set to objectid for an interface member before and after the call to insert the id property value was null if you need a sample i can try to put one together
1
hello what is the right synchronous use of motor we need both sync and async connections for my project when trying motorclientdelegate or motordatabasedelegate i got the following self pair localhost def connectself pair copy of poolconnect avoiding call to sslwrapsocket which is inappropriate for motor todo refactor extra hooks in pool childgr greenletgetcurrent main childgrparent assert main should be on child greenlete assertionerror should be on child greenlet
1
need to instrument the code to when ldap errors are created and eventually notify the fault manager
0
in the update operators section i believe you have the descriptions of the minmax operators reversed the min operator should check if a value is greater than specified and the max operator should check if a value is less than specified
1
there is currently a lot of auth code that is just replicated across auth providers and makes a lot of noise we should reconcile it all into one base class
0
release notes for provide misleading incorrect descriptive text next to actually appears to specify next to two different fixed bugs
0
change mongodb setup code to use config files
0
we currently allow and larger
1
according to address sanitizer report a double free is happening inside documentsourceoptimizeat here’s the sanitizer output addresssanitizer heapuseafterfree on address at pc bp sp read of size at thread in mongodispose in mongodisposeunsigned long in mongododispose homeubuntumongosrcmongodbpipelinedocumentsourceteeconsumer in mongodispose in mongodispose in mongodisposemongooperationcontext in mongododispose in mongodispose in mongodooptimizeatstdlistiterator stdallocator homeubun in mongooptimizeatstdlistiterator list stdallocator homeubuntumongosrcmongodbpip in stdallocator in mongooptimizepipeline in mongobuildpipelinemongodocument const homeubuntumongosrcmongodbpipelinedocume in mongodogetnext in mongogetnext in mongogetnext in mongotrygetnext in mongogetnext is located bytes inside of region freed by thread here in operator deletevoid unsigned long in mongodocumentsourcematch homeubuntumongosrcmongodbpipelinedocumentsourcematchh in mongointrusiveptrreleasemongorefcountable const in boostintrusiveptrintrusiveptr homeubuntumongosrcthirdpartyboostboostsmart in void gnucxxnewallocator destroy boostintrusiveptr in void stdallocatortraits destro y stdallocator boostintrusiveptr in stdallocator merasestdlistiterator in stdallocator erasestdlistconstiterator in mongopushmatchbeforestdlistiterator stdcx stdallocator homeubuntumongosrcmongod in mongoattempttopushstagebeforestdlistiterator stdallocator homeubuntumongosr in mongooptimizeatstdlistiterator list stdallocator homeubuntumongosrcmongodbpi in stdallocator in mongooptimizepipeline in mongobuildpipelinemongodocument const homeubuntumongosrcmongodbpipelinedocume in mongodogetnext in mongogetnext in mongogetnext in mongotrygetnext in mongogetnext code it looks like that we’re deleting some document source at attempttopushstagebefore and then delete it again at dooptimizeat codejavapipelineiterator documentsourceoptimizeat pipelineiterator itr pipelinesourcecontainer container invariantitr this attempt to swap itr with a subsequent stage if applicable if attempttopushstagebeforeitr container the stage before the pushed before stage may be able to optimize further if there is such a stage return stdprevitr containerbegin stdprevitr stdprevstdprevitr return dooptimizeatitr container code
0
the migratefromstatus data structure is largely synchronized by the lock on the collection containing the chunk that is currently migrating this implementation assumes that modifications of a collection exclusively lock that collection which is no longer the case for documentlevel locking storage engines
1
i have a mongodb cluster because the server is only available on private interfaces access control is not used when i attempt to connect to the cluster via this error occurs auth error sasl conversation error unable to authenticate using mechanism authenticationfailed authentication failed however when i dont specify the database the connection succeeds the same connection string with the database specified works fine with other drivers additional notes given that this particular issue has been fixed as of the release if you are still seeing the above error the most likely cause is simply that your credentials are incorrect
1