text_clean: string (lengths 8 to 6.57k)
label: int64 (values 0 or 1)
A catch-all ticket to track commits related to the performance and correctness of the lock manager.
1
We've added a new "hello" command that drivers/clients can use; however, we still extensively use the "isMaster" command internally in the RSM and various network/transport layers. We probably cannot change most instances, since we must use isMaster to communicate with servers that don't support other protocols. If we can't change the command we're using, perhaps we can still change local variables to reflect the new "hello" terminology. Either way, we should audit files in the client, executor, transport, and rpc directories to determine which instances are worth changing.
0
The Windows toolchain now offers a whole-archive flag, apparently analogous to -Wl,--whole-archive. We should evaluate whether this flag works around the prior issues with static initializers and, if so, consider adding it to libdeps.py.
0
The listCollections database method passes through the database's read selector instead of constructing a new read selector for read preference primary (compare to how listDatabases does it). Because the constructed command adds read preference primary despite possibly selecting a non-primary server, listCollections run on a database with secondary read preference will fail with an error: NotMasterNoSlaveOk, "not master and slaveOk=false".
1
Hello, I have an interesting problem with indexing and I need additional help. I use MongoDB for our multi-site CMS system and we put all kinds of content into the same contents collection. Our document is like:
{code}
{ _id, _t, application: "comcnnturk", status, contenttype: "article", path: "dunya", title, description, text, startdate }
{code}
_t: the C# driver puts this field in to hold inheritance. application: we separate the different web sites with this field. status: active, passive, etc. contenttype: the content type (article, video, episode, etc.). path: our folder; for example we put all world articles under the "dunya" path, and in any path we have approximately … contents (turkiye, ekonomi, etc.). startdate: contents appear on the web sites after this date.
We have the following indexes, and we are mostly following the way of putting the sort field at the end of the index, but I couldn't get good performance. When I try to explain those queries I see the following results:
{code}
cursor: "BtreeCursor … multi", isMultiKey: true, n: …, nscannedObjects: …, nscanned: …, nscannedObjectsAllPlans: …, nscannedAllPlans: …, scanAndOrder: true, indexOnly: false, nYields: …, nChunkSkips: …, millis: …
cursor: "BtreeCursor … multi", isMultiKey: false, n: …, nscannedObjects: …, nscanned: …, nscannedObjectsAllPlans: …, nscannedAllPlans: …, scanAndOrder: true, indexOnly: false, nYields: …, nChunkSkips: …, millis: …
{code}
In the log file, millis and nscanned are very different and very high for those queries. Another situation: even though those queries don't use the indexes that start with startdate, when I drop those indexes the queries slow down and millis goes even higher. Another problem is with count queries; for example, the following query:
{code}
db.contents.find({ application: "comcnnturk", status: …, path: "spor", contenttype: { $in: […] } }).count()
{code}
looks very slow in the log file:
{code}
Sat Nov … command quarktestcnn.$cmd command: { count: "contents", query: { application: "comcnnturk", status: …, path: "spors", contenttype: { $in: […] } } } … numYields: … locks(micros) …
{code}
By the way, I'm sending the db stats, collection stats, and index stats:
{code}
{ db: "quarktestcnn", collections: …, objects: …, avgObjSize: …, dataSize: …, storageSize: …, numExtents: …, indexes: …, indexSize: …, fileSize: …, nsSizeMB: …, ok: … }
{ ns: "quarktestcnn.contents", count: …, size: …, avgObjSize: …, storageSize: …, numExtents: …, nindexes: …, lastExtentSize: …, paddingFactor: …, systemFlags: …, userFlags: …, totalIndexSize: …, indexSizes: { _id_: …, … }, ok: … }
{code}
We have a … machine replica set, … GB RAM (mongo uses … GB), RAID spindle disk, RedHat Enterprise Linux, Intel(R) Xeon(R) CPU; I set all ulimits as in your documents. Thanks for your help in advance.
1
At …'s suggestion, I'm opening this separately from … to highlight the impact on _id specifically. A bug in the Perl driver revealed that it's possible to insert a document with duplicate/multiple _id fields:
{noformat}
db.duptest.find()
{ _id: …, _id: … }
{ _id: …, _id: … }
{noformat}
I reproduced this against … and ….
1
Sort order in queries will be limited to the following values: 1 for ascending, -1 for descending, and { $meta: "textScore" } for text search (descending by score). All other values will be rejected by the query framework.
0
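For illustration, a hedged shell sketch of sort specifications that remain valid under this rule versus one that would be rejected; the collection and field names are assumptions, and the $meta sort presumes a text index exists.

{code:javascript}
// Hypothetical examples of the allowed sort values.
db.articles.find().sort({ createdAt: 1 });    // 1: ascending
db.articles.find().sort({ createdAt: -1 });   // -1: descending
db.articles.find({ $text: { $search: "mongodb" } })
           .sort({ score: { $meta: "textScore" } });   // text search, descending by score

// Any other sort value would be rejected by the query framework, e.g.:
// db.articles.find().sort({ createdAt: 2 });
{code}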
After some sane grace period (e.g., a few months), the project validation warnings should be converted into errors.
0
I was wondering why PyMongo's as_class option is not passing attributes to the object's constructor, similar to :transformer in Ruby's Mongo driver. It would be more convenient to have the attributes passed to the object's constructor, as I would be able to treat data pulled from Mongo differently during my object construction. Could this be done in PyMongo?
0
The following command doesn't find anything if the type of numeroPedido is a number, but if the type is a string the command works normally: db.pedidos.find(…).sort(…).limit(…).skip(…).
1
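A minimal shell sketch of the type-sensitivity being described; the collection, field, and values are assumptions taken from the report. A query literal only matches stored values of a compatible BSON type, so a string literal will not match a number.

{code:javascript}
// Hypothetical reproduction: the stored value's type determines what matches.
db.pedidos.insert({ numeroPedido: 12345 });            // stored as a number
db.pedidos.find({ numeroPedido: "12345" }).count();    // 0: string does not match the number
db.pedidos.find({ numeroPedido: 12345 }).count();      // 1: matching type finds the document
{code}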
As each() is marked deprecated, I used forEach(), which gave me the following warning: ReferenceError: callback is not defined. Looking into cursor.js I discovered that Cursor.prototype.forEach is calling each(), passing in a callback function that is not defined in that function, and this is a breaking change.
1
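For context, a hedged Node.js sketch of the usage pattern that hits this path; the connection string, database, and collection names are assumptions. cursor.forEach takes an iterator plus a completion callback, which is the call the report says was being forwarded incorrectly to each().

{code:javascript}
const { MongoClient } = require('mongodb');

async function run() {
  // Connection string and names are placeholders for illustration.
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const cursor = client.db('test').collection('coll').find({});

  // forEach(iterator, callback): iterator runs per document,
  // callback fires once when iteration ends (or on error).
  cursor.forEach(
    doc => console.log(doc),
    err => {
      if (err) console.error(err);
      client.close();
    }
  );
}
run();
{code}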
Description: {panel:title=Downstream change} A new MovePrimaryInProgress error will be returned by the following commands on an unsharded collection on a database that is being migrated with the movePrimary command. DML: insert, update, findAndModify, delete, aggregate (if using $out and $merge stages), mapReduce (with out on the db being migrated). DDL: create, createIndexes, collMod, convertToCapped, renameCollection, dropIndexes. The following DDL will return the existing LockBusy error, as they currently do: drop, dropDatabase. {panel} Description of linked ticket: add MovePrimaryInProgress error code. Scope of changes: … Impact to other docs: … MVP (work and date): … Resources (scope or design docs, InVision, etc.): …
0
With the C driver's new error API in the upcoming release, we can finish the C interface error handling.
0
Lots of new tests have been added; please resync.
0
After branching the new version, master should use the next version's signing key. Reference: …
1
I am unable to use connect/from_uri to create multiple clients on different databases; all my clients connect to the same host. But when I initialize new clients directly (Mongo::Client.new with localhost) and then specify the database, the clients successfully connect to different databases.
1
I recently encountered a use-after-move bug in getExecutorForSimpleDistinct() that can happen when … is set. The goal of this task is to fix the use-after-move bug in getExecutorForSimpleDistinct().
0
It seems all server patches are trying to use enterprise at …; however, that commit has a duplicate uassert error code, which was fixed in the following commit: …. This is causing all server patches to fail at the compile stage unless the user explicitly copies the patch over and uses evergreen set-module.
1
This page should also mention sslClientCertificate. It is confusing that this is only mentioned on the "MongoDB SSL settings" page linked to from this one.
0
The functions defined within the $config object of an FSM workload are executed by multiple threads in the following way: the $config.setup function is executed once by the main thread; the $config.states functions are executed $config.iterations times by each of the worker threads; the $config.teardown function is executed by the main thread. The changes from … made it so that a session is started with causal consistency enabled for each of the FSM worker threads. The changes from … made it so that a readPreference of secondary is used, which made it possible for the $config.setup function to do a write to the primary and for a $config.states function to do a read from a secondary. The main thread must forward its operationTime and clusterTime after setup to each of the worker threads, to ensure that a read from a secondary will wait until the write has been applied.
1
The API server has crashed multiple times due to a lack of memory.
1
Update the test to confirm that we read the correct values from lookaside.
0
timelib changed its fraction struct member "f" to a long long member "us".
1
jstests/core/background_validation.js is using the pauseCollectionValidationWithLock fail point, but the jstests/core tests are parallelized by the basic.js/basicPlus.js parallel suites, so using fail points is unsafe as it might block a different test running in parallel. jstests/core/background_validation.js won't need all of the tags it currently has.
0
Iterators that point to different containers are compared. Defect: static C/C++ checker MISMATCHED_ITERATOR, subcategory mismatched_comparison. File: src/mongo/util/net/ssl_options.cpp, function mongo::storeDisabledProtocols(const std::… allocator …, mongo::DisabledProtocolsMode).
src/mongo/util/net/ssl_options.cpp, line …: {color:red}validNoConfigs.find(token) returns an iterator for validNoConfigs{color}: mappedToken = validNoConfigs.find(token)
src/mongo/util/net/ssl_options.cpp, line …: {color:red}Assigning mappedToken = validNoConfigs.find(token){color}: mappedToken = validNoConfigs.find(token)
src/mongo/util/net/ssl_options.cpp, line …: {color:red}validConfigs.end() returns an iterator for validConfigs{color}: if (mappedToken … validConfigs.end()) …
src/mongo/util/net/ssl_options.cpp, line …: {color:red}Comparing mappedToken (from validNoConfigs) to validConfigs.end() (from validConfigs){color}: if (mappedToken … validConfigs.end()) …
0
MongoDB Manual: "The mongo Shell" is a "not found" page.
0
A few customers have asked how to back up the MMS Backup data itself. We did suggest a few alternatives in discussions with customers; however, we do not have an official recommendation or documentation on the subject.
1
Rename collection is not supported in initial sync after …. Currently I am using …; I upgraded the slave which I want to initial-sync to …, but the issue persisted the same. Please help: in which version is this bug fixed, or is there any alternative for the same?
0
Bisected down to commit …. Running parallel M/R jobs and killing mongod via Ctrl-C, mongod fails to start up due to an invalid record in a temp collection:
{noformat}
Fri Feb … warning: soft rlimits too low. Number of files is …, should be at least …
Fri Feb … db version …, pdfile version …
Fri Feb … git version: …
Fri Feb … build info: Darwin leaf.local … Darwin Kernel Version …: Thu Aug … PDT …
Fri Feb … allocator: tcmalloc
Fri Feb … options: …
Fri Feb … journal dir=… datadb journal
Fri Feb … recover: no journal files present, no recovery needed
Fri Feb … Assertion: BSONObj size … is invalid. Size must be between … and …, first element: name: …
(mongod backtrace frames …, main, start)
Fri Feb … exception in initAndListen: BSONObj size … is invalid. Size must be between … and …, first element: name: …, terminating
Fri Feb … dbexit: …
{noformat}
1
db.errortypes.getIndexes() shows: { name: "_id_", ns: "prodec.errortypes", key: { _id: … } }; { key: { appid: …, digest: … }, ns: "prodec.errortypes", background: true, name: "…" }; { key: { appid: … }, ns: "prodec.errortypes", background: true, name: "…" }; { key: { appid: …, appversion: …, isresolved: …, updatedat: … }, ns: "prodec.errortypes", background: true, name: "…" }.
db.errortypes.find({ isresolved: false, appid: … }).sort({ updatedat: … }) returns: error: { $err: "too much data for sort() with no index", code: … }. But if I run the query with .explain(), a BtreeCursor does get used, though it seems that MongoDB picked the wrong one:
{code}
db.errortypes.find({ isresolved: false, appid: … }).sort({ updatedat: … }).explain()
{ cursor: "BtreeCursor …", nscanned: …, nscannedObjects: …, n: …, scanAndOrder: true, millis: …,
  indexBounds: { appid: […], digest: [[{ $minElement: … }, { $maxElement: … }]] } }
{code}
I added a hint to the query, but still the same error:
{code}
db.errortypes.find({ isresolved: false, appid: … }).sort({ updatedat: … }).hint(…)
{ $err: "too much data for sort() with no index", code: … }
db.errortypes.find({ isresolved: false, appid: … }).sort({ updatedat: … }).hint(…)
{ $err: "too much data for sort() with no index", code: … }
{code}
With a hint on updatedat, the explain shows: { cursor: "BtreeCursor …", nscanned: …, nscannedObjects: …, n: …, scanAndOrder: true, millis: …, indexBounds: { appid: …, appversion: [[{ $maxElement: … }, { $minElement: … }]], isresolved: [[false, false]], updatedat: [[{ $maxElement: … }, { $minElement: … }]] } }
1
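A hedged shell sketch of the usual remedy for this class of error, using the field names from the report; the index name, key directions, and appid value are assumptions. The idea is to build an index whose leading fields cover the equality predicates and whose trailing field matches the sort, so the query can return results in index order instead of sorting in memory.

{code:javascript}
// Hypothetical index supporting the reported query shape:
// equality on isresolved and appid, then a sort on updatedat.
db.errortypes.createIndex({ isresolved: 1, appid: 1, updatedat: -1 });

// With such an index, the sort is satisfied by the index itself,
// which is what the "too much data for sort() with no index" limit guards against.
db.errortypes.find({ isresolved: false, appid: 42 }).sort({ updatedat: -1 });
{code}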
We need to update the documentation to reflect the transition to clang-format.
0
ActiveRecord updates the existing attribute values to the new ones prior to the after_save callback being called:
{code}
class Cat < ApplicationRecord
  after_save do
    p self
    p attribute_was(:age)
  end
end
{code}
{code}
cat = Cat.first
# Cat Load: SELECT "cats".* FROM "cats" ORDER BY "cats"."id" ASC LIMIT …
# => #<Cat id: …, created_at: …, updated_at: …>
cat.save
# (transaction) begin transaction
# Cat Update: UPDATE "cats" SET "updated_at" = …, "age" = … WHERE "cats"."id" = …
# (transaction) commit transaction
# => true
{code}
Mongoid does not, and continues to return the previous value:
{code}
class Cat
  include Mongoid::Document
  field :age, type: Integer
  after_save do
    p self
    p attribute_was(:age)
  end
end
{code}
a = Cat.create prints …, nil; a.save prints …, nil and returns true.
1
It is my understanding that the specs repo does not currently have the older extended JSON corpus, since it was replaced in place by the newer extended JSON corpus. But since the Ruby driver implemented the older extended JSON, it did have the respective corpus tests.
- Restore as spec/spec_tests/data/corpus_legacy [x]
- Implement a test runner for this legacy corpus; I expect it to need to use mode: legacy when serializing [x]
- Verify that all of the legacy corpus tests as were present in bson … still pass in bson master [x]
- Consider updating the legacy corpus to …, which is the version immediately before the extended JSON merge [x]
0
Description: document that the ODBC manager app that ships with the current Mac ODBC driver will not work on macOS Catalina; see …. Scope of impact to other docs: … MVP (work and date): … Resources (scope or design docs, InVision, etc.): …
1
In the synchronous driver, a non-default SSLContext should be configurable as an alternative to configuring a socketFactory. This SSLContext should be used when SSL is enabled via the boolean property and no socketFactory has been configured. In the asynchronous driver, the SSLContext should also be configurable and used when SSL is enabled via the boolean property.
0
The navigation panel is on the bottom of all pages; it looks like this just started today. This seems to happen to pages that show up in search as … pages, not … pages. They aren't redirecting or somehow mixing old and new.
1
It would be more aesthetically pleasing if insertMany returned all inserted _ids. db.test.insertMany(…) returns values in insertedIds, while db.test.insertMany(…) does not return a value in insertedIds, even though there is one successful insert.
0
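A small shell sketch of the behaviour under discussion; the collection name and documents are assumptions. insertMany reports the generated _id values in insertedIds, and with ordered: false some inserts can succeed even when the overall call reports an error.

{code:javascript}
// Hypothetical illustration of insertedIds in the insertMany result.
var res = db.test.insertMany([{ a: 1 }, { a: 2 }]);
printjson(res.insertedIds);   // the _id values assigned to the inserted documents

// With { ordered: false } later documents are still attempted after an error,
// so one insert succeeds even though the duplicate _id fails.
try {
  db.test.insertMany([{ _id: 1 }, { _id: 1 }], { ordered: false });
} catch (e) {
  printjson(e);   // bulk write error; the successful insert is the case discussed above
}
{code}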
If the batchSize associated with the initial find command is zero, then the AsyncResultsMerger will send a batchSize of zero to the shards for subsequent getMore commands. Setting a batchSize of zero is illegal for the getMore command, and the mongod will return a BadValue error: "Batch size for getMore must be positive, but received: …". Instead, the getMore commands forwarded to the shards should not have a batchSize. In order to fix this, we should set the value of ClusterClientCursorParams::batchSize to boost::none rather than zero here.
0
This is related to the JIRA ticket (see linked ticket). We are moving in the direction of moving the journal file to a hypervisor mount. I did not find any document about how to specify the journal directory; it seems it is always under dbpath. How can we customize the journal to a different directory? Is a symlink a recommended solution?
1
I have found today that the official recommendation is to set vm.swappiness to …. I do believe this is a serious misunderstanding of the purpose of the swap space; apparently the documentation was worse, set to …, as I can see on …. From the official Linux documentation you can read the following: {quote}The casual reader may think that with a sufficient amount of memory, swap is unnecessary, but this brings us to the second reason. A significant number of the pages referenced by a process early in its life may only be used for initialisation and then never used again. It is better to swap out those pages and create more disk buffers than leave them resident and unused.{quote} Swap usage is not a problem by itself; the only problem which may affect performance is if the operating system is actively paging in and out, which can be tracked using vmstat. You can find many articles in relation to this subject. In the previous documentation ticket the value of … was suggested, which is better than …; however, I would not recommend changing anything unless you really know what you are doing and you have the tools to observe the behavior and the impact. Scope: based on internal discussions, we're going to remove this recommendation until performance testing completes and we have more data to back any recommendations in one direction or another. We will backport this removal to ….
0
Problem statement/rationale: In … the repos are signed with the … auth keys. This causes this issue:
{code:java}
Total size: … M
Installed size: … M
Is this ok: y
Downloading packages:
warning: … Header signature, key ID …: NOKEY
Retrieving key from …
The GPG keys listed for the "MongoDB Repository" repository are already installed but they are not correct for this package.
Check that the correct key URLs are configured for this repository.
Failing package is: …
GPG keys are configured as: …
{code}
1
What problem are you facing? My query is: [{ $match: { group: { $ne: null } } }, { $group: { _id: "$group", amount: { $sum: "$quantity" }, count: { $sum: … } } }, { $limit: … }]. Result in the shell: { _id: "fruit", amount: …, count: … }, { _id: "wjdbj", amount: …, count: … }, { _id: "wer", amount: …, count: … }. Result with the Node.js driver: MongoServerError, message: "the match filter must be an expression in an object"; stack: MongoServerError: the match filter must be an expression in an object. The query gives the proper result in the mongo shell but not with the Node.js driver. What driver and relevant dependency versions are you using? The MongoDB version installed is …; npm package mongodb v…. Steps to reproduce: …
1
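For reference, a hedged Node.js sketch of a pipeline with the reported shape that is valid in both the shell and the driver; the connection string, collection name, and values are assumptions. The reported error usually indicates that the $match stage received something other than a plain document, so the sketch keeps each stage as { <stageName>: <document> }.

{code:javascript}
const { MongoClient } = require("mongodb");

async function run() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const pipeline = [
    { $match: { group: { $ne: null } } },   // the filter is a plain object
    { $group: { _id: "$group", amount: { $sum: "$quantity" }, count: { $sum: 1 } } },
    { $limit: 10 },
  ];
  const docs = await client.db("test").collection("fruits").aggregate(pipeline).toArray();
  console.log(docs);
  await client.close();
}
run();
{code}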
I have a model named Booking. In the first request I save a new booking (booking.save). In the following request (i.e. index or show), doing Booking.all.to_a returns an array of BSON::Document objects, which is obviously wrong. If I restart the Rails server, Booking.all.to_a returns an array of Booking objects, which is the right thing. The same happens with where and each. I tried to debug it but I couldn't find the cause; any hint is well appreciated.
1
I noticed something very odd when installing the driver: if the install location path had the word "test" (in all lower case) at the beginning of any of the parent directories of the install location, the only headers that get installed are export.hpp, version.hpp, and config.hpp, even though when it goes to install it says "Install configuration: Release". I've noticed this on both Ubuntu and Mac, and with different compilers, so it doesn't seem to be an OS or compiler problem. If I use the same exact CMake commands and just change the install location so that it doesn't contain the word "test" at the beginning of the parent directories, it works. So it fails when the path is one of the following: usersnicktestingmongoexpmongocxxinstall, homehathawanmongoexptestmongocxxinstall, homehathawanmongoexptestholdmongocxxinstall; and it works with the following paths: homehathawanmongoexptestmongocxxinstall, homehathawanmongoexpholdtestholdmongocxxinstall. So I'm not sure what's happening; I'm guessing it's a CMake issue, but I don't know CMake very well, so I couldn't tell. Of course the workaround is to not install to a location that contains the word "test" at the beginning of the parent directories, but I thought it was odd and probably not intended behavior, so I thought I would bring it up. Nick
0
{panel:title=Useful lead POCs} See spec changes for details. {panel}
0
See …. The journaling (dur) stats in db.serverStatus() are counters from the last … (or maybe from the … seconds prior to that). While it is convenient to get recent stats for monitoring, this creates problems for external monitoring: now I need to read the stat once per … second interval to avoid missing a window, but I don't want to read the data twice within the same … second interval. My request is for a new set of counters displayed by db.serverStatus() that are journaling (dur) stats which don't get reset every … seconds.
0
It should be possible to add flags to the MongoClient build.
1
Request summary: please discuss with Nicholas whether changes are necessary, or …. Ticket description: the query planner does not trim bounds-generating inequality predicates from the expression tree when the value being compared to is one of the following BSON types: Object, undefined, regex, DBRef, Code, Symbol, CodeWScope. As a result, such predicates are not eligible for covering behavior and require an unnecessary additional match operation after the document is fetched from disk. Reproduce as follows:
{code:js}
> db.foo.drop()
true
> db.foo.createIndex(…)   // createdCollectionAutomatically: true, numIndexesBefore: …, numIndexesAfter: …, ok: …
> db.foo.find({ a: { $gte: function() { return …; } } }).explain().queryPlanner.winningPlan
{
  stage: "FETCH",
  // unexpected: FETCH does not need a filter with the $gte predicate here
  filter: { a: { $gte: function() { return …; } } },
  inputStage: { stage: "IXSCAN", keyPattern: { a: … }, indexName: "…", isMultiKey: false, isUnique: false,
                isSparse: false, isPartial: false, indexVersion: …, direction: "forward",
                indexBounds: { a: ["[function() { return …; }, CodeWScope…"] } }
}
> db.foo.find({ a: { $eq: function() { return …; } } }).explain().queryPlanner.winningPlan
{
  stage: "FETCH",
  // expected: no filter with the $eq predicate here
  inputStage: { stage: "IXSCAN", keyPattern: { a: … }, indexName: "…", isMultiKey: false, isUnique: false,
                isSparse: false, isPartial: false, indexVersion: …, direction: "forward",
                indexBounds: { a: […] } }
}
{code}
0
We should force all changes to be done via commands, or directly on the config db for maintenance/administration. This will remove the chance that accidents happen from rogue scripts/users.
1
BasicBSONObject.put() is unnecessary.
0
This is a follow-on ticket from …, with a set of small miscellaneous fixes and improvements identified whilst integrating CMake with Evergreen. These fixes include:
- extend the CMake define_build_mode to accept libraries
- unset HAVE_BUILD_MODE… cache vars on error paths
- fix ASan build mode flags for GCC builds
- add a coverage build mode
- separate compiler versions between C and CXX for toolchain files
- detect whether libclang fuzzer is available before building the fuzz test
0
db.test.find() returns { _id: …, v: … }. db.test.findAndModify({ query: …, update: { $inc: { v: … } }, new: true }) returns { _id: …, v: … }. db.test.findAndModify({ query: …, update: { $inc: { v: … } }, new: true, fields: { _id: … } }) returns …. The output of the last call should contain the v field. The same thing happens with the Python driver with new=False, and with multiple fields.
1
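A minimal shell sketch of the call shape being discussed; the collection name and values are assumptions. The fields option is a projection on the document returned by findAndModify, and new: true asks for the post-update document.

{code:javascript}
// Hypothetical example of findAndModify with a projection on the returned document.
db.test.insert({ _id: 1, v: 0 });

db.test.findAndModify({
  query:  { _id: 1 },
  update: { $inc: { v: 1 } },
  new:    true,            // return the post-update document
  fields: { v: 1 }         // project v (plus _id by default) in the returned document
});
{code}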
A recent change to etc/evergreen.yml moved the … specific tasks to … variants. The powercycle tests should be moved as well.
0
Hi, I'm trying to figure out what this sentence means, but without success.
0
NUMA is an issue on Windows, the same as on Linux, but we are not properly detecting it, nor do we have a proper warning about it in the docs (production notes). We specifically have this incorrect note: "The discussion of NUMA in this section only applies to Linux, and therefore does not affect deployments where mongod instances run on other UNIX-like systems or on Windows." In addition, NUMA can affect mongos, not just mongod, and the client application/our driver; there are many discussions out there about JVM and NUMA interactions, for example.
1
Getting the following error from the mongo shell when trying to examine the currentOp: error: "Error converting JS type to …" at …; assertion "converting JS type to …"; (mongo) Error: error converting JS type to … at ….
1
A common extension of IDL types is the generation of relops for those types, at the very least … and …, and we currently have free functions in the codebase which use both all fields as well as limited subsets, so a code-reducing implementation should probably support both of those modes. An example syntax might look like, for all fields: equality_compare: all, less_than_compare: all; for a subset: equality_compare: …, less_than_compare: …. Naming things is hard, but mimicking C++ standard concepts might be a good start.
0
initService() calls directly into runMongosServer().
1
After configuring the mongo-tools project for tag-triggered versions and pushing a tag called "test-tag", I saw that the tag was added to the existing version corresponding to the tagged commit, but no new version was created. For reference, see the attached screenshot of the tag-triggered versions config. More info from a Slack conversation with …: {quote}It looks like we weren't able to find any builds, but I see that release … is a valid build, so I'm not sure why it wasn't found.{quote}
0
The migration logic on the donor shard that performs the initial index scan for documents to clone does not handle invalidations properly, and will generate a truncated set of documents to clone if the executor is killed during the index scan. As a result, performing an index operation that invalidates plan executors at the same time that the initial index scan for a migration is yielding will cause some documents to not be transferred during the migration, and these documents will be deleted from the cluster during the next migration cleanup job. The following index operations invalidate plan executors and thus are able to trigger this issue: dropping an index with the dropIndexes command; aborting an index build with killOp; updating the TTL configuration for an index with the collMod command. This is a regression introduced in version … by … and affects all versions released since. The following script will reproduce this issue:
{code:js}
var numDocs = …;

// Set up cluster.
var st = new ShardingTest({ shards: … });
var s = …;
var coll = s.getDB("test").foo;
assert.commandWorked(s.adminCommand({ enableSharding: coll.getDB().getName() }));
assert.commandWorked(s.adminCommand({ shardCollection: coll.getFullName(), key: { _id: "hashed" } }));
for (var i = …; i < numDocs; i++) coll.insert({ _id: i });
assert.commandWorked(coll.ensureIndex({ a: … }));

// Check document count.
assert.eq(numDocs, coll.find().itcount());

// Configure server to increase reproducibility.
… internalQueryExecYieldIterations …
… setYieldAllLocksWait, mode: "alwaysOn", data: { namespace: "test.foo", waitForMillis: … } …

// Initiate migration and index drop in parallel shell.
… assert.commandWorked(db.foo.dropIndex({ a: … })) …
assert.commandWorked(s.adminCommand({ moveChunk: coll.getFullName(), find: { _id: … }, to: …, _waitForDelete: true }));
… setYieldAllLocksWait, mode: "off" …

// Recheck document count.
assert.eq(numDocs, coll.find().itcount());
{code}
When run locally with version …, the above script fails on the last line with the following:
{noformat}
E QUERY … Error: … are not equal : undefined …
{noformat}
1
{panel:title=Issue status as of Aug …}
Issue summary: An update to a text-indexed field may fail to update the text index. As a result, a text search may not match the field contents, yielding incorrect search results. For example, given a collection with a text index on field "title":
{code}
db.col.ensureIndex({ title: "text" })
{code}
inserting a document and searching for it produces the expected results:
{code}
> db.col.insert({ title: "test" })
WriteResult({ nInserted: … })
> db.col.find({ $text: { $search: "test" } })
{ _id: …, title: "test" }
{code}
But when the text-indexed field is modified under the conditions outlined above, queries may return incorrect results:
{code}
> db.col.update({ title: "test" }, { $set: { title: "fail" } })
WriteResult({ nMatched: …, nUpserted: …, nModified: … })
> db.col.find({ $text: { $search: "test" } })
{ _id: …, title: "fail" }
{code}
At this stage, if the document grows sufficiently and needs to be moved, the data in the index entry no longer points to a valid document and queries that hit the index return an error:
{code}
> db.col.update({ … }, { $set: { padding: new … } })
WriteResult({ nMatched: …, nUpserted: …, nModified: … })
> db.col.find({ $text: { $search: "test" } })
Error: error: { $err: "BSONObj size … is invalid. Size must be between … and …, first element: _id …", code: … }
{code}
User impact: Users who update documents in a collection that contains a text index may see incorrect/incomplete search results. Specifically, an update may cause corrupt index entries if all of the following conditions are met: the update modifies a text-indexed field, and the update does not change the size of any text-indexed values, and the update is in-place (does not result in a document move), and the update does not modify another index. None of the following operations trigger this bug: an update that changes the size of a text-indexed value, an update that results in a document move, an update that modifies another index, an update that replaces the entire document, an insert, query, or delete operation.
Workarounds: No workarounds exist for this issue. To fix this issue, users must upgrade to … or … and then rebuild text indexes, either by dropping and creating each index or by resyncing a new replica set member. There is no simple way to identify whether or not a text index is affected by this issue; if any updates have been issued to documents in a collection with a text index, the index may have been impacted.
Affected versions: MongoDB versions … through … and … through … are affected by this issue.
Fix version: The fix is included in the … and … production releases.
Resolution details: Correctly determine whether an update with a text index is in-place.
{panel}
Original description: …
1
I believe we used to test on … platforms, but this may have been lost when we migrated from Jenkins to Evergreen. We should add a … platform to our matrix; this would have caught ….
0
The sample installation instruction is "sudo rpm ivh …" with the wrong kind of hyphen; it should be a single hyphen (-ivh).
1
This process includes: adding FCV …, removing FCV …, setting FCV … to be the default value of the FCV parameter for new shard servers, updating the "latest" FCV to be … and the "last-stable" FCV to be …, and changing generic references to refer to the latest upgraded and downgraded versions.
0
I'm running the below aggregation, where the match condition will be added depending upon the selected filter:
Bson detailsUnwind = unwind(…); Bson empIdMatch = new BasicDBObject(…); Bson deptIdMatch = new BasicDBObject(…); Bson gradeMatch = new BasicDBObject(…); empIdMatch = … (mandatory value: empId); if (deptId != null) … (optional value: deptId) deptIdMatch = …; if (grade != null) … (optional value: grade) gradeMatch = …; List pipeline = asList(detailsUnwind, empIdMatch, deptIdMatch, gradeMatch); AggregateIterable aggResult = collection.aggregate(pipeline); for (Document document : aggResult) { Document doc = document.get("details"); if (doc != null) System.out.println("testing"); }
If the … level is not available, then I'm getting: Command failed with error …: "Exception: A pipeline stage specification object must contain exactly one field" on server …. The full response is:
{code}
{ ok: …, errmsg: "Exception: A pipeline stage specification object must contain exactly one field" }
{code}
This is what I want to achieve. Case 1, when both department and level details are available: collection.aggregate with unwind details, then match on empId, details.deptId, and details.grade. Case 2, when only grade is available: collection.aggregate with unwind details, then match on empId and details.grade.
1
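A hedged shell-level sketch of the usual fix for this error (field names are taken from the report; the collection name and values are assumptions): fold the optional predicates into a single $match document and only ever pass well-formed stages, since an empty {} stage is what triggers "A pipeline stage specification object must contain exactly one field".

{code:javascript}
// Hypothetical: assemble one $match from mandatory and optional predicates.
var deptId = null;   // optional filter value, absent in this run
var grade = 7;       // optional filter value, present in this run

var match = { empId: 12345 };                      // mandatory predicate
if (deptId !== null) match["details.deptId"] = deptId;
if (grade !== null)  match["details.grade"]  = grade;

// Only non-empty, well-formed stages go into the pipeline.
db.employees.aggregate([
  { $unwind: "$details" },
  { $match: match }
]);
{code}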
Problem: observed a drop in throughput for the … mongo-perf test that appears to be beyond normal observed variance. This is … vs … (stddev …, stddev …, stddev …). This appears to happen at client thread counts higher than the number of cores available to MongoDB; in the attached results MongoDB was pinned to … cores, and the drop appears to start after … client threads. Testcase:
{code}
tests.push(genDistinctTest(…, false, false));

function genDistinctTest(name, index, query) {
    var doc = { name: name, tags: […] };
    if (index) {
        doc.pre = function (collection) {
            collection.drop();
            for (var i = …; i < …; i++) {
                collection.insert({ x: … });
                collection.insert({ x: … });
                collection.insert({ x: … });
            }
            collection.ensureIndex({ x: … });
        };
    } else {
        doc.pre = function (collection) {
            collection.drop();
            for (var i = …; i < …; i++) {
                collection.insert({ x: … });
                collection.insert({ x: … });
                collection.insert({ x: … });
            }
            collection.getDB().getLastError();
        };
    }
    var op = { op: "command", tags: …, ns: "#b_db", command: { distinct: "#b_coll", key: "x" } };
    if (query) {
        op.command.query = { x: … };
    }
    doc.ops = [op];
    return doc;
}
{code}
1
Look at the following queries: (a) var … = db.GetCollection(…).Linq().Where(x => x.Attribs.Count …); (b) var … = db.GetCollection(…).Find("{ attribs: { $size: { $gt: … } } }").Documents.ToList(); (c) var … = db.GetCollection(…).Linq().Where(x => x.Attribs.Count …). Queries … and … work fine, but … brings no result back. I would expect that query … return the same as …, which indeed are the same query.
1
Hi team, I have a collection where I have incidents data, in which if I pass more than … incident numbers, comma separated, I want those documents into a var variable. I tried the below code but it is not working: filter = Builders.Filter.Eq("IncidentNumber", new …); result = collection.Find(filter).ToList(); {color:…}Please give the right code.{color} Charan
1
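A hedged shell-level sketch of the query shape that matches any of several values; the collection name, field name, and values are assumptions. The point is to use $in with the list rather than an equality against the whole list.

{code:javascript}
// Hypothetical: match documents whose IncidentNumber is any of the given values.
var incidentNumbers = ["INC001", "INC002", "INC003"];
var docs = db.incidents.find({ IncidentNumber: { $in: incidentNumbers } }).toArray();
{code}

In the C# driver the corresponding filter builder is Builders&lt;T&gt;.Filter.In rather than Filter.Eq.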
… shards with … members each; mongos/mongod on one of the shard members has been crashing sporadically with nothing in the mongod logs. Running dmesg on the host shows: mongod segfault at … ip … sp … error … in …. The system is idle and only has MMS connecting to it at this moment. Thanks, Sadek
1
mongod crashes while running YCSB. The log and mongostat are attached. The system keeps on consuming memory until it starts paging; at the end of the log it shows:
{noformat}
E STORAGE WiredTiger … WT_CURSOR.search: read checksum error for … block at offset …: block header checksum of … doesn't match expected checksum of …
E STORAGE WiredTiger … WT_CURSOR.search: encountered an illegal file format or internal value
E STORAGE WiredTiger … WT_CURSOR.search: the process must exit and restart: WT_PANIC: WiredTiger library panic
{noformat}
1
Request summary / description: $sort now searches the entire pipeline for a $limit and, if found, coalesces the limit into itself. If there is a stage in between the $sort and $limit that changes the number of documents in the pipeline (i.e. $group, $unwind, etc.), the $sort aborts its search for a $limit. An exception to this rule is the case where one or multiple $skip stages are in between a $sort and $limit; in this case $sort will still coalesce the limit, but the limit value increases by the total of the amounts of all of the $skip stages in between. This means that neither $project nor $skip swap with $limit anymore if $sort is not present.
Scope of changes (files that need work and how much):
- core/aggregation-pipeline-optimization: in the pipeline optimization section, the $skip + $limit and $project + $skip + $limit sequence optimizations no longer reorder the pipeline; in the coalescence section, rewrite $sort + $limit coalescence; update the $sort + $skip + $limit example to no longer reorder; update the $limit + $skip + $limit + $skip example to no longer reorder; (meh) add a $sort + $unwind + $limit example.
- reference/operator/aggregation/sort: clarify the behaviour change, specifically in the $sort optimization and memory section.
- reference/operator/aggregation/limit: update the note at the bottom of the page.
Resources (e.g. scope docs, InVision): my flowchart, attached.
Ticket description: The new $project/$limit optimization in … might make the pipeline be split much earlier than before, because it will split the pipeline at the $limit step. I'm attaching two explain plans of queries, one which uses the optimization and one that doesn't (because I added a $redact keep just before the $limit). In the case of this query, many more fields are sent to the merger part because of the splitting, and this triggers a very bad behavior with second batches of aggregation queries, which will be described in another ticket. I think it would be good to take pipeline splitting into consideration when doing those optimizations; in addition, there is no $sort stage which would benefit from having the $limit moved up. Cheers, Antoine
0
MongoDB version …, Linux CentOS release … (Final). Recently we encountered a problem with MongoDB: the MongoDB suddenly lost its service ability. Even a simple SQL query hangs for so long that we have to kill it; the SQL is such as …, but it has very many IN values inside. While we searched the logs we just found normal operations such as count, update, and delete on one of our main tables. We didn't have this trouble before. Is it a MongoDB bug for huge tables, since our main table has … millions count? Thanks.
1
This causes "not found" for some valid variant names.
1
{panel:title=MongoDB status as of September …}
Summary: When a secondary requests the next batch of the oplog from the primary, it holds an internal lock while waiting for the data to come back over the network. This same lock is required to service heartbeat requests. High latency and other network issues between nodes can cause the next batch of oplog data to take some time to retrieve, resulting in heartbeat requests timing out.
User impact: This issue can result in repeated and unnecessary replica set failover. It is present in versions of MongoDB prior to and including …. The issue has been resolved by not holding the bgsync mutex while waiting for the network.
Workarounds: Improving the latency and reliability of your network will help to alleviate symptoms.
Patches: Production release … contains the fix for this issue, and production release … will contain the fix as well.
{panel}
Detailed description: BgSync::produce() holds the backgroundsync mutex through the call to r.tailingQueryGTE(), which fetches the next batch of data from the primary's oplog. If it takes a long time to get a response from the primary, then heartbeats may start timing out, as heartbeats also require getting the backgroundsync mutex. The fix is to change BgSync::produce() to call r.tailingQueryGTE() outside of the mutex lock.
Initial description by Eric on June …: {quote}Now that we've pretty much fixed all the failing unit tests, we can see that zbigMapReduce is failing on the linux debug builder. The problem seems to be that late in the M/R phase the network seems to break, such that the primary and the secondary can no longer see each other (send and recv time out), which causes the primary to step down. I have no idea what would be causing this.{quote}
This failure has been visible since linux debug build … on June …, but likely was hidden by simpler bugs; the last green linux debug build was … on June …. It is also visible in linux debug dur-off builds since … on June …; the last green build on this builder was ….
1
Unit test failed on OS X. WiredTiger (develop), commit diff: "Remove duplicates in statistics descriptions: removed duplicates from statistics descriptions and merged connection and data source statistics; removed duplicate extend calls; minor formatting changes." Jan … UTC. Evergreen subscription: …; Evergreen event: …; task logs: ….
0
Write errors to the log instead of ostream, as … for the Windows version of printStackTrace.
0
Access of a ViewDefinition outside of the lock to retrieve the default collation can lead to a use-after-free. We should retrieve the default collation prior to lock release.
0
This broke the screenshots in the README, since they're pointing to the master branch, not main.
1
When upgrading Mongoid to …, I accidentally updated mongo to … too, and there are many connection failures like the following: Mongo::ConnectionFailure error: "Cannot connect to a replica set using seeds …" (find); Mongo::ConnectionFailure error: "Could not connect to primary" (find). After reverting back to …, it works as before.
1
{panel:title=Epic summary} We now have basic Evergreen support for the Node driver; however, we would like to add some more support for other projects under the Node driver umbrella. Currently most of these projects are tested on Travis CI or AppVeyor. {panel}
0
I am running the following command in C# using the Mongo .NET API:
{code}
var client = new MongoClient(…);
var database = client.GetServer().GetDatabase("marketdata");
var command = new CommandDocument { { "find", "equity" }, … };
CommandResult result = database.RunCommand(command);
{code}
and the error I see is: MongoCommandException was unhandled: Command "find" failed: find command not yet implemented (response: { ok: …, errmsg: "find command not yet implemented" }).
1
This paragraph needs special emphasis, likely in red: "Before you install Ops Manager, you must deploy the supporting databases first. These are called backing databases. Ops Manager cannot deploy or manage these databases. These databases include the Ops Manager application database and the backup database." The bolded section is important, and many of our customers do this without realizing it's a bad thing.
1
This note about multikey is confusing and not to the point about arrays. Also, I see no mention of shard key mutability (which is none) and what to do if you need to change a shard key (insert new then remove old, or reversed). We need a section with these key words to match search engine queries for "shard key restriction" and similar searches people do when things don't work.
1
Either because of early returns, DB errors in unlocking, or panics, the jobs stay locked and then run later after a restart.
0
Operation execution is not restored after the service restart. It seems like this is only an issue for ASP.NET apps; hence a console app is able to restore connection/operation execution when the service is back online again.
1
Please add the website example code you worked on for … into the examples directory, so we keep it up to date and tested in Evergreen. Please add a comment that if we change it (e.g. to add make_document and make_array later), we should open a docs ticket about it so the website can be updated. Kay: is a docs ticket the right thing, or something else?
0
See …. We should add a section to the validation docs mentioning that priority > 0 nodes cannot have 0 votes; this means that nodes eligible to be primary always need to be able to vote for themselves. In addition, we should caution users against making configuration changes while running in a mixed-version set, to avoid having invalid configurations boot nodes out of the set.
1
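A minimal shell sketch of the constraint (host names and member ids are assumptions): a member that can become primary must also be a voting member, so the commented-out combination below would fail config validation.

{code:javascript}
// Hypothetical replica set config illustrating the rule.
var cfg = {
  _id: "rs0",
  members: [
    { _id: 0, host: "node0.example.net:27017", priority: 1, votes: 1 },  // can be primary, votes
    { _id: 1, host: "node1.example.net:27017", priority: 0, votes: 0 }   // never primary, non-voting: OK
  ]
};
rs.initiate(cfg);

// Rejected by validation: priority > 0 with votes: 0 would make a node
// eligible to become primary without being able to vote for itself.
// { _id: 1, host: "node1.example.net:27017", priority: 1, votes: 0 }
{code}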
The search at the top here goes to this link, which is broken, and the results go to the old help. But I guess Google needs to cache the new links. PS: is there an option to do a redirect from help to helpclassic?
1
When I run one of the M/R jobs a second time, it fails with error: {code}errmsg: "exception: getMore: cursor didn't exist on server, possible restart or …"{code} From the logs I see that in the final stage, when it iterates over collectionbywhatmapreducerunintnumber, it does not handle connections:
{code}
(M/R job connection) Mon May … getmore database.collectionbywhatmapreducerunintnumber …
Mon May … connection accepted from … (… connections now open)
Mon May … connection accepted from … (… connections now open)
Mon May … SocketException handling request, closing client connection: … socket exception … server …
Mon May … connection refused because too many open connections: …
Mon May … connection accepted from … (… connections now open)
Mon May … getMore: cursorid not found database.collectionbywhatmapreducerunintnumber …
(after a number of connection decrements)
{code}
But another type of M/R job runs okay. Any ideas?
1
If you have a root class with a subclass as a child property, and it has two float properties, then BsonSerializer.Deserialize(json) will throw an error saying that it cannot deserialize the second float. See this gist for repro code: …. This error occurs in all versions of the C# driver, including the legacy one.
1
Add support for the following via helpers: creating a user, updating a user, removing a user.
1
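For reference, a hedged shell sketch of the operations such helpers typically wrap; the user name, password, database, and roles are assumptions.

{code:javascript}
// Hypothetical examples of the underlying user management operations.
db.createUser({ user: "appUser", pwd: "secret", roles: [{ role: "readWrite", db: "app" }] });

db.updateUser("appUser", { roles: [{ role: "read", db: "app" }] });

db.dropUser("appUser");
{code}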
Under wire version …, the current wire version, servers have two operation modes. If it's a partly upgraded server with users created with MONGODB-CR, it will operate in a compatibility mode which, with some overhead, will allow users to connect using SCRAM to those accounts. If the server has been completely upgraded and all users have been migrated to native SCRAM, or the server has been freshly brought up on …, then it will only accept incoming authentication requests using SCRAM; MONGODB-CR requests will be denied. Currently we only use SCRAM when the max wire version of the server is greater than …. This means we will currently always use MONGODB-CR when connecting to a … server if the user has not manually specified an authentication mechanism, as in the auth method which accepts the username and password as strings. We should default to using SCRAM for wire version … and allow the server to sort out how to handle it.
1
I am performing the following tutorial for my knowledge: …. In this tutorial I start three mongod instances, then in the mongo shell I initiate with the config { _id: "firstset", members: [{ _id: …, host: … }, { _id: …, host: … }, { _id: …, host: … }] }. But the above runCommand gives me the following error: { errmsg: "couldn't initiate: …", assertion: …, ok: … }. Then I try rs.initiate(), but it also gives the same error: "no configuration explicitly specified, making one …", { errmsg: "couldn't initiate: …", assertion: …, ok: … }. How can I solve this error? What am I doing wrong in this tutorial?
1
Since …, it is insufficient to wait for CollectionCloner waitForDbWorkers to return before checking the final status. We fixed a number of test cases during …, but overlooked this test case.
0
Version … of the connector has a new installation procedure. The instructions in the installation section of the BI Connector POD can serve as a template for our documentation.
0
Setup for reproducing the issue. App to use:
{code:ruby}
require 'sinatra'
require 'mongo'
require 'json'

conn = …
… admin …
get '…' do
  puts …
  coll = …
  cursor = coll.find(…, fields: …)
  results = cursor.to_a
  JSON.dump(results)
end
{code}
Data population script, from the console:
{code:javascript}
use tweetdb
db.tweets.drop()
for (var i = …; i < …; i++) {
  print(i)
  db.tweets.insert({ account_id: new ObjectId(), avg_ctr: Math.random(), published: new Date() })
}
{code}
Steps: boot up mongod in auth mode; run the console script to set up a database with some test data; set up the user for the tweet db using "use tweet" and db.addUser("admin", …); start the application (ruby server.rb, or wherever you stored the …); hit the URL with your browser or curl a couple of times; go to the mongo console and do db.serverStatus(). You will see the number of open cursors increase and hang around until they time out. This does not seem to happen if you run against the server on localhost or without auth.
1
According to the enumerate collections spec, both listCollections and listCollectionNames must allow a filter to be passed to include only the requested collections. I noticed this while reviewing …. In this PR we would like to change this line in the docs to use the new listCollectionNames:
{code:python}
db.collection_names(include_system_collections=False)
{code}
However, db.list_collection_names() will not have the same behavior, because it will include system collections. According to the spec we should support this:
{code:python}
db.list_collection_names(filter={"name": {"$regex": r"…system…"}})
{code}
Supporting filter will also aid users who are migrating from collection_names to list_collection_names.
0
This is a regression from …: parts of the low-level I/O methods were rewritten and they are not properly throwing an EndOfStreamException, as … did when the server closes the socket.
1
Import error on the latest version of PyMongo for Python …; pip package issue. The version that works is ….
1
After … is in master, the tests prepare_transaction.js and prepare_conflict.js can be moved to core_txns, because the disableJournalForReplicatedCollections server parameter will be the default. The uses_transactions tag should be added to the tests.
0