Dataset preview — columns: text_clean (string; lengths from 10 to 26.2k characters) and label (int64; values 0 or 1). Each preview entry below is a text_clean value followed by its label.
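A minimal sketch, assuming the standard Hugging Face `datasets` library, of how the two columns described above could be declared as a `Features` object; the schema here is inferred from the preview metadata, not taken from the dataset's own configuration.

```python
from datasets import Features, Value

# Assumed schema, inferred from the preview metadata above:
# a cleaned free-text column and a binary integer label.
features = Features({
    "text_clean": Value("string"),  # lengths observed between 10 and ~26.2k characters
    "label": Value("int64"),        # observed values: 0 or 1
})
```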
info ccdashboardauthauthproperties no jwt secret found in configuration generating random secret by default info ccdashboardauthauthproperties no jwt expiration time found in configuration setting to one day info ccdashboardconfigmongoconfig replicasetfalse info ccdashboardconfigmongoconfig initializing mongo client server at info orgmongodbdrivercluster cluster created with settings hosts modesingle requiredclustertypeunknown ms info ccdashboardconfigmongoconfig connecting to mongo mongooptionsmongoclientoptionsdescriptionnull readpreferenceprimary fsyncfalse jfalse socketkeepalivefalse sslenabledfalse sslinvalidhostnamesallowedfalse alwaysusembeansfalse requiredreplicasetnamenull cursorfinalizerenabledtrue keepalivefalse keepalivefalse info orgmongodbdrivercluster exception in monitor thread while connecting to server commongodbmongosecurityexception exception authenticating usernamedashboarduser sourcedashboarddb password mechanismproperties at at at at at at caused by commongodbmongocommandexception command failed with error authentication failed on server the full response is ok errmsg authentication failed code codename authenticationfailed at at at at at common frames omitted info orgmongodbdrivercluster no server chosen by readpreferenceserverselectorreadpreferenceprimary from cluster description clusterdescriptiontypeunknown connectionmodesingle all waiting for ms before timing out error osbcetomcattomcatstarter error starting tomcat context orgspringframeworkbeansfactorybeancreationexception warn osbceannotationconfigembeddedwebapplicationcontext exception encountered during context initialization cancelling refresh attempt orgspringframeworkcontextapplicationcontextexception unable to start embedded container nested exception is orgspringframeworkbootcontextembeddedembeddedservletcontainerexception unable to start embedded tomcat info osblclasspathloggingapplicationlistener application failed to start with classpath error osbootspringapplication application startup failed orgspringframeworkcontextapplicationcontextexception unable to start embedded container nested exception is orgspringframeworkbootcontextembeddedembeddedservletcontainerexception unable to start embedded tomcat at
0
first logged failure here note that this commit was for where we made the default shutdown grace period foreverthis suggests that authtest never manages to terminate the driver
1
i perform following tutorial for my knowledgein this tutorial i start three mongod instancesthen mongo id firstset members id host id host id host but above runcommand give me a following error errmsg couldnt initiate assertion ok then i try rsinitiatebut it also give same error no configuration explicitly specified making one me errmsg couldnt initiate assertion ok can i solve this error what i doing wrong in this tutorial
1
description yes it would be a new component of software to document engineering description fundamental limitation of the tools the tools are currently distributed as golang executables in mongotools and mtc first of all it’s generally difficult to “deploy” and test cli tools in isolation clis intrinsically have to support all platforms across all dependencies in addition it’s not very easy to embed the underlying executables into other products eg compass and atlas web ui core concept tools as a service the tools server is a proposed system which executes a subset of the tools’ functionality in the cloud over an https service we could expose all of the tools in this fashion but the most salient use cases are in importexport and dumprestore the endpoints of the prototype might look something like this import import files in jsoncsv to authenticated dbcollection export export authenticated dbcollection to jsoncsv dump dump bson files corresponding to dbcollection in specified file sink restore restore given bson files to authenticated dbcollection this design gives rise to the following possible features provide importexport functionality over web ui and compass eg “upload file” and “save cluster as csv json” buttons this would be useful in analytics workflows where users rely on guis but tend to need this functionality use import to provide live sync between adl and any mdb instance use dumprestore to provide live backup of any mdb instance to the cloud ie an bucket a possible prototype of the system would be a generic implementation of dumprestore and importexport in realm andor aws lambda shortterm benefits in the past weve considered the following epics rationalize cli options and formalize interface ui quick wins these give rise to higherlevel problems in the tools and the ecosystem ie “what would a stable and succinct interface over the tools look like how do we implement that” this problem in particular is solved much more easily by versioning and distributing a httpsbased tools api and leaving the functionality embeddable to various interfaces we could trivially support the cli ie by using the https endpoints for compatibility if we wanted fundamentally decoupling the tools’ functionality as an api would result in better testing and reproducibility ie using various cloud services longterm benefits if we support a stable implementation of the tools as a service we could enjoy the following more features are possible to the users under this architecture gui button on compassweb ui to “download file as csv” and so on using export cloud service to back up an atlas cluster to adl using restore teams and orgs that want to use the tools in the future have an easier time doing so since https is embeddable into many services cloud services are anecdotally more easily authenticated tested and versioned than clis thanks to mdb itself as a platform scope of changes impact to other docs mvp work and date resources scope or design docs invision etc
1
createuser docs are wrong the name of the users needs to be specified in the user property not in the createuser property
1
smoketestendpoints failed on ubuntu dockerhost project evergreencommit diff use more comprehensive query for task history may utcevergreen subscription evergreen event
0
nodemongodbnativelibmongodbcollectionjs ive been using and when i updated to i noticed that none of my scripts were workingon inspectingstepping through the code i noticed that the functionality previously in libmongodbcollectionjs has been abstracted outmost of the collection methods look like thiscodejavascriptcollectionprototypefindandmodify function return corefindandmodify codewhen — i assume — they should look something like thiscodejavascriptcollectionprototypefindandmodify function return corefindandmodifyapply this arguments codeas the collection methods simply return a function nothing is actually happening and as such a program will just hang without throwing any errorsi marked this as a blocker as it causes any node program nodemongodbnative to stop working apologies if this is no considered a blocker
1
while trying to schedule a patch to run on waterfall i ran into a error setting version activation gateway timeout error it seems the patch was still scheduled correctly im currently using the old evergreen ui
0
the stress test should launch several operations that take a long time seconds to complete
0
some clients appear to use listcommands to ensure that remote servers do not record attempts to run nonexistent commands we should allow unauthenticated clients to perform this command to enable this usecase
1
in addition on prem mms monitoring does currently support monitoring for kerberosenabled nodes if your host is using kerberos for authentication the on prem mms monitoring agent will not be able to connect to itdoes support kerberos but will not be able to connect
1
noformatexception in thread main commongodbmongotimeoutexception timed out while waiting for a server that matches anyserverselector after ms at at at at at at at at at at at at at at at at at codecodejavapublic class mongodbtest public static void mainstring args throws ioexception systemoutprintlnentered mongotest mongoclient mongoclient new a mongodb client with internal connection pooling for most applications you should have one mongoclient instance for the entire jvm db db mongoclientgetdbtest dbcollection collection dbgetcollectiondownloadsmeta string filepath file file new filefilepath gridfs gridfs new gridfsdb downloads gridfsinputfile gfsfile gridfscreatefilefile gfsfilesave it crashes here basicdbobject info new basicdbobject infoputname dell infoputfilename infoputrawname infoputrawpath caxd collectioninsertinfo writeconcernsafe code
1
for release tomorrow automation agent changelog version basic support for mongodb including the ability to build a cluster with a csrs replica set handling of new enterprise version format ability to configure wt encrypted storage with local keys shut down the automation agent if the automatic update fails times in a row fix failed automation agent automatic updates can cause surge in configuration calls from the automation agent monitoring agent changelog version built with go backup agent changelog version built with go fix ignore collections deleted during an initial sync
1
when and is serialized it detects if it has no children to avoid serializing to something that would fail to parse during reserialization or does not have this check but it should presumably it should serialize an alwaysfalse match expression
0
hi i have a rails app configured to use time zone brasilia we discovered the following weird behavior coderuby correct behavior utc wrong behavior utc utc code as shown above if we try to save a model field of type date by specifying a fullyqualified datetime then mongoid will perform some conversions and end up with the wrong date jun instead of jun in this case this happens because the applications time zone is not utc coderuby timeconfigured code it looks like the problem is in these lines of libmongoidextensionsdaterb coderuby libmongoidextensionsdaterb returns a activesupporttimewithzone object not necessarily an utc time time objectmongoizetime creates a new time as if the previous object were in utc timeutctimeyear timemonth timeday code it seems that the second line should convert back to utc before creating the new object like so coderuby suggestion returns a activesupporttimewithzone object not necessarily an utc time time objectmongoizetime converts before creating time object time timeutc timeutctimeyear timemonth timeday code
0
the following is the description of the wtprepareconflict define wtprepareconflict conflict with a prepared update this error is generated when the application attempts to update an already updated record which is in prepared state an updated record will be in prepared state when the transaction that performed the update is in prepared state this error never occurred in my test with update of a record that is already updated by a transaction which is in prepared state it always produces the error of conflict between concurrent operations but the wtprepareconflict error is raised when the record is read and it is true also as per the description in the following link the documentation of the error needs an update from update to read in the following sections
0
deari have install on my computer this windowswindows home premium service pack db works for itregardsrose toledo
1
the serverstatus output section collected in last pings contains two keys named lastpingdataserverstatusmetricscommandsmapreduce lastpingdataserverstatusmetricscommandsmapreduce these two keys are within the serverstatus section of the document names that only vary by case sensitivity is perfectly legal in bson and json however it causes issues for sql tools that try to read the document as a note this issue was noticed while using presto to query the dataset the first key contains code mapreduce total failed code while the second key contains code mapreduce shardedfinish total failed code because this second key is the only command that uses this shape of using a shardedfinish subobject we could replace it by something like code mapreduceshardedfinish total failed code the above is one suggestion the request is to have names that differ for more than their case for the two keys
0
fix thissort operator orders the documents in the pipeline based on the vale of the pop field from largest to smallest
1
integrate mongo orchestration testing into travis mci infrastructure now that it has been open sourced
1
currently we maintain two versions of the retryable write prefetch pipeline one is in c code for production and another one in javascript for testing we should only need to maintain one copy and maybe we can modify the javascript test to query the oplog buffer to make sure the correct entries are in there
0
theres the prices collection modelbase theres the items collection modelabc series bms yyn id like to get the price for the items and the steps are check itembms if bms yyn use subpipelinea otherwise use subpipelineb lets look at subpipelinea while subpipelineb could be similar lookup price with three conditons pwid brandid model if found go no further if not found try pwid brandid seriesitemseries as pricemodel if found go no further if still not found try pwid brandid baseuse string base as pricemodel combine the result of both subpipelinea and subpipelineb for step im using a facet stage for step through im running lookups one after another which is obviously a waste of resources how to decide whether to run based on result of thanks
1
the rollbackcmdunrollbackablejs javascript test creates an oplog entry of an unsupported format inserts it into the oplog manually and then tries to have a node roll back that oplog entry to check that it fasserts this test case could be easily covered by a death test in our unit tests in rsrollbacktestcpp for example furthermore we should probably try to move away from having integration tests that manually modify the oplog since that is a behavior that should be disallowed and is not something that we account for when designing and testing the replication system
0
i am trying to install pymongo for windows so i chose the link ms windows installer from the python package indexthe installtion stops midway saying t has encountered an exception
1
noformat starting execution of cppunittests encountered an error during test execution traceback most recent call last file line in call selfrunqueue interruptflag file line in run selfexecutetesttest file line in executetest testconfigureselffixture confignumclientsperfixture typeerror configure takes exactly arguments given encountered an error during test execution traceback most recent call last file line in call selfrunqueue interruptflag file line in run selfexecutetesttest file line in executetest testconfigureselffixture confignumclientsperfixture typeerror configure takes exactly arguments given noformat
1
hi we are using a replica set mongodb setup primary node secondary nodes we get the following exception during load tests orgspringframeworkdatamongodbuncategorizedmongodbexception exception receiving message nested exception is commongodbmongosocketreadexception full stack trace is provided in the screenshots attached we tried increasing connectiontimeout and sockettimeout parameters value to minute but it didnt resolve the issue
1
here is the rake task that will simulate error error appears only when validates line is present class testcollection include mongoiddocument include mongoidtimestamps field text type string validates text presence true end namespace error do desc simulate error in mongoid task simulate environment do test testcollectionnew text text testsave end end by ther
1
for release on version released fix for rare issue encountered in automatic upgrade process which would prevent the upgrade process from completing successfully
1
certain collections consumes a lot of memory during schema analysis as indicates this ticket is to track and investigate the cause for higher memory consumption in some collections and if there are any potential solutions reproduction steps with mgenerate file attached to this ticket
0
on windows we use wsasend to send iovecs there is a lot of type mangling going on there for the error check and eventual return type of mongocsockettrysendv this needs to be updated to use clean types and explicit error checking we also need to check the result for socketerror as per the docs
1
mongorestoreexe is failing in when given the command linebqmongorestoreexe dir host the test looked like thiscodemongodb shell version oct shell started program mongodexe port dbpath nohttpinterface noprealloc smallfiles bindip note noprealloc may hurt performance in many applications thu oct mongodb starting thu oct debug build which is slower thu oct thu oct note this is a development version of mongodb thu oct not recommended for production thu oct thu oct db version pdfile version thu oct git version thu oct build info windows servicepackservice pack thu oct options bindip dbpath nohttpinterface true noprealloc true port smallfiles true thu oct journal thu oct recover no journal files present no recovery needed thu oct opening db local thu oct waiting for connections on port thu oct connection accepted from connection now open thu oct opening db thu oct allocating new datafile filling with zeroes thu oct creating directory thu oct done allocating datafile size took secs thu oct allocating new datafile filling with zeroes thu oct done allocating datafile size took secs thu oct datafileheaderinit initializing thu oct build index id thu oct build index done scanned total records secs thu oct insert locksmicros thu oct build index id thu oct build index done scanned total records secsthu oct shell started program mongodumpexe out host connected to thu oct connection accepted from connections now thu oct all thu oct database to thu oct error cannot dump collection has or null in the collection thu oct to thu oct doing snapshot thu oct thu oct metadata for to thu oct end connection connection now open thu oct thread stack usage was bytes which is the most so far thu oct cmd drop thu oct cmd drop oct shell started program mongorestoreexe dir host connected to thu oct connection accepted from connections now thu oct thu oct going into namespace objects found thu oct build index id thu oct build index done scanned total records thu oct creating index key id ns name id thu oct end connection connection now open thu oct thread stack usage was bytes which is the most so far thu oct connection accepted from connections now open thu oct terminating shutdown command received thu oct dbexit shutdown called thu oct shutdown going to close listening sockets thu oct closing listening socket thu oct shutdown going to flush diaglog thu oct shutdown going to close sockets thu oct shutdown waiting for fs preallocator thu oct shutdown lock for final committhu oct dbclientcursorinit call failed thu oct shutdown final commit thu oct end connection connection now open thu oct thread stack usage was bytes which is the most so far thu oct shutdown closing all files thu oct closeallfiles finished thu oct journalcleanup thu oct removejournalfiles thu oct shutdown removing fs lock thu oct dbexit really exiting now thu oct thread stack usage was bytes which is the most so farthu oct shell stopped mongo program on port completed successfully codeit now looks like thiscodefri oct end connection connections now openfri oct connection accepted from connection now openmongodb shell version oct shell started program mongodexe port dbpath nohttpinterface noprealloc smallfiles bindip note noprealloc may hurt performance in many applications fri oct mongodb starting fri oct debug build which is slower fri oct fri oct note this is a development version of mongodb fri oct not recommended for production fri oct fri oct db version pdfile version fri oct git version fri oct build info windows servicepackservice pack fri oct options 
bindip dbpath nohttpinterface true noprealloc true port smallfiles true fri oct journal fri oct recover no journal files present no recovery needed fri oct opening db local fri oct waiting for connections on port fri oct connection accepted from connection now open fri oct opening db fri oct allocating new datafile filling with zeroes fri oct creating directory fri oct done allocating datafile size took secs fri oct allocating new datafile filling with zeroes fri oct done allocating datafile size took secs fri oct datafileheaderinit initializing fri oct build index id fri oct build index done scanned total records secs fri oct insert locksmicros fri oct build index id fri oct build index done scanned total records secsfri oct shell started program mongodumpexe out host connected to fri oct connection accepted from connections now fri oct all fri oct database to fri oct error cannot dump collection has or null in the collection fri oct to fri oct doing snapshot fri oct fri oct metadata for to fri oct end connection connection now open fri oct thread stack usage was bytes which is the most so far fri oct cmd drop fri oct cmd drop oct shell started program mongorestoreexe dir host connected to fri oct connection accepted from connections now fri oct fri oct going into namespace fri oct assertion invalid ns query name fri oct problem detected during query over err invalid ns code fri oct warning restoring to without dropping restored data will be inserted without raising errors check your server objects fri oct creating index key id ns name id fri oct assertion invalid ns query getlasterror w fri oct fri oct problem detected during query over err invalid ns code fri oct dev wont reportnextsafe err invalid ns code assertion nextsafe err invalid ns code fri oct end connection connection now open fri oct thread stack usage was bytes which is the most so farassert are not equal collection does not restore properlyerrorprinting stack are not equal collection does not restore does not restore oct uncaught exception are not equal collection does not restore properlyfailed to load fri oct connection accepted from connections now open fri oct terminating shutdown command received fri oct dbexit shutdown called fri oct shutdown going to close listening sockets fri oct closing listening socket fri oct shutdown going to flush diaglog fri oct shutdown going to close sockets fri oct shutdown waiting for fs preallocator fri oct shutdown lock for final commit fri oct shutdown final commitfri oct dbclientcursorinit call failedcodein the good test mongorestore displays thu oct thu oct going into namespace codein the bad test mongorestore displays fri oct fri oct going into namespace codethe newbad one is failing to extract the filename portion of the file specification leaving the path part in it this is perhaps caused by the change from forward slashes to backslashes in part of the file specification displayed on the first linepassing testfailing test
1
there are various places which dont set a socket timeout and use the default of from the constructoretc these places should be updated to use a nonzero socket timeout most code uses seconds for example the following should be changed appendreplicationinfo clonecollection copydb cloner auth
0
step does not use the same password as steps and be confusing for the average user
1
currently you can get an audit message likenoformatmilkieadmin access denied on for replsetgetstatus forshell noformatfor commands that do not target a database we should make a better message
0
i seem to run into a problem related to this with custom types ruby mongoid a document i have a field practice type addresswhen reading that value i get the correct behaviour typecasted with docpractice addressbut i get the wrong behaviour the serialized version of the object with docreadattributepractice docattributes all attributes practice doc
0
the x and m options paragraph could use serious improvement the first line mentions x and s m is not mentioned in the paragraph this was very confusing to me
1
from the docscurl o tar zxvf r n mongodbetcthis needs to be fixed asap since im assuming we dont want our enterprise customers running a beta in productionnicholas
1
codetue feb command xxxxxcmd command findandmodify xxxxx query id sort id new remove upsert update pull xxxxx guid locksmicros write lock value on that query appears to be months checking their ntpd logs showed that ntpd was winding back milliseconds by default its max ntp will do is looks like it could cause problems in our query execution as it becomes possible for a query to appear to have started after it endedchecking the timer utiltimerh class shows that if we have a clock windback around when we call now we could potentially generate a negative number the values we use to capture these numbers are unsigned long longs which would cause problems in the event of a negative number
0
on it should say minvalidsaved lastoptimefetched rather than minvalidsaved lastoptimefetched
0
monitoring agent changelogversion released retrieve list of databases using the listdatabases commandbackup agent changelogversion released minor logging change clarifying stopping the balancer when there is no balancer settings document
1
look at the following queries var dbgetcollectionlinqwherex xattribscount var dbgetcollectionfind t attribs size gt documentstolist var dbgetcollectionlinqwherex xattribscount and work fine but brings no result back i would expect that query return the same as which indeed are the same query
1
mongo version mongosparkconnector also use mongosparkloadsparksessionsparkcontextwithpipeline match createdate gte but send to mongo is cant find any data
0
we sayexport pathpathbut really we should say something likeexport pathpath
1
and ideally the filter box should just replace the existing dropdown link to avoid wasting the vertical space
0
currently we need sharding solely for write distribution due to high write lock contention due to the fact that these writes happen on relatively small collections automatic sharding fails because collections may never top even in size so chunks are never split and therefore never moved across shards im aware we could use even smaller chunk sizes to force the issue but i was led to understood frequent chunk migrations come with significant overheadi was wondering if its possible to add a balancing strategy that would reserve a chunk on each available shard for a collection and start distributing writes chunk movessplits should only occur if additional shards are added andor the chunk size exceeds a specific size the current strategy basicallyim not sure if im explaining this right but basically what i want is distributes writes from the very first rebalanced when shards are added or removedhope it makes sense if theres a solution available to reach the same goal that doesnt involve manual admin id be interested in hearing it thanks
0
when you repeat parameters on the command line mongod errors as port port port parsing command line badvalue error parsing command line multiple occurrencesuse help for helperror parsing command line badvalue error parsing command line multiple occurrencesspecifying which options are repeated will greatly help also note the message is printed twice
0
the ttl monitor incurs high load on our system it deletes the oldest data which is never used so it generates lots of page faults disk load and readers start to accumulate in the queueit would be more efficient to run during the night time where the load on our system is low it would be nice to set a startend time window and also an interval i believe a higher interval could also contribute to overall efficiency
0
when closing a connection getting object reference not set to instance of object its coming from mongoconnecitonpoolreleaseconnectionmongoconnection connectionconnectionlastusedat datetimeutcnowavailableconnectionsaddconnectionmonitorpulseconnectionpoollocklooks like this is getting called after the close method which sets the availableconnections to null i assume at this point that the connection is just being abandoned and has already been close i added an if statement to my code to make sure availableconnections wasnt null before adding it back to the pool not sure if thats the right answer
0
field negation is outlined in the mongo docs here it is supported in the core of mongo but not in the java driverfor examplebasicdbobject fields new basicdbobjectfieldsputsomebigfield cur collectionfindnew basicdbobject fieldswill return the somebigfield rather than omitting it it doesnt matter what value is given for the field value xxxx the same thing is returned ie somebigfield and the id
0
the test replicaset setup is on my mac arbiterwhen i did from a ubuntu vm mongoreplsetconnectionnew or mongoreplsetconnectionnew i got exception mongo failed to connect to primary node from connect from setup from initialize from new from if i change the order of nodes and place the primary node as the first in the nodes list i can successfully create replsetconnection mongoreplsetconnectionnew readsecondaryfalse secondaries connectionmutex sockets connection socketops checkedout queue optimeoutnil auths safefalse readpoolnil primary replicasetnil nodestotry safemutexes nodes secondarypools nodestried loggernil queue idlock arbiters another thing is that if i connect the replicaset on the host machine mac where replicaset is on the order doesnt matter
1
noformat mongodump uuser ppass host o to oct all dbsthu oct database admin to oct adminsystemusers to oct doing snapshot querythu oct objectsthu oct metadata for adminsystemusers to oct database test to ls test mongorestore uuser ppass to nextsafe err invalid ns code noformat
1
the optionspdf feature does not work
1
we create a file on every server compile task that contains cache hit data for the scons cache it would be nice to capture that data to somewhere we could use to monitor the effectiveness of the caches you can see an example file here
0
it seems to be impossible to get aggregation queries to work against isodate fields with python api eg noformat optionspipeline noformat an attempt to use a native datetimedatetime objects fails with the bson error eg noformat optionspipeline an error occurred while calling orgbsonjsonjsonparseexception json reader was expecting a value but found datetime at at at at at at noformat nevertheless the isodate fields are seem to be properly marshaled from bson to datetimedatetime in rdds the expected behavior is to have a proper datetimedatetime bson conversion in both directions the way it is done in pymongo this is why i classify this issue into the bug category otherwise please let me know if there is an official way to deal with datetime objects in queries hoping for some sort of schemabased conversion magic to happen i already tried numeric timestamps like and isostring values with no success i would like to get some workaround to this problem before an official solution is released if possible is there any hack eg creating datetime objects via or so any help is appreciated
0
integrate libmongocrypt dependency in cmake add encrypt and decrypt calls support explicit encrypt and decrypt helpers properly test test with a corpus of documents check with apm add evergreen tasks with fle support test performance with benchmarks
0
once an iterator is eof it should stay eof
1
download button for ubuntu doesnt work
1
in mongodb dbhelp is not shown in alphabetical order eg dbdropuserusername is after dbprintslavereplicationinfo
0
mongod refuse to start when keyfile option is usedcode on powershell generate key fileps c openssl rand ckeyfileloading screen into random state done start mongod using keyfileps c mongod dbpath keyfile ckeyfilesat aug invalid char in key file ckeyfile codewe can verify keyfile contentscodeps c type starts without a problem when keyfile option is not specifiedcodeps c mongod dbpath aug mongodb starting hostaftabhpsat aug db version aug git version aug build info windows servicepackservice pack aug allocator systemsat aug options dbpath sat aug journal aug recover no journal files present no recovery neededsat aug waiting for connections on port aug admin web console waiting for connections on port
1
implement the bulk write api as described in the following spec document
1
queries with projection should not be considered of a different shape than queries with projection this can be particularly confusing to users of the plan cache shell helpers who will see shapes seemingly listed twice when in fact they differ in the bson type of the projection as the shell renders them identically codejs dbfoogetplancacheclear dbfoogetplancachelistqueryshapes query a b sort projection a dbfoogetplancachelistqueryshapes first query query a b sort projection a second query but looks the same query a b sort projection a code
0
im using c driver and mongodb the issue im encounted with is about the date field basically i know the date is stored in utc is local time and is its equivelent utc how come it shows different in two places my json looks like codejava id key xyz details v data ts createts detailsts code the json submitted from client contains only key and one or elements in details the serverside upserts it and pushes accumulates new details into the array before pushing the time stamp will be added to each element of the detail array then set the latest ts of detailsts the issue is the ts in the last element of details looks different from detailsts it doesnt make sense to me because i create only one bsondatetime and use many places they should be look exact the same no matter whats the timezone my c code is like codejava datetimeoffset dt datetimeoffsetnow bsondatetime bdt new bsondatetimedtutcdatetime creating filter var updatebuilder buildersupdate var updatelist new list updatelistaddupdatebuildersetoninsertcreatets bdt var details docasbsonarray foreach bsondocument d in details dsetts bdt updatelistaddupdatebuildersetdetailts bdt updatelistaddupdatebuilderpusheachdetails details var updater updatelistcount updatebuildercombineupdatelisttolist updatebuildercombine var options new updateoptions isupsert true var r await collectionwithwriteconcernwcupdateoneasyncfilter updater options code
0
testrecoverytruncatedlogc noformat the offset is the beginning of the last record truncate to the middle of that last record ie ahead of that offset overflow add operation overflows on operands offset and example value for operand offset overflowassign assigning overflowed or truncated value or a value computed from an overflowed or a truncated value to newoffset newoffset offset vsize printfparent truncate to un cid of integer overflowed argument integeroverflow overflowsink overflowed or truncated value or a value computed from an overflowed or truncated value wtofftnewoffset used as critical argument to function if ret wtofftnewoffset testutildieerrno truncate noformat
0
the api documentation for the writeconcern class does not contain a lot of detail of tagbased write concern we should improve the class comment and the writeconcernstring w comment we need to correct the comments for the constructors that take additional parameters after the string w the comments for them still refer to integer values of w and this is confusing
0
after update my ruby on rails project testing fail because the database cannot be cleaned after testing rspec fails with the following error noformat surveychildrensessionratingscale should have presence validator on relationship failureerror mongoidpurge mongo no server is available matching preference using and block levels in noformat i rolled back all update and just updated mongoid it does not update mongoid itself but the ruby driver mongo from to there must be some code changes that defunct mongoidpurge for some reasons coderuby env test require fileexpandpathconfigenvironment file require rspecrails require railsmongoid require mongoidrspec see rspecconfigure do config configinclude mongoidmatchers type model configbeforesuite do factorybotreload end configaftereach do mongoidpurge end end code i was not able to investigate this in depth right now but i hope someone sees the problem and can help me with a quick fix greetings markus
1
includes the document without a year eg id id other unknown count artwork title the great wave off kanagawa
1
when running validate against a collection in a sharded system the top level of the output document should have a valid field currently drivers have to iterate through the subdocuments of raw checking if each individual shard is valid this becomes even more complicated when one shard has members and another has members since the members dont return a valid field at all
0
hello what is the right synchronous use of motor we need both sync and async connections for my project when trying motorclientdelegate or motordatabasedelegate i got the following self pair localhost def connectself pair copy of poolconnect avoiding call to sslwrapsocket which is inappropriate for motor todo refactor extra hooks in pool childgr greenletgetcurrent main childgrparent assert main should be on child greenlete assertionerror should be on child greenlet
1
guys ive seen similar issue reported in and mongorestore fails with failed restore error medstreamarticles error restoring from dumpmedstreamarticlesbson insertion error eof in mongod log i see following message i assertion size is invalid size must be between and first element insert articles i control begin backtrace backtraceprocessinfo mongodbversion gitversion compiledmodules uname sysname linux release version smp thu oct utc machine somap mongod mongod end backtrace i network assertionexception handling request closing client connection bsonobj size is invalid size must be between and first element insert articles it blocks me completely form using database id love to try changing batchsize but i cannot find where i can set while using mongorestore could you please help me
1
there is already a resmokey background thread that steps down kills or terminates replica set primaries of a replicasetfixture the hook does stepdowns by default but has options to do kill or terminate instead see example of how the kill or terminate options get used this ticket is to add three new variants of tenantmigrationjscorepassthrough for doing background stepdowns kills and terminates it will require making the stepdownkillterminate background thread compatible with the tenantmigrationtestfixture which has an array of replica sets this should be fairly straightforward since it already works with different fixtures and also stores an array of replica sets making the tenantmigrationpy background thread retry on retryable errors if it doesnt already
0
in the mongodb swift driver page theres no mention why this thing exists or when we should use this instead of the realm swift ios sdk this is mentioned though here are you looking for information about using swift with mongodb in a backend app commandline utility or running on macos or linux see the mongodb swift driver documentation we should add to the swift driver page an intro paragraph stating exactly that and if youre looking to build an ios app you should use the ios sdk ac x as a reader i should understand if i want to use the swift driver or the realm ios sdk after reading the docsecosystem swift page
0
were experiencing regular index corruption on a table with a partialindexfilter though it is not always that index getting corrupted the only relevant thing i could find near the time when the corruption is occurring is log rotation via nothing else in the log file seemed suspect unfortunately our data is very sensitive and cannot be shared i can tell you were using rubyrails and the mongoidparanoia gem and set up a unique partialfilterexpression with code v unique true key name name uniqueactivenames ns apiproductionappinstances partialfilterexpression deletedat null code the corruption appears to have only been exposed with this added but may have happened before and gone unnoticed we found it when being unable to find documents with this index using code deletedat null name name code
1
what problem are you facing the api docs on are returning for class documentation what driver and relevant dependency versions are you using na docs issue steps to reproduce visit try any of the links on the right side for example note the node driver api docs seem fine and could serve as a reference point until the docs are fixed
0
if a subobject whether or not it is in an array does not contain the field that we are checking for in a redact using the any operator the aggregate command returns an errorto see the problem uncomment lines from the steps to reproduce sectionserror printing stack trace at printstacktrace at dbcollectionaggregate at aggregate failed errmsg exception anys argument must be an array but is null code ok at is contrary to the behavior implied in the jstests where there are subobjects without the level key see the f in the subobject probably because that redact doesnt use any in its conditiond level e f level g not an always included when b is included
0
allow the user to enter a dns wildcard value to add to the csp acceptance criteria user selects a csp directive use may enter multiple dns wildcard values for the directive
0
during serverless load testing tenant migrations were issued as a result of an autoscaling round of the completed successfully although they took hours to complete for a few mib of data with minimal activity for those specific tenants one migration tenant id and migration id seemed to hang indefinitely hours and ended up in failedmigrationcleanupinprogress i will try to reproduce and gather artifacts that will paint a clearer picture what artifacts exactly would be neededdesired in the meantime here are the mongod logs for the donor and recipient note there was a rolling restart during the course of the migrations so ive attached both donor primary logs covering the period after the migration started this node was selected as primary this is the proxy instance noted in the tenant migration document and the original primary this was the primary for the duration of the test this is the proxy instance noted in the tenant migration document rough timeline courtesy of donor server restarted at some migration related data following stepup oplog fetcher for migration server was slow some migrations finished got forgetmigration recipient migrations started short read error migrations get committed
1
implement any needed changes to the rollbacktostable api
0
description hello for aws examples there is a python command that does not add up for example codeclientencryption pymongoencryptionclientencryption aws accesskeyid secretaccesskey keyvaultnamespace client codecoptionsuuidrepresentationstandard datakeyid clientencryptioncreatedatakeyawscode the client is never defined no where and the clientencryptioncreatedatakeyaws expects to get actually a masterkey parameter eg codedatakeyid clientencryptioncreatedatakeykmsproviderawsmasterkey region i think those are critical for people to onboard fle succesfully fyi scope of changes x as a user i understand where variables are defined that are used in examples refer to the companion project and the other languages in the guide for context on this project impact to other docs mvp work and date resources scope or design docs invision etc
0
two of our mongo servers has been hanged for a while then machine was restartednow secondary cannot repair itselfplease helpwed mar assertion bsonobj size first element usrbinmongod usrbinmongodmain wed mar getmore localoplogrs getmore exception invalid bsonobj size first element mar exception in initandlisten std getmore cursor didnt exist on server possible restart or timeout terminating
1
in i rather foolishly added a pragma once to the initializerfunctionsh header while adding license textthis broke the static initializers on windows since this file is intended to be included multiple times to tie in the initializersthe fix is to remove the pragma once
1
the wiredtiger release notes had rather than as the year ensure that fixes this and cut a soon reported by a user on
0
use cases we are using mongoreplay to replay data in sysperf the data is collected against one shard but we need replay it against a nonshard replset to reduce test complexity shard meta data failed all writes would like to have a way to let mongoreply ignore shard meta data example bq connection opcommand insert buildlogs accept sharding commands if not started with
1
add tests which send the extended query syntax codequery explaintrue hint showdiskloctrue codein an update to make sure they arent supported as query optionscodedbaupdatequery explaintrue writeresult ok nmodified n lastop writeerrors index code errmsg could not canonicalize query query explain true codewe will want to cover other query options which are not appropriate to updates if there are any
0
i am trying to use pulp which a program that manages package repositories and uses mongodb as a backend however it has been causing mongodb to segfault the file being piped in just contains dbqueuedcallsdrop the second link in particular seems to point the finger at mongopulp has a db purging call queued and on its startup it segfaults mongodb which itself complains about a null pointer and what appears to a threading problem as soon as anything thread but the first does anything it gets said null pointer
1
the documentation for the mms backup agent states to use the backup role for mongodb in all of this role is missing permissions to take a snapshots of a shared clusterthe to documentation applies to all of the docs will now apply to
1
server parameters are a complex registry system intimately often invoked via idl generated classes they can be set at both runtime via command and at startup via config files or arguments the various relevant pieces live in srcmongodbcommands and srcmongoidl this section should discuss how we register server parameters how they are set including setparametergetparameter commands and how they are accessed during runtime we should document it as a new file docsparametersmd this is intended to be abstract documentation describe relationships and state transitions not code in common language
0
components use standard section titles and formats eg separate considerations into a single section group subprocedures as their own sequences use new stepsyaml formating ensure each action has a corresponding command
0
errors in our unit tests in mci looks to be some sort of locking problemterminate called after throwing an instance of boostcloneimpl and what boost mutex lock failed in pthreadmutexlock invalid argument
1
theres a type in one place its and in the example url its consolemongodb provides a web interface that exposes diagnostic and monitoring information in a simple web page the web interface is accessible at localhost where the number is more than the mongod port for example if a locally running mongod is using the default port access the http console at
1
i cannot find a way to create labels from the documentationselect the monitoring tab and then select hostsi dont have hosts i have deployment and in the panel i have a drop down for all processes mongos processes mongod processes simone
1
hello i wanted to test linq and after changing lingprovider option without any code changes i get this error codejava systeminvalidoperationexception the linqextensionsinject method is only intended to be used in linq where clausescode code codejava collectionasqueryablenew aggregateoptions allowdiskuse true wherex filterinjectcode thanks best regards dimitri
0
using the new containing the fix for i see code from ahlmongomongoosefixturestesthelpers import getlargets in mapi mongooseapiresearch in init elif userlib in selfmongooselistlibraries in fretry handleerrorf e retrycount gethostargs in fretry return fargs kwargs in listlibraries for coll in selfconncollectionnames in collectionnames names for result in results in next if lenselfdata or selfrefresh in refresh getmoreselfns selfbatchsize selfid in sendmessage selfcollectioncodecoptions in unpackresponse errorobject e operationfailure database error could not find cursor in cache for id over collection mongoosesystemnamespaces code
1
codescala class mongospec extends funspec val mongouri val input sparkmongodbinputuri val spark sparksessionbuildermasterlocalappnamemongo configinput mongouri getorcreate describeread first itshould return line val rdd mongosparkloadspark printlnfirst line rddfirst code the prints like the followingsa lot of checking status and update cluster codejava orgmongodbdriverprotocolcommand scalatestrunrunningmongospec sending command buildinfo to database localstorage on connection to server orgmongodbdriverprotocolcommand scalatestrunrunningmongospec command execution completed orgmongodbdriverprotocolcommand scalatestrunrunningmongospec sending command collstats bsonstringvalueshardfile to database localstorage on connection to server orgmongodbdriverprotocolcommand scalatestrunrunningmongospec command execution completed orgmongodbdriverprotocolcommand scalatestrunrunningmongospec sending command aggregate bsonstringvalueshardfile to database localstorage on connection to server orgmongodbdrivercluster checking status of orgmongodbdrivercluster updating cluster description to typestandalone servers orgmongodbdrivercluster checking status of orgmongodbdrivercluster updating cluster description to typestandalone servers orgmongodbdrivercluster checking status of orgmongodbdrivercluster updating cluster description to typestandalone servers orgmongodbdrivercluster checking status of orgmongodbdrivercluster updating cluster description to typestandalone servers orgmongodbdrivercluster checking status of code
0
our team has been working on building a collection of filter definitions to query against and we noticed in our tests that when adding a null value to the collection we end up getting a nullreferenceexception it appears that while the collection is being ensured that its not null the entities get no validation pass and are assumed to be correct public andfilterdefinitionienumerable filters filters ensureisnotnullfilters nameoffilterstolist if we look at the render a little further down in the filterdefinitionbuildercs we see filters iterated over foreach var filter in filters var renderedfilter filterrenderdocumentserializer serializerregistry if filters contains a null this will trigger a nullreferenceexception
0
hi i find a problem function mongocgridfsfilesetid always return false and output error cannot set file id after saving file when i call it before mongocgridfsfilesave
1
i do not believe the mentioned mmsbackuprestoresnapshotpitexpirationhours is a valid setting in ops manager anymore
1
formatfailureconfigstest failed on ubuntu host project wiredtiger develop commit diff add evergreen test that cycles through known failure testformat configs sep utc evergreen subscription evergreen event task logs
0
in the following worked when defining a filterdefinition code filterdefinition buildereq x xdomain new querydomain i code however this appears to be broken in
1
fixed the issue of not killing cursors that were closed explicitly via pymongocursorcursorclose we should fix the same problem with pymongocommandcursorcommandcursorclose
0
we should advise decide where to deploy mongos and how many to deploy customers are being advised in the mongos documentation bq the most common practice is to run mongos instances on the same systems as your application servers but you can maintain mongos instances on the shards or on other dedicated resources however if the customer has hundreds of application servers andor are deploying mongos with their application server we run into situations where there are too many mongos database servers running and thereby causing problems andor there are thousands of documents in the config database related to mongos
0

# Dataset Card for "clean_MongoDB_balanced_1"

More Information needed
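Since the card is otherwise empty, a minimal loading sketch with the Hugging Face `datasets` library may help; the repository namespace and split name below are assumptions and should be replaced with this dataset's actual Hub path and splits.

```python
from collections import Counter

from datasets import load_dataset

# Hypothetical repo id: substitute the real "<namespace>" for this dataset.
ds = load_dataset("<namespace>/clean_MongoDB_balanced_1", split="train")

# Each example pairs a cleaned text string with a 0/1 label.
example = ds[0]
print(example["text_clean"][:200])
print("label:", example["label"])

# Quick check of the 0/1 class balance implied by the dataset name.
print(Counter(ds["label"]))
```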
