text_clean
stringlengths
22
6.54k
label
int64
0
1
config servers must now either be the old style config triplet or a replica set
0
in the interest of usability we should use type maps to ensure that embedded documents are converted to an object implementing arrayaccess eg arrayobject instead of stdclass objects this would make a fine default for users although we should make it easy for them to specify a typemap at the client level and have it be inherited for database and collection operations
1
when you run mongoexe from the command line it makes the dbshell file in the working director not in your user folder eg cuserszippy or cdocuments and settingszippy most ports of unix utilities write their dotfiles there and this is expected behavior
0
a bunch of cypress tests broke over the weekend there is no reason or commit that i can associate with the failures
1
in the last bullet point of the last note there is a typoif you specify a target field for unwind that holds an empty array in an input document the pipeline ignores the input document and will generates no result documentsafter the last coma it should read and will generate notice theres no extra letter s
1
i would like to be able to connect to an authenticated replica set with the following command when the mongodbtcpmycluster srv record existed and a txt record existed containing authsourceadminreplicasetmycluster noformat mongo mongodbsrvmyclustertest username cory password noformat using this command against the shell fails because the test database is used as the auth source this occurs because the uri parsing ignores the authsource in the txt record because username is not also specified in the uri if i were to move the username to the uri then the shell no longer prompts for a password and does not authenticate properly ie noformat mongo mongodbsrvmyclustertestusernamecory password noformat the current workaround is to specify authenticationdatabase admin on the command line instead of using the txt record ideally the first example would work and the shell would use the authsource from the uri via the txt record even though the username is specified on the command line and not explicitly in the uri the second example also seems acceptable but less consistent
1
code id id testfoo id id testfoo id id testfoo id id maxkey testfooerrorprinting stack failed couldnt find y id x e random late find y id x e random late failed couldnt find y id x e random late checkcheckrandom late var d d didnt jul uncaught exception assert failed couldnt find y id x e random late
1
following an earlier issue with mongo config servers being out of sync and manually resyncing the config dbs im seeing the following error message in my logs and having trouble writing to the databaseaug s env dbcluster fri aug going to retry checkshardversion host oldversion timestamp oldversionepoch ns dbtrafficsourcesbyhour version timestamp versionepoch globalversion timestamp globalversionepoch errmsg client version differs from configs for collection dbtrafficsourcesbyhour ok ive tried restarting all mongos instances stepping down the primary flushing the router configs all without any success
1
i noticed while debugging another problem that eviction worker threads created on startup are often exiting immediately and the wiredtiger doesnt notice the problem is that the threads are starting before wtconnevictionrun is set and the wtthreadcreate call returns success since the thread is started and exits cleanly we should stop creating workers before the cache is setup update how we track running eviction workers to be based on currently running threads rather than the number of threads that have been started reproduce this by running code benchwtperfwtperf o benchwtperfrunnersbtreesplitstresswtperf o code turning on evictserver verbose logging and looking for cache eviction thread exiting at the start of the run
0
the attached code throws an invalidoperationexception the linqtomongoinject method is only intended to be used in linq where clauses var point new geojsonpointnew imongoquery query querygeointersectso ocoord point var result objectscollectionasqueryablewherex queryinjecttolist exception thrown here in version of c driver this code was worked i use linq where why exception was thrown
1
before running workload teardowns the fsm runners main thread removes the stepdown permitted file waits for the stepping down file to not be present but the continuous stepdown thread does the following checks for the stepdown permitted file on starting a stepdown round writes the stepping down file on completing the stepdown round removes the stepping down file this allows the following interleaving continuous stepdown thread checks for stepdown permitted file and sees it fsm runner thread removes stepdown permitted file fsm runner thread checks for stepping down file and doesnt see it fsm runner thread starts executing a workloads teardown continuous stepdown thread starts a stepdown round which can cause the workloads teardown thread to get a network error
0
since changing srcmongodbmatcherexpressiontreeh we now get errors during compilation on clang noformat error moving a local object in a return statement prevents copy elision noformat
1
we should clarify the following two points get the fact that within no longer requires an index into the release notes and the reference page ensure that the geowithin rename of within works in the agg framework leaving the reference in the release notes to within for the moment
1
investigation when running a passthrough suite with the logical session cache refresh set to it can cause failures in listlocalsessionsjs the test expects a session to exist but its likely been reaped by the time that assertexpect is run proposed fix after some thought it seems the best idea to blacklist listsessionsjs and listallsessions from the logical session cache suites these tests rely on the command refreshlogicalsessioncachenow which is supposed to refresh the cache in a deterministic fashion running these tests with the background refresh thread on interferes with the deterministic nature of the tests because refresh operations can happen when theyre not supposed to since both these tests are in jscore im not worried about losing test coverage
0
a nonformalized paradigm for benchmarking within the tools was supplied with it is clear that the setup being used can give rise to a generalized benchmark framework across the tools in general a benchmark framework should satisfy the following requirements parametrize environment setup ie with a go function – setup environment variables database server if necessary etc parametrize function for benchmarking ie with a go function – this should specify an idempotent ie repeatable action to be safely run many times by the benchmark runtime programmatically output benchmark data ie using runtimepprof optional an evergreen task to generate a visualization of performance metrics over a static website see http flag of pprof we could include such a framework as part of mtc for use across the tools and possibly mongomirror
0
we followed the below steps for installing mongo db enterprise root access sudo is available for downloaded the tar file from tar file placed the tar file under tar zxvf cp r n why is service mongod start and service mongod stop not root access will be revoked after days how should we manage the process after that
1
this is dumb but sort and the error message is unclear options continue not to support it make the error message better support as a noop make an error make both a noop
0
quote i cant speak for other platforms but with my current setup trying to do will save an in the database with value so it wraps around instead of truncating quote quote inc suffers from the same problem as set that is if i try to i will actually subtract from testint further testing has revealed that queries suffer from the same problem if i were to make a query with a filter like testint ill actually be looking for documents where testint is less than quote truncation of integers when converting bson to php values may also result in property name corruption see
1
setup geographically distributed replica have an use case where the client should read only from the passive mongo serversalso i dont want to increase the priority because they should not be eligible for primary electionthe nodejs client ignore the passive servers when the read preference is secondarysecondaryprefered or nearestin code mongodbnodemodulesmongodbcorelibtopologiesreplsetjsfunction pickserverline seems that ignores the passive servers
1
bulkwriteexception has historically used zero as its integer code with libmongoc error api mongocbulkoperationexecute now reports its errors in the mongocerrorserver and mongocerrorwriteconcern domains see error reporting with the introduction of serverexception in we now document that codes used for that exception and its subclasses originate from the server since writeexception and bulkwriteexception are children of that class we should also use the error code from libmongocs bsonerrott instead of zero as i mentioned in we can accomplish this by using zendthrowexception directly
0
transactions appear as applyops oplog entries to allow transactions to include both commands such as index and collection creation and other operations ensure that the entire transaction is applied as a single batch and ensure that the operations inside this batch are applied serially the latter goal can be accomplished by changing the logic in oplogapplierimplfillwritervectors to make sure that a single writer worker is used to apply the operations in the batch to ensure the entire transaction is applied as a single batch we can either parse the entire oplog entry as part of the check for whether it must be processed individually to check for commands inside of transactions or add information to the oplog entry to signify that it is a transaction with a command the former may have be a slight performance hit due to the extra processing but the latter is more difficult to implement provided the performance hit is not too high the former solution is preferable
0
the error code for exceeding timeouts was determined to be ambiguous and was changed in we should update places that expect these errors in resmoke to handle the new error code right now the only place is in the stepdown hook note that there has been no change in pymongo throughout this process it doesnt know about the new error code for exceededtimelimit and treats it as a generic operationfailure so the error handling code in stepdownpy should be changed to handle operationfailure as well we should also check that the code for the operationfailure is indeed for and still bubble up other errors affects master and branches for now but based on comments in it will be backported to earlier branches later
0
the simplest repro is running the following agg explain code collexplainexecutionstatsaggregate code this will trigger an access violation when the explain path attempts to serialize the pipeline after executing the plan which was disposed by the documentsourcesort stage in the case above the projection stage releases a uniqueptr when its disposed however that same pointer is dereferenced in its serialize method
0
this will involve adding functionality to operationcommand to handle returning a driverbatchcursor for runcommandcursor
0
in mongocsharp driver there was an option to delete documents in fschunks collection using gridfs object but in mongodb driver how can we delete documents from only fschunks is it okay to delete documents in fschunks without using gridfs bucket object ie by using codejava databasegetcollectionfschunks code thanks in advance irshad
1
if an object is returned in the shell from a query or any other way from the server which is then modified the result will be that the id field will be ignored when being sent to a mongodb server in the bson encoding processthis is a bug in javascript shellclient but does not effect the server other than the client not sending the id field it is possible to cause this behavior with javascript on the server using eval dbeval or mapreduce in the reduce phase if the first document in the array is modified which is not a normal usage patternorig descriptiontheres a regression in the shell where if i write an object with my own id the shell ignores it and creates its own objectid i found this out during a data migration today when my previously working js function destroyed an entire tables worth of data
1
if the index requested is a compound index with both the metadata and the time fields noformat createindexes abc indexes noformat the index specification created on the bucket collection will be noformat createindexes systembucketsabc indexes noformat since buckets in the underlying bucket collection may contain overlapping time ranges we include both lower and upper bounds in the index to support the query optimizers ability to order measurements conversely if the time field has to be indexed in descending order we would transform a compound index noformat createindexes abc indexes noformat as follows noformat createindexes systembucketsabc indexes noformat
0
when using the shardkey directive in a mongoid document class the fields are not interpreted with respect to relationships if you have a code belongsto foocode for example and try to code shardkey foocode youll receive a missing type exception when you try to destroy an object of this class the workaround is to use code shardkey fooidcode note that this works correctly for indexes codeindex foo code will correctly index against fooid
0
introduces a pluggable encryption layer in libmongocrypt by exposing a set of callbacks that can be implemented by a libmongocrypt consumer this ticket tracks the work to implement those callbacks in the c binding
0
the serverstatusdur documentation is inaccurate with respect to the period of time covered by the reported statistics
1
os is when comparing my workload on the vs there is a minor speed regression which appears to be mainly due to some intermittent activity eviction i see the cpu spike to every seconds with a corresponding speed drop in the cpu stays relatively flat this is minor but thought you might like to know
0
i am using the c driver and mongodb server and i am trying to update multiple documents on a single replica set server using the new transaction feature and i always get an exception code mongodbdrivermongocommandexception command update failed bson field operationsessioninfotxnnumber is a duplicate field at connectionid commandmessage responsemessage at connection cancellationtoken cancellationtoken at protocol cancellationtoken cancellationtoken at operation retryablewritecontext context cancellationtoken cancellationtoken at context batch batch cancellationtoken cancellationtoken at context cancellationtoken cancellationtoken at mongodbdrivercoreoperationsbulkmixedwriteoperationexecutebatchasyncretryablewritecontext context batch batch cancellationtoken cancellationtoken at mongodbdrivercoreoperationsbulkmixedwriteoperationexecuteasynciwritebinding binding cancellationtoken cancellationtoken at mongodbdriveroperationexecutorexecutewriteoperationasynciwritebinding binding operation cancellationtoken cancellationtoken at session operation cancellationtoken cancellationtoken at session requests bulkwriteoptions options cancellationtoken cancellationtoken at filter update updateoptions options bulkwriteasync code here is my code code var client new mongoclientconnectionstring var database clientgetdatabasedatabasename var coupons databasegetcollectioncoupons var books databasegetcollectionbooks var session await databaseclientstartsessionasync sessionstarttransaction try couponsupdateoneasyncsession couponsfilter couponsupdateexception happens at this line booksupdateoneasyncsession booksfilter booksupdate await sessioncommittransactionasync catch exception ex await sessionaborttransactionasync consolewritelineexstacktrace code
0
see for details there are spec test changes for this ticket
0
paneltitledownstream change this project implements new audit event types which will appear in the audit log as well as new error codes which will be returned during authentication in rare and exceptional circumstances panel description of linked ticket paneltitleepic summary provide consistent comprehensive auditing of server events in keeping with best practices for server administration and regulatory requirements motivation while mongodb currently has a facility to audit many events important to end users not all salient events are logged others are not logged in all cases documentation scope document technical design document panel
0
right now if you want to use bsonobjbuilder you have to link against all of libmongoclientamongoclientlib thats a very big library and you shouldnt really need all of it just to use bsonobjbuilder
0
begin backtrace backtraceprocessinfo mongodbversion gitversion compiledmodules uname sysname linux release version smp thu jul utc machine somap mongod mongod end backtrace
1
relevant piece of codecode if secondarythrottle numdeleted if waitforreplication cgetlastop seconds to wait warning replication to secondaries for removerange at least seconds behind endl milliswaitingforreplication secondarythrottletimemillis codenote that num of secondaries to wait is always passed to waitforreplication this can cause the delete to needlessly wait for seconds if there are less than secondaries and worse if the node is not a member of a replica set at all
0
related to obviously is about the bulk apis merged results from a series of operationsthis ticket is about the simple update operation if a driver uses the new update command the server responds with an accurate count of documents actually changed in the nmodified field for example if a document has x and you use the update command to set x to the server responds with nmodified behavior is impossible to simulate with opupdate so drivers shouldnt return an nmodified field as the result of a legacy update operation depending on the language the field should be absent or null or attempting to access it should raise an exception
0
the expansion isnt set which causes the if then condition to evaluate to true see this evergreen task as an example of when this was triggered noformat call hanganalyzerpy script for tasks that are running remote mongo processes the file will define the remotes and any task which uses remote processes will have previously loaded this file to set these expansions macros if then coreextcore if then coreextmdmp fi sshoptions o gssapiauthenticationno o checkhostipno o stricthostkeycheckingno o userknownhostsfiledevnull o o buildscripts must be installed in on the remote host remotedir cmdspathoptmongodbtoolchaingdbbinpath sudo python buildscriptshanganalyzerpy hanganalyzeroption python buildscriptsremoteoperationspy verbose userhost user sshoptions sshoptions retries commands cmds commanddir python buildscriptsremoteoperationspy verbose userhost user operation copyfrom sshoptions sshoptions retries file debugger file coreext fi usage remoteoperationspy remoteoperationspy error commanddir option requires an argument noformat
0
we cannot connect to anything using the driver when its bundled because it will fail with codejavamongoserverselectionerror the driverversion field must be a string in the client metadata document code because it appears the compiled driver cannot find the packagejson file here for mongosh we use parcel for which dirname is not accessible a related problem also happens when we try to use the driver with the vscode extension in which we get codejavauncaught error enoent no such file or directory open packagejson code because for webpack dirname is accessible but you cannot use read file sync and expect the file to be included in the bundle would it be possible to find a solution that doesnt require getting the version using a file read a straightforward solution would be in a prebuild step a versionts file could be generated that would be usable by bundlers like webpack or parcel an alternative is using codejava requirepackagejson code
1
that is if db is connected to and you set to be arbiteronly in replsetinitiate the other machines will endlessly spin withtue jul replset info not trying to elect self do not yet have a complete set of data from any point in timetue jul replset info not trying to elect self do not yet have a complete set of data from any point in timetue jul replset info not trying to elect self do not yet have a complete set of data from any point in timei started upmongod replset dbpath replset dbpath port replset dbpath port ran rsinitiateidunicomplex memberstest in
0
id like to be able so see what percentage of a given distrobox type eg are running which amis this will help coordinate project deploys after a toolchain update wed like to hold off pushing patches or commits until a new toolchain ami is on of hosts or at least wed like to knowingly take the risk that some percentage or tasks might fail by running on a box with the old ami
0
i cannot find a way to create labels from the documentationselect the monitoring tab and then select hostsi dont have hosts i have deployment and in the panel i have a drop down for all processes mongos processes mongod processes simone
1
this crashes the python interpreter with a segmentation faultimport bsonassert bsonhascd foo dappenddbsondicttobsond true
0
operation execution is not restored after the service restart seems like this is only the issue for aspnet apps hence console app is able to restore connectionoperation execution when the service as back online again
1
per conversation with the official releases page is here this page has stale release info that isnt being updated i suggest that we remove the specific link on this page and just link to that
1
im in the process of upgrading from mongo gem to in my setup i had this code that was working fine code works fine no matter how many times you call it collupdate id p ne pricevat d ne dt set p pricevat d dt push h d dt p pricevat upsert true code in the process of upgrading i transformed this query to what i think that would be the equivalent compatible with code collfindid p ne pricevat d ne dt updateoneset p pricevat d dt push h d dt p pricevat upsert true code this code runs the time creates the document and the subsequent times throws the following error code mongo duplicate key error index pricehistoryproductsid dup key code it seems that upsert is not taken into account and the operation tries to create the document instead of updating it
1
hi was trying the tutorial here cloned the repo switched to branch start npm install installed pods with pod install repoupdate launching the app with npx reactnative runios fails compiling it never launches the app environment macos big sur node xcode
1
invalid index options such as safe may have been created in the past before the createindexes command started validating options more strictly in mongodb the listindexes command more strictly validates index options and fails when an invalid option is encountered at this point the only option is to drop and recreate the index this feature request is for the validate command to remove these invalid options as it is validating an index this would provide a much faster way than recreating the index when fixing an invalid index option
0
hi i find a problem function mongocgridfsfilesetid always return false and output error cannot set file id after saving file when i call it before mongocgridfsfilesave
1
its not possible to connect to a running mongod instance on a mac even the provided test example fails with the following message fail testdocumentationexamples error trace error received unexpected error topology is closed fail fail commandlinearguments
1
based on my conversation with i think the following bug exists in transaction metrics active and inactive counts the bug may affect other metrics as well im not sure if a transaction is aborted whether the number of active or inactive transactions is decremented depends on whether the txnresources were stashed at the time of the abort if they were stashed the inactive count is decremented if they were not stashed the active count is decremented this usually works because in a transaction requests flow first transactionparticipantbeginorcontinue calls transactionmetricsobserveronstart which increments the number of currently inactive transactions later transactionparticipantunstashtransactionresources calls transactionmetricsobserveronunstash both if the txnresources already exist or if they were just created which increments the number of currently active transactions and decrements the number of currently inactive transactions however transactionparticipantabortarbitrarytransaction can be called outside of a checked out session and so the following sequence can happen which causes the metrics to be incorrect for a short period noformat inactive active starts new transaction increments inactive count thread transactionparticipantbeginorcontinue inactive active txnresources have not been created so txnresourcestash is boostnone interprets this as meaning the transaction is active and decrements active count thread transactionparticipantabortarbitrarytransaction inactive metric incorrect active metric incorrect thread transactionparticipantunstashtransactionresources inactive metric remedied active metric remediednoformat however if transactionparticipantunstashtransactionresources throws before calling transactionmetricsobserveronunstash for example by timing out waiting to acquire the globallock then the inactive and active counts may remain permanently incorrect
0
i want to make an index for million of rows but everytime i try it give me some error row id userid inreply ts lang es fonum frnum crts geolng geolat text rt realmadrid hoy se celebra el sorteo de la fase de grupos de la champions league realmadrid keywords realmadrid celebra sorteo fase grupos champions league realmadrid index i use debian squeeze and mongodb from apt rootxxxx aptcache show package version architecture uname a linux xxxx smp fri apr utc gnulinux free m total used free shared buffers cached mem bufferscache swap mongodblog mon aug finishing map mon aug finishing map mon aug external sort used files in secs mon aug couldnt open varlibmongodbtmpesort too many open files mon aug assertion failed usrbin lsei usrbinmongod usrbin usrbin usrbin addre mon aug terminate called printing stack usrbin lsei usrbinmongod usrbin usrbin usrbin addre mon aug got signal aborted mon aug backtrace usrbin lsei usrbinmongod usrbin usrbin usrbin addre mon aug dbexit mon aug shutdown going to close listening sockets mon aug closing listening socket
1
before returning the batch response in cloneronebatchthencanceled the test should cancel the canceltoken to prevent the cloner from potentially making it another round before it gets cancelled this is done in other test cases for the reshardingtxncloner note the token should be canceled only once the request is sent to make sure the first batch doesnt get skipped entirely motivation the test can hang if the token gets cancelled after the first batch is processed and already awaiting a new response it can be reproduced as follows codectestfreshardingtxnclonertest cloneronebatchthencanceled const auto txns auto executor maketaskexecutorforcloner reshardingtxncloner clonerktwosourceidlist timestampmax auto opctxtoken operationcontextgetcancellationtoken auto cancelsource cancellationsourceopctxtoken auto future runclonercloner executor cancelsourcetoken oncommandreturntxnbatchstdvectortxnsbegin txnsbegin true isfirstbatch added for repro cancelsourcecancel auto status futuregetnothrow asserteqstatuscode errorcodescallbackcanceled code
0
this page is missing important noteswarnings that are onshould there be even be two different pages
1
mongodb server version using a query witch consists of a index key it also spend long time this is the query explain by the document it says when performing a count mongodb can return the count using only the index if the query can use an index the query only contains conditions on the keys of the index and the query predicates access a single contiguous range of index keys and i only use attributescitype in the filter
0
three times on the last hours all our mongos were deadlocked on serving requests to sharded collectionusing outofprod mongos to a sharded collection was blocked toduring the lock it was possible to use dbcurrentop and show collection on the sharded databaserestarting all mongos was the only way to get out of thisin attachment a cleaned log of one mongos during the last failure
1
i have replicaset and try to automate backup processthere are secondary replica instance on and arbiter on looks stop mongodump dbpath datamongodb out tmpbackup journal start work with dumpif i run script from user shell its work perfectly but when i put it into cron i dont receive any dumps and got if you are running a mongod on the same path you should connect to that instead of direct data file access all that time where was mongodlock file in data directoryquestion is why where are different behavior in same environment
0
if a chunk migration fails during the setup of the sharded collection we can end up with data on only one of the two shards if this is the case resumeafter is known not to work on the branch due to
0
backup agent released encode all collection metadata avoids edgecase issues in which there are unexpected characters in collection settings
1
if you insert a document with id as a date and then try to querying using type it will cause an error tested on and git hash dbblahinsertid dbblahfindid type err wrong type for field code noformat
0
when try to use findoneandupdate function of nodejs driver the return result is not as expected it return the original value rather than the updated one mongodriverjs var mongodb require mongodb mongoclient mongodbmongoclient mongourl mongoclientconnect mongourl function error database var col databasecollectionusers colfindonename betty function err firstreadmap consolelogfirst read position j firstreadmap colfindoneandupdate name betty set position functionerr updatemap consolelogupdate position j updatemap colfindonename betty function err secondreadmap consolelogsecond read position j secondreadmap databaseclose test db content db test dbusersfind id name betty position top left run command and output dappsspamongonodejsnode mongodriverjs first read position update position second read position packagejson name spa version private true dependencies mongodb
1
once an iterator is eof it should stay eof
1
requests for boost c version information sometimes can timeout on the serverside the blackduckhubpy must improve its error handling so that developers can better understand the error from the server retry a number of times and if the retries fail then fail the script without generating a bf
0
in two of the jstests there are instances where a local variable is deleted and this causes problems for parsing the file for fuzzing these delete commands are not needed since a new mongo shell is spawned for each test clearing the local variables
0
cache accounting underflows are only logged in diagnostic mode
0
tldr monitoringonly sockets must not send scram mechanism negotiation in ismaster monitoringonly sockets must not authenticate at all nonmonitoring sockets eg connection pool or singlethreaded client do a normal handshake and authenticate if there are credentials an authentication error on a socket must close all and only nonmonitoring sockets to the same server possible backward breaking change some drivers were resetting a servers topology description to unknown on an authentication error and should stop doing so this means the topology will always be correct even when authentication fails it will no longer be possible for authentication errors to be masked as server selection errors detailed changes
0
unittestzstd failed on rhel host project wiredtiger develop commit diff add an encryptor extension that uses the libsodium cryptography library add an encryptor extension that uses the libsodium cryptography library it should really be audited by a cryptographer before being used but is expected to be usable with at worst minor adjustments it uses the construction from libsodium to encrypt and checksum blocks it does not support retrieving keys from a key manager there not being any obvious opensource choices that im aware of this means that it can for the time being anyway only be configured with secretkey and not keyid which is perhaps unfortunate but better than nothing besides the encryptor itself this changeset includes the following related changes add the new extension to both the cmake and autotools builds rework the encryption page in the documentation adding the new encryptor and expanding on some of the other material and also add some bitsmake some improvements to the wtencryptor docs in utilmainc add a wtexplicitzero function for zeroing memory that takes precautions against being removed by the compiler and use it to clear copies of the secret key zero and free the secret key and open config string which contains the secret key when there is one earlier in nopencryptorc since this is supposed to be a template for application developers to fill in add a blank customize method without a customize method you cant configure keys so even though its officially optional it seems like the example should have one add support for the new extension to testformat note that doesnt exist are for testing the config plumbing and not any particular extension and needs to be able to munge the encrypted data and doesnt work with real encryption add new that checks the error paths in the new extensions customize method add an example snippet for how to configure the new extension to exallc for use in the docs add the encryptor directory to doxyfile so it can be an example add the new encryptor to the examples page in the documentation add a bunch of spelling words add some of the functions to the exception list in svoid like other extensions it also includes the following change that is not related but directly adjacent to a piece of the above in the cmake build of testformat pass the path to the zstd library with d like the other extensions some minor adjustments from a preliminary review document that wts checksums can be disabled when using encryption because any viable encryptor applies a cryptographically strong checksum theres no need to add a separate weaker checksum as well document this in the encryptors page and in the checksum argument of wtsessioncreate fix compiler warnings missed by accident initial changes from review also i missed something the change in wiredtigerin about configuring checksums also needs to be in apidatapy and incurs another spelling word argue with clangformat to get rid of the hangingindent comments make a couple more comment adjustments try again with the comment formatting it seems that the header is required to use hanging indent by functionpy so in order to avoid the rest of the comments after being reformatted with hanging indent by clangformat move them inside the function body this is maybe not optimal but it at least isnt visually revolting and doesnt break the tree also add sodiumencryptc to distextlist so that all the checks are run on it split the cleanup path for secretkeyp in two hopefully avoids false positives from inadequately pathsensitive static analyzers jul utc evergreen subscription evergreen event task logs unittestzstd
1
when we talk to a given mongo server through the javaapi with a eval it always failshere is a quick and dirty java program that demonstrates the issue if we go onto the console of the server and run the same command it works as expectedcodetitlebarjavaborderstylesolidimport commongodbdbimport commongodbmongoclientpublic class donotuse public static void mainstring args throws exception mongoclient mc new mongoclient db db mcgetdblog object o dbeval dbserverstatus objectnull systemoutprintlno codefails withcodetitlebarjavaborderstylesolidexception in thread main commongodbmongoexception not talking to master and retries used up at at at at at at at at at at at at server we are talking to is a slave to another master
1
gridfs was skipped during retryable reads implementation for expediency convert these operations to use commandoperation and the executewithselection aspect
0
bool processinfochecknumaenabled return codeappears that this check assumes that there is no numa impact for windowssolutionit appears that windows supports apis to find this info seenext stepconfirm if numa affects windows in the same way as linux assumption is yes
0
in the shell we need to expose the tlsdisabledprotocols arguments to allow the user to specify additional tls options the driver currently does not support this options nodejs however does either by passing this information directly as options of tlsconnect or by explicitly creating a securecontext and passing it there given that the tls connection is done in the driver we should be able to pass these options to the driver along with the other options listed as legal here without the driver supporting this there is no way for mongosh to support this functionality while it does not seem to be an option that is very common there is some evidence in sf and jira that we have customers using it given that supporting it should be pretty trivial i would not drop this feature references
0
the change in increased the history window to minutes and had originally caused a few regressions that was reflected in since then we have incorporated a few fixes regarding this problem in which has recovered some of the performance we currently still have a slight regression that needs investigation to clarify whether this amount of regression is expected and was created to investigate on the remaining regressions that were remaining in yscb workloads describes a read performance regression in a read yscb workload describes a update performance regression in a update yscb workload
0
finalize new gridfs api make gemspec compatible with bundler use odd point releases for unstable and even for stable
0
the changes from made it so evergreen tasks which run server tests now depend on archivedisttest binaries rather than archivedisttestdebug debug symbols the evergreen fetch command run while spawning an evergreen host only downloads the artifacts from transitively depended on tasks this means the debug symbols tarball is no longer automatically available to engineers and the setupspawnhostcoredump script when spawning a host
0
currentop should report locks held and locks waiting to be acquired separately for example in this excerpt from a currentop op result the lock were waiting for is presumably the database w x but it should say so noformatlocks global w database w waitingforlock true lockstats global acquirecount r w database acquirecount w acquirewaitcount w timeacquiringmicros w noformat additionally we have access to resourceid information when reporting the lock information we should add that information so that we dont have to infer which ops must be trying to get which collectiondatabase locks from the fact that they are deadlocking there are internal ops that either dont have the ns field filled out or take collection and oplog locks and there are transactions that now take locks across collectionsdatabases in conclusion the resourceids would be nice to have
0
i have a node replica and primary goes down and the another node is not becoming a primary so we observed when primary timeout the other node will not be become a primary every time primary become a current replicaset status secondary primary secondary we got below errors errors noformat error in heartbeat request to hostunreachable connection timed out i repl error in heartbeat request to exceededtimelimit couldnt get a connection within the time limit i repl error in heartbeat request to hostunreachable connection refused i repl error in heartbeat request to hostunreachable connection refused i repl error in heartbeat request to hostunreachable connection refused i repl member is now in state secondary i repl error in heartbeat request to exceededtimelimit couldnt get a connection within the time limit i repl starting an election since weve seen no primary in the past i repl conducting a dry run election to see if we could be elected i repl dry election run succeeded running for election i repl election succeeded assuming primary role in term i repl transition to primary i command command stagingdatajobs command find find datajobs filter flag exists false operation sort jobdatecreated projection id limit plansummary ixscan operation locks global acquirecount r database acquirecount r collection acquirecount r protocolopquery noformat replicaset configuration information noformat id version protocolversion members id host arbiteronly false buildindexes true hidden false priority tags slavedelay votes id host arbiteronly false buildindexes true hidden false priority tags slavedelay votes id host arbiteronly false buildindexes true hidden false priority tags slavedelay votes settings chainingallowed true heartbeatintervalmillis heartbeattimeoutsecs electiontimeoutmillis getlasterrormodes getlasterrordefaults w wtimeout replicasetid objectidxxxxxxxxxx noformat
1
systemthe system in place includes a rails based application using mongomapper as the interface to the db the mongo instances include a primary and secondary with an arbiter on small vmbackgroundwe are using mongo to maintain a two lists of people that is the base of a checkout system one collection named campaignid is the source of people the comprises the list of people who are available for a checkout a second collection named availableid maintains the same list of people but this collection is actually modified using a findandremove call to remove the person who has been checked outwe were experiencing double checkouts caused by the same person being placed into the availableid collection multiple times the solution executed was to change the index on the collection to include unique and dropdups on the personid attribute of the document the solution appears to work properly has the count and length of a distinct match on both the availableid and campaignid collections however at some point during replication we are receiving a duplicate key error on the personid index the current work around we have employed is to reindex the collections on the secondary which results in dropping all indexes not an ideal situation for failover but it does allow the replication to begin working again and will eventually complete its sync a second situation that may have a part in producing the error may occur when these collections are refreshed on a nightly basis this refresh happens through a rake task kicked off by a request to the rails application this task creates a tempid collection and then uses mongoimport to to fill the collection with fresh data indexes are created on this temp collection the current collections are then renamed campaignoldid and availableoldid and the temp are renamed the standard campaignid and availableidthe question how can we eliminate the duplicate key error when replicating to the secondary what could cause an index to exist but not be respected on the secondary serverthe error messagesyncthread duplicate key error index dup key syncthread duplicate key error index dup key the indexes on the collectionsvcsrepprimary name id ns key id v name ns key loc after name ns key id after v name ns key personid unique true dropdups true dropdups true v
1
it would be handy to throttle inserts if the cache becomes very dirty otherwise the amount of work required to complete a checkpoint can easily cause applications to stall waiting for space in the cache its likely that well want to add throttling either when a page is first dirtied andor when an updated is made to a dirty page this probably requires a new configuration setting to wiredtigeropen something like evictiondirtymax or evictiondirtythrottletrigger we should throttle more and more aggressively as the proportion of dirty content moves above the setting or approaches the setting if we want it to be a max we might want to allow the setting to be flexible when a checkpoint is running we will need new logic in the eviction server so that it starts choosing dirty pages to evict and becomes more aggressive about it as it reaches the target independent of the current evictiontrigger the goals here are limit the amount of work required to complete a checkpoint allow for very large caches to be helpful for readmostly workloads avoid application stalls due to checkpoints pinning transaction ids and stopping eviction
0
weve upgraded from mongoid to and noticed a sudden spike in the scanandorder metric on my server after an investigation of the issue weve noticed that the version performs a sort on id for relations between parentchild models belongsto havemany authorpostsfirst translates to findposts singlebatchtrue but since the index used for the query is the id of the relation the sort on the id causes a scanandorder this behaviour was not present in since the sort was not added to the query as a workaround we monkeypatched the gem to add opts none but this is not optimal can we have a global option for this or fixing it to go back to the previous behaviour having a rate of scanandorder in the graphs is not something we will get used to it should be always near to zero thank you
0
we have issue with holding bsonobj behaviour while holding values for duplicate keyswhen we create bson obj and append some keys with same name using c driver and prints it contains then it shows all four keys with different valueswhen we insert this created bsonobj into mongodb using c driver and check contains for perticuler row then it show that all keys having same value ie value of first keywhile the expected behaviour was the inserted row shall contain keys with same name but different valuesexample void somefuct dbclientconnection c cconnectlocalhost bsonobjbuilder b bsonobj p bobj coutdataptostringcstrendl string strtable somedbnametablename cinsertstrtable pbelow is output for on consol for cout insertimestamp insertimestamp insertimestamp insertimestamp then after insertion below query run from mongclient to display inserted contents of bsonobj id insertimestamp insertimestamp insertimestamp insertimestamp so table have columns with same keyname and same value which was not expectedkindy let us know on this behavior
1
the changes committed under autoyielding it should be safe to get rid of batched deletes and instead simply delete documents belonging to a whole range in one shot experiments showed that deleting hundred of thousands of documents takes less than a minute and the yielding should theoretically prevent any kind of starvationcolor the current behavior of the rangedeleter can easily result in the balancer starving waiting for a specific range deletion to drain the more range deletions the more likely this happens because the task gets reenqueued behind all the other onescolor of this ticket is to decide whether we need tocolor get rid of the batching policy keep it but drastically increasing rangedeleterbatchsize eg set it to this may be useful in a user wants to throttle orphan deletions in favor of crud operationscolor
0
we have a replica set of was and had gotten stalewe shutdown db contents and started back upthey started syncing finished fine in took longer crashed and started over then finished it took couple of hours to completehere is the log for with the crash at fri sep fri sep command admincmd command replsetheartbeat prodrudy v pv checkempty false from sep run command admincmd replsetheartbeat prodrudy v pv checkempty false from fri sep command admincmd command replsetheartbeat prodrudy v pv checkempty false from sep user assertion getting readlockfri sep socket http response send the operation completed successfully sep unhandled windows exceptionfri sep ecfri sep run command admincmd replsetheartbeat prodrudy v pv checkempty false from fri sep command admincmd command replsetheartbeat prodrudy v pv checkempty false from sep run command admincmd replsetheartbeat prodrudy v pv checkempty false from fri sep command admincmd command replsetheartbeat prodrudy v pv checkempty false from sep connection accepted from sep run command admincmd replsetheartbeat prodrudy v pv checkempty false from fri sep run command admincmd replsetheartbeat prodrudy v pv checkempty false from fri sep command admincmd command replsetheartbeat prodrudy v pv checkempty false from sep command admincmd command replsetheartb
1
the replication rollback project is running into the problem that clean shutdown ignores stabletimestamp they need an option or the default behavior to close and discard changes more recent than the stable timestamp
0
all tag values must be strings this is not stated in the docs nor accurately shown in the examplesin the examples the numbers are missing the quotes on tutorialconfigurereplicasettagsets
1
instaed of accessing configured pid file at is trying accessvarrunmongomongodpidcodecat etcmongodconf mongoconfwhere to fork and run in backgroundfork trueport location of pidfilepidfilepath start service hangs for minutes and failscode service mongod startstarting mongod via systemctljob failed see system journal and systemctl status for details codemongo is running after failurecode ps ef grep mongomongod usrbinmongod f etcmongodconfroot grep colorauto mongocodelog infocodeapr localhost mongod starting mongod tue apr localhost mongod tue apr warning servers dont have journaling enabled by default please use journal if you want durabilityapr localhost mongod tue apr localhost mongod about to fork child process waiting until server is ready for connectionsapr localhost mongod forked process localhost mongod all output going to localhost mongod child process started successfully parent exitingapr localhost mongod apr localhost systemd pid file varrunmongomongodpid not readable yet after startcode
1
theres the possibility that the code to generate dur see dbcommandscpp around line for serverstatus is now including time for recordstats and is causing serverstatus to slow down we could provide an option to skip collecting recordstats
0
mongorestoreexe is failing in when given the command linebqmongorestoreexe dir host the test looked like thiscodemongodb shell version oct shell started program mongodexe port dbpath nohttpinterface noprealloc smallfiles bindip note noprealloc may hurt performance in many applications thu oct mongodb starting thu oct debug build which is slower thu oct thu oct note this is a development version of mongodb thu oct not recommended for production thu oct thu oct db version pdfile version thu oct git version thu oct build info windows servicepackservice pack thu oct options bindip dbpath nohttpinterface true noprealloc true port smallfiles true thu oct journal thu oct recover no journal files present no recovery needed thu oct opening db local thu oct waiting for connections on port thu oct connection accepted from connection now open thu oct opening db thu oct allocating new datafile filling with zeroes thu oct creating directory thu oct done allocating datafile size took secs thu oct allocating new datafile filling with zeroes thu oct done allocating datafile size took secs thu oct datafileheaderinit initializing thu oct build index id thu oct build index done scanned total records secs thu oct insert locksmicros thu oct build index id thu oct build index done scanned total records secsthu oct shell started program mongodumpexe out host connected to thu oct connection accepted from connections now thu oct all thu oct database to thu oct error cannot dump collection has or null in the collection thu oct to thu oct doing snapshot thu oct thu oct metadata for to thu oct end connection connection now open thu oct thread stack usage was bytes which is the most so far thu oct cmd drop thu oct cmd drop oct shell started program mongorestoreexe dir host connected to thu oct connection accepted from connections now thu oct thu oct going into namespace objects found thu oct build index id thu oct build index done scanned total records thu oct creating index key id ns name id thu oct end connection connection now open thu oct thread stack usage was bytes which is the most so far thu oct connection accepted from connections now open thu oct terminating shutdown command received thu oct dbexit shutdown called thu oct shutdown going to close listening sockets thu oct closing listening socket thu oct shutdown going to flush diaglog thu oct shutdown going to close sockets thu oct shutdown waiting for fs preallocator thu oct shutdown lock for final committhu oct dbclientcursorinit call failed thu oct shutdown final commit thu oct end connection connection now open thu oct thread stack usage was bytes which is the most so far thu oct shutdown closing all files thu oct closeallfiles finished thu oct journalcleanup thu oct removejournalfiles thu oct shutdown removing fs lock thu oct dbexit really exiting now thu oct thread stack usage was bytes which is the most so farthu oct shell stopped mongo program on port completed successfully codeit now looks like thiscodefri oct end connection connections now openfri oct connection accepted from connection now openmongodb shell version oct shell started program mongodexe port dbpath nohttpinterface noprealloc smallfiles bindip note noprealloc may hurt performance in many applications fri oct mongodb starting fri oct debug build which is slower fri oct fri oct note this is a development version of mongodb fri oct not recommended for production fri oct fri oct db version pdfile version fri oct git version fri oct build info windows servicepackservice pack fri oct options bindip dbpath nohttpinterface true noprealloc true port smallfiles true fri oct journal fri oct recover no journal files present no recovery needed fri oct opening db local fri oct waiting for connections on port fri oct connection accepted from connection now open fri oct opening db fri oct allocating new datafile filling with zeroes fri oct creating directory fri oct done allocating datafile size took secs fri oct allocating new datafile filling with zeroes fri oct done allocating datafile size took secs fri oct datafileheaderinit initializing fri oct build index id fri oct build index done scanned total records secs fri oct insert locksmicros fri oct build index id fri oct build index done scanned total records secsfri oct shell started program mongodumpexe out host connected to fri oct connection accepted from connections now fri oct all fri oct database to fri oct error cannot dump collection has or null in the collection fri oct to fri oct doing snapshot fri oct fri oct metadata for to fri oct end connection connection now open fri oct thread stack usage was bytes which is the most so far fri oct cmd drop fri oct cmd drop oct shell started program mongorestoreexe dir host connected to fri oct connection accepted from connections now fri oct fri oct going into namespace fri oct assertion invalid ns query name fri oct problem detected during query over err invalid ns code fri oct warning restoring to without dropping restored data will be inserted without raising errors check your server objects fri oct creating index key id ns name id fri oct assertion invalid ns query getlasterror w fri oct fri oct problem detected during query over err invalid ns code fri oct dev wont reportnextsafe err invalid ns code assertion nextsafe err invalid ns code fri oct end connection connection now open fri oct thread stack usage was bytes which is the most so farassert are not equal collection does not restore properlyerrorprinting stack are not equal collection does not restore does not restore oct uncaught exception are not equal collection does not restore properlyfailed to load fri oct connection accepted from connections now open fri oct terminating shutdown command received fri oct dbexit shutdown called fri oct shutdown going to close listening sockets fri oct closing listening socket fri oct shutdown going to flush diaglog fri oct shutdown going to close sockets fri oct shutdown waiting for fs preallocator fri oct shutdown lock for final commit fri oct shutdown final commitfri oct dbclientcursorinit call failedcodein the good test mongorestore displays thu oct thu oct going into namespace codein the bad test mongorestore displays fri oct fri oct going into namespace codethe newbad one is failing to extract the filename portion of the file specification leaving the path part in it this is perhaps caused by the change from forward slashes to backslashes in part of the file specification displayed on the first linepassing testfailing test
1
add iamgetuser to minimum access policyiam block of access policy should look something like the belownoformat effect allow action resource noformat
1
at the very least it needs an extra space should probably also say what the number is maybe include both actual size and max allowed sizecode ok n writeerrors index code errmsg object to insert too code
0
we need to come up with a way to create client error codes that do not overlap with server error codes in the futureexisting error codes can probably be shared
1
this page says replica sets provide strict consistency other parts of the site use the term eventual consistency please correctclearify
1
keep getting this error when iterating over a query resultcommongodbmongoexceptionnetwork cant call something at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at at by javaioeofexception at at at at at at at more
1
mms server changelog released support for archive restores targz for databases whose filenames exceed characters api skip missed points in metrics data instead of returning empty data api return correct number of data points when querying metric data with the period option backup agent update to agent released with onprem use notimeout cursors to work around
1
recent versions of mongod will automatically create new users with scram credentials trying to authenticate with them using mongodbcr will fail this results in a failure in the auth client tests
1
i think there should be a hint saying to run the command from the prompt and not in the mongodb shell after the import there is no follow up as to how to view the file imported
0
were seeing the following message throuout the mongodblog i sharding refresh for collection configsystemsessions took ms and failed caused by commandnotfound no such command flushroutingtablecacheupdates bad cmd flushroutingtablecacheupdates configsystemsessions maxtimems clustertime clustertime signature hash keyid configserverstate optime ts t db admin i control sessions collection is not set up waiting until next sessions refresh interval no such command flushroutingtablecacheupdates bad cmd flushroutingtablecacheupdates configsystemsessions maxtimems clustertime clustertime signature hash keyid configserverstate optime ts t db admin noformat this secondary node is running on primary is on
1
from sample code is at
0
seeds is always set to meaning that previously discovered seeds are forgotten upon cluster reconnect
0
in order to create a new command when starting subsequent batches of grouped writes in the commandwriter we need to create a new bsonobjbuilder and bsonarraybuilder because they cannot be reusedthis functionality is required for bulkwrites
1

Dataset Card for "mongoDB_testset"

More Information needed

Downloads last month
0
Edit dataset card