text_clean: string (lengths 10–26.2k)
label: int64 (values 0–1)
When the WT btree uses a file per collection or index, it uses a huge number of files for deployments that have a huge number of collections. I want an option to use a file per database for the WT btree, and an LSM tree per database for the WT LSM. For motivation, see .
0
In we removed support for the authenticate command via OP_QUERY but preserved support for the saslStart command. This was unintentional, and we should add back support for just authenticate, to be parallel with the other allowed commands that drivers issue prior to determining whether to use OP_MSG or not.
1
{noformat}
inserting into negative shard
Wed Jul ... [shell] stopped mongo program on port ...
Wed Jul ... end connection ... ( connections now open)
Wed Jul ... end connection ... ( connections now open)
Wed Jul ... end connection ... ( connections now open)
gle start
Wed Jul ... DBClientCursor::init call() failed
Wed Jul ... query failed : shard_gle_insert.$cmd { getlasterror: ... } to: ...
message: error doing query: failed, fileName: src/mongo/shell/collection.js, lineNumber: ..., name: Error, stack: ...
inserting into positive shard
Wed Jul ... trying reconnect to ...
Wed Jul ... reconnect failed, couldn't connect to server ...
Wed Jul ... ERROR: socket exception ... server ... to load
{noformat}
1
In the ReplSetTest.awaitSecondaryNodesForRollbackTest() function, we first call awaitSecondaryNodes(), and if that times out we then enter a section of logic to check for an unrecoverable rollback scenario. If the first awaitSecondaryNodes() call times out, though, we trigger the hang analyzer, which will suspend the mongod processes that we attach to. This prevents us from connecting to the nodes to run commands to check for unrecoverability. We should disable the hang analyzer for this awaitSecondaryNodes() call; we can consider using MongoRunner.runHangAnalyzer.disable() so that we can still connect to and run commands against nodes even after it times out.
0
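A minimal sketch of the proposed change, assuming the helpers named above; the surrounding test body and the checkUnrecoverableRollback helper are hypothetical:
{code:javascript}
// Disable the hang analyzer around the first awaitSecondaryNodes() call so
// that a timeout leaves the mongod processes running and still connectable.
var caughtUp = true;
MongoRunner.runHangAnalyzer.disable();
try {
    rst.awaitSecondaryNodes(timeout);
} catch (e) {
    caughtUp = false;
} finally {
    MongoRunner.runHangAnalyzer.enable();
}
if (!caughtUp) {
    // The nodes were never suspended, so we can still connect and run
    // commands to check for the unrecoverable rollback scenario.
    checkUnrecoverableRollback(rst);  // hypothetical helper for the existing logic
}
{code}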
I have an existing Mongo database with files stored in GridFS, each with a metadata attribute. For example, from the mongo shell:
{code}
{ _id: ..., filename: ..., length: ..., chunkSize: ..., uploadDate: ..., metadata: { session: ..., project: ..., filename: ... } }
{code}
We keep the session and project identifiers in the metadata so that we can query files by those foreign keys. This data is stored in the metadata field, as is consistent with the Mongo docs. I'm trying to access this field using Mongoid (GridFS) and had no luck, even with a basic class defined like so:
{code:ruby}
class File
  include Mongoid::Document
  store_in collection: "fs.files"
  field :filename, type: String
  field :contentType, type: String
  field :length, type: Integer
  field :chunkSize, type: Integer
  field :uploadDate, type: Time
  field :file_metadata, type: Hash, as: :metadata
end
{code}
I was hoping that defining a field as metadata would have let me access it, but no luck. Is there a way to do this already, or will this require some custom patching in my File class?
1
During scheduling of a migration, we attempt to acquire the local distlock for the namespace. This acquisition isn't exception-safe – in particular, it can throw LockBusy. We don't attempt to catch this exception, unlike in the next distlock acquisition, where we check the status and then just return. As a result of this uncaught exception, the balancer thread will crash, terminating the whole process.
0
In the separate_debug.py tool, we don't do anything special to account for the fact that on macOS the separate debug information (the .dSYM) is a directory with substructure, rather than a file. SCons is heavily file-based, and it appears that trying to ignore this nuance breaks SCons in subtle ways. Instead, we will need to teach separate_debug.py about the substructure and let SCons handle it file by file. This will require changes to how auto_install_binaries.py integrates with separate_debug.py as well, since it currently uses suffix matching to determine placement, and that will no longer be sufficient.
1
Introduced here: after a network read error, the driver correctly discards its socket, but in single-threaded mode it also destroys the topology scanner node. Without a scanner node, the driver cannot reconnect to the server on the next operation.
1
Attempting to insert a document larger than bytes crashes a server; see the backtrace in the attached . On , the same OP_MSG request returns a command error response to the client with assertion .
0
Tasks / context: as a database contributor, when I run a patch build containing Google microbenchmarks, I expect to see red/green results (or async-signal-processed BFs) as a way to alert me to performance regressions. Acceptance criteria: ensure signal processing is running for Google Benchmarks; check historical values for Google Benchmarks through the Evergreen API for the base commit; make the threshold configurable in a YAML file in . For Google Benchmarks we'll use for this use case, meaning that if the task is worse for latency (since Google Benchmarks only reports latency) than the baseline commit, then the task will be marked failed (red). The key in the YAML file should be the key for the metric in the JSON perf output.
1
I have made changes to the source code, and I need to run all test cases to check their effect, using the command {code}gradlew check{code}. I have MongoDB running on a remote machine. Can anyone help me with configuring the Java MongoDB driver with a remotely running MongoDB?
0
Coverage failed on coverage-host (Evergreen). Commit diff: get all task queue distros query. Aug UTC. Evergreen subscription / Evergreen event: test_background_suite logs, history test_max_heartbeats logs, history task logs.
0
DB.prototype.copyDatabase() calls through to Mongo.prototype.copyDatabaseWithSCRAM() when the authentication mechanism is configured to be . The Mongo.prototype.copyDatabaseWithSCRAM() function is implemented in C++ and doesn't use the DriverSession underlying the DB object:
{code:javascript}
// Use the copyDatabase native helper for ...
if (mechanism == ...) {
    return this.getMongo().copyDatabaseWithSCRAM(fromdb, todb, fromhost, username, password, slaveOk);
}
{code}
Note: this issue was found via manual code inspection of all usages of DB.prototype.getMongo() during .
0
ExpressionCompare::evaluate() cannot compare different types (including numeric types) and cannot compare some types at all (e.g. objects); it lacks some functionality of Value::compare(). Test:
{code}
c = db.c;
c.drop();
c.save({...});
c.save({...});
c.save({...});
printjson(c.aggregate({$project: {z: {$eq: [...]}}}));
{code}
There are also some TODO comments:
{code}
/* TODO look into collapsing by using Value::compare() */
{code}
{code}
// CW TODO at least for now; later, handle automatic conversions
{code}
Observed behavior: an assertion occurs when an attempt is made to compare numbers of different numeric types with an aggregation expression. Expected behavior: different numeric types can be compared.
0
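For reference, a minimal shell repro of the comparison in question; the literal values are placeholders I chose, since the original test's values were lost:
{code:javascript}
// Compare the same numeric value stored as different BSON numeric types.
db.c.drop();
db.c.save({a: NumberInt(1), b: NumberLong(1), c: 1.0});
// Expected: z is true for both pairings; observed: an assertion when $eq
// compares numbers of different numeric types.
printjson(db.c.aggregate({$project: {z: {$eq: ["$a", "$b"]}}}));
printjson(db.c.aggregate({$project: {z: {$eq: ["$b", "$c"]}}}));
{code}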
I created a simple test that just inserts into a capped collection. On Windows with journaling, memory growth after inserting items is around , and if I let it continue, it will use all available memory; without journaling, memory quickly stops growing, no matter how long the test runs. On Linux, with journaling, memory quickly stops growing no matter how long the test is run; without journaling, memory quickly stops growing no matter how long the test is run. I tried version with the same problem.
1
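A sketch of the kind of insert loop described; the collection name and sizes are hypothetical, since the original test was not included here:
{code:javascript}
// Create a capped collection and insert continuously while watching the
// mongod process's resident memory.
db.createCollection("cappedTest", {capped: true, size: 100 * 1024 * 1024});
var pad = new Array(101).join("x");  // ~100-byte filler per document
for (var i = 0; i < 10 * 1000 * 1000; i++) {
    db.cappedTest.insert({i: i, pad: pad});
}
{code}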
Similar to the work described in , but a different code path: in this case we fail to construct a real ExpressionContext here.
0
The documentation says: "Sorting on an _id field that stores ObjectId values is equivalent to sorting by creation time." That does not seem to be the case. E.g., I have the following keys in insertion order; when sorted by _id, they do not reflect insertion time ( _id not larger than the previous). Tilo
1
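This behavior is consistent with how ObjectIds are constructed: the leading timestamp has one-second resolution, so ids generated within the same second are ordered by the remaining machine/process/counter bytes rather than by insertion order. A quick shell illustration (collection name is a placeholder):
{code:javascript}
// Two ObjectIds created in the same second share a timestamp prefix...
var a = new ObjectId(), b = new ObjectId();
print(a.getTimestamp(), b.getTimestamp());  // typically identical
// ...so sorting on _id compares the remaining bytes, which need not match
// insertion order across different clients or processes.
db.c.find().sort({_id: 1});
{code}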
Monitoring Agent changelog, version (released ):
- Monitoring Agent will now identify itself to the MMS servers using the FQDN of the server on which it is running.
- Improvements in connection management for monitored processes.

Backup Agent changelog, version (released ):
- Backup Agent will now identify itself to the MMS servers using the FQDN of the server on which it is running.

Automation Agent changelog, version (released ):
- Support for MongoDB .
- Fixes issue with minor version upgrades with auth enabled.
1
{panel:title=Epic summary}
See spec changes for details.
{panel}
0
Hi, your instructions state that I should use chkconfig on Ubuntu to start MongoDB as a service after reboot. I couldn't get this to work, and after some searching I realized that the equivalent command on Ubuntu is update-rc.d. It would be great if you could update the documentation. update-rc.d doesn't work exactly the same as chkconfig, so I'm not quite sure about the parameters. Thanks, Michael
1
In the process of upgrading from to , I've tried to convert the following query:
{code}
col.map_reduce(map, reduce, query: query, out: 'foobar', read: :primary)
{code}
I've rewritten it as follows:
{code}
col.find(query).map_reduce(map, reduce, out: 'foobar', read: :primary)
{code}
When calling to_a on the above MapReduce object, I get the following error:
{code}
NoMethodError: undefined method `values' for "foobar":String
  from ... `fetch_query_spec'
{code}
It seems that fetch_query_spec always expects a Hash as the value of the :out option:
{code}
def fetch_query_spec
  { selector: ..., options: ..., db_name: database_name,
    coll_name: out.values.first }  # expects a Hash
end
{code}
But looking at the docs, I see that it's valid to also pass just a string. If this behavior was not intended, and the driver should really accept strings too, I think that this would be a proper fix:
{code}
def fetch_query_spec
  { selector: ..., options: ..., db_name: database_name,
    coll_name: out.is_a?(String) ? out : out.values.first }  # also accept a string
end
{code}
What do you think? If that makes any sense, I'd be happy to prepare and submit a PR. Thanks.
1
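For comparison, the server itself accepts both forms of the out option, which is why accepting a plain string in the driver seems reasonable. E.g., in the mongo shell (collection and functions are placeholders):
{code:javascript}
var map = function() { emit(this.key, 1); };
var reduce = function(k, vals) { return Array.sum(vals); };
// Both of these are valid server-side:
db.col.mapReduce(map, reduce, {out: "foobar"});             // string form
db.col.mapReduce(map, reduce, {out: {replace: "foobar"}});  // document form
{code}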
{noformat}
            ns    total    read    write
...
bin/mongotop connected to ...
            ns    total    read    write
{noformat}
0
Introduced ghost timestamps on a primary for multi-statement transactions doing a multikey write. This causes WT's perspective of the all_durable timestamp to go backwards. To compensate, the method the storage engine exposes for getting the all_durable ensures the value does not go backwards. However, the version of reading at the all_durable timestamp uses the API that can go backwards. This can result in causal reads not seeing their own writes when they are concurrent with other writers flipping multikey inside a multi-statement transaction. Reading at the no-overlap , however, does use the safe API, and master uses the WiredTigerKVEngine call, which protects them from this bug.
1
Hi team, when spinning up a cluster in Azure using the Atlas API, the following error happens using this config file:
{code}
{
  "name": "guillaumeapi",
  "numShards": ,
  "replicationFactor": ,
  "providerSettings": {
    "providerName": "AZURE",
    "regionName": "EUROPE_WEST",
    "instanceSizeName": ""
  },
  "diskSizeGB": ,
  "backupEnabled": false
}
{code}
A bad request is returned with this message:
{code}
{"detail": "The required attribute diskTypeName was not specified.", "errorCode": "MISSING_ATTRIBUTE", "parameters": ["diskTypeName"], "reason": "Bad Request"}
{code}
But I can't find the possible values for diskTypeName in the docs.
1
Trying any find command when a replica set only contains a primary (only one server in the set), an fassert is raised in isQueryOkToSecondary(). It must be noted that I am using a ScopedDbConnection.
1
Hi, in you say to `cp -r /etc/tune-profiles/default /etc/tune-profiles/no-thp`. This file is not present on RedHat/CentOS anymore (and maybe on ; I've not verified). A good solution to fix that could be like what is said on , in particular this reply:
{quote}
In addition to setting the grub command line, you also need to configure tuned, but not using the instructions you linked to, as they are so full of errors it would take half a day just to explain them all. Create a custom tuned profile, which I'll call "custom", and then set the profile. You will base it on an existing profile, such as virtual-guest if you are running in a virtual machine ( is, of course), or throughput-performance if you are on a physical machine.

Create the directory to hold the custom profile:
mkdir /etc/tuned/custom

Create the custom profile /etc/tuned/custom/tuned.conf, for example:
[main]
include=virtual-guest
[vm]
transparent_hugepages=never

Now set the profile:
tuned-adm profile custom
{quote}
0
We must provide __wakeup to prevent unserializing; the zend_class_unserialize_deny callback only works for type C, not O, at least. See .
0
MongoDB secondary keeps throwing exceptions until it becomes stale (old versions don't have this bug):
{noformat}
Fri Oct ... connection accepted from ...
Fri Oct ... connection accepted from ...
Fri Oct ... jiepang.production.user Assertion failure ... db/ops/update.cpp ...
/usr/bin/mongod ... /usr/bin/mongod(thread_proxy) ...
Fri Oct ... replSet syncTail: assertion syncing: { ts: Timestamp ..., h: ..., op: "u", ns: "jiepang.production.user", ... o: { $set: { citystats.喀什.lcd: ..., citystats.天津.lcd: ..., citystats.泰安.lcd: ..., citystats.杭州.ncl: ..., citystats.阿克苏.ncl: ..., citystats.上海.lcd: ..., citystats.杭州.ncd: ..., citystats.阿克苏.ncd: ..., citystats.博尔塔拉州.ncl: ..., citystats.北京.ncl: ..., citystats.阿克苏地区.ncd: ..., citystats.泰安.ncl: ..., citystats.北京.ncd: ..., citystats.阿克苏地区.ncl: ..., citystats.上海.ncd: ..., citystats.阿克苏.lcd: ..., citystats.海外.ncd: ..., citystats.天津.ncd: ..., citystats.上海.ncl: ..., citystats.海外.ncl: ..., citystats.天津.ncl: ..., citystats.和田.ncd: ..., citystats.深圳.ncl: ..., citystats.博尔塔拉州.ncd: ..., citystats.和田.ncl: ..., citystats.青岛.lcd: ..., citystats.深圳.lcd: ..., citystats.深圳.ncd: ..., citystats.阿克苏地区.lcd: ..., citystats.海外.lcd: ..., citystats.青岛.ncd: ..., citystats.北京.lcd: ..., citystats.秦皇岛.lcd: ..., citystats.和田.lcd: ..., citystats.青岛.ncl: ..., citystats.伊宁.ncl: ..., citystats.喀什.ncl: ..., citystats.乌鲁木齐.lcd: ..., citystats.伊宁.ncd: ..., citystats.博尔塔拉州.lcd: ..., citystats.喀什.ncd: ..., citystats.秦皇岛.ncd: ..., citystats.泰安.ncd: ..., citystats.伊宁.lcd: ..., citystats.乌鲁木齐.ncl: ..., citystats.乌鲁木齐.ncd: ..., citystats.秦皇岛.ncl: ..., citystats.杭州.lcd: ... } } }
Fri Oct ... end connection ...
Fri Oct ... replSet syncing to: ...
{noformat}
The same assertion failure and syncTail message (with the identical $set document) then repeat continuously, interleaved with connection open/close messages, ending with:
{noformat}
Fri Oct ... mem ... mb
Fri Oct ... replSet syncing to: ...
{noformat}
1
The HTML page gets downloaded instead of opened.
1
Proposed directory structure:
- gdb/mongo/__init__.py (see below)
- gdb/mongo/command_util.py (see below)
- gdb/mongo/backtrace.py: mongodb-uniqstack, mongodb-bt-active-only
- gdb/mongo/locking.py: mongodb-show-locks, mongodb-waitsfor-graph, mongodb-deadlock-detect (not filed yet)
- gdb/mongo/printers.py: all the ...Printer classes and the build_pretty_printer() function from the .gdbinit file

and hang_analyzer.py would then source buildscripts/gdb/mongo/.

{code:python|title=gdb/mongo/__init__.py}
from __future__ import absolute_import

import importlib as _importlib
import pkgutil as _pkgutil


def _load_all_modules(path, package_name):
    """Dynamically load all modules in the 'mongo' package, so that any
    commands declared within them are registered."""
    for (_, module, _) in _pkgutil.walk_packages(path=path):
        _importlib.import_module("." + module, package_name)


_load_all_modules(__path__, __name__)
{code}

{code:python|title=gdb/mongo/command_util.py}
from __future__ import absolute_import

# TODO: combine the logic of this function with register_mongo_command() to
# support the mongodb-help command.


def register(name, command_class, **kwargs):
    """Registers a GDB command.

    Example:
        @command_util.register("hello-world", gdb.COMMAND_USER)
        class HelloWorldGdbCommand:
            # Greet the whole world.
            def invoke(self, arg, from_tty):
                print("hello world")
    """

    def wrapper(cls):
        # A GDB command is registered by calling gdb.Command.__init__(). We
        # construct an instance of `cls` so the command is automatically
        # registered as a result of decorating a class with the register()
        # function.
        cls(name, command_class, **kwargs)
        return cls

    return wrapper
{code}
cc
0
A new version of the Automation Agent will be released on . Automation Agent version (released ):
- Fix in algorithm for balancing mongod processes across cores.
- Fix for configuring oplog sizes TB.
- Fix that makes auto-upgrades more reliable.
1
A backport to for renumbered the named error code TooManyLogicalSessions to , which is not consistent with master or . We should correct this so it is consistent with and set it to .
1
Recent versions of mongod will automatically create new users with SCRAM credentials; trying to authenticate with them using MONGODB-CR will fail. This results in a failure in the auth client tests.
1
Attempting a non-haystack query against a field indexed by a haystack index triggers a verify assertion. Test:
{code}
c = db.c;
c.drop();
c.ensureIndex({pos: "geoHaystack", type: 1}, {bucketSize: 1});
printjson(c.find({...}).explain(true));
{code}
Result:
{noformat}
Wed Feb ... test.c Assertion failure ... src/mongo/db/geo/haystack.cpp ...
 mongod ... mongod ... (thread_proxy) ... libsystem_c.dylib _pthread_start ... libsystem_c.dylib thread_start ...
Wed Feb ... assertion: ... ns: test.c query: { query: { pos: ... }, $explain: true }
Wed Feb ... problem detected during query over test.c: { $err: "assertion ...", code: ... }
{noformat}
0
If you have a sharded cluster all running on one machine: in , if you connect to the mongos via localhost and there are no admin users, then it allows you full access to the mongos. Since the connections between mongos and mongod have full access anyway, this gives you full access to the cluster. In , however, authentication for commands is done on the mongods, with the credentials passed from mongos. Some machines will not consider the connection from the mongos to the mongods to be a localhost connection if the cluster was configured using the machine's hostname. This means that even though you connect to mongos on a local connection, some commands might still fail. On the other hand, some machines do recognize the connection between mongos and mongod as a localhost connection. On those machines, if you add an admin user to the cluster (which should close the localhost backdoor), commands that are passed through to the mongods directly can still succeed, even without write authorization. In order to disable the localhost exception completely, you need to add admin users to each shard directly. This only affects clusters that are all running on the same machine, so it's not really a security hole; it's more a problem for our test infrastructure, because it makes the behavior of authentication in tests vary based on which machine the tests are run on and whether or not the connections between the mongos and mongods get considered local. This seems to be related to whether the machine's hostname for itself resolves to or to the machine's public IP address.
0
Logging of intermediate test results could be improved in this test.
0
For immediate release. Please note that there are two; I missed one on Monday. Automation Agent release notes:
Version (released ):
- Fix: do not overwrite the log file for the Monitoring and Backup Agents when starting a new instance.
Version (released ):
- Fix: after starting a new Monitoring or Backup Agent, ensure that the process is running (achieving goal state).
1
Visual Studio reports as its __cplusplus version even when we are running C++ . Do not make decisions based on this macro for the inclusion of features. There is a bug report for this that was closed as deferred, but not actually fixed, it seems.
0
We're trying to get lagSeconds out of db.runCommand({serverStatus: 1}).repl (serverStatus via db.command(BSON::OrderedHash...)). We're getting the following error in the master's log: "assertion unauthorized db:local lock". This message is on the slave; notice the repl object doesn't list the sources at all: repl: { ismaster: false, sources: [] }. Assertion: nextSafe(): { err: "unauthorized db:local lock", code: ..., assertionCode: ..., errmsg: "db assertion failure", ok: ... }. We are authenticating as the repl user, which exists on both master and slave in the local db: db.auth("repl", ...). I'm working on an authenticated Nagios monitoring plugin.
1
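For what it's worth, the shell equivalent of what the plugin does looks roughly like this; the "repl" user and database names come from the setup described above:
{code:javascript}
// Authenticate as the repl user against the local database, then read the
// repl section of serverStatus.
db.getSiblingDB("local").auth("repl", password);
var status = db.getSiblingDB("admin").runCommand({serverStatus: 1});
printjson(status.repl);  // on the slave this lacks the sources array
{code}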
E.g.: initializing host : error copying script setup.sh to host : error copying script to remote machine: exit status ; host encountered .
1
This URL has outdated instructions for creating an MMS backup user. I'm currently validating, but it looks like the command should be more like this:
{code}
use admin
db.createUser({
  user: "backupuser",
  pwd: "password",
  roles: ["clusterAdmin", "readAnyDatabase", "userAdminAnyDatabase",
          {role: "readWrite", db: "local"},
          {role: "readWrite", db: "admin"}]
})
{code}
1
Applications that use change streams can fall behind the source cluster if event processing does not keep up with the rate of oplog generated on the source. In these cases, the events received by the service may have been generated on the cluster seconds/minutes/hours before they are retrieved by the application, and the change stream may even fall off the source oplog and be unable to resume syncing from the point of failure. The user may not be aware of the lag growing for some time, sometimes until too late. If change stream event output reported details about the server that could be used to calculate and monitor the change stream's lag, this could help users catch and address symptoms earlier. For example, can we configure change stream output to include lastCommittedOpTime in addition to the clusterTime metric that is already reported? The difference of the two could be used to calculate the lag from the source cluster. Currently, customers can calculate lag in a couple of ways – for example, compare the event clusterTime to the time when the event was received in the application, or retrieve lastCommittedOpTime in a parallel request and compare that to the change stream's output. But each of these approaches requires extra configuration and inherently incorporates some imprecision – they are not a reflection of the lag at the particular point when the change stream event was returned by the server.
0
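A sketch of the parallel-request workaround mentioned above, assuming the application can run replSetGetStatus; the collection name is a placeholder:
{code:javascript}
// Approximate the change stream's lag by comparing each event's clusterTime
// against the last committed opTime fetched in a separate request. The two
// reads are not atomic, which is exactly the imprecision described above.
var cs = db.coll.watch();
while (cs.hasNext()) {
    var event = cs.next();
    var status = db.adminCommand({replSetGetStatus: 1});
    var lastCommitted = status.optimes.lastCommittedOpTime.ts;
    // Timestamp.t is the seconds component of a BSON timestamp.
    print("approx lag (s): " + (lastCommitted.t - event.clusterTime.t));
}
{code}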
The problem: for customers who have their availability zones geographically quite far away, secondaries can have an unavoidably high-latency connection to the primary, even though the data rates are well within needs. Round-trip times as measured by ping can be in the range. In the above environment, chunk migration is terribly slow: chunk transfer rates in our cluster average a measly bytes/sec. Furthermore, rangeDeletes for removing the documents that belonged to a chunk that was just moved are also terribly slow. The rest of Mongo works well in our cluster; in fact, our secondaries' optime lag is typically less than . Standing up local secondary servers (i.e. close to the primary servers) improves chunk migration rates and rangeDelete rates by at least two orders of magnitude, but this has a tremendous cost, effectively doubling the number of secondary servers we require. A possible solution: it appears from our experience that when the balancer calls moveChunk and/or rangeDeletes, it is using secondary throttling by default, with a write concern of at least . Couldn't MongoDB support a high-latency-tolerant chunk migration mode, where calls to moveChunk and rangeDeletes for chunk migration would use a writeConcern of (i.e. secondary throttling disabled) except for the final write or delete? The final write or delete for each chunk could use secondary throttling with a writeConcern of . I believe this would yield a huge improvement in chunk migration performance for high-latency environments. What are the downsides to this solution? I really can't think of any.
0
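A sketch of what the proposed mode would amount to for a manual migration; _secondaryThrottle is an existing moveChunk option, though whether the balancer should set it this way is exactly the question raised above (namespace and chunk locator are hypothetical):
{code:javascript}
// Move a chunk without waiting for secondary replication on every document
// copied; only the final commit still waits for the configured write concern.
db.adminCommand({
    moveChunk: "mydb.mycoll",
    find: {shardKey: 42},
    to: "shard0001",
    _secondaryThrottle: false
});
{code}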
Currently, in the bucket catalog, we use the logicalSessionId to keep inserts separated per operation. However, it is not guaranteed that a logicalSessionId will always be supplied. We should instead use the OperationId for this.
0
xcutil failed on Ubuntu ARM host (Evergreen self-tests). Commit diff: use correct cross compile path.
0
Using the same code as shown here, I noticed that after updating to the driver via NuGet, all my unit tests were failing. I tracked this down to Collection.Insert(document) failing to give an ObjectId value, even though the representation was set to ObjectId for an interface member; before and after the call to Insert, the Id property value was null. If you need a sample, I can try to put one together.
1
As of the release, the mongodb-org / mongodb-enterprise tools metapackages are not present in the release. For an example, see the RHEL enterprise repo.
1
An UninterruptibleLockGuard was added in the destructor for MultiIndexBlockImpl. A recent failure has shown that it should encompass the entire scope of the try/catch block, in case other lock acquisitions are also interrupted.
0
I have a -node replica set, and when the primary goes down, another node does not become primary. We observed that when the primary times out, the other node will not become primary every time; the primary becomes . Current replica set status: SECONDARY, PRIMARY, SECONDARY. We got the errors below:
{noformat}
Error in heartbeat request to ; HostUnreachable: Connection timed out
I REPL     Error in heartbeat request to ; ExceededTimeLimit: Couldn't get a connection within the time limit
I REPL     Error in heartbeat request to ; HostUnreachable: Connection refused
I REPL     Error in heartbeat request to ; HostUnreachable: Connection refused
I REPL     Error in heartbeat request to ; HostUnreachable: Connection refused
I REPL     Member  is now in state SECONDARY
I REPL     Error in heartbeat request to ; ExceededTimeLimit: Couldn't get a connection within the time limit
I REPL     Starting an election, since we've seen no PRIMARY in the past
I REPL     conducting a dry run election to see if we could be elected
I REPL     dry election run succeeded, running for election
I REPL     election succeeded, assuming primary role in term
I REPL     transition to PRIMARY
I COMMAND  command staging.datajobs command: find { find: "datajobs", filter: { flag: { $exists: false }, operation: ... }, sort: { jobdatecreated: ... }, projection: { _id: ... }, limit: ... } planSummary: IXSCAN { operation: ... } locks: { Global: { acquireCount: { r: ... } }, Database: { acquireCount: { r: ... } }, Collection: { acquireCount: { r: ... } } } protocol:op_query
{noformat}
Replica set configuration information:
{noformat}
{
  "_id": , "version": , "protocolVersion": ,
  "members": [
    { "_id": , "host": , "arbiterOnly": false, "buildIndexes": true, "hidden": false, "priority": , "tags": {}, "slaveDelay": , "votes":  },
    { "_id": , "host": , "arbiterOnly": false, "buildIndexes": true, "hidden": false, "priority": , "tags": {}, "slaveDelay": , "votes":  },
    { "_id": , "host": , "arbiterOnly": false, "buildIndexes": true, "hidden": false, "priority": , "tags": {}, "slaveDelay": , "votes":  }
  ],
  "settings": {
    "chainingAllowed": true, "heartbeatIntervalMillis": , "heartbeatTimeoutSecs": , "electionTimeoutMillis": ,
    "getLastErrorModes": {}, "getLastErrorDefaults": { "w": , "wtimeout":  },
    "replicaSetId": ObjectId("xxxxxxxxxx")
  }
}
{noformat}
1
According to , the $natural parameter returns items according to their natural order within the database. This ordering is an internal implementation feature, and you should not rely on any particular structure within it. Remove the test assert that makes assumptions about the document returned by findOne().
1
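For illustration, the ordering the docs are describing; the collection name is a placeholder:
{code:javascript}
// $natural returns documents in their on-disk/internal order; nothing about
// this order is guaranteed, so a test must not assert which document
// findOne() returns.
db.coll.find().sort({$natural: 1});   // forward natural order
db.coll.find().sort({$natural: -1});  // reverse natural order
{code}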
I had to manually copy the platform folder to /usr/local/include/mongo in order to build example applications:
{noformat}
g++ second.cpp -I/usr/local/include/mongo -I/usr/local/include -lmongoclient -lboost_system -lboost_thread -lboost_filesystem -o second.out
In file included from ...:
... fatal error: mongo/platform/basic.h: No such file or directory
compilation terminated.
{noformat}
1
In the fix for , an #ifdef needed to be changed to fully enable support for Windows .
0
{noformat}
> db.t.group({...})
Wed Jan ... JS Error: uncaught exception: group command failed: { "errmsg" : "reduce has to be set", "ok" : 0 }
{ "ok" : 0, "errmsg" : "reduce has to be set" }
> db.t.stats()
{ "sharded" : false, "ns" : "test.t", "count" : ..., "size" : ..., "storageSize" : ..., "nindexes" : ..., "ok" : ... }
{noformat}
0
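The error itself is accurate: group requires a reduce function. A minimal working call, for contrast; key and field names are placeholders:
{code:javascript}
// Supplying reduce and initial makes the group command succeed.
db.t.group({
    key: {a: 1},
    reduce: function(doc, acc) { acc.count += 1; },
    initial: {count: 0}
});
{code}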
The tip under Hostname is not correct: the feature to paste a connection string into the hostname field does not exist. Instead, the tip could say: "If you copy a MongoDB connection string to your clipboard and switch back to MongoDB Compass, it will detect the string and ask to populate the connection dialog based on the clipboard content."
1
- db.collection.dataSize(): data size for the collection
- db.collection.storageSize(): allocation size, including unused space
- db.collection.totalSize(): the data size plus the index size
- db.collection.totalIndexSize(): the index size
1
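A quick shell illustration of the four helpers side by side; the collection name is a placeholder:
{code:javascript}
var c = db.mycollection;
print("dataSize:       " + c.dataSize());        // just the documents
print("storageSize:    " + c.storageSize());     // allocation, incl. unused space
print("totalIndexSize: " + c.totalIndexSize());  // all indexes
print("totalSize:      " + c.totalSize());       // data size plus index size, per the list above
{code}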
The internal SHABlock class represents the output of a hash algorithm. It has generic methods for interacting with a fixed-width binary blob, and traits about hash implementations are injected via templates. We currently only support and (and various algorithms), but there is nothing about the class that would prevent implementations of other hash algorithms. We should rename SHABlock to HashBlock to indicate the generic nature of the class.
0
Notice: the affected version is not listed in the choices. make check:
{noformat}
make: Entering directory ...
make: Nothing to be done for 'check-am'.
make: Leaving directory ...
lt-test-libmongoc: collect_tests_from_dir: Assertion `dir' failed.
/bin/sh: line : Aborted (core dumped) ... test-prog -f -p -f test.log
{noformat}
I notice the beta archive is much bigger than , but this is mostly because of the build directory; tests.json is very small.
0
This page has Activity, Alerts, and Events listed twice in the left-hand TOC box.
1
MotorClient and AsyncIOMotorClient are now the only client classes.
1
Fixed with .
0
It would be nice if you could disable JMX registration in Mongo. In our context, we have several WARs inside Tomcat that use the same MongoDB database. Upon loading, the drivers register to the same MBean, and we might get some issues here.
0
Fails when it gets up to .
1
When testing changes in on PPC, we discovered that the read/write lock implementation, and hence the fair lock implementation, do not include barriers in all of their lock/unlock operations. They need to; reasonable application code relies on it.
1
When a third-party library includes mongo headers, it constrains our ability to improve those headers for use in our main codebase. While investigating upgrading libfmt, we discovered a recent problem with libfmt in which its header causes malfunctions in the presence of a `using namespace std`. The library internally uses such directives, so it can't include the fmt/format.h header. Because has been locally modified to include mongo/logger/log_severity.h, it now brings in all the headers mongo logging brings in, which now include fmt/format.h, so fails to build due to an unrelated libfmt upgrade. We should isolate third-party code so that it doesn't directly include mongo's full headers for systems like logging. If we need to inject entities into a third-party library, we have to find a way to do it through an abstraction, so they don't depend on our headers directly. We can't control what third-party code will do the way we can control our own code, so we have to keep the third-party libraries separated from each other.
0
failed on Ubuntu (Evergreen). Commit diff: return when no test results found in Cedar. Dec UTC. Evergreen subscription / Evergreen event: task logs.
0
{noformat}
Mon Jul ... connection accepted from ... ( connections now open)
testpath: ... testfile: ... testname: ... nojournal: false, nojournalprealloc: false, auth: false, keyFile: null
Mon Jul ... shell: started program ... --eval TestData = ...; keyFileData = null; ... or jsTest.authenticate(db.getMongo()); sleep(...); db.getLastError(); ...
Mon Jul ... Invalid access at address ... from thread ...
Mon Jul ... CMD: drop ...
Mon Jul ... Got signal: (Segmentation fault).
Mon Jul ... Backtrace: ...
{noformat}
1
In we added a Command::acceptsAnyApiVersionParameters() flag for internal commands, which indicates that the command should skip checking API parameters. We applied that flag to the internal commands _configsvrDropCollection and _configsvrDropDatabase. It seems we need to add this flag to other internal commands as well, such as _configsvrCreateDatabase.
1
To avoid forcing the scheduler to continually reconsider tasks that are underwater, we should define an underwater threshold; one or two weeks is probably correct.
0
What about adding a connection pool to the C driver? Sample code:
{code}
mongo_conn_pool *pool = mongo_conn_pool_alloc();
mongo_conn_pool_init(pool, options);
mongo_conn *conn = mongo_conn_pool_acquire(pool);
mongo_conn_pool_release(pool, conn);
mongo_conn_pool_destroy(pool);
mongo_conn_pool_dealloc(pool);
{code}
Does anyone think this makes sense?
0
When I start mongod.exe, it crashes and tries to create a minidump file in its installation directory. Obviously, on Windows, regular user/service accounts don't have write permission to C:\Program Files.
{noformat}
I CONTROL  MongoDB starting : dbpath=e:\appicaptor\data\data ...
I CONTROL  targetMinOS: Windows Server ...
I CONTROL  db version ...
I CONTROL  git version: ...
I CONTROL  Failed to open minidump file C:\Program ...: Access denied
{noformat}
0
I've been trying to set up a simple replication system: main mongo, backup, and arbiter. Unfortunately, firing it up led to main being elected secondary and the backup being elected primary. Main's priority set to , and backup a priority of along with a slave delay of , leads to main being elected primary; but when backup is brought back online, it resumes its spot as primary. Main then enters recovering mode and never recovers. hidden: true has no effect on this behaviour. More details, including various config files: . This was eventually solved by reinstalling all of the relevant servers.
0
This morning we've been swamped by the following error messages:
{noformat}
Tue Jul ... insert pankia.production.articles exception: createPrivateMap failed (look in log for error) ...
Tue Jul ... ERROR: mmap private failed with out of memory ( bit build)
Tue Jul ... Assertion failed (look in log for error) /usr/bin/mongod(thread_proxy) ...
{noformat}
It didn't crash, but some write operations failed consistently. We then commented out the `journal = true` line from /etc/mongodb.conf and everything went back to normal, it seems. We've confirmed that if we set `journal = true` again, the above log entries start to appear again, so it's reproducible here. The server has of RAM on Linux , and there are of files under /var/lib/mongodb. If you need more info, let me know.
1
I want to update around fields of my metadata without modifying the document inside the GridFS collection. In the Mongo C# driver I was using database.GridFS.SetMetadata() for this. Could you please share an example or documentation for the same? Thanks in advance.
1
On my machine it takes about per cursor; adding more cursors adds more load, up to . This is reproduced by several people. I am from Node.js, so I can only give you this code to reproduce the issue. You need to `npm i mubsub`, and here is the code:
{code:javascript}
var mubsub = require('mubsub');
var client = mubsub(..., { safe: true });
var channel = client.channel('mubsub.channel');
channel.subscribe(..., console.log);
channel.subscribe(..., console.log);
channel.subscribe(..., console.log);
channel.subscribe(..., console.log);
channel.subscribe(..., console.log);
channel.subscribe(..., console.log);
channel.subscribe(..., console.log);
channel.subscribe(..., console.log);
channel.subscribe(..., console.log);
{code}
Original issue on the Node.js driver tracker: .
1
During reconciliation: if there exists a tombstone update that is globally visible, remove the key. If there exists a tombstone update with a WT_TS_MAX timestamp, just follow the regular tombstone approach to add the update/time pair to the previous update's stop time pair, or to the on-disk value's stop time pair.
0
Authentication restrictions can be attached to roles. This is necessary for externally authorized users, which do not have user document representations in MongoDB. When a user authenticates, its authentication restrictions are checked: it can authenticate if its restrictions are met and every restriction attached to inherited roles is met. The syntax for createRole and updateRole is identical to that of createUser and updateUser.
0
Keep getting this error when iterating over a query result:
{noformat}
com.mongodb.MongoException$Network: can't call something
    at ...
    (stack frames elided)
Caused by: java.io.EOFException
    at ...
    ... more
{noformat}
1
Page says "MMS does not use two-factor authentication for login." This is an incorrect statement since last week, when two-factor authentication was made mandatory for everyone in MMS.
1
WiredTiger page debug dump functions should unpack integer keys.
0
Tenant migration related jstests should call donorForgetMigration when the migration is completed/aborted/committed. Failure to do so will cause the recipient to exhaust the TenantMigrationRecipientService thread pool (default is ; see ). Currently, there are components on the recipient side, like the oplog fetcher and cloner, which run a synchronous task on the TenantMigrationRecipientService thread pool without yielding the thread. Those components will be stopped only if the migration on the recipient side fails due to an error or receives the recipientForgetMigration cmd.
0
When using a partialFilterExpression with an $eq condition, MongoDB may fetch the document only to read the field of the $eq condition of the partialFilterExpression. However, fetching the value should be superfluous, because the value is already known from the partialFilterExpression. This is undesired behaviour, because it uselessly fetches documents from disk, which makes the query slow, especially if documents are bigger. To circumvent the issue, you'd need to add the field of the $eq condition of the partialFilterExpression to the index. However, this makes the index bigger, and as such it will use more resources. A sketch of the behaviour and the workaround follows below.
0
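Names and values in this sketch are hypothetical:
{code:javascript}
// Partial index: only documents with status "active" are indexed.
db.orders.createIndex({userId: 1},
                      {partialFilterExpression: {status: {$eq: "active"}}});
// This query may FETCH each document just to re-check `status`, even though
// every indexed entry already satisfies status == "active":
db.orders.find({userId: 42, status: "active"});
// Workaround: widen the index keys so no fetch is needed -- at the cost of
// a bigger index.
db.orders.createIndex({userId: 1, status: 1},
                      {partialFilterExpression: {status: {$eq: "active"}}});
{code}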
A helpful PR revealed that we may have missed some options that are applied to operations in the initial options management work. We should incorporate serializeFunctions, checkKeys, ignoreUndefined, and minInternalBufferSize into options management.
0
Only the _gen tasks are selectable for the , even though there ought to be a generated logical_session_cache_replication_default_refresh_jscore_passthrough display task and a generated execution task.
0
The changes from , as part of , reused the WouldChangeOwningShard error response handling in mongos. This comes with the following limitations: the update must be performed in a batch of size ; the update must be performed as a retryable write or in a multi-document transaction; and the update must not be performed as a multi:true update (this restriction was omitted from and maybe needs to be added). One thought would be to have the donor shard handle the WouldChangeOwningShard exception locally, without bubbling it up to mongos.
0
Why, in the Cloud Manager documentation, do you document Ops Manager's agent options?
{quote}
New in version : Ops Manager adds the ability to specify a custom option using the setParameter option. With setParameter, you specify both the parameter and its value, as in the following MongoDB configuration document:
{quote}
This could be confusing. Please tell how this is related to Cloud Manager. Thanks, Emilio
1
I can't run mongodump or mongorestore with success; it gets stuck on downloading the database, even if it's small. I'm using an tunnel and a beta macOS, which may be related to the issue; it used to work on OS X .
{noformat}
mongodump -d db --host --port -o outfolder
  easycarros.voucher
  easycarros.schedule
  easycarros.busyhistory
  easycarros.user
  easycarros.voucher
  easycarros.schedule
  easycarros.busyhistory
  easycarros.user
  easycarros.voucher
  easycarros.schedule
  easycarros.busyhistory
  easycarros.user
{noformat}
1
Hi, we are not able to connect to the MongoDB cluster environment in AWS using the async Java driver. The issue is mentioned here: . We spent a lot of time writing code using the async Java driver, and are finally hitting a blocker in the AWS environment. We are able to connect using Spring Mongo, though. Regards, Krishna
1
In , we drop indexes with names that are too long for the index catalog before renaming the collection to the system.drop namespace. In a mixed cluster with a WiredTiger primary and a secondary, the collection drop on the secondary would be executed with the indexes with long index names dropped implicitly. In the event of a rollback on the secondary, these dropped indexes will not be restored.
0
If we let it get too full, things will get slow. Once it hits fill or something, we could fail all further adds, even if in a lucky spot that doesn't exceed maxChain.
0
Now that regions have been turned on by admins, users can choose to spawn hosts in (default ). This can also be set when creating a host from the CLI, using the --region flag. If you're likely to always choose the same region, note that you can set your own default from the user settings page. This default will apply if the region is configured for that distro; otherwise, it will revert back to . Note that although it is possible to spawn virtual workstations in multiple regions, Icecream is presently only supported for .
0
ScopedThread is undefined in the shell; it works fine with the shell.
{noformat}
 shell:
root@rui-linux:/sys/kernel/mm/transparent_hugepage# ... shell version: ... to test
> t = new ScopedThread()
Error: first argument must be a function at ...
> f = function() { return ...; }
> t = new ScopedThread(f)
{
    "init" : function init() { ... },
    "start" : function start() { ... },
    "join" : function join() { ... },
    "returnData" : function returnData() { ... }
}

 shell:
root@rui-linux:/sys/kernel/mm/transparent_hugepage# ... shell version: ... to test
> f = function() { return ...; }
> t = new ScopedThread(f)
ReferenceError: ScopedThread is not defined
{noformat}
0
The documentation says:
{code}
sudo mv conf/mms.properties.rpmsave conf/mms.properties
{code}
But in the upgrade, the RPM procedure creates an .rpmnew file, not an .rpmsave. This is dangerous, because we might override the good old file if we don't pay attention.
0
The default implementation should handle conversion of BSON results into PHP values; it will likely convert BSON objects to associative arrays, as is done in the current driver. A second implementation can have lossless marshaling into BSON types, which would avoid issues where empty BSON objects and arrays are indistinguishable in PHP.
0
While working on and , I thought to check if the global durable_timestamp moves back while we update its value during a transaction commit. I made this change on top of the change for :
{noformat}
diff --git a/src/txn/txn.c b/src/txn/txn.c
index ...
 __txn_mod_compare(const void *a, const void *b)
     return (aopt->u.op_col.recno < ...->u.op_col.recno);

+/*
+ * __get_all_durable_ts -- blah blah
+ */
+static wt_timestamp_t
+__get_all_durable_ts(WT_SESSION_IMPL *session)
+{
+    WT_CONNECTION_IMPL *conn;
+    WT_TXN_GLOBAL *txn_global;
+    WT_TXN_SHARED *s;
+    wt_timestamp_t ts, tmp_ts;
+    uint32_t i, session_cnt;
+
+    conn = ...;
+    txn_global = &conn->txn_global;
+    ts = txn_global->durable_timestamp;
+
+    __wt_readlock(session, &txn_global->rwlock);
+    /* Walk the array of concurrent transactions. */
+    WT_ORDERED_READ(session_cnt, conn->session_cnt);
+    for (i = 0, s = txn_global->txn_shared_list; i < session_cnt; i++, s++) {
+        WT_ORDERED_READ(tmp_ts, s->pinned_durable_timestamp);
+        if (tmp_ts != WT_TS_NONE && tmp_ts > ts)
+            ts = tmp_ts;
+    }
+    __wt_readunlock(session, &txn_global->rwlock);
+
+    return (ts);
+}
+
 /*
  * __wt_txn_commit -- Commit the current transaction.
  */
 __wt_txn_commit(WT_SESSION_IMPL *session, const char *cfg[])
     WT_TXN_OP *op;
     WT_UPDATE *upd;
     wt_timestamp_t candidate_durable_timestamp, prev_durable_timestamp;
+    wt_timestamp_t ts_prev, ts_after;
     uint32_t fileid, previous_state;
     u_int i;
     bool ft_resolution;
...
     update_durable_ts = ...;
     candidate_durable_timestamp = ...;
     prev_durable_timestamp = ...;
+    ts_prev = __get_all_durable_ts(session);
     /* If it looks like we'll need to move the global durable timestamp, attempt atomic cas and re-check. */
...
         prev_durable_timestamp = txn_global->durable_timestamp;
+    ts_after = __get_all_durable_ts(session);
+    if (ts_after < ts_prev)
+        WT_ERR_PANIC(session, WT_PANIC,
+          "all durable timestamp moved backwards from ... to ...", ts_prev, ts_after);
     /*
      * We're between transactions; if we need to block for eviction, it's a good time to do so.
      * Note that we must ignore any error return because the user's data is committed.
      */
{noformat}
I already see that, though the Python tests pass, test_checkpoint_smoke.sh fails:
{noformat}
t: WT_SESSION.commit_transaction: __wt_txn_commit: all durable timestamp moved backwards from to : WT_PANIC: WiredTiger library panic
t: WT_SESSION.commit_transaction: __wt_txn_commit: the process must exit and restart: WT_PANIC: WiredTiger library panic
t: WT_SESSION.commit_transaction: __wt_abort: aborting WiredTiger library
Aborted (core dumped)
{noformat}
It would be worthwhile running the full WT test suite and a stress test to investigate when and why the global durable moves back in our testing.
0
This ticket is a placeholder to account for the work that needs to happen to support $facet and $count in sharded clusters.
0
We'd like to have smoke.py shuffle the tests in a suite. It should take a seed as a parameter to use for randomizing the shuffle. If a seed isn't provided, it should use the current git hash. This way, each build that buildbot runs has a different shuffle for running tests within a suite, but it is still easily reproducible.
0
Hi, I'd appreciate a clarification of the following questions, for a customer production ticket. On , the mongodump documentation says: "mongodump version supports the following versions of the MongoDB server: MongoDB , MongoDB , MongoDB , MongoDB ." However, on the same page it says: "When using mongorestore to load data files created by mongodump, be sure that you are restoring to the same major version of the MongoDB server that the files were created from. For example, if your dump was created from a MongoDB server running version , be sure that the MongoDB server you are restoring to is also running version . In addition, ensure that you are using the same version of mongorestore to load the data files as the version of mongodump that you used to create them. For example, if you used mongodump version to create the dump, use mongorestore version to restore it." Does this mean that if a customer needs to export a database from MongoDB to import into MongoDB , this will not be supported/possible when using mongodump? Is the export that different from to ? Question : if we are running mongodump, and while mongodump is running documents are being updated/inserted/deleted in the MongoDB database, can we expect all of these concurrent changes to be manifested in the output file created by mongodump? Thanks, Dror
1
Ideally, the node would also go read-only when this flag is set.
0
I tried to call .extras() with max_time_ms, but it doesn't perform the query with that extra MongoDB operator:
{code}
[1] pry(main)> Yoolk::Listing.where(name: ...)
=> #<Mongoid::Criteria
  selector: {...},
  options: {...},
  class: Yoolk::Listing,
  embedded: false>
[2] pry(main)> Yoolk::Listing.where(name: ...).
{code}
It takes ages to finish. What is the correct way to do it?
1
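As a cross-check, this is what the equivalent operator looks like from the mongo shell; the Mongoid criteria above should ultimately send the same maxTimeMS option (collection name and value are placeholders):
{code:javascript}
// Abort server-side execution after 5 seconds instead of letting the query
// run indefinitely.
db.yoolk_listings.find({name: "..."}).maxTimeMS(5000);
{code}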
The documentation needs to be changed with the latest update to Cloud Manager; see . This is the updated process:
1. On the Deployment page, click the button with the wrench and a green plus icon for the cluster.
2. Click the checkbox, then click Continue.
3. All the servers already have automation agents, so click Initialize Automation.
4. Wait for Cloud Manager to finish its process.
5. Click Review Deployment.
6. Click Confirm Deploy.
0