Columns: text_clean (string, lengths 8 to 6.57k); label (int64, values 0 and 1)
In an update operation, if one of the updated keys is a prefix of another updated key, I am getting a "have conflicting mods in update" error. We are facing this issue in our production system. Can someone shed some light on whether this is expected behavior, or whether there is some issue?
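This is in fact expected server behavior: an update document may not contain two field paths where one is a prefix of the other (for example {$set: {"a": 1, "a.b": 2}}), because the two modifications would overlap. A minimal sketch of the prefix check in Python (a hypothetical pre-validation helper, not server code):

```python
def find_conflicting_mods(paths):
    """Return pairs of dotted update paths that conflict.

    Two paths conflict when they are equal or when one is a dotted
    prefix of the other, e.g. "a" and "a.b" both target field "a".
    """
    conflicts = []
    ordered = sorted(paths)
    for i, p in enumerate(ordered):
        for q in ordered[i + 1:]:
            # "a" conflicts with "a.b" but not with the sibling "ab"
            if q == p or q.startswith(p + "."):
                conflicts.append((p, q))
    return conflicts
```

Running the check on ["a", "a.b"] reports the pair, while ["a", "ab"] is fine, since "ab" is a sibling field rather than a nested path.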
1
Sorry for my English, I hope you guys can get it. Thanks in advance. Mongoid makes chaos and confusion in my environment. I think it is a very big problem, because it makes my Rails application stop working. I don't know if I am the only one who has met this kind of problem; I finally had to downgrade Mongoid, and the older version is good. It happens occasionally, and if I restart my Rails web application it disappears, but it reappears after about half an hour. If I run the same statement in the Rails console many times, no problem appears. The problem is that some instances will be wrong: I visit the same page, and a certain instance on this page is good most of the time, but sometimes it is nil, or even an instance of another class, which is most weird.

Example 1: FollowLecturerMongo.where(is_valid: true).distinct(:user_id).count should return a number, shouldn't it? But sometimes it throws the exception undefined method 'count' for nil:NilClass. FollowLecturerMongo.where(...).distinct should always be an array, I think, but it returns nil; I don't know why.

Example 2: ss.name sometimes throws the exception undefined method 'name'. The instance ss in the block should be an instance of class StockStyleMongo, but it is a WxScanMessageMongo; the latter class is never used on this page.

Example 3: LogSmsStatusMongo.create(id_code: id_code) sometimes throws undefined method for nil:NilClass, with this stack: documents, process, each, each, each, first, block in exists?, try_cache, exists?, exists?, validate_root, block in validate_each, with_query, validate_each, block in validate. The failing statement is cursor.document. Can cursor.document be nil? Is that possible?

Conclusion: it seems that Mongoid sometimes makes chaos, returning the wrong instance, or returning nil where that should be impossible. Maybe something is wrong in the connection pool integrated in Mongoid: a call goes to MongoDB via one connection in the pool but the result comes back through another one? I tried limiting max_pool_size, but the exception still happened. I limited Rainbows threads, and even changed Rainbows to Unicorn, but the problem still appeared. There is no problem when I run the above statements in the Rails console, no matter how many times I run them; the problems only appear when I visit the web page. I hope this is helpful for you. Thanks for your patience. I hope this bug will be fixed soon, and then I'll be back to the newer Mongoid. Thank you all.
1
Request summary: a new aggregation expression that is also an accumulator (see ticket).

Syntax:

{code:js}
{$mergeObjects: [<expression>, ...]}
{code}

Examples:

{code:js}
// Merging two objects together
db.merging.insert({_id: ..., subObject: {b: ..., c: ...}})
db.merging.aggregate([...])
// results: {_id: ..., newDocument: {b: ..., c: ..., d: ...}}

// Merging the root with a new field
db.merging.insert({_id: ...})
db.merging.aggregate([...])
// results: {_id: ..., newDocument: {_id: ..., newField: "newValue"}}

// Replacing a field in the root
db.merging.insert({_id: ...})
db.merging.aggregate([...])
// results: {_id: ..., newDocument: {_id: "newValue"}}

// Overriding in the opposite order
db.merging.insert({_id: ...})
db.merging.aggregate([...])
// results: {_id: ..., newDocument: {_id: ...}}

db.merging.insert({_id: ..., subDoc: {a: ..., b: ...}})
db.merging.aggregate([...])
// results: {_id: ..., subDoc: {a: ..., b: ...}}
{code}

Detailed behavior: the new expression will take any number of expressions. It will error if any expression does not evaluate to an object. If the field x exists in multiple objects, the value of x in the last document wins; since the input is an array, you can reverse the order if you want different overriding semantics.
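The last-field-wins semantics described above can be modeled outside the server. This Python sketch (an illustration of the documented behavior, not the server implementation) merges plain dicts left to right and rejects non-object inputs:

```python
from functools import reduce

def merge_objects(*docs):
    """Model of $mergeObjects semantics: combine documents left to right.

    When the same field appears in several inputs, the value from the
    last input wins; a non-object input raises an error, mirroring the
    expression's validation rule described above.
    """
    for d in docs:
        if not isinstance(d, dict):
            raise TypeError("$mergeObjects inputs must evaluate to objects")
    # dict unpacking applies later documents over earlier ones
    return reduce(lambda acc, d: {**acc, **d}, docs, {})
```

Reversing the argument order flips which value wins, matching the "overriding in the opposite order" example.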
0
This ticket should introduce a new document source that passes documents through unchanged. The point of this ticket is to unblock future work to be done in parallel. We should not be able to parse user input to create this stage; it will only be created by the desugaring implemented in a related ticket.
0
We are trying to establish a connection with MongoDB from the Spark connector. The total size of the collection is a few GB and it is a sharded cluster. I am trying to query only a few minutes of data, which would be small, as I implemented predicate pushdown with pipeline clauses at the time of reading the data frame, but I am getting an error. Below is the code snippet:

{code}
bin/pyspark --conf spark.mongodb.input.uri=xxxx --conf spark.mongodb.output.uri=xxxxx --packages ...

fromdate_iso = ...
todate_iso = ...
pipeline = "[{'$project': {'_id': 1, ...}}, {'$match': {'createdate': {'$gte': {'$date': fromdate_iso}, '$lt': {'$date': todate_iso}}}}]"  # comment: ankitpandey
df = spark.read.format("com.mongodb.spark.sql.DefaultSource") \
    .option("database", ...) \
    .option("collection", ...) \
    .option("readPreference.name", "secondaryPreferred") \
    .option("pipeline", pipeline) \
    .load()
{code}

Below is the error:

{code}
An error occurred while calling ...: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange SinglePartition
+- EnsureRequirements
   +- HashAggregate(keys=[], functions=[...], output=[...])
      +- Scan MongoRelation(MongoRDD at RDD at ...)
         StructField(createdate, TimestampType, true), StructField(datacenter, StringType, true),
         StructField(datetaken, LongType, true),
         StructField(files, ArrayType(StructType(StructField(filetype, StringType, true),
           StructField(url, StringType, true), StructField(width, IntegerType, true),
           StructField(height, IntegerType, true), StructField(size, LongType, true)), true), true),
         StructField(owneraccountid, LongType, true)
         PushedFilters: [], ReadSchema: struct<...>
{code}

I need help to rectify this issue. Thanks in advance.
1
I installed the RPM packages below on CentOS, with a replica set deployed as follows: one primary, one arbiter, one secondary, and a shard cluster set up on this replica set. The network interface of the machine one member resided on had an outage for some reason for several days, which likely caused it to lose communication with the other members. I fixed the network issue later.

Symptom: when running rs.status() on the mongo primary, I find the secondary in an exception status with the error message: "db exception in producer: replSet source for syncing doesn't seem to be await capable -- is it an older version of mongodb?"

Dump message for details:

{code}
rs.status()
{
  "set": ..., "date": ..., "myState": ...,
  "members": [
    {"_id": ..., "name": ..., "health": ..., "state": ..., "stateStr": "SECONDARY",
     "uptime": ..., "optime": ..., "optimeDate": ..., "lastHeartbeat": ..., "pingMs": ...,
     "errmsg": "db exception in producer: replSet source for syncing doesn't seem to be await capable -- is it an older version of mongodb?"},
    {"_id": ..., "name": ..., "health": ..., "state": ..., "stateStr": "PRIMARY",
     "uptime": ..., "optime": ..., "optimeDate": ..., "self": true},
    {"_id": ..., "name": ..., "health": ..., "state": ..., "stateStr": "ARBITER",
     "uptime": ..., "lastHeartbeat": ..., "pingMs": ...}
  ],
  "ok": 1
}
{code}

How can I fix this problem?
1
The ability to add comments to the documentation pages, with up/down votes, would be an incredibly helpful additional source, because sometimes not everything is in the documentation.
1
{code}
Fri Apr ... foo.a Assertion failure ... src/mongo/db/btree.h ...
Fri Apr ... update foo.a query: {_id: ...} update: {$set: {x: ...}} exception: assertion ...
{code}
1
When building the dbtest binary with clang, the following error is emitted:

{noformat}
Compiling build/cached/mongo/dbtests/inserttest.o
error: local variable 'obj' will be copied despite being returned by name
    return obj;
note: call 'std::move' explicitly to avoid copying
    return obj;  ->  return std::move(obj);
1 error generated.
scons: *** Error ...
scons: done building targets (errors occurred during build).
{noformat}

The fix is to either change the function to return BSONArray, or to change it to return std::move(obj). The former change is probably preferred for clarity.
0
Automation Agent changelog, version (released):
- Better logging for SSL connection failures.
- Use absolute paths for determining which Monitoring and Backup Agents are managed.
- When restoring a backup, ensure that arbiter nodes never download data.

Monitoring Agent changelog, version (released):
- Retrieve information on mongos in a cluster by querying the config servers.

Backup Agent changelog, version (released):
- Updated to use Go.
1
backport
1
Hi, regarding "if globalLock.ratio is also high, MongoDB has likely been processing a large number of long-running queries": serverStatus().globalLock.ratio does not exist; it was removed from serverStatus a long time ago (see "Remove globalLock.ratio from serverStatus"). HTH, Pierre. Reporter: Pierre De Wilde (email).
0
I was attempting to submit a manual patch build for an Evergreen project (mongo-php-driver). The PR was originally opened against the master branch but was later rebased onto another branch, which happens to be the first branch where our Evergreen config was introduced. I noticed that GitHub didn't register any automatic Evergreen patch build for the PR, so I attempted to manually submit a patch from the command line. Using `evergreen p mongo-php-driver` restricted me to selecting test-standalone-ssl and test-replicaset-auth tasks on a subset of build variants; these appear to correlate with the GitHub patch definitions in our project configuration. Using `evergreen patch -p mongo-php-driver -t test-atlas` to manually specify the actual task I wanted to test had the same restrictions. Finally, I used `evergreen patch -p mongo-php-driver -v all -t all`; the resulting UI had the same restrictions as before. Before giving up, I decided to use the "schedule all tasks" button from this patch's listing, and that appears to have done the job.

I'm curious if there is an explanation for what I experienced here, e.g. a UI bug, or something to do with rebasing the PR. I'm not sure if it's pertinent, but our project configuration does have "disable patching" checked. I didn't toggle the option, so I believe it's been set that way since our project was first configured. I couldn't find an explanation of that option in the Evergreen wiki, so I'm not clear how it might impact what I was trying to do. If it is intended to actually disable patch submission, it begs the question as to how I was able to schedule a patch build, and how `evergreen patch` was allowed to create a patch in the first place.
0
In a configuration with multiple databases, if records are inserted into collections of one database, records from unrelated collections may be inserted into yet other unrelated collections. There may also be rearrangements in collections in the original database, but I cannot determine this as readily. As all records in the same database have an _id of the same name, I have been using an {$exists: false} query on that id name to find the errors. The logs frequently show errors indicating a bad BSON object type and extremely long strings. Further, validate does not always find the errors, and returns ok when errors are still present. Other than inserting a couple hundred thousand records into a couple of collections at the same time with the Java driver, I have not found a specific way to reproduce this error, but I can provide a valgrind log. The bad records do not seem to be inserted by any client insert calls, and the journal file does not repair the collections.
1
The native driver doesn't support causal consistency / real-time order as expected:

{code:java}
var db = client.db('test');
var largeObj = {};
for (var i = 0; i < ...; i++) largeObj[i] = Math.random();
var session = client.startSession();
var v = Date.now();
console.log('init v', v);
var collection = db.collection('test');
collection.updateOne({_id: ...}, {$set: Object.assign({v: v}, largeObj)},
                     {session: session, upsert: true},
                     function () { console.log(new Date(), 'a'); });
collection.findOne({_id: ...}, {projection: {v: 1}, session: session},
                   function (err, result) { console.log(new Date(), err, result); });
collection.updateOne({_id: ...}, {$set: {v: v}}, {session: session},
                     function () { console.log(new Date(), 'b'); });
collection.findOne({_id: ...}, {projection: {v: 1}, session: session},
                   function (err, result) { console.log(new Date(), err, result); });
setTimeout(function () {
  db.collection('test').findOne({_id: ...}, {projection: {v: 1}, session: session}, console.log);
});
{code}

I expect the operations to execute in order, but the versions I get back are out of order. Native driver version: see above.
0
Due to mongod immediately dropping connections on a replica set reconfig, the C driver can report an error when there isn't one.
0
We currently have documentation in a number of different places, internal and external. We should review the homes for our documentation and try to minimize the number of different sources.
0
Use case: ...
GH PR: ...
Dependencies: upstream and/or downstream requirements and timelines to bear in mind.
Unknowns: questions that need to be answered to determine implementation.
Acceptance criteria:
- Implementation requirements (functional reqs, potential snafus to avoid, performance targets, etc.)
- Testing requirements (unit test, spec test, sync, etc.)
- Documentation requirements (DOCSP ticket, API docs, etc.)
- Follow-up requirements (additional tickets to file, required releases, etc.)
0
Re: presently the mock_ocsp_responder shell component output looks like this:

{code}
Initializing OCSP responder
 * Serving Flask app "mock_ocsp_responder" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on ... (Press CTRL+C to quit)
{code}

We need to update mock_ocsp_responder to output details of requests and responses, so that it is easier to understand what exactly is going on during failed tests.
0
Currently we have the document below on converting a replica set to a sharded cluster; however, it doesn't cover the scenario where the replica set has authentication enabled. For a sharded cluster, the user credentials are stored in the admin database on the config servers, so if auth is enabled there would be some extra steps: dump the user credentials from the original replica set, and then restore them to the config servers via mongos.
1
{code}
I SHARDING moving chunk ns: tmfp.filestore.fs.chunks, moving ns: tmfp.filestore.fs.chunks shard: ... lastmod: ... min: {files_id: MinKey, n: MinKey} max: {files_id: ..., n: ...}
I SHARDING moveChunk result: {cause: {ok: 0, errmsg: "can't accept new chunks because there are still deletes from previous migration"}, ok: 0, errmsg: "moveChunk failed to engage TO-shard in the data transfer: caused by :: UnknownError: can't accept new chunks because there are still deletes"}
{code}

Please check this and let me know a solution. Thanks in advance.
1
{code}
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local
make
{code}

produces

{code}
Building CXX object src/mongocxx/CMakeFiles/mongocxx.dir/bulk_write.cpp.o
Building CXX object src/mongocxx/CMakeFiles/mongocxx.dir/client.cpp.o
In file included from mongocxx-driver/src/mongocxx/private/ssl.hpp
In function 'mongoc_ssl_opt_t mongocxx::make_ssl_opts(const mongocxx::ssl&)':
error: 'const value_type' has no member named 'c_str'
    out.pem_file = ssl_opts.pem_file->c_str();
error: 'const value_type' has no member named 'c_str'
    out.pem_pwd = ssl_opts.pem_password->c_str();
error: 'const value_type' has no member named 'c_str'
    out.ca_file = ssl_opts.ca_file->c_str();
error: 'const value_type' has no member named 'c_str'
    out.ca_dir = ssl_opts.ca_dir->c_str();
error: 'const value_type' has no member named 'c_str'
    out.crl_file = ssl_opts.crl_file->c_str();
recipe for target 'src/mongocxx/CMakeFiles/mongocxx.dir/client.cpp.o' failed
{code}
1
With the new implementation of the setFCV command, we have the guarantee that new and old DDL operations won't ever run concurrently on the cluster, so there is no longer any need to take the distributed locks on these new paths. Instead, we will only use the local version of the distributed lock to serialize with other DDL operations running on the same shard.
0
We've just restored a member from a backup and are trying to get it to catch up by replaying the oplog. It is now spewing this into the logs:

{noformat}
Thu Jun ... Assertion failure sz ... db/pdfile.cpp ...
 /usr/bin/mongod ...
Thu Jun ... assertion ... ns: conversocial.content query: {query: {bysource: false, status: ..., r: ..., source: ..., createddate: {$lte: new Date(...), $gt: new Date(...)}}, orderby: {createddate: ..., _id: ...}, $hint: {source: ..., status: ..., createddate: ..., _id: ...}}
{noformat}

Please advise.
1
I set up config servers, mongod servers for one replica set, and mongos. After running a sequence of operations (create collection, enableSharding, insert, findAndModify, query, delete collection) several times, mongos received a signal and crashed. I ran into this error on both versions tested. A snippet is attached below:

{noformat}
Tue Aug ... CMD: shardcollection: {shardcollection: ..., unique: false, key: ...}
Tue Aug ... enable sharding on: ... with shard key: ...
Tue Aug ... about to create first chunk for: ...
Tue Aug ... successfully created first chunk for ... at: lastmod: ... min: {...: MinKey, ...: MinKey} max: {...: MaxKey, ...: MaxKey}
Tue Aug ... connection accepted from ...
Tue Aug ... ns: ... ClusteredCursor::query ShardConnection had to change, attempt: ...
Tue Aug ... delete failed b/c of StaleConfigException, retrying; ns: ... patt: ...
Tue Aug ... update failed b/c of StaleConfigException, retrying; ns: ... query: ...
Tue Aug ... dist lock pinged successfully for: ...
Tue Aug ... end connection ...
Tue Aug ... DROP: ...
Tue Aug ... about to log metadata event: {_id: ..., server: ..., clientAddr: "N/A", time: new Date(...), what: "dropCollection.start", ns: ..., details: ...}
Tue Aug ... about to log metadata event: {_id: ..., server: ..., clientAddr: "N/A", time: new Date(...), what: "dropCollection", ns: ..., details: ...}
Tue Aug ... enable sharding on: ... with shard key: {_id: ...}
Tue Aug ... about to create first chunk for: ...
Tue Aug ... successfully created first chunk for ... at: lastmod: ... min: {_id: MinKey} max: {_id: MaxKey}
Tue Aug ... AssertionException in process: ns: ... doWrite ...
Tue Aug ... AssertionException in process: ns: ... doWrite ...
Tue Aug ... got signal ...
{noformat}
1
It would be nice to be able to use them in an if statement.
0
It is my understanding that in the current server version the only supported log format is JSON. While this format has undisputable benefits when it is parsed by machines, its readability by human beings is often significantly inferior to the old text format. For example, the following log output is produced when an older server starts:

{code}
I SHARDING ... Marking collection admin.system.roles as collection version ...
I SHARDING ... Marking collection admin.system.version as collection version ...
I STORAGE  ... createCollection: local.startup_log with generated UUID ... and options: {capped: true, size: ...}
I INDEX    ... index build: done building index _id_ on ns local.startup_log
I SHARDING ... Marking collection local.startup_log as collection version ...
I FTDC     ... Initializing full-time diagnostic data capture with directory ...
I STORAGE  ... createCollection: local.replset.oplogTruncateAfterPoint with generated UUID and options: ...
I INDEX    ... index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
{code}

and the following is the same output from a newer server:

{code}
C INDEX   ... "Index build: done building index {indexName} on ns {nss}","attr":{"indexName":"_id_","nss":"local.startup_log"}
C FTDC    ... "Initializing full-time diagnostic data capture with directory ..."
C STORAGE ... "createCollection: {nss} with {generatedUUID_generated_provided} UUID: {optionsWithUUID_uuid_get} and options: ..."
C INDEX   ... "Index build: done building index {indexName} on ns {nss}","attr":{"indexName":"_id_","nss":"local.replset.oplogTruncateAfterPoint"}
C STORAGE ... "createCollection: {nss} with {generatedUUID_generated_provided} UUID: {optionsWithUUID_uuid_get} and options: ..."
C INDEX   ... "Index build: done building index {indexName} on ns {nss}","attr":{"indexName":"_id_","nss":"local.replset.minvalid"}
C STORAGE ... "createCollection: {nss} with {generatedUUID_generated_provided} UUID: {optionsWithUUID_uuid_get} and options: ..."
C INDEX   ... "Index build: done building index {indexName} on ns {nss}","attr":{"indexName":"_id_","nss":"local.replset.election"}
C REPL    ... "Did not find local initialized voted for document at startup"
C REPL    ... "Did not find local rollback ID document at startup. Creating one."
C STORAGE ... "createCollection: {nss} with {generatedUUID_generated_provided} UUID: {optionsWithUUID_uuid_get} and options: ..."
C INDEX   ... "Index build: done building index {indexName} on ns {nss}","attr":{"indexName":"_id_","nss":"local.system.rollback.id"}
{code}

A quick glance at the first log is sufficient to determine what is happening (index builds); the second log is a wall of text, and I cannot get a quick assessment of what the logs are saying at all. In other words, I need to carefully read each log entry to figure out what it is referring to.

In my opinion, the loss of readability is due to:
1. The log entries becoming much longer. This increases the amount of text that must be processed while the effective payload remains the same.
2. Since the log entries have become longer, the important information is frequently no longer in the first columns that a user likely has in their view. This means the user either has to scroll right all the time, or has to read the logs with lines wrapped. Wrapping lines loses the structure that the text format provided (timestamp, severity, facility, etc. aligned vertically) and makes the logs even harder to read.
3. The information is no longer presented contiguously, because of the separation of formatting strings and data. For example, "index build: done building index _id_ on ns local.startup_log" became "Index build: done building index {indexName} on ns {nss}","attr":{"indexName":"_id_","nss":"local.system.rollback.id"}. This means the reader must jump back and forth constantly as they read the entries.

As an engineer who develops drivers for the server, as well as a system administrator who reads server logs to troubleshoot the server, I find that the loss of real-time logging in text format has a major negative effect on my productivity.

There are various tools available for converting JSON logs to text logs. Ideally, I would like to send the logs to a program that would output the JSON logs unmodified to one file, and convert the JSON log to a text log and store the result in another file. While I am unaware of a program that specifically does this, it seems like a straightforward problem to solve. However, there is the challenge of getting the log to such a program. The server currently offers three options for the log destination: stdout, syslog, and file.

- stdout would be the most suitable destination, since I would be able to trivially pipe the logs to the log splitter program. Unfortunately, stdout logging is not allowed when fork is used, which is the subject of this ticket. It is possible to launch the server without fork by implementing the daemonization externally, but given that I believe there is an easy-to-implement alternative in the server that allows file/pipe output with forking, I believe the server should allow logging to stdout when fork is used.
- syslog logging is suitable for production installations, where there is only a single mongod/mongos process that can be globally configured. It is not generally suitable for development, because I can have many mongod processes running in various deployments, including scratch ones, and configuring syslog for each of them is untenable.
- The file destination requires the log splitter to know when the log is rotated by the server, so that the splitter can reopen the current log file. I am not aware how the splitter might be notified of this event by the server directly, although I imagine a workaround may be devised by watching for changes in the file system in the log directory. The splitter would also need to rename its log files correctly when the server rotates logs, which would be error-prone.

Expected behavior: as an engineer developing drivers for the server, and as a system administrator, I would like mongod/mongos etc. to be able to log to a pipe, such as by logging to standard output which I will redirect to a pipe, even when the server forks.

Implementation: I believe it will be sufficient to prohibit logging to stdout only when stdout is a TTY. If stdout is not a TTY, in my understanding it is redirected to either a file or a pipe, and hence the logs will be consumed by something and will not be lost, so it does not seem necessary to prohibit these cases.
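The splitter described above is straightforward to prototype. This sketch (a hypothetical converter, not an official tool) assumes only the documented top-level keys of the server's structured log format (t, s, c, ctx, msg, and attr) and renders one JSON line back into a compact text-style line:

```python
import json

def json_log_to_text(line):
    """Render one structured (JSON) server log line as a text-style line.

    Assumes the documented top-level keys: t (timestamp), s (severity),
    c (component), ctx (thread name), msg, and optional attr.
    """
    entry = json.loads(line)
    text = "%s %s %s [%s] %s" % (
        entry["t"]["$date"], entry["s"], entry["c"], entry["ctx"], entry["msg"])
    attr = entry.get("attr")
    if attr:
        # append the structured attributes as key:value pairs
        text += " " + " ".join(
            "%s:%s" % (k, json.dumps(attr[k])) for k in sorted(attr))
    return text
```

A full splitter would then read stdin line by line, write each raw line to the JSON log file, and write json_log_to_text(line) to the text log file.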
0
After creating a MongoClient and getting the database with

{code}
var client = new MongoClient(location);
database = client.GetDatabase(databaseName);
{code}

the client automatically tries to connect to the server in the location string. If the server is present, everything works fine, but if the connection cannot be established, an uncaught task exception is raised:

{noformat}
System.AggregateException: A Task's exception(s) were not observed, either by Waiting on the Task or accessing its Exception property; as a result, the unobserved exception was rethrown by the finalizer thread. ---> System.TimeoutException: A timeout occurred after ... selecting a server using CompositeServerSelector{Selectors = ReadPreferenceServerSelector{ReadPreference = {Mode: Primary, TagSets: []}}, LatencyLimitingServerSelector{AllowedLatencyRange = ...}}. Client view of cluster state is {ClusterId: ..., Type: Unknown, State: Disconnected, Servers: [{ServerId: ..., EndPoint: ..., State: Disconnected, Type: Unknown, HeartbeatException: MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. ---> System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it.}]}
{noformat}

and it is caught by our UnobservedTaskExceptionEventHandler (TaskScheduler.UnobservedTaskException). Since we do not want to continue work after an unobserved task exception, the normal behaviour in our system is to shut down and restart the application. Is there a way to avoid the exception? Since the task cannot be reached from outside the driver, we do not see a way to catch it before the task gets out of reach and the exception is handed to the handler by the garbage collector. As a workaround, we modified the handler to not restart on TimeoutExceptions, but we would appreciate a different solution.
1
Build variants that don't use the large_distro_expansions are trying to set the distro for generated tasks to "none" instead of using the default.
0
Someone asked about it on Twitter; it's still used by a lot of hosting companies.
0
Today's mostly-EBS-related outage in Amazon AWS caused the same effect twice in our replica set. The root cause was that the EBS volumes became unavailable; they in turn were mounted via mdadm and LVM. I can see that the kernel probably leaves the mongo server just unaware, or guessing, about what's going on, but what happened was that the master didn't step down (probably because its network was fine and it didn't lag), and all clients kept connecting to the node that didn't work. Would there be any way for the master or the slaves to detect this situation and fail over? Next to that, I am not aware of any option to have a slave manually step up, instead of having the master step down. In the above scenario, the master didn't allow mongo shell access because of the bad data partition, leaving no way to tell the master to step down other than powering it off. Cheers.
0
Location of bug: I originally posted to the mongodb-users Google group with the issue.
1
Hi MongoDB tools support team, we recently upgraded from a single-node MongoDB cluster to a multi-node cluster. As part of the upgrade, we need to run on-demand DB refreshes from one environment to another, and we are using the mongodump and mongorestore utilities with the arguments mentioned below. Some important notes: the commands run inside a Docker container; the variables are passed using environment variables at runtime; the backup/restore covers all of the collections.

{code}
# Backup
mongodump --host "$SOURCE_DB_HOST" --port "$DB_PORT" --username "$SOURCE_DB_UN" --password "$SOURCE_DB_PW" --db "$DB_NAME" --authenticationDatabase "$DB_NAME" --out "$OUTPUT_DIR" --quiet

# Restore
mongorestore --drop "$OUTPUT_DIR/$DB_NAME" --host "$DESTINATION_DB_HOST" --port "$DB_PORT" --username "$DESTINATION_DB_UN" --password "$DESTINATION_DB_PW" --db "$DB_NAME" --authenticationDatabase "$DB_NAME"
{code}

The backup runs successfully, but the restore fails every time on a larger collection, hence why I've marked this the way I have. As a debugging step, we've tried the solutions in the following articles to no avail. What can we try next to resolve this issue? Let me know if you need more information to guide us in the right direction and I'll do my best to provide it for you.
1
We should add one or more test cases to the integration tests, with users which will acquire more than one LDAP-provided role.
0
We should add code coverage to the buildscripts/tests Python unit tests. Note: the command to generate coverage is similar to this:

{noformat}
pytest --cov-report=html --cov=buildscripts/mobile/adb_monitor -s buildscripts/tests/mobile/test_adb_monitor.py
{noformat}
0
We currently track history store statistics at the connection level. It would be useful to be able to access the information per table/data handle.
1
Description: this behavior is only observable in one storage engine.

Desired behavior: if an indexed query runs while documents are updated in a way that moves them, it is possible for those documents to be missing from the results when using that storage engine. We would like this behavior to change so that all matching documents which exist throughout the lifetime of the query are returned, even if they are updated. In particular, we only expect this behavior when the updated documents have values used in the query which aren't changed, so the query matches the document in all updated states.

Example (see code below):
1. Add a large document, followed by small documents, all with letter 'a'.
2. Add an index on letter.
3. Remove the large document.
4. Start a query (with a small batch size) using the index.
5. Update a document to cause it to move into the empty space left by the large removed document.

Technical details: when a query walks an index in that engine, it is possible for documents to move behind the current position, as the document location is stored in the index; as the cursor moves forward, such documents are missed. This behavior cannot be reproduced with the WiredTiger, in-memory, or encrypted storage engines.
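The missed-document scenario can be modeled abstractly. In this toy Python model (an illustration of the reported behavior, not server code), the index is a mapping from (key, record location) entries to documents, the cursor resumes from the last returned position, and a document moved to a freed earlier location is re-inserted behind the cursor and never returned:

```python
def index_scan_with_concurrent_move():
    """All documents share the indexed value 'a'; entries order by
    (key, record location). While the scan is positioned on d2, an
    update moves d3 into freed space at an earlier location, so its
    index entry lands behind the cursor and d3 is never returned."""
    index = {("a", 10): "d1", ("a", 20): "d2", ("a", 30): "d3"}
    seen, last = [], None
    while True:
        # cursor: smallest entry strictly greater than the last returned
        later = [e for e in index if last is None or e > last]
        if not later:
            break
        last = min(later)
        seen.append(index[last])
        if index[last] == "d2":
            # concurrent update: d3 moves to freed location 5; its index
            # entry is deleted and re-inserted at the new position
            del index[("a", 30)]
            index[("a", 5)] = "d3"
    return seen
```

The scan returns only d1 and d2 even though d3 matched the query for its entire lifetime.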
0
Imagine a class to store in a collection:

{code:csharp}
enum MyEnum { ... }

class MyDoc
{
    public string Id { get; set; }
    public MyEnum Prop { get; set; }
    public MyEnum ... { get { return ...; } }   // read-only property
}
{code}

The mapping is declared as follows, to allow serialization of the read-only property:

{code:csharp}
BsonClassMap.RegisterClassMap<MyDoc>(cm =>
{
    cm.AutoMap();
    cm.MapIdProperty(x => x.Id)
      .SetSerializer(new StringSerializer(BsonType.ObjectId))
      .SetIdGenerator(StringObjectIdGenerator.Instance);
    cm.MapMember(x => ...);
});
{code}

These conventions are added:

{code:csharp}
var pack = new ConventionPack();
pack.Add(new CamelCaseElementNameConvention());
pack.Add(new EnumRepresentationConvention(BsonType.String));
ConventionRegistry.Register("custom pack", pack, t => true);
{code}

When this instance is inserted into the collection:

{code:csharp}
var doc = new MyDoc { Prop = ... };
{code}

then the resulting document in MongoDB is:

{code:json}
{ prop: ..., ... }
{code}

The conventions are not respected for the read-only property: its name is not camelCase (the first letter is upper case), and its value is stored as the enum value instead of the enum key.
1
When we are running large map-reduce jobs, we have noticed that an index-building operation seems to happen as one of the phases. When this happens, it seems to prevent the database from being used; that is, the indexing is not happening in the background, and also not during insertions into the temporary M/R collections. See the output snippet from the currentOp command below; this index building seemed to happen during the execution of a MapReduce job, after the map phase ended and before the reduce phase began:

{code}
{opid: ..., active: true, lockType: "write", waitingForLock: false, secs_running: ..., op: "insert", ns: "sociocast.system.indexes", client: ..., desc: "conn", msg: "index: ... btree bottom up ..."}
{code}
0
There are no screenshots of the key activities; screenshots would really help new users relate to the product.
0
Currently this is not possible, since abortTransaction and commitTransaction change the state of a transaction to inactive. We will need to add a third state, "done". We should also explore adding a "starting" and a "none" state, in line with the state diagram in the drivers spec. commitTransaction will now transition the state to "done", which will allow users to retry this command on a committed transaction.
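The proposed transition can be sketched as a tiny state machine. This Python sketch is hypothetical (state names follow the ticket's description; it is not driver code) and shows commitTransaction becoming retryable by landing in a distinct "done" state:

```python
class Transaction:
    """Sketch of the extended transaction states: 'none' before any
    transaction, 'in_progress' after start, 'done' after commit (so
    commitTransaction may be retried), and 'aborted' after abort."""

    def __init__(self):
        self.state = "none"

    def start_transaction(self):
        if self.state == "in_progress":
            raise RuntimeError("transaction already in progress")
        self.state = "in_progress"

    def commit_transaction(self):
        # retrying commit on a committed ('done') transaction is allowed
        # instead of raising, which is the change this ticket proposes
        if self.state not in ("in_progress", "done"):
            raise RuntimeError("no transaction started")
        self.state = "done"

    def abort_transaction(self):
        if self.state != "in_progress":
            raise RuntimeError("no transaction started")
        self.state = "aborted"
```

With the old two-state model, the second commit_transaction call would have failed; here it is a no-op on an already committed transaction.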
0
The initial note added should probably be a warning, and it needs links to the relevant topics used in the explanation.
1
Use MongoPerfTest.java to reproduce:

{code}
java -cp ... MongoPerfTest insert
java -cp ... MongoPerfTest insert
{code}

Inserts are marginally slower, but queries are much slower. It seems to have been introduced by the following commit:

{noformat}
commit ...
Author: Brendan W McAdams
Date:   Fri Aug ...

    LazyBSONObject exhibits certain behavioral breakages -> fully functioning, implemented LazyBSONObject.
    Fixed String decodes; test class now validates all known decodable types for correctness.
    Now works correctly; fixes known issues for performance sanity and memory usage.
    Replaced all arraycopy calls with ByteBuffer usage, which should help overall long-term behavior.
{noformat}
0
Things to make sure we test coming up:
- failing over with partition testing
- fastsync
- converting from master/slave
- convert all pair tests to set
- clock skew
- coming back online
- rollback
0
In our docs for write concerns we describe one that no driver except for Java supports, and the Java driver will get rid of it soon, so we should remove that write concern from the docs to avoid confusion.
0
The MongoDB server fetches the key from the session statistics cursor here. On a big-endian system this key is not in the expected form: the key returned by the WiredTiger stats cursor on a little-endian system differs from the key returned for the same statistic on a big-endian system. Investigate why the session statistics cursor returns the incorrect key.
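The mismatch is the classic byte-order effect, illustrated below with Python's struct module (an illustration of endianness, not the WiredTiger cursor code): the same integer key serializes to reversed byte sequences on little- versus big-endian systems.

```python
import struct

def key_bytes(key, byte_order):
    """Serialize an unsigned 32-bit key.

    byte_order uses struct notation: '<' for little-endian,
    '>' for big-endian.
    """
    return struct.pack(byte_order + "I", key)
```

Code that reinterprets the raw bytes without normalizing the byte order reads a different key value on big-endian hardware.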
0
The mongoc_collection_replace_one function is not available in the dynamic library for Windows, because the MONGOC_EXPORT is missing from the header file. See below:

{code}
MONGOC_EXPORT (bool)
mongoc_collection_update_many (mongoc_collection_t *collection,
                               const bson_t *selector,
                               const bson_t *update,
                               const bson_t *opts,
                               bson_t *reply,
                               bson_error_t *error);

bool
mongoc_collection_replace_one (mongoc_collection_t *collection,
                               const bson_t *selector,
                               const bson_t *replacement,
                               const bson_t *opts,
                               bson_t *reply,
                               bson_error_t *error);

MONGOC_EXPORT (bool)
mongoc_collection_delete (mongoc_collection_t *collection,
                          mongoc_delete_flags_t flags,
                          const bson_t *selector,
                          const mongoc_write_concern_t *write_concern,
                          bson_error_t *error);
{code}
1
many collection operations that take a query can take either an imongoquery or an iqueryable mapreduce only has the option of imongoquery and it should alternatively be able to take an iqueryable for filtering
0
there are a number of ssl options that we do not have listed as supported fields in the driver this results in the driver stating that the option is unknown even if it works for instance secureprotocol
0
Most of the links under the navigation pane for on-prem restore do not work. For example: navigate to "Restore from a stored snapshot", the link goes nowhere; "Restore from a point in the last day" goes to a wrong page; "Restore a single database" and "Seed a new secondary with backup data" work correctly; "Configure backup data delivery" goes nowhere. Similar issues exist for the documentation in the same area.
1
Mongo async Java driver and leak: when I use the code in a standalone Java program I get results back with no issue. The same code in a Spring app never gets a result back, and Tomcat gave me the warning below; the SingleResultCallback apply method never gets the result, and I keep getting the error below: {code}thu mar mdt WARN org.apache.catalina.loader.WebappClassLoaderBase: the web application appears to have started a thread named ... but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread: sun.misc.Unsafe.park(Native Method){code}
1
no type checking occurs currently go ahead and check these from the bson query document
0
create the main page layout and url this page should have a boolean that hides if for now until the feature is done this level will eventually hold the main state for the page that keeps track of whether there are any changes on the page that need to be saved design
0
From this page on setting up LDAP with Ops Manager, I clicked "LDAP authorization" (to this URL) and it returned: "File not found. The URL you requested does not exist or has been removed."
0
Right now the CRUD spec defines the method signature for find as the following: {code:java}find(filter: Document, options: Optional) -> Iterable{code} which implies that filter cannot be optional. Would it be acceptable for drivers to accept this filter as optional, so that users won't have to specify the empty document when they want to match all documents, or is this distinction intentional? As pointed out, there already is precedent for this in several drivers, e.g. Java via overloading, Python via accepting None.
0
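The relaxed signature discussed above can be sketched in plain Python: a missing or None filter is treated as the empty document and matches everything. This is an illustration of the proposal, not the actual CRUD-spec API:

```python
def find(docs, filter=None):
    """Return documents matching filter; a missing/None filter matches all.

    A sketch of the proposed optional-filter signature (hypothetical helper,
    not a real driver method).
    """
    filter = filter or {}
    return [d for d in docs if all(d.get(k) == v for k, v in filter.items())]

docs = [{"x": 1}, {"x": 2}]
assert find(docs) == docs              # no filter: match everything
assert find(docs, {"x": 2}) == [{"x": 2}]
```

This mirrors what Python's driver does by accepting None, and what Java achieves via an overload without the filter parameter.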
the descriptions of custom relation names and polymorphism are confusing could you add examples of when and how these can be used
0
Problem description: start mongosh with the --nodb flag, run {code:java}{code} and hit Ctrl-C. Expected: the execution is interrupted. Actual: the execution continues.
1
I recently upgraded and noticed that there have been significant changes to the wire protocol since then. None of this is documented in the release notes or in the MongoDB wire protocol documentation located at the following URL. Please update the documentation with the new opcode specifications ASAP, as we have applications built on custom drivers that are breaking. Thanks, RH.
1
Problem statement/rationale: the filter in the export dialog is not cleared after the filter in the query bar is reset. Steps to reproduce: go to a collection; specify a filter in the query bar; clear the filter with reset; open the export dialog. Expected results: export filter is empty. Actual results: export filter is set to the one before the reset. Additional notes: video attached, but note that it's not necessary to navigate to the export dialog once while the filter is set, the bug happens anyway. Instead of clearing the filter I set it to {noformat}{noformat} and the empty filter gets properly reset in the export dialog.
0
the string textsearchenabled appears only in a test so there may not really be any work here but im scheduling the necessary investigative work to
0
last time we did that in it included bsondump mongo mongod mongodump mongoexport mongofiles mongoimport mongooplog mongoperf mongorestore mongos mongosniff mongostat and mongotop
1
BSONArrayBuilder has a few rather serious performance glitches around subarrayStart and subobjStart. Specifically, these functions require that you provide a name for the element that you wish to start, as a string. As a result you are forced into a strlen call to populate StringData, a strtol call to convert the string to an integer, and keeping your own external integer counter to convert to a string. This is mostly useless: the common use case here is just to start the next element in the series, and BSONArrayBuilder already keeps an internal counter. I think you could just have: subarrayStart(), autoincrement to the next index; subobjStart(), autoincrement to the next index; subarrayStart(size_t index), ok, you want to do something fancy, use fill etc., still no need for strtol; subobjStart(size_t index), as above; subarrayStart(const StringData strIndex), do the whole complex dance with strtol, fill, etc.; subobjStart(const StringData strIndex), as above. This would let most users never have to worry about this stuff at all since they are starting array elements in order, let more advanced users use a more efficient mechanism that avoids string processing when using offsets by providing integer offsets, and finally provide a way forward for the more general case. Finally, what should the behavior be if you specify a name/index for an element and you are already beyond that many elements in the array? It appears that the provided name is just ignored in this case; perhaps that should be an error.
0
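The auto-incrementing builder proposed above can be sketched quickly in Python (class and method names here are illustrative, not the real C++ API):

```python
class ArrayBuilder:
    """Sketch of the proposed API: sub-array/object starts auto-increment
    an internal index, avoiding strlen/strtol on caller-supplied names."""

    def __init__(self):
        self._index = 0
        self.elements = []

    def sub_obj_start(self, index=None):
        # An explicit integer index is allowed for fancy cases;
        # the default is simply the next slot, using the internal counter.
        if index is None:
            index = self._index
        self._index = index + 1
        obj = {}
        self.elements.append((index, obj))
        return obj

b = ArrayBuilder()
b.sub_obj_start()
b.sub_obj_start()
assert [i for i, _ in b.elements] == [0, 1]  # indices assigned automatically
```

The common case never touches string conversion at all; only the string-named overload would still pay for the strtol dance.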
This URL has outdated instructions for creating an MMS backup user. I'm currently validating, but it looks like the command should be more like this: {code}use admin; db.createUser({user: "backupuser", pwd: "password", roles: ["clusterAdmin", "readAnyDatabase", "userAdminAnyDatabase", {role: "readWrite", db: "local"}, {role: "readWrite", db: "admin"}]}){code}
1
per the new uuid spec should not allow users to decode to swift uuid from binary subtype ie uuiddeprecated
0
testoperations failed on ubuntu dockerhost project evergreencommit diff better handle graphs that have no history may utcevergreen subscription evergreen event testclifetchsource logs history
0
Hi, I want to set a value on a BSON element in the C# driver. In the last version this code worked OK: date.Value = BsonValue.Create(dateValue.ToUniversalTime().ToString("o")); but in the new version Value only has a get, and I can't find how to set the value.
1
indexing was using way too much heap spaceproblem was building index keys the default buffer size for building the keys could be up to too big so instead of of space could use multiple gigs
1
this doesnt seem to impact other c apps on windows using mongoc but when compiling c then the user may be tempted to change the calling convention unless we specifically declare which calling convention our api expects things might get muddy
0
help when i tried to fix this issue by it can not work mongo version cmd mongod shardsvr port storageengine wiredtiger repair noformat i storage wiredtigeropen config e storage wiredtiger no such file or directory e storage wiredtiger filewiredtigerwt cursornext read checksum error e storage wiredtiger filewiredtigerwt cursornext wiredtigerwt encountered an illegal file format or internal value e storage wiredtiger filewiredtigerwt cursornext the process must exit and restart wtpanic wiredtiger library panic i fatal assertion i control begin backtrace mongodbversion gitversion uname sysname linux release version smp fri feb utc machine somap mongod mongodwteventv mongodwterr mongodwtpanic mongodwtbmread mongodwtbtread mongodwtcacheread mongodwtpageinfunc mongodwttreewalk mongodwtbtcurnext mongod mongod mongodwttxnrecover mongodwtconnectionworkers mongodwiredtigeropen mongod mongodmain mongod end backtrace i aborting after fassert failure noformat
1
mongodb crash with the following log message code f journal exception in journalwriterthread causing immediate shutdown boost no such file or directory i invariant failure false i aborting after invariant failure f got signal aborted begin backtrace backtraceprocessinfo mongodbversion gitversion compiledmodules uname sysname linux release version smp thu may utc machine somap mongod mongod mongod end backtrace code
1
i installed mongodb for osx on monday on my workstation for development use its been working fine until just a few minutes ago when it refused new db connections i was running mongod in a separate terminal in nondaemon mode the process wouldnt respond to ctrlc ultimately i had to kill the processconsole output before killing the processnoformatwed mar cmd drop mar command paccmd command mapreduce plugins map function var pluginid thisid thisreviewsforeachfunctionreview emitreviewid pluginid pluginid review reduce function key values if valueslength return values verbose false out reviews pipe mar select signal caught continuinggot pipe mar select signal caught continuinggot pipe pipe pipe mar select signal caught continuingnoformat
1
also point link to mms
1
it seems that the exists operator does not work on keys which have been indexed if a key has an index the exists operator will never return any documents ive attached some tests in ruby which demonstrate this along with their output and an example mongo console session showing that its not just an issue with the ruby driver
1
we would like the intel library to be tested since it is prone to miscompilation which could cause decimal arithmetic in mongo to be incorrect there is an extensive test suite that comes with the library and runs quite quickly a few seconds the intel built in test compiles to a standalone executable its not clear how to best integrate into our test suite we could execute it from a cppunittest or have a separate scons target that is a standalone task on evergreen the self tests are located at and use readtestin as the input they should be built using the same flags as the library libm must be included on some platforms as well for proper functionality ideally these would be integrated into the unittest task however they currently have minor failures on some systems that must be resolved by changing compilation or skipped fails on a edge case not an issue as we do not need this function some inaccuracies with when returning inexact results there are tests which check that the error is expected for this function these are failing we are not sure if the issue is related to miscompilation or is a problem with the platforms pow implementation used underneath besides these systems the selftests have been confirmed to pass on all other evergreen targets
1
having trouble adding nodes to a replica set after converting a single standalone to a replica set i get the following rsaddsecondarymongodbsungevitycom errmsg exception need most members up to reconfigure not ok code ok can connect to the node im attempting to add from the replica nodeubuntuprimary mongo secondarymongodbsungevitycomadminmongodb shell version to secondarymongodbsungevitycomadmin dbadminand i can connect to the replica node from the node i wish to addubuntusecondary mongo primarymongodbsungevitycomadminmongodb shell version to primarymongodbsungevitycomadmin dbadminhere is replica status output from the primary that i initiated the replica set dbruncommand replsetgetstatus set date mystate members id name health state statestr primary uptime optime optimedate self true ok rsconf id version members id host i have attached both the primary replica set node and secondary node i want to add mongodbconf files to this ticketthe documentation is so simple and clear i figure im missing something arcane or so obvious im going to be embarrassed thanks for your assistance
1
This page shows the following command to be used when installing On-Prem MMS RPMs: sudo rpm -ivh ... This command results in an "unknown option" error. The text should be edited to remove the double hyphen and replace it with a single hyphen.
1
this also allows us to pool connections in mongos more effectively
0
with some of the recent issues related to upgradedowngrade it would be beneficial if we stored the wt version information as metadata somewhere
0
When testing changes on PPC we discovered that the read/write lock implementation, and hence the fair lock implementation, do not include barriers in all of their lock/unlock operations. They need to; reasonable application code relies on it.
1
Instead of blocking all activity when rename collection is being executed, I would suggest an improvement: instead of doing a copy operation on rename, just rename the collection data files and any metadata. Another option is to store data files with some kind of id instead of using a collection name for file names, and store collection names in metadata only.
1
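The core of the suggestion is that a file rename is a cheap metadata-only directory update, while a copy touches every byte. A small sketch under the assumption that collection data lives in per-collection files (file names here are made up):

```python
import os
import tempfile

# Sketch: a rename moves the data file in place via an O(1)
# directory-entry update; no data bytes are copied.
d = tempfile.mkdtemp()
src = os.path.join(d, "old_collection.dat")  # hypothetical data file
dst = os.path.join(d, "new_collection.dat")
with open(src, "wb") as f:
    f.write(b"payload")

os.rename(src, dst)  # metadata-only operation, regardless of file size

assert not os.path.exists(src) and os.path.exists(dst)
```

The second option in the report (id-based file names with a name-to-id mapping in metadata) goes further: then even the on-disk rename becomes unnecessary, since only the catalog entry changes.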
this see in text as jira is trying to interpret multiple curly brackets as macros should world it works in the shell but instead one gets a syntax error see attached quote match atmosphericcomposition unknown macro in quote
0
in some cases it would be useful to do combinations of geospatial queries using or or all to do geospatial union or intersection
0
We should consider the number of documents scanned between the different plans being evaluated. At the very least we should use that to resolve ties between plans. This is critical for big-data systems reading data via the Spark connector, which partitions data by _id for reading. Example: for the query {noformat}{date: {$gte: a}, _id: {$gte: b}, email: {$gte: c}}{noformat} all the indexes below will tie, although it is beyond obvious which one should be selected; the difference is of course dramatic when we're talking about TBs of data. {noformat}{noformat}
1
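The tie-breaking rule suggested above can be expressed in a few lines: rank candidate plans by score, and when scores tie, prefer the plan that examined fewer documents. This is a sketch of the proposal, not the actual query planner (field names are illustrative):

```python
def pick_plan(plans):
    """Choose the plan with the best score; break ties by fewer docs examined.
    A sketch of the suggested tie-breaking rule, not the real planner."""
    return min(plans, key=lambda p: (-p["score"], p["docsExamined"]))

plans = [
    {"name": "ix_date", "score": 1.0, "docsExamined": 900000},
    {"name": "ix_id",   "score": 1.0, "docsExamined": 120},
]
assert pick_plan(plans)["name"] == "ix_id"  # tie on score, far fewer docs scanned
```

With identical scores, the plan that touched 120 documents beats the one that touched 900,000, which matches the intuition in the report.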
From the docs here: bq. "New in version: if there is no secondary within seconds of the primary, replSetStepDown will not succeed, to prevent long-running elections." If replSetStepDown is called in a replica set with no valid new primaries (e.g. secondaries in RECOVERING, or hidden nodes) and a hidden node is within seconds of the primary, the primary steps down but a new primary cannot be elected. This results in a replica set consisting of all secondaries. replSetStepDown should throw an error if there is no new primary to be elected, even if another unelectable node is within seconds.
1
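The requested check can be sketched as: step down only when some *electable* secondary is caught up within the freshness window. Field names and the window value below are hypothetical, purely to illustrate the validation the report asks for:

```python
def can_step_down(primary_optime, secondaries, window=10):
    """Sketch: allow stepDown only when an electable secondary is within
    `window` seconds of the primary. Hidden/recovering nodes are not
    electable and must not satisfy the check."""
    return any(
        s["electable"] and primary_optime - s["optime"] <= window
        for s in secondaries
    )

secondaries = [
    {"electable": False, "optime": 100},  # e.g. a hidden node: caught up but unelectable
    {"electable": True, "optime": 50},    # electable but too far behind
]
assert not can_step_down(100, secondaries)  # stepDown should be refused here
```

Under the behavior described in the report, the caught-up hidden node alone satisfies the freshness check, so the primary steps down even though no node can win the subsequent election.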
the problem is topology compatibilityerror field is not updated when the problematic state of the connection is recovered reproduce make the client connection stop mongodb and start after seconds the client connection is already marked the compatibilityerror no more communication with mongodb until the restart the rust application error output codejava error kind serverselectionerror message server at reports wire version but this version of the mongodb rust driver requires at least mongodb code expected behavior the connection should be recoverable
1
i am surprised to see that ordering by date is not working properly no matter what i try on the latest c driver pretty sure this was working before here are some examples where dates are not ordered properly and coming out of sequence code var row mongocollectionasqueryable orderbyp pmydatetimevalue orderbyp potherid wherep pmyid mydatamyid firstordefault var row from p in mongocollectionasqueryable where pmyid mydatamyid orderby pmydatetimevalue ascending orderby potherid ascending select pfirstordefault row from p in mongocollectionasqueryable where pmyid mydatamyid potherid rowotherid orderby pmydatetimevalue ascending orderby potherid ascending select pfirstordefault code none of these work i get dates out of sequence as shown in the attachment i even tried adding an index on the date column in the server made no difference
1
the docs are missing examples for getlasterrormodes modes or links to examplesdocs
1
During a primary step down we call closeConnection on all matching sessions. This kicks those sessions out of sourceMessage, which causes them to explicitly call end, which calls closeConnection. closeConnection calls globalTicketHolder.release. This causes us to double release on primary step down, which increases the effective number of tickets in the system. This behavior is most noticeable when looking at the number of open connections in the main "init and listen" thread, where the out-of param stays steady at the desired number of connections but the available amount increases. This causes us to report negative numbers of open connections.
1
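One way to express the fix for the double-release above is to make each session's release idempotent. A minimal sketch (class and method names are hypothetical, not the server's actual TicketHolder API):

```python
class TicketHolder:
    """Sketch: guard against the double-release described above by making
    each session's release a no-op the second time."""

    def __init__(self, total):
        self.total = total
        self.available = 0
        self._released = set()

    def release(self, session_id):
        if session_id in self._released:
            return  # second release (closeConnection then end) is ignored
        self._released.add(session_id)
        self.available += 1

t = TicketHolder(total=2)
t.release("s1")
t.release("s1")  # the step-down path releases the same session again
assert t.available == 1  # without the guard this would be 2
```

Without the guard, every session closed during step down inflates `available` by one extra ticket, which is exactly how the reported negative open-connection counts arise.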
weve a mongodb replica running on windows we had nodes in the replica with security enabled but had to remove one for maintenance so the issue happened while the replica had only one node we than see the following mongo log lines noformat e storage wiredtiger sessioncommittransaction write error failed to write bytes at offset the specified network name is no longer available i invariant failure scommittransactions null resulted in status unknownerror the specified network name is no longer available at srcmongodbstoragewiredtigerwiredtigerrecoveryunitcpp e storage wiredtiger sessioncommittransaction write error failed to write bytes at offset the specified network name is no longer available i invariant failure scommittransactions null resulted in status unknownerror the specified network name is no longer available at srcmongodbstoragewiredtigerwiredtigerrecoveryunitcpp e storage wiredtiger sessioncommittransaction write error failed to write bytes at offset the specified network name is no longer available i invariant failure scommittransactions null resulted in status unknownerror the specified network name is no longer available at srcmongodbstoragewiredtigerwiredtigerrecoveryunitcpp i control mongodexe indexcollatorextension i control mongodexe indexcollatorextension i control mongodexe indexcollatorextension i control mongodexe indexcollatorextension i control mongodexe indexcollatorextension i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe 
mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe mongoparsenumberfromstringwithbase i control mongodexe indexcollatorextension i control mongodexe indexcollatorextension i control mongodexe indexcollatorextension i control beginthreadex i control endthreadex i control basethreadinitthunk i control i aborting after invariant failure i control server restarted i control hotfix or later update is not installed will zeroout data files i control trying to start windows service mongodb i storage service running w detected unclean shutdown storagefoldermongodlock is not empty w storage recovering data from the last clean checkpoint i storage wiredtigeropen config e storage wiredtiger connection the system cannot find the file specified ajournal e storage wiredtiger filewiredtigerwt sweepserver wiredtigerwt setendoffile error the handle is invalid he file specified ajournal e storage wiredtiger filewiredtigerwt sweepserver final close of filewiredtigerwt failed the handle is invalid he file specified ajournal i assertion the system cannot find the file specified i storage exception in initandlisten the system cannot find the file specified terminating i control dbexit rc i control server restarted i control hotfix or later update is not installed will zeroout data files i control trying to start windows service mongodb i storage service running w detected unclean shutdown storagefoldermongodlock is not empty w storage recovering data from the last clean checkpoint i storage wiredtigeropen config i control mongodb starting dbpathstoragefolder hostmiragemgt i control targetminos windows server i control db version i control git version i control openssl version openssl mar i control build info windows servicepackservice pack i control allocator system i 
control options config net port operationprofiling slowopthresholdms replication oplogsizemb replsetname replicaname security authorization enabled keyfile csecurityfiletxt service true storage dbpath storagefolder engine wiredtiger journal enabled true wiredtiger engineconfig cachesizegb systemlog destination file logappend true path quiet true i repl did not find local replica set configuration document at startup nomatchingdocument did not find replica set configuration document in localsystemreplset noformat after which we identify the replica configuration disappeared we recall rsinitiate which succeeds but than find the entire database data is gone weve only one database other than the default admin with only one collection and we see it contains no data why did this happen what shouldve we done differently
1
if a specific index exists it is probably always better to use than the wildcard index if not we should probably warn or maybe even disallow adding the otherwise redundant index one way to do this would be to model the wildcard index purely as a fallback in the queryplanner so that when it would go to use an index on a field but there is no index it can use it will instead make a wildcard scan asif there was that index on that field i think that would also solve noformat dbfooensureindex createdcollectionautomatically true numindexesbefore numindexesafter ok dbfooensureindexa createdcollectionautomatically false numindexesbefore numindexesafter ok queryplanner plannerversion namespace testfoo indexfilterset false parsedquery a eq queryhash plancachekey winningplan stage fetch inputstage stage ixscan keypattern path a indexname ismultikey false multikeypaths path a isunique false issparse false ispartial false indexversion direction forward indexbounds path a rejectedplans stage fetch inputstage stage ixscan keypattern a indexname ismultikey false multikeypaths a isunique false issparse false ispartial false indexversion direction forward indexbounds a serverinfo host silversurferwsl port version gitversion ok noformat
0
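The "wildcard as a pure fallback" idea above can be sketched as a tiny index-selection function: a specific index on the queried field always wins, and the wildcard index is consulted only when no specific index matches. Names here are illustrative, not the real planner's:

```python
def choose_index(field, specific_indexes, has_wildcard):
    """Sketch of modelling the wildcard index purely as a fallback:
    a specific index on the queried field always beats the $** index."""
    if field in specific_indexes:
        return specific_indexes[field]
    return "$**" if has_wildcard else None

assert choose_index("a", {"a": "a_1"}, has_wildcard=True) == "a_1"   # specific wins
assert choose_index("b", {"a": "a_1"}, has_wildcard=True) == "$**"   # fallback
assert choose_index("a", {}, has_wildcard=False) is None             # collection scan
```

Modelled this way, a redundant single-field index alongside a wildcard index would simply never lose to the wildcard plan, which also makes it easy to warn when such an index is created.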
possibly related to but without crashon a system i start mongo on existing data which was created by mongodb which never caused problemsdatalabsbinmongod dbpath datalabsdb nohttpinterface journal port i do end up in an exception even in db shell dbstats assertion createprivatemap failed look in log for error assertioncode errmsg db assertion failure ok show dbstue may uncaught exception listdatabases failed errmsg exception createprivatemap failed look in log for error code ok exception taken from command line istue may mongodb starting dbpathdatalabsdb may db version pdfile version may git version may build sys info linux smp fri nov est may journal dirdatalabsdbjournaltue may recover no journal files present no recovery neededtue may waiting for connections on port may connection accepted from may error mmap private failed with out of memory bit buildtue may assertion failed look in log for error datalabsbinmongod datalabsbinmongodthreadproxy more informationfree ltm total used free shared buffers cachedmem bufferscache acore file size blocks c seg size kbytes d unlimitedscheduling priority e size blocks f unlimitedpending signals i locked memory kbytes l memory size kbytes m unlimitedopen files n size bytes p message queues bytes q priority r size kbytes s time seconds t unlimitedmax user processes u memory kbytes v unlimitedfile locks x unlimited
1
I checked the source code in the file mr.cpp and I can see that a mapReduce with nonAtomic: true and outType: reduce can generate a global lock. In the comment in the code is written: "this must be global because we may write across different databases". My question is: how can a mapReduce write across different databases if it is required to specify the output database of the mapReduce? This global lock has a lot of influence on the performance of other operations that I am running, making them slow. From mr.cpp: {code:cpp}if (config.outputOptions.outType == Config::REDUCE) { // reduce: apply reduce op on new result and existing one BSONList values; const auto count = collection.count(opCtx, config.tempNamespace, callerHoldsGlobalLock); stdx::lock_guard lk(opCtx->getClient()); curop.setMessage_inlock("m/r: reduce post processing", "M/R Reduce Post Processing Progress", count); unique_ptr cursor = db.query(config.tempNamespace.ns(), BSONObj()); while (cursor->more()) { // this must be global because we may write across different databases Lock lock(opCtx); BSONObj temp = cursor->nextSafe(); BSONObj old; ... } }{code}
1
mongo: terminate called after throwing an instance of 'int', Aborted (core dumped). From the call stack it looks like there is a problem in version.cpp constructing the global variable versionArray: {code}lwp_kill, raise, abort, _init, _start{code}
1
since in the branch entryexit tracing apppears on sasl mutex callbacks that are initalized in mongoconcefun in mongocinitc mongoconcefun is in turn called before any custom log handlers can be configured which means output like the following ends up on stderr before a user of the driver can capture it elsewhere noformat trace mongoc entry trace mongoc exit trace mongoc entry trace mongoc exit noformat looking at other code invoked by the init methods i see that the ssl and scram init functions dont use any tracing nor are there any tests for trace output for these sasl mutex functions should we simply remove the traces or is there a more elaborate solution to be considered
0
for example code mongoreplay play p tmpmr report tmpreport doing playback at speed preprocessing file preprocess complete ops played back in seconds over connections doing playback at speed preprocessing file preprocess complete ops played back in seconds over connections code
0
this is the follow up ticket from discussion this portion of the ticket investigates the functionality of adding an import thread into testformat stress testing currently the design that is discussed from describes adding an import thread in which will perform live import on the current rundir since import needs a table that is foreign to the rundir database the idea is to createdrop tables in another database in a new subdirectory rundirimport removing the necessity of exclusive access of tables there have been numerous implementation approaches that can be done here the thread will call session drop and then import the same uri onto the table this method is least preferable as no operation can be done on the uri between drop and import and drop would be ebusy most of the time the thread inside will create a new uri table and populate some data into the new uri then call drop on that uri and then import back into the database this is method allows other threads to work on original uri in testformat the thread will somehow copy the existing table uri file that it is performing on and then import the newly copied table this method is most experimental as not sure with the behaviours around copying a live table and importing a copied table file note testformat only performs operations on table continually currently an actual use case here can also provide assistance in designing the import for testformat
0
MotorPool.get_socket proactively checks a socket for errors: if it hasn't been used in more than a second, it calls select on the socket's file descriptor to see if the socket has been shut down at the OS level. If this check fails, MotorPool fails to decrement its socket counter, so the closed socket is forever counted against max_pool_size. This is the equivalent of a semaphore leak in a normal multithreaded connection pool.
1
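The leak above is the missing decrement when a dead socket is discarded. A stripped-down pool sketch makes it concrete (this is an illustration of the bug and fix, not Motor's actual implementation):

```python
class Pool:
    """Sketch: when a proactive health check discards a dead socket,
    the pool must also decrement its outstanding-socket counter,
    otherwise the slot is leaked against max_pool_size."""

    def __init__(self, max_pool_size):
        self.max_pool_size = max_pool_size
        self.outstanding = 0

    def get_socket(self, dead=False):
        if self.outstanding >= self.max_pool_size:
            raise RuntimeError("pool exhausted")
        self.outstanding += 1
        if dead:                   # select() says the OS closed the socket
            self.outstanding -= 1  # the decrement the bug report says is missing
            return None
        return object()            # stand-in for a live socket

p = Pool(max_pool_size=1)
assert p.get_socket(dead=True) is None  # dead socket discarded
assert p.get_socket() is not None       # slot was returned, not leaked
```

Without the decrement, the second call would raise "pool exhausted" even though no live socket is checked out, which is the semaphore-leak behavior described.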
link below opens in the same window despite having an icon implying that it will open in a new window
0
The third-party bson package on PyPI is unmaintained and unrelated to PyMongo. Unfortunately, since pip install pymongo also installs a top-level bson package, users see example code like {code}import bson; import pymongo{code} and assume they should install bson and pymongo from PyPI. Depending on the Python packaging configuration of their system this can lead to a variety of surprising and unintelligible errors, sometimes immediately, sometimes only after a PyMongo upgrade. The PyMongo docs should warn against installing bson from PyPI, in installation.rst and perhaps elsewhere.
0
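A rough way to spot the conflict is to check which attributes the installed bson module exposes. PyMongo's bundled bson provides ObjectId and decode_all; the assumption here (labelled as such) is that the unrelated PyPI package does not. This helper takes an attribute set so it stays runnable without either package installed:

```python
def looks_like_pymongo_bson(module_attrs):
    """Heuristic sketch: PyMongo's bundled bson exposes ObjectId and
    decode_all. That the rogue PyPI 'bson' package lacks them is an
    assumption for illustration, not a verified fact."""
    return "ObjectId" in module_attrs and "decode_all" in module_attrs

# e.g. looks_like_pymongo_bson(set(dir(bson))) after `import bson`
assert looks_like_pymongo_bson({"ObjectId", "decode_all", "BSON"})
assert not looks_like_pymongo_bson({"loads", "dumps"})
```

The usual remedy when the wrong package wins is to uninstall both and reinstall pymongo alone, so its bundled bson is the one on the path.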
the failed tests for this task say they failed in the base version even though they didnt
1

Dataset Card for "mongoDB_testset_highest_high"

More Information needed
