sourceName | url | action | body | format | metadata | title | updated
---|---|---|---|---|---|---|---|
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-search-cene-1 | created | # The Atlas Search 'cene: Season 1
Welcome to the first season of a video series dedicated to Atlas Search! This series of videos is designed to guide you through the journey from getting started and understanding the concepts, to advanced techniques.
## What is Atlas Search?
[Atlas Search][1] is an embedded full-text search in MongoDB Atlas that gives you a seamless, scalable experience for building relevance-based app features. Built on Apache Lucene, Atlas Search eliminates the need to run a separate search system alongside your database.
By integrating the database, search engine, and sync mechanism into a single, unified, and fully managed platform, Atlas Search is the fastest and easiest way to build relevance-based search capabilities directly into applications.
> Hip to the *'cene*
>
> The name of this video series comes from a contraction of "Lucene",
> the search engine library leveraged by Atlas. Or it's a short form of "scene".
## Episode Guide
### **[Episode 1: What is Atlas Search & Quick Start][2]**
In this first episode of the Atlas Search 'cene, learn what Atlas Search is, and get a quick start introduction to setting up Atlas Search on your data. Within a few clicks, you can set up a powerful, full-text search index on your Atlas collection data and deliver fast, relevant results to your users' queries.
### **[Episode 2: Configuration / Development Environment][3]**
To get the most out of Atlas Search, you need to configure it for your querying needs. In this episode, learn how Atlas Search maps your documents to its index, and discover the configuration control you have.
### **[Episode 3: Indexing][4]**
While Atlas Search automatically indexes your collection's content, it does demand attention to the indexing configuration details in order to match users' queries appropriately. This episode covers how Atlas Search builds an inverted index, and the options one must consider.
### **[Episode 4: Searching][5]**
Atlas Search provides a rich set of query operators and relevancy controls. This episode covers the common query operators, their relevancy controls, and ends with coverage of the must-have Query Analytics feature.
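As a taste of the concepts the episode walks through, here is a minimal `mongosh` sketch of a `$search` stage; the collection, index, and field names are illustrative assumptions rather than anything from the episode:

```javascript
// Hypothetical example: search a "movies" collection on its "plot" field
db.movies.aggregate([
  {
    $search: {
      index: "default",                    // the Atlas Search index to use
      text: {
        query: "space adventure",          // the user's search phrase
        path: "plot",                      // the field(s) to search
        score: { boost: { value: 2 } }     // a simple relevancy control
      }
    }
  },
  { $limit: 5 },
  { $project: { title: 1, score: { $meta: "searchScore" } } }
])
```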
### **[Episode 5: Faceting][6]**
Facets produce additional context for search results, providing a list of subsets and counts within. This episode details the faceting options available in Atlas Search.
### **[Episode 6: Advanced Search Topics][7]**
In this episode, we go through some more advanced search topics including embedded documents, fuzzy search, autocomplete, highlighting, and geospatial.
### **[Episode 7: Query Analytics][8]**
Are your users finding what they are looking for? Are your top queries returning the best results? This episode covers the important topic of query analytics. If you're using search, you need this!
### **[Episode 8: Tips & Tricks][9]**
In this final episode of The Atlas Search 'cene Season 1, learn useful techniques to introspect query details and see how the relevancy score is computed. Also shown is how to get facets and search results back in one API call.
[1]: https://www.mongodb.com/atlas/search
[2]: https://www.mongodb.com/developer/videos/what-is-atlas-search-quick-start/
[3]: https://www.mongodb.com/developer/videos/atlas-search-configuration-development-environment/
[4]: https://www.mongodb.com/developer/videos/mastering-indexing-for-perfect-query-matches/
[5]: https://www.mongodb.com/developer/videos/query-operators-relevancy-controls-for-precision-searches/
[6]: https://www.mongodb.com/developer/videos/faceting-mastery-unlock-the-full-potential-of-atlas-search-s-contextual-insights/
[7]: https://www.mongodb.com/developer/videos/atlas-search-mastery-elevate-your-search-with-fuzzy-geospatial-highlighting-hacks/
[8]: https://www.mongodb.com/developer/videos/atlas-search-query-analytics/
[9]: https://www.mongodb.com/developer/videos/tips-and-tricks-the-atlas-search-cene-season-1-episode-8/ | md | {
"tags": [
"Atlas"
],
"pageDescription": "The Atlas Search 'cene: Season 1",
"contentType": "Video"
} | The Atlas Search 'cene: Season 1 | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/atlas-open-ai-review-summary | created | # Using MongoDB Atlas Triggers to Summarize Airbnb Reviews with OpenAI
In the realm of property rentals, reviews play a pivotal role. MongoDB Atlas triggers, combined with the power of OpenAI's models, can help summarize and analyze these reviews in real-time. In this article, we'll explore how to utilize MongoDB Atlas triggers to process Airbnb reviews, yielding concise summaries and relevant tags.
This article is an additional feature added to the hotels and apartment sentiment search application developed in Leveraging OpenAI and MongoDB Atlas for Improved Search Functionality.
## Introduction
MongoDB Atlas triggers allow users to define functions that execute in real-time in response to database operations. These triggers can be harnessed to enhance data processing and analysis capabilities. In this example, we aim to generate summarized reviews and tags for a sample Airbnb dataset.
Our original data model has each review embedded in the listing document as an array:
```javascript
"reviews": { "_id": "2663437",
"date": { "$date": "2012-10-20T04:00:00.000Z" }, \
"listing_id": "664017",
"reviewer_id": "633940",
"reviewer_name": "Patricia",
"comments": "I booked the room at Marinete's apartment for my husband. He was staying in Rio for a week because he was studying Portuguese. He loved the place. Marinete was very helpfull, the room was nice and clean. \r\nThe location is perfect. He loved the time there. \r\n\r\n" },
{ "_id": "2741592",
"date": { "$date": "2012-10-28T04:00:00.000Z" },
"listing_id": "664017",
"reviewer_id": "3932440",
"reviewer_name": "Carolina",
"comments": "Es una muy buena anfitriona, preocupada de que te encuentres cómoda y te sugiere que actividades puedes realizar. Disfruté mucho la estancia durante esos días, el sector es central y seguro." }, ... ]
```
## Prerequisites
- App Services application (e.g., application-0). Ensure it is linked to the cluster with the Airbnb data.
- OpenAI account with API access.
### Secrets and Values
1. Navigate to your App Services application.
2. Under "Values," create a secret named `openAIKey` with your OPEN AI API key.
3. Create a linked value named `OpenAIKey` and link it to the secret.
## The trigger code
The provided trigger listens for changes in the sample_airbnb.listingsAndReviews collection. Upon detecting a new review, it samples up to 50 reviews, sends them to OpenAI's API for summarization, and updates the original document with the summarized content and tags.
Please note that the trigger reacts to updates that were marked with a `"process" : false` flag. This field indicates that no summary has been created for this batch of reviews yet.
Example of a review update operation that will fire this trigger:
```javascript
listingsAndReviews.updateOne({"_id" : "1129303"}, { $push : { "reviews" : new_review } , $set : { "process" : false }});
```
### Sample reviews function
To prevent overloading the API with a large number of reviews, a function, `sampleReviews`, is defined to randomly sample up to 50 reviews:
```javascript
function sampleReviews(reviews) {
if (reviews.length <= 50) {
return reviews;
}
const sampledReviews = [];
const seenIndices = new Set();
while (sampledReviews.length < 50) {
const randomIndex = Math.floor(Math.random() * reviews.length);
if (!seenIndices.has(randomIndex)) {
seenIndices.add(randomIndex);
sampledReviews.push(reviews[randomIndex]);
}
}
return sampledReviews;
}
```
### Main trigger logic
The main trigger logic is invoked when an update change event is detected with a `"process" : false` field.
```javascript
exports = async function(changeEvent) {
// A Database Trigger will always call a function with a changeEvent.
// Documentation on ChangeEvents: https://www.mongodb.com/docs/manual/reference/change-events
// This sample function will listen for events and replicate them to a collection in a different Database
function sampleReviews(reviews) {
// Logic above...
if (reviews.length <= 50) {
return reviews;
}
const sampledReviews = [];
const seenIndices = new Set();
while (sampledReviews.length < 50) {
const randomIndex = Math.floor(Math.random() * reviews.length);
if (!seenIndices.has(randomIndex)) {
seenIndices.add(randomIndex);
sampledReviews.push(reviews[randomIndex]);
}
}
return sampledReviews;
}
// Access the _id of the changed document:
const docId = changeEvent.documentKey._id;
const doc= changeEvent.fullDocument;
// Get the MongoDB service you want to use (see "Linked Data Sources" tab)
const serviceName = "mongodb-atlas";
const databaseName = "sample_airbnb";
const collection = context.services.get(serviceName).db(databaseName).collection(changeEvent.ns.coll);
// This function is the endpoint's request handler.
// URL to make the request to the OpenAI API.
const url = 'https://api.openai.com/v1/chat/completions';
// Fetch the OpenAI key stored in the context values.
const openai_key = context.values.get("openAIKey");
const reviews = doc.reviews.map((review) => {return {"comments" : review.comments}});
const sampledReviews= sampleReviews(reviews);
// Prepare the request string for the OpenAI API.
const reqString = `Summarize the reviews provided here: ${JSON.stringify(sampledReviews)} | instructions example:\n\n [{"comment" : "Very Good bed"} ,{"comment" : "Very bad smell"} ] \nOutput: {"overall_review": "Overall good beds and bad smell" , "neg_tags" : ["bad smell"], pos_tags : ["good bed"]}. No explanation. No 'Output:' string in response. Valid JSON. `;
console.log(`reqString: ${reqString}`);
// Call OpenAI API to get the response.
let resp = await context.http.post({
url: url,
headers: {
'Authorization': [`Bearer ${openai_key}`],
'Content-Type': ['application/json']
},
body: JSON.stringify({
model: "gpt-4",
temperature: 0,
messages: [
{
"role": "system",
"content": "Output json generator follow only provided example on the current reviews"
},
{
"role": "user",
"content": reqString
}
]
})
});
// Parse the JSON response
let responseData = JSON.parse(resp.body.text());
// Check the response status.
if(resp.statusCode === 200) {
console.log("Successfully received code.");
console.log(JSON.stringify(responseData));
const code = responseData.choices[0].message.content;
// Get the required data to be added into the document
const updateDoc = JSON.parse(code)
// Set a flag that this document does not need further re-processing
updateDoc.process = true
await collection.updateOne({_id : docId}, {$set : updateDoc});
} else {
console.error("Failed to generate filter JSON.");
console.log(JSON.stringify(responseData));
return {};
}
};
```
Key steps include:
- API request preparation: Reviews from the changed document are sampled and prepared into a request string for the OpenAI API. The format and instructions are tailored to ensure the API returns a valid JSON with summarized content and tags.
- API interaction: Using the context.http.post method, the trigger sends the prepared data to the OpenAI API.
- Updating the original document: Upon a successful response from the API, the trigger updates the original document with the summarized content, negative tags (neg_tags), positive tags (pos_tags), and a process flag set to true.
Here is a sample result that is added to the processed listing document:
```
"process": true,
"overall_review": "Overall, guests had a positive experience at Marinete's apartment. They praised the location, cleanliness, and hospitality. However, some guests mentioned issues with the dog and language barrier.",
"neg_tags": [ "language barrier", "dog issues" ],
"pos_tags": [ "great location", "cleanliness", "hospitality" ]
```
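To spot-check the summaries from `mongosh`, a query along these lines works; the projected fields are the ones this trigger adds, plus the listing `name` field from the sample_airbnb dataset:

```javascript
// List a few listings that have already been summarized by the trigger
db.listingsAndReviews.find(
  { process: true },
  { name: 1, overall_review: 1, pos_tags: 1, neg_tags: 1 }
).limit(5)
```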
Once the data is added to our documents, providing this information in our Vue application is as simple as adding this HTML template:
```html
<div>Overall Review (AI based): {{ listing.overall_review }}</div>
<span v-for="tag in listing.neg_tags">{{ tag }}</span>
<span v-for="tag in listing.pos_tags">{{ tag }}</span>
```
## Conclusion
By integrating MongoDB Atlas triggers with OpenAI's powerful models, we can efficiently process and analyze large volumes of reviews in real-time. This setup not only provides concise summaries of reviews but also categorizes them into positive and negative tags, offering valuable insights to property hosts and potential renters.
Questions? Comments? Let’s continue the conversation over in our community forums. | md | {
"tags": [
"MongoDB",
"JavaScript",
"AI",
"Node.js"
],
"pageDescription": "Uncover the synergy of MongoDB Atlas triggers and OpenAI models in real-time analysis and summarization of Airbnb reviews. ",
"contentType": "Tutorial"
} | Using MongoDB Atlas Triggers to Summarize Airbnb Reviews with OpenAI | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/getting-started-with-mongodb-and-codewhisperer | created | # Getting Started with MongoDB and AWS Codewhisperer
**Introduction**
----------------
Amazon CodeWhisperer is trained on billions of lines of code and can generate code suggestions — ranging from snippets to full functions — in real-time, based on your comments and existing code. AI code assistants have revolutionized developers’ coding experience, but what sets Amazon CodeWhisperer apart is that MongoDB has collaborated with the AWS Data Science team, enhancing its capabilities!
At MongoDB, we are always looking to enhance the developer experience, and we've fine-tuned the CodeWhisperer Foundational Models to deliver top-notch code suggestions — trained on, and tailored for, MongoDB. This gives developers of all levels the best possible experience when using CodeWhisperer for MongoDB functions.
This tutorial will help you get CodeWhisperer up and running in VS Code, but CodeWhisperer also works with a number of other IDEs, including IntelliJ IDEA, AWS Cloud9, AWS Lambda console, JupyterLab, and Amazon SageMaker Studio. On the [Amazon CodeWhisperer site][1], you can find tutorials that demonstrate how to set up CodeWhisperer on different IDEs, as well as other documentation.
*Note:* CodeWhisperer allows users to start without an AWS account because creating an AWS account usually requires a credit card. Currently, CodeWhisperer is free for individual users, so it’s super easy to get up and running.
**Installing CodeWhisperer for VS Code**
CodeWhisperer doesn’t have its own VS Code extension. It is part of a larger extension for AWS services called AWS Toolkit. AWS Toolkit is available in the VS Code extensions store.
1. Open VS Code and navigate to the extensions store (bottom icon on the left panel).
2. Search for CodeWhisperer and it will show up as part of the AWS Toolkit.
![Searching for the AWS ToolKit Extension][2]
3. Once found, hit Install. Next, you’ll see the full AWS Toolkit listing.
![The AWS Toolkit full listing][3]
4. Once installed, you’ll need to authorize CodeWhisperer via a Builder
ID to connect to your AWS developer account (or set up a new account
if you don’t already have one).
![Authorise CodeWhisperer][4]
**Using CodeWhisperer**
-----------------------
**Navigating code suggestions**
![CodeWhisperer Running][5]
With CodeWhisperer installed and running, as you enter your prompt or code, CodeWhisperer will offer inline code suggestions. If you want to keep the suggestion, use **TAB** to accept it. CodeWhisperer may provide multiple suggestions to choose from depending on your use case. To navigate between suggestions, use the left and right arrow keys to view them, and **TAB** to accept.
If you don’t like the suggestions you see, keep typing (or hit **ESC**). The suggestions will disappear, and CodeWhisperer will generate new ones at a later point based on the additional context.
**Requesting suggestions manually**
You can request suggestions at any time. Use **Option-C** on Mac or **ALT-C** on Windows. After you receive suggestions, use **TAB** to accept and arrow keys to navigate.
**Getting the best recommendations**
For best results, follow these practices.
- Give CodeWhisperer something to work with. The more code your file contains, the more context CodeWhisperer has for generating recommendations.
- Write descriptive comments in natural language — for example (see the sketch after this list):
```
// Take a JSON document as a String and store it in MongoDB returning the _id
```
Or
```
//Insert a document in a collection with a given _id and a discountLevel
```
- Specify the libraries you prefer at the start of your file by using import statements.
```
// This Java class works with MongoDB sync driver.
// This class implements Connection to MongoDB and CRUD methods.
```
- Use descriptive names for variables and functions
- Break down complex tasks into simpler tasks
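To make the first practice concrete, here is a hand-written sketch (not actual CodeWhisperer output) of the kind of code a comment like "Insert a document in a collection with a given _id and a discountLevel" could lead to in a JavaScript file; the `client`, database, and collection names are assumptions, and the example uses the MongoDB Node.js driver:

```javascript
// Insert a document in a collection with a given _id and a discountLevel
// Assumes `client` is an already-connected MongoClient from the "mongodb" package
async function insertCustomer(client, id, discountLevel) {
  const collection = client.db("store").collection("customers");
  return collection.insertOne({ _id: id, discountLevel: discountLevel });
}
```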
**Provide feedback**
----------------
Like all generative AI tools, CodeWhisperer is forever learning and forever expanding its foundational knowledge base, and MongoDB is looking for feedback. If you are using Amazon CodeWhisperer in your MongoDB development, we’d love to hear from you.
We’ve created a special “codewhisperer” tag on our [Developer Forums][6], and if you tag any post with this, it will be visible to our CodeWhisperer project team and we will get right on it to help and provide feedback. If you want to see what others are doing with CodeWhisperer on our forums, the [tag search link][7] will jump you straight into all the action.
We can’t wait to see your thoughts and impressions of MongoDB and Amazon CodeWhisperer together.
[1]: https://aws.amazon.com/codewhisperer/resources/#Getting_started
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1bfd28a846063ae9/65481ef6e965d6040a3dcc37/CW_1.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltde40d5ae1b9dd8dd/65481ef615630d040a4b2588/CW_2.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt636bb8d307bebcee/65481ef6a6e009040a740b86/CW_3.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf1e0ebeea2089e6a/65481ef6077aca040a5349da/CW_4.png
[6]: https://www.mongodb.com/community/forums/
[7]: https://www.mongodb.com/community/forums/tag/codewhisperer | md | {
"tags": [
"MongoDB",
"JavaScript",
"Java",
"Python",
"AWS",
"AI"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Getting Started with MongoDB and AWS Codewhisperer | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/code-examples/java/rest-apis-java-spring-boot | created | # REST APIs with Java, Spring Boot, and MongoDB
## GitHub repository
If you want to write REST APIs in Java at the speed of light, I have what you need. I wrote this template to get you started. I have tried to solve as many problems as possible in it.
So if you want to start writing REST APIs in Java, clone this project, and you will be up to speed in no time.
```shell
git clone https://github.com/mongodb-developer/java-spring-boot-mongodb-starter
```
That’s all folks! All you need is in this repository. Below I will explain a few of the features and details about this template, but feel free to skip what is not necessary for your understanding.
## README
All the extra information and commands you need to get this project going are in the `README.md` file which you can read in GitHub.
## Spring and MongoDB configuration
The configuration can be found in the MongoDBConfiguration.java class.
```java
package com.mongodb.starter;
import ...
import static org.bson.codecs.configuration.CodecRegistries.fromProviders;
import static org.bson.codecs.configuration.CodecRegistries.fromRegistries;
@Configuration
public class MongoDBConfiguration {
@Value("${spring.data.mongodb.uri}")
private String connectionString;
@Bean
public MongoClient mongoClient() {
CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());
CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);
return MongoClients.create(MongoClientSettings.builder()
.applyConnectionString(new ConnectionString(connectionString))
.codecRegistry(codecRegistry)
.build());
}
}
```
The important section here is the MongoDB configuration, of course. Firstly, you will notice the connection string is automatically retrieved from the `application.properties` file, and secondly, you will notice the configuration of the `MongoClient` bean.
A `Codec` is the interface that abstracts the processes of decoding a BSON value into a Java object and encoding a Java object into a BSON value.
A `CodecRegistry` contains a set of `Codec` instances that are accessed according to the Java classes that they encode from and decode to.
The MongoDB driver is capable of encoding and decoding BSON for us, so we do not have to take care of this anymore. All the configuration we need for this project to run is here and nowhere else.
You can read the driver documentation if you want to know more about this topic.
## Multi-document ACID transactions
Just for the sake of it, I also used multi-document ACID transactions in a few methods where it could potentially make sense to use ACID transactions. You can check all the code in the `MongoDBPersonRepository` class.
Here is an example:
```java
private static final TransactionOptions txnOptions = TransactionOptions.builder()
.readPreference(ReadPreference.primary())
.readConcern(ReadConcern.MAJORITY)
.writeConcern(WriteConcern.MAJORITY)
.build();
@Override
public List<PersonEntity> saveAll(List<PersonEntity> personEntities) {
try (ClientSession clientSession = client.startSession()) {
return clientSession.withTransaction(() -> {
personEntities.forEach(p -> p.setId(new ObjectId()));
personCollection.insertMany(clientSession, personEntities);
return personEntities;
}, txnOptions);
}
}
```
As you can see, I’m using an auto-closeable try-with-resources which will automatically close the client session at the end. This helps me to keep the code clean and simple.
Some of you may argue that it is actually too simple because transactions (and write operations, in general) can throw exceptions, and I’m not handling any of them here… You are absolutely right and this is an excellent transition to the next part of this article.
## Exception management
Transactions in MongoDB can raise exceptions for various reasons, and I don’t want to go into the details too much here, but since MongoDB 3.6, any write operation that fails can be automatically retried once. And the transactions are no different. See the documentation for retryWrites.
If retryable writes are disabled or if a write operation fails twice, then MongoDB will send a MongoException (extends RuntimeException) which should be handled properly.
Luckily, Spring provides the annotation `ExceptionHandler` to help us do that. See the code in my controller `PersonController`. Of course, you will need to adapt and enhance this in your real project, but you have the main idea here.
```java
@ExceptionHandler(RuntimeException.class)
public final ResponseEntity handleAllExceptions(RuntimeException e) {
logger.error("Internal server error.", e);
return new ResponseEntity<>(e, HttpStatus.INTERNAL_SERVER_ERROR);
}
```
## Aggregation pipeline
MongoDB's aggregation pipeline is a very powerful and efficient way to run your complex queries as close as possible to your data for maximum efficiency. Using it can ease the computational load on your application.
Just to give you a small example, I implemented the `/api/persons/averageAge` route to show you how I can retrieve the average age of the persons in my collection.
```java
@Override
public double getAverageAge() {
List pipeline = List.of(group(new BsonNull(), avg("averageAge", "$age")), project(excludeId()));
return personCollection.aggregate(pipeline, AverageAgeDTO.class).first().averageAge();
}
```
Also, you can note here that I’m using the `personCollection` which was initially instantiated like this:
```java
private MongoCollection<PersonEntity> personCollection;
@PostConstruct
void init() {
personCollection = client.getDatabase("test").getCollection("persons", PersonEntity.class);
}
```
Normally, my `personCollection` should encode and decode `PersonEntity` objects only, but you can overwrite the type of object your collection is manipulating to return something different — in my case, `AverageAgeDTO.class`, as I’m not expecting a `PersonEntity` class here but a POJO that contains only the average age of my "persons".
## Swagger
Swagger is the tool you need to document your REST APIs. You have nothing to do — the configuration is completely automated. Just run the server and navigate to http://localhost:8080/swagger-ui.html. The interface will be waiting for you.
See the Swagger documentation for more information.
## Nyan Cat
Yes, there is a Nyan Cat section in this post. Nyan Cat is love, and you need some Nyan Cat in your projects. :-)
Did you know that you can replace the Spring Boot logo in the logs with pretty much anything you want?
and the "Epic" font for each project name. It's easier to identify which log file I am currently reading.
## Conclusion
I hope you like my template, and I hope I will help you be more productive with MongoDB and the Java stack.
If you see something which can be improved, please feel free to open a GitHub issue or directly submit a pull request. They are very welcome. :-)
If you are new to MongoDB Atlas, give our Quick Start post a try to get up to speed with MongoDB Atlas in no time.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt876f3404c57aa244/65388189377588ba166497b0/swaggerui.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf2f06ba5af19464d/65388188d31953242b0dbc6f/nyancat.png | md | {
"tags": [
"Java",
"Spring"
],
"pageDescription": "Take a shortcut to REST APIs with this Java/Spring Boot and MongoDB example application that embeds all you'll need to get going.",
"contentType": "Code Example"
} | REST APIs with Java, Spring Boot, and MongoDB | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/swift/halting-development-on-swift-driver | created | # Halting Development on MongoDB Swift Driver
MongoDB is halting development on our server-side Swift driver. We remain excited about Swift and will continue our development of our mobile Swift SDK.
We released our server-side Swift driver in 2020 as an open source project and are incredibly proud of the work that our engineering team has contributed to the Swift community over the last four years. Unfortunately, today we are announcing our decision to stop development of the MongoDB server-side Swift driver. We understand that this news may come as a disappointment to the community of current users.
There are still ways to use MongoDB with Swift:
- Use the MongoDB driver with server-side Swift applications as is
- Use the MongoDB C Driver directly in your server-side Swift projects
- Use another community Swift driver, such as MongoKitten
Community members and developers are welcome to fork our existing driver and add features as you see fit - the Swift driver is under the Apache 2.0 license and source code is available on GitHub. For those developing client/mobile applications, MongoDB offers the Realm Swift SDK with real time sync to MongoDB Atlas.
We would like to take this opportunity to express our heartfelt appreciation for the enthusiastic support that the Swift community has shown for MongoDB. Your loyalty and feedback have been invaluable to us throughout our journey, and we hope to resume development on the server-side Swift driver in the future. | md | {
"tags": [
"Swift",
"MongoDB"
],
"pageDescription": "The latest news regarding the MongoDB driver for Swift.",
"contentType": "News & Announcements"
} | Halting Development on MongoDB Swift Driver | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/online-archive-query-performance | created | # Optimizing your Online Archive for Query Performance
## Contributed By
This article was contributed by Prem Krishna, a Senior Product Manager for Analytics at MongoDB.
## Introduction
With Atlas Online Archive, you can tier off cold data or infrequently accessed data from your MongoDB cluster to a MongoDB-managed cloud object storage - Amazon S3 or Microsoft Azure Blob Storage. This can lower the cost via archival cloud storage for old data, while active data that is more often accessed and queried remains in the primary database.
> FYI: If using Online Archive and also using MongoDB's Atlas Data Federation, users can also see a unified view of production data, and *archived data* side by side through a read-only, federated database instance.
In this blog, we are going to be discussing how to improve the performance of your online archive by choosing the correct partitioning fields.
## Why is partitioning so critical when configuring Online Archive?
Once you have started archiving data, you cannot edit any partition fields as the structure of how the data will be stored in the object storage becomes fixed after the archival job begins. Therefore, you'll want to think critically about your partitioning strategy beforehand.
Also, archival query performance is determined by how the data is structured in object storage, so it is important to not only choose the correct partitions but also choose the correct order of partitions.
## Do this...
**Choose the most frequently queried fields.** You can choose up to two partition fields for a custom query-based archive or up to three fields for a date-based online archive. Ensure that the most frequently queried fields for the archive are chosen. Note that we are talking about how you are going to query the archive and not the custom query criteria provided at the time of archiving!
**Check the order of partitioned fields.** While selecting the partitions is important, it is equally critical to choose the correct *order* of partitions. The most frequently queried field should be the first chosen partition field, followed by the second and third. That's simple enough.
## Not this
**Don't add irrelevant fields as partitions.** If you are not querying a specific field from the archive, then that field should not be added as a partition field. Remember that you can add a maximum of 2 or 3 partition fields, so it is important to choose these fields carefully based on how you query your archive.
**Don't ignore the “Move down” option.** The “Move down” option is applicable to an archive with a date-based rule. For example, if you want to query on Field_A the most, then Field_B, and then on exampleDate, ensure you are selecting the “Move Down” option next to the “Archive date field” on top.
**Don't choose high cardinality partition(s).** Choosing a high cardinality field such as `_id` will create a large number of partitions in the object storage. Any aggregate-based queries on the archive will then suffer from increased latency. The same applies if multiple partitions are selected such that the combined fields, when grouped together, are high cardinality. For example, if you select Field_A, Field_B, and Field_C as your partitions and a combination of these fields creates unique values, it will result in high cardinality partitions.
> Please note that this is **not applicable** for new Online Archives.
## Additional guidance
In addition to the partitioning guidelines, there are a couple of additional considerations that are relevant for the optimal configuration of your data archival strategy.
**Add data expiration rules and scheduled windows**
These fields are optional but are relevant for your use cases and can improve your archival speeds and for how long your data needs to be present in the archive.
**Index required fields**
Before archiving the data, ensure that your data is indexed for optimal performance. You can run an explain plan on the archival query to verify whether the archival rule will use an index.
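As a rough sketch of that check in `mongosh` (the collection name and the 30-day cutoff are assumptions; `exampleDate` reuses the field name from the earlier example):

```javascript
// Index the date field the archival rule filters on
db.myCollection.createIndex({ exampleDate: 1 })

// Re-run the archival query with explain() and confirm the winning plan uses an IXSCAN
db.myCollection.find({
  exampleDate: { $lt: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000) }
}).explain("executionStats")
```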
## Conclusion
It is important to follow these do’s and don’ts before hitting “Begin Archiving” so that the partitions are correctly configured, thereby optimizing the performance of your online archives.
For more information on configuration or Online Archive, please see the documentation for setting up an Online Archive and our blog post on how to create an Online Archive.
Dig deeper into this topic with this tutorial.
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
| md | {
"tags": [
"Atlas",
"AWS"
],
"pageDescription": "Get all the do's and don'ts around optimization of your data archival strategy.",
"contentType": "Article"
} | Optimizing your Online Archive for Query Performance | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/using-confluent-cloud-atlas-stream-processing | created | # Using the Confluent Cloud with Atlas Stream Processing
> Atlas Stream Processing is now available. Learn more about it here.
Apache Kafka is a massively popular streaming platform today. It is available in the open-source community and also as software (e.g., Confluent Platform) for self-managing. Plus, you can get a hosted Kafka (or Kafka-compatible) service from a number of providers, including AWS Managed Streaming for Apache Kafka (MSK), RedPanda Cloud, and Confluent Cloud, to name a few.
In this tutorial, we will configure network connectivity between MongoDB Atlas Stream Processing instances and a topic within the Confluent Cloud. By the end of this tutorial, you will be able to process stream events from Confluent Cloud topics and emit the results back into a Confluent Cloud topic.
Confluent Cloud dedicated clusters support connectivity through secure public internet endpoints with their Basic and Standard clusters. Private network connectivity options such as Private Link connections, VPC/VNet peering, and AWS Transit Gateway are available in the Enterprise and Dedicated cluster tiers.
**Note:** At the time of this writing, Atlas Stream Processing only supports internet-facing Basic and Standard Confluent Cloud clusters. This post will be updated to accommodate Enterprise and Dedicated clusters when support is provided for private networks.
The easiest way to get started with connectivity between Confluent Cloud and MongoDB Atlas is by using public internet endpoints. Public internet connectivity is the only option for Basic and Standard Confluent clusters. Rest assured that Confluent Cloud clusters with internet endpoints are protected by a proxy layer that prevents types of DoS, DDoS, SYN flooding, and other network-level attacks. We will also use authentication API keys with the SASL_SSL authentication method for secure credential exchange.
In this tutorial, we will set up and configure Confluent Cloud and MongoDB Atlas for network connectivity and then work through a simple example that uses a sample data generator to stream data between MongoDB Atlas and Confluent Cloud.
## Tutorial prerequisites
This is what you’ll need to follow along:
- An Atlas project (free or paid tier)
- An Atlas database user with atlasAdmin permission
- For the purposes of this tutorial, we’ll have the user “tutorialuser.”
- MongoDB shell (Mongosh) version 2.0+
- Confluent Cloud cluster (any configuration)
## Configure Confluent Cloud
For this tutorial, you need a Confluent Cloud cluster created with a topic, “solardata,” and an API access key created. If you already have this, you may skip to Step 2.
To create a Confluent Cloud cluster, log into the Confluent Cloud portal, select or create an environment for your cluster, and then click the “Add Cluster” button.
In this tutorial, we can use a **Basic** cluster type.
In the Atlas UI, click on “Stream Processing” from the Services menu. Next, click on the “Create Instance” button. Provide a name, cloud provider, and region. Note: For a lower network cost, choose the cloud provider and region that matches your Confluent Cloud cluster. In this tutorial, we will use AWS us-east-1 for both Confluent Cloud and MongoDB Atlas.
before continuing this tutorial.
Connection information can be found by clicking on the “Connect” button on your SPI. The connect dialog is similar to the connect dialog when connecting to an Atlas cluster. To connect to the SPI, you will need to use the **mongosh** command line tool.
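Once connected with **mongosh**, a stream processor that reads from the Confluent Cloud topic and writes results back can be sketched roughly as follows; the connection name `confluentCloud` and the output topic are assumptions for illustration, so substitute the names from your own Connection Registry:

```javascript
// $source reads from the "solardata" topic through the Kafka connection in the Connection Registry
let source = { $source: { connectionName: "confluentCloud", topic: "solardata" } };

// $emit writes the (possibly transformed) events back out to another Confluent Cloud topic
let emit = { $emit: { connectionName: "confluentCloud", topic: "solardata_processed" } };

// Try the pipeline interactively first...
sp.process([source, emit]);

// ...then create and start a named, continuously running stream processor
sp.createStreamProcessor("solarProcessor", [source, emit]);
sp.solarProcessor.start();
```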
> Log in today to get started. Atlas Stream Processing is now available to all developers in Atlas. Give it a try today!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcfb9c8a1f971ace1/652994177aecdf27ae595bf9/image24.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt63a22c62ae627895/652994381e33730b6478f0d1/image5.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte3f1138a6294748f/65299459382be57ed901d434/image21.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3ccf2827c99f1c83/6529951a56a56b7388898ede/image19.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltaea830d5730e5f51/652995402e91e47b2b547e12/image20.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9c425a65bb77f282/652995c0451768c2b6719c5f/image13.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2748832416fdcf8e/652996cd24aaaa5cb2e56799/image15.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9010c25a76edb010/652996f401c1899afe4a465b/image7.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt27b3762b12b6b871/652997508adde5d1c8f78a54/image3.png | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to configure network connectivity between Confluent Cloud and MongoDB Atlas Stream Processing.",
"contentType": "Tutorial"
} | Using the Confluent Cloud with Atlas Stream Processing | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/charts-javascript-sdk | created |
*(Interactive chart demo: a Refresh button and an “Only in USA” filter.)*
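Since the interactive demo does not survive in text form, here is a minimal sketch of the kind of embedding code the Charts JavaScript SDK (`@mongodb-js/charts-embed-dom`) enables; the base URL, chart ID, filter field, and element ID are placeholder assumptions:

```javascript
import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";

const sdk = new ChartsEmbedSDK({
  baseUrl: "https://charts.mongodb.com/charts-project-xxxxx", // your Charts base URL
});

const chart = sdk.createChart({
  chartId: "00000000-0000-0000-0000-000000000000", // the chart to embed
  height: 400,
});

// Render the chart into a container element on the page
await chart.render(document.getElementById("chart"));

// "Only in USA": narrow the chart to a subset of the data
await chart.setFilter({ country: "USA" });

// "Refresh": re-query the data behind the chart on demand
await chart.refresh();
```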
| md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "Learn how to visualize your data with MongoDB Charts.",
"contentType": "Tutorial"
} | Working with MongoDB Charts and the New JavaScript SDK | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/how-send-mongodb-document-changes-slack-channel | created | # How to Send MongoDB Document Changes to a Slack Channel
In this tutorial, we will explore a seamless integration of your database with Slack using Atlas Triggers and the Slack API. Discover how to effortlessly send notifications to your desired Slack channels, effectively connecting the operations happening within your collections and relaying them in real-time updates.
The overall flow will be: a write operation on a monitored collection fires an Atlas trigger, the trigger runs a function that processes the change event, and that function calls a second function that posts a message to a Slack channel through the Slack API.
Once the App Services application has been created, we are ready to start creating our first database trigger that will react every time there is an operation in a certain collection.
Once this has been completed, we are ready to start creating our first database trigger that will react every time there is an operation in a certain collection.
## Atlas trigger
For this tutorial, we will create a trigger that monitors all changes in a `test` collection for `insert`, `update`, and `delete` operations.
To create a new database trigger, you will need to:
1. Click the **Data Services** tab in the top navigation of your screen if you haven't already navigated to Atlas.
2. Click **Triggers** in the left-hand navigation.
3. On the **Overview** tab of the **Triggers** page, click **Add Trigger** to open the trigger configuration page.
4. Enter the configuration values for the trigger and click **Save** at the bottom of the page.
Please note that this trigger will make use of the *event ordering* as we want the operations to be processed according to when they were performed.
The trigger configuration values will look like this:
To create a new function using the UI, we need to:
1. Click the **Data Services** tab in the top navigation of your screen if you haven't already navigated to Atlas.
2. Click **Functions** in the left navigation menu.
3. Click **New Function** in the top right of the **Functions** page.
4. Enter a unique, identifying name for the function in the **Name** field.
5. Configure **User Authentication**. Functions in App Services always execute in the context of a specific application user or as a system user that bypasses rules. For this tutorial, we are going to use **System user**.
### "processEvent" function
The processEvent function will process the change events every time an operation we are monitoring happens in the given collection. It builds an object that we then send to the function in charge of posting the message to Slack.
The code of the function is the following:
```javascript
exports = function(changeEvent) {
const docId = changeEvent.documentKey._id;
const { updateDescription, operationType } = changeEvent;
var object = {
operationType,
docId,
};
if (updateDescription) {
const updatedFields = updateDescription.updatedFields; // A document containing updated fields
const removedFields = updateDescription.removedFields; // An array of removed fields
object = {
...object,
updatedFields,
removedFields
};
}
const result = context.functions.execute("sendToSlack", object);
return true;
};
```
In this function, we will create an object that we will then send as a parameter to another function that will be in charge of sending it to our Slack channel.
Here we will use the change event and its properties to capture the:
1. `_id` of the object that has been modified/inserted.
2. Operation that has been performed.
3. Fields of the object that have been modified or deleted when the operation has been an `update`.
With all this, we create an object and make use of the internal function calls to execute our `sendToSlack` function.
### "sendToSlack" function
This function will make use of the "chat.postMessage" method of the Slack API to send a message to a specific channel.
To use the Slack library, you must add it as a dependency in your Atlas function. Therefore, in the **Functions** section, we must go to the **Dependencies** tab and install `@slack/web-api`.
You will need to have a Slack token that will be used for creating the `WebClient` object as well as a Slack application. Therefore:
1. Create or use an existing Slack app: This is necessary as the subsequent token we will need will be linked to a Slack App. For this step, you can navigate to the Slack application and use your credentials to authenticate and create or use an existing app you are a member of.
2. Within this app, we will need to create a bot token that will hold the authentication API key to send messages to the corresponding channel in the Slack app created. Please note that you will need to add as many authorization scopes on your token as you need, but the bare minimum is to add the `chat:write` scope to allow your app to post messages.
A full guide on how to get these two can be found in the Slack official documentation.
First, we will apply some logic to the received object to build a message adapted to the event that occurred.
```javascript
var message = "";
if (arg.operationType == 'insert') {
message += `A new document with id \`${arg.docId}\` has been inserted`;
} else if (arg.operationType == 'update') {
message += `The document \`${arg.docId}\` has been updated.`;
if (arg.updatedFields && Object.keys(arg.updatedFields).length > 0) {
message += ` The fields ${JSON.stringify(arg.updatedFields)} have been modified.`;
}
if (arg.removedFields && arg.removedFields.length > 0) {
message += ` The fields ${JSON.stringify(arg.removedFields)} have been removed.`;
}
} else {
message += `An unexpected operation affecting document \`${arg.docId}\` occurred`;
}
```
Once we have the library, we use it to create a `WebClient` instance that gives us access to the methods we need.
```javascript
const { WebClient } = require('@slack/web-api');
// Read a token from the environment variables
const token = context.values.get('SLACK_TOKEN');
// Initialize
const app = new WebClient(token);
```
Finally, we can send our message with:
```javascript
try {
// Call the chat.postMessage method using the WebClient
const result = await app.chat.postMessage({
channel: channelId,
text: `New Event: ${message}`
});
console.log(result);
}
catch (error) {
console.error(error);
}
```
The full function code will be as:
```javascript
exports = async function(arg){
const { WebClient } = require('@slack/web-api');
// Read a token from the environment variables
const token = context.values.get('SLACK_TOKEN');
const channelId = context.values.get('CHANNEL_ID');
// Initialize
const app = new WebClient(token);
var message = "";
if (arg.operationType == 'insert') {
message += `A new document with id \`${arg.docId}\` has been inserted`;
} else if (arg.operationType == 'update') {
message += `The document \`${arg.docId}\` has been updated.`;
if (arg.updatedFields && Object.keys(arg.updatedFields).length > 0) {
message += ` The fields ${JSON.stringify(arg.updatedFields)} have been modified.`;
}
if (arg.removedFields && arg.removedFields.length > 0) {
message += ` The fields ${JSON.stringify(arg.removedFields)} have been removed.`;
}
} else {
message += `An unexpected operation affecting document \`${arg.docId}\` occurred`;
}
try {
// Call the chat.postMessage method using the WebClient
const result = await app.chat.postMessage({
channel: channelId,
text: `New Event: ${message}`
});
console.log(result);
}
catch (error) {
console.error(error);
}
};
```
Note: The bot token we use must have the minimum permissions to send messages to the desired channel. We must also add the Slack application we created to the channel where we want to receive the messages.
If everything is properly configured, every change in the collection for the monitored operations will be received in the Slack channel.
You can also configure the trigger to only detect certain changes and then adapt the change event to only receive certain fields with a "$project".
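As a hedged illustration of that idea, the trigger's Match Expression and Project Expression could look something like the following; adjust the fields to what your own `processEvent` function actually needs:

```javascript
// Match Expression: only fire the trigger for update operations
{ "operationType": "update" }

// Project Expression: keep only the change-event fields that processEvent uses
{ "operationType": 1, "documentKey": 1, "updateDescription": 1 }
```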
## Conclusion
In this tutorial, we've learned how to seamlessly integrate your database with Slack using Atlas Triggers and the Slack API. This integration allows you to send real-time notifications to your Slack channels, keeping your team informed about important operations within your database collections.
We started by creating a new application in Atlas and then set up a database trigger that reacts to specific collection operations. We explored the `processEvent` function, which processes change events and prepares the data for Slack notifications. Through a step-by-step process, we demonstrated how to create a message and use the Slack API to post it to a specific channel.
Now that you've grasped the basics, it's time to take your integration skills to the next level. Here are some steps you can follow:
- **Explore advanced use cases**: Consider how you can adapt the principles you've learned to more complex scenarios within your organization. Whether it's custom notifications or handling specific database events, there are countless possibilities.
- **Dive into the Slack API documentation**: For a deeper understanding of what's possible with Slack's API, explore their official documentation. This will help you harness the full potential of Slack's features.
By taking these steps, you'll be well on your way to creating powerful, customized integrations that can streamline your workflow and keep your team in the loop with real-time updates. Good luck with your integration journey!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8fcfb82094f04d75/653816cde299fbd2960a4695/image2.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc7874f54dc0cd8be/653816e70d850608a2f05bb9/image3.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt99aaf337d37c41ae/653816fd2c35813636b3a54d/image1.png | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "Learn how to use triggers in MongoDB Atlas to send information about changes to a document to Slack.",
"contentType": "Tutorial"
} | How to Send MongoDB Document Changes to a Slack Channel | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/doc-modeling-vector-search | created | # How to Model Your Documents for Vector Search
Atlas Vector Search was recently released, so let’s dive into a tutorial on how to properly model your documents when utilizing vector search to revolutionize your querying capabilities!
## Data modeling normally in MongoDB
Vector search is new, so let’s first go over the basic ways of modeling your data in a MongoDB document before continuing on into how to incorporate vector embeddings.
Data modeling in MongoDB revolves around organizing your data into documents within various collections. Different projects or organizations will require different data models because successful data modeling depends on the specific requirements of each application, and for the most part, no single document design can be applied to every situation. There are some commonalities, though, that can guide the user. These are:
1. Choosing whether to embed or reference your related data.
2. Using arrays in a document.
3. Indexing your documents (finding fields that are frequently used and applying the appropriate indexing, etc.).
For a more in-depth explanation and a comprehensive guide of data modeling with MongoDB, please check out our data modeling article.
## Setting up an example data model
We are going to be building our vector embedding example using a MongoDB document for our MongoDB TV series. Here, we have a single MongoDB document representing our MongoDB TV show, without any embeddings in place. We have a nested array featuring our array of seasons, and within that, our array of different episodes. This way, in our document, we are capable of seeing exactly which season each episode is a part of, along with the episode number, the title, the description, and the date:
```
{
"_id": ObjectId("238478293"),
"title": "MongoDB TV",
"description": "All your MongoDB updates, news, videos, and podcast episodes, straight to you!",
"genre": "Programming", "Database", "MongoDB"],
"seasons": [
{
"seasonNumber": 1,
"episodes": [
{
"episodeNumber": 1,
"title": "EASY: Build Generative AI Applications",
"description": "Join Jesse Hall….",
"date": ISODate("Oct52023")
},
{
"episodeNumber": 2,
"title": "RAG Architecture & MongoDB: The Future of Generative AI Apps",
"description": "Join Prakul Agarwal…",
"date": ISODate("Oct42023")
}
]
},
{
"seasonNumber": 2,
"episodes": [
{
"episodeNumber": 1,
"title": "Cloud Connect - Harness the Power of AI/ML and Generative AI on AWS with MongoDB Atlas",
"description": "Join Igor Alekseev….",
"date": ISODate("Oct32023")
},
{
"episodeNumber": 2,
"title": "The Index: Here’s what you missed last week…",
"description": "Join Megan Grant…",
"date": ISODate("Oct22023")
}
]
}
]
}
```
Now that we have our example set up, let’s incorporate vector embeddings and discuss the proper techniques to set you up for success.
## Integrating vector embeddings for vector search in our data model
Let’s first understand exactly what vector search is: Vector search is the way to search based on *meaning* rather than specific words. This comes in handy when querying using similarities rather than searching based on keywords. When using vector search, you can query using a question or a phrase rather than just a word. In a nutshell, vector search is great for when you can’t think of *exactly* that book or movie, but you remember the plot or the climax.
This process happens when text, video, or audio is transformed via an encoder into vectors. With MongoDB, we can do this using OpenAI, Hugging Face, or other natural language processing models. Once we have our vectors, we can store them at the base of our document and conduct vector search using them. Please keep in mind the current limitations of vector search and how to properly embed your vectors.
You can store your vector embeddings alongside other data in your document, or you can store them in a new collection. It is really up to the user and the project goals. Let’s go over what a document with vector embeddings can look like when you incorporate them into your data model, using the same example from above:
```
{
"_id": ObjectId("238478293"),
"title": "MongoDB TV",
"description": "All your MongoDB updates, news, videos, and podcast episodes, straight to you!",
"genre": "Programming", "Database", "MongoDB"],
“vectorEmbeddings”: [ 0.25, 0.5, 0.75, 0.1, 0.1, 0.8, 0.2, 0.6, 0.6, 0.4, 0.9, 0.3, 0.2, 0.7, 0.5, 0.8, 0.1, 0.8, 0.2, 0.6 ],
"seasons": [
{
"seasonNumber": 1,
"episodes": [
{
"episodeNumber": 1,
"title": "EASY: Build Generative AI Applications",
"description": "Join Jesse Hall….",
"date": ISODate("Oct 5, 2023")
},
{
"episodeNumber": 2,
"title": "RAG Architecture & MongoDB: The Future of Generative AI Apps",
"description": "Join Prakul Agarwal…",
"date": ISODate("Oct 4, 2023")
}
]
},
{
"seasonNumber": 2,
"episodes": [
{
"episodeNumber": 1,
"title": "Cloud Connect - Harness the Power of AI/ML and Generative AI on AWS with MongoDB Atlas",
"description": "Join Igor Alekseev….",
"date": ISODate("Oct 3, 2023")
},
{
"episodeNumber": 2,
"title": "The Index: Here’s what you missed last week…",
"description": "Join Megan Grant…",
"date": ISODate("Oct 2, 2023")
}
]
}
]
}
```
Here, you have your vector embeddings classified at the base of your document. Currently, there is a limitation where vector embeddings cannot be nested in an array in your document. Please ensure your document has your embeddings at the base. There are various tutorials on our Developer Center, alongside our YouTube account and our documentation, that can help you figure out how to embed these vectors into your document and how to acquire the necessary vectors in the first place.
## Extras: Indexing with vector search
When you’re using vector search, it is necessary to create a search index so you’re able to be successful with your semantic search. To do this, please view our Vector Search documentation. Here is the skeleton code provided by our documentation:
```
{
"fields":
{
"type": "vector",
"path": "",
"numDimensions": ,
"similarity": "euclidean | cosine | dotProduct"
},
{
"type": "filter",
"path": ""
},
...
]
}
```
When setting up your search index, you want to change the `path` value (the empty "" in the skeleton) to be your vector path. In our case, it would be “vectorEmbeddings”. “type” can stay the way it is. For “numDimensions”, please match the dimensions of the model you’ve chosen. This is just the number of vector dimensions, and the value cannot be greater than 4096. This limitation comes from the base embedding model that is being used, so please ensure you’re using a supported LLM (large language model) such as OpenAI or Hugging Face. When using one of these, you won’t run into any issues with vector dimensions. For “similarity”, please pick which vector similarity function you want to use to search for the top K-nearest neighbors.
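As an illustration, a filled-in definition for the example document above might look like the following; the 1536 dimension count is an assumption based on OpenAI's text-embedding-ada-002 model, so substitute the dimensions of whichever model you actually use:

```
{
  "fields": [
    {
      "type": "vector",
      "path": "vectorEmbeddings",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```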
## Extras: Querying with vector search
When you’re ready to query and find results from your embedded documents, it’s time to create an aggregation pipeline on your embedded vector data. To do this, you can use the “$vectorSearch” operator, which is a new aggregation stage in Atlas. It helps execute an Approximate Nearest Neighbor query.
For more information on this step, please check out the tutorial on Developer Center about building generative AI applications, and our YouTube video on vector search.
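To give a flavor of the shape of such a pipeline, here is a minimal `mongosh` sketch; the collection name, index name, and query vector are placeholder assumptions, while the `path` matches the `vectorEmbeddings` field used above:

```javascript
db.shows.aggregate([
  {
    $vectorSearch: {
      index: "vector_index",        // the Atlas Vector Search index you created
      path: "vectorEmbeddings",     // the field holding the embeddings
      queryVector: [0.12, 0.56 /* ... the embedding of your search phrase ... */],
      numCandidates: 100,           // how many nearest neighbors to consider
      limit: 5                      // how many results to return
    }
  },
  {
    $project: {
      title: 1,
      description: 1,
      score: { $meta: "vectorSearchScore" }
    }
  }
])
```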
| md | {
"tags": [
"MongoDB",
"AI"
],
"pageDescription": "Follow along with this comprehensive tutorial on how to properly model your documents for MongoDB Vector Search.",
"contentType": "Tutorial"
} | How to Model Your Documents for Vector Search | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/code-examples/python/dog-care-example-app | created | # Example Application for Dog Care Providers (DCP)
## Creator
Radvile Razmute contributed this project.
## About the project
My project explores how to use MongoDB Shell, MongoDB Atlas, and MongoDB Compass. This project aimed to develop a database for dog care providers and demonstrate how this data can be manipulated in MongoDB. The Dog Welfare Federation (DWF) is concerned that some providers who provide short/medium term care for dogs when the owner is unable to – e.g., when away on holidays, may not be delivering the service they promise. Up to now, the DWF has managed the data using a SQL database. As the scale of its operations expanded, the organization needed to invest in a cloud database application. As an alternative to the relational SQL database, the Dog Welfare Federation decided to look at the database development using MongoDB services.
The Dog database uses fictitious data that I created myself. The different practical stages of the project have been documented in my project report and may guide beginners taking their first steps into MongoDB.
## Inspiration
The assignment was given to me by my lecturer. And when he was deciding on the topics for the project, he knew that I love dogs. And that's why my project was all about the dogs. Even though the lecturer gave me the assignment, it was my idea to prepare this project in a way that does not only benefit me.
When I followed courses via MongoDB University, I noticed that these courses gave me a flavor of MongoDB, but not the basic concepts. I wanted to turn a database development project into a kind of guide for somebody who has never used MongoDB and who can take the project and say: "Okay, these are the basic concepts, this is what happens when you run the query, this is the result you get, and this is how you can validate that your result and your query are correct." So that's how the whole MongoDB project for beginners was born.
My guide tells you how to use MongoDB, what steps you need to follow to create an application, upload data, use the data, etc. It's one thing to know what those operators are doing, but it's an entirely different thing to understand how they connect and what impact they make.
## Why MongoDB?
My lecturer Noel Tierney, a lecturer in Computer Applications at Athlone Institute of Technology, Ireland, gave me the assignment to use MongoDB. He gave me instructions on the project and what kind of outcome he would like to see. I was asked to use MongoDB, and I decided to dive deeper into everything the platform offers. Besides that, as I mentioned briefly in the introduction: the organization DWF was planning on scaling and expanding its business, and it wanted to look into database development with MongoDB. This was a good chance for me to learn everything about NoSQL.
## How it works
The project teaches you how to set up a MongoDB database for dog care providers. It includes three main sections: MongoDB Shell, MongoDB Atlas, and MongoDB Compass. The MongoDB Shell section demonstrates how the data can be manipulated using simple queries and the aggregation method. I discuss how to import data into a local cluster, create queries, and retrieve and update data. The other two sections give an overview of MongoDB Atlas and MongoDB Compass; I also discuss querying and the aggregation framework for each. Each section shows step-by-step instructions on how to set up the application and also includes some data manipulation examples. As mentioned above, I created all the sample data myself, which was a ton of work! I made a spreadsheet with 2,000 different lines of sample data. To do that, I had to Google dog breeds, dog names, and their temperaments. I wanted it to be close to reality.
## Challenges and learning
When I started working with MongoDB, the first big thing I had to get over was the braces everywhere, so it was quite challenging for me to understand where a query finishes. But I read a lot of documentation, and creating this guide gave me quite a good understanding of the basics of MongoDB. I learned a lot about the technical side of databases because I was never familiar with them before; I had no idea how they work. Learning about and using MongoDB was a great experience. Once I had everything set up (the MongoDB Shell, Compass, and Atlas), I could see how information moves between all these different environments, and that was awesome. I think it worked quite well. I hope that my guide will be valuable for new learners. It demonstrates that users like me, who had no prior skills in using MongoDB, can quickly become MongoDB developers.
Access the complete report, which includes the queries you need - here.
| md | {
"tags": [
"Python",
"MongoDB"
],
"pageDescription": " Learn MongoDB by creating a database for dog care providers!",
"contentType": "Code Example"
} | Example Application for Dog Care Providers (DCP) | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/leafsteroidsresources | created | # Leafsteroid Resources
Leafsteroids is a MongoDB Demo showing the following services and integrations
------------------------------------------------------------------------
**Atlas App Services**
All in one backend. Atlas App Services offers a full-blown REST service using Atlas Functions and HTTPS endpoints.
**Atlas Search**
Used to find the player nickname in the Web UI.
**Atlas Charts**
Event & personalized player dashboards accessible over the web. Built-in visualization right with your data. No additional tools required.
**Document Model**
Every game run is a single document demonstrating rich documents and “data that works together lives together”, while other data entities are simple collections (configuration).
**AWS Beanstalk**
Hosts the Blazor Server Application (website).
**AWS EC2**
Used internally by AWS Beanstalk. Used to host our Python game server.
**AWS S3**
Used internally by AWS Beanstalk.
**AWS Private Cloud**
Private VPN connection between AWS and MongoDB.
**At a MongoDB .local Event and want to register to play Leafsteroids? Register Here**
You can build & play Leafsteroids yourself with the following links
## Development Resources
|Resource| Link|
|---|---|
|Github Repo |Here|
|MongoDB TV Livestream |Here|
|MongoDB & AWS |Here|
|MongoDB on the AWS Marketplace |Here|
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Leafsteroid Resources | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/create-first-stream-processor | created | # Get Started with Atlas Stream Processing: Creating Your First Stream Processor
>Atlas Stream Processing is now available. Learn more about it here.
If you're not already familiar, Atlas Stream Processing enables processing high-velocity streams of complex data using the same data model and Query API that's used in MongoDB Atlas databases. Streaming data is increasingly critical to building responsive, event-driven experiences for your customers. Stream processing is a fundamental building block powering these applications by helping to tame the firehose of data coming from many sources, finding important events in a stream, and combining data in motion with data at rest.
In this tutorial, we will create a stream processor that uses sample data included in Atlas Stream Processing. By the end of the tutorial, you will have an operational Stream Processing Instance (SPI) configured with a stream processor. This environment can be used for further experimentation and Atlas Stream Processing tutorials in the future.
### Tutorial Prerequisites
This is what you'll need to follow along:
* An Atlas user with atlasAdmin permission. For the purposes of this tutorial, we'll have the user "tutorialuser".
* MongoDB shell (Mongosh) version 2.0+
## Create the Stream Processing Instance
Let's first create a Stream Processing Instance (SPI). Think of an SPI as a logical grouping of one or more stream processors. When created, the SPI has a connection string similar to a typical MongoDB Atlas cluster.
Under the Services tab in the Atlas Project, click "Stream Processing". Then click the "Create Instance" button.
This will launch the Create Instance dialog.
Enter your desired cloud provider and region, and then click "Create". You will receive a confirmation dialog upon successful creation.
## Configure the connection registry
The connection registry stores connection information to the external data sources you wish to use within a stream processor. In this example, we will use a sample data generator that is available without any extra configuration, but typically you would connect to either Kafka or an Atlas database as a source.
To manage the connection registry, click on "Configure" to navigate to the configuration screen.
Once on the configuration screen, click on the "Connection Registry" tab.
Next, click on the "Add Connection" button. This will launch the Add Connection dialog.
From here, you can add connections to Kafka, other Atlas clusters within the project, or a sample stream. In this tutorial, we will use the Sample Stream connection. Click on "Sample Stream" and select "sample_stream_solar" from the list of available sample streams. Then, click "Add Connection".
The new "sample_stream_solar" will show up in the list of connections.
## Connect to the Stream Processing Instance (SPI)
Now that we have both created the SPI and configured the connection in the connection registry, we can create a stream processor. First, we need to connect to the SPI that we created previously. This can be done using the MongoDB Shell (mongosh).
To obtain the connection string to the SPI, return to the main Stream Processing page by clicking on the "Stream Processing" menu under the Services tab.
Next, locate the "Tutorial" SPI we just created and click on the "Connect" button. This will present a connection dialog similar to what is found when connecting to MongoDB Atlas clusters.
For connecting, we'll need to add a connection IP address and create a database user, if we haven't already.
Then we'll choose our connection method. If you do not already have mongosh installed, install it using the instructions provided in the dialog.
Once mongosh is installed, copy the connection string from the "I have the MongoDB Shell installed" view and run it in your terminal.
```
Command Terminal > mongosh <> --tls --authenticationDatabase admin --username tutorialuser
Enter password: *******************
Current Mongosh Log ID: 64e9e3bf025581952de31587
Connecting to: mongodb://*****
Using MongoDB: 6.2.0
Using Mongosh: 2.0.0
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
AtlasStreamProcessing>
```
To confirm your sample_stream_solar is added as a connection, issue `sp.listConnections()`. Our connection to sample_stream_solar is shown as expected.
```
AtlasStreamProcessing> sp.listConnections()
{
ok: 1,
  connections: [
    {
      name: 'sample_stream_solar',
      type: 'inmemory',
      createdAt: ISODate("2023-08-26T18:42:48.357Z")
    }
  ]
}
```
## Create a stream processor
If you are reading through this post as a prerequisite to another tutorial, you can return to that tutorial now to continue.
In this section, we will wrap up by creating a simple stream processor to process the sample_stream_solar source that we have used throughout this tutorial. This sample_stream_solar source represents the observed energy production of different devices (unique solar panels). Stream processing could be helpful in measuring characteristics such as panel efficiency or when replacement is required for a device that is no longer producing energy at all.
First, let's define a $source stage to describe where Atlas Stream Processing will read the stream data from.
```
var solarstream={$source:{"connectionName": "sample_stream_solar"}}
```
Now we will issue .process to view the contents of the stream in the console.
`sp.process([solarstream])`
.process lets us sample our source data and quickly test the stages of a stream processor to ensure that it is set up as intended. A sample of this data is as follows:
```
{
device_id: 'device_2',
group_id: 3,
timestamp: '2023-08-27T13:51:53.375+00:00',
max_watts: 250,
event_type: 0,
obs: {
watts: 168,
temp: 15
},
_ts: ISODate("2023-08-27T13:51:53.375Z"),
_stream_meta: {
sourceType: 'sampleData',
timestamp: ISODate("2023-08-27T13:51:53.375Z")
}
}
```
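Once you're happy with how the pipeline behaves under .process, you can also register it as a named, continuously running stream processor with the sp.createStreamProcessor helper. The sketch below is only illustrative: the name "solarDemo" is arbitrary, and a long-running processor would typically add window stages and end with a $merge stage writing to an Atlas cluster connection from your connection registry, which this tutorial doesn't set up.
```
// illustrative only -- a production pipeline would usually end with a $merge sink
sp.createStreamProcessor("solarDemo", [solarstream])
sp.solarDemo.start()      // begin processing in the background
sp.listStreamProcessors() // confirm it is running
sp.solarDemo.stop()       // stop it when you're done experimenting
sp.solarDemo.drop()       // remove it from the instance
```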
## Wrapping up
In this tutorial, we started by introducing Atlas Stream Processing and why stream processing is a building block for powering modern applications. We then walked through the basics of creating a stream processor – we created a Stream Processing Instance, configured a source in our connection registry using sample solar data (included in Atlas Stream Processing), connected to a Stream Processing Instance, and finally tested our first stream processor using .process. You are now ready to explore Atlas Stream Processing and create your own stream processors, adding advanced functionality like windowing and validation.
If you enjoyed this tutorial and would like to learn more, check out the MongoDB Atlas Stream Processing announcement blog post. For more on stream processors in Atlas Stream Processing, visit our documentation.
### Learn more about MongoDB Atlas Stream Processing
For more on managing stream processors in Atlas Stream Processing, visit our documentation.
>Log in today to get started. Atlas Stream Processing is now available to all developers in Atlas. Give it a try today! | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to create a stream processor end-to-end using MongoDB Atlas Stream Processing.",
"contentType": "Tutorial"
} | Get Started with Atlas Stream Processing: Creating Your First Stream Processor | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/instant-graphql-apis-mongodb-grafbase | created | # Instant GraphQL APIs for MongoDB with Grafbase
# Instant GraphQL APIs for MongoDB with Grafbase
In the ever-evolving landscape of web development, efficient data management and retrieval are paramount for creating dynamic and responsive applications. MongoDB, a versatile NoSQL database, and GraphQL, a powerful query language for APIs, have emerged as a dynamic duo that empowers developers to build robust, flexible, and high-performance applications.
When combined, MongoDB and GraphQL offer a powerful solution for front-end developers, especially when used at the edge.
You may be curious about the synergy between an unstructured database and a structured query language. Fortunately, Grafbase offers a solution that seamlessly combines both by leveraging its distinctive connector schema transformations.
## Prerequisites
In this tutorial, you’ll see how easy it is to get set up with MongoDB and Grafbase, simplifying the introduction of GraphQL into your applications.
You will need the following to get started:
- An account with Grafbase
- An account with MongoDB Atlas
- A database with data API access enabled
## Enable data API access
You will need a database with MongoDB Atlas to follow along — create one now!
For the purposes of this tutorial, I’ve created a free shared cluster with a single database deployment. We’ll refer to this instance as your “Data Source” later.
The MongoDB connector is then registered with your Grafbase configuration through the `g.datasource(mongodb)` call.
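For reference, a connector definition in `grafbase/grafbase.config.ts` looks roughly like the sketch below. The environment variable names are assumptions for this example (they hold the Data API URL, API key, data source name, and database name from your Atlas Data API settings), and option names can vary between SDK versions, so check the Grafbase MongoDB connector docs against your installed version.

```ts
import { g, connector, config } from '@grafbase/sdk'

// env var names below are assumptions -- use whatever you configured in Grafbase
const mongodb = connector.MongoDB('MongoDB', {
  url: g.env('MONGODB_API_URL'),
  apiKey: g.env('MONGODB_API_KEY'),
  dataSource: g.env('MONGODB_DATASOURCE'),
  database: g.env('MONGODB_DATABASE')
})

g.datasource(mongodb)

export default config({ schema: g })
```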
## Create models for data
The MongoDB connector empowers developers to organize their MongoDB collections in a manner that allows Grafbase to autonomously generate the essential queries and mutations for document creation, retrieval, update, and deletion within these collections.
Within Grafbase, each configuration for a collection is referred to as a "model," and you have the flexibility to employ the supported GraphQL Scalars to represent data within the collection(s).
It's important to consider that in cases where you possess pre-existing documents in your collection, not all fields are applicable to every document.
Let’s work under the assumption that you have no existing documents and want to create a new collection for `users`. Using the Grafbase TypeScript SDK, we can write the schema for each user model. It looks something like this:
```ts
const address = g.type('Address', {
street: g.string().mapped('street_name')
})
mongodb
  .model('User', {
    name: g.string(),
    email: g.string().optional(),
    // age is referenced by the example mutations and queries below
    age: g.int().optional(),
    address: g.ref(address)
  })
  .collection('users')
```
This schema will generate a fully working GraphQL API with queries and mutations as well as all input types for pagination, ordering, and filtering:
- `userCreate` – Create a new user
- `userCreateMany` – Batch create new users
- `userUpdate` – Update an existing user
- `userUpdateMany` – Batch update users
- `userDelete` – Delete a user
- `userDeleteMany` – Batch delete users
- `user` – Fetch a single user record
- `userCollection` – Fetch multiple users from a collection
MongoDB automatically generates collections when you first store data, so there’s no need to manually create a collection for users at this step.
We’re now ready to start the Grafbase development server using the CLI:
```bash
npx grafbase dev
```
This command runs the entire Grafbase GraphQL API locally that you can use when developing your front end. The Grafbase API communicates directly with your Atlas Data API.
Once the command is running, you’ll be able to visit http://127.0.0.1:4000 and explore the GraphQL API.
## Insert users with GraphQL to MongoDB instance
Let’s test out creating users inside our MongoDB collection using the generated `userCreate` mutation that was provided to us by Grafbase.
Using Pathfinder at http://127.0.0.1:4000, execute the following mutation:
```
mutation {
mongo {
userCreate(input: {
name: "Jamie Barton",
email: "jamie@grafbase.com",
age: 40
}) {
insertedId
}
}
}
```
If everything is hooked up correctly, you should see a response that looks something like this:
```json
{
"data": {
"mongo": {
"userCreate": {
"insertedId": "65154a3d4ddec953105be188"
}
}
}
}
```
You should repeat this step a few times to create multiple users.
## Update user by ID
Now we’ve created some users in our MongoDB collection, let’s try updating a user by `insertedId`:
```
mutation {
mongo {
userUpdate(by: {
id: "65154a3d4ddec953105be188"
}, input: {
age: {
set: 35
}
}) {
modifiedCount
}
}
}
```
Using the `userUpdate` mutation above, we `set` a new `age` value for the user where the `id` matches that of the ObjectID we passed in.
If everything was successful, you should see something like this:
```json
{
"data": {
"mongo": {
"userUpdate": {
"modifiedCount": 1
}
}
}
}
```
## Delete user by ID
Deleting users is similar to the create and update mutations above, but we don’t need to provide any additional `input` data since we’re deleting only:
```
mutation {
mongo {
userDelete(by: {
id: "65154a3d4ddec953105be188"
}) {
deletedCount
}
}
}
```
If everything was successful, you should see something like this:
```json
{
"data": {
"mongo": {
"userDelete": {
"deletedCount": 1
}
}
}
}
```
## Fetch all users
Grafbase generates the query `userCollection` that you can use to fetch all users. Grafbase requires a `first` or `last` pagination value with a max value of `100`:
```
query {
mongo {
userCollection(first: 100) {
edges {
node {
id
name
email
age
}
}
}
}
}
```
Here we are fetching the `first` 100 users from the collection. You can also pass a filter and order argument to tune the results:
```
query {
mongo {
userCollection(first: 100, filter: {
age: {
gt: 30
}
}, orderBy: {
age: ASC
}) {
edges {
node {
id
name
email
age
}
}
}
}
}
```
## Fetch user by ID
Using the same GraphQL API, we can fetch a user by the object ID. Grafbase automatically generates the query `user` where we can pass the `id` to the `by` input type:
```
query {
mongo {
user(
by: {
id: "64ee1cfbb315482287acea78"
}
) {
id
name
email
age
}
}
}
```
## Enable faster responses with GraphQL Edge Caching
Every request we make so far to our GraphQL API makes a round trip to the MongoDB database. This is fine, but we can improve response times even further by enabling GraphQL Edge Caching for GraphQL queries.
To enable GraphQL Edge Caching, inside `grafbase/grafbase.config.ts`, add the following to the `config` export:
```ts
export default config({
schema: g,
cache: {
    rules: [
      {
        types: 'Query',
        maxAge: 60
      }
    ]
}
})
```
This configuration will cache any query. If you only want to disable caching on some collections, you can do that too. Learn more about GraphQL Edge Caching.
## Deploy to the edge
So far, we’ve been working with Grafbase locally using the CLI, but now it’s time to deploy this around the world to the edge with GitHub.
If you already have an existing GitHub repository, go ahead and commit the changes we’ve made so far. If you don’t already have a GitHub repository, you will need to create one, commit this code, and push it to GitHub.
Now, create a new project with Grafbase and connect your GitHub account. You’ll need to permit Grafbase to read your repository contents, so make sure you select the correct repository and allow that.
Before you click **Deploy**, make sure to insert the environment variables obtained previously in the tutorial. Grafbase also supports environment variables for preview environments, so if you want to use a different MongoDB database for any Grafbase preview deployment, you can configure that later.
You can now query your new GraphQL API from your favorite GraphQL client libraries, such as Apollo Client, URQL, and Houdini.
If you have questions or comments, continue the conversation over in the MongoDB Developer Community.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt86a1fb09aa5e51ae/65282bf00749064f73257e71/image6.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt67f4040e41799bbc/65282c10814c6c262bc93103/image1.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt75ca38cd9261e241/65282c30ff3bbd5d44ad0aa3/image4.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltaf2a2af39e731dbe/65282c54391807638d3b0e1d/image5.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0c9563b3fdbf34fd/65282c794824f57358f273cf/image3.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt731c99011d158491/65282ca631f9bbb92a9669ad/image2.png | md | {
"tags": [
"Atlas",
"TypeScript",
"GraphQL"
],
"pageDescription": "Learn how to quickly and easily create a GraphQL API from your MongoDB data with Grafbase.",
"contentType": "Tutorial"
} | Instant GraphQL APIs for MongoDB with Grafbase | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/exploring-window-operators-atlas-stream-processing | created | # Exploring Window Operators in Atlas Stream Processing
> Atlas Stream Processing is now available. Learn more about it here.
In our previous post on windowing, we introduced window operators available in Atlas Stream Processing. Window operators are one of the most commonly used operations to effectively process streaming data. Atlas Stream Processing provides two window operators: $tumblingWindow and $hoppingWindow. In this tutorial, we will explore both of these operators using the sample solar data generator provided within Atlas Stream Processing.
## Getting started
Before we begin creating stream processors, make sure you have a database user who has “atlasAdmin” access to the Atlas Project. Also, if you do not already have a Stream Processing Instance created with a connection to the sample_stream_solar data generator, please follow the instructions in Get Started with Atlas Stream Processing: Creating Your First Stream Processor and then continue on.
## View the solar stream sample data
For this tutorial, we will be using the MongoDB shell.
First, confirm sample_stream_solar is added as a connection by issuing `sp.listConnections()`.
```
AtlasStreamProcessing> sp.listConnections()
{
ok: 1,
  connections: [
    {
      name: 'sample_stream_solar',
      type: 'inmemory',
      createdAt: ISODate("2023-08-26T18:42:48.357Z")
    }
  ]
}
```
Next, let’s define a **$source** stage to describe where Atlas Stream Processing will read the stream data from.
```
var solarstream={ $source: { "connectionName": "sample_stream_solar" } }
```
Then, issue a **.process** command to view the contents of the stream on the console.
```
sp.process([solarstream])
```
You will see the stream of solar data printed on the console. A sample of this data is as follows:
```json
{
device_id: 'device_2',
group_id: 3,
timestamp: '2023-08-27T13:51:53.375+00:00',
max_watts: 250,
event_type: 0,
obs: {
watts: 168,
temp: 15
},
_ts: ISODate("2023-08-27T13:51:53.375Z"),
_stream_meta: {
sourceType: 'sampleData',
timestamp: ISODate("2023-08-27T13:51:53.375Z")
}
}
```
## Create a tumbling window query
A tumbling window is a fixed-size window that moves forward in time at regular intervals. In Atlas Stream Processing, you use the $tumblingWindow operator. In this example, let's use the operator to compute the maximum and average watts over one-minute intervals.
Refer back to the schema from the sample stream solar data. To create a tumbling window, let’s create a variable and define our tumbling window stage.
```javascript
var Twindow= {
$tumblingWindow: {
interval: { size: NumberInt(1), unit: "minute" },
    pipeline: [
      {
        $group: {
          _id: "$device_id",
          max: { $max: "$obs.watts" },
          avg: { $avg: "$obs.watts" }
        }
      }
    ]
}
}
```
We are calculating the maximum value and average over the span of one-minute, non-overlapping intervals. Let’s use the `.process` command to run the streaming query in the foreground and view our results in the console.
```
sp.process([solarstream,Twindow])
```
Here is an example output of the statement:
```json
{
_id: 'device_4',
max: 236,
avg: 95,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T13:59:00.000Z"),
windowEndTimestamp: ISODate("2023-08-27T14:00:00.000Z")
}
}
{
_id: 'device_2',
max: 211,
avg: 117.25,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T13:59:00.000Z"),
windowEndTimestamp: ISODate("2023-08-27T14:00:00.000Z")
}
}
```
## Exploring the window operator pipeline
The pipeline that is used within a window function can include blocking stages and non-blocking stages.
Accumulator operators such as `$avg` and `$count`, along with stages like `$sort` and `$limit`, can be used within blocking stages. These operators only return meaningful results when run over a series of documents rather than a single data point, which is why they are considered blocking.
Non-blocking stages do not require multiple data points to be meaningful, and they include operators such as `$addFields`, `$match`, `$project`, `$set`, `$unset`, and `$unwind`, to name a few. You can use non-blocking stages before, after, or within the blocking stages. To illustrate this, let's create a query that shows the average, maximum, and delta (the difference between the maximum and average). We will use a non-blocking **$match** to show only the results from device_1, calculate the tumbling window showing maximum and average, and then include another non-blocking `$addFields`.
```
var m= { '$match': { device_id: 'device_1' } }
```
```javascript
var Twindow= {
'$tumblingWindow': {
interval: { size: Int32(1), unit: 'minute' },
    pipeline: [
      {
        '$group': {
          _id: '$device_id',
          max: { '$max': '$obs.watts' },
          avg: { '$avg': '$obs.watts' }
        }
      }
    ]
}
}
var delta = { '$addFields': { delta: { '$subtract': ['$max', '$avg'] } } }
```
Now we can use the .process command to run the stream processor in the foreground and view our results in the console.
```
sp.process([solarstream,m,Twindow,delta])
```
The results of this query will be similar to the following:
```json
{
_id: 'device_1',
max: 238,
avg: 75.3,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T19:11:00.000Z"),
windowEndTimestamp: ISODate("2023-08-27T19:12:00.000Z")
},
delta: 162.7
}
{
_id: 'device_1',
max: 220,
avg: 125.08333333333333,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T19:12:00.000Z"),
windowEndTimestamp: ISODate("2023-08-27T19:13:00.000Z")
},
delta: 94.91666666666667
}
{
_id: 'device_1',
max: 238,
avg: 119.91666666666667,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T19:13:00.000Z"),
windowEndTimestamp: ISODate("2023-08-27T19:14:00.000Z")
},
delta: 118.08333333333333
}
```
Notice the time segments and how they align on the minute.
![Time segments aligned on the minute][1]
Additionally, notice that the output includes the difference between the calculated values of maximum and average for each window.
## Create a hopping window
A hopping window, sometimes referred to as a sliding window, is a fixed-size window that moves forward in time at overlapping intervals. In Atlas Stream Processing, you use the `$hoppingWindow` operator. In this example, let's use the operator to compute the maximum and average watts over one-minute windows that hop forward every 30 seconds.
```javascript
var Hwindow = {
'$hoppingWindow': {
interval: { size: 1, unit: 'minute' },
hopSize: { size: 30, unit: 'second' },
pipeline: [
{
'$group': {
_id: '$device_id',
max: { '$max': '$obs.watts' },
avg: { '$avg': '$obs.watts' }
}
}
]
}
}
```
To help illustrate the start and end time segments, let's create a filter to only return device_1.
```
var m = { '$match': { device_id: 'device_1' } }
```
Now let’s issue the `.process` command to view the results in the console.
```
sp.process([solarstream,m,Hwindow])
```
An example result is as follows:
```json
{
_id: 'device_1',
max: 238,
avg: 76.625,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T19:37:30.000Z"),
windowEndTimestamp: ISODate("2023-08-27T19:38:30.000Z")
}
}
{
_id: 'device_1',
max: 238,
avg: 82.71428571428571,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T19:38:00.000Z"),
windowEndTimestamp: ISODate("2023-08-27T19:39:00.000Z")
}
}
{
_id: 'device_1',
max: 220,
avg: 105.54545454545455,
_stream_meta: {
sourceType: 'sampleData',
windowStartTimestamp: ISODate("2023-08-27T19:38:30.000Z"),
windowEndTimestamp: ISODate("2023-08-27T19:39:30.000Z")
}
}
```
Notice the time segments.
![Overlapping time segments][2]
The time segments overlap by 30 seconds, as defined by the hopSize option. Hopping windows are useful for capturing short-term patterns in data.
## Summary
By continuously processing data within time windows, you can generate real-time insights and metrics, which can be crucial for applications like monitoring, fraud detection, and operational analytics. Atlas Stream Processing provides both tumbling and hopping window operators. Together these operators enable you to perform various aggregation operations such as sum, average, min, and max over a specific window of data. In this tutorial, you learned how to use both of these operators with solar sample data.
### Learn more about MongoDB Atlas Stream Processing
Check out the [MongoDB Atlas Stream Processing announcement blog post. For more on window operators in Atlas Stream Processing, learn more in our documentation.
>Log in today to get started. Atlas Stream Processing is available to all developers in Atlas. Give it a try today!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt73ff54f0367cad3b/650da3ef69060a5678fc1242/image1.jpg
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt833bc1a824472d14/650da41aa5f15dea3afc5b55/image3.jpg | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to use the various window operators such as tumbling window and hopping window with MongoDB Atlas Stream Processing.",
"contentType": "Tutorial"
} | Exploring Window Operators in Atlas Stream Processing | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/python/python-quickstart-fastapi | created | # Getting Started with MongoDB and FastAPI
FastAPI is a modern, high-performance, easy-to-learn, fast-to-code, production-ready, Python 3.6+ framework for building APIs based on standard Python type hints. While it might not be as established as some other Python frameworks such as Django, it is already in production at companies such as Uber, Netflix, and Microsoft.
FastAPI is async, and as its name implies, it is super fast; so, MongoDB is the perfect accompaniment. In this quick start, we will create a CRUD (Create, Read, Update, Delete) app showing how you can integrate MongoDB with your FastAPI projects.
## Prerequisites
- Python 3.9.0
- A MongoDB Atlas cluster. Follow the "Get Started with Atlas" guide to create your account and MongoDB cluster. Keep a note of your username, password, and connection string as you will need those later.
## Running the Example
To begin, you should clone the example code from GitHub.
``` shell
git clone git@github.com:mongodb-developer/mongodb-with-fastapi.git
```
You will need to install a few dependencies: FastAPI, Motor, etc. I always recommend that you install all Python dependencies in a virtualenv for the project. Before running pip, ensure your virtualenv is active.
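If you don't already have a virtual environment for this project, a typical setup looks like this (create it wherever you prefer; inside the cloned directory is a common choice):

``` shell
python3 -m venv venv
source venv/bin/activate
```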
``` shell
cd mongodb-with-fastapi
pip install -r requirements.txt
```
It may take a few moments to download and install your dependencies. This is normal, especially if you have not installed a particular package before.
Once you have installed the dependencies, you need to create an environment variable for your MongoDB connection string.
``` shell
export MONGODB_URL="mongodb+srv://:@/?retryWrites=true&w=majority"
```
Remember, anytime you start a new terminal session, you will need to set this environment variable again. I use direnv to make this process easier.
The final step is to start your FastAPI server.
``` shell
uvicorn app:app --reload
```
Once the application has started, you can view it in your browser at .
Once you have had a chance to try the example, come back and we will walk through the code.
## Creating the Application
All the code for the example application is within `app.py`. I'll break it down into sections and walk through what each is doing.
### Connecting to MongoDB
One of the very first things we do is connect to our MongoDB database.
``` python
client = motor.motor_asyncio.AsyncIOMotorClient(os.environ["MONGODB_URL"])
db = client.get_database("college")
student_collection = db.get_collection("students")
```
We're using the async motor driver to create our MongoDB client, and then we specify our database name `college`.
### The \_id Attribute and ObjectIds
``` python
# Represents an ObjectId field in the database.
# It will be represented as a `str` on the model so that it can be serialized to JSON.
PyObjectId = Annotated[str, BeforeValidator(str)]
```
MongoDB stores data as BSON. FastAPI encodes and decodes data as JSON strings. BSON has support for additional non-JSON-native data types, including `ObjectId` which can't be directly encoded as JSON. Because of this, we convert `ObjectId`s to strings before storing them as the `id` field.
### Database Models
Many people think of MongoDB as being schema-less, which is wrong. MongoDB has a flexible schema. That is to say that collections do not enforce document structure by default, so you have the flexibility to make whatever data-modelling choices best match your application and its performance requirements. So, it's not unusual to create models when working with a MongoDB database. Our application has three models, the `StudentModel`, the `UpdateStudentModel`, and the `StudentCollection`.
``` python
class StudentModel(BaseModel):
"""
Container for a single student record.
"""
# The primary key for the StudentModel, stored as a `str` on the instance.
# This will be aliased to `_id` when sent to MongoDB,
# but provided as `id` in the API requests and responses.
    id: Optional[PyObjectId] = Field(alias="_id", default=None)
name: str = Field(...)
email: EmailStr = Field(...)
course: str = Field(...)
gpa: float = Field(..., le=4.0)
model_config = ConfigDict(
populate_by_name=True,
arbitrary_types_allowed=True,
json_schema_extra={
"example": {
"name": "Jane Doe",
"email": "jdoe@example.com",
"course": "Experiments, Science, and Fashion in Nanophotonics",
"gpa": 3.0,
}
},
)
```
This is the primary model we use as the response model for the majority of our endpoints.
I want to draw attention to the `id` field on this model. MongoDB uses `_id`, but in Python, underscores at the start of attributes have special meaning. If you have an attribute on your model that starts with an underscore, pydantic—the data validation framework used by FastAPI—will assume that it is a private variable, meaning you will not be able to assign it a value! To get around this, we name the field `id` but give it an alias of `_id`. You also need to set `populate_by_name` to `True` in the model's `model_config`.
We set this `id` value automatically to `None`, so you do not need to supply it when creating a new student.
``` python
class UpdateStudentModel(BaseModel):
"""
A set of optional updates to be made to a document in the database.
"""
    name: Optional[str] = None
email: Optional[EmailStr] = None
course: Optional[str] = None
gpa: Optional[float] = None
model_config = ConfigDict(
arbitrary_types_allowed=True,
json_encoders={ObjectId: str},
json_schema_extra={
"example": {
"name": "Jane Doe",
"email": "jdoe@example.com",
"course": "Experiments, Science, and Fashion in Nanophotonics",
"gpa": 3.0,
}
},
)
```
The `UpdateStudentModel` has two key differences from the `StudentModel`:
- It does not have an `id` attribute as this cannot be modified.
- All fields are optional, so you only need to supply the fields you wish to update.
Finally, `StudentCollection` is defined to encapsulate a list of `StudentModel` instances. In theory, the endpoint could return a top-level list of StudentModels, but there are some vulnerabilities associated with returning JSON responses with top-level lists.
```python
class StudentCollection(BaseModel):
"""
A container holding a list of `StudentModel` instances.
    This exists because providing a top-level array in a JSON response can be a vulnerability.
"""
    students: List[StudentModel]
```
### Application Routes
Our application has five routes:
- POST /students/ - creates a new student.
- GET /students/ - view a list of all students.
- GET /students/{id} - view a single student.
- PUT /students/{id} - update a student.
- DELETE /students/{id} - delete a student.
#### Create Student Route
``` python
@app.post(
"/students/",
response_description="Add new student",
response_model=StudentModel,
status_code=status.HTTP_201_CREATED,
response_model_by_alias=False,
)
async def create_student(student: StudentModel = Body(...)):
"""
Insert a new student record.
A unique `id` will be created and provided in the response.
"""
new_student = await student_collection.insert_one(
student.model_dump(by_alias=True, exclude=["id"])
)
created_student = await student_collection.find_one(
{"_id": new_student.inserted_id}
)
return created_student
```
The `create_student` route receives the new student data as a JSON string in a `POST` request. We have to decode this JSON request body into a Python dictionary before passing it to our MongoDB client.
The `insert_one` method response includes the `_id` of the newly created student (provided as `id` because this endpoint specifies `response_model_by_alias=False` in the `post` decorator call). After we insert the student into our collection, we use the `inserted_id` to find the correct document and return this in our `JSONResponse`.
FastAPI returns an HTTP `200` status code by default; but in this instance, a `201` created is more appropriate.
#### Read Routes
The application has two read routes: one for viewing all students and the other for viewing an individual student.
``` python
@app.get(
"/students/",
response_description="List all students",
response_model=StudentCollection,
response_model_by_alias=False,
)
async def list_students():
"""
List all of the student data in the database.
The response is unpaginated and limited to 1000 results.
"""
return StudentCollection(students=await student_collection.find().to_list(1000))
```
Motor's `to_list` method requires a max document count argument. For this example, I have hardcoded it to `1000`; but in a real application, you would use the skip and limit parameters in `find` to paginate your results.
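As a rough sketch of what paginated queries could look like (the `page` and `page_size` values here are hypothetical and would normally come from query parameters; they are not part of the example app):

``` python
# hypothetical pagination sketch: skip whole pages, then take one page of results
page, page_size = 2, 50
cursor = student_collection.find().skip(page * page_size).limit(page_size)
students = await cursor.to_list(page_size)
```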
``` python
@app.get(
"/students/{id}",
response_description="Get a single student",
response_model=StudentModel,
response_model_by_alias=False,
)
async def show_student(id: str):
"""
Get the record for a specific student, looked up by `id`.
"""
if (
student := await student_collection.find_one({"_id": ObjectId(id)})
) is not None:
return student
raise HTTPException(status_code=404, detail=f"Student {id} not found")
```
The student detail route has a path parameter of `id`, which FastAPI passes as an argument to the `show_student` function. We use the `id` to attempt to find the corresponding student in the database. The conditional in this section is using an assignment expression, an addition to Python 3.8 and often referred to by the cute sobriquet "walrus operator."
If a document with the specified `_id` does not exist, we raise an `HTTPException` with a status of `404`.
#### Update Route
``` python
@app.put(
"/students/{id}",
response_description="Update a student",
response_model=StudentModel,
response_model_by_alias=False,
)
async def update_student(id: str, student: UpdateStudentModel = Body(...)):
"""
Update individual fields of an existing student record.
Only the provided fields will be updated.
Any missing or `null` fields will be ignored.
"""
student = {
k: v for k, v in student.model_dump(by_alias=True).items() if v is not None
}
if len(student) >= 1:
update_result = await student_collection.find_one_and_update(
{"_id": ObjectId(id)},
{"$set": student},
return_document=ReturnDocument.AFTER,
)
if update_result is not None:
return update_result
else:
raise HTTPException(status_code=404, detail=f"Student {id} not found")
# The update is empty, but we should still return the matching document:
    if (existing_student := await student_collection.find_one({"_id": ObjectId(id)})) is not None:
return existing_student
raise HTTPException(status_code=404, detail=f"Student {id} not found")
```
The `update_student` route is like a combination of the `create_student` and the `show_student` routes. It receives the `id` of the document to update as well as the new data in the JSON body. We don't want to update any fields with empty values; so, first of all, we iterate over all the items in the received dictionary and only add the items that have a value to our new document.
If, after we remove the empty values, there are no fields left to update, we instead look for an existing record that matches the `id` and return that unaltered. However, if there are values to update, we use find_one_and_update to $set the new values, and then return the updated document.
If we get to the end of the function and we have not been able to find a matching document to update or return, then we raise a `404` error again.
#### Delete Route
``` python
@app.delete("/students/{id}", response_description="Delete a student")
async def delete_student(id: str):
"""
Remove a single student record from the database.
"""
delete_result = await student_collection.delete_one({"_id": ObjectId(id)})
if delete_result.deleted_count == 1:
return Response(status_code=status.HTTP_204_NO_CONTENT)
raise HTTPException(status_code=404, detail=f"Student {id} not found")
```
Our final route is `delete_student`. Again, because this is acting upon a single document, we have to supply an `id` in the URL. If we find a matching document and successfully delete it, then we return an HTTP status of `204` or "No Content." In this case, we do not return a document as we've already deleted it! However, if we cannot find a student with the specified `id`, then instead we return a `404`.
## Our New FastAPI App Generator
If you're excited to build something more production-ready with FastAPI, React & MongoDB, head over to the Github repository for our new FastAPI app generator and start transforming your web development experience.
## Wrapping Up
I hope you have found this introduction to FastAPI with MongoDB useful. If you would like to learn more, check out my post introducing the FARM stack (FastAPI, React and MongoDB) as well as the FastAPI documentation and this awesome list.
>If you have questions, please head to our developer community website where MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Python",
"MongoDB",
"Django",
"FastApi"
],
"pageDescription": "Getting started with MongoDB and FastAPI",
"contentType": "Quickstart"
} | Getting Started with MongoDB and FastAPI | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/deploy-mongodb-atlas-aws-cloudformation | created | # How to Deploy MongoDB Atlas with AWS CloudFormation
MongoDB Atlas is the multi-cloud developer data platform that provides an integrated suite of cloud database and data services. We help to accelerate and simplify how you build resilient and performant global applications on the cloud provider of your choice.
AWS CloudFormation lets you model, provision, and manage AWS and third-party resources like MongoDB Atlas by treating infrastructure as code (IaC). CloudFormation templates are written in either JSON or YAML.
While there are multiple ways to use CloudFormation to provision and manage your Atlas clusters, such as with Partner Solution Deployments or the AWS CDK, today we’re going to go over how to create your first YAML CloudFormation templates to deploy Atlas clusters with CloudFormation.
These pre-made templates directly leverage MongoDB Atlas resources from the CloudFormation Public Registry and execute via the AWS CLI/AWS Management Console. Using these is best for users who seek to be tightly integrated into AWS with fine-grained access controls.
Let’s get started!
*Prerequisites:*
- Install and configure an AWS Account and the AWS CLI.
- Install and configure the MongoDB Atlas CLI (optional but recommended).
## Step 1: Create a MongoDB Atlas account
Sign up for a free MongoDB Atlas account, verify your email address, and log into your new account.
Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
and contact AWS support directly, who can help confirm the CIDR range to be used in your Atlas PAK IP Whitelist.
on MongoDB Atlas.
). You can set this up with AWS IAM (Identity and Access Management). You can find that in the navigation bar of your AWS. You can find the ARN in the user information in the “Roles” button. Once there, find the role whose ARN you want to use and add it to the Extension Details in CloudFormation. Learn how to create user roles/permissions in the IAM.
required from our GitHub repo. It’s important that you use an ARN with sufficient permissions each time it’s asked for.
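For a rough sense of what the template you'll deploy contains, the sketch below declares an Atlas project and cluster using the public-registry resource types. Treat it only as a shape reference: the property names follow the public resource schemas at the time of writing and may differ between resource versions, and the organization ID and sizing values are placeholders, so always start from the templates in the GitHub repo.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal sketch of a MongoDB Atlas deployment (illustrative only)
Resources:
  AtlasProject:
    Type: MongoDB::Atlas::Project
    Properties:
      Name: cfn-quickstart-project          # placeholder project name
      OrgId: "<your-atlas-organization-id>" # placeholder organization ID
  AtlasCluster:
    Type: MongoDB::Atlas::Cluster
    Properties:
      ProjectId: !GetAtt AtlasProject.Id
      Name: cfn-quickstart-cluster
      ClusterType: REPLICASET
      ReplicationSpecs:
        - AdvancedRegionConfigs:
            - RegionName: US_EAST_1
              ProviderName: AWS
              Priority: 7
              ElectableSpecs:
                InstanceSize: M10
                NodeCount: 3
```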
## Step 7: Deploy the CloudFormation template
In the AWS management console, go to the CloudFormation tab. Then, in the left-hand navigation, click on “Stacks.” In the window that appears, hit the “Create Stack” drop-down. Select “Create new stack with existing resources.”
Next, select “template is ready” in the “Prerequisites” section and “Upload a template” in the “Specify templates” section. From here, you will choose the YAML (or JSON) file containing the MongoDB Atlas deployment that you created in the prior step.
The fastest way to get started is to create a MongoDB Atlas account from the AWS Marketplace.
Additionally, you can watch our demo to learn about the other ways to get started with MongoDB Atlas and CloudFormation.
Go build with MongoDB Atlas and AWS CloudFormation today!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6a7a0aace015cbb5/6504a623a8cf8bcfe63e171a/image4.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt471e37447cf8b1b1/6504a651ea4b5d10aa5135d6/image8.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3545f9cbf7c8f622/6504a67ceb5afe6d504a833b/image13.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3582d0a3071426e3/6504a69f0433c043b6255189/image12.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb4253f96c019874e/6504a6bace38f40f4df4cddf/image1.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2840c92b6d1ee85d/6504a6d7da83c92f49f9b77e/image7.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd4a32140ddf600fc/6504a700ea4b5d515f5135db/image5.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt49dabfed392fa063/6504a73dbb60f713d4482608/image9.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt592e3f129fe1304b/6504a766a8cf8b5ba23e1723/image11.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbff284987187ce16/6504a78bb8c6d6c2d90e6e22/image10.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0ae450069b31dff9/6504a7b99bf261fdd46bddcf/image3.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7f24645eefdab69c/6504a7da9aba461d6e9a55f4/image2.png
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7e1c20eba155233a/6504a8088606a80fe5c87f31/image6.png | md | {
"tags": [
"Atlas",
"AWS"
],
"pageDescription": "Learn how to quickly and easily deploy MongoDB Atlas instances with Amazon Web Services (AWS) CloudFormation.",
"contentType": "Tutorial"
} | How to Deploy MongoDB Atlas with AWS CloudFormation | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/nextjs-with-mongodb | created | # How to Integrate MongoDB Into Your Next.js App
> This tutorial uses the Next.js Pages Router instead of the App Router which was introduced in Next.js version 13. The Pages Router is still supported and recommended for production environments.
Are you building your next amazing application with Next.js? Do you wish you could integrate MongoDB into your Next.js app effortlessly? Do you need this done before your coffee has finished brewing? If you answered yes to these three questions, I have some good news for you. We have created a Next.js<>MongoDB integration that will have you up and running in minutes, and you can consider this tutorial your official guide on how to use it.
In this tutorial, we'll take a look at how we can use the **with-mongodb** example to create a new Next.js application that follows MongoDB best practices for connectivity, connection pool monitoring, and querying. We'll also take a look at how to use MongoDB in our Next.js app with things like serverSideProps and APIs. Finally, we'll take a look at how we can easily deploy and host our application on Vercel, the official hosting platform for Next.js applications. If you already have an existing Next.js app, not to worry. Simply drop the MongoDB utility file into your existing project and you are good to go. We have a lot of exciting stuff to cover, so let's dive right in!
## Next.js and MongoDB with one click
Our app is now deployed and running in production. If you weren't following along with the tutorial and just want to quickly start your Next.js application with MongoDB, you could always use the `with-mongodb` starter found on GitHub, but I’ve got an even better one for you.
Visit Vercel and you'll be off to the races in creating and deploying the official Next.js with the MongoDB integration, and all you'll need to provide is your connection string.
## Prerequisites
For this tutorial, you'll need:
- MongoDB Atlas (sign up for free).
- A Vercel account (sign up for free).
- NodeJS 18+.
- npm and npx.
To get the most out of this tutorial, you need to be familiar with React and Next.js. I will cover unique Next.js features with enough details to still be valuable to a newcomer.
## What is Next.js?
If you're not already familiar with it, Next.js is a React-based framework for building modern web applications. The framework adds a lot of powerful features — such as server-side rendering, automatic code splitting, and incremental static regeneration — that make it easy to build scalable, production-ready apps.
For the database, we'll use MongoDB Atlas. You can use a local MongoDB installation if you have one, but if you're just getting started, MongoDB Atlas is a great way to get up and running without having to install or manage your MongoDB instance. MongoDB Atlas has a forever free tier that you can sign up for, as well as the sample data that we'll be using for the rest of this tutorial.
To get our MongoDB URI, in our MongoDB Atlas dashboard:
1. Hit the **Connect** button.
2. Then, click the **Connect to your application** button, and here you'll see a string that contains your **URI** that will look like this:
```
mongodb+srv://:@cluster0..mongodb.net/?retryWrites=true&w=majority
```
If you are new to MongoDB Atlas, you'll need to go to the **Database Access** section and create a username and password, as well as the **Network Access** tab to ensure your IP is allowed to connect to the database. However, if you already have a database user and network access enabled, you'll just need to replace the `` and `` fields with your information.
For the ``, we'll load the MongoDB Atlas sample datasets and use one of those databases.
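With your connection string in hand, the with-mongodb starter reads it from an environment variable defined in a `.env.local` file at the project root. Assuming the variable is named `MONGODB_URI` (check the starter's README and `lib/mongodb` file if yours differs), it looks like this:
```
MONGODB_URI=mongodb+srv://<username>:<password>@<your-cluster-url>/?retryWrites=true&w=majority
```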
If you run into any issues connecting, ask in the MongoDB Community forums, and we'll help troubleshoot.
## Querying MongoDB with Next.js
Now that we are connected to MongoDB, let's discuss how we can query our MongoDB data and bring it into our Next.js application. Next.js supports multiple ways to get data. We can create API endpoints, get data by running server-side rendered functions for a particular page, and even generate static pages by getting our data at build time. We'll look at all three examples.
## Example 1: Next.js API endpoint with MongoDB
The first example we'll look at is building and exposing an API endpoint in our Next.js application. To create a new API endpoint route, we will first need to create an `api` directory in our `pages` directory, and then every file we create in this `api` directory will be treated as an individual API endpoint.
Let's go ahead and create the `api` directory and a new file in this directory called `movies.tsx`. This endpoint will return a list of 20 movies from our MongoDB database. The implementation for this route is as follows:
```
import clientPromise from "../../lib/mongodb";
import { NextApiRequest, NextApiResponse } from 'next';
export default async (req: NextApiRequest, res: NextApiResponse) => {
try {
const client = await clientPromise;
const db = client.db("sample_mflix");
const movies = await db
.collection("movies")
.find({})
.sort({ metacritic: -1 })
.limit(10)
.toArray();
res.json(movies);
} catch (e) {
console.error(e);
}
}
```
To explain what is going on here, we'll start with the import statement. We are importing our `clientPromise` method from the `lib/mongodb` file. This file contains all the instructions on how to connect to our MongoDB Atlas cluster. Additionally, within this file, we cache the instance of our connection so that subsequent requests do not have to reconnect to the cluster. They can use the existing connection. All of this is handled for you!
Next, our API route handler has the signature of `export default async (req, res)`. If you're familiar with Express.js, this should look very familiar. This is the function that gets run when the `localhost:3000/api/movies` route is called. We capture the request via `req` and return the response via the `res` object.
Our handler function implementation calls the `clientPromise` function to get the instance of our MongoDB database. Next, we run a MongoDB query using the MongoDB Node.js driver to get the top 20 movies out of our **movies** collection based on their **metacritic** rating sorted in descending order.
Finally, we call the `res.json` method and pass in our array of movies. This serves our movies in JSON format to our browser. If we navigate to `localhost:3000/api/movies`, we'll see a result that looks like this:
Here's a challenge for you: Create another API route that returns a single movie based on its ID. **Tip**: You'll want to use Next.js dynamic API routes to capture the `id`. So, if a user calls `http://localhost:3000/api/movies/573a1394f29313caabcdfa3e`, the movie that should be returned is Seven Samurai. **Another tip**: The `_id` property for the `sample_mflix` database in MongoDB is stored as an ObjectID, so you'll have to convert the string to an ObjectID. If you get stuck, create a thread on the MongoDB Community forums and we'll solve it together! Next, we'll take a look at how to access our MongoDB data within our Next.js pages.
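If you want to check your work afterward, one possible sketch of a solution in `pages/api/movies/[id].tsx` might look like the following (hedged: error handling is minimal, and the file name simply follows the dynamic-route convention mentioned above):

```
import { ObjectId } from "mongodb";
import clientPromise from "../../../lib/mongodb";
import { NextApiRequest, NextApiResponse } from "next";

export default async (req: NextApiRequest, res: NextApiResponse) => {
  try {
    const client = await clientPromise;
    const db = client.db("sample_mflix");
    // req.query.id comes from the [id] segment of the route
    const movie = await db
      .collection("movies")
      .findOne({ _id: new ObjectId(req.query.id as string) });
    if (!movie) {
      return res.status(404).json({ message: "Movie not found" });
    }
    res.json(movie);
  } catch (e) {
    console.error(e);
    res.status(500).json({ message: "Something went wrong" });
  }
};
```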
## Example 2: Next.js pages with MongoDB
In the last section, we saw how we can create an API endpoint and connect to MongoDB with it. In this section, we'll get our data directly into our Next.js pages. We'll do this using the getServerSideProps() method that is available to Next.js pages.
The `getServerSideProps()` method forces a Next.js page to load with server-side rendering. What this means is that every time this page is loaded, the `getServerSideProps()` method runs on the back end, gets data, and sends it into the React component via props. The code within `getServerSideProps()` is never sent to the client. This makes it a great place to implement our MongoDB queries.
Let's see how this works in practice. Let's create a new file in the `pages` directory, and we'll call it `movies.tsx`. In this file, we'll add the following code:
```
import clientPromise from "../lib/mongodb";
import { GetServerSideProps } from 'next';
interface Movie {
_id: string;
title: string;
metacritic: number;
plot: string;
}
interface MoviesProps {
    movies: Movie[];
}
const Movies: React.FC<MoviesProps> = ({ movies }) => {
    return (
        <div>
            <h1>Top 20 Movies of All Time</h1>
            <h2>(According to Metacritic)</h2>
            <ul>
                {movies.map((movie) => (
                    <li key={movie._id}>
                        <h3>{movie.title}</h3>
                        <h4>{movie.metacritic}</h4>
                        <p>{movie.plot}</p>
                    </li>
                ))}
            </ul>
        </div>
    );
};
export default Movies;
export const getServerSideProps: GetServerSideProps = async () => {
try {
const client = await clientPromise;
const db = client.db("sample_mflix");
const movies = await db
.collection("movies")
.find({})
.sort({ metacritic: -1 })
.limit(20)
.toArray();
return {
props: { movies: JSON.parse(JSON.stringify(movies)) },
};
} catch (e) {
console.error(e);
return { props: { movies: [] } };
}
};
```
As you can see from the example above, we are importing the same `clientPromise` utility, and our MongoDB query is exactly the same within the `getServerSideProps()` method. The only thing we really needed to change is how we parse the response: we stringify and then re-parse the result because Next.js requires page props to be serializable to JSON, and MongoDB documents contain values (such as `ObjectId` and `Date`) that are not plain JSON.
Our page component called `Movies` gets the props from our `getServerSideProps()` method, and we use that data to render the page showing the top movie title, metacritic rating, and plot. Your result should look something like this:
![Top 20 movies][6]
This is great. We can directly query our MongoDB database and get all the data we need for a particular page. The contents of the `getServerSideProps()` method are never sent to the client, but the one downside to this is that this method runs every time we call the page. Our data is pretty static and unlikely to change all that often. What if we pre-rendered this page and didn't have to call MongoDB on every refresh? We'll take a look at that next!
## Example 3: Next.js static generation with MongoDB
For our final example, we'll take a look at how static page generation can work with MongoDB. Let's create a new file in the `pages` directory and call it `top.tsx`. For this page, what we'll want to do is render the top 1,000 movies from our MongoDB database.
Top 1,000 movies? Are you out of your mind? That'll take a while, and the database round trip is not worth it. Well, what if we only called this method once when we built the application so that even if that call takes a few seconds, it'll only ever happen once and our users won't be affected? They'll get the top 1,000 movies delivered as quickly as or even faster than the 20 using `getServerSideProps()`. The magic lies in the `getStaticProps()` method, and our implementation looks like this:
```
import { ObjectId } from "mongodb";
import clientPromise from "../lib/mongodb";
import { GetStaticProps } from "next";
interface Movie {
_id: ObjectId;
title: string;
metacritic: number;
plot: string;
}
interface TopProps {
movies: Movie[];
}
export default function Top({ movies }: TopProps) {
    return (
        <div>
            <h1>Top 1000 Movies of All Time</h1>
            <h2>(According to Metacritic)</h2>
            <ul>
                {movies.map((movie) => (
                    <li key={movie._id.toString()}>
                        <h3>{movie.title}</h3>
                        <h4>{movie.metacritic}</h4>
                        <p>{movie.plot}</p>
                    </li>
                ))}
            </ul>
        </div>
    );
}
export const getStaticProps: GetStaticProps = async () => {
try {
const client = await clientPromise;
const db = client.db("sample_mflix");
const movies = await db
.collection("movies")
.find({})
.sort({ metacritic: -1 })
.limit(1000)
.toArray();
return {
props: { movies: JSON.parse(JSON.stringify(movies)) },
};
} catch (e) {
console.error(e);
return {
props: { movies: [] },
};
}
};
```
At a glance, this looks very similar to the `movies.tsx` file we created earlier. The only significant changes we made were changing our `limit` from `20` to `1000` and our `getServerSideProps()` method to `getStaticProps()`. If we navigate to `localhost:3000/top` in our browser, we'll see a long list of movies.
![Top 1000 movies][7]
Look at how tiny that scrollbar is. Loading this page took about 3.79 seconds on my machine, as opposed to the 981-millisecond response time for the `/movies` page. The reason it takes this long is that in development mode, the `getStaticProps()` method is called every single time (just like the `getServerSideProps()` method). But if we switch from development mode to production mode, we'll see the opposite. The `/top` page will be pre-rendered and will load almost immediately, while the `/movies` and `/api/movies` routes will run the server-side code each time.
Let's switch to production mode. In your terminal window, stop the current app from running. To run our Next.js app in production mode, we'll first need to build it. Then, we can run the `start` command, which will serve our built application. In your terminal window, run the following commands:
```
npm run build
npm run start
```
When you run the `npm run start` command, your Next.js app is served in production mode. The `getStaticProps()` method will not be run every time you hit the `/top` route as this page will now be served statically. We can even see the pre-rendered static page by navigating to the `.next/server/pages/top.html` file and seeing the 1,000 movies listed in plain HTML.
Next.js can even update this static content without requiring a rebuild with a feature called Incremental Static Regeneration, but that's outside the scope of this tutorial. Next, we'll take a look at deploying our application on Vercel.
## Deploying your Next.js app on Vercel
The final step in our tutorial today is deploying our application. We'll deploy our Next.js with MongoDB app to Vercel. I have created a GitHub repo that contains all of the code we have written today. Feel free to clone it, or create your own.
Navigate to Vercel and log in. Once you are on your dashboard, click the **Import Project** button, and then **Import Git Repository**.
Once the deployment completes, you can visit the app's root route, as well as the https://nextjs-with-mongodb-mauve.vercel.app/api/movies and https://nextjs-with-mongodb-mauve.vercel.app/top routes.
## Putting it all together
In this tutorial, we walked through the official Next.js with MongoDB example. I showed you how to connect your MongoDB database to your Next.js application and run queries in multiple ways. Then, we deployed our application using Vercel.
If you have any questions or feedback, reach out through the MongoDB Community forums and let me know what you build with Next.js and MongoDB.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt572f8888407a2777/65de06fac7f05b1b2f8674cc/vercel-homepage.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt833e93bc334716a5/65de07c677ae451d96b0ec98/server-error.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltad2329fe1bb44d8f/65de1b020f1d350dd5ca42a5/database-deployments.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt798b7c3fe361ccbd/65de1b917c85267d37234400/welcome-nextjs.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta204dc4bce246ac6/65de1ff8c7f05b0b4b86759a/json-format.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt955fc3246045aa82/65de2049330e0026817f6094/top-20-movies.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfb7866c7c87e81ef/65de2098ae62f777124be71d/top-1000-movie.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc89beb7757ffec1e/65de20e0ee3a13755fc8e7fc/importing-project-vercel.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0022681a81165d94/65de21086c65d7d78887b5ff/configuring-project.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7b00b1cfe190a7d4/65de212ac5985207f8f6b232/congratulations.png | md | {
"tags": [
"JavaScript",
"Next.js"
],
"pageDescription": "Learn how to easily integrate MongoDB into your Next.js application with the official MongoDB package.",
"contentType": "Tutorial"
} | How to Integrate MongoDB Into Your Next.js App | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/build-go-web-application-gin-mongodb-help-ai | created | # How to Build a Go Web Application with Gin, MongoDB, and with the Help of AI
Building applications with Go provides many advantages. The language is fast, simple, and lightweight while supporting powerful features like concurrency, strong typing, and a robust standard library. In this tutorial, we’ll use the popular Gin web framework along with MongoDB to build a Go-based web application.
Gin is a minimalist web framework for Golang that provides an easy way to build web servers and APIs. It is fast, lightweight, and modular, making it ideal for building microservices and APIs, but can be easily extended to build full-blown applications.
We'll use Gin to build a web application with three endpoints that connect to a MongoDB database. MongoDB is a popular document-oriented NoSQL database that stores data in JSON-like documents. MongoDB is a great fit for building modern applications.
Rather than building the entire application by hand, we’ll leverage a coding AI assistant by Sourcegraph called Cody to help us build our Go application. Cody is the only AI assistant that knows your entire codebase and can help you write, debug, test, and document your code. We’ll use many of these features as we build our application today.
## Prerequisites
Before you begin, you’ll need:
- Go installed on your development machine. Download it on their website.
- A MongoDB Atlas account. Sign up for free.
- Basic familiarity with Go and MongoDB syntax.
- Sourcegraph Cody installed in your favorite IDE. (For this tutorial, we'll be using VS Code). Get it for free.
Once you meet the prerequisites, you’re ready to build. Let’s go.
## Getting started
We'll start by creating a new Go project for our application. For this example, we’ll name the project **mflix**, so let’s go ahead and create the project directory and navigate into it:
```bash
mkdir mflix
cd mflix
```
Next, initialize a new Go module, which will manage dependencies for our project:
```bash
go mod init mflix
```
Now that we have our Go module created, let’s install the dependencies for our project. We’ll keep it really simple and just install the `gin` and `mongodb` libraries.
```bash
go get github.com/gin-gonic/gin
go get go.mongodb.org/mongo-driver/mongo
```
With our dependencies fetched and installed, we’re ready to start building our application.
## Gin application setup with Cody
To start building our application, let’s go ahead and create our entry point into the app by creating a **main.go** file. Next, while we can set up our application manually, we’ll instead leverage Cody to build out our starting point. In the Cody chat window, we can ask Cody to create a basic Go Gin application.
To follow along, load the sample dataset into your Atlas cluster by following the sample data guide. The database that we will work with is called `sample_mflix` and the collection in that database we'll use is called `movies`. This dataset contains a list of movies with various information like the plot, genre, year of release, and much more.
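For reference, here is a minimal sketch of what the `getMovies` handler registered later could look like, assuming a package-level `mongoClient` that was connected during startup (the implementation generated with Cody in the full tutorial may differ slightly):

```go
// GET /movies - Return movies from the sample_mflix.movies collection.
// Assumes a package-level mongoClient connected during startup.
func getMovies(c *gin.Context) {
    // Find all movies (an empty filter matches every document).
    cursor, err := mongoClient.Database("sample_mflix").Collection("movies").Find(context.TODO(), bson.D{})
    if err != nil {
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }
    // Map results
    var movies []bson.M
    if err = cursor.All(context.TODO(), &movies); err != nil {
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }
    // Return movies
    c.JSON(http.StatusOK, movies)
}
```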
The third endpoint, `POST /movies/aggregations`, lets users run aggregation operations on the `movies` collection. Aggregation operations process multiple documents and return computed results. So with this endpoint, the end user could pass in any valid MongoDB aggregation pipeline to run various analyses on the `movies` collection.
Note that aggregations are very powerful and in a production environment, you probably wouldn’t want to enable this level of access through HTTP request payloads. But for the sake of the tutorial, we opted to keep it in. As a homework assignment for further learning, try using Cody to limit the number of stages or the types of operations that the end user can perform on this endpoint.
```go
// POST /movies/aggregations - Run aggregations on movies
func aggregateMovies(c *gin.Context) {
// Get aggregation pipeline from request body
var pipeline interface{}
if err := c.ShouldBindJSON(&pipeline); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
// Run aggregations
cursor, err := mongoClient.Database("sample_mflix").Collection("movies").Aggregate(context.TODO(), pipeline)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
// Map results
    var result []bson.M
if err = cursor.All(context.TODO(), &result); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
// Return result
c.JSON(http.StatusOK, result)
}
```
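As a hint for that homework, one possible approach is to bind the request body to `[]bson.M` instead of `interface{}` and validate it before running the aggregation. The stage allow-list and limit below are illustrative (this sketch uses the `fmt` package):

```go
// Reject pipelines that are too long or that use stages outside an allow-list.
func validatePipeline(pipeline []bson.M) error {
    if len(pipeline) > 5 {
        return fmt.Errorf("pipeline exceeds the maximum of 5 stages")
    }
    allowedStages := map[string]bool{
        "$match": true, "$group": true, "$sort": true,
        "$project": true, "$limit": true,
    }
    for _, stage := range pipeline {
        for name := range stage {
            if !allowedStages[name] {
                return fmt.Errorf("stage %q is not allowed", name)
            }
        }
    }
    return nil
}
```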
Now that we have our endpoints implemented, let’s add them to our router so that we can call them. Here again, we can use another feature of Cody, called autocomplete, to intelligently give us statement completions so that we don’t have to write all the code ourselves.
![Cody AI Autocomplete with Go][6]
Our `main` function should now look like:
```go
func main() {
r := gin.Default()
r.GET("/", func(c *gin.Context) {
c.JSON(200, gin.H{
"message": "Hello World",
})
})
r.GET("/movies", getMovies)
r.GET("/movies/:id", getMovieByID)
r.POST("/movies/aggregations", aggregateMovies)
r.Run()
}
```
Now that we have our routes set up, let’s test our application to make sure everything is working well. Restart the server and navigate to **localhost:8080/movies**. If all goes well, you should see a large list of movies returned in JSON format in your browser window. If you do not see this, check your IDE console to see what errors are shown.
![Sample Output for the Movies Endpoint][7]
Let’s test the second endpoint. Pick any `id` from the movies collection and navigate to **localhost:8080/movies/{id}** — so for example, **localhost:8080/movies/573a1390f29313caabcd42e8**. If everything goes well, you should see that single movie listed. But if you’ve been following this tutorial, you actually won’t see the movie.
![String to Object ID Results Error][8]
The issue is that in our `getMovieByID` function implementation, we are accepting the `id` value as a `string`, while the data type in our MongoDB database is an `ObjectID`. So when we run the `FindOne` method and try to match the string value of `id` to the `ObjectID` value, we don’t get a match.
Let’s ask Cody to help us fix this by converting the string input we get to an `ObjectID`.
![Cody AI MongoDB String to ObjectID][9]
Our updated `getMovieByID` function is as follows:
```go
func getMovieByID(c *gin.Context) {
// Get movie ID from URL
idStr := c.Param("id")
// Convert id string to ObjectId
id, err := primitive.ObjectIDFromHex(idStr)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
// Find movie by ObjectId
var movie bson.M
err = mongoClient.Database("sample_mflix").Collection("movies").FindOne(context.TODO(), bson.D{{"_id", id}}).Decode(&movie)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
// Return movie
c.JSON(http.StatusOK, movie)
}
```
Depending on your IDE, you may need to add the `primitive` dependency in your import statement. The final import statement looks like:
```go
import (
"context"
"log"
"net/http"
"github.com/gin-gonic/gin"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
```
If we examine the new code that Cody provided, we can see that we are now getting the value from our `id` parameter and storing it into a variable named `idStr`. We then use the primitive package to try and convert the string to an `ObjectID`. If the `idStr` is a valid string that can be converted to an `ObjectID`, then we are good to go and we use the new `id` variable when doing our `FindOne` operation. If not, then we get an error message back.
Restart your server and now try to get a single movie result by navigating to **localhost:8080/movies/{id}**.
![Single Movie Response Endpoint][10]
For our final endpoint, we are allowing the end user to provide an aggregation pipeline that we will execute on the `mflix` collection. The user can provide any aggregation they want. To test this endpoint, we’ll make a POST request to **localhost:8080/movies/aggregations**. In the body of the request, we’ll include our aggregation pipeline.
![Postman Aggregation Endpoint in MongoDB][11]
Let’s run an aggregation to return a count of comedy movies, grouped by year, in descending order. Again, remember aggregations are very powerful and can be abused. You normally would not want to give direct access to the end user to write and run their own aggregations ad hoc within an HTTP request, unless it was for something like an internal tool. Our aggregation pipeline will look like the following:
```json
[
{"$match": {"genres": "Comedy"}},
{"$group": {
"_id": "$year",
"count": {"$sum": 1}
}},
{"$sort": {"count": -1}}
]
```
Running this aggregation, we’ll get a result set that looks like this:
```json
[
{
"_id": 2014,
"count": 287
},
{
"_id": 2013,
"count": 286
},
{
"_id": 2009,
"count": 268
},
{
"_id": 2011,
"count": 263
},
{
"_id": 2006,
"count": 260
},
...
]
```
It seems 2014 was a big year for comedy. If you are not familiar with how aggregations work, you can check out the following resources:
- Introduction to the MongoDB Aggregation Framework
- MongoDB Aggregation Pipeline Queries vs SQL Queries
- A Better MongoDB Aggregation Experience via Compass
Additionally, you can ask Cody for a specific explanation about how our `aggregateMovies` function works to help you further understand how the code is implemented using the Cody `/explain` command.
If you have any questions or comments, let’s continue the conversation in our developer forums! The entire code for our application is shown above, so there is no separate GitHub repo for this simple application. Happy coding.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt123181346af4c7e6/65148770b25810649e804636/eVB87PA.gif
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3df7c0149a4824ac/6514820f4f2fa85e60699bf8/image4.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6a72c368f716c7c2/65148238a5f15d7388fc754a/image2.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta325fcc27ed55546/651482786fefa7183fc43138/image7.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc8029e22c4381027/6514880ecf50bf3147fff13f/A7n71ej.gif
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt438f1d659d2f1043/6514887b27287d9b63bf9215/6O8d6cR.gif
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd6759b52be548308/651482b2d45f2927c800b583/image3.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfc8ea470eb6585bd/651482da69060a5af7fc2c40/image5.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte5d9fb517f22f08f/651488d82a06d70de3f4faf9/Y2HuNHe.gif
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc2467265b39e7d2b/651483038f0457d9df12aceb/image6.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt972b959f5918c282/651483244f2fa81286699c09/image1.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9c888329868b60b6/6514892c2a06d7d0a6f4fafd/g4xtxUg.gif | md | {
"tags": [
"MongoDB",
"Go"
],
"pageDescription": "Learn how to build a web application with the Gin framework for Go and MongoDB using the help of Cody AI from Sourcegraph.",
"contentType": "Tutorial"
} | How to Build a Go Web Application with Gin, MongoDB, and with the Help of AI | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/time-series-data-pymongoarrow | created | # Analyze Time-Series Data with Python and MongoDB Using PyMongoArrow and Pandas
In today’s data-centric world, time-series data has become indispensable for driving key organizational decisions, trend analyses, and forecasts. This kind of data is everywhere — from stock markets and IoT sensors to user behavior analytics. But as these datasets grow in volume and complexity, so does the challenge of efficiently storing and analyzing them. Whether you’re an IoT developer or a data analyst dealing with time-sensitive information, MongoDB offers a robust ecosystem tailored to meet both your storage and analytics needs for complex time-series data.
MongoDB has built-in support to store time-series data in a special type of collection called a time-series collection. Time-series collections are different from the normal collections. Time-series collections use an underlying columnar storage format and store data in time-order with an automatically created clustered index. The columnar storage format provides the following benefits:
* Reduced complexity: The columnar format is tailored for time-series data, making it easier to manage and query.
* Query efficiency: MongoDB automatically creates an internal clustered index on the time field which improves query performance.
* Disk usage: This storage approach uses disk space more efficiently compared to traditional collections.
* I/O optimization: The read operations require fewer input/output operations, improving the overall system performance.
* Cache usage: The design allows for better utilization of the WiredTiger cache, further enhancing query performance.
In this tutorial, we will create a time-series collection and then store some time-series data into it. We will see how you can query it in MongoDB as well as how you can read that data into pandas DataFrame, run some analytics on it, and write the modified data back to MongoDB. This tutorial is meant to be a complete deep dive into working with time-series data in MongoDB.
### Tutorial Prerequisites
We will be using the following tools/frameworks:
* MongoDB Atlas database, to store our time-series data. If you don’t already have an Atlas cluster created, go ahead and create one, set up a user, and add your connection IP address to your IP access list.
* PyMongo driver(to connect to your MongoDB Atlas database, see the installation instructions).
* Jupyter Notebook (to run the code, see the installation instructions).
>Note: Before running any code or installing any Python packages, we strongly recommend setting up a separate Python environment. This helps to isolate dependencies, manage packages, and avoid conflicts that may arise from different package versions. Creating an environment is an optional but highly recommended step.
At this point, we are assuming that you have an Atlas cluster created and ready to be used, and PyMongo and Jupyter Notebook installed. Let’s go ahead and launch Jupyter Notebook by running the following command in the terminal:
```
jupyter notebook
```
Once you have the Jupyter Notebook up and running, let’s go ahead and fetch the connection string of your MongoDB Atlas cluster and store that as an environment variable, which we will use later to connect to our database. After you have done that, let’s go ahead and connect to our Atlas cluster by running the following commands:
```
import pymongo
import os
from pymongo import MongoClient
MONGO_CONN_STRING = os.environ.get("MONGODB_CONNECTION_STRING")
client = MongoClient(MONGO_CONN_STRING)
```
## Creating a time-series collection
Next, we are going to create a new database and a collection in our cluster to store the time-series data. We will call this database “stock_data” and the collection “stocks”.
```
# Let's create a new database called "stock data"
db = client.stock_data
# Let's create a new time-series collection in the "stock data" database called "stocks"
collection = db.create_collection('stocks', timeseries={
   "timeField": "timestamp",
   "metaField": "metadata",
   "granularity": "hours"
})
```
Here, we used the db.create_collection() method to create a time-series collection called “stocks”. In the example above, “timeField”, “metaField”, and “granularity” are reserved options (for more information on what these are, visit our documentation). The “timeField” option specifies the name of the field in your collection that will contain the date in each time-series document.
The “metaField” option specifies the name of the field in your collection that will contain the metadata in each time-series document.
Finally, the “granularity” option specifies how frequently data will be ingested in your time-series collection.
Now, let’s insert some stock-related information into our collection. We are interested in storing and analyzing the stock of a specific company called “XYZ” which trades its stock on “NASDAQ”.
We are storing some price metrics of this stock at an hourly interval and for each time interval, we are storing the following information:
* **open:** the opening price at which the stock traded when the market opened
* **close:** the final price at which the stock traded when the trading period ended
* **high:** the highest price at which the stock traded during the trading period
* **low:** the lowest price at which the stock traded during the trading period
* **volume:** the total number of shares traded during the trading period
Now that we have become an expert on stock trading and terminology (sarcasm), we will now insert some documents into our time-series collection. Here we have four sample documents. The data points are captured at an interval of one hour.
```
from datetime import datetime

# Create some sample data
data = [
{
"metadata": {
"stockSymbol": "ABC",
"exchange": "NASDAQ"
},
"timestamp": datetime(2023, 9, 12, 15, 19, 48),
"open": 54.80,
"high": 59.20,
"low": 52.60,
"close": 53.50,
"volume": 18000
},
{
"metadata": {
"stockSymbol": "ABC",
"exchange": "NASDAQ"
},
"timestamp": datetime(2023, 9, 12, 16, 19, 48),
"open": 51.00,
"high": 54.30,
"low": 50.50,
"close": 51.80,
"volume": 12000
},
{
"metadata": {
"stockSymbol": "ABC",
"exchange": "NASDAQ"
},
"timestamp":datetime(2023, 9, 12, 17, 19, 48),
"open": 52.00,
"high": 53.10,
"low": 50.50,
"close": 52.90,
"volume": 10000
},
{
"metadata": {
"stockSymbol": "ABC",
"exchange": "NASDAQ"
},
"timestamp":datetime(2023, 9, 12, 18, 19, 48),
"open": 52.80,
"high": 60.20,
"low": 52.60,
"close": 55.50,
"volume": 30000
}
]
# insert the data into our collection
collection.insert_many(data)
```
Now, let’s run a find query on our collection to retrieve data at a specific timestamp. Run this query in the Jupyter Notebook after the previous script.
```
collection.find_one({'timestamp': datetime(2023, 9, 12, 15, 19, 48)})
```
*Output of the find_one() command: the document that was inserted with the 15:19:48 timestamp.*
As you can see from the output, we were able to query our time-series collection and retrieve data points at a specific timestamp.
Similarly, you can run more powerful queries on your time-series collection by using the aggregation pipeline. For the scope of this tutorial, we won’t be covering that. But, if you want to learn more about it, here is where you can go:
1. MongoDB Aggregation Learning Byte
2. MongoDB Aggregation in Python Learning Byte
3. MongoDB Aggregation Documentation
4. Practical MongoDB Aggregation Book
## Analyzing the data with a pandas DataFrame
Now, let’s see how you can move your time-series data into pandas DataFrame to run some analytics operations.
MongoDB has built a tool just for this purpose called PyMongoArrow. PyMongoArrow is a Python library that lets you move data in and out of MongoDB into other data formats such as pandas DataFrame, Numpy array, and Arrow Table.
Let’s quickly install PyMongoArrow using the pip command in your terminal. We are assuming that you already have pandas installed on your system. If not, you can use the pip command to install it too.
```
pip install pymongoarrow
```
Now, let’s import all the necessary libraries. We are going to be using the same file or notebook (Jupyter Notebook) to run the codes below.
```
import pymongoarrow
import pandas as pd
# The pymongoarrow.monkey module provides an interface to patch pymongo in place, adding pymongoarrow's functionality directly to collection instances.
from pymongoarrow.monkey import patch_all
patch_all()
# Let's use pymongoarrow's find_pandas_all() function to read the MongoDB query result set into a pandas DataFrame
df = collection.find_pandas_all({})
```
Now, we have read all of our stock data stored in the “stocks” collection into a pandas DataFrame ‘df’.
Let’s quickly print the value stored in the ‘df’ variable to verify it.
```
print(df)
print(type(df))
```
*Output: the DataFrame contents and its type.*
Hurray…congratulations! As you can see, we have successfully read our MongoDB data into pandas DataFrame.
Now, if you are a stock market trader, you would be interested in doing a lot of analysis on this data to get meaningful insights. But for this tutorial, we are just going to calculate the hourly percentage change in the closing prices of the stock. This will help us understand the daily price movements in terms of percentage gains or losses.
We will add a new column in our ‘df’ DataFrame called “daily_pct_change”.
```
df = df.sort_values('timestamp')
df['daily_pct_change'] = df['close'].pct_change() * 100
# print the dataframe to see the modified data
print(df)
```
*Output of the modified DataFrame, now including the `daily_pct_change` column.*
As you can see, we have successfully added a new column to our DataFrame.
Now, we would like to persist the modified DataFrame data into a database so that we can run more analytics on it later. So, let’s write this data back to MongoDB using PyMongoArrow’s write function.
We will just create a new collection called “my_new_collection” in our database to write the modified DataFrame back into MongoDB, ensuring data persistence.
```
from pymongoarrow.api import write
coll = db.my_new_collection
# write data from pandas into MongoDB collection called 'coll'
write(coll, df)
# Now, let's verify that the modified data has been written into our collection
print(coll.find_one({}))
```
Congratulations on successfully completing this tutorial.
## Conclusion
In this tutorial, we covered how to work with time-series data using MongoDB and Python. We learned how to store stock market data in a MongoDB time-series collection, and then how to perform simple analytics using a pandas DataFrame. We also explored how PyMongoArrow makes it easy to move data between MongoDB and pandas. Finally, we saved our analyzed data back into MongoDB. This guide provides a straightforward way to manage, analyze, and store time-series data. Great job if you’ve followed along — you’re now ready to handle time-series data in your own projects.
If you want to learn more about PyMongoArrow, check out some of these additional resources:
1. Video tutorial on PyMongoArrow
2. PyMongoArrow article
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to create and query a time-series collection in MongoDB, and analyze the data using PyMongoArrow and pandas.",
"contentType": "Tutorial"
} | Analyze Time-Series Data with Python and MongoDB Using PyMongoArrow and Pandas | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/storing-binary-data-mongodb-cpp | created | # Storing Binary Data with MongoDB and C++
# Storing Binary Data with MongoDB and C++
In modern applications, storing and retrieving binary files efficiently is a crucial requirement. MongoDB enables this with the BSON binary data type; BSON is the binary serialization format used to store documents in MongoDB. A BSON binary value is a byte array and has a subtype (like generic binary subtype, UUID, MD5, etc.) that indicates how to interpret the binary data. See BSON Types — MongoDB Manual for more information.
In this tutorial, we will write a console application in C++, using the MongoDB C++ driver to upload and download binary data.
**Note**:
- When using this method, remember that the BSON document size limit in MongoDB is 16 MB. If your binary files are larger than this limit, consider using GridFS for more efficient handling of large files. See GridFS example in C++ for reference.
- Developers often weigh the trade-offs and strategies when storing binary data in MongoDB. It's essential to ensure that you have also considered different strategies to optimize your data management approach.
## Prerequisites
1. MongoDB Atlas account with a cluster created.
2. IDE (like Microsoft Visual Studio or Microsoft Visual Studio Code) setup with the MongoDB C and C++ Driver installed. Follow the instructions in Getting Started with MongoDB and C++ to install MongoDB C/C++ drivers and set up the dev environment in Visual Studio. Installation instructions for other platforms are available.
3. Compiler with C++17 support (for using `std::filesystem` operations).
4. Your machine’s IP address whitelisted. Note: You can add *0.0.0.0/0* as the IP address, which should allow access from any machine. This setting is not recommended for production use.
## Building the application
> Source code available **here**.
As part of the different BSON types, the C++ driver provides the b_binary struct that can be used for storing binary data value in a BSON document. See the API reference.
We start with defining the structure of our BSON document. We have defined three keys: `name`, `path`, and `data`. These contain the name of the file being uploaded, its full path from the disk, and the actual file data respectively. See a sample document below:
In the `main()` function below, replace the empty string assigned to `mongoURIStr` with your Atlas connection string (URI), and set the different path and filenames to the ones on your disk.
```cpp
int main()
{
try
{
auto mongoURIStr = "";
static const mongocxx::uri mongoURI = mongocxx::uri{ mongoURIStr };
// Create an instance.
mongocxx::instance inst{};
mongocxx::options::client client_options;
auto api = mongocxx::options::server_api{ mongocxx::options::server_api::version::k_version_1 };
client_options.server_api_opts(api);
mongocxx::client conn{ mongoURI, client_options};
const std::string dbName = "fileStorage";
const std::string collName = "files";
auto fileStorageDB = conn.database(dbName);
auto filesCollection = fileStorageDB.collection(collName);
// Drop previous data.
filesCollection.drop();
// Upload all files in the upload folder.
const std::string uploadFolder = "/Users/bishtr/repos/fileStorage/upload/";
for (const auto & filePath : std::filesystem::directory_iterator(uploadFolder))
{
if(std::filesystem::is_directory(filePath))
continue;
if(!upload(filePath.path().string(), filesCollection))
{
std::cout << "Upload failed for: " << filePath.path().string() << std::endl;
}
}
// Download files to the download folder.
const std::string downloadFolder = "/Users/bishtr/repos/fileStorage/download/";
// Search with specific filenames and download it.
const std::string fileName1 = "image-15.jpg", fileName2 = "Hi Seed Shaker 120bpm On Accents.wav";
for ( auto fileName : {fileName1, fileName2} )
{
if (!download(fileName, downloadFolder, filesCollection))
{
std::cout << "Download failed for: " << fileName << std::endl;
}
}
// Download all files in the collection.
auto cursor = filesCollection.find({});
for (auto&& doc : cursor)
{
    auto fileName = std::string(doc[FILE_NAME].get_string().value);
if (!download(fileName, downloadFolder, filesCollection))
{
std::cout << "Download failed for: " << fileName << std::endl;
}
}
}
catch(const std::exception& e)
{
std::cout << "Exception encountered: " << e.what() << std::endl;
}
return 0;
}
```
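The `upload` and `download` helpers called from `main` are where the `b_binary` type comes into play. A minimal sketch of what they could look like is below; the key-name constants and error handling are illustrative, and the complete implementation is available in the linked source code:

```cpp
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

#include <bsoncxx/builder/basic/document.hpp>
#include <bsoncxx/types.hpp>
#include <mongocxx/collection.hpp>

// Key names used by the documents; these constants are assumptions for this sketch.
const std::string FILE_NAME = "name", FILE_PATH = "path", FILE_DATA = "data";

// Read a file from disk and insert it as a document containing a b_binary value.
bool upload(const std::string& filePath, mongocxx::collection& collection)
{
    std::ifstream file(filePath, std::ios::binary);
    if (!file)
        return false;

    // Load the whole file into memory (remember the 16 MB BSON document limit).
    std::vector<std::uint8_t> data{std::istreambuf_iterator<char>(file),
                                   std::istreambuf_iterator<char>()};

    using bsoncxx::builder::basic::kvp;
    using bsoncxx::builder::basic::make_document;

    auto doc = make_document(
        kvp(FILE_NAME, std::filesystem::path(filePath).filename().string()),
        kvp(FILE_PATH, filePath),
        kvp(FILE_DATA, bsoncxx::types::b_binary{
                           bsoncxx::binary_sub_type::k_binary,
                           static_cast<std::uint32_t>(data.size()),
                           data.data()}));

    return static_cast<bool>(collection.insert_one(doc.view()));
}

// Find a document by file name and write its binary payload back to disk.
bool download(const std::string& fileName, const std::string& downloadFolder,
              mongocxx::collection& collection)
{
    using bsoncxx::builder::basic::kvp;
    using bsoncxx::builder::basic::make_document;

    auto result = collection.find_one(make_document(kvp(FILE_NAME, fileName)));
    if (!result)
        return false;

    auto binary = result->view()[FILE_DATA].get_binary();

    std::ofstream out(downloadFolder + fileName, std::ios::binary);
    out.write(reinterpret_cast<const char*>(binary.bytes),
              static_cast<std::streamsize>(binary.size));
    return static_cast<bool>(out);
}
```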
## Application in action
Before executing this application, add some files (like images or audio files) under the `uploadFolder` directory.
![Files to be uploaded from local disk to MongoDB.][2]
Execute the application and you’ll observe output like this, signifying that the files are successfully uploaded and downloaded.
![Application output showing successful uploads and downloads.][3]
You can see the collection in [Atlas or MongoDB Compass reflecting the files uploaded via the application.
Binary data support in MongoDB, combined with the C++ driver, offers a powerful solution for handling file storage in C++ applications. We can't wait to see what you build next! Share your creation with the community and let us know how it turned out!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt24f4df95c9cee69a/6504c0fd9bcd1b134c1d0e4b/image1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7c530c1eb76f566c/6504c12df4133500cb89250f/image3.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt768d2c8c6308391e/6504c153b863d9672da79f4c/image5.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8c199ec2272f2c4f/6504c169a8cf8b4b4a3e1787/image2.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt78bb48b832d91de2/6504c17fec9337ab51ec845e/image4.png | md | {
"tags": [
"Atlas",
"C++"
],
"pageDescription": "Learn how to store binary data to MongoDB using the C++ driver.",
"contentType": "Tutorial"
} | Storing Binary Data with MongoDB and C++ | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/realm-web-sdk | created |
| md | {
"tags": [
"JavaScript",
"Realm"
],
"pageDescription": "Send MongoDB Atlas queries directly from the web browser with the Realm Web SDK.",
"contentType": "Quickstart"
} | Realm Web SDK Tutorial | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/bson-data-types-date | created | # Quick Start: BSON Data Types - Date
Dates and times in programming can be a challenge. Which Time Zone is the event happening in? What date format is being used? Is it `MM/DD/YYYY` or `DD/MM/YYYY`? Settling on a standard is important for data storage and then again when displaying the date and time. The recommended way to store dates in MongoDB is to use the BSON Date data type.
The BSON Specification refers to the `Date` type as the *UTC datetime*, stored as a signed 64-bit integer. It represents the number of milliseconds since the Unix epoch, which was 00:00:00 UTC on 1 January 1970. This provides a lot of flexibility in representing past and future dates. With a 64-bit integer in use, we are able to represent dates *roughly* 290 million years before and after the epoch. As a signed 64-bit integer, we are able to represent dates *prior* to 1 Jan 1970 with a negative number, and positive numbers represent dates *after* 1 Jan 1970.
## Why & Where to Use
You'll want to use the `Date` data type whenever you need to store date and/or time values in MongoDB. You may have seen a `timestamp` data type as well and thought "Oh, that's what I need." However, the `timestamp` data type should be left for **internal** usage in MongoDB. The `Date` type is the data type we'll want to use for application development.
## How to Use
There are some benefits to using the `Date` data type in that it comes with some handy features and methods. Need to assign a `Date` type to a variable? We have you covered there:
``` javascript
var newDate = new Date();
```
What did that create exactly?
``` none
> newDate;
ISODate("2020-05-11T20:14:14.796Z")
```
Very nice, we have a date and time wrapped as an ISODate. If we need that printed in a `string` format, we can use the `toString()` method.
``` none
> newDate.toString();
Mon May 11 2020 13:14:14 GMT-0700 (Pacific Daylight Time)
```
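To put the type to work, here is a small, hypothetical `events` collection showing how a `Date` value is stored and then queried by range:

``` javascript
// Insert a document with a Date field
db.events.insertOne({
    title: "Product Launch",
    createdAt: new Date("2020-05-11T20:14:14.796Z")
});

// Find every event that occurred during May 2020
db.events.find({
    createdAt: {
        $gte: new Date("2020-05-01T00:00:00Z"),
        $lt: new Date("2020-06-01T00:00:00Z")
    }
});
```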
## Wrap Up
>Get started exploring BSON types, like Date, with MongoDB Atlas today!
The `Date` type is the recommended data type to use when you want to store date and time information in MongoDB. It provides the flexibility to store date and time values in a consistent format that can easily be stored and retrieved by your application. Give the BSON `Date` data type a try for your applications.
"tags": [
"MongoDB"
],
"pageDescription": "Working with dates and times can be a challenge. The Date BSON data type is an unsigned 64-bit integer with a UTC (Universal Time Coordinates) time zone.",
"contentType": "Quickstart"
} | Quick Start: BSON Data Types - Date | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-vector-search-openai-filtering | created | # Leveraging OpenAI and MongoDB Atlas for Improved Search Functionality
Search functionality is a critical component of many modern web applications. Providing users with relevant results based on their search queries and additional filters dramatically improves their experience and satisfaction with your app.
In this article, we'll go over an implementation of search functionality using OpenAI's GPT-4 model and MongoDB's
Atlas Vector search. We've created a request handler function that not only retrieves relevant data based on a user's search query but also applies additional filters provided by the user.
Enriching the existing documents data with embeddings is covered in our main Vector Search Tutorial.
## Search in the Airbnb app context ##
Consider a real-world scenario where we have an Airbnb-like app. Users can perform a free text search for listings and also filter results based on certain criteria like the number of rooms, beds, or the capacity of people the property can accommodate.
To implement this functionality, we use MongoDB's full-text search capabilities for the primary search, and OpenAI's GPT-4 model to create embeddings that contain the semantics of the data and use Vector Search to find relevant results.
The code to the application can be found in the following GitHub repository.
## The request handler
For the back end, we have used Atlas app services with a simple HTTPS “GET” endpoint.
Our function is designed to act as a request handler for incoming search requests.
When a search request arrives, it first extracts the search terms and filters from the query parameters. If no search term is provided, it returns a random sample of 30 listings from the database.
If a search term is present, the function makes a POST request to OpenAI's API, sending the search term and asking for an embedded representation of it using a specific model. This request returns a list of “embeddings,” or vector representations of the search term, which is then used in the next step.
```javascript
// This function is the endpoint's request handler.
// It interacts with MongoDB Atlas and OpenAI API for embedding and search functionality.
exports = async function({ query }, response) {
// Query params, e.g. '?search=test&beds=2' => {search: "test", beds: "2"}
const { search, beds, rooms, people, maxPrice, freeTextFilter } = query;
// MongoDB Atlas configuration.
const mongodb = context.services.get('mongodb-atlas');
const db = mongodb.db('sample_airbnb'); // Replace with your database name.
const listingsAndReviews = db.collection('listingsAndReviews'); // Replace with your collection name.
// If there's no search query, return a sample of 30 random documents from the collection.
if (!search || search === "") {
    return await listingsAndReviews.aggregate([{$sample: {size: 30}}]).toArray();
}
// Fetch the OpenAI key stored in the context values.
const openai_key = context.values.get("openAIKey");
// URL to make the request to the OpenAI API.
const url = 'https://api.openai.com/v1/embeddings';
// Call OpenAI API to get the embeddings.
let resp = await context.http.post({
url: url,
headers: {
'Authorization': [`Bearer ${openai_key}`],
'Content-Type': ['application/json']
},
body: JSON.stringify({
input: search,
model: "text-embedding-ada-002"
})
});
// Parse the JSON response
let responseData = EJSON.parse(resp.body.text());
// Check the response status.
if(resp.statusCode === 200) {
console.log("Successfully received embedding.");
// Fetch a random sample document.
const embedding = responseData.data[0].embedding;
console.log(JSON.stringify(embedding))
let searchQ = {
"index": "default",
"queryVector": embedding,
"path": "doc_embedding",
"k": 100,
"numCandidates": 1000
}
// If there's any filter in the query parameters, add it to the search query.
if (freeTextFilter){
// Turn free text search using GPT-4 into filter
const sampleDocs = await listingsAndReviews.aggregate([
{ $sample: { size: 1 }},
{ $project: {
_id: 0,
bedrooms: 1,
beds: 1,
room_type: 1,
property_type: 1,
price: 1,
accommodates: 1,
bathrooms: 1,
review_scores: 1
}}
]).toArray();
const filter = await context.functions.execute("getSearchAIFilter",sampleDocs[0],freeTextFilter );
searchQ.filter = filter;
}
else if(beds || rooms) {
let filter = { "$and" : []}
if (beds) {
filter.$and.push({"beds" : {"$gte" : parseInt(beds) }})
}
if (rooms)
{
filter.$and.push({"bedrooms" : {"$gte" : parseInt(rooms) }})
}
searchQ.filter = filter;
}
// Perform the search with the defined query and limit the result to 50 documents.
let docs = await listingsAndReviews.aggregate([
{ "$vectorSearch": searchQ },
{ $limit : 50 }
]).toArray();
return docs;
} else {
console.error("Failed to get embeddings");
return [];
}
};
```
To cover the filtering part of the query, we are using embedding and building a filter query to cover the basic filters that a user might request — in the presented example, two rooms and two beds in each.
```js
else if(beds || rooms) {
let filter = { "$and" : []}
if (beds) {
filter.$and.push({"beds" : {"$gte" : parseInt(beds) }})
}
if (rooms)
{
filter.$and.push({"bedrooms" : {"$gte" : parseInt(rooms) }})
}
searchQ.filter = filter;
}
```
## Calling OpenAI API
![AI Filter
Let's consider a more advanced use case that can enhance our filtering experience. In this example, we are allowing a user to perform a free-form filtering that can provide sophisticated sentences, such as, “More than 1 bed and rating above 91.”
We call the OpenAI API to interpret the user's free text filter and translate it into something we can use in a MongoDB query. We send the API a description of what we need, based on the document structure we're working with and the user's free text input. This text is fed into the GPT-4 model, which returns a JSON object with 'range' or 'equals' operators that can be used in a MongoDB search query.
### getSearchAIFilter function
```javascript
// This function is the endpoint's request handler.
// It interacts with OpenAI API for generating filter JSON based on the input.
exports = async function(sampleDoc, search) {
// URL to make the request to the OpenAI API.
const url = 'https://api.openai.com/v1/chat/completions';
// Fetch the OpenAI key stored in the context values.
const openai_key = context.values.get("openAIKey");
// Convert the sample document to string format.
let syntDocs = JSON.stringify(sampleDoc);
console.log(syntDocs);
// Prepare the request string for the OpenAI API.
const reqString = `Convert programmatic command to Atlas $search filter only for range and equals JS:\n\nExample: Based on document structure {"siblings" : '...', "dob" : "..."} give me the filter of all people born 2015 and siblings are 3 \nOutput: {"filter":{ "compound" : { "must" : [ {"range": {"gte": 2015, "lte" : 2015,"path": "dob"} },{"equals" : {"value" : 3 , path :"siblings"}}]}}} \n\n provide the needed filter to accomodate ${search}, pick a path from structure ${syntDocs}. Need just the json object with a range or equal operators. No explanation. No 'Output:' string in response. Valid JSON.`;
console.log(`reqString: ${reqString}`);
// Call OpenAI API to get the response.
let resp = await context.http.post({
url: url,
headers: {
'Authorization': `Bearer ${openai_key}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
model: "gpt-4",
temperature: 0.1,
messages: [
{
"role": "system",
"content": "Output filter json generator follow only provided rules"
},
{
"role": "user",
"content": reqString
}
]
})
});
// Parse the JSON response
let responseData = JSON.parse(resp.body.text());
// Check the response status.
if(resp.statusCode === 200) {
console.log("Successfully received code.");
console.log(JSON.stringify(responseData));
const code = responseData.choices[0].message.content;
let parsedCommand = EJSON.parse(code);
console.log('parsed' + JSON.stringify(parsedCommand));
// If the filter exists and it's not an empty object, return it.
if (parsedCommand.filter && Object.keys(parsedCommand.filter).length !== 0) {
return parsedCommand.filter;
}
// If there's no valid filter, return an empty object.
return {};
} else {
console.error("Failed to generate filter JSON.");
console.log(JSON.stringify(responseData));
return {};
}
};
```
## MongoDB search and filters
The function then constructs a MongoDB search query using the embedded representation of the search term and any additional filters provided by the user. This query is sent to MongoDB, and the function returns the results as a response —something that looks like the following for a search of “New York high floor” and “More than 1 bed and rating above 91.”
```javascript
{$vectorSearch:{
"index": "default",
"queryVector": embedding,
"path": "doc_embedding",
"filter" : { "$and" : [{"beds": {"$gte" : 1}} , "score": {"$gte" : 91}}]},
"k": 100,
"numCandidates": 1000
}
}
```
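As a quick sanity check, the handler can be called directly over HTTPS; the base URL and route below are placeholders for whatever URL Atlas App Services assigns to your HTTPS endpoint:

```bash
curl -G "https://<your-app-services-endpoint>/search" \
  --data-urlencode "search=New York high floor" \
  --data-urlencode "freeTextFilter=More than 1 bed and rating above 91"
```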
## Conclusion
This approach allows us to leverage the power of OpenAI's GPT-4 model to interpret free text input and MongoDB's full-text search capability to return highly relevant search results. The use of natural language processing and AI brings a level of flexibility and intuitiveness to the search function that greatly enhances the user experience.
Remember, however, this is an advanced implementation. Ensure you have a good understanding of how MongoDB and OpenAI operate before attempting to implement a similar solution. Always take care to handle sensitive data appropriately and ensure your AI use aligns with OpenAI's use case policy. | md | {
"tags": [
"Atlas",
"JavaScript",
"Node.js",
"AI"
],
"pageDescription": "This article delves into the integration of search functionality in web apps using OpenAI's GPT-4 model and MongoDB's Atlas Vector search. By harnessing the capabilities of AI and database management, we illustrate how to create a request handler that fetches data based on user queries and applies additional filters, enhancing user experience.",
"contentType": "Tutorial"
} | Leveraging OpenAI and MongoDB Atlas for Improved Search Functionality | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/document-enrichment-and-schema-updates | created | # Document Enrichment and Schema Updates
So your business needs have changed and there’s additional data that needs to be stored within an existing dataset. Fear not! With MongoDB, this is no sweat.
> In this article, I’ll show you how to quickly add and populate additional fields into an existing database collection.
## The Scenario
Let’s say you have a “Netflix” type application and you want to allow users to see which movies they have watched. We’ll use the sample\_mflix database from the sample datasets available in a MongoDB Atlas cluster.
Here is the existing schema for the user collection in the sample\_mflix database:
``` js
{
_id: ObjectId(),
name: <name>,
email: <email>,
password: <password>
}
```
## The Solution
There are a few ways we could go about this. Since MongoDB has a flexible data model, we can just add our new data into existing documents.
In this example, we are going to assume that we know the user ID. We’ll use `updateOne` and the `$addToSet` operator to add our new data.
``` js
const { db } = await connectToDatabase();
const collection = await db.collection("users").updateOne(
  { _id: ObjectID("59b99db9cfa9a34dcd7885bf") },
  {
    $addToSet: {
      moviesWatched: {
        <movie>,
        <title>,
        <poster>
      }
    }
  }
);
```
The `$addToSet` operator adds a value to an array avoiding duplicates. If the field referenced is not present in the document, `$addToSet` will create the array field and enter the specified value. If the value is already present in the field, `$addToSet` will do nothing.
Using `$addToSet` will prevent us from duplicating movies when they are watched multiple times.
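After the update, the user document would look something like this (the movie fields are placeholders):

``` js
{
  _id: ObjectId("59b99db9cfa9a34dcd7885bf"),
  name: <name>,
  email: <email>,
  password: <password>,
  moviesWatched: [
    {
      <movie>,
      <title>,
      <poster>
    }
  ]
}
```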
## The Result
Now, when a user goes to their profile, they will see their watched movies.
But what if the user has not watched any movies? The user will simply not have that field in their document.
I’m using Next.js for this application. I simply need to check to see if a user has watched any movies and display the appropriate information accordingly.
``` js
{ moviesWatched
? "Movies I've Watched"
: "I have not watched any movies yet :("
}
```
## Conclusion
Because of MongoDB’s flexible data model, we can have multiple schemas in one collection. This allows you to easily update data and fields in existing schemas.
If you would like to learn more about schema validation, take a look at the Schema Validation documentation.
I’d love to hear your feedback or questions. Let’s chat in the MongoDB Community. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "So your business needs have changed and there’s additional data that needs to be stored within an existing dataset. Fear not! With MongoDB, this is no sweat. In this article, I’ll show you how to quickly add and populate additional fields into an existing database collection.",
"contentType": "Tutorial"
} | Document Enrichment and Schema Updates | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/serverless-instances-billing-optimize-bill-indexing | created | # How to Optimize Your Serverless Instance Bill with Indexing
Serverless solutions are quickly gaining traction among developers and organizations alike as a means to move fast, minimize overhead, and optimize costs. But shifting from a traditional pre-provisioned and predictable monthly bill to a consumption or usage-based model can sometimes result in confusion around how that bill is generated. In this article, we’ll take you through the basics of our serverless billing model and give you tips on how to best optimize your serverless database for cost efficiency.
## What are serverless instances?
MongoDB Atlas serverless instances, recently announced as generally available, provide an on-demand serverless endpoint for your application with no sizing required. You simply choose a cloud provider and region to get started, and as your app grows, your serverless database will seamlessly scale based on demand and only charge for the resources you use.
Unlike our traditional clusters, serverless instances offer a fundamentally different pricing model that is primarily metered on reads, writes, and storage with automatic tiered discounts on reads as your usage scales. So, you can start small without any upfront commitments and never worry about paying for unused resources if your workload is idle.
### Serverless Database Pricing
Pay only for the operations you run.
| Item | Description | Pricing |
| ---- | ----------- | ------- |
| Read Processing Unit (RPU) | Number of read operations and documents scanned per operation (documents read in 4KB chunks and indexes read in 256-byte chunks) | $0.10/million for the first 50 million per day; next 500 million: $0.05/million; reads thereafter: $0.01/million |
| Write Processing Unit (WPU) | Number of write operations to the database (documents and indexes written in 1KB chunks) | $1.00/million |
| Storage | Data and indexes stored on the database | $0.25/GB-month |
| Standard Backup | Download and restore of backup snapshots (two free daily snapshots included per serverless instance) | $2.50/hour to download or restore the data |
| Serverless Continuous Backup | 35-day backup retention for daily snapshots | $0.20/GB-month |
| Data Transfer | Inbound/outbound data to/from the database | $0.015 to $0.10/GB, depending on traffic source and destination |
At first glance, read processing units (RPU) and write processing units (WPU) might be new units to you, so let’s quickly dig into what they mean. We use RPUs and WPUs to quantify the amount of work the database has to do to service a query, or to perform a write. To put it simply, a read processing unit (RPU) refers to the read operations to the database and is calculated based on the number of operations run and documents scanned per operation. Similarly, a write processing unit (WPU) is a write operation to the database and is calculated based on the number of bytes written to each document or index. For further explanation of cost units, please refer to our documentation.
Now that you have a basic understanding of the pricing model, let’s go through an example to provide more context and tips on how to ensure your operations are best optimized to minimize costs.
For this example, we’ll be using the sample dataset in Atlas. To use sample data, simply go to your serverless instance deployment and select “Load Sample Dataset” from the dropdown as seen below.
This will load a few collections, such as weather data and Airbnb listing data. Note that loading the sample dataset will consume approximately one million WPUs (less than $1 in most supported regions), and you will be billed accordingly.
Now, let’s take a look at what happens when we interact with our data and do some search queries.
## Scenario 1: Query on unindexed fields
For this exercise, I chose the sample\_weatherdata collection. While looking at the data in the Atlas Collections view, it’s clear that the weather data collection has information from various places and that most locations have a call letter code as a convenient way to identify where this weather reading data was taken.
For this example, let’s simulate what would happen if a user comes to your weather app and does a lookup by a geographic location. In this weather data collection, geographic locations can be identified by callLetters, which are specific codes for various weather stations across the world. I arbitrarily picked station code “ESVJ,” which is a weather buoy in the Atlantic Ocean.
Here is what we see when we run a find on `{callLetters: "ESVJ"}` in the Atlas Data Explorer:
We can see this query returns three records. Now, let’s take a look at how many RPUs this query would cost me. We should remember that RPUs are calculated based on the number of read operations and the number of documents scanned per operation.
To execute the previous query, a full collection scan is required, which results in approximately 1,000 RPUs.
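If you want to verify this yourself, the database's explain output reports how many documents a query examined. Below is a minimal pymongo sketch; the connection string is a placeholder, and the exact counts depend on the version of the sample dataset you loaded.

```
import os
from pymongo import MongoClient

# Placeholder: set MONGODB_ATLAS_URI to your serverless instance's connection string
client = MongoClient(os.environ["MONGODB_ATLAS_URI"])
db = client["sample_weatherdata"]

# Explain the same lookup with execution statistics
plan = db.command(
    "explain",
    {"find": "data", "filter": {"callLetters": "ESVJ"}},
    verbosity="executionStats",
)
stats = plan["executionStats"]
print(stats["totalDocsExamined"])  # every document in the collection (a collection scan)
print(stats["totalKeysExamined"])  # 0, since no index is used yet
print(stats["nReturned"])          # the 3 matching documents
```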
I took this query and ran it 3,000 times through a shell script, simulating around 3,000 users coming to the app to check the weather in a day. Here is the code behind the script:
```
# weatherRPUTest.sh: run the lookup 3,000 times to simulate a day of traffic
for ((i=1; i<=3000; i++)); do
  echo "testing $i"
  mongosh "mongodb+srv://vishalserverless1.qdxrf.mongodb.net/sample_weatherdata" --apiVersion 1 --username vishal --password ******** < mongoTest.js
done
```
```
// mongoTest.js: the query executed on each iteration
db.data.find({ callLetters: "ESVJ" })
```
As expected, 3,000 iterations add up to 1,000 * 3,000 = 3,000,000 RPUs (3MM RPUs), which costs $0.30 at the first pricing tier.
Based on this, the cost per user for this application comes out to roughly $0.0001 (calculated as: 3,000,000 RPUs / 3,000 users = 1,000 RPUs, or $0.0001, per user).
A ten-thousandth of a dollar per lookup may sound negligible, but it adds up quickly: if this weather app were to scale to a level of activity similar to Accuweather, which sees about 9.5B weather requests in a day, you'd be paying close to $1 million in database costs per day. By leaving the query unindexed, you'd likely be faced with an unexpectedly high bill as your usage scales, a common trap that many new serverless users fall into.
To avoid this problem, we recommend that you follow MongoDB best practices and index your data to optimize your queries for both performance and cost. Indexes are special data structures that store a small portion of the collection's data set in an easy-to-traverse form.
Without indexes, MongoDB must perform a collection scan (that is, scan every document in a collection) to select the documents that match the query, which is exactly what you saw in the example above. By adding an appropriate index, you limit the number of documents MongoDB must inspect, significantly reducing the operations you are charged for.
Let’s look at how indexing can help you reduce your RPUs significantly.
## Scenario 2: Querying on indexed fields
First, let’s create a simple index on the field ‘callLetters’:
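I created the index through the Atlas UI, but the equivalent can be done programmatically. Here is a minimal pymongo sketch, again with a placeholder connection string:

```
import os
from pymongo import MongoClient

client = MongoClient(os.environ["MONGODB_ATLAS_URI"])
db = client["sample_weatherdata"]

# Single-field ascending index on callLetters
index_name = db.data.create_index("callLetters")
print(index_name)  # callLetters_1
```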
This operation will typically finish within 2-3 seconds. For reference, you can see the size of the new index on the Indexes tab in Atlas, or retrieve it programmatically, as sketched below:
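This sketch uses the collStats command, which reports per-index sizes in bytes (assuming the command is available on your instance; the connection string is a placeholder):

```
import os
from pymongo import MongoClient

client = MongoClient(os.environ["MONGODB_ATLAS_URI"])
db = client["sample_weatherdata"]

stats = db.command("collStats", "data")
print(stats["indexSizes"])  # size in bytes of each index, including callLetters_1
```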
Due to the data structure of the index, the exact number of index reads is hard to compute. However, we can run the same script again for 3,000 iterations and compare the number of RPUs.
The 3,000 queries on the indexed field now result in approximately 6,500 RPUs in contrast to the 3 million RPUs from the un-indexed query, which is a **99.8% reduction in RPUs**.
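As a sanity check, re-running the explain command from scenario 1 with the index in place should now show an index scan that touches only the matching index keys and documents, which is why the RPU count drops so sharply. A sketch under the same assumptions as before:

```
import os
from pymongo import MongoClient

client = MongoClient(os.environ["MONGODB_ATLAS_URI"])
db = client["sample_weatherdata"]

plan = db.command(
    "explain",
    {"find": "data", "filter": {"callLetters": "ESVJ"}},
    verbosity="executionStats",
)
stats = plan["executionStats"]
print(stats["totalKeysExamined"])  # 3: only the matching index entries are read
print(stats["totalDocsExamined"])  # 3: only the matching documents are fetched
```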
We can see that by simply adding the above index, we reduced the cost per user to roughly 2.2 RPUs (calculated as: 6,500 / 3,000 ≈ 2.2 RPUs, or about $0.00000022), a huge saving compared to the previous 1,000 RPUs (about $0.0001) per user.
Therefore, indexing not only improves the performance and scalability of your queries, it can also significantly reduce the RPUs you consume, and with them your costs. There are rare scenarios where this does not hold (for example, when the index itself is very large relative to the documents being read), but in most cases you should see a significant reduction in cost and an improvement in performance.
## Take action to optimize your costs today
As you can see, adopting a usage-based pricing model can sometimes require you to be extra diligent in ensuring your data structure and queries are optimized. But when done correctly, the time spent to do those optimizations often pays off in more ways than one.
If you’re unsure where to start, the built-in monitoring tools in the Atlas UI can help. The Performance Advisor automatically monitors your database for slow-running queries and suggests new indexes to improve query performance. If you’re looking to investigate slow-running queries further, you can use the Query Profiler to view a breakdown of all slow-running queries that occurred in the last 24 hours. If you prefer a terminal experience, you can also analyze query performance in the MongoDB Shell or in MongoDB Compass.
If you need further assistance, you can always contact our support team via chat or the MongoDB support portal. | md | {
"tags": [
"Atlas",
"Serverless"
],
"pageDescription": "Shifting from a pre-provisioned to a serverless database can be challenging. Learn how to optimize your database and save money with these best practices.",
"contentType": "Article"
} | How to Optimize Your Serverless Instance Bill with Indexing | 2024-05-20T17:32:23.500Z |
## Overview
This dataset consists of ~600 articles from the MongoDB Developer Center.
## Dataset Structure
The dataset consists of the following fields:
- sourceName: The source of the article. This value is `devcenter` for the entire dataset.
- url: Link to the article
- action: Action taken on the article. This value is `created` for the entire dataset.
- body: Content of the article in Markdown format
- format: Format of the content. This value is `md` for all articles.
- metadata: Metadata such as tags, content type, etc. associated with the article
- title: Title of the article
- updated: The last updated date of the article
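If you want to peek at the records and the fields above before ingesting anything, a quick sketch with the `datasets` library (install it with `pip install datasets`) might look like this:

```
from datasets import load_dataset

# Load the train split and inspect the first record
dataset = load_dataset("MongoDB/devcenter-articles", split="train")
print(dataset.column_names)  # sourceName, url, action, body, format, metadata, title, updated
print(dataset[0]["title"])   # title of the first article
```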
## Usage
This dataset can be useful for prototyping RAG applications. This is a real sample of data we have used to build the MongoDB Documentation Chatbot.
### Ingest Data
To experiment with this dataset using MongoDB Atlas, first create a MongoDB Atlas account.
You can then use the following script to load this dataset into your MongoDB Atlas cluster:
```
import os

from bson import json_util
from datasets import load_dataset
from pymongo import MongoClient

# Connection string for your Atlas cluster, read from the environment
uri = os.environ.get('MONGODB_ATLAS_URI')
client = MongoClient(uri)

db_name = 'your_database_name'  # Change this to your actual database name
collection_name = 'devcenter_articles'
collection = client[db_name][collection_name]

# Download the dataset from the Hugging Face Hub
dataset = load_dataset("MongoDB/devcenter-articles")

# Insert documents in batches of 1,000
insert_data = []
for item in dataset['train']:
    # Round-trip through Extended JSON so the records are BSON-friendly
    doc = json_util.loads(json_util.dumps(item))
    insert_data.append(doc)
    if len(insert_data) == 1000:
        collection.insert_many(insert_data)
        print("1000 records ingested")
        insert_data = []

# Insert any remaining documents
if len(insert_data) > 0:
    collection.insert_many(insert_data)
    insert_data = []

print("Data ingested successfully!")
```