Merge branch 'main' of https://github.com/devjas1/codemind
(UPDATE): [docs(README.md)]: fix formatting and typos in FAQ section; improve readability
README.md (changed)
@@ -2,13 +2,15 @@

**CodeMind** is an AI-powered development assistant that runs entirely on your local machine for intelligent document analysis and commit message generation. It leverages modern machine learning models to help you understand your codebase through semantic search and to generate meaningful commit messages using locally hosted language models, ensuring complete privacy and no cloud dependencies.

- **Efficient Knowledge Retrieval**: Makes searching and querying documentation more powerful by using semantic embeddings rather than keyword search.
- **Smarter Git Workflow**: Automates the creation of meaningful commit messages by analyzing git diffs and using an LLM to summarize changes (see the sketch just after this list).
- **AI-Powered Documentation**: Enables you to ask questions about your project, using your own docs/context rather than just generic answers.
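
The "Smarter Git Workflow" item above can be pictured with the minimal sketch below: read the staged diff and ask a locally hosted GGUF model to summarize it. This is an illustration, not CodeMind's actual implementation; the `llama-cpp-python` usage, model path, and prompt are assumptions.

```python
# Illustrative sketch only -- not CodeMind's implementation.
# Assumes llama-cpp-python is installed and a GGUF model exists at the
# (hypothetical) path below.
import subprocess

from llama_cpp import Llama

# Read the staged changes, as a commit-message generator would.
diff = subprocess.run(
    ["git", "diff", "--staged"], capture_output=True, text=True, check=True
).stdout

llm = Llama(model_path="models/phi-2.Q4_0.gguf", n_ctx=4096, verbose=False)
result = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "Write a concise, imperative-mood commit message for this diff.",
        },
        # Truncate very large diffs so they fit the context window.
        {"role": "user", "content": diff[:8000]},
    ],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"].strip())
```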

**Check it out on Hugging Face Spaces:**
[](https://huggingface.co/spaces/dev-jas/CodeMind)

---
## Features

- **Document Embedding** (using [EmbeddingGemma-300m](https://huggingface.co/google/embeddinggemma-300m))
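
As a rough sketch of what document embedding with this model looks like (not CodeMind's actual embedder; the sample documents, query, and plain `encode()` calls are illustrative, and the model is gated on Hugging Face, so it may require logging in with an access token):

```python
# Illustrative sketch only: rank a few in-memory "documents" against a query
# by cosine similarity -- the basic idea behind semantic (non-keyword) search.
from sentence_transformers import SentenceTransformer, util

# Any SentenceTransformers-compatible model can be substituted here.
model = SentenceTransformer("google/embeddinggemma-300m")

docs = [
    "The embedder script builds a vector index from your documentation.",
    "Commit messages are generated by summarizing staged git diffs with a local LLM.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode("How do I index my docs?", normalize_embeddings=True)

scores = util.cos_sim(query_vec, doc_vecs)[0]
best = int(scores.argmax())
print(f"Best match ({float(scores[best]):.2f}): {docs[best]}")
```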

@@ -297,24 +299,27 @@ codemind/

## FAQ

> **Q:** **Can I use different models?**
>
> **A:** Yes, you can use any GGUF-compatible model for generation and any SentenceTransformers-compatible model for embeddings. Update the paths in `config.yaml` accordingly.
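
For example, a swapped-in pair of models might be wired up from `config.yaml` roughly as in the sketch below. The key names and model files are hypothetical placeholders, not CodeMind's actual configuration schema; use whatever keys the project's `config.yaml` really defines.

```python
# Hypothetical sketch -- key names and paths are placeholders, not CodeMind's schema.
import yaml
from llama_cpp import Llama
from sentence_transformers import SentenceTransformer

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

# e.g. the file might contain:
#   generation_model_path: models/mistral-7b-instruct-v0.2.Q4_K_M.gguf
#   embedding_model: sentence-transformers/all-MiniLM-L6-v2
llm = Llama(model_path=cfg["generation_model_path"], n_ctx=4096)
embedder = SentenceTransformer(cfg["embedding_model"])
```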

---

> **Q:** **How much RAM do I need?**
>
> **A:** For the Phi-2 Q4_0 model, 8 GB of RAM is recommended. Larger models will require more memory.

---

> **Q:** **Can I index multiple directories?**
>
> **A:** Yes, you can run the embedder script multiple times with different directories, or combine your documents into one directory before indexing.

---

> **Q:** **Is my data sent to the cloud?**
>
> **A:** No, all processing happens locally on your machine. No code or data is sent to external services.

---

> **Q:** **How often should I re-index my documents?**
>
> **A:** Re-index whenever your documentation or codebase changes significantly to keep search results relevant.

## Support