---
pipeline_tag: sentence-similarity
datasets:
- gonglinyuan/CoSQA
- AdvTest
tags:
- sentence-transformers
- feature-extraction
- code-similarity
language: en
license: apache-2.0
---
# mpnet-code-search
This is a finetuned [sentence-transformers](https://www.SBERT.net) model. It was trained on pairs of natural-language text and code, improving performance on code search and retrieval applications.
## Usage (Sentence-Transformers)
This model can be loaded with [sentence-transformers](https://www.SBERT.net):
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

# A natural-language query and a code snippet; the model embeds both
# into the same vector space.
sentences = ["Print hello world to stdout", "print('hello world')"]

model = SentenceTransformer('sweepai/mpnet-code-search')
embeddings = model.encode(sentences)
print(embeddings)
```
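If you want a similarity score rather than raw embeddings, you can compare them with cosine similarity, for example via `sentence_transformers.util` (a minimal sketch using the same toy query and snippet as above):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sweepai/mpnet-code-search')

# Embed a natural-language query and a candidate code snippet.
query_embedding = model.encode("Print hello world to stdout", convert_to_tensor=True)
code_embedding = model.encode("print('hello world')", convert_to_tensor=True)

# Cosine similarity in [-1, 1]; higher means a better match.
print(util.cos_sim(query_embedding, code_embedding))
```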
## Evaluation Results
MRR on the CoSQA and AdvTest datasets:
- Base model
- Finetuned model
---
## Background
This project finetunes the SBERT MPNet model to improve its performance on coding applications.
We developed this model to use in our own app, [Sweep, an AI-powered junior developer](https://github.com/sweepai/sweep).
## Intended Uses
Our model is intended to be used in code search applications, allowing users to query with natural-language prompts and retrieve the corresponding code chunks.
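As a rough sketch of such an application (the corpus and query below are invented for illustration), you can embed a set of code chunks once and rank them against a natural-language query with `util.semantic_search`:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sweepai/mpnet-code-search')

# Hypothetical corpus of code chunks (e.g. produced by the chunker below).
code_chunks = [
    "def add(a, b):\n    return a + b",
    "def load_config(path):\n    import json\n    return json.load(open(path))",
]
corpus_embeddings = model.encode(code_chunks, convert_to_tensor=True)

# Rank the chunks against a natural-language query.
query_embedding = model.encode("read a json config file", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), code_chunks[hit["corpus_id"]].splitlines()[0])
```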
## Chunking (Open-Source)
We developed our own chunking algorithm to improve the quality of a repository's code snippets. This tree-based algorithm is described in [Our Blog Post](https://docs.sweep.dev/blogs/chunking-2m-files).
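For a rough intuition of the approach, the idea is to split source code at syntax-tree boundaries so a chunk never cuts a function or class in half. The sketch below is illustrative only, not Sweep's implementation: Python's built-in `ast` module stands in for the language-agnostic tree-sitter parser described in the blog post.

```python
import ast

def chunk_source(source: str, max_chars: int = 1500) -> list[str]:
    """Group top-level syntax-tree nodes into chunks of at most max_chars,
    so no chunk splits a function or class. The real algorithm also
    recurses into oversized nodes."""
    lines = source.splitlines(keepends=True)
    chunks, current = [], ""
    for node in ast.parse(source).body:
        span = "".join(lines[node.lineno - 1 : node.end_lineno])
        if current and len(current) + len(span) > max_chars:
            chunks.append(current)
            current = ""
        current += span
    if current:
        chunks.append(current)
    return chunks
```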
### Demo
We created an [interactive demo](https://huggingface.co/spaces/sweepai/chunker) for our new chunking algorithm.
---
## Training Procedure
### Base Model
We use the pretrained [`sentence-transformers/all-mpnet-base-v2`](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). Please refer to its model card for a more detailed overview of its training data.
### Finetuning
We finetune the model using a contrastive objective: matching natural language-code pairs are pulled together in embedding space, while mismatched pairs are pushed apart.
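Below is a minimal sketch of a contrastive setup with sentence-transformers; the `MultipleNegativesRankingLoss` choice and the toy data are illustrative assumptions, not the exact training configuration:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Each example pairs a natural-language query with its matching code snippet;
# MultipleNegativesRankingLoss treats the other snippets in a batch as negatives.
train_examples = [
    InputExample(texts=["Print hello world to stdout", "print('hello world')"]),
    InputExample(texts=["Sum two numbers", "def add(a, b):\n    return a + b"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```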
#### Hyperparameters
We trained on 8x NVIDIA A5000 GPUs.
#### Training Data
| Dataset | Number of training tuples |
|---|---|
| [CoSQA](https://huggingface.co/datasets/gonglinyuan/CoSQA) | 20,000 |
| [AdvTest](https://github.com/microsoft/CodeXGLUE/blob/main/Text-Code/NL-code-search-Adv/README.md) | 250,000 |
| **Total** | 270,000 |