Update README.md
README.md
@@ -29,6 +29,7 @@ This is a finetuned version of Code-Llama-70B specifically optimized for Python
 ## Usage
 
 ### Quick Start
+
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 import torch
@@ -40,21 +41,25 @@ model = AutoModelForCausalLM.from_pretrained(
     torch_dtype=torch.float16,
     device_map="auto"
 )
+```
+
+### Example: Complete Python code
 
-
+```python
 prompt = "def calculate_average(numbers):\n "
 inputs = tokenizer(prompt, return_tensors="pt")
 outputs = model.generate(**inputs, max_length=100, temperature=0.7)
 completion = tokenizer.decode(outputs[0], skip_special_tokens=True)
 print(completion)
+```
 
-
+### Limitations
 
 - Optimized specifically for Python; performance on other languages may vary
 - Best suited for short to medium-length completions
 - May require significant computational resources due to model size (70B parameters)
 
-
+### Ethical Considerations
 
 - Should not be used as the sole tool for production code without human review
 - May reflect biases present in the training data
@@ -67,7 +72,7 @@ This model is subject to the Meta Llama 2 Community License Agreement. By using
 # Citation
 
 If you use this model in your research or applications, please cite:
-```
+```bibtex
 @misc{python-tab-completion-codellama-70b,
   author = {Emissary AI},
   title = {Python Tab Completion CodeLlama 70B},
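
For reference, here is the Quick Start and the completion example from the updated README stitched into one runnable script. The repo id passed to `from_pretrained` sits in unchanged lines this diff does not show, so `MODEL_ID` below is a placeholder; `do_sample=True` is also added, since `transformers` ignores `temperature` under its default greedy decoding. Treat this as a sketch, not as part of the commit:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: the real repo id is in context lines the diff elides;
# copy it from the model card.
MODEL_ID = "emissary-ai/python-tab-completion-codellama-70b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision, as in the Quick Start
    device_map="auto",          # shard across available devices
)

# Same completion example as the README; do_sample=True makes the
# temperature setting actually take effect.
prompt = "def calculate_average(numbers):\n "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=100, temperature=0.7, do_sample=True)
completion = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(completion)
```

For tab-style completions, `max_new_tokens` is usually a better fit than `max_length`, since it bounds the generated suffix rather than prompt plus suffix.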
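The new Limitations bullet about computational resources is worth quantifying: at float16, 70B parameters is roughly 140 GB of weights. A common workaround, not mentioned in the README, is 4-bit quantization via bitsandbytes; a minimal sketch, assuming bitsandbytes is installed and again using a placeholder repo id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "emissary-ai/python-tab-completion-codellama-70b"  # placeholder, see above

# NF4 4-bit weights with fp16 compute: roughly 35-40 GB instead of ~140 GB.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Quantization trades some completion quality for memory; for a code model intended to run alongside an editor, that trade-off is often acceptable, but it should be validated against the use case.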