---
language:
- en
pipeline_tag: text-generation
---

# CodeQwen1.5-7B-OpenDevin

## Introduction

CodeQwen1.5-7B-OpenDevin is a code-specific model targeting OpenDevin agent tasks.

The model is finetuned from CodeQwen1.5-7B, a code-specific large language model built on Qwen1.5 and pretrained on large-scale code data.

CodeQwen1.5-7B has strong code understanding and generation capabilities, and it supports a context length of 65,536 tokens (for more about CodeQwen1.5, see the [blog post](https://qwenlm.github.io/blog/codeqwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5)).

The finetuned model, CodeQwen1.5-7B-OpenDevin, shares these features while being designed for rapid development, debugging, and iteration.

## Performance

We evaluate CodeQwen1.5-7B-OpenDevin on SWE-Bench-Lite by running the model with OpenDevin CodeAct 1.3 and following the OpenDevin evaluation pipeline.

CodeQwen1.5-7B-OpenDevin successfully solves 4 problems by committing pull requests that target the issues.

## Requirements

The code for Qwen1.5 is included in the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:

```
KeyError: 'qwen2'
```
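As a quick sanity check, a minimal sketch of the version gate (the helper name is ours, not part of `transformers`; Qwen2 architecture support landed in 4.37.0):

```python
# Qwen2 support landed in transformers 4.37.0; older versions raise
# KeyError: 'qwen2' when resolving the model architecture.
def supports_qwen2(version_string: str) -> bool:
    major, minor = (int(part) for part in version_string.split(".")[:2])
    return (major, minor) >= (4, 37)

print(supports_qwen2("4.36.2"))  # False: this version would hit the KeyError
print(supports_qwen2("4.40.0"))  # True
```

In practice you can compare `transformers.__version__` against this threshold before loading the model.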

## Quickstart

To run OpenDevin with local models, we advise you to deploy CodeQwen1.5-7B-OpenDevin on a GPU device and access it through an OpenAI-compatible API:

```bash
python -m vllm.entrypoints.openai.api_server --model OpenDevin/CodeQwen1.5-7B-OpenDevin --dtype auto --api-key token-abc123
```
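Once the server is up, any OpenAI-compatible client can talk to it. As a minimal sketch (the helper name and prompt are illustrative, not part of vLLM), the chat-completions request body the endpoint expects looks like:

```python
import json

# Build the request body for the vLLM OpenAI-compatible server started
# above; the model name matches the deployment command.
def build_chat_request(prompt: str) -> dict:
    return {
        "model": "OpenDevin/CodeQwen1.5-7B-OpenDevin",
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("Write a Python function that reverses a string.")
print(json.dumps(body, indent=2))
```

You would POST this to `http://localhost:8000/v1/chat/completions` with an `Authorization: Bearer token-abc123` header, or simply point an OpenAI SDK client at that base URL.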

For more details, please refer to the official documentation of the [vLLM OpenAI-compatible server](https://docs.vllm.ai/en/stable/serving/openai_compatible_server.html).

After the deployment, follow the guidance of [OpenDevin](https://github.com/OpenDevin/OpenDevin) and run the following commands to set up environment variables:

```bash
# The directory you want OpenDevin to work with. MUST be an absolute path!
export WORKSPACE_BASE=$(pwd)/workspace
export LLM_API_KEY=token-abc123
export LLM_MODEL=OpenDevin/CodeQwen1.5-7B-OpenDevin
export LLM_BASE_URL=http://localhost:8000/v1
```
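Since `WORKSPACE_BASE` must be absolute, a small sketch of our own (not part of OpenDevin) can catch a relative path before it reaches the Docker volume mount:

```shell
# Reject relative paths before launching the container.
is_absolute() {
  case "$1" in
    /*) return 0 ;;
    *)  return 1 ;;
  esac
}

if is_absolute "${WORKSPACE_BASE:-}"; then
  echo "WORKSPACE_BASE looks good: ${WORKSPACE_BASE}"
else
  echo "WORKSPACE_BASE must be an absolute path" >&2
fi
```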

Then run the Docker command:

```bash
docker run \
    -it \
    --pull=always \
    -e SANDBOX_USER_ID=$(id -u) \
    -e LLM_BASE_URL=$LLM_BASE_URL \
    -e LLM_API_KEY=$LLM_API_KEY \
    -e LLM_MODEL=$LLM_MODEL \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    ghcr.io/opendevin/opendevin:0.5
```

Now you should be able to connect to `http://localhost:3000/`. Set up the configuration in the frontend by clicking the button at the bottom right, and enter the correct model name and API key.

Then, you can enjoy playing with OpenDevin based on CodeQwen1.5-7B-OpenDevin!
## Citation

```
@article{qwen,