---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
parameters:
temperature: 0.0
widget:
- messages:
- role: user
content: How to express n-th root of the determinant of a semidefinite matrix in cvx?
---
# cvx-coder
[Github](https://github.com/jackfsuia/cvx-coder) | [Modelscope](https://www.modelscope.cn/models/tommy1235/cvx-coder)
## Introduction
cvx-coder aims to improve the [Matlab CVX](https://cvxr.com/cvx) coding and question-answering abilities of LLMs. It is a [phi-3 model](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) fine-tuned on a dataset of CVX documentation, code, and [forum conversations](https://ask.cvxr.com/) (my cleaned version of the conversations is available at [CVX-forum-conversations](https://huggingface.co/datasets/tim1900/CVX-forum-conversations)).
## Quickstart
For a quick test, run the following:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

m_path = "tim1900/cvx-coder"

# Load the fine-tuned model and its tokenizer.
model = AutoModelForCausalLM.from_pretrained(
    m_path,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(m_path)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 2000,
    "return_full_text": False,
    "temperature": 0,  # ignored when do_sample=False (greedy decoding)
    "do_sample": False,
}

content = '''my problem is not convex, can i use cvx? if not, what should i do, be specific.'''
messages = [
    {"role": "user", "content": content},
]

output = pipe(messages, **generation_args)
print(output[0]["generated_text"])
```
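To watch the reply stream token by token instead of waiting for the full completion, you can pass a `TextStreamer` through the pipeline. This is a minimal sketch reusing `pipe`, `tokenizer`, `messages`, and `generation_args` from above; whether the pipeline forwards `streamer` to `generate` depends on your transformers version:
```python
from transformers import TextStreamer

# Prints decoded tokens to stdout as they are generated;
# skip_prompt hides the input prompt from the stream.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
output = pipe(messages, streamer=streamer, **generation_args)
```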
For **chat mode** in a web interface, run the following:
```python
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

m_path = "tim1900/cvx-coder"

# Load the fine-tuned model and its tokenizer.
model = AutoModelForCausalLM.from_pretrained(
    m_path,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(m_path)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 2000,
    "return_full_text": False,
    "temperature": 0,  # ignored when do_sample=False (greedy decoding)
    "do_sample": False,
}

def assistant_talk(message, history):
    # Convert Gradio's (user, assistant) history pairs into chat messages.
    messages = []
    for user_turn, assistant_turn in history:
        messages += [
            {"role": "user", "content": user_turn},
            {"role": "assistant", "content": assistant_turn},
        ]
    messages.append({"role": "user", "content": message})
    output = pipe(messages, **generation_args)
    return output[0]["generated_text"]

gr.ChatInterface(assistant_talk).launch()
```
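Newer Gradio releases can also hand the callback an OpenAI-style message history directly, which removes the pair-conversion loop. A minimal sketch, assuming your installed Gradio version supports `type="messages"` on `ChatInterface`:
```python
def assistant_talk(message, history):
    # In this mode, history already holds {"role": ..., "content": ...} dicts.
    messages = history + [{"role": "user", "content": message}]
    output = pipe(messages, **generation_args)
    return output[0]["generated_text"]

gr.ChatInterface(assistant_talk, type="messages").launch()
```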