
Uploaded model

  • Developed by: taoki
  • License: deepseek
  • Finetuned from model: deepseek-ai/deepseek-coder-7b-instruct-v1.5

Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "taoki/deepseek-coder-7b-instruct-v1.5-qlora-amenokaku-code"
)
model = AutoModelForCausalLM.from_pretrained(
    "taoki/deepseek-coder-7b-instruct-v1.5-qlora-amenokaku-code"
)

# Move the model to GPU if one is available
if torch.cuda.is_available():
    model = model.to("cuda")

# DeepSeek Coder instruction format; the Japanese instruction asks for
# code that saves images from a fixed camera using OpenCV
prompt = """You are an AI programming assistant, utilizing the DeepSeek Coder model, developed by DeepSeek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.
### Instruction:
OpenCVを用いて定点カメラから画像を保存するコードを示してください。
### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    top_p=0.9,
    temperature=0.2,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0]))
```
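
Because this is a QLoRA fine-tune of a 7B model, the weights can also be loaded in 4-bit to reduce GPU memory. The following is a minimal sketch assuming the bitsandbytes package is installed; the quantization settings shown are illustrative defaults, not values documented for this model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization config (illustrative settings, not tuned for this model)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(
    "taoki/deepseek-coder-7b-instruct-v1.5-qlora-amenokaku-code"
)
model = AutoModelForCausalLM.from_pretrained(
    "taoki/deepseek-coder-7b-instruct-v1.5-qlora-amenokaku-code",
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU(s)
)
```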

Output

<|begin▁of▁sentence|>You are an AI programming assistant, utilizing the DeepSeek Coder model, developed by DeepSeek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.
### Instruction:
OpenCVを用いて定点カメラから画像を保存するコードを示してください。
### Response:
```python
import cv2
cap = cv2.VideoCapture(0) # カメラの設定
fourcc = cv2.VideoWriter_fourcc(*'XVID') # 動画の形式
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480)) # 出力先、fps、解像度
while True:
    ret, frame = cap.read() # 映像読み込み
    if not ret: break
    out.write(frame) # 書き込み
    cv2.imshow('Frame', frame) # 表示
    if cv2.waitKey(1) & 0xFF == ord('q'): # qで終了
        break
cap.release()
cv2.destroyAllWindows()
```
<|EOT|>
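
The raw decode above includes the echoed prompt and special tokens such as <|begin▁of▁sentence|> and <|EOT|>. To print only the generated answer, you can slice off the prompt tokens and skip special tokens when decoding; a minimal sketch reusing the variables from the Usage snippet:

```python
# Keep only the tokens generated after the prompt and drop special tokens
prompt_length = inputs["input_ids"].shape[1]
response = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(response)
```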

This model was trained 2x faster with Unsloth and Hugging Face's TRL library.
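
For reference, a QLoRA fine-tune like this one is typically set up with Unsloth roughly as follows. This is a hypothetical sketch of the usual Unsloth pattern, not the actual training script or hyperparameters used for this model.

```python
# Hypothetical Unsloth QLoRA setup (illustrative only; the real training
# configuration for this model is not documented in this card)
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="deepseek-ai/deepseek-coder-7b-instruct-v1.5",
    max_seq_length=2048,   # assumed context length
    load_in_4bit=True,     # QLoRA: quantize the frozen base weights to 4-bit
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # LoRA rank (assumed value)
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```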
