kuotient committed on
Commit 6e910fd
1 Parent(s): db28eab

Update README.md

Files changed (1):
  1. README.md +4 -2
README.md CHANGED

````diff
@@ -7,6 +7,7 @@ language:
 pipeline_tag: translation
 tags:
 - translate
+- awq
 ---
 # **Seagull-13b-translation-AWQ πŸ“‡**
 ![Seagull-typewriter](./Seagull-typewriter-pixelated.png)
@@ -64,7 +65,7 @@ It follows only **ChatML** format.
 ```

 #### Example
-**I highly recommend using vLLM. I will write a guide for quick and easy inference if requested.**
+**I highly recommend running inference with vLLM. I will write a guide for quick and easy inference if requested.**

 Since the chat_template already contains the instruction format above,
 you can use the code below.
@@ -74,7 +75,8 @@ device = "cuda" # the device to load the model onto
 model = AutoModelForCausalLM.from_pretrained("kuotient/Seagull-13B-translation")
 tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")
 messages = [
-{"role": "user", "content": "λ°”λ‚˜λ‚˜λŠ” μ›λž˜ ν•˜μ–€μƒ‰μ΄μ•Ό?"},
+{"role": "system", "content": "주어진 λ¬Έμž₯을 ν•œκ΅­μ–΄λ‘œ λ²ˆμ—­ν•˜μ„Έμš”."},
+{"role": "user", "content": "Here are five examples of nutritious foods to serve your kids."},
 ]
 encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
````
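The committed example stops at `apply_chat_template`. Since the model card states the model follows only the ChatML format, the prompt string that templating produces can be sketched independently of the model. The snippet below is a minimal illustration assuming the standard ChatML special tokens (`<|im_start|>`, `<|im_end|>`); these are the ChatML convention, not values read from this repository's tokenizer config, and `build_chatml_prompt` is a hypothetical helper, not part of the model card.

```python
# Minimal sketch of the ChatML prompt that chat templating is expected to
# produce for the messages in the diff above. The <|im_start|>/<|im_end|>
# markers follow the standard ChatML convention; this is an assumption,
# not taken from the repository's tokenizer_config.json.

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model generates the translation.
    prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "주어진 λ¬Έμž₯을 ν•œκ΅­μ–΄λ‘œ λ²ˆμ—­ν•˜μ„Έμš”."},
    {"role": "user", "content": "Here are five examples of nutritious foods to serve your kids."},
]

print(build_chatml_prompt(messages))
```

If the tokenizer ships this template, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` should yield an equivalent string, which can then be passed to vLLM as a prompt or tokenized for `model.generate`.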