JosephusCheung committed on
Commit
5362642
1 Parent(s): 61218a4

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -9,7 +9,7 @@ tags:
 ---
 [WIP]
 
-This is the LLaMAfied version of Qwen/Qwen-7B-Chat, recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
+This is the LLaMAfied version of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
 
 You can use LlamaCausalLM for model inference, which is the same as LLaMA/LLaMA-2 models (the tokenizer remains the same, so you still need to allow external codes when loading, eg: `AutoTokenizer.from_pretrained(llama_model_path, use_fast=False, trust_remote_code=True)`).
 
@@ -17,7 +17,7 @@ SPOILOR: Further finetuning is in progress, the current version is a work-in-pro
 
 [在制品]
 
-这是 Qwen/Qwen-7B-Chat 的 LLaMA 化版本,经过重新校准以适应原始的类似 LLaMA/LLaMA-2 的模型结构。
+这是 [通义千问 Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) 的 LLaMA 化版本,经过重新校准以适应原始的类似 LLaMA/LLaMA-2 的模型结构。
 
 您可以使用 LlamaCausalLM 进行模型推理,和 LLaMA/LLaMA-2 保持一致(分词器保持不变,因此加载时仍然需要允许外部代码,例如:`AutoTokenizer.from_pretrained(llama_model_path, use_fast=False, trust_remote_code=True)`)。
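The README text in the diff describes how to load this LLaMAfied checkpoint; a minimal sketch of that usage follows, assuming `transformers` is installed and `llama_model_path` points at a local or hub copy of this model (the class is spelled `LlamaForCausalLM` in `transformers`). Qwen-Chat models use the ChatML prompt format, so the prompt helper below is an illustrative reimplementation, not an official API.

```python
# Sketch under stated assumptions: `llama_model_path` is this LLaMAfied
# checkpoint, and the ChatML turn format matches what Qwen-7B-Chat expects.

def build_chatml_prompt(system: str, user: str) -> str:
    """Format one system + user turn in the ChatML style used by Qwen-Chat."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

if __name__ == "__main__":
    from transformers import AutoTokenizer, LlamaForCausalLM

    llama_model_path = "path/to/llamafied-qwen-7b-chat"  # placeholder path

    # The tokenizer ships custom code, so trust_remote_code=True is required,
    # exactly as the README's example shows.
    tokenizer = AutoTokenizer.from_pretrained(
        llama_model_path, use_fast=False, trust_remote_code=True
    )
    # The weights fit the stock LLaMA layout, so the plain LLaMA class loads them.
    model = LlamaForCausalLM.from_pretrained(llama_model_path)

    prompt = build_chatml_prompt("You are a helpful assistant.", "Hello!")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The guard around the heavy calls keeps the helper importable without downloading any weights.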