---
language:
  - en
  - zh
tags:
  - qwen
  - llama
  - llama-2
---

[WIP]

This is the LLaMAfied version of Qwen/Qwen-7B-Chat, recalibrated to fit the original LLaMA/LLaMA-2 model structure.

You can use LlamaForCausalLM for model inference, exactly as with LLaMA/LLaMA-2 models (using a GPT2Tokenizer converted from the original tiktoken tokenizer by vonjack).
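
A minimal inference sketch under those assumptions; the repo id below is a placeholder, so substitute the actual Hub path of this model:

```python
import torch
from transformers import GPT2Tokenizer, LlamaForCausalLM

repo_id = "JosephusCheung/Qwen-LLaMAfied-7B-Chat"  # placeholder: use the actual repo id

# Load the converted tokenizer and the LLaMA-structured weights.
tokenizer = GPT2Tokenizer.from_pretrained(repo_id)
model = LlamaForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # fp16 to fit a single consumer GPU
    device_map="auto",
)

prompt = "Hello, who are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```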

The model has been edited to be white-labelled, meaning it will no longer identify itself as Qwen.

SPOILER: Further fine-tuning is in progress. The current version is a work in progress, and some knowledge may be biased or hallucinatory due to the structural changes. Will be updated very, very soon.

PROMPT FORMAT: chatml
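
For reference, a ChatML prompt is structured as below; the system message shown is only an example:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
```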

CURRENT MMLU: 50.36

Issue: Compared to the original Qwen-7B-Chat, which scores 53.9, the MMLU score has dropped slightly (-3.54) due to insufficient realignment.
