
👨‍💻 GitHub • 🤗 Hugging Face • 🤖 ModelScope • 💬 WeChat • 📜 Tech Report


Introduction

Skywork-13B-Math has undergone specialized training to strengthen its mathematical abilities. Among models at the 13B scale, Skywork-13B-Math ranks first on the GSM8K benchmark and also performs excellently on the MATH and CMATH datasets, placing it at the top tier of 13B models.

Skywork-13B-Math-8bits is an 8-bit quantized version of Skywork-13B-Math that supports deployment and inference on consumer-grade GPUs.

If you are interested in more details such as the training recipe and evaluation methodology, please refer to our technical reports: the SkyMath paper and the SkyworkMM paper.

Results

Skywork-13B-Math further strengthens the mathematical capabilities of the Base model. We evaluated it on the mainstream math benchmarks GSM8K, MATH, and CMATH. The results show that among 13B-scale models, ours ranks first on GSM8K and CMATH and is also near the top on MATH.

| Model | GSM8K | MATH | CMATH |
|---|---|---|---|
| LLaMA-1-13B-Base | 17.80 | 3.90 | - |
| LLaMA-2-13B-Base | 28.70 | 3.90 | - |
| Baichuan-13B-Base | 26.76 | 4.84 | 51.33 |
| Baichuan-2-13B-Base | 52.77 | 10.08 | - |
| WizardMath-13B | 63.90 | 14.00 | 50.83 |
| GAIRMATH-Abel-13B | 66.41 | 17.34 | - |
| MetaMath-13B | 72.30 | 22.40 | - |
| Skywork-13B-Math (ours) | 72.33 | 16.98 | 77.27 |

Quickstart

We have open-sourced the model weights, configuration files, tokenizer, and more on Hugging Face and ModelScope.
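
If you prefer to fetch the weights programmatically, the huggingface_hub library can download an entire repository; a minimal sketch, assuming the hub repository id is Skywork/Skywork-13B-Math-8bits (verify the actual id on the hub page):

```python
from huggingface_hub import snapshot_download

# NOTE: the repo id below is an assumption for illustration; check the hub page.
local_dir = snapshot_download(repo_id="Skywork/Skywork-13B-Math-8bits")
print(local_dir)  # local directory containing weights, config, and tokenizer files
```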

Requirements

- Python 3.8 or above
- PyTorch 2.0 or above
- CUDA 11.4 or above is recommended

For the Skywork-13B-Base, Skywork-13B-Chat, and Skywork-13B-Math models, run the following script to install the Python dependencies:

```bash
pip install -r requirements.txt
```
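
For reference, the dependency list typically covers the libraries used in the examples below; this is an illustrative sketch, not the authoritative file, so consult the repository's requirements.txt for the exact pinned versions:

```text
# Illustrative only -- see the repository's requirements.txt for exact versions.
torch>=2.0
transformers
accelerate
bitsandbytes
sentencepiece
```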

Demonstration

Math Model Inference

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Fill in the local paths (or hub repo ids) of the tokenizer and checkpoint.
tokenizer_path = ""
checkpoint_path = ""

tokenizer = AutoTokenizer.from_pretrained(
    tokenizer_path, use_fast=False, trust_remote_code=True, padding_side='left')

model = AutoModelForCausalLM.from_pretrained(
    checkpoint_path, device_map="auto", trust_remote_code=True).eval()
tokenizer.add_tokens(["[USER]", "[BOT]", "[SEP]"])

def special_encode(prompt, tokenizer):
    """Encode the prompt in the chat format [eos][bos][USER]{prompt}[SEP][BOT]."""
    raw_str = "[USER]%s[SEP][BOT]" % prompt.strip().replace("\r", "")
    eos_id = tokenizer.eos_token_id
    bos_id = tokenizer.bos_token_id
    sep_id = tokenizer.encode("[SEP]")[-1]
    res_id = [eos_id, bos_id]
    arr = raw_str.split("[SEP]")
    for elem_idx in range(len(arr)):
        elem = arr[elem_idx]
        # Drop the leading bos token that the tokenizer adds to each segment.
        elem_id = tokenizer.encode(elem)[1:]
        res_id += elem_id
        if elem_idx < len(arr) - 1:
            res_id.append(sep_id)

    return res_id

def extract_res(response):
    """Strip the prompt echo and special tokens, keeping only the model's answer."""
    if "[BOT]" in response:
        response = response.split("[BOT]")[1]
    if "<s>" in response:
        response = response.split("<s>")[-1]
    if "</s>" in response:
        response = response.split("</s>")[0]
    if "[SEP]" in response:
        response = response.split("[SEP]")[0]
    return response


if __name__ == '__main__':
    # "Xiao Wang wants to dilute 150 kg of pesticide with a 20% concentration
    # into a solution with a 5% concentration. How many kilograms of water
    # must be added?"
    text = "小王要将150千克含药量20%的农药稀释成含药量5%的药水.需要加水多少千克?"
    text_token_ids = torch.tensor(special_encode(
        text, tokenizer)).to(model.device).reshape(1, -1)
    response = model.generate(text_token_ids, do_sample=False, max_length=512)
    response_text = tokenizer.decode(response.cpu()[0], skip_special_tokens=True)

    response_text = extract_res(response_text)
    print(response_text)
    """Skywork-13B-Math response (Chinese):
    首先,我们需要计算出150千克含药量20%的农药中含有多少千克的药。\n\n150千克 * 20% = 30千克\n\n然后,我们需要计算出要得到含药量5%的药水,需要多少千克的药水。\n\n30千克 / 5% = 600千克\n\n最后,我们需要计算出需要加多少千克的水。\n\n600千克 - 150千克 = 450千克\n\n所以答案是,小王需要加450千克的水。
    """
```

The English example below reuses the same imports, model/tokenizer setup, and the special_encode and extract_res helper functions defined above:

```python
if __name__ == '__main__':
    text = "Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?"
    text_token_ids = torch.tensor(special_encode(
        text, tokenizer)).to(model.device).reshape(1, -1)
    response = model.generate(text_token_ids, do_sample=False, max_length=512)
    response_text = tokenizer.decode(response.cpu()[0], skip_special_tokens=True)
    response_text = extract_res(response_text)
    print(response_text)
    """Skywork-13B-Math response:
    First, we need to find out how many eggs Janet has left after eating for breakfast and baking for her friends. \n\nShe has 16 eggs per day, eats 3 for breakfast and uses 4 for baking. So, 16 - 3 - 4 = 9 eggs are left for selling at the farmers' market.\n\nSince she sells each egg for $2, she makes 9 * 2 = $<<9*2=18>>18 every day at the farmers' market.\n\nSo, the answer is $18.
    """
```

Quantization

Int8 Quantization

Skywork adopts the mainstream 8-bit quantization method BitsAndBytes. The method causes essentially no performance loss after quantization and is already integrated into the transformers library. Based on BitsAndBytes, we provide two options: online quantization and an offline pre-quantized 8-bit model.

The examples below show how to use the int8 quantized model. Before starting, please install the BitsAndBytes library and its required dependencies; see the BitsAndBytes repository for installation details.
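
In most environments a plain pip install is sufficient (verify against the BitsAndBytes documentation for your CUDA version):

```bash
pip install bitsandbytes accelerate
```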

Online Quantization

```python
model = AutoModelForCausalLM.from_pretrained(
    "skywork-13B-Base", torch_dtype=torch.bfloat16,
    load_in_8bit=True, trust_remote_code=True).eval()
```
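
Recent versions of transformers route 8-bit loading through an explicit quantization config rather than the bare load_in_8bit flag; a minimal equivalent sketch, assuming recent transformers and bitsandbytes releases:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Online int8 quantization expressed via an explicit config object.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "skywork-13B-Base",
    torch_dtype=torch.bfloat16,
    quantization_config=quant_config,
    trust_remote_code=True,
).eval()
```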

Offline Quantization

```python
model = AutoModelForCausalLM.from_pretrained(
    "skywork-13B-Base-8bits", device_map="auto",
    torch_dtype=torch.bfloat16, trust_remote_code=True).eval()
```

Evaluation

We evaluated the quantized model on standard benchmark datasets; the results are as follows:

| Precision | C-Eval | MMLU | CMMLU |
|---|---|---|---|
| bf16 | 60.6 | 61.8 | 62.1 |
| 8bits | 58.5 | 61.8 | 61.0 |

GPU Memory Usage (GB)

| Precision | Skywork-13B |
|---|---|
| bf16 | 25.91 |
| 8bits | 13.57 |
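
To reproduce such measurements on your own hardware, the peak allocated memory can be read from PyTorch's CUDA allocator; a minimal sketch (actual numbers vary with batch size, sequence length, and environment):

```python
import torch

torch.cuda.reset_peak_memory_stats()
# ... load the model and run a generate() call here ...
peak_gb = torch.cuda.max_memory_allocated() / 1024 ** 3
print(f"peak GPU memory: {peak_gb:.2f} GB")
```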

Declaration and License Agreement

Declaration

We hereby declare that the Skywork model must not be used for any activities that threaten national or public security or that are unlawful. We also ask users not to deploy the Skywork model in internet services that have not undergone appropriate security review and registration. We hope all users will adhere to these principles so that technological advancement can proceed in a regulated and lawful environment.

We have done our utmost to ensure that the data used during the model's training is compliant. However, despite these extensive efforts, the complexity of models and data means unforeseen issues may still arise. Therefore, we assume no responsibility for any problems arising from the use of the Skywork open-source model, including but not limited to data security issues, public-opinion risks, or any risks and problems caused by the model being misled, abused, disseminated, or improperly exploited.

License Agreement

Community use of the Skywork model must follow the Skywork Community License. The Skywork model supports commercial use: if you plan to use the Skywork model or its derivatives for commercial purposes, no additional application is required, but please read the Skywork Community License carefully and strictly abide by its terms.

Contact Us and Citation

If you find our work helpful, please feel free to cite our papers:

```bibtex
@misc{wei2023skywork,
      title={Skywork: A More Open Bilingual Foundation Model},
      author={Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei Lü and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
      year={2023},
      eprint={2310.19341},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@article{skyworkmath,
  title={SkyMath: Technical Report},
  author={Liu Yang and Haihua Yang and Wenjun Cheng and Lei Lin and Chenxia Li and Yifu Chen and Lunan Liu and Jianfei Pan and Tianwen Wei and Biye Li and Liang Zhao and Lijie Wang and Bo Zhu and Guoliang Li and Xuejie Wu and Xilin Luo and Rui Hu},
  journal={arXiv preprint arXiv:2310.16713},
  url={https://arxiv.org/abs/2310.16713},
  year={2023}
}

@article{Skywork_Multi-Modal_Group_Empirical_Study_Towards_2023,
  author = {Skywork Multi-Modal Group},
  month = sep,
  title = {{Empirical Study Towards Building An Effective Multi-Modal Large Language Model}},
  year = {2023}
}
```