Introduction

This repository contains the weights of a fine-tuned Qwen2.5-7B model, the scripts you need to run it, and the datasets we used to fine-tune the base model.

- mix_yaoying_knowledge_with_ending_phrase.json is composed of different types of knowledge: paradigm data accounts for 20%, and general knowledge about Yaoying with an ending phrase accounts for 80%.
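
For a quick look at the data, the sketch below loads the file and prints one entry. It assumes the file is a top-level JSON list; the field names inside each entry depend on the actual schema.

# Inspect the fine-tuning data (assumes a top-level JSON list).
import json

with open("mix_yaoying_knowledge_with_ending_phrase.json", encoding="utf-8") as f:
    data = json.load(f)

print(len(data), "examples")
print(data[0])  # print one entry to see the schema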

Installation

Before you start, follow these steps to set up the environment:

  1. Create and activate a conda environment: conda create -n yaoying python=3.10, then conda activate yaoying. (If your environment name is not yaoying, you may need to change it in the launch scripts.)
  2. Add the correct environment variables to ~/.bashrc (CUDA 11.8; gcc > 9 and < 10), e.g.:
    export PATH=/mnt/petrelfs/share/cuda-11.8/bin:$PATH
    export LD_LIBRARY_PATH=/mnt/petrelfs/share/cuda-11.8/lib64:$LD_LIBRARY_PATH
    export PATH=/mnt/petrelfs/share/gcc-9.3.0/bin:$PATH
    export LD_LIBRARY_PATH=/mnt/petrelfs/share/gcc-9.3.0/lib64:$LD_LIBRARY_PATH
    
  3. Apply the variables: source ~/.bashrc
  4. Install dependencies: pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu118
  5. Install vLLM (the CUDA 11.8 / Python 3.10 wheel): pip install https://github.com/vllm-project/vllm/releases/download/v0.6.1.post1/vllm-0.6.1.post1+cu118-cp310-cp310-manylinux1_x86_64.whl (a quick verification sketch follows this list)
  6. Install the latest Git and Git LFS: conda install git, then git lfs install
  7. Clone the repo: git clone https://huggingface.co/sunday-hao/yaoying-qwen2.5
  8. Change current directory: cd yaoying-qwen2.5
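
Once installation is done, a quick sanity check (a minimal sketch, not part of the repository's scripts) confirms that the CUDA 11.8 build of PyTorch and the vLLM wheel are importable:

# Verify the CUDA-enabled PyTorch build and the vLLM install.
import torch
import vllm

print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
print("vllm:", vllm.__version__)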

QuickStart

If you want to run inference with vLLM:

python with_vllm.py
# You can change the prompt in the script; the prompt uses a multi-round conversation format.
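
with_vllm.py ships with the repository; if you want to write your own script, a minimal sketch of vLLM inference with a multi-round prompt might look like the following (the messages and sampling parameters are illustrative, not taken from the script):

# Minimal vLLM inference sketch with a multi-round conversation prompt.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_path = "sunday-hao/yaoying-qwen2.5"  # or your local checkout
tokenizer = AutoTokenizer.from_pretrained(model_path)
llm = LLM(model=model_path)

# Multi-round conversation: alternating user/assistant turns.
messages = [
    {"role": "user", "content": "Who is Yaoying?"},
    {"role": "assistant", "content": "..."},
    {"role": "user", "content": "Tell me more."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=256))
print(outputs[0].outputs[0].text)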

If you want to test the model without vLLM:

python inference_without_vllm.py
# You can change the prompt in the script; the prompt uses a multi-round conversation format.
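
Likewise, a minimal sketch of inference through plain transformers (no vLLM); the model path and prompt are illustrative assumptions, not the repository script:

# Minimal transformers-only inference sketch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "sunday-hao/yaoying-qwen2.5"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Who is Yaoying?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))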