---
license: other
tags:
  - yi
  - moe
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---
* This is a 4-bit 60B MoE model trained with SFTTrainer, based on [cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO](https://huggingface.co/cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO).

* Training data: about 2000 cases sampled from [nampdn-ai/tiny-codes](https://huggingface.co/datasets/nampdn-ai/tiny-codes); see the training sketch after this list.

* Metrics: not yet tested.
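
The card names only the trainer, the base model, and the dataset sample size; the actual training script is not published. The following is a minimal sketch of what such a run might look like, assuming trl's `SFTTrainer` (0.7-era signature; newer trl moves several of these arguments into `SFTConfig`), a QLoRA-style setup (plausible for a 4-bit base, but not stated on the card), and `prompt`/`response` field names, which are a guess about the nampdn-ai/tiny-codes schema:

```python
# Hypothetical reconstruction of the SFT run described above, NOT the
# author's actual script. Hyperparameters and dataset fields are assumed.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer

base = "cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto", load_in_4bit=True
)

# Sample roughly 2000 cases, as the card describes.
dataset = (
    load_dataset("nampdn-ai/tiny-codes", split="train")
    .shuffle(seed=42)
    .select(range(2000))
)

def formatting_func(batch):
    # Field names here are an assumption about the dataset schema.
    return [f"{p}\n{r}" for p, r in zip(batch["prompt"], batch["response"])]

# LoRA adapters over the 4-bit base (QLoRA-style); r/alpha are assumed.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    formatting_func=formatting_func,
    peft_config=peft_config,
    max_seq_length=2048,  # assumed; not stated on the card
)
trainer.train()
```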

Code example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "cloudyu/60B-MoE-Coder-v2"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)

# 4-bit loading requires the bitsandbytes package and a CUDA GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    local_files_only=False,
    load_in_4bit=True,
)
print(model)  # inspect the MoE module layout

# Simple REPL: keep generating until an empty prompt is entered.
prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=1500,
        repetition_penalty=1.1,
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
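
Note: newer transformers releases deprecate passing `load_in_4bit=True` directly to `from_pretrained`. If your version warns about it, the equivalent load goes through `BitsAndBytesConfig` (a sketch, assuming bitsandbytes is installed):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # match the bf16 dtype used above
)
model = AutoModelForCausalLM.from_pretrained(
    "cloudyu/60B-MoE-Coder-v2",
    quantization_config=bnb_config,
    device_map="auto",
)
```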