This is a 4-bit 60B MoE model fine-tuned with SFTTrainer, based on [cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO].
The training data consisted of about 2,000 examples sampled from nampdn-ai/tiny-codes.
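The exact training script was not published; the sketch below is a hypothetical reconstruction of the setup using trl's SFTTrainer, where the dataset column name, sample size, sequence length, and hyperparameters are all assumptions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Sample roughly 2,000 cases from nampdn-ai/tiny-codes (seed is arbitrary).
train_ds = (
    load_dataset("nampdn-ai/tiny-codes", split="train")
    .shuffle(seed=42)
    .select(range(2000))
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_ds,
    dataset_text_field="response",  # assumed column name
    max_seq_length=1024,            # assumed sequence length
    args=TrainingArguments(
        output_dir="60B-MoE-Coder-v2-sft",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
)
trainer.train()
```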
Metrics: not yet evaluated.
Code example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "cloudyu/60B-MoE-Coder-v2"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
# Load the model in 4-bit; this requires the bitsandbytes package.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    local_files_only=False,
    load_in_4bit=True,
)
print(model)

# Simple interactive loop: generate until an empty prompt is entered.
prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=1500,
        repetition_penalty=1.1,
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
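On newer transformers releases, the bare `load_in_4bit=True` argument is deprecated in favor of passing a `BitsAndBytesConfig`. A minimal sketch of the equivalent load call:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_path = "cloudyu/60B-MoE-Coder-v2"

# Equivalent 4-bit load expressed through a quantization config.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)
```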