
This model is a state-of-the-art language model designed to understand and generate human-like text. It leverages deep learning to handle a wide range of language tasks, from answering questions and making recommendations to casual conversation. A broad knowledge base and a nuanced understanding of context allow it to carry out complex language-based tasks effectively.

How to use?

from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch

# Load the model in bfloat16 with FlashAttention 2; trust_remote_code=True lets
# Transformers run the custom modeling code shipped with the repo.
model = AutoModelForCausalLM.from_pretrained(
    'TwT-6/cr-model',
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
).eval()
tokenizer = AutoTokenizer.from_pretrained('TwT-6/cr-model', trust_remote_code=True)

# Wrap the user message in the model's prompt template.
prompt = '你好'
prompt = f'<|omni_start|>### User:\n{prompt}\n\n### Assistant:\n'
inputs = tokenizer(prompt, return_tensors="pt").to('cuda')

# Generate, then decode only the newly generated tokens (skip the prompt).
output_ids = model.generate(**inputs)[0].cpu()
output = tokenizer.decode(output_ids[inputs.input_ids.shape[-1]:])
print(output)

Expected output: 你好!很高兴见到你。有什么我可以帮助你的吗 ("Hello! Nice to meet you. Is there anything I can help you with?")
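
The snippet above imports GenerationConfig but relies on the model's default generation settings. If you want explicit control over decoding, a minimal sketch looks like the following; the sampling values are illustrative assumptions, not settings recommended by this repo:

# Illustrative decoding settings (not the repo's recommended defaults).
gen_config = GenerationConfig(
    max_new_tokens=512,   # cap the length of the response
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
output_ids = model.generate(**inputs, generation_config=gen_config)[0].cpu()
print(tokenizer.decode(output_ids[inputs.input_ids.shape[-1]:], skip_special_tokens=True))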

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric                               Value
Avg.                                 68.09
AI2 Reasoning Challenge (25-shot)    57.85
HellaSwag (10-shot)                  81.66
MMLU (5-shot)                        68.73
TruthfulQA (0-shot)                  58.20
Winogrande (5-shot)                  76.24
GSM8k (5-shot)                       65.88
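
These scores come from the Open LLM Leaderboard, which is built on EleutherAI's lm-evaluation-harness. The sketch below shows one way to reproduce a single benchmark locally; it assumes lm-evaluation-harness v0.4 or later is installed (pip install lm-eval), and the task name and few-shot count simply mirror the table above, so results may differ slightly from the leaderboard's exact harness configuration.

# Sketch: reproduce one leaderboard-style metric locally with lm-evaluation-harness.
# Treat the resulting score as approximate; the leaderboard pins its own harness version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=TwT-6/cr-model,trust_remote_code=True,dtype=bfloat16",
    tasks=["arc_challenge"],   # AI2 Reasoning Challenge
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])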
Model size: 14.2B parameters, BF16 tensors, distributed as safetensors.
