
📖 Introduction

Qwen2-7B-Instruct-Exp and Qwen2-1.5B-Instruct-Exp are powerful large language models that expand an instruction into new instructions of the same task type but with different content.

We fine-tuned Qwen2-7B-Instruct and Qwen2-1.5B-Instruct to obtain Qwen2-7B-Instruct-Exp and Qwen2-1.5B-Instruct-Exp. The training data was sampled from the OpenHermes and LCCD datasets, ensuring a balanced task distribution. For training-set annotation, we used Qwen-max, incorporating our handwritten examples as in-context prompts.

Example Input

Plan an in-depth tour itinerary of France that includes Paris, Lyon, and Provence.

Example Output 1

Describe a classic road trip itinerary along the California coastline in the United States.

Example Output 2

Create a holiday plan that combines cultural experiences in Bangkok, Thailand, with beach relaxation in Phuket.
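
The exact annotation prompt used with Qwen-max is not published. Below is a minimal sketch of how a handwritten pair like the one above could be assembled into an in-context prompt; the instruction wording, example list, and helper name are illustrative assumptions, not the actual annotation setup.

# Hypothetical sketch of a few-shot annotation prompt for Qwen-max.
# The actual prompt used during annotation is not published.
FEW_SHOT_EXAMPLES = [
    (
        "Plan an in-depth tour itinerary of France that includes Paris, Lyon, and Provence.",
        "Describe a classic road trip itinerary along the California coastline in the United States.",
    ),
]

def build_annotation_prompt(instruction: str) -> str:
    """Assemble an in-context prompt asking for a new instruction of the
    same task type but with different content."""
    lines = [
        "Rewrite the instruction into a new instruction of the same task type but with different content.",
        "",
    ]
    for source, expansion in FEW_SHOT_EXAMPLES:
        lines += [f"Instruction: {source}", f"Expansion: {expansion}", ""]
    lines += [f"Instruction: {instruction}", "Expansion:"]
    return "\n".join(lines)

# Example usage:
# print(build_annotation_prompt("Plan a weekend hiking trip in the Alps."))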

🚀 Quick Start

The following code snippet shows how to load the tokenizer and model with apply_chat_template and how to generate content.

from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "alibaba-pai/Qwen2-7B-Instruct-Exp",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/Qwen2-7B-Instruct-Exp")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=2048,
    eos_token_id=151645,  # <|im_end|> for Qwen2
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
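
Because the model is intended to produce varied expansions, you may want several candidates per input. The sketch below is one way to do that with standard transformers sampling arguments; the sampling hyperparameters (do_sample, temperature, top_p, num_return_sequences) are our illustrative choices, not settings recommended by the model card.

# Sample several distinct expansions for the same instruction, reusing
# model, tokenizer, and model_inputs from the snippet above.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=2048,
    eos_token_id=151645,   # <|im_end|> for Qwen2
    do_sample=True,        # illustrative sampling settings
    temperature=0.8,
    top_p=0.9,
    num_return_sequences=3,
)
# Strip the shared prompt prefix from every returned sequence.
prompt_len = model_inputs.input_ids.shape[1]
expansions = tokenizer.batch_decode(
    generated_ids[:, prompt_len:], skip_special_tokens=True
)
for i, expansion in enumerate(expansions, 1):
    print(f"Expansion {i}: {expansion}")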

πŸ” Evaluation

We evaluated the data augmentation effect of our models on the Elementary Math and Implicature datasets. Each row prefixed with + reports the base model above it trained on data augmented by the indicated -Exp model.

| Model | Elementary Math | Implicature |
|---|---|---|
| Qwen2-1.5B-Instruct | 57.90% | 28.96% |
| + Qwen2-1.5B-Instruct-Exp | 59.15% | 31.22% |
| + Qwen2-7B-Instruct-Exp | 58.32% | 39.37% |
| Qwen2-7B-Instruct | 71.40% | 28.85% |
| + Qwen2-1.5B-Instruct-Exp | 73.90% | 35.41% |
| + Qwen2-7B-Instruct-Exp | 72.53% | 32.92% |
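
As a rough illustration of such an augmentation pipeline, the sketch below expands each seed instruction using the model and tokenizer loaded in the Quick Start. expand_instruction and augment_dataset are hypothetical helpers written for this sketch, not part of any released API, and the generation settings are illustrative assumptions.

# Hypothetical augmentation loop: expand each seed instruction and add
# the expansions to the training pool. Assumes `model` and `tokenizer`
# are loaded as in the Quick Start snippet.
def expand_instruction(instruction: str, n: int = 2) -> list[str]:
    """Generate n expanded instructions for one seed instruction."""
    messages = [{"role": "user", "content": instruction}]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    outputs = model.generate(
        inputs.input_ids,
        max_new_tokens=512,
        eos_token_id=151645,  # <|im_end|> for Qwen2
        do_sample=True,       # illustrative sampling settings
        num_return_sequences=n,
    )
    return tokenizer.batch_decode(
        outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
    )

def augment_dataset(seed_instructions: list[str]) -> list[str]:
    """Return the original instructions plus model-generated expansions."""
    augmented = list(seed_instructions)
    for instruction in seed_instructions:
        augmented.extend(expand_instruction(instruction))
    return augmented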