
Model description

The model was fine-tuned on roughly 100,000 examples from the HuggingFaceH4/ultrachat_200k dataset, with additional checkpoints planned for later release.

This model has not been aligned with DPO. Separate repositories containing DPO-aligned versions of this model, trained on various preference datasets, will be released in the future.

Evaluation

In informal testing, the model performed well on mathematics, history, trivia, and coding tasks. Results for this model can also be found on the Open LLM Leaderboard.

Recommended inference parameters

temperature=0.2, top_p=0.14, top_k=12, repetition_penalty=1.1
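As a sketch, these parameters can be collected into a kwargs dict and passed to `model.generate` in the `transformers` library. The `max_new_tokens` value and the prompt handling below are assumptions not taken from this card; adjust them to your setup.

```python
# Recommended sampling parameters from this card, as generation kwargs.
GEN_KWARGS = {
    "do_sample": True,        # sampling must be enabled for the settings below to apply
    "temperature": 0.2,
    "top_p": 0.14,
    "top_k": 12,
    "repetition_penalty": 1.1,
    "max_new_tokens": 512,    # assumption: not specified on the card
}

# Illustrative use with transformers (requires downloading the weights):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("Locutusque/UltraQwen-7B", trust_remote_code=True)
# model = AutoModelForCausalLM.from_pretrained(
#     "Locutusque/UltraQwen-7B", torch_dtype="auto", trust_remote_code=True
# )
# inputs = tok("What is the capital of France?", return_tensors="pt")
# output = model.generate(**inputs, **GEN_KWARGS)
# print(tok.decode(output[0], skip_special_tokens=True))
```

Note that `trust_remote_code=True` is typically needed for Qwen-based checkpoints, since the original Qwen architecture ships custom modeling code.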

License

Please make sure to read the Qwen licensing agreement before using this model.

Model details

Model size: 7.72B params (Safetensors, BF16)

Model tree for Locutusque/UltraQwen-7B

Base model: Qwen/Qwen-7B (this model is a fine-tune of it)
Quantizations: 1 model

Dataset used to train Locutusque/UltraQwen-7B: HuggingFaceH4/ultrachat_200k