---
language:
- en
license: apache-2.0
tags:
- qwen
- qwen2
- composite-model
- causal-lm
- text-generation
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: Composite Qwen2.5-0.5B Model
  results:
  - task:
      type: text-generation
    dataset:
      type: custom
      name: Composite Model Evaluation
    metrics:
    - type: perplexity
      value: N/A
    - type: accuracy
      value: N/A
---
# Composite Qwen2.5-0.5B Model

This is a composite model created by combining layers from different Qwen2.5-0.5B variants.
## Usage

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# Load the composite checkpoint's config, weights, and tokenizer
config = AutoConfig.from_pretrained("ant031525-01")
model = AutoModelForCausalLM.from_pretrained("ant031525-01")
tokenizer = AutoTokenizer.from_pretrained("ant031525-01")
```
## Base Models

This model combines layers from the following models:
- Qwen/Qwen2.5-0.5B
- Qwen/Qwen2.5-0.5B-Instruct
- unsloth/Qwen2.5-0.5B
- cognitivecomputations/Dolphin3.0-Qwen2.5-0.5B
- artificialguybr/Qwen2.5-0.5B-OpenHermes2.5
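
The layer-combination idea can be sketched as follows. This is a minimal illustration using plain dicts in place of real `state_dict`s, with string placeholders standing in for weight tensors; the per-layer source assignments shown are hypothetical, not the actual recipe used to build this checkpoint.

```python
# Toy stand-ins for the state_dicts of two source checkpoints.
# Real Qwen2.5 state_dicts map names like "model.layers.3.mlp.up_proj.weight"
# to tensors; strings are used here so the sketch runs without downloading weights.
base_sd = {f"model.layers.{i}.mlp.weight": f"base-{i}" for i in range(4)}
instruct_sd = {f"model.layers.{i}.mlp.weight": f"instruct-{i}" for i in range(4)}

# Hypothetical recipe: which source checkpoint supplies each transformer layer.
layer_sources = {0: base_sd, 1: instruct_sd, 2: instruct_sd, 3: base_sd}

def combine_layers(layer_sources):
    """Build a composite state_dict by copying each layer from its chosen source."""
    composite = {}
    for layer_idx, source_sd in layer_sources.items():
        prefix = f"model.layers.{layer_idx}."
        for name, value in source_sd.items():
            if name.startswith(prefix):
                composite[name] = value
    return composite

composite_sd = combine_layers(layer_sources)
print(composite_sd["model.layers.1.mlp.weight"])  # prints "instruct-1"
```

With real checkpoints, the same selection would be done over tensors and the result loaded via `model.load_state_dict(composite_sd)`.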