
Samantha Qwen2 7B

Trained on 2x RTX 4090 using QLoRA and FSDP
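A run like this is typically launched through accelerate. A minimal sketch, assuming the Axolotl CLI is installed and the YAML config shown further down is saved locally (the filename is a placeholder):

# Sketch: launch the Axolotl trainer across 2 GPUs with accelerate.
# samantha-qwen2.yml is a placeholder name for the config shown below.
accelerate launch --num_processes 2 -m axolotl.cli.train samantha-qwen2.yml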

Launch Using vLLM

python -m vllm.entrypoints.openai.api_server \
    --model macadeliccc/Samantha-Qwen-2-7B \
    --chat-template ./examples/template_chatml.jinja

Once the server is up, query it with the OpenAI Python client:

from openai import OpenAI
# Set OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

chat_response = client.chat.completions.create(
    model="macadeliccc/Samantha-Qwen-2-7B",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."},
    ]
)
print("Chat response:", chat_response)

Prompt Template

<|im_start|>system
You are a friendly assistant.<|im_end|>
<|im_start|>user
What is the capital of France?<|im_end|>
<|im_start|>assistant
The capital of France is Paris.
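When running the model with transformers directly rather than vLLM, the same ChatML structure can be produced with the tokenizer's chat template. A minimal sketch, assuming the repo's tokenizer ships this template (the messages are illustrative):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/Samantha-Qwen-2-7B")

messages = [
    {"role": "system", "content": "You are a friendly assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Renders the <|im_start|>/<|im_end|> turns shown above and appends the
# assistant header so generation continues from there.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)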

Quants

Built with Axolotl

See axolotl config

axolotl version: 0.4.0

base_model: Qwen/Qwen2-7B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

trust_remote_code: true

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: macadeliccc/opus_samantha
    type: sharegpt
    field: conversations
    conversation: chatml
  - path: uncensored-ultrachat.json
    type: sharegpt
    field: conversations
    conversation: chatml
  - path: openhermes_200k.json
    type: sharegpt
    field: conversations
    conversation: chatml
  - path: opus_instruct.json
    type: sharegpt
    field: conversations
    conversation: chatml

chat_template: chatml
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/lora-out

sequence_len: 2048
sample_packing: false
pad_to_sequence_len:

adapter: qlora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention:

warmup_steps: 250
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
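
Since the config trains with load_in_4bit: true (QLoRA), the model can also be loaded in 4-bit for local inference. A minimal sketch, assuming transformers and bitsandbytes are installed:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization, mirroring load_in_4bit: true above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "macadeliccc/Samantha-Qwen-2-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/Samantha-Qwen-2-7B")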

