---
language:
- en
pipeline_tag: text-generation
tags:
---

# Qwen2-1.5B-Sign

## Introduction

Qwen2-Sign is a text-to-sign translation model based on Qwen2.

## Fine-tuning Details

| Parameter                   | Value  |
|-----------------------------|--------|
| learning_rate               | 5e-05  |
| train_batch_size            | 4      |
| eval_batch_size             | 4      |
| gradient_accumulation_steps | 8      |
| total_train_batch_size      | 32     |
| lr_scheduler_type           | cosine |
| lr_scheduler_warmup_steps   | 100    |
| num_epochs                  | 4      |
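For reference, here is a minimal sketch of how these hyperparameters could map onto the Hugging Face `TrainingArguments` API. The output directory is a placeholder, and the dataset handling and `Trainer` call are omitted; this is not the actual training script.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the table above onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="./qwen2-1.5b-sign",    # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,     # 4 x 8 = 32 total train batch size
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=4,
)
```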

## Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "thundax/Qwen2-1.5B-Sign",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("thundax/Qwen2-1.5B-Sign")

text = "你好,世界!"
text = f'Translate sentence into labels\n{text}\n'
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Strip the prompt tokens so that only the newly generated labels are decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
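For repeated use, the quickstart above can be wrapped in a small helper. The function name `translate` is illustrative, not part of the model's API; it assumes `model` and `tokenizer` have been loaded as shown.

```python
def translate(sentence: str, max_new_tokens: int = 512) -> str:
    """Translate a sentence into sign labels with the loaded model."""
    prompt = f"Translate sentence into labels\n{sentence}\n"
    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
    output_ids = model.generate(inputs.input_ids, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens before decoding
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(translate("你好,世界!"))
```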

## Citation

If you find our work helpful, feel free to cite it.

```bibtex
@software{qwen2-sign,
  author = {thundax},
  title = {qwen2-sign: A Tool for Text to Sign},
  year = {2024},
  url = {https://github.com/thundax-lyp},
}
```