Uploaded model

  • Developed by: lightontech
  • License: apache-2.0
  • Finetuned from model: SeaLLMs/SeaLLM3-7B-Chat

This Qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

To use the GGUF format with llama.cpp, or to run the model in LM Studio, Jan, or other local software, please refer to lightontech/SeaLightSum3_GGUF.
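If you prefer to stay in Python, the GGUF build can also be loaded through the llama-cpp-python bindings. This is a minimal sketch only: the filename below is a placeholder, so check the file list of lightontech/SeaLightSum3_GGUF for the actual quantization you want.

# pip install llama-cpp-python
from llama_cpp import Llama

# NOTE: placeholder filename -- pick the actual GGUF file from the repo
llm = Llama(model_path="./SeaLightSum3-Q4_K_M.gguf", n_ctx=2048)

# For best results, wrap your request in the Alpaca-style template
# shown in the "Run inference" section below.
out = llm("Hello! Please introduce yourself.", max_tokens=256)
print(out["choices"][0]["text"])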

How to use

For a faster start, check out the example notebook here

Install unsloth

This sample installs the Colab build of Unsloth (unsloth[colab-new]); you can switch to the plain unsloth package if you prefer.

pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps "xformers<0.0.27" "trl<0.9.0" peft accelerate bitsandbytes

Run inference

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

from unsloth import FastLanguageModel

max_seq_length = 2048  # choose to fit your use case; Unsloth supports RoPE scaling internally
dtype = None           # None = auto-detect (bfloat16 on Ampere+ GPUs, float16 otherwise)
load_in_4bit = True    # 4-bit quantization to reduce memory usage

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lightontech/SeaLightSum3-Adapter", # the fine-tuned adapter
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model) # Unsloth has 2x faster inference!

# alpaca_prompt is the template defined above
inputs = tokenizer(
    [
        alpaca_prompt.format(
            # instruction: "Translate the following passage into Vietnamese:"
            # followed by the English source text
            "Dịch đoạn văn sau sang tiếng Việt:\nOnce you have trained a model using either the SFTTrainer, PPOTrainer, or DPOTrainer, you will have a fine-tuned model that can be used for text generation. In this section, we’ll walk through the process of loading the fine-tuned model and generating text. If you need to run an inference server with the trained model, you can explore libraries such as text-generation-inference.",
            "", # input
            "", # output - leave this blank for generation!
        )
    ], return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)  # streams tokens to stdout as they are generated
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 1000)
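If you want the complete decoded output instead of streaming it token by token, you can decode the generated ids directly. A small sketch reusing the same model, tokenizer, and inputs as above:

outputs = model.generate(**inputs, max_new_tokens = 1000)
print(tokenizer.batch_decode(outputs, skip_special_tokens = True)[0])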