
ruGPT-3.5 13B GGML

Welcome to the adapter-only GGML version of ruGPT-3.5 13B, built on top of ruGPT-3.5-13B.

📌 Important: This model was trained with settings identical to GigaSaiga, but incorporates an additional dataset.

🔗 Training code is here.

Code sample
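
The sample below uses the llm-rs Python bindings for GGML inference together with the original ruGPT-3.5 tokenizer from transformers; pip install llm-rs transformers should provide both (assuming a standard Python environment).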

from llm_rs import AutoModel, GenerationConfig as GConfig  # GGML inference via the llm-rs bindings
from transformers import AutoTokenizer, GenerationConfig

MODEL_NAME = "evilfreelancer/ruGPT-3.5-13B-ggml"
DEFAULT_MESSAGE_TEMPLATE = "<s>{role}\n{content}</s>\n"
# System prompt: "You are ruGPT-3.5, a Russian-language automatic assistant. You talk to people and help them."
DEFAULT_SYSTEM_PROMPT = "Ты — ruGPT-3.5, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."

class Conversation:
    def __init__(
        self,
        message_template=DEFAULT_MESSAGE_TEMPLATE,
        system_prompt=DEFAULT_SYSTEM_PROMPT,
        start_token_id=2,
        bot_token_id=46787
    ):
        self.message_template = message_template
        self.start_token_id = start_token_id
        self.bot_token_id = bot_token_id
        self.messages = [{
            "role": "system",
            "content": system_prompt
        }]

    def get_start_token_id(self):
        return self.start_token_id

    def get_bot_token_id(self):
        return self.bot_token_id

    def add_user_message(self, message):
        self.messages.append({
            "role": "user",
            "content": message
        })

    def add_bot_message(self, message):
        self.messages.append({
            "role": "bot",
            "content": message
        })

    def get_prompt(self, tokenizer):
        final_text = ""
        for message in self.messages:
            message_text = self.message_template.format(**message)
            final_text += message_text
        final_text += tokenizer.decode([self.start_token_id, self.bot_token_id])
        return final_text.strip()


def generate(model, tokenizer, prompt, generation_config):
    # llm-rs generates the completion directly from the prompt string
    output = model.generate(
        prompt=prompt,
        generation_config=generation_config
    ).text
    # Re-encode the reply and decode it again to strip special tokens
    output_ids = tokenizer(output, return_tensors="pt")["input_ids"][0]
    output = tokenizer.decode(output_ids, skip_special_tokens=True)
    return output.strip()

# Load the quantized GGML model
model = AutoModel.from_pretrained(
    MODEL_NAME,
    model_file="ruGPT-3.5-13B-lora-q4_0.bin",
)

# Load the base model's tokenizer and default generation settings
tokenizer = AutoTokenizer.from_pretrained('ai-forever/ruGPT-3.5-13B', use_fast=False)
generation_config = GenerationConfig.from_pretrained('ai-forever/ruGPT-3.5-13B')

# Copy the base model's sampling parameters into the llm-rs generation config
# (GenerationConfig exposes these as attributes, not dict keys)
g_config = GConfig()
g_config.top_p = generation_config.top_p
g_config.top_k = generation_config.top_k
g_config.repetition_penalty = generation_config.repetition_penalty
g_config.temperature = generation_config.temperature
g_config.max_new_tokens = 200

# Start conversation
conversation = Conversation()
while True:
    user_message = input("User: ")

    if user_message.strip() == "/reset":
        conversation = Conversation()
        print("History reset completed!")
        continue

    if user_message.strip() == "":
        continue

    conversation.add_user_message(user_message)
    prompt = conversation.get_prompt(tokenizer)
    output = generate(
        model=model,
        tokenizer=tokenizer,
        prompt=prompt,
        generation_config=g_config
    )
    conversation.add_bot_message(output)
    print("ruGPT-3.5:", output)
    print()
    print("==============================")
    print()
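
For reference, Conversation.get_prompt renders the history with DEFAULT_MESSAGE_TEMPLATE, so a one-turn conversation produces roughly the following (assuming the default token ids 2 and 46787 decode to <s> and bot in the ruGPT-3.5 vocabulary):

<s>system
Ты — ruGPT-3.5, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.</s>
<s>user
Привет! Почему трава зелёная?</s>
<s>bot

The model then completes the text after the final bot marker, and generate() strips any special tokens from the reply.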

Examples

User: Привет! Почему трава зелёная? (Hi! Why is grass green?)
ruGPT-3.5: Трава зелёная благодаря хлорофиллу - пигменту растений, который поглощает световые волны определенной длины и преобразует их в энергию химических связей. Хлорофилл также обеспечивает растения питательными веществами из почвы через фотосинтез. (Grass is green thanks to chlorophyll, a plant pigment that absorbs light of certain wavelengths and converts it into chemical-bond energy. Chlorophyll also supplies plants with nutrients from the soil through photosynthesis.)
User: Придумай стишок про нейросети. (Make up a short poem about neural networks.)
ruGPT-3.5: Нейросеть – это чудо науки,
Она способна решать любые задачи.
И если ты хочешь стать программистом,
То лучше всего изучать нейронные сети.
(A neural network is a marvel of science, / It can solve any task. / And if you want to become a programmer, / It is best to study neural networks.)
User: Высота самой высокой горы на Земле? (How tall is the highest mountain on Earth?)
ruGPT-3.5: Самая высокая гора на Земле находится в Гималаях и называется Эверест (8848 метров). (The highest mountain on Earth is in the Himalayas and is called Everest, 8,848 metres.)
User: Сколько максимум лет может прожить галопогосская черепаха? (What is the maximum lifespan of a Galápagos tortoise?)
ruGPT-3.5: Галопогосские черепахи могут жить до 100 лет и более. Они являются одними из самых долгоживущих животных на планете. (Galápagos tortoises can live up to 100 years or more. They are among the longest-lived animals on the planet.)

📚 Training Datasets

The datasets used to train this model are the same as those used for Saiga-2; see the Saiga-2 model card for the complete list.

🛠 Training Procedure

The following bitsandbytes quantization config was used during training:

  • quant_method: bitsandbytes
  • load_in_8bit: True
  • load_in_4bit: False
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: fp4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float32
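
For reference, here is the same config expressed as a transformers BitsAndBytesConfig, a minimal sketch assuming the transformers and bitsandbytes versions listed below:

import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,   # 8-bit quantization was active during training
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",           # inert while load_in_4bit=False
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)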

⚙️ Framework Versions

Ensure you have the following framework versions for compatibility:

  • PyTorch 2.1.0
  • PEFT 0.5.0
  • bitsandbytes 0.41.1
  • transformers 4.34.0
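
A single pinned install along these lines should reproduce the environment (a sketch; adjust for your platform):

pip install torch==2.1.0 peft==0.5.0 bitsandbytes==0.41.1 transformers==4.34.0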

