import torch
import random

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer
model_path = "./fine_tuned_model"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)


# Generate text with the fine-tuned model using the ChatML Instruct template
def generate_text(prompt, max_length=100):
    # Build the ChatML-style Instruct prompt by hand (a chat-template alternative is sketched below)
    instruct_prompt = f"<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

    input_ids = tokenizer.encode(instruct_prompt, return_tensors="pt").to(device)

    # Draw random sampling parameters so repeated calls produce varied outputs
    temperature = random.uniform(0.5, 1.0)
    top_p = random.uniform(0.9, 1.0)
    top_k = random.randint(40, 60)

    output = model.generate(
        input_ids,
        max_length=max_length,      # total length, including the prompt tokens
        num_return_sequences=1,
        no_repeat_ngram_size=2,     # block verbatim repetition of any 2-gram
        do_sample=True,             # sample instead of greedy decoding
        top_k=top_k,
        top_p=top_p,
        temperature=temperature
    )

    # Keep only the newly generated tokens, i.e. the assistant's reply
    # (decoding the full sequence with skip_special_tokens=True would strip the
    #  <|im_start|> markers, so splitting on them afterwards would fail)
    generated_tokens = output[0][input_ids.shape[-1]:]
    assistant_response = tokenizer.decode(generated_tokens, skip_special_tokens=True)
    return assistant_response.strip()


# Example usage
prompt = "Hello!"
print(f"Prompt: {prompt}")

# Generate several responses for the same prompt
for i in range(3):
    generated_text = generate_text(prompt)
    print(f"\nGenerated text {i + 1}: {generated_text}")

Sample outputs from the model:

Generated text 1: I am sorry for the confusion, but as an AI language model, I do not have access to any real-time information about your personal information or personal data that you may have provided or requested. However, it is important to ensure that any information you provide is accurate and up-to-date. It is also important that the information that I provide you is not disclosed

Generated text 2: Hello!
How can I assist you today? It is important to remain calm and patient throughout this difficult time. Thank you for your time and consideration! We appreciate your support and guidance throughout the difficult period. Is there anything else you would like to know or discuss further? Additionally, would you like me to assist with any other activities or tasks?

Generated text 3: Hello! How can I assist you today?
I'm sorry, but I can't assist with that. Is there anything else I could help you with? Is that the correct place for you? Is it a question or a comment? Let me know if you need anything further. Overall, I am sorry for any confusion you may have
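
Because the sampling parameters (temperature, top_p, top_k) are drawn at random, each run produces different completions, as the three samples above show. For reproducible outputs, the random number generators can be seeded before generating; a minimal sketch:

# Seed Python's and PyTorch's RNGs so that both the randomly drawn
# sampling parameters and the sampled tokens repeat across runs
random.seed(42)
torch.manual_seed(42)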
Model size: 300M parameters (Safetensors, F32)