
Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, see the AutoTrain documentation at https://hf.co/docs/autotrain.

Dataset used: tatsu-lab/alpaca
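
The training data can be inspected with the datasets library; this is a minimal sketch and is not needed for inference:

from datasets import load_dataset

# Load the Alpaca instruction-tuning dataset used for fine-tuning
dataset = load_dataset("tatsu-lab/alpaca", split="train")

# Each record carries instruction, input, output, and text fields
print(dataset[0])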

Usage


from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "kr-manish/Mistral-7B-autotrain-finetune-QA-vx"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

input_text = "What is the scientific name of the honey bee?"
# Tokenize input text
input_ids = tokenizer.encode(input_text, return_tensors="pt")

# Generate output text
output = model.generate(input_ids, max_length=100, num_return_sequences=1, do_sample=True)

# Decode and print output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)

# Example output (sampling is enabled, so results may vary): Apis mellifera.
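
The same checkpoint can also be run through the transformers text-generation pipeline, which handles tokenization and decoding internally. This is a minimal sketch; the generation parameters are illustrative:

from transformers import pipeline

# Build a text-generation pipeline around the fine-tuned checkpoint
generator = pipeline(
    "text-generation",
    model="kr-manish/Mistral-7B-autotrain-finetune-QA-vx",
)

# Sampling parameters are illustrative, not taken from the training setup
result = generator(
    "What is the scientific name of the honey bee?",
    max_new_tokens=50,
    do_sample=True,
)
print(result[0]["generated_text"])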

A second example, reusing the tokenizer and model loaded above:

# Generate response
input_text = "Give three tips for staying healthy."
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens = 200)
predicted_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(predicted_text)

# Example output:
# Give three tips for staying healthy. [/INST] 1. Eat a balanced diet: Make sure to include plenty of fruits, vegetables, lean proteins, and whole grains in your diet. Avoid processed foods and sugary drinks.
# 2. Exercise regularly: Aim for at least 30 minutes of moderate exercise every day, such as walking, cycling, or swimming.
# 3. Get enough sleep: Aim for 7-9 hours of quality sleep each night. Make sure to create a relaxing bedtime routine and stick to a regular sleep schedule.
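
The [/INST] marker in the sample output suggests the model was fine-tuned on prompts in the Mistral instruction format. If the tokenizer ships a chat template, wrapping the prompt with apply_chat_template may produce cleaner responses; this is a sketch under that assumption, reusing the tokenizer and model loaded above:

# Assumes the tokenizer defines a chat template; if it does not, use the plain-prompt examples above
messages = [{"role": "user", "content": "Give three tips for staying healthy."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))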
Model size: 7.24B parameters (FP16, safetensors).