
CSGO Coach Mia, fine-tuned from mistralai/Mistral-7B-Instruct-v0.2

Sample usage:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Specify the path to your .gguf file
model_path = '/content/finetuned8b/finetuned8b.Q5_K_M.gguf'

# Instantiate the Llama model
llm = Llama(model_path=model_path)

prompt = "Coach Mia, help me with aiming"

# Generation kwargs
generation_kwargs = {
    "max_tokens": 200,
    "stop": "[INST]",   # Stop generating at the next instruction tag
    "echo": False,      # Do not echo the prompt in the output
    "top_k": 1          # Essentially greedy decoding; set this > 1 for sampling decoding
}

# "res" is short for "result"
res = llm(prompt, **generation_kwargs)

# Unpack the generated text from the LLM response dictionary and print it
print(res["choices"][0]["text"])
```
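The `hf_hub_download` import is only needed when the .gguf file is not already on local disk. As a rough sketch of that alternative (the `repo_id` below is a placeholder, not this model's actual repository coordinates):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder repo_id -- substitute the repository that actually hosts the .gguf file
model_path = hf_hub_download(
    repo_id="your-username/finetuned8b-gguf",
    filename="finetuned8b.Q5_K_M.gguf",
)

llm = Llama(model_path=model_path)
```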

Sample output:

```
100% accuracy. [/INST] Aiming is a crucial aspect of CS:GO. Let's start by analyzing your sensitivity settings and crosshair placement. We can also run some aim training drills to improve your precision.
```
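The stray `[/INST]` in the output and the `[INST]` stop string both come from Mistral-Instruct's prompt format, which wraps each user turn in `[INST] ... [/INST]` tags. A minimal sketch of formatting the prompt that way before generation (this wrapper is illustrative and not part of the original example):

```python
# Wrap the user message in Mistral-Instruct's [INST] ... [/INST] tags
user_message = "Coach Mia, help me with aiming"
prompt = f"[INST] {user_message} [/INST]"

# Reuses the `llm` object created in the sample usage above
res = llm(prompt, max_tokens=200, stop="[INST]", echo=False)
print(res["choices"][0]["text"].strip())
```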

GGUF model details:

- Model size: 7.24B params
- Architecture: llama