Uploaded model

  • Developed by: anamikac2708
  • License: cc-by-nc-4.0
  • Finetuned from model: unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library using the open-sourced finance dataset https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset, developed for finance applications by the FinLang team.

The model was then converted to Q8_0 GGUF using llama.cpp (https://github.com/ggerganov/llama.cpp/). This project is for research purposes only. Third-party datasets may be subject to additional terms and conditions under their associated licenses.
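For reference, a typical conversion flow with llama.cpp looks like the sketch below. The checkpoint path and output file name are hypothetical, and exact script names and flags vary between llama.cpp versions:

# Hypothetical paths: convert a merged Hugging Face checkpoint directly to Q8_0
python convert-hf-to-gguf.py ./merged-finetuned-model \
    --outfile Llama3-8b-finetuned-investopedia-Q8_0.gguf \
    --outtype q8_0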

How to Get Started with the Model

  1. Install llama-cpp-python (the CMAKE_ARGS flag builds CUDA support; for CPU-only inference a plain pip install llama-cpp-python also works, and recent llama.cpp builds use -DGGML_CUDA=on instead):

  ! CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python 

  2. Run the model:

from transformers import AutoTokenizer
from llama_cpp import Llama

# The tokenizer is only used to render the chat template into a prompt string
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Meta-Llama-3-8B')

example = [
    {'content': 'You are a financial expert and you can answer any questions related to finance. You will be given a context and a question. Understand the given context and\n        try to answer. Users will ask you questions in English and you will generate answer based on the provided CONTEXT.\n        CONTEXT:\n        D. in Forced Migration from the University of the Witwatersrand (Wits) in Johannesburg, South Africa; A postgraduate diploma in Folklore & Cultural Studies at Indira Gandhi National Open University (IGNOU) in New Delhi, India; A Masters of International Affairs at Columbia University; A BA from Barnard College at Columbia University\n', 'role': 'system'},
    {'content': ' In which universities did the individual obtain their academic qualifications?\n', 'role': 'user'},
    {'content': ' University of the Witwatersrand (Wits) in Johannesburg, South Africa; Indira Gandhi National Open University (IGNOU) in New Delhi, India; Columbia University; Barnard College at Columbia University.', 'role': 'assistant'},
]

# Keep only the system and user turns; the assistant turn is the reference answer
prompt = tokenizer.apply_chat_template(example[:2], tokenize=False, add_generation_prompt=True)

# Download the Q8_0 GGUF file from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="anamikac2708/Llama3-8b-finetuned-investopedia-q8_0_gguf",
    filename="*Q8_0.gguf",
    verbose=False
)

output = llm(
    prompt,
    max_tokens=256,       # generate up to 256 tokens
    stop=["<|im_end|>"],  # end-of-turn marker used during fine-tuning
    echo=True,            # include the prompt in the returned text
)

print(output['choices'][0]['text'])
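Alternatively, llama-cpp-python can format the conversation itself via create_chat_completion. A minimal sketch reusing the example above, assuming the GGUF metadata carries a usable chat template (otherwise a chat_format argument would be needed):

# Let llama-cpp-python apply the chat template; reuses `llm` and `example`
response = llm.create_chat_completion(
    messages=example[:2],   # system + user turns only
    max_tokens=256,
    stop=["<|im_end|>"],
)
print(response['choices'][0]['message']['content'])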

Evaluation

Coming soon!

Bias, Risks, and Limitations

This model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not include any moderation mechanisms. We are exploring ways to make the model respect guardrails so it can be deployed in environments that require moderated outputs.

License

Since non-commercial datasets were used for fine-tuning, we release this model under cc-by-nc-4.0.
