---
license: llama3
datasets:
  - taddeusb90/finbro-v0.1.0
language:
  - en
library_name: transformers
tags:
  - finance
---

Finbro v0.1.0: Dolphin 2.9 Llama 3 8B Model with a 1M-Token Context Window

Model Description

The Finbro Dolphin 2.9 Llama 3 8B model is a language model optimized for financial applications. The model is uncensored and trained for obedience; it aims to enhance financial analysis, automate data extraction from financial documents, and improve financial literacy for users at every level of expertise. It supports a 1M-token context window. This release is an early preview: future releases will follow periodically, each improving on the last.


Training:

The model is still training. I will share new incremental releases as it improves, so you have time to experiment with it along the way.

(Training loss and evaluation loss plots omitted.)

What's Next?

  • Extended Capability: Continue training the 8B model, which has not yet converged (I have only scratched the surface here), then scale up to a 70B model for deeper insights and broader financial applications.
  • Dataset Expansion: Continuous enhancement by integrating more diverse and comprehensive real and synthetic financial data.
  • Advanced Financial Analysis: Future versions will support complex financial decision-making processes by interpreting and analyzing financial data within agentive workflows.
  • Incremental Improvements: Regular updates are made to increase the model's efficiency and accuracy and extend its capabilities in financial tasks.

Model Applications

  • Information Extraction: Automates the process of extracting valuable data from unstructured financial documents.
  • Financial Literacy: Provides explanations of financial documents at various levels, making financial knowledge more accessible.
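As a sketch of the information-extraction use case, the snippet below builds an extraction prompt and parses a JSON reply. The field names, prompt wording, and the sample reply are illustrative assumptions, not the model's actual behavior:

```python
import json

def build_extraction_prompt(document: str, fields: list[str]) -> str:
    """Ask the model to return the requested fields as a single JSON object."""
    field_list = ", ".join(fields)
    return (
        "Extract the following fields from the financial document below and "
        f"reply with a single JSON object: {field_list}.\n\n"
        f"Document:\n{document}"
    )

def parse_extraction_reply(reply: str) -> dict:
    """Pull the first {...} span out of the model's reply and parse it as JSON."""
    start = reply.index("{")
    end = reply.rindex("}") + 1
    return json.loads(reply[start:end])

prompt = build_extraction_prompt(
    "ACME Corp reported Q2 revenue of $12.4M, up 8% year over year.",
    ["company", "period", "revenue"],
)
# Example reply the model might produce (illustrative, not a real output):
reply = 'Here is the data: {"company": "ACME Corp", "period": "Q2", "revenue": "$12.4M"}'
print(parse_extraction_reply(reply)["revenue"])  # → $12.4M
```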

How to Use

Here is how to load and use the model in your Python projects:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "taddeusb90/finbro-v0.1.0-dolphin-2.9-llama-3-8B-instruct-1m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

text = "Your financial query here"
inputs = tokenizer(text, return_tensors="pt")

# Pass the attention mask along with the input IDs and cap the response length.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
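Dolphin 2.9 derivatives are generally tuned on the ChatML conversation format. If the tokenizer ships a chat template, prefer `tokenizer.apply_chat_template`; the helper below is only a manual sketch of that format, for illustration (the system and user strings are placeholders):

```python
# Manual sketch of the ChatML prompt format used by Dolphin-family models.
# In practice, use tokenizer.apply_chat_template when the model provides one.
def to_chatml(system: str, user: str) -> str:
    """Wrap a system message and a user message in ChatML markers."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"  # leave open for the model to complete
    )

prompt = to_chatml(
    "You are a helpful financial analyst.",
    "Summarize the key risks in this 10-K filing.",
)
print(prompt)
```

The string returned by `to_chatml` would replace the plain `text` variable in the snippet above before tokenization.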

Training Data

The Finbro Llama 3 8B model was trained on the Finbro Dataset, an extensive compilation of over 300,000 entries sourced from Investopedia and Sujet Finance. This dataset includes structured Q&A pairs, financial reports, and a variety of financial tasks pooled from multiple datasets.

The dataset is available on Hugging Face at taddeusb90/finbro-v0.1.0.

This dataset will be extended to contain real and synthetic data on a wide range of financial tasks such as:

  • Investment valuation
  • Value investing
  • Security analysis
  • Derivatives
  • Asset and portfolio management
  • Financial information extraction
  • Quantitative finance
  • Econometrics
  • Applied computer science in finance and much more

Notice

You are advised to implement your own alignment layer and guard rails before exposing the model as a service or using it in production: it will be highly compliant with any request, even unethical ones. Please read Eric Hartford's blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model; exercise caution and use it at your own risk. I assume no responsibility for any losses incurred through its use.

Licensing

This model is released under the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.

Citation

If you use this model in your research, please cite it as follows:

@misc{finbro-v0.1.0-dolphin-2.9-llama-3-8B-instruct-1m,
    author = {Taddeus Buica},
    title = {Finbro Dolphin 2.9 Llama 3 8B Model for Financial Analysis},
    year = {2024},
    journal = {Hugging Face repository},
    howpublished = {\url{https://huggingface.co/taddeusb90/finbro-v0.1.0-dolphin-2.9-llama-3-8B-instruct-1m}}
}

Special thanks to the folks from AI@Meta and Cognitive Computations for powering this project with their awesome models.

Contact

If you would like to connect, share ideas or feedback, help support bigger models, or develop your own custom finance model on your private dataset, let's talk on LinkedIn.

References

[1] Llama 3 Model Card, AI@Meta, 2024

[2] Dolphin 2.9, Cognitive Computations, 2024

[3] Sujet Finance Dataset

[4] Dataset Card for investopedia-instruction-tuning