---
license: llama3
datasets:
  - TimKoornstra/financial-tweets-sentiment
  - takala/financial_phrasebank
language:
  - en
pipeline_tag: text-classification
tags:
  - text-classification
  - sequence-classification
widget:
  - text: I liked this movie
    output:
      - label: POSITIVE
        score: 0.8
      - label: NEGATIVE
        score: 0.2
---

# Model Card for FinLlama-3-8B

This model card provides details for the FinLlama-3-8B model, which is fine-tuned for sentiment analysis on financial tweets and phrases.

## Model Details

### Model Description

FinLlama-3-8B is a fine-tuned version of the Llama-3-8B model specifically tailored for sentiment analysis in the financial domain. It can classify text into three sentiment categories: positive, neutral, and negative.

- Model type: Sequence Classification
- Language(s) (NLP): English
- License: llama3
- Finetuned from model: Llama-3-8B

## Uses

### Direct Use

FinLlama-3-8B can be directly used for sentiment analysis on financial text, providing sentiment labels (positive, neutral, negative) for given inputs.
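A quick way to try the model directly is through the `transformers` text-classification pipeline. The snippet below is a minimal sketch; it assumes the repository's tokenizer and label mapping load without extra configuration, and the printed labels/scores are illustrative only.

```python
from transformers import pipeline

# Load the model as a text-classification pipeline (assumes the repo's config
# provides the id2label mapping for positive / neutral / negative).
classifier = pipeline("text-classification", model="roma2025/FinLlama-3-8B")

result = classifier("The company raised its full-year guidance after a strong quarter.")
print(result)  # e.g. [{'label': 'positive', 'score': 0.91}] -- exact labels and scores depend on the model config
```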

### Downstream Use

The model can be integrated into larger financial analysis systems to provide sentiment insights as part of broader financial data analytics.
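As an illustration of that kind of integration, the hypothetical sketch below scores a batch of headlines and aggregates the predicted labels; the headlines and label names are assumptions, not part of this card.

```python
from collections import Counter
from transformers import pipeline

# Hypothetical downstream step: summarize sentiment over a batch of headlines
# as one stage of a larger analytics job. Label names depend on the model config.
classifier = pipeline("text-classification", model="roma2025/FinLlama-3-8B")

headlines = [
    "Shares fall after the company cuts its revenue outlook.",
    "The central bank holds rates steady, as expected.",
    "Quarterly profit beats analyst estimates.",
]

labels = [pred["label"] for pred in classifier(headlines)]
print(Counter(labels))  # e.g. Counter({'negative': 1, 'neutral': 1, 'positive': 1})
```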

### Out-of-Scope Use

This model is not suitable for non-financial text sentiment analysis or for languages other than English.

## Bias, Risks, and Limitations

### Recommendations

Users should be aware of potential biases in the training data, which may affect the model's performance on certain subpopulations or topics. Continuous monitoring and evaluation are recommended.

## How to Get Started with the Model

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("roma2025/FinLlama-3-8B")
tokenizer = AutoTokenizer.from_pretrained("roma2025/FinLlama-3-8B")

def get_sentiment_score(model, tokenizer, text):
    # Tokenize the input and run a forward pass without tracking gradients.
    inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=75)
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
    # Convert logits to class probabilities and pick the most likely class index.
    probabilities = F.softmax(logits, dim=-1)
    sentiment_score = torch.argmax(probabilities, dim=-1).item()
    return sentiment_score, probabilities

# Example usage
text = (
    "Determine the sentiment of the financial news as negative, neutral or positive:\n"
    "The stock market is going up!\n"
    "Sentiment:"
)

sentiment_score, probabilities = get_sentiment_score(model, tokenizer, text)
```
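
The returned `sentiment_score` is a class index. If the repository's config includes an `id2label` mapping (not shown in this card), it can be translated into a readable label; the lines below are a sketch under that assumption.

```python
# Map the predicted class index back to a label, assuming the model config
# carries an id2label mapping (e.g. {0: 'negative', 1: 'neutral', 2: 'positive'}).
label = model.config.id2label[sentiment_score]
print(label, probabilities.squeeze().tolist())
```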