---
license: llama3
datasets:
- TimKoornstra/financial-tweets-sentiment
- takala/financial_phrasebank
language:
- en
pipeline_tag: text-classification
tags:
- text-classification
- sequence-classification
widget:
- text: "I liked this movie"
output:
- label: POSITIVE
score: 0.8
- label: NEGATIVE
score: 0.2
---
# Model Card for FinLlama-3-8B
This model card provides details for the FinLlama-3-8B model, which is fine-tuned for sentiment analysis on financial tweets and phrases.
## Model Details
### Model Description
FinLlama-3-8B is a fine-tuned version of the Llama-3-8B model specifically tailored for sentiment analysis in the financial domain. It can classify text into three sentiment categories: positive, neutral, and negative.
- **Model type:** Sequence Classification
- **Language(s) (NLP):** English
- **License:** llama3
- **Finetuned from model:** Llama-3-8B
## Uses
### Direct Use
FinLlama-3-8B can be directly used for sentiment analysis on financial text, providing sentiment labels (positive, neutral, negative) for given inputs.
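For example, here is a minimal sketch using the `transformers` pipeline API (the exact label strings returned depend on the `id2label` mapping stored in this model's config, so the ones shown in the comment are assumptions):

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub.
classifier = pipeline("text-classification", model="roma2025/FinLlama-3-8B")

# Classify a single financial sentence; the label string comes from the
# model's own config, e.g. something like "positive" / "neutral" / "negative".
print(classifier("The company reported record quarterly earnings."))
```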
### Downstream Use
The model can be integrated into larger financial analysis systems to provide sentiment insights as part of broader financial data analytics.
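As an illustrative sketch of such an integration (the headlines and field names here are hypothetical), sentiment predictions can be attached to a batch of documents before they feed into a wider analytics pipeline:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="roma2025/FinLlama-3-8B")

headlines = [
    "Shares fell 4% after the earnings miss.",
    "The central bank left rates unchanged.",
    "Revenue grew 20% year over year.",
]

# Batch-classify and pair each headline with its predicted sentiment,
# ready to be joined with other financial data downstream.
for text, prediction in zip(headlines, classifier(headlines)):
    print({"text": text, **prediction})
```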
### Out-of-Scope Use
This model is not suitable for non-financial text sentiment analysis or for languages other than English.
## Bias, Risks, and Limitations
### Recommendations
Users should be aware of potential biases in the training data, which may affect the model's performance on certain subpopulations or topics. Continuous monitoring and evaluation are recommended.
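One way to follow this recommendation, sketched here with purely illustrative data, is to track accuracy separately on labeled slices of incoming text so that gaps between topics or subpopulations become visible:

```python
# (prediction, gold label) pairs grouped by topic slice -- illustrative only.
labeled_slices = {
    "earnings": [("positive", "positive"), ("negative", "neutral")],
    "macro": [("neutral", "neutral"), ("positive", "negative")],
}

# Per-slice accuracy surfaces performance gaps that a single overall score hides.
for slice_name, pairs in labeled_slices.items():
    correct = sum(pred == gold for pred, gold in pairs)
    print(f"{slice_name}: {correct / len(pairs):.2f} accuracy")
```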
## How to Get Started with the Model
```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("roma2025/FinLlama-3-8B")
tokenizer = AutoTokenizer.from_pretrained("roma2025/FinLlama-3-8B")

def get_sentiment_score(model, tokenizer, text):
    # Tokenize the input, truncating to at most 75 tokens.
    inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=75)
    with torch.no_grad():
        outputs = model(**inputs)
    logits = outputs.logits
    probabilities = F.softmax(logits, dim=-1)
    # Predicted class index (position of the highest-probability label).
    sentiment_score = torch.argmax(probabilities, dim=-1).item()
    return sentiment_score, probabilities

# Example usage
text = (
    "Determine the sentiment of the financial news as negative, neutral or positive:\n"
    "The stock market is going up!\n"
    "Sentiment:"
)
sentiment_score, probabilities = get_sentiment_score(model, tokenizer, text)
```
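The integer returned by `get_sentiment_score` is a class index; the human-readable label comes from the model config's `id2label` mapping (check it rather than assuming a particular negative/neutral/positive ordering):

```python
label = model.config.id2label[sentiment_score]
print(label, probabilities.tolist())
```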