---
widget:
- text: "NEW YORK (TheStreet) -- Microsoft (MSFT) - Get Free Report had its price target raised to $39 from $38 by analysts at Jefferies who maintained their 'underperform' rating. In Thursday's pre-market trading session shares are advancing 1.24% to $44.79. This action comes as Microsoft said yesterday that it will eliminate up to 7,800 jobs mostly in its phone unit as it looks to restructure its phone hardware business that has been struggling, the New York Times reports."
example_title: "MSFT news (positive)"
- text: "Placeholder text."
example_title: "IBM news (neutral)"
- text: "Unilever PLC (NYSE: UL)’s stock price has gone decline by -0.61 in comparison to its previous close of 54.27, however, the company has experienced a -1.61% decrease in its stock price over the last five trading days. The Wall Street Journal reported on 10/24/22 that Dry Shampoo Recalled Due to Potential Cancer-Causing Ingredient."
example_title: "UL news (negative)"
---
# Fine-tuned DistilBERT model for stock news classification
This is a HuggingFace model that uses DistilBERT, a distilled version of BERT (Bidirectional Encoder Representations from Transformers), to perform text classification. It was fine-tuned on 50,000 stock news articles using the HuggingFace adapter of Kern AI refinery.
BERT is a state-of-the-art pre-trained language model that encodes both the left and right context of each word in a sentence, allowing it to capture complex semantic and syntactic information; DistilBERT is a smaller, faster distilled variant that retains most of BERT's accuracy.
## Features
- The model handles a range of text classification tasks and is particularly suited to sentiment classification of stock and finance news.
- The model can accept either single sentences or sentence pairs as input, and output a probability distribution over the predefined classes.
- The model can be fine-tuned on custom datasets and labels using the HuggingFace Trainer API or the PyTorch Lightning integration.
- The model is currently supported by the PyTorch framework and can be easily deployed on various platforms using the HuggingFace Pipeline API or the ONNX Runtime.
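The probability distribution mentioned above is produced by applying a softmax to the model's raw logits. A minimal sketch of that step in plain Python (the logit values and the three sentiment labels here are illustrative assumptions, not the checkpoint's actual label mapping):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three sentiment classes (illustrative values only)
labels = ["negative", "neutral", "positive"]
logits = [-1.2, 0.3, 2.4]
probs = softmax(logits)

# The predicted label is the argmax of the distribution
best = max(range(len(labels)), key=lambda i: probs[i])
print(labels[best], round(probs[best], 3))
```

The `score` field returned by the pipeline in the usage example below is exactly this: the softmax probability of the highest-scoring class.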
## Usage
To use the model, you need to install the HuggingFace Transformers library:
```bash
pip install transformers
```
Then you can load the model and the tokenizer from the HuggingFace Hub:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("KernAI/stock-news-destilbert")
tokenizer = AutoTokenizer.from_pretrained("KernAI/stock-news-destilbert")
```
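If you prefer not to use the pipeline, the same classification can be done with a manual forward pass. This is a sketch, assuming the checkpoint above; the label names come from the model's own `config.id2label` mapping rather than being hardcoded:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned checkpoint and its tokenizer from the Hub
model = AutoModelForSequenceClassification.from_pretrained("KernAI/stock-news-destilbert")
tokenizer = AutoTokenizer.from_pretrained("KernAI/stock-news-destilbert")

text = "Shares are advancing 1.24% in pre-market trading."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Inference only, so no gradients are needed
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax turns the raw logits into a probability distribution over the classes
probs = torch.softmax(logits, dim=-1)[0]
predicted = model.config.id2label[int(probs.argmax())]
print(predicted, float(probs.max()))
```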
To classify a single sentence or a sentence pair, you can use the HuggingFace Pipeline API:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
result = classifier("This is a positive sentence.")
print(result)
# [{'label': 'POSITIVE', 'score': 0.9998656511306763}]
```