ElKulako committed
Commit 2e1ec4c
1 Parent(s): 08c9b7f

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -24,7 +24,7 @@ CryptoBERT is a pre-trained NLP model to analyse the language and sentiments of
 ## Classification Training
 The model was trained on the following labels: "Bearish" : 0, "Neutral": 1, "Bullish": 2
 
- CryptoBERT's sentiment classification head was fine-tuned on a balanced dataset of 2M labelled StockTwits posts, bootstrapped from [ElKulako/stocktwits-crypto](https://huggingface.co/datasets/ElKulako/stocktwits-crypto).
+ CryptoBERT's sentiment classification head was fine-tuned on a balanced dataset of 2M labelled StockTwits posts, sampled from [ElKulako/stocktwits-crypto](https://huggingface.co/datasets/ElKulako/stocktwits-crypto).
 
 CryptoBERT was trained with a max sequence length of 128. Technically, it can handle sequences of up to 514 tokens, however, going beyond 128 is not recommended.
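
For context, a minimal usage sketch of the classification setup described in the changed section. The repo id `ElKulako/cryptobert` is assumed from the author namespace (it is not stated in this diff), and the `max_length=128` truncation mirrors the training setting quoted above; adjust both if the actual model card differs.

```python
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TextClassificationPipeline,
)

# Assumed repo id, inferred from the author namespace; not stated in this diff.
model_name = "ElKulako/cryptobert"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Truncate at 128 tokens to match the max sequence length used during training.
pipe = TextClassificationPipeline(
    model=model,
    tokenizer=tokenizer,
    max_length=128,
    truncation=True,
    padding="max_length",
)

post = "btc just broke resistance, sending it to the moon!"
print(pipe(post))  # e.g. [{'label': 'Bullish', 'score': ...}]
```

The pipeline returns one of the three labels listed above ("Bearish", "Neutral", "Bullish") with a confidence score per input post.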