Text Classification
Transformers
PyTorch
Spanish
roberta
Inference Endpoints
dariolopez committed on
Commit
0ae0a31
1 Parent(s): 5b28481

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -45,7 +45,7 @@ More info: https://huggingface.co/datasets/hackathon-somos-nlp-2023/suicide-comm
 
 The training data has been tokenized using the `PlanTL-GOB-ES/roberta-base-bne` tokenizer with a vocabulary size of 50262 tokens and a model maximum length of 512 tokens.
 
-The training lasted a total of 10 minutes using a NVIDIA GPU GeForce RTX 3090.
+The training lasted a total of 10 minutes using a NVIDIA GPU GeForce RTX 3090 provided by Q Blocks.
 
 ```
 +-----------------------------------------------------------------------------+
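As a side note on the tokenization step quoted in the diff context above, the snippet below is a minimal sketch (not part of this commit) of loading the `PlanTL-GOB-ES/roberta-base-bne` tokenizer with the `transformers` library and checking the properties the README cites; the Spanish example sentence is made up for illustration.

```python
# Minimal, illustrative sketch (not part of this commit): load the tokenizer
# named in the README and confirm the properties it quotes. Assumes the
# `transformers` library is installed and the checkpoint is reachable on the Hub.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-base-bne")

print(tokenizer.vocab_size)        # 50262 according to the README
print(tokenizer.model_max_length)  # 512 according to the README

# Tokenize a made-up Spanish sentence the way the training texts would be prepared.
encoding = tokenizer(
    "Este es un texto de ejemplo en español.",
    truncation=True,
    max_length=512,
)
print(encoding["input_ids"])
```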