---
license: apache-2.0
---

# unam_tesis_beto_finnetuning: UNAM thesis classification with BETO

This model was created by fine-tuning the pre-trained Spanish model [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) with the PyTorch framework, trained on a set of theses from the National Autonomous University of Mexico (UNAM). Given a thesis text, the model classifies it into one of five degree programs at UNAM: Psychology, Law, Pharmaceutical Biological Chemistry (Química Farmacéutico Biológica), Actuarial Science (Actuaría), or Economics.

## Example of use

For further details on how to use unam_tesis_beto_finnetuning, you can visit the Hugging Face Transformers library documentation, starting with the Quickstart section. The unam_tesis models can be loaded simply as 'inoid/unam_tesis_beto_finnetuning' through the Transformers library. An example of how to download and use the model on this page can be found in this colab notebook.


```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TextClassificationPipeline,
)

tokenizer = AutoTokenizer.from_pretrained('inoid/unam_tesis_beto_finnetuning', use_fast=False)
model = AutoModelForSequenceClassification.from_pretrained(
    'inoid/unam_tesis_beto_finnetuning', num_labels=5, output_attentions=False,
    output_hidden_states=False)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)

classificationResult = pipe("El objetivo de esta tesis es elaborar un estudio de las condiciones asociadas al aprendizaje desde casa")
```
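With `return_all_scores=True`, the pipeline returns a list of `{'label', 'score'}` dictionaries per input. The sketch below shows one way to map the winning label back to a degree program; the label order here is an assumption (it should be verified against the model's `config.id2label`), and `top_career` is a hypothetical helper, not part of the model card's API:

```python
# Assumed mapping from LABEL_i to the five degree programs listed above.
# Verify against the model's config.id2label before relying on this order.
CAREERS = ["Psicología", "Derecho", "Química Farmacéutico Biológica",
           "Actuaría", "Economía"]

def top_career(scores):
    """Return (career, score) for the highest-scoring class in one
    TextClassificationPipeline result (a list of {'label', 'score'} dicts)."""
    best = max(scores, key=lambda d: d["score"])
    idx = int(best["label"].split("_")[-1])  # "LABEL_2" -> 2
    return CAREERS[idx], best["score"]

# Usage with a mock pipeline output (no model download needed):
mock_scores = [{"label": f"LABEL_{i}", "score": s}
               for i, s in enumerate([0.05, 0.10, 0.60, 0.15, 0.10])]
print(top_career(mock_scores))  # ('Química Farmacéutico Biológica', 0.6)
```

In practice you would pass `classificationResult[0]` (the scores for the first input text) to this helper.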

## Citation

To cite this resource in a publication please use the following:

```
@inproceedings{SpanishNLPHackaton2022,
  title={UNAM's Tesis with BETO finetuning classify},
  author={Cañete, Isahías and López, Dionis and Clavel, Yisell},
  booktitle={Somos NLP Hackathon 2022},
  year={2022}
}
```