
vpelloin/MEDIA_NLU-flaubert_oral_ft

This is a Natural Language Understanding (NLU) model for the French MEDIA benchmark. It maps each input word to an output concept tag (76 tags are available).

This model was trained using nherve/flaubert-oral-ft as its initial checkpoint. It obtained an 11.98% Concept Error Rate (CER; lower is better) on the MEDIA test set, using Kaldi ASR transcriptions, in our Interspeech 2023 publication.
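For illustration, the model assigns one BIO-style concept tag per word. The sketch below shows the kind of output to expect; the tag names are illustrative MEDIA-style concepts, not verbatim output from this model:

# hypothetical tagging of "je voudrais réserver une chambre à paris"
# (tag names are illustrative, not checked against this model's label set)
#   je       → O
#   voudrais → O
#   réserver → B-command-tache
#   une      → B-nombre-chambre
#   chambre  → I-nombre-chambre
#   à        → O
#   paris    → B-localisation-ville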

Available MEDIA NLU models:

Usage with Pipeline

from transformers import pipeline

# load a token-classification pipeline backed by this model
generator = pipeline(
    model="vpelloin/MEDIA_NLU-flaubert_oral_ft",
    task="token-classification"
)

sentences = [
    "je voudrais réserver une chambre à paris pour demain et lundi",
    "d'accord pour l'hôtel à quatre vingt dix euros la nuit",
    "deux nuits s'il vous plait",
    "dans un hôtel avec piscine à marseille"
]

# print one (word, concept tag) pair per token of each sentence
for sentence in sentences:
    print([(tok['word'], tok['entity']) for tok in generator(sentence)])
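The pipeline also accepts the whole list at once and returns one list of token predictions per input sentence, so the loop above can be replaced by a single batched call:

# batched call: one list of token dictionaries per input sentence
for tokens in generator(sentences):
    print([(tok['word'], tok['entity']) for tok in tokens])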

Usage with AutoTokenizer/AutoModel

from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification
)

# load the tokenizer and the fine-tuned token-classification model
tokenizer = AutoTokenizer.from_pretrained(
    "vpelloin/MEDIA_NLU-flaubert_oral_ft"
)
model = AutoModelForTokenClassification.from_pretrained(
    "vpelloin/MEDIA_NLU-flaubert_oral_ft"
)

sentences = [
    "je voudrais réserver une chambre à paris pour demain et lundi",
    "d'accord pour l'hôtel à quatre vingt dix euros la nuit",
    "deux nuits s'il vous plait",
    "dans un hôtel avec piscine à marseille"
]
# tokenize the batch, run the model, and map each predicted id to its label
inputs = tokenizer(sentences, padding=True, return_tensors='pt')
outputs = model(**inputs).logits
print([
    [model.config.id2label[i] for i in b]
    for b in outputs.argmax(dim=-1).tolist()
])
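Note that the snippet above also prints labels for padding and special tokens. Below is a minimal filtering sketch, assuming the standard return_special_tokens_mask tokenizer option; it keeps only positions that are attended to and are not special tokens such as <s> and </s>:

# re-tokenize, asking for a mask that flags special tokens
enc = tokenizer(
    sentences,
    padding=True,
    return_special_tokens_mask=True,
    return_tensors='pt'
)
# the special-tokens mask is not a model input, so remove it before the forward pass
special = enc.pop('special_tokens_mask')
predictions = model(**enc).logits.argmax(dim=-1)
for i, sentence in enumerate(sentences):
    # keep real tokens only: attended to, and not special
    keep = (enc['attention_mask'][i] == 1) & (special[i] == 0)
    labels = [model.config.id2label[p.item()] for p in predictions[i][keep]]
    print(sentence, labels)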

Reference

If you use this model in a scientific publication, or if you find the resources in this repository useful, please cite the following paper:

@inproceedings{pelloin22_interspeech,
  author={Valentin Pelloin and Franck Dary and Nicolas Hervé and Benoit Favre and Nathalie Camelin and Antoine Laurent and Laurent Besacier},
  title={ASR-Generated Text for Language Model Pre-training Applied to Speech Tasks},
  year={2022},
  booktitle={Proc. Interspeech 2022},
  pages={3453--3457},
  doi={10.21437/Interspeech.2022-352}
}