nbroad (HF staff) committed
Commit 636ca8f
1 Parent(s): e00742e

remove flax example

Files changed (1):
  1. README.md +2 -19
README.md CHANGED

````diff
@@ -62,7 +62,7 @@ Intended use is to make questions given a passage. With a larger model this migh
 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
 
 tokenizer = AutoTokenizer.from_pretrained("nbroad/mt5-small-qgen")
-model = AutoModelForSeq2SeqLM.from_pretrained("nbroad/mt5-small-qgen", from_flax=True)
+model = AutoModelForSeq2SeqLM.from_pretrained("nbroad/mt5-small-qgen")
 
 text = "Hugging Face has seen rapid growth in its \npopularity since the get-go. It is definitely doing\n the right things to attract more and more people to \n its platform, some of which are on the following lines:\nCommunity driven approach through large open source repositories \nalong with paid services. Helps to build a network of like-minded\n people passionate about open source. \nAttractive price point. The subscription-based features, e.g.: \nInference based API, starts at a price of $9/month.\n"
 
@@ -70,24 +70,7 @@ inputs = tokenizer(text, return_tensors="pt")
 output = model.generate(**inputs, max_length=40)
 
 tokenizer.decode(output[0], skip_special_tokens=True)
-# What is Hugging Face's price point?
+# What is the subscription-based features that starts at a price of $/month'
 ```
 
-#### Flax version
-```python
-from transformers import AutoTokenizer, FlaxAutoModelForSeq2SeqLM
-
-tokenizer = AutoTokenizer.from_pretrained("nbroad/mt5-small-qgen")
-model = FlaxAutoModelForSeq2SeqLM.from_pretrained("nbroad/mt5-small-qgen")
-
-text = "A un año y tres días de que el balón ruede \nen el Al Bayt Stadium inaugurando el Mundial 2022, \nya se han dibujado los primeros bocetos de la próxima \nCopa del Mundo.13 selecciones están colocadas en el \nmapa con la etiqueta de clasificadas y tienen asegurado\n pisar los verdes de Qatar en la primera fase final \n otoñal. Serbia, Dinamarca, España, Países Bajos, \n Suiza, Croacia, Francia, Inglaterra, Bélgica, Alemania,\n Brasil, Argentina y Qatar, como anfitriona, entrarán en \n el sorteo del 1 de abril de 2022 en Doha en el que 32 \n países serán repartidos en sus respectivos grupos. \n"
-
-inputs = tokenizer(text, return_tensors="np")
-output = model.generate(**inputs, max_length=40)
-
-tokenizer.decode(output["sequences"][0], skip_special_tokens=True)
-# ¿Cuántos países entrarán en el sorteo del Mundial 2022?
-```
-
-
 Model trained on Cloud TPUs from Google's TPU Research Cloud (TRC)
````
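
Note on what this removal implies: the dropped `from_flax=True` flag suggests the repo now hosts native PyTorch weights. A minimal sketch (not part of this commit, and assuming those PyTorch weights exist) shows that Flax users can still load the checkpoint by converting it at load time with `from_pt=True`:

```python
# Sketch only: load the checkpoint in Flax even after the Flax example
# was removed. Assumes the repo hosts PyTorch weights; from_pt=True
# converts them to Flax parameters at load time.
from transformers import AutoTokenizer, FlaxAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("nbroad/mt5-small-qgen")
model = FlaxAutoModelForSeq2SeqLM.from_pretrained(
    "nbroad/mt5-small-qgen", from_pt=True
)

text = "Hugging Face has seen rapid growth in its popularity since the get-go."
inputs = tokenizer(text, return_tensors="np")  # Flax models take NumPy arrays
output = model.generate(**inputs, max_length=40)

# Flax generate() returns an output object whose .sequences holds token ids
print(tokenizer.decode(output.sequences[0], skip_special_tokens=True))
```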