joanllop committed
Commit da9b9a7
1 Parent(s): 87ed7e8

Update README.md

Files changed (1)
  1. README.md +34 -3
README.md CHANGED
@@ -46,6 +46,31 @@ widget:
     Pregunta: "Qui és Leo Messi?"
     Resposta:
   example_title: Pregunta-Resposta
+ - text: |-
+     Extrae las entidades nombradas del siguiente texto:
+     Texto: "Me llamo Wolfgang y vivo en Berlin"
+     Entidades: Wolfgang:PER, Berlin:LOC
+     ----
+     Extrae las entidades nombradas del siguiente texto:
+     Texto: "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center"
+     Entidades: parc güell:LOC, barcelona supercomputing center:LOC
+     ----
+     Extrae las entidades nombradas del siguiente texto:
+     Texto: "Maria y Miguel no tienen ningún problema contigo"
+     Entidades: Maria:PER, Miguel:PER
+     ----
+     Extrae las entidades nombradas del siguiente texto:
+     Texto: "Damián se cortó el pelo"
+     Entidades: Damián:PER
+     ----
+     Extrae las entidades nombradas del siguiente texto:
+     Texto: "Lo mejor de Barcelona és el bar de mi amigo Pablo"
+     Entidades: Pablo:PER, Barcelona:LOC
+     ----
+     Extrae las entidades nombradas del siguiente texto:
+     Texto: "Carlos comparte piso con Marc"
+     Entidades:
+   example_title: Entidades-Nombradas
 license: apache-2.0
 pipeline_tag: text-generation
 ---
@@ -56,12 +81,11 @@ pipeline_tag: text-generation
 
 This model is a new result towards the long-run problem of "What is the best strategy for training a model in my language (not English)?"
 
- This model adapts the [falcon-7b](https://huggingface.co/tiiuae/falcon-7b) to 2 new target languages Spanish and Catalan by swapping the tokenizer and adjusting the embedding layer before training with 26B tokens in the target language.
+ This model adapts the [falcon-7b](https://huggingface.co/tiiuae/falcon-7b) to the new target languages Spanish and Catalan by swapping the tokenizer and adjusting the embedding layer before training with 26B tokens in the target language.
 
 ## Embedding layer adaptation
 
 When adapting a model from English to other languages the tokenizer plays a crucial role.
- In our case the tokenization of a
 
 If the tokenizer does not include the target language in its training data, the resulting model will need many more tokens to perform the same task.
 We solve this problem by creating a new tokenizer in the target languages (Spanish and Catalan) and adapting the embedding layer by only reordering the embeddings of the shared tokens of both tokenizers and initializing the rest to the average of all embeddings.
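
The embedding-layer adaptation described in the hunk above can be pictured with a minimal sketch (not the authors' actual script): compare how many tokens each tokenizer needs for a target-language sentence, then build the new embedding matrix by copying the rows of tokens shared by both vocabularies and initializing every other row to the average of the original embeddings. The new-tokenizer path below is a hypothetical placeholder.

```python
# Illustrative sketch of the adaptation described above; every identifier other than
# tiiuae/falcon-7b is a hypothetical placeholder, not one of the authors' artifacts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

old_tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
new_tokenizer = AutoTokenizer.from_pretrained("path/to/spanish-catalan-tokenizer")

# A tokenizer not trained on the target language needs many more tokens per sentence.
sentence = "Hoy voy a visitar el Parc Güell."
print(len(old_tokenizer(sentence).input_ids), len(new_tokenizer(sentence).input_ids))

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", torch_dtype=torch.bfloat16)
old_embeddings = model.get_input_embeddings().weight.detach()

# Initialize every row of the new matrix to the average of all original embeddings...
new_embeddings = old_embeddings.mean(dim=0).unsqueeze(0).repeat(len(new_tokenizer), 1)

# ...then copy (reorder) the rows of tokens that exist in both vocabularies.
old_vocab = old_tokenizer.get_vocab()
for token, new_id in new_tokenizer.get_vocab().items():
    if token in old_vocab:
        new_embeddings[new_id] = old_embeddings[old_vocab[token]]

# Swap in the adapted matrix before continuing training on the target languages.
model.resize_token_embeddings(len(new_tokenizer))
model.get_input_embeddings().weight.data.copy_(new_embeddings)
```

Only after this swap is the model trained further on the 26B tokens of Spanish and Catalan text mentioned above.
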
@@ -77,7 +101,14 @@ More information needed
 
 ## Intended uses & limitations
 
- More information needed
+ The model is ready-to-use only for causal language modeling to perform text-generation tasks.
+ However, it is intended to be fine-tuned on a generative downstream task.
+
+
+ ## Limitations and biases
+ At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model.
+ However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources.
+ We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
 
 ## Training and evaluation data
 
 