albarji committed on
Commit
fd5b576
1 Parent(s): 61a0c94

Clarifications about the FAISS index, and some corrections to the text

Files changed (1)
  1. article_app.py +16 -6
article_app.py CHANGED
@@ -11,14 +11,20 @@ Below you can find all the pieces that form the system. This section is minimali
 <ol>
 <li><a href="https://hf.co/IIC/wav2vec2-spanish-multilibrispeech">Speech2Text</a>: For this we fine-tuned a multilingual Wav2Vec2, as explained in the attached link. We use this model to process audio questions.</li>
- <li><a href="https://hf.co/IIC/dpr-spanish-passage_encoder-allqa-base">Dense Passage Retrieval for Context</a>: Dense Passage Retrieval is a methodology <a href="https://arxiv.org/abs/2004.04906">developed by Facebook</a> which is currently the SoTA for Passage Retrieval, that is, the task of getting the most relevant passages to answer a given question with. You can find details about how it was trained on the link attached to the name. </li>
- <li><a href="https://hf.co/IIC/dpr-spanish-question_encoder-allqa-base">Dense Passage Retrieval for Question</a>: It is actually part of the same thing as the above. For more details, go to the attached link.</li>
- <li><a href="https://hf.co/sentence-transformers/distiluse-base-multilingual-cased-v1">Sentence Encoder Ranker</a>: To rerank the candidate contexts retrieved by dpr for the generative model to see. This also selects the top 5 passages for the model to read, it is the final filter before the generative model. For this we used 3 different configurations to human-check (that's us seriously playing with our toy) the answer results, as generated answers depended much on this piece of the puzzle. The firs option, before we trained our own crossencoder, was to use a <a href="https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1">multilingual sentence transformer</a>, trained on multilingual MS Marco. This worked more or less fine, although it was noticeable it wasn't specialized in Spanish. We then tried our own CrossEncoder, trained on our [translated version of MS Marco to Spanish](https://huggingface.co/datasets/IIC/msmarco_es). It worked better than the sentence transformer. Then, it occured to us by looking at their ranks distributions for the same passages, that maybe by multiplying their similarity scores element by element, we could obtain a less biased rank for the documents, therefore only those documents both rankers agree are important appear at the top. We tried this and it showed much better results, so we left both systems with the posterior multiplication of similarities.</li>
 <li><a href="https://hf.co/IIC/mt5-base-lfqa-es">Generative Long-Form Question Answering Model</a>: For this we used either mT5 (the one attached) or <a href="https://hf.co/IIC/mbart-large-lfqa-es">mBART</a>. This generative model receives the most relevant passages and uses them to generate an answer to the question. The attached link gives more details about how we trained it.</li>
- <li><a href="https://huggingface.co/facebook/tts_transformer-es-css10">Text2Speech</a>: For this we used Meta's text2speech service on Huggingface, as text2speech classes are not yet implemented on the main branch of Transformers. This piece was a must to provide a voice to voice service so that it's almost fully accessible. As a future work, as soon as text2speech classes are implemented on transformers, we will train our own models to replace this piece.</li>
  </ol>
 
- Apart from those, this system could not respond in less than a minute on CPU if we didn't use some indexing tricks on the dataset, by using <a href="https://github.com/facebookresearch/faiss">Faiss</a>. We have to look for relevant passages to answer the questions on over 1.5M of semi-long documents, that means that if we want to compare the question vector as encoded by DPR against all of that vectors, we have to perform over 1.5M comparisons. Instead of that, we index those vectors on clusters of similar documents, therefore the question vector only needs to be compared against the passages of similar subject. With this we improved passages retrieving time to miliseconds. This is key since large generative language models like the ones we use already take too much time on CPU, therefore we alleviate this restriction by reducing the retrieving time.
 
 
 
 
 
 
  On the other hand, we uploaded, and in some cases created, datasets in Spanish to be able to build such a system.
 
@@ -30,14 +36,18 @@ On the other hand, we uploaded, and in some cases created, datasets in Spanish t
  <li><a href="https://hf.co/datasets/PlanTL-GOB-ES/SQAC">SQAC (Spanish Question Answering Corpus)</a>. Used to train the DPR models. (More info in the link.)</li>
  </ol>
 
 
 <a href="https://www.un.org/sustainabledevelopment/es/objetivos-de-desarrollo-sostenible/">Sustainable Development Goals</a>
 
  <ol>
 <li><a href="https://www.un.org/sustainabledevelopment/es/health/">Good health and well-being</a>: with our system we aim to improve the search for information about health and the biomedical sector, helping both biomedical researchers to dig into a large database on the subject, thus speeding up the research and development process in this field, and any individual who wants to learn more about health and related topics. In this way we use AI to promote both knowledge and exploration in the field of biomedicine in Spanish.</li>
 <li><a href="https://www.un.org/sustainabledevelopment/es/education/">Quality education</a>: by offering the world an advanced information-query system, we help complement and improve the current quality systems of the biomedical world, since students get a system for learning about this field by interacting, through our models, with a large knowledge base on the subject.</li>
- <li><a href="https://www.un.org/sustainabledevelopment/es/inequality/">Reduced inequalities</a>: By building an end-to-end voice-to-voice system in which no keyboard would be needed (not in the current demo, due to how Huggingface Spaces are built, but the technologies we use do allow it, with the right adapted architecture), we promote the accessibility of the tool. The intention is that people who cannot read or write get the chance to interact with BioMedIA. We saw the need to make this system as flexible as possible, so that interacting with it would be easy regardless of any physical difficulties or limitations people might have. By including voice output, those who cannot see will also be able to get answers to their questions. This reduces inequality of access to the tool for people with any of these impairments. Moreover, by creating a free knowledge-access tool available anywhere in the world with Internet access, we reduce inequality of access to information.</li>
  </ol>
  </p>
 
 
  """
  # 1HOzvvgDLFNTK7tYAY1dRzNiLjH41fZks
  # 1kvHDFUPPnf1kM5EKlv5Ife2KcZZvva_1
 
 <ol>
 <li><a href="https://hf.co/IIC/wav2vec2-spanish-multilibrispeech">Speech2Text</a>: For this we fine-tuned a multilingual Wav2Vec2, as explained in the attached link. We use this model to process audio questions.</li>
+ <li><a href="https://hf.co/IIC/dpr-spanish-passage_encoder-allqa-base">Dense Passage Retrieval (DPR) for Context</a>: Dense Passage Retrieval is a methodology <a href="https://arxiv.org/abs/2004.04906">developed by Facebook</a> which is currently the SoTA for passage retrieval, that is, the task of getting the most relevant passages to answer a given question. You can find details about how it was trained in the attached link.</li>
+ <li><a href="https://hf.co/IIC/dpr-spanish-question_encoder-allqa-base">Dense Passage Retrieval (DPR) for Question</a>: This is the question encoder that pairs with the passage encoder above. For more details, see the attached link.</li>
+ <li><a href="https://hf.co/sentence-transformers/distiluse-base-multilingual-cased-v1">Sentence Encoder Ranker</a>: Reranks the candidate contexts retrieved by DPR and selects the top 5 passages for the generative model to read; it is the final filter before the generative model. We human-checked the answers produced with 3 different configurations (that's us seriously playing with our toy), as the generated answers depended heavily on this piece of the puzzle. The first option, before we trained our own cross-encoder, was a <a href="https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1">multilingual sentence transformer</a> trained on multilingual MS Marco. This worked more or less fine, although it was noticeably not specialized in Spanish. We then tried our own cross-encoder, trained on our <a href="https://huggingface.co/datasets/IIC/msmarco_es">Spanish translation of MS Marco</a>, which worked better than the sentence transformer. Then, looking at the rank distributions both produced for the same passages, it occurred to us that by multiplying their similarity scores element by element we could obtain a less biased ranking, so that only documents both rankers agree are important appear at the top. We tried this and it gave much better results, so we kept both systems and multiply their similarity scores.</li>
 <li><a href="https://hf.co/IIC/mt5-base-lfqa-es">Generative Long-Form Question Answering Model</a>: For this we used either mT5 (the one attached) or <a href="https://hf.co/IIC/mbart-large-lfqa-es">mBART</a>. This generative model receives the most relevant passages and uses them to generate an answer to the question. The attached link gives more details about how we trained it.</li>
+ <li><a href="https://huggingface.co/facebook/tts_transformer-es-css10">Text2Speech</a>: For this we used Meta's text2speech model on Huggingface, as text2speech classes are not yet implemented on the main branch of Transformers. This piece was a must to provide a voice-to-voice service that is almost fully accessible. As future work, as soon as text2speech classes are implemented in Transformers, we will train our own models to replace this piece.</li>
  </ol>
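The retrieval and reranking stages above can be sketched in a few lines of Python (a toy illustration: random vectors stand in for the DPR embeddings and for the two rankers' scores; all names and sizes here are our own assumptions, not the system's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for DPR embeddings: 6 passages and 1 question (the real
# system encodes these with the DPR passage/question encoders).
passages = rng.normal(size=(6, 128))
question = rng.normal(size=128)

# Stage 1: DPR-style retrieval by inner product; keep the best candidates.
dpr_scores = passages @ question
candidates = np.argsort(dpr_scores)[::-1][:4]

# Stage 2: two rerankers score the candidates (faked here with random
# numbers; really a multilingual sentence transformer and a cross-encoder).
sentence_transformer_scores = rng.uniform(size=len(candidates))
cross_encoder_scores = rng.uniform(size=len(candidates))

# Element-wise product of the two similarity scores: a passage only ends
# up at the top if BOTH rankers consider it relevant.
combined = sentence_transformer_scores * cross_encoder_scores
ranked = candidates[np.argsort(combined)[::-1]]
```

The product acts as a conservative consensus: a high score from one ranker cannot compensate for a near-zero score from the other.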
 
+ Apart from those, this system could not respond in under a minute on CPU without some indexing tricks on the dataset, using <a href="https://github.com/facebookresearch/faiss">Faiss</a>. We need to look for relevant passages among over 1.5M semi-long documents, which means that if we wanted to compare the question vector, as encoded by DPR, against all of those vectors, we would have to perform over 1.5M comparisons. Instead, we created a FAISS index optimized for very fast search, configured as follows:
+ <ul>
+ <li>A dimensionality reduction method is applied to represent each of the 1.5M documents as a vector of 128 elements, which, after quantization, requires only 32 bytes of memory per vector.</li>
+ <li>Document vectors are clustered with k-means into about 5K clusters.</li>
+ <li>At query time, the query vector follows the same pipeline, and relevant documents are retrieved from its nearest cluster.</li>
+ </ul>
+ Using this strategy we brought passage retrieval time down to milliseconds. This is key, since large generative language models like the ones we use already take considerable time on CPU; reducing retrieval time alleviates this restriction.
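The cluster-restricted search above can be emulated with plain numpy (a toy sketch under our own assumptions: real FAISS additionally applies the dimensionality reduction and product quantization described in the list, and runs a properly trained k-means rather than this one-step assignment):

```python
import numpy as np

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 128))  # stand-in for the 1.5M document vectors
n_clusters = 10                      # stand-in for the ~5K clusters

# One crude assignment step: pick random docs as centroids and assign
# every document to its nearest centroid (real k-means iterates this).
centroids = docs[rng.choice(len(docs), size=n_clusters, replace=False)]
dists = ((docs[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
assignment = dists.argmin(axis=1)

# Query time: route the query to its nearest centroid, then compare it
# only against that cluster's documents instead of all 1000 (or 1.5M).
query = rng.normal(size=128)
cluster = ((centroids - query) ** 2).sum(axis=-1).argmin()
members = np.where(assignment == cluster)[0]
best = members[(docs[members] @ query).argmax()]
```

With roughly balanced clusters, each query is compared against about 1/n_clusters of the collection, which is where the milliseconds-scale retrieval comes from.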
 
  On the other hand, we uploaded, and in some cases created, datasets in Spanish to be able to build such a system.
 
 
  <li><a href="https://hf.co/datasets/PlanTL-GOB-ES/SQAC">SQAC (Spanish Question Answering Corpus)</a>. Used to train the DPR models. (More info in the link.)</li>
  </ol>
 
+ <h3>
 <a href="https://www.un.org/sustainabledevelopment/es/objetivos-de-desarrollo-sostenible/">Sustainable Development Goals</a>
+ </h3>
 
  <ol>
 <li><a href="https://www.un.org/sustainabledevelopment/es/health/">Good health and well-being</a>: with our system we aim to improve the search for information about health and the biomedical sector, helping both biomedical researchers to dig into a large database on the subject, thus speeding up the research and development process in this field, and any individual who wants to learn more about health and related topics. In this way we use AI to promote both knowledge and exploration in the field of biomedicine in Spanish.</li>
 <li><a href="https://www.un.org/sustainabledevelopment/es/education/">Quality education</a>: by offering the world an advanced information-query system, we help complement and improve the current quality systems of the biomedical world, since students get a system for learning about this field by interacting, through our models, with a large knowledge base on the subject.</li>
+ <li><a href="https://www.un.org/sustainabledevelopment/es/inequality/">Reduced inequalities</a>: By building an end-to-end voice-to-voice system in which no keyboard would be needed (*), we promote the accessibility of the tool. The intention is that people who cannot read or write, or who struggle to do so, get the chance to interact with BioMedIA. We saw the need to make this system as flexible as possible, so that interacting with it would be easy regardless of any physical difficulties or limitations people might have. By including voice output, those with vision problems will also be able to get answers to their questions. This reduces inequality of access to the tool for people with any of these impairments. Moreover, by creating a free knowledge-access tool available anywhere in the world with Internet access, we reduce inequality of access to information.</li>
  </ol>
  </p>
+
+ (*) Note that in the current demo the user still needs a minimal amount of keyboard-and-mouse interaction. This is due to a design limitation of Huggingface Spaces. However, the technologies developed would allow integration into a purely voice-driven interaction system.
  """
  # 1HOzvvgDLFNTK7tYAY1dRzNiLjH41fZks
  # 1kvHDFUPPnf1kM5EKlv5Ife2KcZZvva_1