avacaondata committed
Commit
cdf7a22
1 Parent(s): 6ae70b4

Added documentation and text refactoring

Files changed (1)
  1. article_app.py +1 -1
article_app.py CHANGED
@@ -13,7 +13,7 @@ Below you can find all the pieces that form the system. This section is minimali
  <li><a href="https://hf.co/IIC/wav2vec2-spanish-multilibrispeech">Speech2Text</a>: For this we fine-tuned a multilingual Wav2Vec2, as explained at the attached link. We use this model to transcribe audio questions.</li>
  <li><a href="https://hf.co/IIC/dpr-spanish-passage_encoder-allqa-base">Dense Passage Retrieval for Context</a>: Dense Passage Retrieval is a methodology <a href="https://arxiv.org/abs/2004.04906">developed by Facebook</a> which is currently the SoTA for passage retrieval, that is, the task of retrieving the passages most relevant for answering a given question. You can find details about how it was trained at the link attached to the name.</li>
  <li><a href="https://hf.co/IIC/dpr-spanish-question_encoder-allqa-base">Dense Passage Retrieval for Question</a>: This is the question-encoder half of the same DPR system described above. For more details, see the attached link.</li>
- <li><a href="https://hf.co/sentence-transformers/distiluse-base-multilingual-cased-v1">Sentence Encoder Ranker</a>: This reranks the candidate contexts retrieved by DPR before the generative model sees them. It selects the top 5 passages for the model to read; it is the final filter before the generative model. For this we tried 3 different configurations and human-checked the answer results (that's us seriously playing with our toy), as the generated answers depended heavily on this piece of the puzzle. The first option, before we trained our own cross-encoder, was to use a [multilingual sentence transformer](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1), trained on multilingual MS MARCO. This worked more or less fine, although it was noticeably not specialized in Spanish. We then tried our own CrossEncoder, trained on our [Spanish translation of MS MARCO](https://huggingface.co/datasets/IIC/msmarco_es). It worked better than the sentence transformer. Then, looking at the two rankers' rank distributions for the same passages, it occurred to us that by multiplying their similarity scores element by element we could obtain a less biased ranking, so that only documents both rankers agree are important appear at the top. We tried this and it gave much better results, so we kept both systems with the posterior multiplication of similarities.</li>
+ <li><a href="https://hf.co/sentence-transformers/distiluse-base-multilingual-cased-v1">Sentence Encoder Ranker</a>: This reranks the candidate contexts retrieved by DPR before the generative model sees them. It selects the top 5 passages for the model to read; it is the final filter before the generative model. For this we tried 3 different configurations and human-checked the answer results (that's us seriously playing with our toy), as the generated answers depended heavily on this piece of the puzzle. The first option, before we trained our own cross-encoder, was to use a <a href="https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1">multilingual sentence transformer</a>, trained on multilingual MS MARCO. This worked more or less fine, although it was noticeably not specialized in Spanish. We then tried our own CrossEncoder, trained on our [Spanish translation of MS MARCO](https://huggingface.co/datasets/IIC/msmarco_es). It worked better than the sentence transformer. Then, looking at the two rankers' rank distributions for the same passages, it occurred to us that by multiplying their similarity scores element by element we could obtain a less biased ranking, so that only documents both rankers agree are important appear at the top. We tried this and it gave much better results, so we kept both systems with the posterior multiplication of similarities.</li>
  <li><a href="https://hf.co/IIC/mt5-base-lfqa-es">Generative Long-Form Question Answering Model</a>: For this we used either mT5 (the one attached) or <a href="https://hf.co/IIC/mbart-large-lfqa-es">mBART</a>. This generative model receives the most relevant passages and uses them to generate an answer to the question. The attached link gives more details about how we trained it.</li>
  <li><a href="https://huggingface.co/facebook/tts_transformer-es-css10">Text2Speech</a>: For this we used Meta's text2speech model served on Hugging Face, as text2speech classes are not yet implemented on the main branch of Transformers. This piece was a must for providing a voice-to-voice service that is almost fully accessible. As future work, as soon as text2speech classes are implemented in Transformers, we will train our own models to replace this piece.</li>
  </ol>
 
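
To make the pipeline in the diff above concrete, here is a minimal sketch of the Speech2Text step, assuming the linked Wav2Vec2 checkpoint works with the standard transformers ASR pipeline; the audio file name is hypothetical, and the app's actual loading code may differ.

```python
from transformers import pipeline

# Load the fine-tuned Spanish Wav2Vec2 checkpoint named in the article.
# Assumption: the checkpoint is compatible with the standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="IIC/wav2vec2-spanish-multilibrispeech",
)

# "question.wav" is a hypothetical recording of the user's spoken question.
question_text = asr("question.wav")["text"]
print(question_text)
```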
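The two DPR items boil down to encoding the question and the candidate passages with separate encoders and scoring by inner product. A sketch under the assumption that the IIC checkpoints load with the stock transformers DPR classes; the question and passages here are made up.

```python
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

# Assumption: these checkpoints work with the standard DPR classes/tokenizers.
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("IIC/dpr-spanish-question_encoder-allqa-base")
q_enc = DPRQuestionEncoder.from_pretrained("IIC/dpr-spanish-question_encoder-allqa-base")
c_tok = DPRContextEncoderTokenizer.from_pretrained("IIC/dpr-spanish-passage_encoder-allqa-base")
c_enc = DPRContextEncoder.from_pretrained("IIC/dpr-spanish-passage_encoder-allqa-base")

question = "¿Quién escribió El Quijote?"  # hypothetical question
passages = [
    "Miguel de Cervantes escribió Don Quijote de la Mancha.",
    "El Museo del Prado está en Madrid.",
]

with torch.no_grad():
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output                 # (1, dim)
    p_emb = c_enc(**c_tok(passages, return_tensors="pt", padding=True)).pooler_output   # (N, dim)

# DPR ranks passages by the inner product of question and passage embeddings.
scores = (q_emb @ p_emb.T).squeeze(0)
ranked = scores.argsort(descending=True)
print([passages[i] for i in ranked])
```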
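The score-multiplication trick described in the Sentence Encoder Ranker item can be sketched as follows. The cross-encoder checkpoint name is hypothetical (the article links only the training dataset), and shifting scores to be non-negative before multiplying is our assumption, since the article does not say how, or whether, scores were normalized.

```python
import numpy as np
from sentence_transformers import SentenceTransformer, CrossEncoder, util

bi_encoder = SentenceTransformer("sentence-transformers/distiluse-base-multilingual-cased-v1")
# Hypothetical name for the CrossEncoder trained on IIC/msmarco_es.
cross_encoder = CrossEncoder("IIC/crossencoder-msmarco-es")

question = "¿Quién escribió El Quijote?"
candidates = [  # passages retrieved by DPR (made up here)
    "Miguel de Cervantes escribió Don Quijote de la Mancha.",
    "El Museo del Prado está en Madrid.",
]

# Score 1: bi-encoder cosine similarity between question and passages.
q_emb = bi_encoder.encode(question)
c_emb = bi_encoder.encode(candidates)
bi_scores = util.cos_sim(q_emb, c_emb).numpy().ravel()

# Score 2: cross-encoder relevance for each (question, passage) pair.
ce_scores = cross_encoder.predict([(question, c) for c in candidates])

# Shift both score vectors to be non-negative (an assumption), then multiply
# element-wise so only passages both rankers like end up at the top.
combined = (bi_scores - bi_scores.min()) * (ce_scores - ce_scores.min())

top5 = np.argsort(-combined)[:5]  # the final top-5 passages the generator reads
print([candidates[i] for i in top5])
```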
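For the generative LFQA step, a sketch of the usual seq2seq flow with the mT5 checkpoint; the "question: ... context: ..." prompt format and the generation parameters are assumptions, since the exact input format the model was trained with is not stated here.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("IIC/mt5-base-lfqa-es")
model = AutoModelForSeq2SeqLM.from_pretrained("IIC/mt5-base-lfqa-es")

question = "¿Quién escribió El Quijote?"
top_passages = [  # the passages kept by the reranker (made up here)
    "Miguel de Cervantes escribió Don Quijote de la Mancha.",
]

# Assumption: question and passages are concatenated into a single input;
# the separators the model was actually trained with may differ.
prompt = f"question: {question} context: {' '.join(top_passages)}"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1024)

output_ids = model.generate(**inputs, max_length=256, num_beams=4)
answer = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(answer)
```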
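Finally, since the article says the Text2Speech piece uses the model as served on Hugging Face rather than local Transformers classes, here is a sketch of calling it through the Hugging Face Inference API. That this endpoint accepts raw text and returns audio bytes, and the output container format, are assumptions.

```python
import os
import requests

API_URL = "https://api-inference.huggingface.co/models/facebook/tts_transformer-es-css10"
headers = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}  # your own token

answer_text = "El Quijote fue escrito por Miguel de Cervantes."

# Assumption: the hosted endpoint takes raw text and responds with audio bytes.
response = requests.post(API_URL, headers=headers, json={"inputs": answer_text})
response.raise_for_status()

with open("answer.wav", "wb") as f:  # the audio container format is an assumption
    f.write(response.content)
```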