**Chatrag-Deberta** is a small, lightweight classifier that predicts whether a question requires retrieving additional information with RAG or not.

Chatrag-Deberta is based on DeBERTa-v3-large, a 304M-parameter encoder model. Its initial version was fine-tuned on 20,000 questions annotated by Mistral 7B.

## Use

A typical inference example for Chatrag-Deberta is provided in the [Google Colab demo](https://colab.research.google.com/drive/1nTLFJXopFOEJldaCPzjQ2g5-j0NnpLdz?usp=sharing) or in inference_chatrag.py.

For every submitted text, Chatrag-Deberta outputs the probability that RAG is required.

This makes it possible to adjust the activation threshold depending on whether more or less RAG is desirable in the system, as in the sketch below.
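A minimal sketch of this workflow with the `transformers` library, assuming the checkpoint is published as a standard sequence-classification model; the model identifier, the label order (index 1 = RAG), and the threshold value are assumptions to adapt to your setup.

```python
# Minimal inference sketch for a RAG/No-RAG classifier.
# Assumptions: placeholder model ID, label index 1 = "RAG", threshold 0.5.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "AgentPublic/chatrag-deberta"  # placeholder: replace with the actual model ID
RAG_THRESHOLD = 0.5  # raise for less RAG, lower for more RAG

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def rag_probability(query: str) -> float:
    """Return the probability that the query requires RAG (assumes label 1 = RAG)."""
    inputs = tokenizer(query, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

query = "Comment puis-je renouveler un passeport ?"
prob = rag_probability(query)
print(f"{query} -> {prob:.6f} -> {'RAG' if prob >= RAG_THRESHOLD else 'No-RAG'}")
```

Lowering `RAG_THRESHOLD` routes more queries through retrieval; raising it keeps more queries answered directly by the generator.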

| Query                                                                                                             |  P(RAG)  |  Result |
|-------------------------------------------------------------------------------------------------------------------|:--------:|--------:|
| Comment puis-je renouveler un passeport ? (How do I renew a passport?)                                             | 0.988455 | RAG     |
| Combien font deux et deux ? (How much is two plus two?)                                                            | 0.041475 | No-RAG  |
| Écris un début de lettre de recommandation pour la Dinum (Write the opening of a recommendation letter for Dinum)  | 0.103086 | No-RAG  |