Luca Foppiano committed 7bf070f (parent: 35913f3): Update README.md
README.md
CHANGED
```diff
@@ -8,6 +8,8 @@ Differently to most of the project, we focus on scientific articles and we are u
 
 **Work in progress**
 
+## Getting started
+
 - Select the model+embedding combination you want to use (for Llama 2 you must acknowledge the license both on meta.com and on Hugging Face; see [here](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)).
 - Enter your API key (OpenAI or Hugging Face).
 - Upload a scientific article as a PDF document. You will see a spinner or loading indicator while processing is in progress.
@@ -15,7 +17,11 @@ Differently to most of the project, we focus on scientific articles and we are u
 
 ![screenshot1.png](docs%2Fimages%2Fscreenshot1.png)
 
-###
+### Options
+#### Context size
+Allows changing the number of embedding chunks considered when answering. Each text chunk is around 250 tokens, so each question uses around 1,000 tokens.
+
+#### Query mode
 By default, the mode is set to LLM (Language Model), which enables question answering: you can ask questions about the document content, and the system will answer them using content from the document.
 If you switch the mode to "Embedding," the system returns the chunks of the document that are semantically related to your query. This mode helps to diagnose why some answers are unsatisfying or incomplete.
 
```
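The "Context size" option changed above is simple arithmetic: prompt tokens spent on retrieved context scale linearly with the number of chunks. A minimal sketch, where the ~250 tokens per chunk comes from the README and the default of 4 chunks is an illustrative assumption:

```python
# Rough token-budget estimate for the "Context size" option.
# CHUNK_TOKENS (~250) is stated in the README; the default of
# 4 chunks below is an assumption for illustration only.
CHUNK_TOKENS = 250

def context_budget(num_chunks: int, chunk_tokens: int = CHUNK_TOKENS) -> int:
    """Approximate prompt tokens consumed by the retrieved context."""
    return num_chunks * chunk_tokens

print(context_budget(4))  # ~1000 tokens, matching the README's estimate
print(context_budget(8))  # doubling the context size doubles the budget
```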
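The "Embedding" query mode described in the diff, returning the document chunks semantically closest to the query, can be sketched as a plain cosine-similarity top-k search. This is an illustrative sketch only, not the project's actual API; the function names and pre-computed vectors are assumptions:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k_chunks(query_vec, chunk_vecs, chunks, k=4):
    """Return the k text chunks whose embeddings score highest
    against the query embedding (what "Embedding" mode displays)."""
    scored = sorted(zip(chunks, chunk_vecs),
                    key=lambda pair: cosine(query_vec, pair[1]),
                    reverse=True)
    return [chunk for chunk, _ in scored[:k]]
```

Inspecting these raw chunks shows whether a poor LLM answer stems from retrieval (the right passage never reached the model) or from generation.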