Stefano Fiorucci committed
Commit 943a5e0
1 Parent(s): be71634

small addition to README

Files changed (1)
  1. README.md +7 -1
README.md CHANGED
@@ -55,8 +55,14 @@ WKLP is a simple Question Answering system, based on data crawled from [Twin Pea
 Within each folder, you can find more in-depth explanations.
 
 ## Possible improvements ✨
+### Project structure
+- The project is optimized to be deployed in Hugging Face Spaces and consists of an all-in-one Streamlit web app. In more structured production environments, I suggest dividing the software into three parts:
+  - Haystack backend API (as explained in [the official documentation](https://haystack.deepset.ai/components/rest-api))
+  - Document store service
+  - Streamlit web app
+### Reader
 - The reader model (`deepset/roberta-base-squad2`) is a good compromise between speed and accuracy, running on CPU. There are certainly better (and more computationally expensive) models, as you can read in the [Haystack documentation](https://haystack.deepset.ai/pipeline_nodes/reader).
 - You can also think about preparing a Twin Peaks QA dataset and fine-tune the reader model to get better accuracy, as explained in this [Haystack tutorial](https://haystack.deepset.ai/tutorials/fine-tuning-a-model).
-- ...
+
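
The new "Project structure" bullet suggests splitting the all-in-one app into a Haystack backend API, a document store service, and a Streamlit web app. As a rough illustration of that split (not part of the commit), here is a minimal Streamlit front end calling a separately deployed Haystack backend; the `/query` endpoint shape, the parameter names, and the `HAYSTACK_API_URL` variable are assumptions.

```python
# Minimal sketch of the suggested split, NOT part of this commit: a thin Streamlit
# front end that queries a separately deployed Haystack backend over HTTP.
# Assumptions: the backend exposes the default `/query` endpoint of the Haystack
# REST API, and HAYSTACK_API_URL is a hypothetical environment variable.
import os

import requests
import streamlit as st

API_URL = os.getenv("HAYSTACK_API_URL", "http://localhost:8000")

st.title("WKLP - Twin Peaks Question Answering")
question = st.text_input("Ask a question about Twin Peaks")

if question:
    # The Retriever/Reader parameter names mirror typical Haystack pipeline node
    # names; they may differ in a real deployment.
    payload = {
        "query": question,
        "params": {"Retriever": {"top_k": 10}, "Reader": {"top_k": 3}},
    }
    response = requests.post(f"{API_URL}/query", json=payload, timeout=30)
    response.raise_for_status()
    for answer in response.json().get("answers", []):
        st.markdown(f"**{answer.get('answer') or 'No answer found'}**")
        st.caption(answer.get("context") or "")
```

In this arrangement, the document store (the third suggested component) is configured on the backend side, so the front end stays free of Haystack and document-store dependencies.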
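
For the "Reader" bullets, the sketch below shows how a different reader model could be loaded and fine-tuned on a SQuAD-format Twin Peaks dataset with Haystack's `FARMReader`, assuming a Haystack 1.x installation; the alternative model choice, data paths, and training settings are illustrative only.

```python
# Hedged sketch of the Reader suggestions, assuming a Haystack 1.x install
# (`pip install farm-haystack`). The larger model and the dataset paths are
# placeholders, not part of this commit.
from haystack.nodes import FARMReader

# Baseline reader used by the project: light enough to run on CPU.
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2", use_gpu=False)

# One possible heavier alternative (more accurate, more expensive):
# reader = FARMReader(model_name_or_path="deepset/roberta-large-squad2", use_gpu=True)

# Fine-tuning on a SQuAD-format Twin Peaks QA dataset, as in the linked tutorial.
# `data_dir` and `train_filename` are hypothetical paths.
reader.train(
    data_dir="data/twin_peaks_squad",
    train_filename="train.json",
    n_epochs=2,
    use_gpu=True,
)
reader.save(directory="models/twin_peaks_reader")
```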