Geraldine committed on
Commit: fa98313
Parent: 4cf1107

Update README.md

Files changed (1):
  1. README.md +35 -30
README.md CHANGED
---
license: mit
title: Streamlit simple QA Inference App with Ollama, Nvidia Cloud and Groq
app_file: Home.py
---

# Streamlit simple QA Inference App with Ollama, Nvidia Cloud and Groq

> Post: [https://iaetbibliotheques.fr/2024/05/comment-executer-localement-un-llm-22](https://iaetbibliotheques.fr/2024/05/comment-executer-localement-un-llm-22)

> Deployed: no

Two different ways to develop the same chatbot application:
- app_api_completion.py: QA inference with LLMs, choosing between the native chat completion API endpoints provided by Ollama, Nvidia or Groq (a rough sketch of this approach follows the list)
- app_langchain_completion.py: the same QA inference through the dedicated LangChain wrappers for Ollama, Nvidia or Groq
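
As a rough illustration of the first approach (not the repository's actual code in clients.py or app_api_completion.py), a single question could be sent to Ollama's native chat endpoint like this; the model name and timeout are placeholders:

```python
import requests

def ollama_chat(question: str,
                model: str = "llama3",
                base_url: str = "http://localhost:11434") -> str:
    """Send one question to Ollama's native chat-completion endpoint."""
    response = requests.post(
        f"{base_url}/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": question}],
            "stream": False,  # return the full answer in one JSON payload
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]

if __name__ == "__main__":
    print(ollama_chat("What is a vector database?"))
```

Nvidia and Groq instead expose OpenAI-compatible `/v1/chat/completions` endpoints, so their native calls differ mainly in base URL and the Authorization header carrying the API key.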

You can use one, two, or all three of these LLM hosting solutions, depending on your environment:

- a running Ollama instance: the default base_url is http://localhost:11434, but if needed (for a remote or dockerized Ollama instance, for example) you can change it in the OllamaClient in clients.py (see the sketch after this list)
*and/or*
- a valid API key for the Nvidia Cloud: [https://build.nvidia.com/explore/discover](https://build.nvidia.com/explore/discover)
*and/or*
- a valid API key for Groq Cloud: [https://console.groq.com/playground](https://console.groq.com/playground)
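
For the LangChain route, here is a hedged sketch of how the three backends might be instantiated; the imports are the standard community/partner wrappers and the model names are placeholders, so the repo's requirements.txt and clients.py may differ:

```python
import os

from langchain_community.chat_models import ChatOllama
from langchain_groq import ChatGroq
from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Ollama: override base_url for a remote or dockerized instance.
local_llm = ChatOllama(model="llama3", base_url="http://localhost:11434")

# Groq and Nvidia read their API keys from the environment by default.
os.environ.setdefault("GROQ_API_KEY", "gsk_...")      # placeholder key
os.environ.setdefault("NVIDIA_API_KEY", "nvapi-...")  # placeholder key
groq_llm = ChatGroq(model="llama3-8b-8192")
nvidia_llm = ChatNVIDIA(model="meta/llama3-8b-instruct")

# All three wrappers share the same Runnable interface.
print(local_llm.invoke("What is a vector database?").content)
```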

```
git clone
pip install -r requirements.txt
streamlit run Home.py
```

Running on http://localhost:8501

![screenshot](screenshot.png)