khronoz committed on
Commit
5bf97a0
·
1 Parent(s): 1ee8e6f

Update README

Files changed (2)
  1. backend/README.md +9 -7
  2. frontend/README.md +5 -5
backend/README.md CHANGED
@@ -1,23 +1,25 @@
- This is a [LlamaIndex](https://www.llamaindex.ai/) backend using [FastAPI](https://fastapi.tiangolo.com/) bootstrapped with [`create-llama`](https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama).

## Getting Started

First, set up the environment:

- ```
poetry install
poetry shell
```

Second, run the development server:

- ```
python main.py
```

Then call the API endpoint `/api/chat` to see the result:

- ```
curl --location 'localhost:8000/api/chat' \
--header 'Content-Type: application/json' \
--data '{ "messages": [{ "role": "user", "content": "Hello" }] }'
@@ -29,7 +31,7 @@ Open [http://localhost:8000/docs](http://localhost:8000/docs) with your browser
The API allows CORS for all origins to simplify development. You can change this behavior by setting the `ENVIRONMENT` environment variable to `prod`:

- ```
ENVIRONMENT=prod uvicorn main:app
```
@@ -38,5 +40,5 @@ ENVIRONMENT=prod uvicorn main:app
To learn more about LlamaIndex, take a look at the following resources:

- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex.
-
- You can check out [the LlamaIndex GitHub repository](https://github.com/run-llama/llama_index) - your feedback and contributions are welcome!
+ # Smart Retrieval Backend
+
+ The backend is built using Python & [FastAPI](https://fastapi.tiangolo.com/) bootstrapped with [`create-llama`](https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama).

## Getting Started

First, set up the environment:

+ ```bash
poetry install
poetry shell
```

Second, run the development server:

+ ```bash
python main.py
```

Then call the API endpoint `/api/chat` to see the result:

+ ```bash
curl --location 'localhost:8000/api/chat' \
--header 'Content-Type: application/json' \
--data '{ "messages": [{ "role": "user", "content": "Hello" }] }'
 
The API allows CORS for all origins to simplify development. You can change this behavior by setting the `ENVIRONMENT` environment variable to `prod`:

+ ```bash
ENVIRONMENT=prod uvicorn main:app
```
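The CORS wiring itself is not part of this diff; as a minimal sketch (the function name and production origin below are placeholders, not taken from the repository), the `ENVIRONMENT` switch typically selects the origin list handed to FastAPI's `CORSMiddleware`:

```python
import os

def allowed_origins() -> list[str]:
    """Choose CORS origins based on the ENVIRONMENT variable.

    Development allows every origin; prod restricts the list.
    The production URL below is a placeholder, not the real deployment.
    """
    if os.getenv("ENVIRONMENT", "dev") == "prod":
        return ["https://smart-retrieval.example.com"]  # placeholder origin
    return ["*"]
```

The returned list would then be passed as `allow_origins` when adding `fastapi.middleware.cors.CORSMiddleware` to the app.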
To learn more about LlamaIndex, take a look at the following resources:

- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex.
+ - [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndexTS (Typescript features).
+ - [FastAPI Documentation](https://fastapi.tiangolo.com/) - learn about FastAPI.
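The `/api/chat` request shown with curl earlier can also be built with just the Python standard library; a sketch, assuming the endpoint and payload shape from the curl example (the dev server must be running before the commented call is made):

```python
import json
from urllib.request import Request, urlopen

def build_chat_request(content: str) -> Request:
    """Build the same POST request the curl example sends to /api/chat."""
    body = json.dumps({"messages": [{"role": "user", "content": content}]})
    return Request(
        "http://localhost:8000/api/chat",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the dev server running (`python main.py`), send it:
# print(urlopen(build_chat_request("Hello")).read().decode())
```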
frontend/README.md CHANGED
@@ -1,16 +1,18 @@
- This is a [LlamaIndex](https://www.llamaindex.ai/) frontend using [Next.js](https://nextjs.org/) bootstrapped with [`create-llama`](https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama).

## Getting Started

First, install the dependencies:

- ```
npm install
```

Second, run the development server:

- ```
npm run dev
```
@@ -26,5 +28,3 @@ To learn more about LlamaIndex, take a look at the following resources:
- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (Typescript features).
-
- You can check out [the LlamaIndexTS GitHub repository](https://github.com/run-llama/LlamaIndexTS) - your feedback and contributions are welcome!
+ # Smart Retrieval Frontend
+
+ The frontend is built using [Next.js](https://nextjs.org/) & [Vercel AI](https://github.com/vercel/ai) bootstrapped with [`create-llama`](https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama).

## Getting Started

First, install the dependencies:

+ ```bash
npm install
```

Second, run the development server:

+ ```bash
npm run dev
```

- [LlamaIndex Documentation](https://docs.llamaindex.ai) - learn about LlamaIndex (Python features).
- [LlamaIndexTS Documentation](https://ts.llamaindex.ai) - learn about LlamaIndex (Typescript features).