---
title: "App"
---

Create a RAG app object in Embedchain. This is the main entry point for developers to interact with the Embedchain APIs. An app configures the LLM, vector database, embedding model, and retrieval strategy of your choice.

## Attributes

<ParamField path="local_id" type="str">
    App ID
</ParamField>
<ParamField path="name" type="str" optional>
    Name of the app
</ParamField>
<ParamField path="config" type="BaseConfig">
    Configuration of the app
</ParamField>
<ParamField path="llm" type="BaseLlm">
    Configured LLM for the RAG app
</ParamField>
<ParamField path="db" type="BaseVectorDB">
    Configured vector database for the RAG app
</ParamField>
<ParamField path="embedding_model" type="BaseEmbedder">
    Configured embedding model for the RAG app
</ParamField>
<ParamField path="chunker" type="ChunkerConfig">
    Chunker configuration
</ParamField>
<ParamField path="client" type="Client" optional>
    Client object (used to deploy an app to Embedchain platform)
</ParamField>
<ParamField path="logger" type="logging.Logger">
    Logger object
</ParamField>

## Usage

You can create an app instance using the following methods:

### Default setting

```python Code Example
from embedchain import App
app = App()
```


### Python Dict

```python Code Example
from embedchain import App

config_dict = {
  'llm': {
    'provider': 'gpt4all',
    'config': {
      'model': 'orca-mini-3b-gguf2-q4_0.gguf',
      'temperature': 0.5,
      'max_tokens': 1000,
      'top_p': 1,
      'stream': False
    }
  },
  'embedder': {
    'provider': 'gpt4all'
  }
}

# load the app configuration from the config dict
app = App.from_config(config=config_dict)
```

### YAML Config

<CodeGroup>

```python main.py
from embedchain import App

# load the app configuration from the config.yaml file
app = App.from_config(config_path="config.yaml")
```

```yaml config.yaml
llm:
  provider: gpt4all
  config:
    model: 'orca-mini-3b-gguf2-q4_0.gguf'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false

embedder:
  provider: gpt4all
```

</CodeGroup>

### JSON Config

<CodeGroup>

```python main.py
from embedchain import App

# load the app configuration from the config.json file
app = App.from_config(config_path="config.json")
```

```json config.json
{
  "llm": {
    "provider": "gpt4all",
    "config": {
      "model": "orca-mini-3b-gguf2-q4_0.gguf",
      "temperature": 0.5,
      "max_tokens": 1000,
      "top_p": 1,
      "stream": false
    }
  },
  "embedder": {
    "provider": "gpt4all"
  }
}
```

</CodeGroup>
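All three formats describe the same underlying configuration: the YAML and JSON files deserialize to the same dictionary shown in the Python Dict example, which is why `App.from_config` accepts any of them interchangeably. As a quick sanity check (using only the standard library, with the JSON string mirroring `config.json` above):

```python
import json

# JSON config as a string (mirrors config.json above)
config_json = """
{
  "llm": {
    "provider": "gpt4all",
    "config": {
      "model": "orca-mini-3b-gguf2-q4_0.gguf",
      "temperature": 0.5,
      "max_tokens": 1000,
      "top_p": 1,
      "stream": false
    }
  },
  "embedder": {
    "provider": "gpt4all"
  }
}
"""

# The dict from the Python Dict example above
config_dict = {
    'llm': {
        'provider': 'gpt4all',
        'config': {
            'model': 'orca-mini-3b-gguf2-q4_0.gguf',
            'temperature': 0.5,
            'max_tokens': 1000,
            'top_p': 1,
            'stream': False
        }
    },
    'embedder': {
        'provider': 'gpt4all'
    }
}

# Both deserialize to the same structure, so passing either
# to App.from_config yields an identically configured app.
assert json.loads(config_json) == config_dict
```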