---
title: ❓ FAQs
description: 'Collection of frequently asked questions'
---

#### Does Embedchain support OpenAI's Assistant APIs?

Yes, it does. Please refer to the [OpenAI Assistant docs page](/examples/openai-assistant).
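For a quick feel of the workflow, here is a minimal sketch. The `OpenAIAssistant` import path, its `name` and `instructions` parameters, and the sample file path are assumptions for illustration; treat the linked docs page as authoritative.

```python assistant.py
import os

# import path assumed; check the linked docs page
from embedchain.store.assistants import OpenAIAssistant

os.environ["OPENAI_API_KEY"] = "sk-xxx"

# create an assistant backed by OpenAI's Assistant APIs
# (parameter names assumed)
assistant = OpenAIAssistant(
    name="My Assistant",
    instructions="Answer questions using the added sources.",
)

# add a data source (hypothetical path) and chat against it
assistant.add("/path/to/file.pdf")
print(assistant.chat("Summarize the document."))
```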
#### How do I use the Mistral model from Hugging Face?

Use the model provided on Hugging Face: `mistralai/Mistral-7B-v0.1`

```python main.py
import os
from embedchain import App

os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_your_token"

# load llm configuration from huggingface.yaml file
app = App.from_config(config_path="huggingface.yaml")
```

```yaml huggingface.yaml
llm:
  provider: huggingface
  config:
    model: 'mistralai/Mistral-7B-v0.1'
    temperature: 0.5
    max_tokens: 1000
    top_p: 0.5
    stream: false

embedder:
  provider: huggingface
  config:
    model: 'sentence-transformers/all-mpnet-base-v2'
```
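Once the app is loaded, adding sources and querying works the same as with any other provider; the URL and question below are purely illustrative:

```python
app.add("https://www.forbes.com/profile/elon-musk")
response = app.query("What is the net worth of Elon Musk?")
```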
#### How do I use the GPT-4 Turbo model?

Use the model `gpt-4-turbo` provided by OpenAI.

```python main.py
import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from gpt4_turbo.yaml file
app = App.from_config(config_path="gpt4_turbo.yaml")
```

```yaml gpt4_turbo.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```

#### How do I use GPT-4 as the LLM?

```python main.py
import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from gpt4.yaml file
app = App.from_config(config_path="gpt4.yaml")
```

```yaml gpt4.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```

#### I don't have OpenAI credits. How can I use an open-source model?

```python main.py
from embedchain import App

# load llm configuration from opensource.yaml file
app = App.from_config(config_path="opensource.yaml")
```

```yaml opensource.yaml
llm:
  provider: gpt4all
  config:
    model: 'orca-mini-3b-gguf2-q4_0.gguf'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false

embedder:
  provider: gpt4all
  config:
    model: 'all-MiniLM-L6-v2'
```

#### How do I stream the response while using an OpenAI model?

You can achieve this by setting `stream` to `true` in the config file.

```yaml openai.yaml
llm:
  provider: openai
  config:
    model: 'gpt-3.5-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: true
```

```python main.py
import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'sk-xxx'

app = App.from_config(config_path="openai.yaml")
app.add("https://www.forbes.com/profile/elon-musk")

# the response is streamed to stdout as it is generated
response = app.query("What is the net worth of Elon Musk?")
```
#### How do I persist data across app sessions?

Set up the app by adding an `id` to its config. This persists the data for future use. You can include this `id` in the yaml config or pass it directly in the `config` dict.

```python app1.py
import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'sk-xxx'

app1 = App.from_config(config={
    "app": {
        "config": {
            "id": "your-app-id",
        }
    }
})

app1.add("https://www.forbes.com/profile/elon-musk")
response = app1.query("What is the net worth of Elon Musk?")
```

```python app2.py
import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'sk-xxx'

app2 = App.from_config(config={
    "app": {
        "config": {
            # this will persist and load data from the app1 session
            "id": "your-app-id",
        }
    }
})

response = app2.query("What is the net worth of Elon Musk?")
```
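For reference, the yaml equivalent of the `config` dict above would be a sketch like the following, assuming the same `app.config.id` key; the file name is illustrative, and you would load it with `App.from_config(config_path="app.yaml")`:

```yaml app.yaml
app:
  config:
    # same id as in the dict examples above
    id: 'your-app-id'
```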
#### Still have questions?

If the docs aren't sufficient, please feel free to reach out to us.