Update README.md
README.md

```diff
@@ -141,10 +141,8 @@ We condition on different RTG for comfort and energy savings. Any kind of data wi
 actions lead to what kind of consequences.
 
 5) LLM deployment phase
-
-After this you have to move on to the inference server side to tie all of these together.
 Gen-HVAC supports an optional LLM + Digital Human-in-the-Loop (DHIL) layer that modulates preference/RTG targets and high-level
-constraints
+constraints. For local LLM hosting, install Ollama, pull a quantized model
 , and launch the service.
 
 On Linux/macOS you can install Ollama via curl -fsSL https://ollama.com/install.sh | sh, start the daemon with ollama serve (leave it running), and pull recommended models using ollama pull deepseek-r1:7b (lightweight reasoning), ollama pull llama3.1:8b (strong general instruction-following), ollama pull qwen2.5:7b (efficient general model), or ollama pull mistral:instruct (fast instruct model). If you want a slightly heavier but still practical model, ollama pull deepseek-r1:14b or ollama pull qwen2.5:14b.
```
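Collected in one place, the Ollama setup commands from the added paragraph are:

```shell
# Install Ollama (Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Start the daemon and leave it running (e.g. in a separate terminal)
ollama serve

# Pull one or more of the recommended models
ollama pull deepseek-r1:7b      # lightweight reasoning
ollama pull llama3.1:8b         # strong general instruction-following
ollama pull qwen2.5:7b          # efficient general model
ollama pull mistral:instruct    # fast instruct model

# Slightly heavier but still practical alternatives
ollama pull deepseek-r1:14b
ollama pull qwen2.5:14b
```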
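Once `ollama serve` is running, the pulled model is reachable over Ollama's local HTTP API (default port 11434). As one hypothetical sketch of how the DHIL layer could consume it — the prompt and the use of the reply as a preference signal are assumptions, not Gen-HVAC's documented interface:

```shell
# Query the local Ollama daemon for a single non-streamed completion.
# Hypothetical: mapping occupant feedback to a comfort-vs-energy signal
# that the DHIL layer could translate into an RTG target.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "The occupant says they are too cold. Answer with one word: comfort or energy.",
  "stream": false
}'
```

The JSON reply's `response` field would then be parsed by the service that modulates the preference/RTG targets.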