Update README.md
@@ -39,3 +39,18 @@ inputs = tokenizer(text, return_tensors="pt")
```python
outputs = model.generate(inputs["input_ids"], max_length=50)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## GGUF Format

The model is also available in GGUF (GPT-Generated Unified Format), the binary model file format used by llama.cpp and compatible runtimes. A GGUF build can be imported directly into LMStudio for local use and further customization.
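GGUF is a binary container format: per the GGUF specification, every file begins with the four ASCII bytes `GGUF`, followed by a little-endian version field. As a minimal sketch (the helper name is ours, not part of any library), a downloaded file can be sanity-checked like this:

```python
import struct

GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file

def looks_like_gguf(header: bytes) -> bool:
    """Check the leading bytes of a file for the GGUF magic number."""
    return len(header) >= 8 and header[:4] == GGUF_MAGIC

# Synthetic 8-byte header: magic + little-endian uint32 version
sample = GGUF_MAGIC + struct.pack("<I", 3)
print(looks_like_gguf(sample))  # True
```

In practice you would read the first 8 bytes of the `.gguf` file (`open(path, "rb").read(8)`) and pass them to the check before handing the file to a runtime.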
## Ollama Integration

For users preferring to work with Ollama, a dedicated version of this model is available and can be pulled and run with the following command:

```bash
ollama run llama3-gl-chat
```
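Beyond the interactive CLI, a running Ollama server also exposes a local REST API (default port 11434). A minimal sketch, assuming the model name above and a server already running locally; the helper function is ours, not part of Ollama:

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama3-gl-chat", "Ola, como te chamas?")
print(req.full_url)  # http://localhost:11434/api/generate
```

Sending the request (`urllib.request.urlopen(req)`) requires the Ollama server to be running; the JSON response carries the generated text in its `response` field.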