---
license: llama3
tags:
- LocalAI
---


# OpenVINO IR model with int8 quantization

Model definition for LocalAI:
```yaml
name: localai-llama3
backend: transformers
parameters:
  model: fakezeta/LocalAI-Llama3-8b-Function-Call-v0.2-ov-int8
context_size: 8192
type: OVModelForCausalLM
template:
  use_tokenizer_template: true
```

To run the model directly with LocalAI:
```bash
local-ai run huggingface://fakezeta/LocalAI-Llama3-8b-Function-Call-v0.2-ov-int8/model.yaml
```

# LocalAI-Llama3-8b-Function-Call-v0.2

[![local-ai-banner.png](https://cdn-uploads.huggingface.co/production/uploads/647374aa7ff32a81ac6d35d4/bXvNcxQqQ-wNAnISmx3PS.png)](https://localai.io)

![LocalAIFCALL](https://cdn-uploads.huggingface.co/production/uploads/647374aa7ff32a81ac6d35d4/us5JKi9z046p8K-cn_M0w.webp)

This model is a fine-tune on a custom dataset combined with glaive, built specifically to leverage the constrained-grammar features of [LocalAI](https://localai.io).

Specifically, once the model enters tools mode it will always reply with JSON.

To run on LocalAI:

```bash
local-ai run huggingface://mudler/LocalAI-Llama3-8b-Function-Call-v0.2-GGUF/localai.yaml
```
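Since LocalAI exposes an OpenAI-compatible API, tools mode can be exercised with a standard chat-completions request. Below is a minimal sketch of such a request; the endpoint `http://localhost:8080/v1` and the `get_weather` function are illustrative assumptions, not part of this model card:

```python
import json
import urllib.request

# Tools-mode request payload in the OpenAI chat-completions format;
# once this model enters tools mode, it replies with JSON only.
payload = {
    "model": "localai-llama3",
    "messages": [{"role": "user", "content": "What is the weather in Rome?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical example function
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# Send to a locally running LocalAI instance (assumed at localhost:8080).
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment with LocalAI running
```

The model's JSON-only behavior in tools mode means the `tool_calls` in the response can be parsed without guarding against free-form text.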

If you like my work, consider donating so I can get resources for my fine-tunes!