---
library_name: transformers
tags:
- llama-factory
- llama3
license: llama3
datasets:
- teknium/OpenHermes-2.5
- mudler/function-call-localai-glaive
---

# OpenVINO IR model with int8 quantization

Model definition for LocalAI:
```
name: mirai-nova
backend: transformers
parameters:
  model: fakezeta/Mirai-Nova-Llama3-LocalAI-8B-v0.1-ov-int8
context_size: 8192
type: OVModelForCausalLM
template:
  use_tokenizer_template: true
```
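
The `type: OVModelForCausalLM` setting refers to the optimum-intel class that loads the OpenVINO IR weights. If you want to use the int8 model outside of LocalAI, a minimal sketch with optimum-intel (assuming `optimum[openvino]` and `transformers` are installed; the prompt is only illustrative):
```
# Minimal sketch: load the int8 OpenVINO IR weights directly with optimum-intel.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "fakezeta/Mirai-Nova-Llama3-LocalAI-8B-v0.1-ov-int8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

# Mirror `use_tokenizer_template: true` by applying the tokenizer's chat template.
messages = [{"role": "user", "content": "What does Mirai Nova mean?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```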

To run the model directly with LocalAI:
```
local-ai run huggingface://fakezeta/Mirai-Nova-Llama3-LocalAI-8B-v0.1-ov-int8/model.yaml
```
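
Once `local-ai run` is serving the model, it can be queried through LocalAI's OpenAI-compatible API. A minimal sketch with the `openai` Python client, assuming the default listen address of `localhost:8080` and no API key configured:
```
# Minimal sketch: chat with the model through LocalAI's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="mirai-nova",  # matches the `name:` field in the model definition above
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
)
print(response.choices[0].message.content)
```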

[![local-ai-banner.png](https://cdn-uploads.huggingface.co/production/uploads/647374aa7ff32a81ac6d35d4/bXvNcxQqQ-wNAnISmx3PS.png)](https://localai.io)

## Mirai Nova

![image/png](https://cdn-uploads.huggingface.co/production/uploads/647374aa7ff32a81ac6d35d4/SKuXcvmZ_6oD4NCMkvyGo.png)

Mirai Nova: "Mirai" means "future" in Japanese, and "Nova" refers to a star that shows a sudden, large increase in brightness.

A set of models oriented toward function calling, yet generalist and with enhanced reasoning capabilities, fine-tuned from Llama 3.

Mirai Nova works particularly well with LocalAI, leveraging its function calling with grammars feature out of the box, as in the sketch below.
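
Since the model is tuned for function calling, it can be exercised through the standard OpenAI tools interface, which LocalAI maps onto its grammar-constrained function calling. A hedged sketch against a running LocalAI instance; the `get_weather` tool is hypothetical and used purely for illustration:
```
# Minimal sketch of function calling against a running LocalAI instance.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="mirai-nova",
    messages=[{"role": "user", "content": "What's the weather like in Tokyo?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```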

GGUF quants: https://huggingface.co/mudler/Mirai-Nova-Llama3-LocalAI-8B-v0.1-GGUF

To run the GGUF version with LocalAI:

```
local-ai run huggingface://mudler/Mirai-Nova-Llama3-LocalAI-8B-v0.1-GGUF/localai.yaml
```