---
language:
- en
license: mit
tags:
- llama-cpp
- gguf-my-repo
- Infero
- Dllama
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.7
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
---

# tinybiggames/Phi-3-mini-4k-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for more details on the model.
## Use with tinyBigGAMES's [LMEngine Inference Library](https://github.com/tinyBigGAMES/LMEngine)

How to configure LMEngine:

```Delphi
Config_Init(
  'C:/LLM/gguf', // path to the model files
  -1             // number of GPU layers to offload; -1 uses all available layers
);
```

How to define a model:

```Delphi
Model_Define(
  'phi-3-mini-4k-instruct.Q4_K_M.gguf', // model filename
  'phi3:4K:Q4KM',                       // model reference name used at inference time
  4000,                                 // context size in tokens
  '<|{role}|>{content}<|end|>',         // per-message prompt template
  '<|assistant|>');                     // marker appended to cue the assistant's reply
```

How to add a message:

```Delphi
Message_Add(
  ROLE_USER,     // role
  'What is AI?'  // content
);
```

`{role}` will be substituted with the message role  
`{content}` will be substituted with the message content
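
For example, assuming `ROLE_USER` maps to the string `user`, the message added above renders through the template as the following prompt, with the `<|assistant|>` marker appended to cue the model's reply:

```
<|user|>What is AI?<|end|><|assistant|>
```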

How to run inference:

```Delphi
var
  LTokenOutputSpeed: Single;
  LInputTokens: Int32;
  LOutputTokens: Int32;
  LTotalTokens: Int32;

// Run inference against the model reference defined above
if Inference_Run('phi3:4K:Q4KM', 1024) then
  begin
    // Query token counts and generation speed for the completed run
    Inference_GetUsage(nil, @LTokenOutputSpeed, @LInputTokens, @LOutputTokens,
      @LTotalTokens);
    Console_PrintLn('', FG_WHITE);
    Console_PrintLn('Tokens :: Input: %d, Output: %d, Total: %d, Speed: %3.1f t/s',
      FG_BRIGHTYELLOW, LInputTokens, LOutputTokens, LTotalTokens, LTokenOutputSpeed);
  end
else
  begin
    Console_PrintLn('', FG_WHITE);
    Console_PrintLn('Error: %s', FG_RED, Error_Get());
  end;
```
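
Putting it all together, here is a minimal end-to-end sketch that uses only the calls shown above. The `LMEngine` unit name in the `uses` clause is an assumption; substitute the unit name used by the library's own examples:

```Delphi
program Phi3Demo;

{$APPTYPE CONSOLE}

uses
  LMEngine; // assumed unit name; check the LMEngine examples for the actual unit

var
  LTokenOutputSpeed: Single;
  LInputTokens: Int32;
  LOutputTokens: Int32;
  LTotalTokens: Int32;
begin
  // Point LMEngine at the GGUF folder; -1 offloads all layers to the GPU
  Config_Init('C:/LLM/gguf', -1);

  // Register the model under the reference name used by Inference_Run
  Model_Define('phi-3-mini-4k-instruct.Q4_K_M.gguf',
    'phi3:4K:Q4KM', 4000,
    '<|{role}|>{content}<|end|>',
    '<|assistant|>');

  // Queue a user message, then run inference and report usage
  Message_Add(ROLE_USER, 'What is AI?');

  if Inference_Run('phi3:4K:Q4KM', 1024) then
    begin
      Inference_GetUsage(nil, @LTokenOutputSpeed, @LInputTokens, @LOutputTokens,
        @LTotalTokens);
      Console_PrintLn('Tokens :: Input: %d, Output: %d, Total: %d, Speed: %3.1f t/s',
        FG_BRIGHTYELLOW, LInputTokens, LOutputTokens, LTotalTokens, LTokenOutputSpeed);
    end
  else
    begin
      Console_PrintLn('Error: %s', FG_RED, Error_Get());
    end;
end.
```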