---
language:
- en
license: llama3.1
tags:
- fireplace
- fireplace-2
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- function-calling
- sql
- database
- data-visualization
- matplotlib
- json
- conversational
- chat
- instruct
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
base_model: ValiantLabs/Llama3.1-8B-Fireplace2
model_type: llama
model-index:
- name: Llama3.1-8B-Fireplace2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 54.83
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Fireplace2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 24.07
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Fireplace2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.82
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Fireplace2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.15
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Fireplace2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.38
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Fireplace2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 15.63
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Fireplace2
      name: Open LLM Leaderboard
---
# Triangle104/Llama3.1-8B-Fireplace2-Q4_K_S-GGUF
This model was converted to GGUF format from [`ValiantLabs/Llama3.1-8B-Fireplace2`](https://huggingface.co/ValiantLabs/Llama3.1-8B-Fireplace2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ValiantLabs/Llama3.1-8B-Fireplace2) for more details on the model.

---

## Model details
Fireplace 2 is a chat model that adds helpful structured outputs to Llama 3.1 8b Instruct. It serves as an expansion pack of supplementary outputs that you can request at will within your chat:

- Inline function calls
- SQL queries
- JSON objects
- Data visualization with matplotlib

Mix normal chat and structured outputs within the same conversation. Fireplace 2 supplements the existing strengths of Llama 3.1, providing inline capabilities within the Llama 3 Instruct format.
### Version

This is the 2024-07-23 release of Fireplace 2 for Llama 3.1 8b.

We're excited to bring further upgrades and releases to Fireplace 2 in the future. Help us and recommend Fireplace 2 to your friends!
### Prompting Guide

Fireplace uses the Llama 3.1 Instruct prompt format. The example script below can be used as a starting point for general chat with Llama 3.1 and also includes the different special tokens used for Fireplace 2's added features:

```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-Fireplace2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Fireplace, an expert technical assistant."},
    # General Llama 3.1 chat:
    {"role": "user", "content": "Hi, can you explain local area networking to me?"},
    # For a SQL query:
    # {"role": "user", "content": "I have the following SQL table: employees (job_id VARCHAR, salary INTEGER)\n\nCan you find all employees with a salary above $75000?<|request_sql|>"},
    # For a function call:
    # {"role": "user", "content": '{"name": "get_news_headlines", "description": "Get the latest news headlines", "parameters": {"type": "object", "properties": {"country": {"type": "string", "description": "The country for which news headlines are to be retrieved"}}, "required": ["country"]}}\n\nHi, can you get me the latest news headlines for the United States?<|request_function_call|>'},
    # For data visualization:
    # {"role": "user", "content": "Show me an example of a histogram with a fixed bin size. Use attractive colors.<|request_matplotlib|>"},
    # For JSON output:
    # {"role": "user", "content": "Can you define the word 'presence' for me, thanks!<|request_json|>"},
]

outputs = pipeline(
    messages,
    max_new_tokens=512,
)
print(outputs[0]["generated_text"][-1])
```
While Fireplace 2 is trained to minimize incorrect structured outputs, they can still occur occasionally. Production uses of Fireplace 2 should verify the structure of all model outputs and remove any unneeded components.

For handling of function call responses, use the Llama 3.1 Instruct tool response style.
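As a minimal sketch of that verification step (assuming replies wrap structured payloads in the `<|start_...|>`/`<|end_...|>` tokens listed under Special Tokens below; `extract_structured` is a hypothetical helper, not part of the model's API):

```python
import json
import re

# Hypothetical helper: pull the payload out of a reply wrapped in
# <|start_X|>...<|end_X|> tokens (see "Special Tokens" below).
def extract_structured(reply: str, kind: str) -> str | None:
    pattern = re.escape(f"<|start_{kind}|>") + r"(.*?)" + re.escape(f"<|end_{kind}|>")
    match = re.search(pattern, reply, flags=re.DOTALL)
    return match.group(1).strip() if match else None

# Example: validate a JSON reply before using it.
reply = '<|start_json|>{"word": "presence", "definition": "..."}<|end_json|>'
payload = extract_structured(reply, "json")
if payload is not None:
    data = json.loads(payload)  # raises json.JSONDecodeError if the JSON is malformed
    print(data)
```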
### Special Tokens

Fireplace 2 utilizes special tokens applied to the Llama 3.1 tokenizer:

- `<|request_json|>`
- `<|start_json|>`
- `<|end_json|>`
- `<|request_sql|>`
- `<|start_sql|>`
- `<|end_sql|>`
- `<|request_matplotlib|>`
- `<|start_matplotlib|>`
- `<|end_matplotlib|>`
- `<|request_function_call|>`
- `<|start_function_call|>`
- `<|end_function_call|>`

These are supplemental to the existing special tokens used by Llama 3.1, such as `<|python_tag|>` and `<|start_header_id|>`. Fireplace 2 has been trained using the Llama 3.1 Instruct chat structure, with new special tokens added within the conversation.

The 'request' tokens are used by the user to request a specific type of structured output. They should be appended to the end of the user's message and can be alternated with normal chat responses throughout the conversation.
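For example, a request token can be appended to a user turn programmatically. A minimal sketch (`with_request` is a hypothetical convenience helper, not part of the model or tokenizer API):

```python
# Hypothetical helper: append a Fireplace 2 request token to a user message.
# kind should be one of "json", "sql", "matplotlib", or "function_call".
def with_request(user_text: str, kind: str) -> dict:
    return {"role": "user", "content": f"{user_text}<|request_{kind}|>"}

# Normal chat turns and structured-output requests can be mixed freely:
messages = [
    {"role": "system", "content": "You are Fireplace, an expert technical assistant."},
    {"role": "user", "content": "Hi, can you explain local area networking to me?"},
    # ... assistant reply ...
    with_request("Now find all employees with a salary above $75000.", "sql"),
]
```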
### The Model

Fireplace 2 is built on top of Llama 3.1 8b Instruct.

This version of Fireplace 2 uses data from the following datasets:

- glaiveai/glaive-function-calling-v2
- b-mc2/sql-create-context
- sequelbox/Cadmium
- sequelbox/Harlequin
- migtissera/Tess-v1.5
- LDJnr/Pure-Dove

Additional capabilities will be added to future releases.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama3.1-8B-Fireplace2-Q4_K_S-GGUF --hf-file llama3.1-8b-fireplace2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama3.1-8B-Fireplace2-Q4_K_S-GGUF --hf-file llama3.1-8b-fireplace2-q4_k_s.gguf -c 2048
```
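Once running, the server exposes an OpenAI-compatible chat completions endpoint that you can query directly. A minimal sketch using only the Python standard library (assumes llama-server's default address, http://localhost:8080):

```python
import json
import urllib.request

# Assumes llama-server is running with its default host/port (localhost:8080).
payload = {
    "messages": [
        {"role": "system", "content": "You are Fireplace, an expert technical assistant."},
        {"role": "user", "content": "Can you define the word 'presence' for me, thanks!<|request_json|>"},
    ],
    "max_tokens": 512,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```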
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Llama3.1-8B-Fireplace2-Q4_K_S-GGUF --hf-file llama3.1-8b-fireplace2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/Llama3.1-8B-Fireplace2-Q4_K_S-GGUF --hf-file llama3.1-8b-fireplace2-q4_k_s.gguf -c 2048
```