---
language:
  - en
license: apache-2.0
library_name: transformers
tags:
  - code
  - llama-cpp
  - gguf-my-repo
datasets:
  - m-a-p/Code-Feedback
  - HuggingFaceTB/cosmopedia-100k
  - LDJnr/Capybara
  - vicgalle/alpaca-gpt4
  - glaiveai/glaive-code-assistant-v2
  - WhiteRabbitNeo/WRN-Chapter-1
  - WhiteRabbitNeo/WRN-Chapter-2
  - m-a-p/CodeFeedback-Filtered-Instruction
  - jondurbin/airoboros-3.2
  - euclaise/WritingPrompts_curated
  - derek-thomas/squad-v1.1-t5-question-generation
  - reinforz/question_generation_data
  - teknium/GPTeacher-General-Instruct
  - dim/roleplay_instruct_v2_final
  - TIGER-Lab/MathInstruct
  - abacusai/SystemChat
  - Mihaiii/OpenHermes-2.5-1k-longest-curated
license_name: a
license_link: LICENSE
---

# newsletter/NinjaMouse-2.4B-32L-danube-Q6_K-GGUF

This model was converted to GGUF format from trollek/NinjaMouse-2.4B-32L-danube using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
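For reference, the conversion that the GGUF-my-repo space performs can be reproduced locally with llama.cpp's own tooling. The sketch below is illustrative only; the script and binary names (convert_hf_to_gguf.py, llama-quantize) differ between llama.cpp versions, and the paths are placeholders.

```bash
# Illustrative sketch of an HF -> GGUF conversion with llama.cpp tooling.
# Script/binary names vary across llama.cpp versions; paths are placeholders.
python convert_hf_to_gguf.py /path/to/NinjaMouse-2.4B-32L-danube \
  --outtype f16 --outfile ninjamouse-2.4b-32l-danube.f16.gguf
./llama-quantize ninjamouse-2.4b-32l-danube.f16.gguf \
  ninjamouse-2.4b-32l-danube.Q6_K.gguf Q6_K
```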

## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo newsletter/NinjaMouse-2.4B-32L-danube-Q6_K-GGUF --model ninjamouse-2.4b-32l-danube.Q6_K.gguf -p "The meaning to life and the universe is"
```
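The same flags work against a local copy of the GGUF file; for example, a sketch with an explicit generation length and sampling temperature (the values here are illustrative, not tuned for this model):

```bash
# Illustrative flags: -n caps generated tokens, --temp sets sampling temperature
llama-cli -m ninjamouse-2.4b-32l-danube.Q6_K.gguf \
  -p "The meaning to life and the universe is" \
  -n 256 --temp 0.7
```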

Server:

```bash
llama-server --hf-repo newsletter/NinjaMouse-2.4B-32L-danube-Q6_K-GGUF --model ninjamouse-2.4b-32l-danube.Q6_K.gguf -c 2048
```
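Once the server is up it listens on port 8080 by default and can be queried over HTTP; a minimal sketch using curl against the /completion endpoint (prompt and token count are illustrative):

```bash
# Query the local llama-server; payload values are illustrative
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```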

Note: You can also use this checkpoint directly through the usage steps listed in the Llama.cpp repo.
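The manual steps below assume the GGUF file is present locally. If you need to fetch it first, here is a sketch using the huggingface_hub CLI (assuming huggingface_hub is installed):

```bash
# Assumes: pip install -U "huggingface_hub[cli]"
huggingface-cli download newsletter/NinjaMouse-2.4B-32L-danube-Q6_K-GGUF \
  ninjamouse-2.4b-32l-danube.Q6_K.gguf --local-dir .
```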

```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m ninjamouse-2.4b-32l-danube.Q6_K.gguf -n 128
```
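If make is not available in your checkout (recent llama.cpp versions build with CMake and name the binary llama-cli), an equivalent sketch is:

```bash
# CMake-based build; binary name and path may differ between llama.cpp versions
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp
cmake -B build && cmake --build build --config Release
./build/bin/llama-cli -m ninjamouse-2.4b-32l-danube.Q6_K.gguf -n 128
```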