---
base_model: LeroyDyer/_Spydaz_Web_AI_Chatml_BASE
datasets:
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- Open-Orca/SlimOrca
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
- abacusai/ARC_DPO_FewShot
- abacusai/MetaMath_DPO_FewShot
- abacusai/HellaSwag_DPO_FewShot
- HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset
- HuggingFaceFW/fineweb
- occiglot/occiglot-fineweb-v0.5
- omi-health/medical-dialogue-to-soap-summary
- keivalya/MedQuad-MedicalQnADataset
- ruslanmv/ai-medical-dataset
- Shekswess/medical_llama3_instruct_dataset_short
- ShenRuililin/MedicalQnA
- virattt/financial-qa-10K
- PatronusAI/financebench
- takala/financial_phrasebank
- Replete-AI/code_bagel
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
- IlyaGusev/gpt_roleplay_realm
- rickRossie/bluemoon_roleplay_chat_data_300k_messages
- jtatman/hypnosis_dataset
- Hypersniper/philosophy_dialogue
- Locutusque/function-calling-chatml
- bible-nlp/biblenlp-corpus
- DatadudeDev/Bible
- Helsinki-NLP/bible_para
- HausaNLP/AfriSenti-Twitter
- aixsatoshi/Chat-with-cosmopedia
- xz56/react-llama
- BeIR/hotpotqa
- arcee-ai/agent-data
- HuggingFaceTB/cosmopedia-100k
- HuggingFaceFW/fineweb-edu
- m-a-p/CodeFeedback-Filtered-Instruction
- heliosbrahma/mental_health_chatbot_dataset
language:
- en
- sw
- ig
- so
- es
- ca
- xh
- zu
- ha
- tw
- af
- hi
- bm
- su
license: apache-2.0
tags:
- Mistral_Star
- Mistral_Quiet
- Mistral
- Mixtral
- Question-Answer
- Token-Classification
- Sequence-Classification
- SpydazWeb-AI
- chemistry
- biology
- legal
- code
- climate
- medical
- text-generation-inference
- not-for-all-audiences
- chain-of-thought
- tree-of-knowledge
- forest-of-thoughts
- visual-spacial-sketchpad
- alpha-mind
- knowledge-graph
- entity-detection
- encyclopedia
- wikipedia
- stack-exchange
- Reddit
- Cyber-series
- MegaMind
- Cybertron
- SpydazWeb
- Spydaz
- LCARS
- star-trek
- mega-transformers
- Mulit-Mega-Merge
- Multi-Lingual
- Afro-Centric
- African-Model
- Ancient-One
- llama-cpp
- gguf-my-repo
---

# LeroyDyer/_Spydaz_Web_AI_Chatml_BASE-Q4_K_M-GGUF

This model was converted to GGUF format from [`LeroyDyer/_Spydaz_Web_AI_Chatml_BASE`](https://huggingface.co/LeroyDyer/_Spydaz_Web_AI_Chatml_BASE) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/_Spydaz_Web_AI_Chatml_BASE) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo LeroyDyer/_Spydaz_Web_AI_Chatml_BASE-Q4_K_M-GGUF --hf-file _spydaz_web_ai_chatml_base-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo LeroyDyer/_Spydaz_Web_AI_Chatml_BASE-Q4_K_M-GGUF --hf-file _spydaz_web_ai_chatml_base-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo LeroyDyer/_Spydaz_Web_AI_Chatml_BASE-Q4_K_M-GGUF --hf-file _spydaz_web_ai_chatml_base-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo LeroyDyer/_Spydaz_Web_AI_Chatml_BASE-Q4_K_M-GGUF --hf-file _spydaz_web_ai_chatml_base-q4_k_m.gguf -c 2048
```
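The base model's name suggests it was trained on ChatML-formatted conversations. As a minimal sketch — assuming the standard ChatML template with `<|im_start|>` / `<|im_end|>` delimiters, which you should verify against the original model card — a chat prompt for `llama-cli` could be assembled like this:

```bash
#!/usr/bin/env bash
# Assemble a ChatML prompt (assumed template, based on the "Chatml" in the
# base model's name -- check the original model card to confirm).
SYSTEM="You are a helpful assistant."
USER_MSG="Hello!"

PROMPT="<|im_start|>system
${SYSTEM}<|im_end|>
<|im_start|>user
${USER_MSG}<|im_end|>
<|im_start|>assistant
"

printf '%s' "$PROMPT"
```

The resulting string can then be passed to the CLI with `-p "$PROMPT"` in place of the plain-text prompt shown above.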