![TensorBlock](https://i.imgur.com/jC7kdl8.jpeg)
Feedback and support: TensorBlock's Twitter/X, Telegram Group and Discord server
ahmedheakl/asm2asm-deepseek-1.3b-500k-4ep-x86-O0-arm-gnueabi-gcc - GGUF
This repo contains GGUF format model files for ahmedheakl/asm2asm-deepseek-1.3b-500k-4ep-x86-O0-arm-gnueabi-gcc.
The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b4011.
Prompt template
<|begin▁of▁sentence|>{system_prompt}### Instruction:
{prompt}
### Response:
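For example, with an empty system prompt, an x86-to-ARM translation request would be laid out as follows (the placeholder instruction text is illustrative only, not from the model card):

```
<|begin▁of▁sentence|>### Instruction:
<x86 assembly to translate goes here>
### Response:
```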
Model file specification
Downloading instructions
Command line
First, install the Hugging Face Hub CLI:
pip install -U "huggingface_hub[cli]"
Then, download an individual model file to a local directory:
huggingface-cli download tensorblock/asm2asm-deepseek-1.3b-500k-4ep-x86-O0-arm-gnueabi-gcc-GGUF --include "asm2asm-deepseek-1.3b-500k-4ep-x86-O0-arm-gnueabi-gcc-Q2_K.gguf" --local-dir MY_LOCAL_DIR
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
huggingface-cli download tensorblock/asm2asm-deepseek-1.3b-500k-4ep-x86-O0-arm-gnueabi-gcc-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
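After downloading, the GGUF file can be loaded with llama.cpp. The following is a minimal sketch (not part of the original card) using the `llama-cli` binary and the Q2_K file from the single-file command above; the flags, paths, and instruction text are placeholders, so check your llama.cpp build's `--help` for the options it actually supports:

```bash
# Minimal sketch: run the downloaded GGUF with llama.cpp's llama-cli.
# llama-cli typically prepends the BOS token itself, so the prompt below
# starts at "### Instruction:" rather than <|begin▁of▁sentence|>.
./llama-cli \
  -m MY_LOCAL_DIR/asm2asm-deepseek-1.3b-500k-4ep-x86-O0-arm-gnueabi-gcc-Q2_K.gguf \
  -p $'### Instruction:\n<your x86 assembly here>\n### Response:\n' \
  -n 512
```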
Base model: deepseek-ai/deepseek-coder-1.3b-instruct