Spaces: imperialwool / llama-cpp-api (status: Sleeping)

2 contributors · History: 20 commits
Latest commit: toaster61 · ggml -> gguf · 7fd3f9f · over 1 year ago
  • .gitattributes · 1.52 kB · initial commit over 1 year ago
  • .gitignore · 3.1 kB · gitignore update over 1 year ago
  • Dockerfile · 1.13 kB · ggml -> gguf over 1 year ago
  • README.md · 290 Bytes · ggml -> gguf over 1 year ago
  • app.py · 1.9 kB · ggml -> gguf over 1 year ago
  • app_gradio.py · 2.15 kB · ggml -> gguf over 1 year ago
  • requirements.txt · 20 Bytes · ggml -> gguf over 1 year ago
  • run-docker.sh · 177 Bytes · it works! over 1 year ago
  • system.prompt · 0 Bytes · ggml -> gguf over 1 year ago
  • wget-log · 660 Bytes · ggml -> gguf over 1 year ago