Text Generation
llamafile
English
tinyllama
llama
Bojun-Feng committed
Commit 3b214fa
1 Parent(s): b1d0515

Update README.md

Files changed (1)
  1. README.md +44 -49
README.md CHANGED
@@ -42,7 +42,7 @@ quantized_by: TheBloke
  <!-- header start -->
  <!-- 200823 -->
 
- I am not the original creator of Llamafile, all credit of LlamaFile goes to Jartine:
+ I am not the original creator of llamafile; all credit for llamafile goes to Jartine:
  <!-- README_llamafile.md-about-llamafile end -->
  <!-- repositories-available start -->
  <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -61,62 +61,57 @@ I am not the original creator of Llamafile, all credit of LlamaFile goes to Jart
  - LlamaFile version used: [Mozilla-Ocho/llamafile](https://github.com/Mozilla-Ocho/llamafile/tree/8f73d39cf3a767897b8ade6dda45e5744c62356a)
  - Commit message "Fix build ordering issue"
  - Commit hash 8f73d39cf3a767897b8ade6dda45e5744c62356a
- ### How to Use
-
- Here is an edited excerpt of the detailed instruction at the Git repo commit used to build llamafiles in this repository:
-
- > #### Quickstart
- >
- > The easiest way to try it for yourself is to download our example llamafile.
- > With llamafile, all inference happens locally; no data ever leaves your computer.
- >
- > 1. Download the llamafile.
- >
- > 2. Open your computer's terminal.
- >
- > 3. If you're using macOS, Linux, or BSD, you'll need to grant permission
- > for your computer to execute this new file. (You only need to do this
- > once.)
- >
- > ```sh
- > chmod +x tinyllama-1.1b-chat-v1.0.Q2_K.llamafile
- > ```
- >
- > 4. If you're on Windows, rename the file by adding ".exe" on the end.
- >
- > 5. Run the llamafile. e.g.:
- >
- > ```sh
- > ./tinyllama-1.1b-chat-v1.0.Q2_K.llamafile
- > ```
- >
- > 6. Your browser should open automatically and display a chat interface.
- > (If it doesn't, just open your browser and point it at http://localhost:8080.)
- >
- > 7. When you're done chatting, return to your terminal and hit
- > `Control-C` to shut down llamafile.
-
- Please note that LlamaFile is still under active development. Some methods may be not be compatible with the most recent documents.
-
- ### About llamafile
-
- llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023. It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp binaries that run on the stock installs of six OSes for both ARM64 and AMD64.
-
- Here is an incomplete list of clients and libraries that are known to support llamafile:
-
- * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for llamafile. Offers a CLI and a server option.
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
- * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
- * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
- * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
- * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
- * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
- * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
- * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
+ - `.args` content:
+ ```
+ -m
+ tinyllama-1.1b-chat-v1.0.{quantization}.gguf
+ ...
+ ```
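The trailing `...` in `.args` marks where extra command-line arguments you pass at run time are inserted after the embedded defaults, so options can be added without editing the file. A minimal sketch, assuming the bundled llama.cpp server on this build accepts the usual `-ngl` (GPU layer offload) and `--port` flags; run with `--help` to confirm on your version:

```sh
# Hypothetical invocation: extra flags are appended after the defaults baked into .args.
# -ngl and --port are standard llama.cpp server options, but availability can vary
# between llamafile versions.
./tinyllama-1.1b-chat-v1.0.Q2_K.llamafile -ngl 35 --port 8081
```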
+ ## About llamafile (Modified from [Git README](https://github.com/Mozilla-Ocho/llamafile/tree/8f73d39cf3a767897b8ade6dda45e5744c62356a?tab=readme-ov-file#llamafile))
+
+ **llamafile lets you distribute and run LLMs with a single file. No installation required. ([announcement blog post](https://hacks.mozilla.org/2023/11/introducing-llamafile/))**
+
+ Our goal is to make open source large language models much more
+ accessible to both developers and end users. We're doing that by
+ combining [llama.cpp](https://github.com/ggerganov/llama.cpp) with [Cosmopolitan Libc](https://github.com/jart/cosmopolitan) into one
+ framework that collapses all the complexity of LLMs down to
+ a single-file executable (called a "llamafile") that runs
+ locally on most computers, with no installation.
+
+ ## How to Use (Modified from [Git README](https://github.com/Mozilla-Ocho/llamafile/tree/8f73d39cf3a767897b8ade6dda45e5744c62356a?tab=readme-ov-file#quickstart))
+
+ The easiest way to try it for yourself is to download our example llamafile.
+ With llamafile, all inference happens locally; no data ever leaves your computer.
+
+ 1. Download the llamafile.
+
+ 2. Open your computer's terminal.
+
+ 3. If you're using macOS, Linux, or BSD, you'll need to grant permission
+ for your computer to execute this new file. (You only need to do this
+ once.)
+
+ ```sh
+ chmod +x tinyllama-1.1b-chat-v1.0.Q2_K.llamafile
+ ```
+
+ 4. If you're on Windows, rename the file by adding ".exe" on the end.
+
+ 5. Run the llamafile. e.g.:
+
+ ```sh
+ ./tinyllama-1.1b-chat-v1.0.Q2_K.llamafile
+ ```
+
+ 6. Your browser should open automatically and display a chat interface.
+ (If it doesn't, just open your browser and point it at http://localhost:8080.)
+
+ 7. When you're done chatting, return to your terminal and hit
+ `Control-C` to shut down llamafile.
+
+ Please note that llamafile is still under active development. Some of the methods described here may not match the most recent documentation.
 
  # Original model card: TheBloke's Tinyllama 1.1B Chat v1.0 GGUF
  <!-- markdownlint-disable MD041 -->