---
tags:
- llm-rs
- ggml
pipeline_tag: text-generation
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
---

# GGML converted versions of [OpenLM Research](https://huggingface.co/openlm-research)'s LLaMA models

# OpenLLaMA: An Open Reproduction of LLaMA

In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing 3B and 7B models trained on 1T tokens, as well as a preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of the pre-trained OpenLLaMA models, along with evaluation results and a comparison against the original LLaMA models. Please see the [OpenLLaMA project homepage](https://github.com/openlm-research/open_llama) for more details.

## Weights Release, License and Usage

We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.

## Converted Models

| Name | Based on | Type | Container | GGML Version |
|:-----|:---------|:-----|:----------|:-------------|
| [open_llama_3b-f16.bin](https://huggingface.co/rustformers/open-llama-ggml/blob/main/open_llama_3b-f16.bin) | [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) | F16 | GGML | V3 |
| [open_llama_3b-q4_0-ggjt.bin](https://huggingface.co/rustformers/open-llama-ggml/blob/main/open_llama_3b-q4_0-ggjt.bin) | [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) | Q4_0 | GGJT | V3 |
| [open_llama_3b-q5_1-ggjt.bin](https://huggingface.co/rustformers/open-llama-ggml/blob/main/open_llama_3b-q5_1-ggjt.bin) | [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) | Q5_1 | GGJT | V3 |
| [open_llama_7b-f16.bin](https://huggingface.co/rustformers/open-llama-ggml/blob/main/open_llama_7b-f16.bin) | [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b) | F16 | GGML | V3 |
| [open_llama_7b-q4_0-ggjt.bin](https://huggingface.co/rustformers/open-llama-ggml/blob/main/open_llama_7b-q4_0-ggjt.bin) | [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b) | Q4_0 | GGJT | V3 |
| [open_llama_7b-q5_1-ggjt.bin](https://huggingface.co/rustformers/open-llama-ggml/blob/main/open_llama_7b-q5_1-ggjt.bin) | [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b) | Q5_1 | GGJT | V3 |
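The quantization type in the table largely determines the download size. As a rough guide, the bits-per-weight figures below are assumptions derived from ggml's block layouts (Q4_0 packs 32 weights into 18 bytes, Q5_1 into 24 bytes); metadata and any non-quantized tensors are ignored, so treat the results as ballpark estimates only:

```python
# Rough file-size estimate per ggml quantization format.
# Bits-per-weight values are approximations based on ggml block layouts.
BITS_PER_WEIGHT = {
    "F16": 16.0,   # plain half-precision floats
    "Q4_0": 4.5,   # 32 weights/block: 2-byte scale + 16 bytes of 4-bit nibbles
    "Q5_1": 6.0,   # 32 weights/block: scale + min + 4 bytes high bits + 16 bytes low bits
}

def approx_size_gib(n_params: float, quant: str) -> float:
    """Approximate model file size in GiB for a given parameter count."""
    return n_params * BITS_PER_WEIGHT[quant] / 8 / 1024**3

for quant in ("F16", "Q4_0", "Q5_1"):
    print(f"open_llama_7b {quant}: ~{approx_size_gib(7e9, quant):.1f} GiB")
```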
## Usage

### Python via [llm-rs](https://github.com/LLukas22/llm-rs-python)

#### Installation

Via pip: `pip install llm-rs`

#### Run inference

```python
from llm_rs import AutoModel

# Load the model; pass any model file from the table above as `model_file`
model = AutoModel.from_pretrained("rustformers/open-llama-ggml", model_file="open_llama_7b-q4_0-ggjt.bin")

# Generate text
print(model.generate("The meaning of life is"))
```

### Using the [local.ai](https://github.com/louisgv/local.ai) GUI

#### Installation

Download the installer at [www.localai.app](https://www.localai.app/).

#### Run inference

Download your preferred model and place it in the "models" directory. You can then start a chat session with your model directly from the interface.

### Rust via [Rustformers/llm](https://github.com/rustformers/llm)

#### Installation

```shell
git clone --recurse-submodules https://github.com/rustformers/llm.git
cd llm
cargo build --release
```

#### Run inference

```shell
cargo run --release -- llama infer -m path/to/model.bin -p "Tell me how cool the Rust programming language is:"
```