LLukas22 committed
Commit 5abbe51
1 Parent(s): 8ec9cc6

Generated README.md

Files changed (1): README.md (+30 −23)

README.md CHANGED
@@ -27,26 +27,32 @@ similar and same-sized models, such as those in the OPT and GPT-Neo suites.
 
 ## Converted Models:
 
-| Name | Based on | Type | Container |
-|-|-|-|-|
-| [pythia-70m-f16.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-70m-f16.bin) | [Pythia-70M](https://huggingface.co/EleutherAI/pythia-70m) | fp16 | GGML |
-| [pythia-70m-q4_0-ggjt.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-70m-q4_0-ggjt.bin) | [Pythia-70M](https://huggingface.co/EleutherAI/pythia-70m) | int4 | GGJT |
-| [pythia-70m-q4_0.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-70m-q4_0.bin) | [Pythia-70M](https://huggingface.co/EleutherAI/pythia-70m) | int4 | GGML |
-| [pythia-160m-f16.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-160m-f16.bin) | [Pythia-160M](https://huggingface.co/EleutherAI/pythia-160m) | fp16 | GGML |
-| [pythia-160m-q4_0-ggjt.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-160m-q4_0-ggjt.bin) | [Pythia-160M](https://huggingface.co/EleutherAI/pythia-160m) | int4 | GGJT |
-| [pythia-160m-q4_0.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-160m-q4_0.bin) | [Pythia-160M](https://huggingface.co/EleutherAI/pythia-160m) | int4 | GGML |
-| [pythia-410m-f16.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-410m-f16.bin) | [Pythia-410M](https://huggingface.co/EleutherAI/pythia-410m) | fp16 | GGML |
-| [pythia-410m-q4_0-ggjt.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-410m-q4_0-ggjt.bin) | [Pythia-410M](https://huggingface.co/EleutherAI/pythia-410m) | int4 | GGJT |
-| [pythia-410m-q4_0.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-410m-q4_0.bin) | [Pythia-410M](https://huggingface.co/EleutherAI/pythia-410m) | int4 | GGML |
-| [pythia-1b-f16.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-1b-f16.bin) | [Pythia-1B](https://huggingface.co/EleutherAI/pythia-1b) | fp16 | GGML |
-| [pythia-1b-q4_0-ggjt.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-1b-q4_0-ggjt.bin) | [Pythia-1B](https://huggingface.co/EleutherAI/pythia-1b) | int4 | GGJT |
-| [pythia-1b-q4_0.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-1b-q4_0.bin) | [Pythia-1B](https://huggingface.co/EleutherAI/pythia-1b) | int4 | GGML |
-| [pythia-1.4b-f16.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-1.4b-f16.bin) | [Pythia-1.4B](https://huggingface.co/EleutherAI/pythia-1.4b) | fp16 | GGML |
-| [pythia-1.4b-q4_0-ggjt.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-1.4b-q4_0-ggjt.bin) | [Pythia-1.4B](https://huggingface.co/EleutherAI/pythia-1.4b) | int4 | GGJT |
-| [pythia-1.4b-q4_0.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-1.4b-q4_0.bin) | [Pythia-1.4B](https://huggingface.co/EleutherAI/pythia-1.4b) | int4 | GGML |
-| [pythia-2.8b-f16.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-2.8b-f16.bin) | [Pythia-2.8B](https://huggingface.co/EleutherAI/pythia-2.8b) | fp16 | GGML |
-| [pythia-2.8b-q4_0-ggjt.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-2.8b-q4_0-ggjt.bin) | [Pythia-2.8B](https://huggingface.co/EleutherAI/pythia-2.8b) | int4 | GGJT |
-| [pythia-2.8b-q4_0.bin](https://huggingface.co/Rustformers/pythia-ggml/blob/main/pythia-2.8b-q4_0.bin) | [Pythia-2.8B](https://huggingface.co/EleutherAI/pythia-2.8b) | int4 | GGML |
+| Name | Based on | Type | Container | GGML Version |
+|:----------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------|:-------|:------------|:---------------|
+| [pythia-1.4b-f16.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-1.4b-f16.bin) | [EleutherAI/pythia-1.4b](https://huggingface.co/EleutherAI/pythia-1.4b) | F16 | GGML | V3 |
+| [pythia-1.4b-q4_0.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-1.4b-q4_0.bin) | [EleutherAI/pythia-1.4b](https://huggingface.co/EleutherAI/pythia-1.4b) | Q4_0 | GGML | V3 |
+| [pythia-1.4b-q4_0-ggjt.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-1.4b-q4_0-ggjt.bin) | [EleutherAI/pythia-1.4b](https://huggingface.co/EleutherAI/pythia-1.4b) | Q4_0 | GGJT | V3 |
+| [pythia-1.4b-q5_1-ggjt.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-1.4b-q5_1-ggjt.bin) | [EleutherAI/pythia-1.4b](https://huggingface.co/EleutherAI/pythia-1.4b) | Q5_1 | GGJT | V3 |
+| [pythia-160m-f16.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-160m-f16.bin) | [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) | F16 | GGML | V3 |
+| [pythia-160m-q4_0.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-160m-q4_0.bin) | [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) | Q4_0 | GGML | V3 |
+| [pythia-160m-q4_0-ggjt.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-160m-q4_0-ggjt.bin) | [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) | Q4_0 | GGJT | V3 |
+| [pythia-160m-q5_1-ggjt.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-160m-q5_1-ggjt.bin) | [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) | Q5_1 | GGJT | V3 |
+| [pythia-1b-f16.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-1b-f16.bin) | [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) | F16 | GGML | V3 |
+| [pythia-1b-q4_0.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-1b-q4_0.bin) | [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) | Q4_0 | GGML | V3 |
+| [pythia-1b-q4_0-ggjt.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-1b-q4_0-ggjt.bin) | [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) | Q4_0 | GGJT | V3 |
+| [pythia-1b-q5_1-ggjt.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-1b-q5_1-ggjt.bin) | [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) | Q5_1 | GGJT | V3 |
+| [pythia-2.8b-f16.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-2.8b-f16.bin) | [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) | F16 | GGML | V3 |
+| [pythia-2.8b-q4_0.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-2.8b-q4_0.bin) | [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) | Q4_0 | GGML | V3 |
+| [pythia-2.8b-q4_0-ggjt.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-2.8b-q4_0-ggjt.bin) | [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) | Q4_0 | GGJT | V3 |
+| [pythia-2.8b-q5_1-ggjt.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-2.8b-q5_1-ggjt.bin) | [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) | Q5_1 | GGJT | V3 |
+| [pythia-410m-f16.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-410m-f16.bin) | [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) | F16 | GGML | V3 |
+| [pythia-410m-q4_0.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-410m-q4_0.bin) | [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) | Q4_0 | GGML | V3 |
+| [pythia-410m-q4_0-ggjt.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-410m-q4_0-ggjt.bin) | [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) | Q4_0 | GGJT | V3 |
+| [pythia-410m-q5_1-ggjt.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-410m-q5_1-ggjt.bin) | [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) | Q5_1 | GGJT | V3 |
+| [pythia-70m-f16.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-70m-f16.bin) | [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) | F16 | GGML | V3 |
+| [pythia-70m-q4_0.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-70m-q4_0.bin) | [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) | Q4_0 | GGML | V3 |
+| [pythia-70m-q4_0-ggjt.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-70m-q4_0-ggjt.bin) | [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) | Q4_0 | GGJT | V3 |
+| [pythia-70m-q5_1-ggjt.bin](https://huggingface.co/rustformers/pythia-ggml/blob/main/pythia-70m-q5_1-ggjt.bin) | [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) | Q5_1 | GGJT | V3 |
 
 ## Usage
 
@@ -60,7 +66,7 @@ Via pip: `pip install llm-rs`
 from llm_rs import AutoModel
 
 #Load the model, define any model you like from the list above as the `model_file`
-model = AutoModel.from_pretrained("Rustformers/pythia-ggml",model_file="pythia-70m-q4_0-ggjt.bin")
+model = AutoModel.from_pretrained("rustformers/pythia-ggml",model_file="pythia-70m-q4_0-ggjt.bin")
 
 #Generate
 print(model.generate("The meaning of life is"))
@@ -70,11 +76,12 @@ print(model.generate("The meaning of life is"))
 
 #### Installation
 ```
-git clone --recurse-submodules git@github.com:rustformers/llm.git
+git clone --recurse-submodules https://github.com/rustformers/llm.git
+cd llm
 cargo build --release
 ```
 
 #### Run inference
 ```
 cargo run --release -- gptneox infer -m path/to/model.bin -p "Tell me how cool the Rust programming language is:"
-```
+```
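
The Type column in the new table lists GGML quantization formats: F16 keeps raw half-precision weights, while Q4_0 and Q5_1 pack weights into small blocks of 4- or 5-bit integers plus per-block scale factors. As a rough sketch of the idea behind Q4_0 (this is illustrative only, not GGML's actual memory layout, and the function names are made up for this example), a block of 32 floats can be stored as one scale plus 32 int4 values:

```python
import numpy as np

def quantize_block_q4(block: np.ndarray):
    """Symmetric 4-bit quantization of one block (illustrative, not GGML's exact Q4_0)."""
    scale = float(np.abs(block).max()) / 7.0
    if scale == 0.0:
        return 0.0, np.zeros(block.shape, dtype=np.int8)
    # Round each weight to the nearest multiple of `scale`, clamped to the int4 range.
    q = np.clip(np.round(block / scale), -8, 7).astype(np.int8)
    return scale, q

def dequantize_block_q4(scale: float, q: np.ndarray) -> np.ndarray:
    # Reconstruct approximate float weights from the scale and the int4 codes.
    return scale * q.astype(np.float32)

weights = np.linspace(-1.0, 1.0, 32, dtype=np.float32)
scale, q = quantize_block_q4(weights)
restored = dequantize_block_q4(scale, q)
# Storage drops from 32 * 4 bytes (fp32) to 32 * 0.5 bytes plus one scale,
# at the cost of a rounding error of at most scale / 2 per weight.
```

Each quantized weight costs half a byte instead of two or four, which is why the q4_0 files above are several times smaller than the f16 ones while still producing usable generations.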