LLukas22 committed
Commit cf68004
1 Parent(s): 4d62f9b

Upload 2 files

Files changed (2):
  1. README_TEMPLATE.md +72 -0
  2. config.json +1 -0
README_TEMPLATE.md ADDED
@@ -0,0 +1,72 @@
---
tags:
- llm-rs
- ggml
pipeline_tag: text-generation
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
---
# GGML converted versions of [Together](https://huggingface.co/togethercomputer)'s RedPajama models

# RedPajama-INCITE-7B-Base

RedPajama-INCITE-7B-Base was developed by Together and leaders from the open-source AI community, including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), the Stanford Hazy Research group, and LAION.
Training was done on 3,072 V100 GPUs provided as part of the INCITE 2023 project on Scalable Foundation Models for Transferrable Generalist AI, awarded to MILA, LAION, and EleutherAI in fall 2022, with support from the Oak Ridge Leadership Computing Facility (OLCF) and the INCITE program.

- Base Model: [RedPajama-INCITE-7B-Base](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base)
- Instruction-tuned Version: [RedPajama-INCITE-7B-Instruct](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct)
- Chat Version: [RedPajama-INCITE-7B-Chat](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Chat)

## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 6.9B parameter pretrained language model.
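
As a rough sanity check on download sizes: assuming q4_0 quantization costs on the order of 5 bits per weight (4-bit quants plus per-block scaling data — an approximation, not an exact figure for the converted files), a 6.9B-parameter model comes out at roughly 4.3 GB:

```python
# Back-of-the-envelope file-size estimate for a q4_0-quantized model.
# Assumption: q4_0 stores 4-bit weights plus per-block scales, i.e.
# roughly 5 effective bits per weight.
params = 6.9e9          # parameter count of RedPajama-INCITE-7B-Base
bits_per_weight = 5     # approximate effective bits for q4_0
size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.1f} GB")  # ~4.3 GB
```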

## Converted Models:

$MODELS$

## Usage

### Python via [llm-rs](https://github.com/LLukas22/llm-rs-python):

#### Installation
Via pip: `pip install llm-rs`

#### Run inference
```python
from llm_rs import AutoModel

# Load the model; any model file from the list above can be passed as `model_file`
model = AutoModel.from_pretrained("rustformers/redpajama-7b-ggml", model_file="RedPajama-INCITE-7B-Base-q4_0-ggjt.bin")

# Generate text from a prompt
print(model.generate("The meaning of life is"))
```
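
The converted files follow a `<model>-<quantization>-<container>.bin` naming pattern (e.g. `RedPajama-INCITE-7B-Base-q4_0-ggjt.bin`), so a `model_file` can be selected programmatically. The helper below and its example file list are purely illustrative — they are not part of the llm-rs API:

```python
# Hypothetical helper: pick a model file by quantization and container
# from filenames following the <name>-<quant>-<container>.bin pattern.
def pick_model_file(files, quantization="q4_0", container="ggjt"):
    suffix = f"-{quantization}-{container}.bin"
    for name in files:
        if name.endswith(suffix):
            return name
    raise FileNotFoundError(f"no file ending in {suffix}")

# Illustrative file names; see the converted-models list above for real ones.
files = ["RedPajama-INCITE-7B-Base-q4_0-ggjt.bin",
         "RedPajama-INCITE-7B-Base-q5_1-ggjt.bin"]
print(pick_model_file(files))  # RedPajama-INCITE-7B-Base-q4_0-ggjt.bin
```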
### Using [local.ai](https://github.com/louisgv/local.ai) GUI

#### Installation
Download the installer at [www.localai.app](https://www.localai.app/).

#### Running Inference
Download your preferred model and place it in the "models" directory. You can then start a chat session with your model directly from the interface.

### Rust via [Rustformers/llm](https://github.com/rustformers/llm):

#### Installation
```shell
git clone --recurse-submodules https://github.com/rustformers/llm.git
cd llm
cargo build --release
```

#### Run inference
```shell
cargo run --release -- gptneox infer -m path/to/model.bin -p "Tell me how cool the Rust programming language is:"
```
config.json ADDED
@@ -0,0 +1 @@
+ {"repo_type": "GGML"}
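
The one-line `config.json` simply tags the repository so tooling can recognize it as holding GGML-format weights; reading it is plain JSON (a minimal sketch, not a specific library API):

```python
import json

# Parse the repo-level config; it contains a single key marking the
# repository as a GGML model repo.
config = json.loads('{"repo_type": "GGML"}')
print(config["repo_type"])  # GGML
```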