doberst committed
Commit 152ec36
1 Parent(s): 2803d93

Update README.md

Files changed (1)
  1. README.md +10 -42
README.md CHANGED
@@ -6,30 +6,12 @@ license: apache-2.0
 
 <!-- Provide a quick summary of what the model is/does. -->
 
- bling-falcon-1b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct trained on top of a falcon-rw-1b base model.
+ bling-sheared-llama-1.3b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct trained on top of a princeton-nlp/Sheared-LLaMA-1.3B base model.
 
 BLING models are fine-tuned with distilled high-quality custom instruct datasets, targeted at a specific subset of instruct tasks with
 the objective of providing a high-quality Instruct model that is 'inference-ready' on a CPU laptop even
 without using any advanced quantization optimizations.
 
- ### **PERFORMANCE on BASIC RAG TEST DATASET**
-
- | Model | Params (B) | Sourcing | GPU/CPU | Output Tokens | Out as % of Input | Process Time (secs) | Score (0-100) |
- | :---------- | :--------: | :----: | :-----: | :---------: | :-------: | :--------: | :-------: |
- | gpt-4 | <=1000 | Closed | Multi-GPU | 2665 | 10.53% | 183.8 | 100 |
- | gpt-3.5-turbo-instruct | <=175 | Closed | Multi-GPU | 2621 | 11.49% | 62.7 | 100 |
- | claude-instant-v1 | <=50 | Closed | Multi-GPU | 6337 | 26.50% | 154 | 100 |
- | aib-read-gpt | 7 | Closed | GPU | 1964 | 9.30% | 114 | 96 |
- | **bling_falcon-1b-0.1** | **1.3** | **Open** | **CPU** | **3204** | **14.55%** | **696** | **77** |
- | bling_pythia-1.4b-0.1 | 1.4 | Open | CPU | 2589 | 11.75% | 593.5 | 65 |
- | bling_pythia-1b-0.1 | 1.0 | Open | CPU | 2753 | 12.49% | 428 | 59 |
- | bling_cerebras-1.3b | 1.3 | Open | CPU | 3202 | 20.01% | 690.1 | 52 |
- | bling_pythia_410m | 0.41 | NA | CPU | 2349 | 10.66% | 189 | 36 |
- | bling_cerebras_590m | 0.59 | NA | CPU | 4407 | 20.01% | 400.8 | 30 |
-
- For more details on this evaluation, please see the dataset: **llmware/rag_instruct_test_dataset_0.1** and the [BLOG](https://medium.com/@darrenoberst/evaluating-llm-performance-in-rag-instruct-use-cases-083dc272a31d)
-
-
  ### Model Description
 
 <!-- Provide a longer summary of what this model is. -->
@@ -38,8 +20,8 @@ For more details on this evaluation, please see the dataset: **llmware/rag_instr
 - **Model type:** GPTNeoX instruct-trained decoder
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
- - **Finetuned from model [optional]:** tiiuae/falcon-rw-1b
-
+ - **Finetuned from model [optional]:** princeton-nlp/Sheared-LLaMA-1.3B
+
  ## Uses
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
@@ -67,15 +49,6 @@ The first BLING models have been trained for common RAG scenarios, specifically:
 without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.
 
 
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- 1. BLING is not designed for 'chat-bot' or 'consumer-oriented' applications.
-
- 2. BLING is not optimal for most production applications, other than simple and highly specific use cases.
-
-
  ## Bias, Risks, and Limitations
 
 <!-- This section is meant to convey both technical and sociotechnical limitations. -->
@@ -88,8 +61,8 @@ BLING has not been designed for end consumer-oriented applications, and there ha
 The fastest way to get started with BLING is through direct import in transformers:
 
 from transformers import AutoTokenizer, AutoModelForCausalLM
- tokenizer = AutoTokenizer.from_pretrained("llmware/bling-falcon-1b-0.1")
- model = AutoModelForCausalLM.from_pretrained("llmware/bling-falcon-1b-0.1")
+ tokenizer = AutoTokenizer.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")
+ model = AutoModelForCausalLM.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")
 
 
  The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
@@ -108,19 +81,14 @@ my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
 
 ## Citation [optional]
 
- This BLING model was built on top of a Falcon model base - for more information about the Falcon model, please see the paper referenced below:
+ This BLING model was built on top of a "Sheared LLaMA" model base - for more information about the "Sheared LLaMA" model, please see the paper referenced below:
 
- @article{refinedweb,
- title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
- author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
- journal={arXiv preprint arXiv:2306.01116},
- eprint={2306.01116},
- eprinttype={arXiv},
- url={https://arxiv.org/abs/2306.01116},
- year={2023}
+ @article{xia2023sheared,
+ title={Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning},
+ author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
+ year={2023}
 }
 
-
 ## Model Card Contact
 
 Darren Oberst & llmware team
 
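Taken together, the updated quickstart and the \<human> / \<bot> wrapper amount to the following minimal end-to-end inference sketch. The passage and question shown are hypothetical placeholders, and the generation settings are illustrative assumptions rather than values from the model card:

    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")
    model = AutoModelForCausalLM.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")

    # hypothetical placeholder inputs - substitute a retrieved passage and a fact-based question
    text_passage = "The lease term is 36 months, starting on January 1, 2023."
    question = "What is the length of the lease?"

    # wrap the entry in the <human> / <bot> format used in fine-tuning
    my_prompt = text_passage + "\n" + question
    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

    inputs = tokenizer(full_prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)

    # decode only the newly generated tokens, skipping the prompt
    answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print(answer)

Greedy decoding (`do_sample=False`) is a reasonable starting point for the fact-based question answering these models target; sampling parameters can be tuned from there.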