---
license: apache-2.0
---

# tinyChat: Instruction-Based LLM

Introducing tinyChat, an instruction-based Large Language Model (LLM) that is less than 1% the size of GPT-3.5. tinyChat is an open-source model released under the Apache 2.0 license and based on Google's Flan-T5-Large, a 770M-parameter model. Although not as performant as larger models, tinyChat can handle a variety of NLP tasks such as summarization, question answering, and sentiment analysis.

tinyChat is available on the Hugging Face model hub, and the code repository is on GitHub. While tinyChat is open source, we do not recommend using it in a production setting in its current state.

## Use Cases

- Chatbots
- Summarization
- Sentiment analysis
- Q&A systems
- Text completion
- Language modeling
- Mobile applications
- Complementing larger LLMs

## Future Directions

- Improving model accuracy
- Reducing biases and toxicity
- Developing new datasets
- Collaborating with the open-source community
- Applying tinyChat to new domains

## Acknowledgements

We thank OpenAI, Hugging Face, Microsoft Research, and the creators of the Pile, Alpaca, and Databricks Dolly 15k datasets for their contributions to open-source machine learning and the advancement of generative AI.

## Running the Code

```python
import transformers
from peft import PeftModel  # PeftModel lives in the peft package, not transformers

# Base model and the fine-tuned PEFT adapter checkpoint
model_name = "google/flan-t5-large"
peft_model_id = "ckpts_databricks_large"

# Load the tokenizer and the Flan-T5-Large base model
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
base_model = transformers.AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Wrap the base model with the tinyChat adapter weights
peft_model = PeftModel.from_pretrained(base_model, peft_model_id)

# Tokenize an instruction and generate a response
inputs = tokenizer("""[INSERT INSTRUCTION HERE]""", return_tensors="pt")
outputs = peft_model.generate(**inputs, max_length=300, do_sample=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
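
As a quick, unofficial usage sketch, the snippet below reuses the `tokenizer` and `peft_model` objects from the block above to run two illustrative instructions, one summarization and one sentiment analysis. The prompt wording is our own assumption; the README leaves the instruction text as a placeholder, and any plain natural-language instruction should work similarly.

```python
# Illustrative instructions (assumed wording, not an official prompt format);
# tinyChat is instruction-tuned, so natural-language prompts like these
# are a reasonable starting point.
examples = [
    "Summarize the following in one sentence: The meeting covered the Q3 "
    "roadmap, staffing plans, and a proposal to migrate the build system.",
    "Classify the sentiment of this review as positive or negative: "
    "'The food was great, but the service was painfully slow.'",
]

for instruction in examples:
    inputs = tokenizer(instruction, return_tensors="pt")
    outputs = peft_model.generate(**inputs, max_length=300, do_sample=True)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

Because `do_sample=True` enables sampling, outputs will vary from run to run; setting it to `False` gives deterministic greedy decoding if you need reproducible results.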