LLukas22 committed
Commit c7d1384
1 Parent(s): 563f3a7

Update README_TEMPLATE.md

Files changed (1)
  1. README_TEMPLATE.md +16 -2
README_TEMPLATE.md CHANGED
@@ -1,4 +1,6 @@
---
+ datasets:
+ - bigscience/xP3
license: bigscience-bloom-rail-1.0
language:
- ak
@@ -71,8 +73,20 @@ pipeline_tag: text-generation

## Description

- BLOOM is an autoregressive Large Language Model (LLM), trained to continue text from a prompt on vast amounts of text data using industrial-scale computational resources. As such, it is able to output coherent text in 46 languages and 13 programming languages that is hardly distinguishable from text written by humans. BLOOM can also be instructed to perform text tasks it hasn't been explicitly trained for, by casting them as text generation tasks.
+ > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.
+
+ - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
+ - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
+ - **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
+ - **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
+
+ ### Intended use
+
+ We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
+ - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
+ - Suggest at least five related search terms to "Mạng neural nhân tạo".
+ - Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
+ - Explain in a sentence in Telugu what is backpropagation in neural networks.

## Converted Models
$MODELS$
@@ -89,7 +103,7 @@ Via pip: `pip install llm-rs`
from llm_rs import AutoModel

#Load the model, define any model you like from the list above as the `model_file`
- model = AutoModel.from_pretrained("rustformers/bloom-ggml",model_file="bloom-3b-q4_0-ggjt.bin")
+ model = AutoModel.from_pretrained("rustformers/bloomz-ggml",model_file="bloomz-3b-q4_0-ggjt.bin")

#Generate
print(model.generate("The meaning of life is"))
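The instruction prompts listed under "Intended use" can be passed straight to a converted model through the same llm-rs calls shown in the usage snippet above. A minimal sketch, reusing only `AutoModel.from_pretrained` and `generate` from that snippet together with one of the prompt ideas:

```python
from llm_rs import AutoModel

# Load a converted BLOOMZ model; any file from the "Converted Models" list
# can be passed as `model_file`
model = AutoModel.from_pretrained(
    "rustformers/bloomz-ggml",
    model_file="bloomz-3b-q4_0-ggjt.bin",
)

# BLOOMZ is instruction-tuned, so the task is stated in natural language
prompt = 'Suggest at least five related search terms to "Mạng neural nhân tạo".'
print(model.generate(prompt))
```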
 
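Sampling behaviour can usually be tuned beyond the defaults. The sketch below is an assumption rather than documented behaviour: it presumes the `SessionConfig` and `GenerationConfig` helpers exposed by the llm-rs Python bindings, whose names and fields may differ between versions, so check them against your installed release.

```python
from llm_rs import AutoModel, GenerationConfig, SessionConfig

# Assumed llm-rs helpers (verify against your installed version):
# SessionConfig holds runtime settings, GenerationConfig holds sampling settings.
session_config = SessionConfig(threads=8, batch_size=8)
generation_config = GenerationConfig(
    temperature=0.8,     # lower values make output more deterministic
    top_p=0.9,           # nucleus-sampling cutoff
    max_new_tokens=256,  # cap on generated tokens
    seed=42,             # fixed seed for reproducible sampling
)

model = AutoModel.from_pretrained(
    "rustformers/bloomz-ggml",
    model_file="bloomz-3b-q4_0-ggjt.bin",
    session_config=session_config,
)

print(model.generate("Translate to English: Je t’aime.",
                     generation_config=generation_config))
```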