wassemgtk committed
Commit ea9e87e
1 Parent(s): e338cb0

Update README.md

Files changed (1): README.md +4 -4
README.md CHANGED
@@ -22,7 +22,7 @@ img {
 
 ## Model Description
 
- Palmyra Base was primarily pre-trained with English text. Note that there is still a trace amount of non-English data present within the training corpus that was accessed through CommonCrawl. A causal language modeling (CLM) objective was utilized during the process of the model's pretraining. Similar to GPT-3, Palmyra Base is a member of the same family of models that only contain a decoder. As a result, it was pre-trained utilizing the objective of self-supervised causal language modeling. Palmyra Base uses the prompts and general experimental setup from GPT-3 in order to conduct its evaluation per GPT-3.
+ Camel-5b is an instruction-following large language model. It is based on [Palmyra-Base](https://huggingface.co/Writer/palmyra-base) and fine-tuned on ~70k instruction-response records generated by the Writer team across the task categories from the InstructGPT paper, including brainstorming, classification, closed QA (question answering), generation, information extraction, open QA, and summarization.
 
 
 
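Since the updated description presents Camel-5b as an instruction-following model, a minimal usage sketch may be helpful. Nothing in it comes from this commit: the `Writer/camel-5b-hf` repo id, the `### Instruction:` / `### Response:` prompt layout, and the generation settings are all illustrative assumptions layered on the standard `transformers` causal-LM API.

```python
# Minimal sketch of prompting an instruction-tuned causal LM with transformers.
# Assumptions (not from this commit): the repo id and the prompt layout below
# are illustrative; check the model card for the actual values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Writer/camel-5b-hf"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision fits a ~5B model on one GPU
    device_map="auto",          # requires the accelerate package
)

prompt = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\nSummarize the benefits of unit testing.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Drop the prompt tokens so only the generated response is printed.
clean_output = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(clean_output)
```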
@@ -87,14 +87,14 @@ print(clean_output)
 
 ### Limitations and Biases
 
- Palmyra Base’s core functionality is to take a string of text and predict the next token. While language models are widely used for other tasks, there are many unknowns in this work. When prompting Palmyra Base, keep in mind that the next statistically likely token is not always the token that produces the most "accurate" text. Never rely on Palmyra Base to produce factually correct results.
+ Camel’s core functionality is to take a string of text and predict the next token. While language models are widely used for other tasks, there are many unknowns in this work. When prompting Camel, keep in mind that the next statistically likely token is not always the token that produces the most "accurate" text. Never rely on Camel to produce factually correct results.
 
- Palmyra Base was trained on Writer’s custom data. As with all language models, it is difficult to predict how Palmyra Base will respond to specific prompts, and offensive content may appear unexpectedly. We recommend that the outputs be curated or filtered by humans before they are released, both to censor undesirable content and to improve the quality of the results.
+ Camel was trained on Writer’s custom data. As with all language models, it is difficult to predict how Camel will respond to specific prompts, and offensive content may appear unexpectedly. We recommend that the outputs be curated or filtered by humans before they are released, both to censor undesirable content and to improve the quality of the results.
 
 
 ## Evaluation results
 
- Evaluation of Palmyra-base model on the benchmark
+ Evaluation of the Camel-5b model on the benchmark
 
 Coming Soon
 
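The "next statistically likely token" caveat in the Limitations paragraph is easy to make concrete. The sketch below (same assumed repo id as above, purely illustrative) prints the model's top next-token candidates for a prompt instead of generating a full answer; note that nothing in it ranks those candidates by factual accuracy.

```python
# Sketch: inspect next-token probabilities, the raw quantity a causal LM models.
# The repo id is the same illustrative assumption as in the previous example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Writer/camel-5b-hf"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

text = "The capital of France is"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)

# The top candidates are the "statistically likely" continuations; the model
# never checks them for truth, which is why human review is recommended.
values, indices = probs.topk(5)
for p, token_id in zip(values, indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {float(p):.3f}")
```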