shreyajn committed
Commit 2e7e412 · verified · Parent: a1e8271

Upload README.md with huggingface_hub

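For context, this commit message is the default that huggingface_hub generates when a file is pushed through its upload API. A minimal sketch of such a call (the repo_id below is a hypothetical placeholder, not taken from this page):

```python
# Minimal sketch of the kind of call that produces this commit; the
# repo_id is a hypothetical placeholder, not taken from this page.
from huggingface_hub import upload_file

upload_file(
    path_or_fileobj="README.md",   # local file to push
    path_in_repo="README.md",      # destination path in the repo
    repo_id="qualcomm/PLaMo-1B",   # hypothetical repo id
)
# When no commit_message is passed, huggingface_hub defaults to
# "Upload README.md with huggingface_hub", matching the message above.
```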
Files changed (1):
  README.md +2 -2
README.md CHANGED
@@ -10,14 +10,14 @@ pipeline_tag: text-generation
 
 ---
 
-![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/plamo_1b_quantized/web-assets/model_demo.png)
+![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/plamo_1b/web-assets/model_demo.png)
 
 # PLaMo-1B: Optimized for Mobile Deployment
 ## State-of-the-art large language model useful on a variety of language understanding and generation tasks
 
 PLaMo-1B is the first small language model (SLM) in the PLaMo™ Lite series from Preferred Networks (PFN), designed to power AI applications for edge devices including mobile, automotive, and robots across various industrial sectors. This model builds on the advancements of PLaMo-100B, a 100-billion parameter large language model (LLM) developed from the ground up by PFN’s subsidiary Preferred Elements (PFE). Leveraging high-quality Japanese and English text data generated by PLaMo-100B, PLaMo-1B has been pre-trained on a total of 4 trillion tokens. As a result, it delivers exceptional performance in Japanese benchmarks, outperforming other SLMs with similar parameter sizes. In evaluations such as Jaster 0-shot and 4-shot, PLaMo-1B has demonstrated performance on par with larger LLMs, making it a highly efficient solution for edge-based AI tasks.
 
-Please contact us to purchase this model. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/plamo_1b_quantized).
+Please contact us to purchase this model. More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/plamo_1b).
 
 ### Model Details
 