raaec committed on
Commit 17d9c6c · verified · 1 Parent(s): 4dbd118

Update README.md

Files changed (1): README.md +36 -3
README.md CHANGED
@@ -1,3 +1,36 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ pipeline_tag: summarization
+ widget:
+ - text: >-
+     Hugging Face: Revolutionizing Natural Language Processing Introduction In
+     the rapidly evolving field of Natural Language Processing (NLP), Hugging
+     Face has emerged as a prominent and innovative force. This article will
+     explore the story and significance of Hugging Face, a company that has
+     made remarkable contributions to NLP and AI as a whole. From its inception
+     to its role in democratizing AI, Hugging Face has left an indelible mark
+     on the industry. The Birth of Hugging Face Hugging Face was founded in
+     2016 by Clément Delangue, Julien Chaumond, and Thomas Wolf. The name
+     Hugging Face was chosen to reflect the company's mission of making AI
+     models more accessible and friendly to humans, much like a comforting hug.
+     Initially, they began as a chatbot company but later shifted their focus
+     to NLP, driven by their belief in the transformative potential of this
+     technology. Transformative Innovations Hugging Face is best known for its
+     open-source contributions, particularly the Transformers library. This
+     library has become the de facto standard for NLP and enables researchers,
+     developers, and organizations to easily access and utilize
+     state-of-the-art pre-trained language models, such as BERT, GPT-3, and
+     more. These models have countless applications, from chatbots and virtual
+     assistants to language translation and sentiment analysis.
+   example_title: Summarization Example 1
+ ---
+
+ ## Model Information
+
+ This is a fine-tuned version of Llama 3.1, trained in English, Spanish, and Chinese for text summarization.
+
+ The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks.
+
+ **Model developer:** Meta
+
+ **Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
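
The commit above swaps the card's minimal frontmatter for a fuller YAML block (`license`, `pipeline_tag`, a `widget` sample). As a minimal sketch of how such a card is structured — stdlib only, not the Hub's own parser — the frontmatter is simply the text between the first pair of `---` delimiter lines:

```python
def split_model_card(text: str):
    """Split a model card into (frontmatter, body).

    Sketch under the assumption that the card starts with a '---' line
    and the frontmatter ends at the next '---' line; anything after
    that delimiter is the Markdown body.
    """
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return "", text  # no frontmatter block present
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            # YAML block is between the delimiters; body follows them.
            return "\n".join(lines[1:i]), "\n".join(lines[i + 1:])
    return "", text  # opening delimiter never closed


card = """---
license: apache-2.0
pipeline_tag: summarization
---

## Model Information
"""
front, body = split_model_card(card)
```

In a real pipeline the `front` string would then be handed to a YAML parser; the Hub reads `pipeline_tag` from it to pick the inference widget shown on the model page.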