ravithejads committed
Commit bbcb189 · verified · 1 Parent(s): df8b584

Update README.md

Files changed (1)
  1. README.md +48 -22
README.md CHANGED
@@ -1,30 +1,56 @@
  ---
- {}
  ---
- ---
- license: apache-2.0
-
- Usage:
-
- ```python
- !pip install -qU transformers accelerate
-
- from transformers import AutoTokenizer
- import transformers
- import torch
-
- model = model_name
- messages = [{"role": "user", "content": "What is a large language model?"}]
-
- tokenizer = AutoTokenizer.from_pretrained(model)
- prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
- pipeline = transformers.pipeline(
-     "text-generation",
-     model=model,
-     torch_dtype=torch.float16,
-     device_map="auto",
- )
-
- outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_k=50, top_p=0.95)
- print(outputs[0]["generated_text"])
- ```
  ---
+ license: llama3.1
  ---
+ # BRAG-Llama-3.1-8b-v0.1 Model Card
+
+ ## Model Description
+
+ BRAG-Llama-3.1-8b-v0.1 is part of the BRAG series of SLMs (Small Language Models) trained specifically for RAG (Retrieval-Augmented Generation) tasks. This model is an instruct finetune of the Llama-3.1-8b-instruct base model. For more details about the BRAG series, please refer to our blog post.
+
+ ## Key Features
+
+ - **Model Size**: 8 billion parameters
+ - **Context Length**: Supports up to 128k tokens
+ - **Language**: Trained and evaluated for English, but the base model has multilingual capabilities
+
+ ## Performance
+
+ - **ChatRAG-Bench (all)**: 51.70
+
+ ## How to Use
+
+ A minimal usage sketch is shown below.
+
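+ The sketch is adapted from the `transformers` pipeline snippet in the previous version of this README; the repo id used below is an assumption inferred from the card title and may need to be adjusted.
+
+ ```python
+ # pip install -qU transformers accelerate
+ from transformers import AutoTokenizer
+ import transformers
+ import torch
+
+ # Assumed repo id (inferred from this card's title); swap in the checkpoint you actually use.
+ model = "maximalists/BRAG-Llama-3.1-8b-v0.1"
+ messages = [{"role": "user", "content": "What is a large language model?"}]
+
+ # Render the chat with the model's chat template, then generate with a text-generation pipeline.
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+
+ outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_k=50, top_p=0.95)
+ print(outputs[0]["generated_text"])
+ ```
+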
+ ## Use Cases
+
+ BRAG-Llama-3.1-8b-v0.1 is designed for RAG (Retrieval-Augmented Generation) tasks, making it suitable for applications that generate responses grounded in supplied context, including chats over tables and text, such as the cases below (a prompt-construction sketch follows the list):
+
+ 1. Question-answering systems with tables and text.
+ 2. Contextual conversational chat.
+
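+ As a rough illustration of this contextual setup, the sketch below packs a retrieved table (rendered as Markdown text) and a question into the user turn. The prompt layout and repo id are assumptions, not a format specified by this card.
+
+ ```python
+ from transformers import AutoTokenizer
+
+ model = "maximalists/BRAG-Llama-3.1-8b-v0.1"  # assumed repo id, as in the usage sketch above
+ tokenizer = AutoTokenizer.from_pretrained(model)
+
+ # Hypothetical retrieved context: a small table rendered as Markdown text.
+ context = (
+     "| Quarter | Revenue |\n"
+     "|---------|---------|\n"
+     "| Q1      | $1.2M   |\n"
+     "| Q2      | $1.5M   |"
+ )
+ question = "Which quarter had higher revenue?"
+
+ messages = [{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}]
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ # `prompt` can then be passed to the text-generation pipeline from the How to Use sketch.
+ ```
+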
+ ## Limitations
+
+ As with all LLMs, users should be aware of potential biases and limitations. Always verify important information and use the model responsibly.
+
+ ## Citation
+
+ To cite this model, please use the following:
+
+ ```bibtex
+ @misc{BRAG-Llama-3-8b-v0.1,
+   title = {BRAG-Llama-3-8b-v0.1},
+   year = {2024},
+   publisher = {HuggingFace},
+   url = {https://huggingface.co/maximalists/BRAG-Llama-3-8b-v0.1},
+   author = {Pratik Bhavsar and Ravi Theja}
+ }
+ ```
+
+ ## Model Access
+
+ The model is available on the HuggingFace Model Hub:
+ [https://huggingface.co/maximalists/BRAG-Llama-3-8b-v0.1](https://huggingface.co/maximalists/BRAG-Llama-3-8b-v0.1)
+
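+ A short download sketch using the `huggingface_hub` client (the repo id is taken from the link above):
+
+ ```python
+ # Optionally fetch the weights locally; requires `pip install huggingface_hub`.
+ from huggingface_hub import snapshot_download
+
+ local_dir = snapshot_download("maximalists/BRAG-Llama-3-8b-v0.1")
+ print(local_dir)  # path to the downloaded snapshot
+ ```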
 
+ ## Additional Information
+
+ For more details on the BRAG series and updates, please refer to the official blog post.