mertNB committed on
Commit 1655960
1 Parent(s): 5173ff2

Create README.md

Files changed (1)
  1. README.md +87 -0
README.md ADDED
---
license: apache-2.0
datasets:
- neural-bridge/rag-full-20000
language:
- en
tags:
- retrieval-augmented-generation
inference: false
---
# **Rago v1 1B**
**Rago v1 1B is a 1B-parameter retrieval-augmented generation-optimized model built by [Neural Bridge AI](https://www.neuralbridge.ai/) and trained on the [RAG Full Dataset 20000](https://huggingface.co/datasets/neural-bridge/rag-full-20000). It is available under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).**

## **Model Description**
Rago v1 1B is a retrieval-augmented generation-optimized (RAGO) model: it enhances a large language model by grounding its responses in an external, authoritative knowledge base (the context) supplied at inference time. This grounding significantly improves the model's ability to produce relevant, accurate, and context-specific output for specialized domains or internal data without retraining, and it addresses key challenges of large language models (LLMs) such as unpredictability, reliance on potentially outdated data, and the propagation of incorrect information, thereby improving user trust in AI applications. Rago v1 1B itself is built on [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b) and optimized for retrieval-augmented generation, making it particularly effective at contextually aware response generation.

The snippet below shows how to query the model with a context passage and a question:

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "neural-bridge/Rago-v1-1b"

# Build a text-generation pipeline around the model.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

# The model expects the context and the question in this prompt format.
def create_prompt(context, question):
    return f"""##CONTEXT## {context} ##QUESTION## {question} ##ANSWER##"""

sequences = pipeline(
    create_prompt(
        context="Neural Bridge AI is a software company developing artificial intelligence (AI) solutions. It was founded in New York in the USA.",
        question="What solutions does Neural Bridge AI develop for its clients?",
    ),
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)

# Keep only the text generated after the ##ANSWER## marker.
def print_result(generated_text):
    result_start = "##ANSWER##"
    answer_start = generated_text.find(result_start)
    print(generated_text[answer_start + len(result_start):].strip())

for seq in sequences:
    print_result(seq["generated_text"])
```

## **Model Details**

### Training Data

Rago v1 1B has been trained on Neural Bridge's [RAG Full Dataset 20000](https://huggingface.co/datasets/neural-bridge/rag-full-20000), which blends [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), [GSM8K](https://huggingface.co/datasets/gsm8k), and the [RAG Hallucination Dataset 1000](https://huggingface.co/datasets/neural-bridge/rag-hallucination-dataset-1000).

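As a quick orientation, the dataset can be pulled directly from the Hub with the `datasets` library. This is a generic sketch rather than part of the original card; the `train` split name and the record layout are assumptions to verify against the dataset card:

```python
from datasets import load_dataset

# Load Neural Bridge's RAG Full Dataset 20000 from the Hugging Face Hub.
dataset = load_dataset("neural-bridge/rag-full-20000")

# Inspect one example; the available splits and column names should be
# checked against the dataset card (assumed here: a "train" split).
example = dataset["train"][0]
print(example.keys())
print(example)
```
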
### Training Details

In terms of training specifics, Rago v1 1B is built on [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b) and fine-tuned with [LoRA](https://arxiv.org/abs/2106.09685) to sharpen its ability to deliver relevant, precise, and context-specific output across specialized domains or internal datasets, addressing the LLM challenges noted above: unpredictability, reliance on potentially outdated information, and the spread of incorrect data. The architecture mirrors that of [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b), augmented with [LoRA](https://arxiv.org/abs/2106.09685) adapters. The model was trained with 4-bit quantization on a single NVIDIA A100 GPU for approximately one hour, using a learning rate of *2e-5* with a cosine scheduler and the following LoRA parameters (a configuration sketch follows the list):

* LoRA Rank (R): 32
* LoRA Alpha: 64
* LoRA Dropout: 0.05
* Target Modules: *query_key_value, dense, dense_h_to_4h, dense_4h_to_h*

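The original training script is not published; the sketch below only illustrates how the listed hyperparameters could map onto a standard [PEFT](https://github.com/huggingface/peft) LoRA setup over a 4-bit-quantized Falcon-RW-1B. The NF4 quantization type and the `bias`/`task_type` settings are assumptions, not confirmed details:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "tiiuae/falcon-rw-1b"

# Load the base model with 4-bit quantization (NF4 here is an assumption).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

# Attach LoRA adapters with the hyperparameters listed above.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Fine-tuning then proceeds with a 2e-5 learning rate and a cosine scheduler,
# as stated in the card.
```
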
Rago v1 1B also benefits from a custom data collator designed to boost model performance significantly. Employing a masked language modeling (MLM)-style strategy, the model is fine-tuned to generate more accurate responses by treating only the answer portion of each training example as the prediction target. This custom data collator has led to noticeable improvements in the model's factuality.

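The collator itself is not published. As one plausible reconstruction, the sketch below sets every label position up to and including the `##ANSWER##` marker to `-100` (ignored by the loss), so only answer tokens contribute to training; the class name, the raw-string input format, and the marker-based splitting are all hypothetical:

```python
class AnswerOnlyCollator:
    """Pads a batch and masks labels so only the answer span is trained on.

    Hypothetical reconstruction: assumes each example is a single string in the
    ##CONTEXT## ... ##QUESTION## ... ##ANSWER## ... format shown above, and that
    tokenizer.pad_token is set (e.g. tokenizer.pad_token = tokenizer.eos_token).
    """

    def __init__(self, tokenizer, answer_marker="##ANSWER##", max_length=1024):
        self.tokenizer = tokenizer
        self.answer_marker = answer_marker
        self.max_length = max_length

    def __call__(self, texts):
        batch = self.tokenizer(
            texts,
            truncation=True,
            max_length=self.max_length,
            padding=True,
            return_tensors="pt",
        )
        labels = batch["input_ids"].clone()
        # Ignore padding positions in the loss.
        labels[batch["attention_mask"] == 0] = -100

        # Simplification: assumes the marker tokenizes the same way in context.
        marker_ids = self.tokenizer(self.answer_marker, add_special_tokens=False)["input_ids"]
        for i, ids in enumerate(batch["input_ids"].tolist()):
            answer_start = self._find_subsequence_end(ids, marker_ids)
            # Mask everything up to and including the ##ANSWER## marker,
            # so only the answer tokens contribute to the loss.
            labels[i, :answer_start] = -100
        batch["labels"] = labels
        return batch

    @staticmethod
    def _find_subsequence_end(ids, marker_ids):
        for start in range(len(ids) - len(marker_ids) + 1):
            if ids[start : start + len(marker_ids)] == marker_ids:
                return start + len(marker_ids)
        return 0  # Marker not found: keep all tokens in the loss.
```
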
## **Neural Bridge AI Rago Models Index**

| Model | Link | Base Model |
| ----- | ---- | ---------- |
| Rago v1 1B | [link](https://huggingface.co/neural-bridge/Rago-v1-1b) | [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b) |
| Rago v1 7B | [link](https://huggingface.co/neural-bridge/Rago-v1-7b) | [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) |
| Rago v1 40B | [link](https://huggingface.co/neural-bridge/Rago-v1-40b) | [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) |
| Rago v2 7B | [link](https://huggingface.co/neural-bridge/Rago-v2-7b) | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf) |
| Rago v2 13B | [link](https://huggingface.co/neural-bridge/Rago-v2-13b) | [Llama 2 13B](https://huggingface.co/meta-llama/Llama-2-13b-hf) |

## **License**

Rago v1 1B is made available under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0.html). Users should also abide by the [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b) terms of use.