Commit ef7c7b4 by TheBug95 (1 parent: 662c265)

Update README.md

Files changed (1):
  1. README.md +13 -2
README.md CHANGED
@@ -102,6 +102,7 @@ print(respuesta)
 
 # Referencias
 1- **MS MARCO Dataset:**
+
 @misc{bajaj2018msmarcohumangenerated,
 title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
 author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
@@ -113,6 +114,7 @@ print(respuesta)
 }
 
 2- **QLoRA Paper:**
+
 @misc{dettmers2023qloraefficientfinetuningquantized,
 title={QLoRA: Efficient Finetuning of Quantized LLMs},
 author={Tim Dettmers and Artidoro Pagnoni and Ari Holtzman and Luke Zettlemoyer},
@@ -123,17 +125,26 @@ print(respuesta)
 url={https://arxiv.org/abs/2305.14314},
 }
 
-3- [**PEFT Library:**](https://huggingface.co/docs/peft/index)
+3- [**PEFT Library**](https://huggingface.co/docs/peft/index)
 
 4- **LoRA Paper:**
+
 @misc{hu2021loralowrankadaptationlarge,
+
 title={LoRA: Low-Rank Adaptation of Large Language Models},
+
 author={Edward J. Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
+
 year={2021},
+
 eprint={2106.09685},
+
 archivePrefix={arXiv},
+
 primaryClass={cs.CL},
+
 url={https://arxiv.org/abs/2106.09685},
+
 }
 
-5- [**BitsAndBytes Library:** ](https://github.com/bitsandbytes-foundation/bitsandbytes)
+5- [**BitsAndBytes Library**](https://github.com/bitsandbytes-foundation/bitsandbytes)