Text Generation
Transformers
PyTorch
English
beit3_llava
Inference Endpoints
Yirany committed on
Commit 4654201 • 1 Parent(s): 725aa53

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -13,7 +13,7 @@ library_name: transformers
 [Project Page](https://rlhf-v.github.io/) | [GitHub](https://github.com/RLHF-V/RLHF-V) | [Demo](http://120.92.209.146:8081/) | [Paper](https://arxiv.org/abs/2312.00849)
 
 ## News
-* [2024.05.20] 🎉 We introduce [RLAIF-V](https://github.com/RLHF-V/RLAIF-V), our new alignment framework that utilize open-source models for feedback generation and reach **super GPT-4V trustworthiness**. You can download the corresponding [🤗 dataset](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset) now!
+* [2024.05.20] 🎉 We introduce [RLAIF-V](https://github.com/RLHF-V/RLAIF-V), our new alignment framework that utilize open-source models for feedback generation and reach **super GPT-4V trustworthiness**. You can download the corresponding [dataset](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset) and models ([7B](https://huggingface.co/openbmb/RLAIF-V-7B), [12B](https://huggingface.co/openbmb/RLAIF-V-12B)) now!
 * [2024.04.11] 🔥 Our data is used in [MiniCPM-V 2.0](https://huggingface.co/openbmb/MiniCPM-V-2), an **end-side** multimodal large language model that exhibits **comparable trustworthiness with GPT-4V**!
 
 ## Brief Introduction
@@ -50,7 +50,7 @@ More resistant to over-generalization, even compared to GPT-4V:
 
 ## Citation
 
-If you find RLHF-V is useful in your work, please cite it with:
+If you find RLHF-V is useful in your work, please consider citing it with:
 
 ```
 @article{2023rlhf-v,