yuchenlin committed on
Commit
1ba66c3
•
1 Parent(s): 3abb169

update link

Files changed (1)
  1. constants.py +1 -1
constants.py CHANGED
@@ -21,7 +21,7 @@ INTRODUCTION_TEXT= """
21
  > URIAL Bench tests the capacity of base LLMs for alignment without introducing the factors of fine-tuning (learning rate, data, etc.), which are hard to control for fair comparisons.
22
  Specifically, we use [URIAL](https://github.com/Re-Align/URIAL/tree/main/run_scripts/mt-bench#run-urial-inference) to align a base LLM, and evaluate its performance on MT-Bench.
23
 
24
- - [πŸ‘ URIAL](https://arxiv.org/abs/2312.01552) uses three constant examples to align BASE LLMs with in-context learning.
25
  - [πŸ“Š MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) is a small, curated benchmark with two turns of instruction following tasks in 10 domains.
26
 
27
 
 
21
  > URIAL Bench tests the capacity of base LLMs for alignment without introducing the factors of fine-tuning (learning rate, data, etc.), which are hard to control for fair comparisons.
22
  Specifically, we use [URIAL](https://github.com/Re-Align/URIAL/tree/main/run_scripts/mt-bench#run-urial-inference) to align a base LLM, and evaluate its performance on MT-Bench.
23
 
24
+ - [πŸ‘ URIAL](https://arxiv.org/abs/2312.01552) uses K=3 constant [examples](https://github.com/Re-Align/URIAL/blob/main/urial_prompts/inst_1k_v4.help.txt.md) to align BASE LLMs with in-context learning.
25
  - [πŸ“Š MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) is a small, curated benchmark with two turns of instruction following tasks in 10 domains.
26
 
27
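For context on the bullet being edited: below is a minimal sketch, not part of this commit or the repository, of how URIAL-style in-context alignment prepends K=3 fixed query/answer pairs to each user instruction before prompting an unaligned base LLM. The constant `URIAL_K3_EXAMPLES` and the helper `build_urial_prompt` are hypothetical names standing in for the curated examples linked in the diff.

```python
# Illustrative sketch only: prompt assembly for URIAL-style in-context alignment.
# A base (non-instruction-tuned) LLM is steered by K=3 constant examples that are
# prepended verbatim to every user query; no fine-tuning is involved.

# Hypothetical placeholder for the three curated query/answer pairs.
URIAL_K3_EXAMPLES = """\
# Query: <example instruction 1>
# Answer: <curated helpful, safe answer 1>

# Query: <example instruction 2>
# Answer: <curated helpful, safe answer 2>

# Query: <example instruction 3>
# Answer: <curated helpful, safe answer 3>
"""


def build_urial_prompt(user_query: str) -> str:
    """Prepend the constant examples, then let the base LLM continue the pattern."""
    return f"{URIAL_K3_EXAMPLES}\n# Query: {user_query}\n# Answer:"


if __name__ == "__main__":
    # The resulting string would be sent to a base LLM as a plain completion prompt.
    print(build_urial_prompt("Explain what MT-Bench measures in one sentence."))
```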