anakin87 committed on
Commit 1a06bf8 • 1 Parent(s): f18f009

improve readme

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -49,7 +49,7 @@ gemma-2b-orpo performs well for its size on Nous' benchmark suite.
  ## 🙏 Dataset
  [`alvarobartt/dpo-mix-7k-simplified`](https://huggingface.co/datasets/alvarobartt/dpo-mix-7k-simplified)
  is a simplified version of [`argilla/dpo-mix-7k`](https://huggingface.co/datasets/argilla/dpo-mix-7k).
- You can find more information [here](https://huggingface.co/alvarobartt/Mistral-7B-v0.1-ORPO#about-the-dataset).
+ You can find more information [in the dataset card](https://huggingface.co/datasets/alvarobartt/dpo-mix-7k-simplified).
 
  ## 🎮 Model in action
  ### Usage notebook