Commit 88baad2
Committed by Suparious
Parent: 77d1ef2

Update README.md

Files changed (1):
  1. README.md +17 -0
README.md CHANGED
@@ -1,4 +1,11 @@
  ---
+ base_model:
+ - nbeerbower/llama-3-wissenschaft-8B
+ datasets:
+ - jondurbin/truthy-dpo-v0.1
+ - kyujinpy/orca_math_dpo
+ license: other
+ license_name: llama3
  library_name: transformers
  tags:
  - 4-bit
@@ -15,7 +22,17 @@ quantized_by: Suparious
  - Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
  - Original model: [llama-3-bophades-v3-8B](https://huggingface.co/nbeerbower/llama-3-bophades-v3-8B)

+ ![image/png](https://huggingface.co/nbeerbower/bophades-mistral-7B/resolve/main/bophades.png)

+ ## Model Summary
+
+ This model is based on Llama-3-8b and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).
+
+ This is [nbeerbower/llama-3-wissenschaft-8B](https://huggingface.co/nbeerbower/llama-3-wissenschaft-8B) finetuned on [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1) and [kyujinpy/orca_math_dpo](https://huggingface.co/datasets/kyujinpy/orca_math_dpo).
+
+ Finetuned using an A100 on Google Colab.
+
+ [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)

  ## How to use
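The model summary added in this commit describes the training recipe: nbeerbower/llama-3-wissenschaft-8B finetuned with Direct Preference Optimization (DPO) on the two listed preference datasets. As a point of reference, below is a minimal sketch of that kind of DPO run using Hugging Face TRL. It is not the training script behind this model: the hyperparameters, `beta` value, and output directory are assumptions, only one of the two datasets is shown, and some argument names differ between TRL versions.

```python
# Minimal DPO finetuning sketch (illustrative only, not the script used for this model).
# Assumed environment: pip install transformers trl datasets accelerate
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "nbeerbower/llama-3-wissenschaft-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# DPO expects "prompt", "chosen" and "rejected" columns; truthy-dpo-v0.1 provides them.
# Drop the remaining columns to keep the example robust across TRL versions;
# kyujinpy/orca_math_dpo would be prepared the same way.
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")
dataset = dataset.remove_columns(
    [c for c in dataset.column_names if c not in ("prompt", "chosen", "rejected")]
)

config = DPOConfig(
    output_dir="llama-3-bophades-v3-8B",  # hypothetical output directory
    beta=0.1,                             # assumed strength of the implicit KL penalty
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,       # TRL builds a frozen reference copy of the policy
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # renamed to processing_class in recent TRL releases
)
trainer.train()
```

The article by Maxime Labonne linked in the summary covers the same DPO workflow end to end.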