aashish1904 committed
Commit 173c834 • Parent: 27b1832

Upload README.md with huggingface_hub

Files changed (1): README.md (+68 −0)
---
library_name: transformers
license: llama3.1
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- llama-factory
- full
- unsloth
- generated_from_trainer
model-index:
- name: kimhyeongjun/Hermes-3-Llama-3.1-8B-Kor-Finance-Advisor
  results: []
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Hermes-3-Llama-3.1-8B-Kor-Finance-Advisor-GGUF

This is a quantized version of [kimhyeongjun/Hermes-3-Llama-3.1-8B-Kor-Finance-Advisor](https://huggingface.co/kimhyeongjun/Hermes-3-Llama-3.1-8B-Kor-Finance-Advisor) created using llama.cpp.
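To run a GGUF quant of this model locally, one option is llama-cpp-python. Hermes 3 models use the ChatML prompt format; the sketch below builds such a prompt, and the commented-out lines show how it might be passed to the model. The GGUF file name and sampling parameters are placeholders, not names published by this repo.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt, as used by Hermes-3 models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful Korean finance advisor.",
    "예금과 적금의 차이를 설명해 주세요.",
)
print(prompt)

# Hypothetical local inference with llama-cpp-python (model_path is a placeholder):
# from llama_cpp import Llama
# llm = Llama(model_path="Hermes-3-Llama-3.1-8B-Kor-Finance-Advisor.Q4_K_M.gguf", n_ctx=4096)
# out = llm(prompt, max_tokens=512, stop=["<|im_end|>"])
# print(out["choices"][0]["text"])
```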

# Original Model Card

# kimhyeongjun/Hermes-3-Llama-3.1-8B-Kor-Finance-Advisor

This is my personal toy project for Chuseok (Korean Thanksgiving Day).

This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the Korean_synthetic_financial_dataset_21K dataset.

## Model description

Everything happened automatically, without any user intervention.

Based on finance-related PDF data collected directly from the web, we refined the raw data using the 'meta-llama/Meta-Llama-3.1-70B-Instruct-FP8' model (the FP8 variant was chosen for budget reasons).
After generating synthetic data from the cleaned data, we evaluated its quality using the 'meta-llama/Llama-Guard-3-8B' and 'RLHFlow/ArmoRM-Llama3-8B-v0.1' models.
We then used 'Alibaba-NLP/gte-large-en-v1.5' to extract embeddings and applied Faiss to perform Jaccard distance-based nearest-neighbor analysis, constructing a diverse and sophisticated final dataset of 21k samples.

## Task duration

3 days (2024-09-14 ~ 2024-09-16)

## Evaluation

None (I had to take the Thanksgiving holiday off.)

## Sample

![image/png](https://cdn-uploads.huggingface.co/production/uploads/619d8e31c21bf5feb310bd82/gJ6hnvAV2Qx9774AFFwQe.png)

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1