Hibiki252 committed
Commit dc3365d · verified · 1 Parent(s): 20068d1

Update README.md

Files changed (1): README.md (+31 -1)
README.md CHANGED
@@ -1,5 +1,7 @@
---
- base_model: Hibiki252/gemma-2-27b-4bit
+ base_model:
+ - Google/gemma-2-27b
+ - Hibiki252/gemma-2-27b-4bit
tags:
- text-generation-inference
- transformers
@@ -20,3 +22,31 @@ language:
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+
+ ### Training Data and License
+
+ This model was fine-tuned on the dataset [DeL-TaiseiOzaki/Tengentoppa-sft-v1.0](https://huggingface.co/datasets/DeL-TaiseiOzaki/Tengentoppa-sft-v1.0), released under the [CC BY 4.0 License](https://creativecommons.org/licenses/by/4.0/).
+ The dataset was compiled from the following publicly available datasets:
+ - Hachi-Alpaca_newans (GENIAC-Team-Ozaki/Hachi-Alpaca_newans)
+ - Chatbot Arena Japanese Dataset for Karakuri LM 8x7B Chat v0.1 AWQ (GENIAC-Team-Ozaki/chatbot-arena-ja-karakuri-lm-8x7b-chat-v0.1-awq)
+ - WikiHow NFQA Japanese Cleaned Dataset (GENIAC-Team-Ozaki/WikiHowNFQA-ja_cleaned)
+ - Evolutionary Alpaca Generation 3 500 Cleaned Dataset (GENIAC-Team-Ozaki/Evol-Alpaca-gen3-500_cleaned)
+ - Open Assistant 33k Japanese Reformatted Dataset (GENIAC-Team-Ozaki/oasst2-33k-ja_reformatted)
+ - SFT Dataset For Self-Taught Evaluators Iteration 1 (Aratako/SFT-Dataset-For-Self-Taught-Evaluators-iter1)
+ - Japanese Debate Argument Instruction Dataset (GENIAC-Team-Ozaki/debate_argument_instruction_dataset_ja)
+ - Japanese Helpful-Harmless RLHF 49k Dataset (fujiki/japanese_hh-rlhf-49k)
+ - Japanese Government FAQs 22k Dataset (GENIAC-Team-Ozaki/JaGovFaqs-22k)
+ - Evolutionary Helpful-Harmless RLHF Generation 3 1k Cleaned Dataset (GENIAC-Team-Ozaki/Evol-hh-rlhf-gen3-1k_cleaned)
+ - Magpie Qwen 2.5 32B Reasoning 100k Dataset (DeL-TaiseiOzaki/magpie-qwen2.5-32b-reasoning-100k)
+ - Japanese Reasoning Finetuning Dataset (DeL-TaiseiOzaki/reasoning-finetuning-ja)
+ - Magpie LLM Japanese 3.13B 20k Dataset (DeL-TaiseiOzaki/magpie-llm-jp-3-13b-20k)
+ - Magpie SFT Version 1.0 Dataset (llm-jp/magpie-sft-v1.0)
+ - Aya Japanese Nemotron DPO Masked Dataset (weblab-GENIAC/aya-ja-nemotron-dpo-masked)
+ - Open Platypus Japanese Masked Dataset (weblab-GENIAC/Open-Platypus-Japanese-masked)
+ - SFT data synthesized with Mixtral-8x22B (hatakeyama-llm-team/AutoGeneratedJapaneseQA-CC)
+
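For orientation, a minimal sketch of pulling the compiled corpus with the `datasets` library; the `train` split name is an assumption, so the snippet prints the schema instead of hard-coding field names:

```python
from datasets import load_dataset

# Load the compiled SFT corpus listed above.
# Assumption: the dataset exposes a "train" split.
dataset = load_dataset("DeL-TaiseiOzaki/Tengentoppa-sft-v1.0", split="train")

# Inspect the schema rather than assuming column names.
print(dataset.column_names)
print(dataset[0])
```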
+ ### Inference Guide
+ This model was created by supervised fine-tuning (SFT) of Hibiki252/gemma-2-27b-4bit, a copy of google/gemma-2-27b stored with 4-bit settings, on data from DeL-TaiseiOzaki/Tengentoppa-sft-v1.0.
+ To perform inference, execute the following code.
+
+ (code)
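In lieu of the elided script above, a minimal `transformers`-based sketch of one plausible inference setup, not the author's verified code; the repo id `Hibiki252/gemma-2-27b-finetune`, the prompt, and the generation settings are assumptions, and 4-bit loading with `bitsandbytes` mirrors the 4-bit base:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical repo id: substitute the actual id of this fine-tuned model.
model_id = "Hibiki252/gemma-2-27b-finetune"

# 4-bit loading mirrors the 4-bit base model (requires bitsandbytes).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPUs
)

# Illustrative prompt and settings, not the author's verified values.
prompt = "What is the capital of Japan?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```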