kevinkawchak committed on
Commit cc5a609
1 Parent(s): 936efd4

Update README.md

Files changed (1)
  1. README.md +61 -6
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 language:
 - en
-license: apache-2.0
+license: llama3
 tags:
 - text-generation-inference
 - transformers
@@ -9,14 +9,69 @@ tags:
 - llama
 - trl
 base_model: gradientai/Llama-3-8B-Instruct-Gradient-1048k
+datasets:
+- zjunlp/Mol-Instructions
 ---
 
-# Uploaded model
-
 - **Developed by:** kevinkawchak
-- **License:** apache-2.0
+- **License:** llama3
 - **Finetuned from model :** gradientai/Llama-3-8B-Instruct-Gradient-1048k
+- **Finetuned using dataset :** zjunlp/Mol-Instructions, cc-by-4.0
+- **Dataset identification:** Molecule-oriented Instructions
+- **Dataset function:** Description guided molecule design
+
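As a sketch of how this dataset might be pulled in (the config string, split name, and `trust_remote_code` flag are assumptions taken from the dataset card, not from this commit):

```python
# Sketch: load the molecule-oriented subset of Mol-Instructions.
# Config name, split name, and trust_remote_code are assumed from
# the dataset card; verify before running.
from datasets import load_dataset

mol = load_dataset(
    "zjunlp/Mol-Instructions",
    "Molecule-oriented Instructions",
    trust_remote_code=True,
)

# Records in the "description guided molecule design" task pair a natural-
# language description (instruction/input) with a SELFIES string (output).
record = mol["description_guided_molecule_design"][0]
print(record["instruction"])
print(record["output"])
```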
+## May 07, 2024: Additional Fine-tunings, Built with Meta Llama 3 <br>
+1) gradientai/Llama-3-8B-Instruct-Gradient-1048k [Model](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k) <br>
+Llama 3 8B update: 1040K context length, up from 8K; highest RAM consumption of the three<br>
+"What is the structure for adenine?": verbose SELFIES structure, but logical<br>
+[Fine-tuned](https://huggingface.co/kevinkawchak/gradientai-Llama-3-8B-Instruct-Gradient-1048k-Molecule16) on Mol-Instructions, float16, [GitHub](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Llama-3-8B-Instruct-Gradient-1048k-Molecule.ipynb), 610 seconds, A100 40GB <br>
+
+2) NousResearch/Hermes-2-Pro-Llama-3-8B [Model](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)<br>
+Llama 3 8B update: cleaned OpenHermes 2.5 data, new Function Calling and JSON Mode datasets<br>
+"What is the structure for adenine?": concise SELFIES structure, but less logical <br>
+[Fine-tuned](https://huggingface.co/kevinkawchak/NousResearch-Hermes-2-Pro-Llama-3-8B-Molecule16) on Mol-Instructions, float16, [GitHub](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Hermes-2-Pro-Llama-3-8B-Molecule.ipynb), 599 seconds, A100 40GB <br>
+
+3) nvidia/Llama3-ChatQA-1.5-8B [Model](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B)<br>
+Llama 3 8B update: ChatQA-1.5 enhances tabular and arithmetic calculation capability<br>
+"What is the structure for adenine?": verbose SELFIES structure, and less logical <br>
+[Fine-tuned](https://huggingface.co/kevinkawchak/nvidia-Llama3-ChatQA-1.5-8B-Molecule16) on Mol-Instructions, float16, [GitHub](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Llama3-ChatQA-1.5-8B-Molecule.ipynb), 599 seconds, A100 40GB <br>
+
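A minimal inference sketch for querying any of the three fine-tuned checkpoints above with the adenine prompt. It assumes the checkpoint ships a Llama-3 chat template; the original comparisons were run in the linked notebooks, not with this exact code:

```python
# Illustrative inference sketch for the fine-tuned checkpoints listed above.
# Plain transformers is used here; the original runs used the linked notebooks.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kevinkawchak/gradientai-Llama-3-8B-Instruct-Gradient-1048k-Molecule16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The adenine prompt used to compare the three fine-tunes.
messages = [{"role": "user", "content": "What is the structure for adenine?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```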
+Responses were verified against the Wikipedia [Adenine](https://en.wikipedia.org/wiki/Adenine) SMILES entry and an estimated SMILES-to-SELFIES Python notebook [generator](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/SMILES%20to%20SELFIES%20estimator.ipynb). <br>
+Fine-tunings were performed using the Apache-2.0 unsloth 'Alpaca + Llama-3 8b full example' Colab [notebook](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing).
+
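The published check used the estimator notebook linked above; as a rough illustration, the open-source `selfies` package can perform the same SMILES-to-SELFIES conversion directly. The adenine SMILES below is the PubChem canonical form, so verify it against the Wikipedia entry:

```python
# Cross-check sketch using the open-source `selfies` package
# (pip install selfies) instead of the estimator notebook.
import selfies as sf

adenine_smiles = "C1=NC2=NC=NC(=C2N1)N"  # assumed PubChem canonical SMILES
adenine_selfies = sf.encoder(adenine_smiles)
print(adenine_selfies)

# Decode back to SMILES; the round trip should describe the same molecule.
print(sf.decoder(adenine_selfies))
```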
+## Primary Study
+The following are modifications of, or improvements to, the original notebooks. Please refer to the authors' models for the published primary work.
+[Cover Image](https://drive.google.com/file/d/1J-spZMzLlPxkqfMrPxvtMZiD2_hfcGyr/view?usp=sharing). [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/). Built with Meta Llama 3. <br>
+
+A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset (1-2). In addition, the minimum LoRA rank value was used to reduce the overall size of the created models. Specifically, the molecule-oriented instructions task "description guided molecule design" was implemented to answer general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry-related questions returned SELFIES structures with limited accuracy.
+
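A minimal sketch of the load-and-adapt step described in this paragraph, following the Unsloth Colab pattern; `r=8` stands in for the "minimum LoRA rank value", since the exact rank is not stated in this README:

```python
# Sketch of the 4-bit load plus low-rank LoRA described above,
# following the Unsloth Colab pattern.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit weights cut training memory requirements
)

model = FastLanguageModel.get_peft_model(
    model,
    r=8,  # low LoRA rank keeps the adapter, and the saved model, small
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
)
```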
+The notebook used Torch and Hugging Face libraries with the Unsloth llama-3-8b-Instruct-bnb-4bit quantized model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing of the appropriate level of compression, and of hyperparameter adjustments for accurate SELFIES chemical structure outputs, remains relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and a reduced 4-bit version were uploaded to Hugging Face (4-5).
+
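The training loop implied here, sketched with TRL's `SFTTrainer` in the style of the Unsloth notebook and continuing from the previous snippet; the batch settings and the `text` field are assumptions, with the real values in the notebook from reference (3):

```python
# Sketch of the TRL training loop implied above: 60 steps over
# prompt-formatted Mol-Instructions records.
from transformers import TrainingArguments
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,                  # LoRA-wrapped 4-bit model from the sketch above
    tokenizer=tokenizer,
    train_dataset=train_dataset,  # Mol-Instructions records rendered to prompt text
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,             # loss fell from 1.97 to 0.73 over these steps
        learning_rate=2e-4,
        fp16=True,
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()
```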
+Update 04/24: The number of training steps was increased to further decrease loss, while maintaining reduced memory requirements through quantization and reduced size through LoRA. This allowed for significantly improved responses to biochemistry-related questions; the resulting models were saved at the following sizes: [8.03B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule16), [4.65B](https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule04). [GitHub](https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Molecule.ipynb).
+
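A sketch of how the two uploaded sizes could be produced with Unsloth's merged-save helpers; the `save_method` strings follow Unsloth's saving API, and the directory names simply mirror the linked repos:

```python
# Sketch of producing the two uploaded sizes with Unsloth's merged-save helpers.
model.save_pretrained_merged(
    "Meta-Llama-3-8B-Instruct-Molecule16", tokenizer,
    save_method="merged_16bit",  # full float16 merge (the 8.03B upload)
)
model.save_pretrained_merged(
    "Meta-Llama-3-8B-Instruct-Molecule04", tokenizer,
    save_method="merged_4bit",   # bnb 4-bit merge (the 4.65B upload)
)

# Or push directly to the Hub:
# model.push_to_hub_merged(
#     "kevinkawchak/Meta-Llama-3-8B-Instruct-Molecule16",
#     tokenizer, save_method="merged_16bit",
# )
```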
+References:
+1) unsloth: https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit
+2) zjunlp: https://huggingface.co/datasets/zjunlp/Mol-Instructions
+3) github: https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Mol.ipynb
+4) hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol16
+5) hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol04
+
+@inproceedings{fang2023mol, <br>
+author = {Yin Fang and<br>
+Xiaozhuan Liang and<br>
+Ningyu Zhang and<br>
+Kangwei Liu and<br>
+Rui Huang and<br>
+Zhuo Chen and<br>
+Xiaohui Fan and<br>
+Huajun Chen},<br>
+title = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset<br>
+for Large Language Models},<br>
+booktitle = {{ICLR}},<br>
+publisher = {OpenReview.net},<br>
+year = {2024},<br>
+url = {https://openreview.net/pdf?id=Tlsdsb6l9n}}<br>
 
-This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+This llama model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)