Refer to the microsoft/phi-2 model card for the recommended prompt format.
### Training Data

Dataset related to electrical engineering: [STEM-AI-mtl/Electrical-engineering](https://huggingface.co/datasets/STEM-AI-mtl/Electrical-engineering)

It is composed of queries: 65% about general electrical engineering, 25% about KiCad (EDA software), and 10% about Python code for KiCad's scripting console.

In addition, a dataset related to STEM and NLP was used: [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
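As a quick sanity check on the stated category split, the 65/25/10 ratio can be sketched with a small helper (`split_counts` is a hypothetical illustration, not part of the dataset tooling, and the total below is arbitrary):

```python
def split_counts(total: int) -> dict:
    """Approximate per-category query counts for the dataset's stated ratios:
    65% general electrical engineering, 25% KiCad, 10% KiCad Python scripting."""
    ratios = {
        "electrical_engineering": 0.65,
        "kicad": 0.25,
        "kicad_python": 0.10,
    }
    counts = {name: int(total * r) for name, r in ratios.items()}
    # Assign any rounding remainder to the largest category so counts sum to total.
    counts["electrical_engineering"] += total - sum(counts.values())
    return counts

print(split_counts(1000))
# {'electrical_engineering': 650, 'kicad': 250, 'kicad_python': 100}
```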
### Training Procedure

[LoRA script](https://github.com/STEM-ai/Phi-2/raw/4eaa6aaa2679427a810ace5a061b9c951942d66a/LoRa.py)

A LoRA PEFT was performed on a 48 GB A40 Nvidia GPU.
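The linked script contains the actual training setup; as a rough sketch of what a LoRA adapter configuration for phi-2 looks like with the Hugging Face `peft` library (every value below is an illustrative assumption, not the script's real setting):

```python
from peft import LoraConfig

# Illustrative configuration only -- the actual hyperparameters are in the
# linked LoRa.py script.
lora_config = LoraConfig(
    r=16,               # rank of the low-rank update matrices (assumed)
    lora_alpha=32,      # scaling factor for the adapter output (assumed)
    lora_dropout=0.05,  # dropout applied inside the adapter (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # phi-2 projection layers (assumed)
    task_type="CAUSAL_LM",
)
```

The configuration is then applied with `peft.get_peft_model(base_model, lora_config)`, after which only the adapter weights are trainable, which is what makes fine-tuning feasible on a single 48 GB GPU.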
William Harbec

### Inference example

[Standard](https://github.com/STEM-ai/Phi-2/blob/4eaa6aaa2679427a810ace5a061b9c951942d66a/chat.py)

[GPTQ format](https://github.com/STEM-ai/Phi-2/blob/ab1ced8d7922765344d824acf1924df99606b4fc/chat-GPTQ.py)
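The linked scripts are the authoritative examples; as a minimal sketch, assuming they follow the "Instruct/Output" QA prompt format that the microsoft/phi-2 model card recommends (`build_prompt` is a hypothetical helper):

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user query in phi-2's recommended 'Instruct: ... Output:' QA format."""
    return f"Instruct: {instruction}\nOutput:"

print(build_prompt("What does a decoupling capacitor do?"))
# Instruct: What does a decoupling capacitor do?
# Output:
```

The model's completion is then generated after the trailing `Output:` marker.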