Hanmeng Liu committed
Commit 3cd1e6f
Parent(s): c030e73
Update README.md
README.md
CHANGED
@@ -10,6 +10,8 @@ tags:
 - logical
 ---
 
-This model is tuned on the LogiCoT data and the GPT-4 alpaca data with the LLaMa-7b model.
+This model is tuned on the **LogiCoT** data and the GPT-4 alpaca data with the **LLaMa-7b** model.
+
 We use 2 A100 GPUs.
+
 We first instruction-tune LLaMa-7b on the GPT-4 alpaca data for 3 days, then on the LogiCoT data for 4 days.
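The two-stage recipe described in the README can be sketched roughly as follows. This is a minimal illustration only: the dataset paths, stage names, epoch count, and the use of the Hugging Face `Trainer` API are assumptions for the sketch, not the authors' actual configuration (the README does not state which training framework was used).

```python
# Sketch of the two-stage instruction-tuning schedule described above.
# Paths and hyperparameters are illustrative placeholders.

from dataclasses import dataclass


@dataclass
class Stage:
    name: str       # label for the tuning stage
    data_path: str  # hypothetical dataset location
    days: int       # wall-clock budget reported in the README


# Stage 1: GPT-4 alpaca data (3 days), then Stage 2: LogiCoT data (4 days),
# each stage resuming from the checkpoint the previous stage produced.
SCHEDULE = [
    Stage("gpt4-alpaca", "data/gpt4_alpaca.json", 3),
    Stage("logicot", "data/logicot.json", 4),
]


def total_days(stages):
    """Total wall-clock budget across all stages."""
    return sum(s.days for s in stages)


def finetune_stage(checkpoint: str, stage: Stage, output_dir: str) -> str:
    """One instruction-tuning stage; returns the new checkpoint path.

    Uses the Hugging Face Trainer API as one plausible implementation.
    Not executed here: a real run needs model weights, a tokenized
    dataset built from stage.data_path, and a multi-GPU launch
    (e.g. torchrun across the 2 A100 GPUs).
    """
    from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

    model = AutoModelForCausalLM.from_pretrained(checkpoint)
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=3)
    # A real run must also pass train_dataset=... to Trainer.
    trainer = Trainer(model=model, args=args)
    trainer.train()
    return output_dir
```

Chaining `finetune_stage` over `SCHEDULE` in order reproduces the sequential setup: the LogiCoT stage starts from the alpaca-tuned checkpoint rather than from the base LLaMa-7b weights.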