mpasila committed on
Commit
50b4770
1 Parent(s): 17d147e

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -21,6 +21,8 @@ This LoRA uses the 1000B checkpoint.
 
 Trained for 1 epoch with 2048 token context, LoRA Rank 256, Alpha 512.
 
+As a proof of concept it seems to work fairly well. Though I should generate the rest of the dataset which should hopefully work a lot better.
+
 # Uploaded model
 
 - **Developed by:** mpasila
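The hyperparameters stated in the README (LoRA rank 256, alpha 512, 2048-token context, 1 epoch) would correspond roughly to a PEFT `LoraConfig` like the one below. This is a sketch under assumptions, not the author's actual training code; in particular, the `target_modules` list is a typical choice for LLaMA-style models and is not taken from this repo.

```python
# Sketch of a PEFT LoraConfig matching the hyperparameters in the README
# (rank 256, alpha 512 = 2 * rank). target_modules is an assumed typical
# choice for LLaMA-style models, not confirmed by this repo.
from peft import LoraConfig

lora_config = LoraConfig(
    r=256,              # LoRA rank, as stated in the README
    lora_alpha=512,     # alpha, as stated in the README
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    lora_dropout=0.0,
    task_type="CAUSAL_LM",
)
# Training would then run for 1 epoch with a 2048-token sequence length,
# e.g. by setting max_seq_length=2048 in whatever trainer is used.
```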