rccmsu committed
Commit b53b44b
1 Parent(s): 28bf242

Update README.md

Files changed (1): README.md +6 -24
README.md CHANGED
@@ -1,37 +1,19 @@
- ---
- base_model: llama2_7b_darulm_unigram_init_tie_16_11_23
- tags:
- - generated_from_trainer
- metrics:
- - accuracy
- model-index:
- - name: llama2_7b_darulm_unigram_tie_2e_16_11_23
-   results: []
- ---
 
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
+ # TheBloke/Llama-2-7B-fp16
 
- # llama2_7b_darulm_unigram_tie_2e_16_11_23
-
- This model is a fine-tuned version of [llama2_7b_darulm_unigram_init_tie_16_11_23](https://huggingface.co/llama2_7b_darulm_unigram_init_tie_16_11_23) on the None dataset.
+ This model is a fine-tuned (embeddings and LM head only) version of TheBloke/Llama-2-7B-fp16 on a 33 GB Russian dataset.
  It achieves the following results on the evaluation set:
  - Loss: 2.7569
  - Accuracy: 0.4617
 
  ## Model description
 
- More information needed
+ Russian adaptation of LLaMa-2-7B, obtained by replacing the tokenizer.
+ Paper: Tikhomirov M.M., Chernyshev D.I., Impact of Tokenization on LLaMa Russian Adaptation (to appear)
 
  ## Intended uses & limitations
 
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
+ LLAMA 2 COMMUNITY LICENSE AGREEMENT
 
  ### Training hyperparameters
 
@@ -336,4 +318,4 @@ The following hyperparameters were used during training:
  - Transformers 4.34.0
  - Pytorch 2.0.1+cu118
  - Datasets 2.14.5
- - Tokenizers 0.14.1
+ - Tokenizers 0.14.1
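
A note on the reported metrics: assuming the evaluation loss is the mean per-token cross-entropy in nats (the usual convention for the Transformers Trainer on causal language modeling), it corresponds to a perplexity of exp(2.7569) ≈ 15.75 on the evaluation set.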
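
The adaptation recipe described in the card (swap in a Russian tokenizer, then train only the tied embeddings and LM head) can be sketched roughly with the standard transformers API. This is a minimal sketch, not the authors' exact procedure: the tokenizer path is hypothetical and the paper's initialization scheme may differ.

```python
# Rough sketch of the tokenizer-replacement adaptation; illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7B-fp16")
new_tokenizer = AutoTokenizer.from_pretrained("path/to/russian_unigram_tokenizer")  # hypothetical path

# Resize the vocabulary: rows for new tokens in the embedding matrix
# are freshly initialized by transformers.
model.resize_token_embeddings(len(new_tokenizer))

# "tie" in the model name suggests the input embeddings and LM head
# share one weight matrix.
model.config.tie_word_embeddings = True
model.tie_weights()

# Train only the (tied) embeddings / LM head; freeze the transformer blocks.
for name, param in model.named_parameters():
    param.requires_grad = "embed_tokens" in name or "lm_head" in name
```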
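
Loading the resulting checkpoint should follow the usual causal-LM pattern. The repository id below is inferred from this commit page and is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "rccmsu/llama2_7b_darulm_unigram_tie_2e_16_11_23"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # the card targets an fp16 base model
    device_map="auto",          # requires the accelerate package
)

# Sample a short Russian continuation from the adapted model.
inputs = tokenizer("Москва является", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```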