Josephgflowers committed on
Commit a9cd8b4
1 Parent(s): d12161e

Update README.md

Files changed (1):
  1. README.md +0 -22
README.md CHANGED
@@ -25,28 +25,6 @@ This models is trained for RAG, Summary, Function Calling and Tool usage. Traine
 See https://huggingface.co/Josephgflowers/TinyLlama-Cinder-Agent-Rag/blob/main/tinyllama_agent_cinder_txtai-rag.py
 For usage example with wiki rag.
 
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 12
-- eval_batch_size: 32
-- seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- num_epochs: 1.0
-- mixed_precision_training: Native AMP
-
-### Training results
-
-
-
-### Framework versions
-
-- Transformers 4.41.0.dev0
-- Pytorch 2.2.2+cu121
-- Datasets 2.19.1
-- Tokenizers 0.19.1
 
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__TinyLlama-Cinder-Agent-v1)
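For reference, the training hyperparameters this commit deletes can be expressed as a plain configuration dict whose keys mirror Hugging Face `TrainingArguments` field names. This is a minimal sketch, not the author's actual training script: the `per_device_*` and `adam_*` key names are an assumption mapping the README's `train_batch_size`/`eval_batch_size` and `optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08` entries, and `fp16` stands in for "Native AMP".

```python
# Sketch of the training configuration removed from the README.
# Values are taken verbatim from the deleted "Training hyperparameters"
# section; key names follow TrainingArguments conventions (an assumption).
training_config = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 12,   # README: train_batch_size
    "per_device_eval_batch_size": 32,    # README: eval_batch_size
    "seed": 42,
    "adam_beta1": 0.9,                   # README: Adam betas=(0.9, 0.999)
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 1.0,
    "fp16": True,                        # README: Native AMP mixed precision
}

# With these settings, Adam steps decay linearly from 5e-5 to 0
# over a single pass through the training data.
for key, value in training_config.items():
    print(f"{key}: {value}")
```

A dict like this can be splatted into `TrainingArguments(**training_config)` if the matching Transformers version (the README listed 4.41.0.dev0) is installed; the framework-version pins in the deleted section are what made the run reproducible.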
 