brooksideas committed
Commit e6c86c5
1 Parent(s): 532f5b4

Updated the Model card

Files changed (1):
1. README.md +25 -6
README.md CHANGED
@@ -17,20 +17,39 @@ This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2)
It achieves the following results on the evaluation set:
- Loss: 3.3924

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

+ ## Model Description
+
+ This language model is built on the GPT-2 architecture provided by OpenAI. The tokenizer used for preprocessing text data is OpenAI's tiktoken; for more details, see the [official GitHub repository](https://github.com/openai/tiktoken).
+
+ ### Tokenizer Overview
+
+ To interactively explore the behavior of the tiktoken tokenizer, you can use the [tiktoken interactive website](https://tiktokenizer.vercel.app/), which visualizes how the tokenizer segments input text into tokens.
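As a minimal illustrative sketch of the tokenizer behavior described above (assuming the `tiktoken` package is installed; token IDs shown are examples), the GPT-2 encoding can also be inspected programmatically:

```python
import tiktoken

# Load the byte-pair encoding used by the GPT-2 model family
enc = tiktoken.get_encoding("gpt2")

text = "Hello, world!"
tokens = enc.encode(text)
print(tokens)              # token IDs, e.g. [15496, 11, 995, 0]
print(enc.decode(tokens))  # round-trips back to "Hello, world!"
print(enc.n_vocab)         # GPT-2 vocabulary size: 50257
```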
+
+ ### Model Checkpoint
+
+ The model checkpoint used in this implementation is sourced from the OpenAI community and is based on the GPT-2 architecture. The specific checkpoint is available on the Hugging Face Model Hub: [openai-community/gpt2](https://huggingface.co/openai-community/gpt2).
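For illustration, a minimal sketch of loading this checkpoint, assuming the Hugging Face `transformers` library is installed (this is not the card's own training script):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "openai-community/gpt2"

# Download the base GPT-2 tokenizer and model weights from the Hub
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Sanity check: generate a short continuation from a prompt
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```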
+
+ ### Training Details
+
+ The model was trained for a total of 3 epochs, i.e., the entire training dataset was processed three times. Training for a fixed number of epochs controls the duration and scope of the model's learning process.
+
+
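The card states only the epoch count; as a hedged sketch, a 3-epoch fine-tuning run with the `transformers` `Trainer` API might be configured along these lines. Everything except `num_train_epochs=3` is a placeholder, including the dataset variables:

```python
from transformers import Trainer, TrainingArguments

# Placeholder configuration; only num_train_epochs=3 is stated in this card
training_args = TrainingArguments(
    output_dir="gpt2-finetuned",       # placeholder output directory
    num_train_epochs=3,                # full training set processed three times
    per_device_train_batch_size=8,     # placeholder value
)

# train_dataset and eval_dataset are assumed to be tokenized datasets
# (e.g. built with the `datasets` library); they are placeholders here.
trainer = Trainer(
    model=model,                       # the GPT-2 checkpoint loaded as above
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```

The `trainer.evaluate()` call in the evaluation snippet below assumes a `Trainer` object of this kind.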
## Training and evaluation data

- More information needed

- ## Training procedure

+
+ #### Evaluation Data
+
+ For evaluating the model's performance, the training script used an evaluation dataset.
+
+ #### Evaluation Results
+
+ After training, the model's performance was assessed on the evaluation dataset. The resulting perplexity, a common metric for language modeling tasks, was **29.74**:
+
+ ```python
+ import math
+
+ eval_results = trainer.evaluate()
+ print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
+ # Perplexity: 29.74
+ ```
### Training hyperparameters

The following hyperparameters were used during training: