bickett committed
Commit c7c5d78
1 Parent(s): bc8dfff

Update model card

Files changed (1): README.md +36 -2
README.md CHANGED
@@ -1,9 +1,43 @@
  ---
+ language: en
  tags:
  - autotrain
  - text-generation
+ - llm
+ - memes
+ library_name: transformers
+ library_version: [latest version at the time of training]
+ model_type: llama 2
  widget:
- - text: "I love AutoTrain because "
+ - text: "When you try to code without coffee, "
  ---

- # Model Trained Using AutoTrain
+ # Llama 2 Meme Generator
+
+ ## Model Description
+
+ This model is a fine-tuned version of Llama 2, tailored for generating meme captions. It captures the humor and tone of popular internet memes. Just provide a prompt or a meme context, and let the model generate a fitting caption!
+
+ ## Training Data
+
+ The model was trained on a diverse dataset of meme captions spanning internet trends, jokes, and pop culture references, giving it range from classic meme formats to contemporary internet humor.
+
+ ## Training Procedure
+
+ The model was fine-tuned with the `autotrain llm` command. Hyperparameters were tuned for meme generation, with care taken to avoid overfitting so that the model generalizes across meme contexts; a sketch of such a command is shown below.
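+
+ For reference, a run of this kind can be launched roughly as follows. This is a hypothetical sketch, not the exact command used for this model: flag spellings vary across `autotrain-advanced` versions, and the base model, project name, and data path below are placeholders.
+
+ ```bash
+ # Hypothetical sketch: verify flags against your installed
+ # autotrain-advanced version; names and paths are placeholders.
+ autotrain llm --train \
+   --model meta-llama/Llama-2-7b-hf \
+   --project-name meme-llama \
+   --data-path ./meme_captions \
+   --text-column text \
+   --lr 2e-4 \
+   --epochs 3 \
+   --use-peft
+ ```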
+
+ ## Usage
+
+ To generate a meme caption with this model, use the following code:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ # Load the tokenizer and model from the Hugging Face Hub
+ tokenizer = AutoTokenizer.from_pretrained("bickett/meme-llama")
+ model = AutoModelForCausalLM.from_pretrained("bickett/meme-llama")
+
+ input_text = "When you try to code without coffee"
+ input_ids = tokenizer.encode(input_text, return_tensors="pt")
+
+ # Cap the response length; tune max_new_tokens to taste
+ output = model.generate(input_ids, max_new_tokens=50)
+
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```
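+
+ Equivalently, the high-level `pipeline` API wraps the loading and generation steps above (a minimal sketch; the sampling settings are illustrative, not tuned recommendations):
+
+ ```python
+ from transformers import pipeline
+
+ # A text-generation pipeline bundles tokenizer and model loading
+ generator = pipeline("text-generation", model="bickett/meme-llama")
+
+ result = generator(
+     "When you try to code without coffee, ",
+     max_new_tokens=50,
+     do_sample=True,    # sample for more varied captions
+     temperature=0.8,   # illustrative value
+ )
+ print(result[0]["generated_text"])
+ ```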