---
license: apache-2.0
tags:
- unsloth
- trl
- sft
---

# DogeGPT Meme Coin 🐕🤖
The meme coin will be launched soon.
Join our socials to find out more (and invest early 🐕).
All other DogeGPTs are fake; check only the socials below for updates.
Share them and mention us on X (Twitter).





<p align="center">
  <!-- Twitter Icon -->
  <a href="https://x.com/doge_gpt1" target="_blank">
    <img src="https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Follow on Twitter">
  </a>
  
  <!-- YouTube Icon -->
  <a href="https://www.youtube.com/@dogegpt" target="_blank">
    <img src="https://img.shields.io/badge/YouTube-FF0000?style=for-the-badge&logo=youtube&logoColor=white" alt="Subscribe on YouTube">
  </a>
  
  <!-- Website Icon -->
  <a href="https://dogegpt.org/" target="_blank">
    <img src="https://img.shields.io/badge/Website-0A66C2?style=for-the-badge&logo=google-chrome&logoColor=white" alt="Visit Our Website">
  </a>
</p>


# DogeGPT1-1B 🐕🤖

![DogeGPT Logo](DogeGPT.jpg "DogeGPT Logo")


DogeGPT1-1B is an open-source **1.24B-parameter Large Language Model (LLM)** that brings together the fun of meme coins and the power of AI. Built on the **LLaMA architecture**, DogeGPT is tailored for conversational AI applications with a playful twist. Whether you're a meme coin enthusiast, developer, or AI explorer, DogeGPT is here to spark your creativity.


**3B- and 8B-parameter LLMs will be announced soon.**

---

## Model Overview 🚀

- **Model Name**: DogeGPT1-1B  
- **Architecture**: LLaMA  
- **Model Size**: 1.24B parameters  
- **Quantization Formats**: GGUF (2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit)  
- **License**: Apache 2.0  
- **Tags**: `PyTorch`, `LLaMA`, `TRL`, `GGUF`, `conversational`  
- **Downloads Last Month**: 115  

---

## Features 🌟

- **Conversational AI**: Perfect for building chatbots, virtual assistants, or meme-themed conversational models.  
- **Quantization Support**: Ships in efficient GGUF formats for deployment in resource-constrained environments (see the sketch after this list).  
- **Open Source**: Fully available under the permissive Apache 2.0 license.  
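
If the GGUF files are published alongside the PyTorch weights, a quantized build can be run on CPU with `llama-cpp-python`. This is a minimal sketch: the GGUF filename below (`DogeGPT1-1B.Q4_K_M.gguf`) is an assumption, so check the repository's file list for the actual names.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Download a quantized GGUF build from the Hub and load it locally.
# NOTE: the filename is hypothetical; use the actual file listed in the repo.
llm = Llama.from_pretrained(
    repo_id="Doge-GPT/DogeGPT1-1B",
    filename="DogeGPT1-1B.Q4_K_M.gguf",  # assumed 4-bit quantization file
    n_ctx=2048,  # context window size
)

out = llm("What is DogeGPT?", max_tokens=50)
print(out["choices"][0]["text"])
```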

---

## Getting Started 🛠️

### Installation

Install the required Python dependencies:

```bash
pip install transformers huggingface_hub
```
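
The `huggingface_hub` dependency can also pre-download the full repository, which is handy for offline or containerized deployments. A minimal sketch using the standard `snapshot_download` helper:

```python
from huggingface_hub import snapshot_download

# Download all model files once and reuse the cached local copy afterwards.
local_dir = snapshot_download(repo_id="Doge-GPT/DogeGPT1-1B")
print(f"Model files cached at: {local_dir}")
```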

### Usage Example

Here's how to load DogeGPT1-1B using `transformers`:


```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("Doge-GPT/DogeGPT1-1B")
tokenizer = AutoTokenizer.from_pretrained("Doge-GPT/DogeGPT1-1B")

# Tokenize a prompt and generate a completion
input_text = "What is DogeGPT?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
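
Since the card tags the model as conversational, the high-level `pipeline` API is another quick way to try it. A minimal sketch; the sampling parameters here are illustrative, not tuned recommendations:

```python
from transformers import pipeline

# Build a text-generation pipeline around the same checkpoint.
generator = pipeline("text-generation", model="Doge-GPT/DogeGPT1-1B")

# Sampling settings are illustrative defaults, not tuned values.
result = generator(
    "Explain meme coins like a friendly shiba inu:",
    max_new_tokens=60,
    do_sample=True,
    temperature=0.8,
)
print(result[0]["generated_text"])
```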