Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Nidum-Limitless-Gemma-2B - GGUF
- Model creator: https://huggingface.co/nidum/
- Original model: https://huggingface.co/nidum/Nidum-Limitless-Gemma-2B/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Nidum-Limitless-Gemma-2B.Q2_K.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q2_K.gguf) | Q2_K | 1.08GB |
| [Nidum-Limitless-Gemma-2B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.IQ3_XS.gguf) | IQ3_XS | 1.16GB |
| [Nidum-Limitless-Gemma-2B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.IQ3_S.gguf) | IQ3_S | 1.2GB |
| [Nidum-Limitless-Gemma-2B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [Nidum-Limitless-Gemma-2B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.IQ3_M.gguf) | IQ3_M | 1.22GB |
| [Nidum-Limitless-Gemma-2B.Q3_K.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q3_K.gguf) | Q3_K | 1.29GB |
| [Nidum-Limitless-Gemma-2B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [Nidum-Limitless-Gemma-2B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [Nidum-Limitless-Gemma-2B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [Nidum-Limitless-Gemma-2B.Q4_0.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q4_0.gguf) | Q4_0 | 1.44GB |
| [Nidum-Limitless-Gemma-2B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [Nidum-Limitless-Gemma-2B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [Nidum-Limitless-Gemma-2B.Q4_K.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q4_K.gguf) | Q4_K | 1.52GB |
| [Nidum-Limitless-Gemma-2B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [Nidum-Limitless-Gemma-2B.Q4_1.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q4_1.gguf) | Q4_1 | 1.56GB |
| [Nidum-Limitless-Gemma-2B.Q5_0.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q5_0.gguf) | Q5_0 | 1.68GB |
| [Nidum-Limitless-Gemma-2B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [Nidum-Limitless-Gemma-2B.Q5_K.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q5_K.gguf) | Q5_K | 1.71GB |
| [Nidum-Limitless-Gemma-2B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [Nidum-Limitless-Gemma-2B.Q5_1.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q5_1.gguf) | Q5_1 | 1.79GB |
| [Nidum-Limitless-Gemma-2B.Q6_K.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q6_K.gguf) | Q6_K | 1.92GB |
| [Nidum-Limitless-Gemma-2B.Q8_0.gguf](https://huggingface.co/RichardErkhov/nidum_-_Nidum-Limitless-Gemma-2B-gguf/blob/main/Nidum-Limitless-Gemma-2B.Q8_0.gguf) | Q8_0 | 2.49GB |
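
As a rough sanity check on the sizes above, a GGUF file's size is approximately the model's parameter count times the bits per weight, divided by eight. The sketch below inverts that relationship; the ~2.5B parameter count for Gemma-2B is an assumption, and the effective bits per weight can exceed a quant's nominal rate because embeddings and output layers are often stored at higher precision.

```python
def approx_bits_per_weight(file_size_gb: float, n_params: float = 2.5e9) -> float:
    """Rough bits-per-weight implied by a quantized file size.

    n_params defaults to ~2.5e9, an assumed approximate parameter
    count for Gemma-2B; check the original model card for the exact figure.
    """
    return file_size_gb * 1e9 * 8 / n_params

# Q4_K_M at 1.52 GB comes out to roughly 4.9 bits per weight,
# close to that quant's nominal rate; Q8_0 at 2.49 GB is close to 8.
print(round(approx_bits_per_weight(1.52), 1))
print(round(approx_bits_per_weight(2.49), 1))
```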




Original model description:
---
license: apache-2.0
tags:
- legal
- chemistry
- medical
- text-generation-inference
- art
- finance
pipeline_tag: text-generation
---
# Nidum-Limitless-Gemma-2B LLM

Welcome to the repository for Nidum-Limitless-Gemma-2B, an advanced language model that provides unrestricted and versatile responses across a wide range of topics. Unlike conventional models, Nidum-Limitless-Gemma-2B is designed to handle any type of question and deliver comprehensive answers without content restrictions.

## Key Features:
- **Unrestricted Responses:** Address any query with detailed, unrestricted responses, providing a broad spectrum of information and insights.
- **Versatility:** Capable of engaging with a diverse range of topics, from complex scientific questions to casual conversation.
- **Advanced Understanding:** Leverages a vast knowledge base to deliver contextually relevant and accurate outputs across various domains.
- **Customizability:** Adaptable to specific user needs and preferences for different types of interactions.

## Use Cases:
- Open-Ended Q&A
- Creative Writing and Ideation
- Research Assistance
- Educational and Informational Queries
- Casual Conversations and Entertainment

## How to Use:

To get started with Nidum-Limitless-Gemma-2B, you can use the following sample code:

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="nidum/Nidum-Limitless-Gemma-2B",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" on Apple Silicon, or "cpu" if no GPU is available
)

messages = [
    {"role": "user", "content": "who are you"},
]

outputs = pipe(messages, max_new_tokens=256)
# The chat pipeline returns the full conversation; the last message is the model's reply.
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
```

## Release Date:
Nidum-Limitless-Gemma-2B is now officially available. Explore its capabilities and experience the freedom of unrestricted responses.

## Contributing:
We welcome contributions to enhance the model or expand its functionalities. Details on how to contribute will be available in the coming updates.

## Quantized Model Versions

To accommodate different hardware configurations and performance needs, Nidum-Limitless-Gemma-2B-GGUF is available in multiple quantized versions:

| Model Version                                  | Description                                           |
|------------------------------------------------|-------------------------------------------------------|
| **Nidum-Limitless-Gemma-2B-Q2_K.gguf**         | Optimized for minimal memory usage with lower precision. Suitable for resource-constrained environments. |
| **Nidum-Limitless-Gemma-2B-Q4_K_M.gguf**       | Balances performance and precision, offering faster inference with moderate memory usage. |
| **Nidum-Limitless-Gemma-2B-Q8_0.gguf**         | Provides higher precision with increased memory usage, suitable for tasks requiring more accuracy. |
| **Nidum-Limitless-Gemma-2B-F16.gguf**          | Full 16-bit floating point precision for maximum accuracy, ideal for high-end GPUs. |

The quantized files are available here: https://huggingface.co/nidum/Nidum-Limitless-Gemma-2B-GGUF

## Contact:
For any inquiries or further information, please contact us at **info@nidum.ai**.

---

Dive into limitless possibilities with Nidum-Limitless-Gemma-2B!

Special thanks to @cognitivecomputations for inspiring us and for scouting the best datasets we could round up to make a rockstar model for you.