---
license: gemma
base_model: google/gemma-2-2b-it
tags:
- gemma2
- instruction-tuning
- nirf
- india
pipeline_tag: text-generation
---

# Gemma-2B (IT) — NIRF Lookup 2025 (Merged FP16)

Base model: google/gemma-2-2b-it
This repository contains the full merged weights, with the LoRA adapter baked into the base model.
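
For context, a merged checkpoint like this one can be produced along the following lines with PEFT's `merge_and_unload` (a sketch, not the exact script used; the adapter path and output directory are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, apply the trained LoRA adapter, and bake it into the weights.
base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter").merge_and_unload()
merged.save_pretrained("gemma-2b-it-nirf-lookup-2025-merged")
```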

## Intended use

Short factual lookup answers about NIRF 2025 (Indian institutes).

## How to use

Load the model from this repo id with the Transformers AutoTokenizer and AutoModelForCausalLM.
Use bfloat16 (for example on an NVIDIA L4). Provide an instruction (and optional context), then generate, as in the sketch below.
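
A minimal loading-and-generation sketch (the repo id and question are placeholders; substitute this repository's actual id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/gemma-2b-it-nirf-lookup-2025"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # bf16 as recommended above
    device_map="auto",
)

# Gemma-2-it uses a chat template; a single user turn carries the instruction.
messages = [{"role": "user", "content": "Which institute ranked first in NIRF 2025 (Overall)?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```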

## Training summary

QLoRA (4-bit) fine-tuning of Gemma-2-2b-it. LoRA r=16, alpha=64, dropout=0.1.
Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj.
Trained in bf16 on an NVIDIA L4. Data: 100 NIRF 2025 lookup samples.
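
A configuration sketch matching the hyperparameters above (the exact training script, dataset formatting, and trainer settings are not part of this card; the NF4 quantization settings are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization for QLoRA (NF4 + bf16 compute are assumed, not stated in the card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA hyperparameters as stated in the training summary
lora_config = LoraConfig(
    r=16,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```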

## License & notice

This model is a Model Derivative of google/gemma-2-2b-it and is distributed under Google’s Gemma Terms of Use.
See the NOTICE file in this repo.