summerstars committed
Commit 4a35e78 · verified · 1 Parent(s): f5f8ec8

Update README.md

Files changed (1): README.md (+70 −3)
README.md CHANGED
@@ -1,3 +1,70 @@
- ---
- license: apache-2.0
- ---
The updated README.md:

---
license: apache-2.0
base_model:
- HuggingFaceTB/SmolLM2-360M-Instruct
language:
- en
pipeline_tag: text-generation
tags:
- safetensors
- onnx
- transformers
---

# 🌞 SolaraV2 — `summerstars/SolaraV2`

> **📅 Version 0517 (released 2025-05-17)**
> This is the 0517 release of SolaraV2.

## ✨ Created by a High School Student | Built on Google Colab (T4 GPU)
### 🌸 高校生によって開発 | Google Colab(T4 GPU)で作成

**SolaraV2** is an upgraded version of the original **Solara** — a lightweight, instruction-tuned language model based on [`HuggingFaceTB/SmolLM2-360M-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct).
This version was trained on a **larger and more diverse dataset**, including **basic math-related samples**, improving its ability to handle both casual conversation and educational tasks.
All development was conducted by a high school student using **Google Colab** and a **T4 GPU**.

**SolaraV2(ソララV2)** は、オリジナルの **Solara** モデルを改良した軽量の言語モデルで、[`HuggingFaceTB/SmolLM2-360M-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct) をベースにしています。
本バージョンでは、**より大規模かつ多様なデータセット**(数学系データを含む)で学習を行い、日常会話から教育的な質問まで幅広く対応できるようになりました。
開発はすべて、高校生が **Google Colab(T4 GPU)** 上で行いました。

---

## 📌 Model Details | モデル詳細

| Feature / 特徴 | Description / 説明 |
|--------------------|------------------|
| **Base Model** | `HuggingFaceTB/SmolLM2-360M-Instruct` |
| **Parameters** | 360M |
| **Architecture** | Decoder-only Transformer |
| **Language** | English / 英語 |
| **License** | Apache 2.0 |
| **Training Additions** | Basic math, factual Q&A / 基本数学・事実ベースのデータ追加 |

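As a quick sanity check of the figures in this table, the parameter count can be computed after loading the model. This is a minimal sketch, assuming the Hub repo id `summerstars/SolaraV2` shown in the card title.

```python
from transformers import AutoModelForCausalLM

# Assumed repo id, taken from the card title; adjust if the model lives elsewhere
model = AutoModelForCausalLM.from_pretrained("summerstars/SolaraV2")

# Total parameter count; a SmolLM2-360M derivative should report roughly 360M
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.1f}M parameters")
```
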
---

## 🚀 Use Cases | 主な用途

- 🤖 Lightweight chatbots / 軽量チャットボット
- 📱 Inference on CPUs or mobile devices (see the sketch after this list) / CPUやモバイル端末での推論
- 📚 Educational or hobbyist projects / 教育・趣味向けプロジェクト
- 🧾 Instruction-following tasks / 指示応答タスク
- ➗ Basic math questions / 基本的な数学問題への対応

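For the CPU-only use case mentioned above, a `transformers` text-generation pipeline is usually the shortest path. This is a minimal sketch: the repo id `summerstars/SolaraV2` is taken from the card title, and `device=-1` simply pins execution to the CPU.

```python
from transformers import pipeline

# Text-generation pipeline on CPU (device=-1); repo id assumed from the card title
generator = pipeline("text-generation", model="summerstars/SolaraV2", device=-1)

# Ask a short instruction-style question / 短い指示形式の質問
result = generator("Explain what a prime number is.", max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```
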
---

## 🛠️ How to Use | 使用方法

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer from the Hugging Face Hub
model_name = "summerstars/SolaraV2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Run a simple prompt / 簡単なプロンプトを実行
prompt = "What is 15 * 4?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Print the result / 結果を表示
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
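
Because the base model is instruction-tuned for chat, wrapping the prompt with the tokenizer's chat template typically produces cleaner answers. The sketch below reuses the `tokenizer` and `model` loaded above and assumes the tokenizer ships a chat template (the SmolLM2 Instruct base does); the system prompt text is an illustrative assumption.

```python
# Build a chat-formatted prompt using the tokenizer's chat template
messages = [
    {"role": "system", "content": "You are Solara, a helpful assistant."},  # assumed system prompt
    {"role": "user", "content": "What is 15 * 4?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(input_ids, max_new_tokens=64)

# Decode only the newly generated tokens / 新しく生成された部分のみをデコード
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```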