muooon committed · verified
Commit 8aec887 · 1 Parent(s): dbe928e

Upload 16 files
.gitattributes CHANGED
@@ -36,3 +36,16 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 emonavi-test00.png filter=lfs diff=lfs merge=lfs -text
 emonavi-test01.png filter=lfs diff=lfs merge=lfs -text
 emonavi-test02.png filter=lfs diff=lfs merge=lfs -text
+graph/emonavi-test00.png filter=lfs diff=lfs merge=lfs -text
+graph/emonavi-test01.png filter=lfs diff=lfs merge=lfs -text
+graph/emonavi-test02.png filter=lfs diff=lfs merge=lfs -text
+graph/rastrigin_Adam.png filter=lfs diff=lfs merge=lfs -text
+graph/rastrigin_AdamW.png filter=lfs diff=lfs merge=lfs -text
+graph/rastrigin_EmoFact.png filter=lfs diff=lfs merge=lfs -text
+graph/rastrigin_EmoLynx.png filter=lfs diff=lfs merge=lfs -text
+graph/rastrigin_EmoNavi.png filter=lfs diff=lfs merge=lfs -text
+graph/rosenbrock_Adam.png filter=lfs diff=lfs merge=lfs -text
+graph/rosenbrock_AdamW.png filter=lfs diff=lfs merge=lfs -text
+graph/rosenbrock_EmoFact.png filter=lfs diff=lfs merge=lfs -text
+graph/rosenbrock_EmoLynx.png filter=lfs diff=lfs merge=lfs -text
+graph/rosenbrock_EmoNavi.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
---
license: apache-2.0
language:
- en
- ja
model_type: optimizer
tags:
- optimizer
- adaptive-optimizer
- emotion-ai
- shadow-learning
- deep-learning
- meta-learning
- adaptive-algorithms
- stability-analysis
---

**自動収束・自己制御・自律型 オプティマイザです**
**An auto-converging, self-regulating, autonomous optimizer**

Gemini に見せていろいろ聞いてみました
[Geminiに聞いてみた](https://huggingface.co/muooon/EmoNAVI/blob/main/Hug-Gemini-analysis(JPN).md)
[Geminiに聞いてみた-02(日本語のみ)](https://huggingface.co/muooon/EmoNAVI/blob/main/emonavi-Gemini-analysis(2)(JPN).txt)

I showed it to Gemini and asked it a few questions.
02 is in Japanese only; please translate it yourself.
[asked Gemini](https://huggingface.co/muooon/EmoNAVI/blob/main/Hug-Gemini-analysis(ENG).md)
|★| 疑似DDPシミュレーションを試したい方(for those who want to try the pseudo-DDP simulation) →
[DDP-TEST](https://huggingface.co/muooon/EmoNAVI/blob/main/ddp-test.zip)

|★| EmoFACT 公開(250716) NAVIに比べ、約1GB節約(SDXL) 感情機構は同じです
|★| EmoFACT released (250716): saves about 1 GB of VRAM (SDXL) compared to NAVI. The emotion mechanism is the same.

|★| EmoLYNX 公開(250718) 探索範囲を広く持ちます 感情機構は同じです
|★| EmoLYNX released (250718): offers a wider exploration range, while its emotion mechanism remains the same.

# 主題:新世代optimizer、EmoNAVIによる変革と感情学習の成果
## Title: A New Generation Optimizer — The Innovations and Outcomes of Emotional Learning with EmoNAVI
## 副題:過去値不要で現在値から再開できる自動収束・自己制御・自律型軽量最適器の解説
### Subtitle: A Lightweight, Self-Regulating, Autonomous Optimizer That Automatically Converges and Resumes from the Present Without Relying on Past Values
## テーマ:既存のoptimizerにないものをつくる、出来たのはニューロンスパイクの再発明でした。
### Theme: Creating What Existing Optimizers Lack — A Reinvention of Neuronal Spiking


## 序論:
現在主流のoptimizerはさまざまに改良され簡易化を進めています、しかし依然として、
学習再開、スケジューリング、学習状態の記録や復元、等について調整の難しさや煩雑さは存在しています、
面倒なパラメータに依存せず、それらを解決する新しいアプローチを見つけたのでここで紹介します。
## Introduction
Mainstream optimizers have undergone significant improvements and simplifications in recent years.
However, they still face practical challenges in areas such as resuming training, scheduling updates, and managing the recording and restoration of learning states.
These issues often require tedious parameter adjustments and ad hoc workarounds.
In this paper, we introduce a new approach that addresses these problems without relying on cumbersome parameter configurations.

## 本論:
今回ここで紹介するのは新世代のoptimizerです、
EMA的平滑化の概念を下地にし、独自に構築した感情的"EMA&スカラー"を中心にした"感情機構"という新しい仕組みを実現しました、
この"感情機構"は、EMA的発想を再解釈・独自拡張することで得られた新しい機構です。
EmoNAVIの独立性と革新性を紹介します。
## Main Section
In this paper, we present a new generation of optimizers.
Built upon the foundation of EMA (Exponential Moving Average) smoothing, we have developed a novel mechanism called the "Emotion Mechanism," which centers on a unique combination of EMA and scalar dynamics.
This mechanism was created by reinterpreting and independently extending the conventional EMA concept.
Here, we introduce EmoNAVI—an optimizer characterized by its innovation and independence.

最初に"感情機構"と名付けた経緯と理由を記します。
生物のもつ「感情」とは、知覚と記憶の差異に基づく行動のトリガです、同様にEmoNAVIも現在と過去の差分に基づき学習の"行動"を制御する仕組みとして設計されています。
そして"感情機構"と名付けた理由のもうひとつは、この一連の動作がまるでニューロンスパイクのような動作をするからです。
この機構"感情機構"の動作を明快にした読み物、本稿末尾に記すリンク先の擬人化を読むことで簡単にご理解頂けると思います。

First, let us explain the background and reasoning behind the term “Emotion Mechanism.”
In biological systems, emotions are often understood as triggers for action based on discrepancies between perception and memory.
EmoNAVI was similarly designed to control its learning “behavior” by responding to differences between the present and the past.
Another reason we chose the term “Emotion Mechanism” is that its operation closely resembles neuronal spiking behavior.
For a more intuitive understanding of how this mechanism works, we encourage you to read the personification linked at the end of this article.

次に、"感情機構"の構成を記します、
感情機構とは、2つのEMA、スカラー、Shadow、により構成されます。

Next, we outline the structure of the “Emotion Mechanism.”
This mechanism consists of two EMAs, a scalar value, and a shadow component.

まず2つのEMAによる"感情EMA"について説明します、
2つのEMAで構成します、短期型と長期型です、この2つのEMAはLossを監視し判断材料を得ます、
1つめ、短期型EMAは瞬間的なシグナル(緊張)を受け持ちます 2つめ、長期型EMAは平均した過去のシグナル(安静)を受け持ちます、
この2つのEMAは次に紹介する"感情スカラー"へそれぞれの持つ判断材料を渡します

First, we describe the "Emotional EMA," which consists of two components: a short-term EMA and a long-term EMA.
These two EMAs continuously monitor the loss value and serve as the basis for subsequent decision-making.
The short-term EMA captures rapid, momentary signals (interpreted as “tension”), while the long-term EMA reflects more averaged, historical trends (“calm”).
Both EMAs pass their respective signals to the "Emotion Scalar," which will be introduced in the next section.
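
The two loss-watching EMAs described above can be sketched in a few lines. This is an illustrative reconstruction, not EmoNAVI's actual code: the decay weights 0.3 and 0.01 are taken from the diagram later in this document, and the class name is invented.

```python
class EmotionalEMA:
    """Illustrative sketch of the short/long loss EMAs described above."""

    def __init__(self, short_weight=0.3, long_weight=0.01):
        # Weights follow the values shown in this document's diagram.
        self.short_weight = short_weight
        self.long_weight = long_weight
        self.short = None  # reactive "tension" signal
        self.long = None   # slow-moving "calm" signal

    def update(self, loss):
        if self.short is None:
            # The first observation seeds both averages.
            self.short = self.long = loss
        else:
            self.short += self.short_weight * (loss - self.short)
            self.long += self.long_weight * (loss - self.long)
        return self.short, self.long


ema = EmotionalEMA()
for loss in [1.0, 0.8, 0.9, 0.5]:
    short, long_ = ema.update(loss)
# After a falling loss, the short EMA sits below the long EMA;
# that gap is the raw material the Emotion Scalar works with.
```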

次に、"感情スカラー"について説明します、
前述の"感情EMA"からの信号をスカラー値に変換します、スカラー値の変化は、これら2つのEMAの差分により常に動的変化を続けます、
"感情スカラー"はoptimizerにより書き換えた学習結果の是非を判定し、
"スカラー値が一定閾値を超えたときのみ"次に紹介するShadowの配合を決めます

Next, we introduce the "Emotion Scalar."
It converts the signals from the previously described Emotional EMA into a scalar value, which continuously changes in response to the difference between the short-term and long-term EMA.
This scalar dynamically evaluates whether the learning update performed by the optimizer should be considered appropriate.
Only when the scalar exceeds a certain threshold does it trigger the next step: determining how much of the "Shadow" should be blended into the learning parameters.
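
The notes later in this document give the scalar formula explicitly as tanh(5 × (short − long)). A minimal sketch (the function name is an assumption):

```python
import math

def emotion_scalar(short: float, long: float) -> float:
    # tanh squashes the EMA gap into (-1, 1); the factor 5 and the
    # formula itself come from this document's notes section.
    return math.tanh(5.0 * (short - long))

# A large tension/calm gap saturates the scalar past the 0.6 threshold,
# while a near-zero gap stays well under the 0.3 blending threshold.
spike = emotion_scalar(0.9, 0.5)   # tanh(2.0), close to 1
quiet = emotion_scalar(0.51, 0.5)  # tanh(0.05), close to 0
```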

次に、Shadowについて説明します、
Shadowは学習開始直後にShadowとして保存され維持されます、このShadowは"過去の穏やかな状態"の記憶です、この情報は感情機構に追従しながらゆっくりと変化し続けます、
そして"感情スカラー"に応じ決められたratioで学習結果にブレンドとして反映されます、このブレンドの配合率も感情機構により動的に変化し続けます、

Next, we describe the "Shadow."
At the beginning of training, a copy of the current parameters is saved and maintained as the Shadow.
This Shadow represents a memory of past calm states, and it evolves slowly over time, following the guidance of the Emotion Mechanism.
When the Emotion Scalar exceeds a certain threshold, a dynamic blend ratio is computed.
This ratio determines how much of the Shadow is mixed into the current parameters.
The blend ratio itself is also dynamically adjusted by the Emotion Mechanism in response to ongoing learning behavior.
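
The Shadow logic above amounts to a snapshot plus an in-place convex blend. A minimal list-based sketch; the names and data layout are assumptions for illustration, not EmoNAVI's implementation:

```python
def init_shadow(params):
    # Snapshot taken near the start of training: the remembered
    # "past calm state".
    return [list(p) for p in params]

def blend_with_shadow(params, shadow, ratio):
    # p <- (1 - ratio) * p + ratio * shadow: a partial pull toward the
    # remembered state. Nothing is fully overwritten, so the underlying
    # optimizer's progress is preserved.
    for p, s in zip(params, shadow):
        for i in range(len(p)):
            p[i] = (1.0 - ratio) * p[i] + ratio * s[i]

params = [[2.0, -1.0]]
shadow = init_shadow(params)
params[0][0] = 4.0            # the optimizer moved a weight sharply
blend_with_shadow(params, shadow, 0.5)
# params[0][0] is now 3.0: halfway back toward the calmer snapshot
```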

ここまで"感情機構"の構成と役割りを説明しました、続いて"感情機構"の動作機序を見ていきましょう。
まずoptimizerの学習結果が記録されます、この時"感情機構"は緊張と安静の差分情報で書き換えの是非を判定します、
この判定により、過度の学習と判断した場合は、過去の適切な状態をブレンドすることでノイズや暴走を抑制します、
適切な学習と判断した場合は、過去をブレンドしない選択をします、これをstep毎に行います、

Now that we have explained the structure and role of the Emotion Mechanism, let us examine how it operates.
At each training step, the optimizer's updated parameters are recorded.
The Emotion Mechanism then evaluates whether these updates are appropriate, based on the difference between short-term “tension” and long-term “calm” signals.
If the mechanism determines that the update reflects excessive learning, it suppresses potential noise or instability by blending in a suitable portion of the past stable state (Shadow).
Conversely, if the update is deemed appropriate, the mechanism chooses not to apply blending.
This evaluation and adjustment are performed dynamically at each training step.
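
Putting the pieces together, one post-update control step as described above might look like the following sketch. The 0.3/0.6 thresholds and the large blend ratio (0.7) come from the notes later in this document; the mid-band ratio and all names are assumptions, and the real optimizer step is left out.

```python
import math

def emotion_step(loss, state, params, shadow):
    """One post-update control step, as described above (illustrative)."""
    # 1. Track the loss with the short ("tension") and long ("calm") EMAs.
    state["short"] += 0.3 * (loss - state["short"])
    state["long"] += 0.01 * (loss - state["long"])
    # 2. Squash the EMA gap into the emotion scalar.
    scalar = math.tanh(5.0 * (state["short"] - state["long"]))
    # 3. Blend toward the shadow only when the scalar spikes past threshold.
    if abs(scalar) > 0.3:
        ratio = 0.7 if abs(scalar) > 0.6 else 0.3  # mid ratio is assumed
        for i, (p, s) in enumerate(zip(params, shadow)):
            params[i] = (1.0 - ratio) * p + ratio * s
    return scalar

state = {"short": 1.0, "long": 0.2}   # short-term loss far above baseline
params, shadow = [1.0], [0.0]
scalar = emotion_step(1.0, state, params, shadow)
# The spike exceeds 0.6, so a strong blend (ratio 0.7) pulls the
# parameter most of the way back toward its shadow value.
```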

さらに、この判定では"信頼度"の評価をします、"感情スカラー"が一時的に大きく振れるだけでは不十分であり「この変化が本当に意味のあるものかどうか」を見極めて混合の是非を判断します。
そのため、学習の**序盤では長期の安静シグナルの蓄積が少なく信頼に値しないため混合が発動しづらく**、**終盤では短期の緊張シグナルが収束しスカラー自体が閾値に届かず動作しません**。
(学習の序盤では判定基準の過去シグナルが少ないため動作しませんし、終盤では瞬間シグナルが少ないため動作しません)
このように、EmoNAVIの"感情機構"は、単なる閾値反応ではなく「揺らぎに対する信頼ある変化のみを察知して反応する」慎重な意思決定を行います。

In addition, this decision-making process includes an evaluation of "reliability."
It is not sufficient for the Emotion Scalar to simply spike temporarily; the mechanism assesses whether the fluctuation truly represents a meaningful change before deciding whether blending should occur.
As a result, in the **early stages of learning**, blending is unlikely to be triggered because the long-term “calm” signal has not yet accumulated enough history to be trustworthy.
In the **later stages**, on the other hand, the short-term “tension” signal tends to converge, and the scalar itself fails to exceed the threshold—thus the mechanism remains inactive.
(In short: the mechanism tends not to activate in the early stages due to insufficient past signal for evaluation, and in the later stages due to lack of strong instantaneous signal.)
In this way, EmoNAVI’s Emotion Mechanism does not respond merely to raw thresholds, but instead performs cautious decision-making—reacting only to fluctuations that it has learned to trust.

この一連の動作により学習時の過敏な反応を弛緩し不要なノイズ等を覚えないように制御します。
つまりoptimizer本来の学習率やベクトルを直接的に制御せず、感情機構の変化に応じ安定したパラメータになるよう後から調整する、
こういう流れになります。すべてを書き戻さずあくまで配合率に応じてブレンドするので学習の更新は止まらず進行は維持されます。

This series of actions helps relax hypersensitive reactions during learning and prevents the optimizer from overfitting to unnecessary noise.
Rather than directly manipulating the optimizer’s learning rate or update vectors, the system instead applies corrective blending afterward—adapting parameters in response to changes detected by the Emotion Mechanism.
Because it blends adjustments based on a calculated ratio rather than fully overwriting parameter values, the learning process continues smoothly without interruption.

### 感情機構の動作とスカラー変遷(学習フェーズ別の結果的挙動)

| フェーズ | 状況(Loss変化) | EMAの挙動 | スカラーの変動傾向 | Shadow混合の実動作 | 感情機構としての意味ある挙動 |
|----------|-----------------------|------------------------------------|--------------------------|--------------------------|--------------------------------------------|
| 序盤 | 不安定・高め | Shortは鋭敏、Longは未成熟 | 大きく変動することもある | ほとんど発動しない | 判定に十分な履歴がなく、実質的に動作不可 |
| 中盤 | 徐々に収束傾向 | 両EMAが意味ある差分を持つようになる | 適度な振幅で安定推移 | 条件付きで発動する | 状態に応じてブレンド補正が有効に機能 |
| 終盤 | 収束・微振動 | Short ≒ Long(差分がほぼ消失) | 小さく収束 | 発動しなくなる | 静けさの合図:should_stop 条件が整う |

備考:
- スカラー値は常に tanh(5 * (short - long)) で生成されます
- 閾値:abs(scalar) > 0.3 で配合が始まり、> 0.6 で大きな混合比率(0.7以上)に
- Shadow混合はパラメータそのものを書き戻すのではなく、部分的に配合して“追従”させる設計です
- 感情スカラーの減衰=学習の「静穏化」→ 終盤に向けて should_stop の発火条件が整います

### Emotional Mechanism Behavior and Scalar Transitions (Outcome-Based Behavior by Learning Phase)

| Phase | Loss Characteristics | EMA Behavior | Scalar Fluctuation Pattern | Actual Shadow Blending | Meaningful Behavior of Emotion Mechanism |
|-----------|----------------------------|-------------------------------------------|------------------------------------|-------------------------------|-------------------------------------------------------------------|
| Early | Unstable, High | Short is reactive; Long is still immature | May fluctuate sharply | Rarely triggered | Lacks sufficient history for decision-making; effectively inactive |
| Middle | Gradual Convergence | EMA pair begins forming meaningful gaps | Moderate oscillation, relatively stable | Conditionally triggered | Adaptive blending functions effectively based on state |
| Late | Converged, Micro-vibration | Short ≈ Long (gap nearly vanishes) | Narrow convergence | No longer triggered | Sign of stability; ready to trigger `should_stop` |

Notes:
- The scalar value is always computed as tanh(5 × (short - long))
- Thresholds:
  - If |scalar| > 0.3, blending is initiated
  - If |scalar| > 0.6, the blending ratio becomes large (≥ 0.7)
- Shadow blending does not overwrite parameters but applies partial integration for gradual alignment
- Scalar decay corresponds to learning "quieting," preparing for the should_stop condition in the final phase
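
The threshold rules in the notes above can be read as a small piecewise mapping from scalar to blend ratio. A sketch under the same caveat as before: the ratio used between the two thresholds is not specified in this document, so 0.3 is a placeholder.

```python
import math

def blend_ratio(scalar: float) -> float:
    a = abs(scalar)
    if a > 0.6:
        return 0.7   # "large" blend per the notes above (>= 0.7)
    if a > 0.3:
        return 0.3   # assumed moderate blend between the two thresholds
    return 0.0       # below threshold: no blending at all

# Late phase: short and long EMAs nearly coincide, the scalar collapses,
# and blending stops. This quiescence is the state that prepares the
# should_stop condition.
late_scalar = math.tanh(5.0 * (0.501 - 0.500))
```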

## 成果:
前述の感情機構の調整により、過剰な反応を抑制しノイズ耐性を上げることで、ベクトルの乱れ等も抑え進行方向を正しい向きに調整します、
正しいベクトルで進むことで学習は安定し収束へと最短で向かいます、感情機構による働きは学習後半のノイズ等を修正する仕上げを早くスムーズに完了できます。
また学習率や勾配やさまざまなパラメーターを保持せずに"今"を観察するだけで更新され続けることで、
途中終了、収束後の再学習、積層学習、等のときも現在値のみで学習継続を可能とします、
これは既存のoptimizerのような過去値を保存する手間を省きつつも新しく得られた利点でもあります。
## Results
The adjustments introduced by the Emotion Mechanism suppress excessive reactions and enhance noise tolerance, thereby reducing vector fluctuations and helping align the learning direction more accurately.
By following the correct vector, learning proceeds more stably and reaches convergence in minimal time.
The role of the Emotion Mechanism becomes especially apparent in the latter stages of training, where it effectively and smoothly corrects residual noise and instability.
Moreover, since the optimizer continuously updates its parameters by observing only the current state—without retaining learning rates, gradients, or other historical parameters—it supports learning continuation in scenarios such as mid-training interruptions, retraining after convergence, and stacked learning.
This capability not only eliminates the need to store past values like traditional optimizers but also introduces a new level of flexibility and simplicity.
188
+
+ ## 結論:
+ 生物のもつニューロンが一定の閾値を超えて初めて信号を発火させるように、EmoNAVIでも"感情振幅"を検出し行動(shadow混合)を起こします。
+ 前述のとおり"感情機構"は一定閾値の超過時のみ動作します。これはまさにニューロンスパイク的な動きといえるのではないでしょうか。
+ EmoNAVIの持つ"感情機構"は、そうした生物的反応に似ており、技術的な制御と生理的直感の融合点だろうと思います。
+ ## Conclusion
+ Just as biological neurons fire only when a certain threshold is exceeded, EmoNAVI detects "emotional amplitude" and triggers an action—specifically, shadow blending.
+ As described earlier, the Emotion Mechanism activates only when this amplitude crosses a predefined threshold.
+ This behavior closely resembles neuronal spiking and can be seen as a biologically inspired response.
+ We believe that EmoNAVI's Emotion Mechanism represents a unique fusion of technical control and physiological intuition—bringing together algorithmic design and life-like reactivity.
+
+ ## 展開:
+ この"感情機構"の仕組みはVAE等を含むoptimizer以外にも簡単に応用可能だろうと思います。
+ それらの発展に少しでも寄与することができれば、AIとの未来を想像して、これほど嬉しいことはありません。
+ ぜひこの"感情機構"を応用しAIの発展への道筋を共に歩んでください。
+ ## Expansion
+ The Emotion Mechanism described here is highly adaptable and can be easily applied beyond optimizers—including use cases such as variational autoencoders (VAEs) and other architectures.
+ If this approach can contribute, even in a small way, to the advancement of such systems, we would be honored to be part of imagining a future together with AI.
+ We warmly invite you to explore the application of this Emotion Mechanism and walk alongside us on the path toward advancing intelligent systems.
+
+ ## 技術:
+ EMAベースのスカラー判断とshadow混合の構造
+ ## Technology
+ Structure of EMA-Based Scalar Evaluation and Shadow Blending
+ ```
+                 +------------+
+                 |  Loss(t)   |
+                 +-----+------+
+                       |
+           +-----------+-----------+
+           |                       |
+ +---------▼---------+   +---------▼---------+
+ |     Short EMA     |   |     Long EMA      |
+ |  (weight = 0.3)   |   |  (weight = 0.01)  |
+ +---------+---------+   +---------+---------+
+           |                       |
+           +-----------+-----------+
+                       |
+           +-----------▼-----------+
+           |  diff = short - long  |
+           +-----------+-----------+
+                       |
+           +-----------▼-----------+
+           |    tanh(5 × diff)     |  ← emotion scalar
+           +-----------+-----------+
+                       |
+          [ if |scalar| > threshold ]
+                       |
+           +-----------▼-----------+
+           |  decide shadow ratio  |
+           +-----------+-----------+
+                       |
+           +-----------▼-----------+
+           |  shadow blend update  |  ← gradually blends in past values
+           +-----------------------+
+ ```
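The flow above, end to end, as a minimal pure-Python sketch (names are illustrative; the real logic lives in `emolynx.py` / `emofact.py` in this repository):

```python
import math

def update_emas(ema, loss_val):
    # short EMA reacts quickly (weight 0.3), long EMA slowly (weight 0.01)
    ema['short'] = 0.3 * loss_val + 0.7 * ema.get('short', loss_val)
    ema['long'] = 0.01 * loss_val + 0.99 * ema.get('long', loss_val)
    return ema

def shadow_blend(param, shadow, ratio):
    # partial integration toward the shadow value, not an overwrite
    blended = param * (1 - ratio) + shadow * ratio
    # the shadow itself trails the current value by 5%
    shadow = shadow + 0.05 * (blended - shadow)
    return blended, shadow

# a steadily decreasing loss pulls the short EMA below the long EMA
ema = {}
for loss in [1.0, 0.8, 0.5, 0.3, 0.2]:
    update_emas(ema, loss)
scalar = math.tanh(5 * (ema['short'] - ema['long']))
```

With these decreasing losses the scalar goes strongly negative, which in `_decide_ratio` maps to a small (10%) blend ratio.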
+
+
+ ## 付録:
+ EmoNAVIのグラフへのリンク
+ Measured with an LR of 1e-4 / それぞれ 1e-4 のLRにて測定
+ ![graph00](https://github.com/muooon/EmoNavi/blob/main/emonavi-test00.png)
+ ![graph01](https://github.com/muooon/EmoNavi/blob/main/emonavi-test01.png)
+ ![graph02](https://github.com/muooon/EmoNavi/blob/main/emonavi-test02.png)
+
+ Have fun learning about EmoNAVI's philosophy and how it works:
+ https://huggingface.co/muooon/EmoNAVI/blob/main/emonavi-inner-workings(ENG).txt
+ EmoNAVIの考え方、その仕組みについて楽しく知る:
+ https://huggingface.co/muooon/EmoNAVI/blob/main/emonavi-inner-workings(JPN).txt
+ ## 経緯:
+ 現状の強化学習などを見ていて、いくつかの疑問に出会いました。
+ 日本の著名な漫画家、手塚治虫氏の描いた未来社会、それに憧れ羨望した少年時代を思い返すと、
+ 人類のパートナーになるべきAIについて他のアプローチを模索したくなりました。
+ 今回の提案はそのアプローチによるひとつの結果です。
+ ## Background
+ While observing the current state of reinforcement learning and related fields, I encountered several fundamental questions.
+ Reflecting on my childhood—when I admired and longed for the future societies envisioned by the legendary Japanese manga artist Osamu Tezuka—
+ I felt compelled to explore alternative approaches to how AI might serve as a true partner to humanity.
+ This proposal represents one such result born from that aspiration.
+
+ ## 謝意: Acknowledgements
+
+ fact は、Adafactor を参考にしました
+ Lynx は、Lion と Tiger を参考にしました
+ Emoシリーズはこれまでの様々なOptimizerの成果に学び完成しました
+ すべての開発者の皆さまに感謝します
+ Fact was inspired by Adafactor.
+ Lynx was inspired by Lion and Tiger.
+ The Emo series was completed by learning from the achievements of the many optimizers developed to date. We are grateful to all of their developers.
+
+ これまでAIの発展に寄与されたすべての方、これから貢献するすべての方へ感謝します。
+ このプロジェクト完成を支え続けてくれた Copilotさんに、ありがとう。
+
+ We extend our heartfelt gratitude to all those who have contributed—and will continue to contribute—to the advancement of AI.
  Special thanks to Copilot for its unwavering support throughout this project.
emofact.py CHANGED
@@ -109,4 +109,9 @@ class EmoFact(Optimizer):
          if avg_abs < 0.05 and std < 0.005:
              self.should_stop = True
 
-         return loss
+         return loss
+
+ """
+ Fact is inspired by Adafactor,
+ and its VRAM-friendly design is something everyone loves.
+ """
emolynx.py ADDED
@@ -0,0 +1,129 @@
+ import torch
+ from torch.optim import Optimizer
+ import math
+ from typing import Tuple, Callable, Union
+
+ # Helper function (Lynx)
+ def exists(val):
+     return val is not None
+
+ class EmoLynx(Optimizer):
+     # Class definition & initialization
+     def __init__(self, params: Union[list, torch.nn.Module], lr=1e-3, betas=(0.9, 0.99),
+                  # Lynx-compatible betas (beta1 / beta2 as in Lynx)
+                  eps=1e-8, weight_decay=0.01, decoupled_weight_decay: bool = False):
+
+         defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay)
+         super().__init__(params, defaults)
+
+         # stored for Lynx-style decoupled weight decay
+         self._init_lr = lr
+         self.decoupled_wd = decoupled_weight_decay
+         self.should_stop = False  # initialize the stop flag
+
+     # Emotion EMA update (tension and calm)
+     def _update_ema(self, state, loss_val):
+         ema = state.setdefault('ema', {})
+         ema['short'] = 0.3 * loss_val + 0.7 * ema.get('short', loss_val)
+         ema['long'] = 0.01 * loss_val + 0.99 * ema.get('long', loss_val)
+         return ema
+
+     # Emotion scalar (EMA gap through a smooth nonlinearity; tanh(5 * diff) sharpens sensitivity)
+     def _compute_scalar(self, ema):
+         diff = ema['short'] - ema['long']
+         return math.tanh(5 * diff)
+
+     # Shadow blend ratio (> 0.6: 70-90%, < -0.6: 10%, |scalar| > 0.3: 30%, otherwise: 0%)
+     def _decide_ratio(self, scalar):
+         if scalar > 0.6:
+             return 0.7 + 0.2 * scalar
+         elif scalar < -0.6:
+             return 0.1
+         elif abs(scalar) > 0.3:
+             return 0.3
+         return 0.0
+
+     # Loss acquisition (loss_val as a float for the emotion check; params without grads are skipped)
+     @torch.no_grad()
+     def step(self, closure: Callable | None = None):  # closure type hint added
+         loss = None
+         if exists(closure):  # use the exists helper for consistency
+             with torch.enable_grad():
+                 loss = closure()
+         loss_val = loss.item() if loss is not None else 0.0
+
+         for group in self.param_groups:
+             # extract the common Lynx parameters
+             lr, wd, beta1, beta2 = group['lr'], group['weight_decay'], *group['betas']
+
+             # separate weight-decay handling (from Lynx)
+             _wd_actual = wd
+             if self.decoupled_wd:
+                 _wd_actual /= self._init_lr  # adjust weight decay when decoupled
+
+             for p in filter(lambda p: exists(p.grad), group['params']):  # filter to params with grads
+
+                 grad = p.grad  # use the grad directly (".data" not needed here)
+                 state = self.state[p]
+
+                 # EMA update and scalar generation (the EMA gap yields the scalar, which sets the spike ratio)
+                 ema = self._update_ema(state, loss_val)
+                 scalar = self._compute_scalar(ema)
+                 ratio = self._decide_ratio(scalar)
+
+                 # shadow param: updated only when needed (a dynamic history trailing the current value by 5%)
+                 if ratio > 0:
+                     if 'shadow' not in state:
+                         state['shadow'] = p.data.clone()
+                     else:
+                         p.data.mul_(1 - ratio).add_(state['shadow'], alpha=ratio)
+                         state['shadow'].lerp_(p.data, 0.05)
+                         # the shadow is updated from p.data before the Lynx step (trails the current value by 5%)
+                         # p.data.mul_(1 - ratio).add_(state['shadow'], alpha=ratio)
+                         # EmoNavi: p.data = p.data * (1-ratio) + shadow * ratio
+
+                 # --- Start Lynx Gradient Update Logic ---
+
+                 # Lynx state initialization (exp_avg)
+                 if 'exp_avg' not in state:
+                     state['exp_avg'] = torch.zeros_like(p)
+                 exp_avg = state['exp_avg']
+
+                 # Stepweight decay (from Lynx): p.data = p.data * (1 - lr * wd)
+                 # uses _wd_actual to honor decoupled_wd
+                 p.data.mul_(1. - lr * _wd_actual)
+
+                 # gradient blend
+                 # m_t = beta1 * exp_avg_prev + (1 - beta1) * grad
+                 blended_grad = grad.mul(1. - beta1).add_(exp_avg, alpha=beta1)
+
+                 # p: p.data = p.data - lr * sign(blended_grad)
+                 p.data.add_(blended_grad.sign_(), alpha=-lr)
+
+                 # exp_avg = beta2 * exp_avg + (1 - beta2) * grad
+                 exp_avg.mul_(beta2).add_(grad, alpha=1. - beta2)
+
+                 # --- End Lynx Gradient Update Logic ---
+
+                 # record scalar for early stop (shared buffer; at most 32 entries; measures activity vs. stillness)
+                 # note: this uses self.state with a string key, not the per-parameter state
+                 hist = self.state.setdefault('scalar_hist', [])
+                 hist.append(scalar)
+                 if len(hist) > 32:
+                     hist.pop(0)
+
+         # early-stop decision (a signal of quietness) - outside the inner parameter loop
+         if len(self.state.get('scalar_hist', [])) >= 32:
+             buf = self.state['scalar_hist']
+             avg_abs = sum(abs(s) for s in buf) / len(buf)
+             std = sum((s - sum(buf) / len(buf)) ** 2 for s in buf) / len(buf)  # population variance
+             if avg_abs < 0.05 and std < 0.005:
+                 self.should_stop = True  # 💡 external code can check this flag
+
+         return loss
+
+ """
+ Lynx was developed with inspiration from Lion and Tiger,
+ which we deeply respect for their lightweight and intelligent design.
+ Lynx also integrates EmoNAVI to enhance its capabilities.
+ """
graph/emonavi-test00.png ADDED

Git LFS Details

  • SHA256: 62538dcf1194a38b499911c5e522959d1af42ee04e9c8b2855d0757f98f3ff66
  • Pointer size: 131 Bytes
  • Size of remote file: 165 kB
graph/emonavi-test01.png ADDED

Git LFS Details

  • SHA256: e1c74c2b1fdda81de29398f6179ae203b6327810d438a9f9d869a99f0d4f540d
  • Pointer size: 131 Bytes
  • Size of remote file: 163 kB
graph/emonavi-test02.png ADDED

Git LFS Details

  • SHA256: 61d74c4190d3806fb65aec3539f4268fe836aea5d6305863d81f2219ca625e34
  • Pointer size: 131 Bytes
  • Size of remote file: 140 kB
graph/rastrigin_Adam.png ADDED

Git LFS Details

  • SHA256: 3cdda4953576ce98dc1cb7b555a4d2e3a33faddacbad072e6219468a0d26aef3
  • Pointer size: 131 Bytes
  • Size of remote file: 745 kB
graph/rastrigin_AdamW.png ADDED

Git LFS Details

  • SHA256: f3d98a22c73ef0c6d4578e3b272c060eb309f4a4731897d122d89087d8318b50
  • Pointer size: 131 Bytes
  • Size of remote file: 747 kB
graph/rastrigin_EmoFact.png ADDED

Git LFS Details

  • SHA256: f12bc7c3e0ad099eaf18db5eed0aaa49a22e926490c24e7c1461d020d9b89ed2
  • Pointer size: 131 Bytes
  • Size of remote file: 745 kB
graph/rastrigin_EmoLynx.png ADDED

Git LFS Details

  • SHA256: 986a1434173c2709c761e345fe02377f5f4f63db66076ad52ade521d6fc816ad
  • Pointer size: 131 Bytes
  • Size of remote file: 743 kB
graph/rastrigin_EmoNavi.png ADDED

Git LFS Details

  • SHA256: e128577ec5cfac12516f64aa64d36df57ecb969955c6b44714f91f65623fb2da
  • Pointer size: 131 Bytes
  • Size of remote file: 748 kB
graph/rosenbrock_Adam.png ADDED

Git LFS Details

  • SHA256: a7e006bd85ff2c59ecde54ea8acae37b999d4099fd12ad93221ffd5dd6c84628
  • Pointer size: 131 Bytes
  • Size of remote file: 459 kB
graph/rosenbrock_AdamW.png ADDED

Git LFS Details

  • SHA256: 645ede965ed6757534cb289cb3b4cd9492b08bc150f90bd217c15c4c0fc8e6aa
  • Pointer size: 131 Bytes
  • Size of remote file: 459 kB
graph/rosenbrock_EmoFact.png ADDED

Git LFS Details

  • SHA256: 394ffd0e91c799388a073d4f988a3954a430fa673b3bd2bf0066dea8c4a619aa
  • Pointer size: 131 Bytes
  • Size of remote file: 452 kB
graph/rosenbrock_EmoLynx.png ADDED

Git LFS Details

  • SHA256: 8c5fe9ed93bbb2734c705f6edc524ddc76ada069fe8c4326fb1d64e309e33109
  • Pointer size: 131 Bytes
  • Size of remote file: 405 kB
graph/rosenbrock_EmoNavi.png ADDED

Git LFS Details

  • SHA256: 7b8492f7754169900ed9acd9acf88a6cc4ac514e4483f7f72028568869891927
  • Pointer size: 131 Bytes
  • Size of remote file: 463 kB