Update README.md

README.md CHANGED
@@ -8,7 +8,7 @@ pinned: false
 short_description: Reactive AI - Reactive Neural Networks and Event-Driven AI
 ---
 
-<img src="https://huggingface.co/spaces/ReactiveAI/README/resolve/main/logo-black.png" width=
 
 # Reactive AI
 We are working on our own ideas of Reactive Neural Networks (RxNN) and Event-Driven AI, advancing from language models to AGI awareness models.
@@ -27,23 +27,12 @@ have to be stateful and remember the data between the interactions.
 _**Strong Reactive Neural Networks**_ like **Reactor** can emit and listen to their own internal events, while _**Weak Reactive Neural Networks**_
 work only on environment events.
 
-## Reactor AGI
-
-<!-- <img src="https://raw.githubusercontent.com/RxAI-dev/RxNN/refs/heads/main/assets/logo/logo_reactor.png" width="350" /> -->
-
-Our primary architecture - **Reactor** - is planned as the first _**awareness AGI model**_, modelling awareness as an _Infinite Chain-of-Thoughts_
-connected to _Short-Term and Long-Term Memory_ (an _Attention-based Memory System_) and _Receptors/Effectors_ systems for real-time reactive processing.
-It will be able to constantly and autonomously learn from interactions in a _Continuous Live Learning_ process.
-
-> Reactor architecture details and the mathematical model were analysed by 30 state-of-the-art LLM/reasoning models, which rated its potential
-> to reach AGI at ~4.35/5
-
-## Reactive Language Models (RxLM)
-While **Reactor** is the main goal, it is extremely hard to achieve, as it is by far the most advanced neural network ensemble ever.
 
-
-
-
 
 ## RxLM vs LLM advantages
 Processing single interactions in real-time with **Reactive Language Models** leads to **revolutionary** improvements in inference speed/cost:
@@ -54,14 +43,9 @@ Processing single interactions in real-time by **Reactive Language Models** lead
 > For example, in a dialog with **DeepSeek R1** that had ~90k tokens overall, I paid for about 1.5M tokens. With **RxLM** it would cost only those ~90k tokens, so it
 > would be about **15x cheaper**
 
 
-
-
-<img src="https://raw.githubusercontent.com/RxAI-dev/rxlm/refs/heads/main/assets/logo/logo_rxnn_v2.png" width="250" />
-
-
-## Additional Research
-- **Sparse Query Attention (SQA)** - the most cost-effective GQA variant, even 2-3x faster for long sequences!
-- **Flex-SQA** - a combination of Flex Attention and (symmetric) Sparse Query Attention, enabling 4-8x longer sliding windows
-- **Flex Memory Attention / Memory Cross-Attention** - connects spatially sparse attention with memory layers to enable very long single interactions - a smaller sliding window over the input sequence attends to full memory, or the opposite
-- **Mixture-of-Experts for Grouped Attention** - a MoE router dynamically selects GQA/SQA groups instead of a static selection. Abandoned, because results were worse than for GQA/SQA
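The removed SQA bullet above claims 2-3x speedups over GQA for long sequences. A shape-level sketch of where the saving could come from, under illustrative assumptions (the function name, head counts, and dimensions below are not from the RxNN codebase): GQA shrinks the number of key/value heads, which saves KV-cache memory but not attention-score FLOPs, while SQA shrinks the number of *query* heads, so the QK^T and attention-weighted-V matmuls scale down proportionally.

```python
# Shape-level sketch (illustrative, not the RxNN implementation) of SQA's
# FLOP saving: attention-score work is proportional to the number of
# query heads, so halving them halves the score FLOPs.

def attention_score_flops(seq_len: int, head_dim: int, n_query_heads: int) -> int:
    # QK^T multiply-adds per layer: heads * L * L * d (constants ignored).
    return n_query_heads * seq_len * seq_len * head_dim

full = attention_score_flops(4096, 64, n_query_heads=16)  # MHA/GQA: all 16 query heads
sqa = attention_score_flops(4096, 64, n_query_heads=8)    # SQA: half the query heads
print(full // sqa)  # → 2, i.e. 2x fewer score FLOPs at the same sequence length
```

Because the saving multiplies the quadratic L*L term, it matters most for long sequences, which matches the bullet's "faster for long sequences" framing.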
 short_description: Reactive AI - Reactive Neural Networks and Event-Driven AI
 ---
 
+<img src="https://huggingface.co/spaces/ReactiveAI/README/resolve/main/logo-black.png" width=300 />
 
 # Reactive AI
 We are working on our own ideas of Reactive Neural Networks (RxNN) and Event-Driven AI, advancing from language models to AGI awareness models.
 _**Strong Reactive Neural Networks**_ like **Reactor** can emit and listen to their own internal events, while _**Weak Reactive Neural Networks**_
 work only on environment events.
 
+## Stateful Reactive Language Models (RxLM)
+Our **Reactive Transformer** (RxT), and its second, improved generation, extend stateless Transformer language models (almost all LLMs), introducing an
+_Attention-based Memory System_ (ABMS) with _Short-Term Memory_ (STM) or a multi-level _Mixture-of-Memory_ (MoM, with _Long-Term Memory_). It is built on a higher level
+of recurrence and memory - not between tokens, like SSMs, Linear Attention (which could be combined with RxLM) or RNNs, but between interactions (query and answer).
+RxLMs introduce effective stateful processing with continual learning and effectively infinite memory & context, and are natively conversational & agentic.
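The inter-interaction recurrence described above can be pictured as a loop that carries a fixed-size memory state across turns instead of re-reading the conversation. A minimal conceptual sketch, assuming illustrative names (`MemoryState`, `generate`, `update_memory` are hypothetical, not the RxLM API):

```python
# Conceptual sketch of recurrence *between interactions* (not the RxLM API):
# each turn processes only the new query, then folds the finished
# interaction into a fixed-size short-term memory.

from dataclasses import dataclass

@dataclass
class MemoryState:
    stm: list[float]  # fixed-size short-term memory slots

def generate(state: MemoryState, query: str) -> str:
    # Decode an answer conditioned on the query and current memory (stubbed here).
    return f"answer({query})"

def update_memory(state: MemoryState, query: str, answer: str) -> MemoryState:
    # Attention-based memory update over the completed interaction (stubbed here).
    return MemoryState(stm=list(state.stm))

state = MemoryState(stm=[0.0] * 8)
for query in ["hi", "what is RxT?"]:
    answer = generate(state, query)              # processes only this interaction
    state = update_memory(state, query, answer)  # recurrence between interactions
```

The key design point is that `state` stays the same size no matter how many turns have passed, which is what decouples per-turn cost from conversation length.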
 
 ## RxLM vs LLM advantages
 Processing single interactions in real-time with **Reactive Language Models** leads to **revolutionary** improvements in inference speed/cost:
 > For example, in a dialog with **DeepSeek R1** that had ~90k tokens overall, I paid for about 1.5M tokens. With **RxLM** it would cost only those ~90k tokens, so it
 > would be about **15x cheaper**
 
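The ~15x figure quoted above follows from simple cumulative-token arithmetic: a stateless LLM re-reads the whole history on every turn, so billed tokens grow quadratically with turn count, while a stateful model pays for each token once. A sketch with illustrative numbers (the 30-turn / 3k-tokens-per-turn split is an assumption chosen to total ~90k, not from the source):

```python
# Cumulative-token arithmetic behind the "15x cheaper" claim.
# Assumed dialog shape: 30 turns of ~3k tokens each (= ~90k tokens total).

def llm_tokens(n_turns: int, turn_tokens: int) -> int:
    # A stateless LLM re-processes all previous turns plus the new one each turn.
    return sum(k * turn_tokens for k in range(1, n_turns + 1))

def rxlm_tokens(n_turns: int, turn_tokens: int) -> int:
    # A stateful model processes each interaction exactly once.
    return n_turns * turn_tokens

llm = llm_tokens(30, 3_000)    # 1,395,000 tokens billed (~1.5M, as in the quote)
rxlm = rxlm_tokens(30, 3_000)  # 90,000 tokens billed
print(llm, rxlm, llm / rxlm)   # ratio ≈ 15.5
```

The ratio grows with dialog length: for n equal-sized turns it is (n + 1) / 2, so longer conversations widen the gap further.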
+## Reactor AGI
+Our final goal - **Reactor** - is planned as the first _**awareness AGI model**_, modelling consciousness as an _Infinite Chain-of-Thoughts_
+connected to a _Mixture-of-Memory (MoM)_ in an _Attention-based Memory System_ and _Receptors/Effectors_ systems for real-time reactive processing.
+It will be able to constantly and autonomously learn from interactions in a _Continuous Live Learning_ process.
 
+[Visit our website!](https://rxai.dev) \[Work in progress\]