---
title: README
emoji: 🦀
colorFrom: blue
colorTo: gray
sdk: static
pinned: false
---

# Reactive AI
We are working on our own idea of Reactive Neural Networks (RxNN) - a special kind of memory-augmented neural network that keeps state/memory
between interactions/sequences, instead of between tokens/elements within a sequence, and provides reactive communication patterns.
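
To make the distinction concrete, here is a minimal conceptual sketch in Python. The method names `step`, `process`, and `update_memory` are hypothetical placeholders, not the RxNN API: a classic RNN updates its state after every token, while a reactive model processes each interaction in one parallel pass and updates its memory only between interactions.

```python
# Conceptual sketch only - hypothetical interfaces, not the RxNN API.

def rnn_style(model, tokens, state):
    """Token-level recurrence: state is updated after every token."""
    out = None
    for tok in tokens:
        out, state = model.step(tok, state)
    return out, state

def reactive_style(model, interactions, memory):
    """Interaction-level recurrence: each query is processed in a single
    parallel pass; memory is updated once, between interactions."""
    for query in interactions:
        response = model.process(query, memory)
        memory = model.update_memory(query, response, memory)
        yield response
```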

Our primary architecture - **Reactor** - is planned as the first _**awareness AGI model**_. It models awareness as an _Infinite Chain-of-Thoughts_,
connected to _Short-Term and Long-Term Memory_ (an _Attention-based Memory System_) and to _Receptor/Effector_ systems for real-time reactive processing.
It will be able to constantly and autonomously learn from interactions in a _Continuous Live Learning_ process.

While **Reactor** is the main goal, it is extremely hard to achieve, as it is arguably the most advanced neural network ensemble proposed to date.

That's why we designed simplified architectures for an incremental transformation from language/reasoning models to the awareness model:
- **Reactive Transformer** introduces the _Attention-based Memory System_ and adds _Short-Term Memory_ to Transformer language models (see the sketch after this list)
- **Preactor** adds _Long-Term Memory_ and the ability to learn from interactions
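
The sketch below illustrates the Reactive Transformer idea under stated assumptions: a standard Transformer layer extended with cross-attention over a fixed-size Short-Term Memory that persists across interactions. The class name, method names, and the toy write rule are our illustrative assumptions, not the actual RxNN implementation.

```python
import torch
import torch.nn as nn

class MemoryAugmentedLayer(nn.Module):
    """Illustrative sketch (assumed names, not the RxNN implementation):
    a Transformer layer extended with cross-attention over a fixed-size
    Short-Term Memory (STM) that persists between interactions."""

    def __init__(self, d_model: int, n_heads: int, memory_slots: int):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mem_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # STM persists across forward passes (interactions), not across tokens.
        self.register_buffer("stm", torch.zeros(1, memory_slots, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Standard self-attention over the current interaction's sequence.
        h = self.norm1(x + self.self_attn(x, x, x, need_weights=False)[0])
        # Memory read: cross-attention from the sequence to the STM slots.
        mem = self.stm.expand(x.size(0), -1, -1)
        return self.norm2(h + self.mem_attn(h, mem, mem, need_weights=False)[0])

    @torch.no_grad()
    def write_memory(self, h: torch.Tensor, alpha: float = 0.1) -> None:
        # Toy write rule, called once per interaction: blend a summary of
        # the processed sequence into the STM. This moving average is only
        # for illustration; the real system uses attention-based updates.
        summary = h.mean(dim=(0, 1), keepdim=True)  # (1, 1, d_model)
        self.stm.lerp_(summary, alpha)
```

In this sketch the memory read happens inside the layer, so processing stays fully parallel within an interaction, while `write_memory` runs once per interaction.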

We are currently working on the **Reactive Transformer Proof-of-Concept - RxT-Alpha**, which will be published soon.

More info soon

## RxNN Platform
We are working on a complete Reactive Neural Networks development framework - [RxNN on GitHub](https://github.com/RxAI-dev/RxNN)

## Additional Research
- **Sparse Query Attention** - the most cost-effective GQA variant, reducing overall training time/cost by ~3-10% with similar performance. Research in progress (see the sketch below).
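
As a rough illustration of the idea (our own sketch with assumed hyperparameter names, not the RxNN API): where GQA/MQA shrink the number of key/value heads, Sparse Query Attention shrinks the number of *query* heads, which directly reduces the number of attention score matrices and hence compute.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseQueryAttention(nn.Module):
    """Illustrative sketch (assumed names, not the RxNN API): SQA reduces
    the number of QUERY heads, whereas GQA/MQA reduce key/value heads.
    Fewer query heads means fewer attention score maps, so attention
    compute drops roughly by num_query_heads / num_heads."""

    def __init__(self, d_model: int, num_heads: int,
                 num_query_heads: int, num_kv_heads: int):
        super().__init__()
        assert num_query_heads % num_kv_heads == 0
        self.d_head = d_model // num_heads
        self.h_q, self.h_kv = num_query_heads, num_kv_heads
        self.q_proj = nn.Linear(d_model, num_query_heads * self.d_head)
        self.k_proj = nn.Linear(d_model, num_kv_heads * self.d_head)
        self.v_proj = nn.Linear(d_model, num_kv_heads * self.d_head)
        self.out_proj = nn.Linear(num_query_heads * self.d_head, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.h_q, self.d_head).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.h_kv, self.d_head).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.h_kv, self.d_head).transpose(1, 2)
        # Share each KV head across a group of query heads (as in GQA).
        k = k.repeat_interleave(self.h_q // self.h_kv, dim=1)
        v = v.repeat_interleave(self.h_q // self.h_kv, dim=1)
        out = F.scaled_dot_product_attention(q, k, v)  # h_q score maps, not h
        return self.out_proj(out.transpose(1, 2).reshape(b, t, -1))

# Example: half the query heads of a 16-head layer at d_model=512.
attn = SparseQueryAttention(d_model=512, num_heads=16,
                            num_query_heads=8, num_kv_heads=8)
y = attn(torch.randn(2, 128, 512))  # ~half the attention-score FLOPs of MHA
```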