---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen3-4B
- Qwen/Qwen3-1.7B
- Qwen/Qwen3-0.6B
library_name: transformers
---
## Model Overview

Memory Operator is a specialized language model developed for MemOS, designed to handle memory-related operations. Its core capabilities include **memory extraction, integration, and update**. The primary objectives for developing the Memory Operator sub-model are:

1. **Support local-only deployment**, enabling the use of MemOS in restricted environments where internet connectivity is unavailable.
2. **Achieve memory operations at lower cost and higher speed**, while maintaining high system performance.

We are releasing the MemOperator model series in three sizes: **4B, 1.7B, and 0.6B parameters**. These models are built on the **Qwen3 series** and trained with supervised fine-tuning (SFT) on a combination of human-annotated and model-generated data, and they deliver strong performance on tasks such as memory extraction and reorganization.

Currently, the Memory Operator supports **memory extraction** and **clustering-based memory reorganization** within the MemOS system; conflict resolution and relational reasoning are under active development.

### Key Features
- **Type**: Causal Language Model (Decoder-only)
- **Training Stage**: Supervised Fine-tuning (SFT)
- **Supported Languages**: English (en), Chinese (zh)
- **Number of Parameters**: 4B, 1.7B, 0.6B
- **Context Length**: 32,768 tokens

---

## Highlights

### 🚀 Faster and More Efficient Memory Operations
Memory Operator is optimized for fast and accurate memory handling, enabling real-time processing in local environments.

### 🧠 Comprehensive Memory Management
- **Memory Extraction**: Supports extraction of high-quality memories from both conversations and documents, including summarization of document snippets.
- **Memory Reorganization**: Implements clustering-based reorganization to group and integrate related memories, enhancing long-term memory coherence.

### 💻 High System Performance with Low Resource Usage
- The **4B model** delivers performance that surpasses GPT-4o-mini while remaining deployable on most consumer-grade hardware.
- The smaller **1.7B and 0.6B variants** retain strong performance, making them ideal for edge devices and low-latency applications.

### 🌍 Multilingual Support
- Supports memory extraction in **both Chinese and English**.
- Effectively follows instructions in the input language, ensuring accurate and context-aware outputs.



## Performance

### Memory Extraction & Integration Evaluation (LoCoMo benchmark)

| Model | Overall | Temporal Reasoning | Multi-Hop | Single-Hop | Open-Domain |
|-------|--------|---------------------|----------|-----------|-------------|
| Qwen3-32B | 0.7675 | 0.7103 | 0.6702 | 0.8442 | 0.5729 |
| Qwen3-14B | 0.7370 | 0.6822 | 0.6631 | 0.8002 | 0.5833 |
| **MemOperator-4B** | **0.7714** | **0.8037** | **0.6737** | **0.8180** | **0.5416** |
| **MemOperator-1.7B** | **0.7571** | **0.8068** | **0.6560** | **0.7955** | **0.5521** |
| **MemOperator-0.6B** | **0.6753** | **0.6635** | **0.5780** | **0.7325** | **0.5000** |
| GPT-4o-mini | 0.7405 | 0.7217 | 0.6844 | 0.7864 | 0.5659 |

> ✅ **Key Advantage**:  
> By replacing large open-source models (e.g., Qwen3-32B) with **MemOperator-4B**, you can achieve **comparable or better memory processing performance** while reducing **resource consumption by over 80%** (4B vs 32B). This enables efficient, scalable, and cost-effective deployment.

---

## Usage

### MemOS Integration Guide

You can easily configure MemOS to use a trained MemOperator model as its memory reader (MemReader) for memory extraction.

#### 1. Install MemOS via pip

```bash
pip install MemoryOS
```
#### 2. Initialize a MemOperator and Extract Memory
```python
from memos.configs.mem_reader import SimpleStructMemReaderConfig
from memos.mem_reader.simple_struct import SimpleStructMemReader

config = SimpleStructMemReaderConfig(
    **{
        "llm": {
            "backend": "huggingface",
            "config": {
                "model_name_or_path": "MemTensor/MemOperator-0.6B",
                "temperature": 0.6,
                "max_tokens": 6000,
                "top_p": 0.95,
                "top_k": 20,
                "extra_body": {"chat_template_kwargs": {"enable_thinking": false}}
            },
        },
        "embedder": {
            "backend": "ollama",
            "config": {"model_name_or_path": "nomic-embed-text:latest"},
        },
        "chunker": {
            "backend": "sentence",
            "config": {
                "tokenizer_or_token_counter": "gpt2",
                "chunk_size": 512,
                "chunk_overlap": 128,
                "min_sentences_per_chunk": 1,
            },
        },
        "remove_prompt_example": True,
    }
)

reader = SimpleStructMemReader(config)

# Example chat data
chat_data = [
    [
        {
            "role": "user",
            "chat_time": "June 26, 2025 at 3:00 PM",
            "content": "Hi Jerry! Yesterday at 3 PM I had a meeting with my team about the new project.",
        },
        {
            "role": "assistant",
            "chat_time": "June 26, 2025 at 3:00 PM",
            "content": "Oh Tom! Do you think the team can finish by December 15?",
        },
        {
            "role": "user",
            "chat_time": "June 26, 2025 at 3:00 PM",
            "content": "I’m worried. The backend won’t be done until December 10, so testing will be tight.",
        },
        {
            "role": "assistant",
            "chat_time": "June 26, 2025 at 3:00 PM",
            "content": "Maybe propose an extension?",
        },
        {
            "role": "user",
            "chat_time": "June 26, 2025 at 4:21 PM",
            "content": "Good idea. I’ll raise it in tomorrow’s 9:30 AM meeting—maybe shift the deadline to January 5.",
        },
    ]
]

# Save document for testing
with open("tmp.txt", "w") as f:
    f.write(
        "Lou Henry Hoover (March 29, 1874 – January 7, 1944) was an American philanthropist, geologist, and the first lady of the United States from 1929 to 1933 as the wife of President Herbert Hoover. She was active in community organizations and volunteer groups throughout her life, including the Girl Scouts of the USA, which she led from 1922 to 1925 and from 1935 to 1937. Throughout her life, Hoover supported women's rights and women's independence. She was a polyglot, fluent in Mandarin and well-versed in Latin, and was the primary translator from Latin to English of the complex 16th-century metallurgy text De re metallica."
    )

# Extract chat and document memories
chat_memory = reader.get_memory(
    chat_data, type="chat", info={"user_id": "Tom", "session_id": "session1"}
)
doc_memory = reader.get_memory(
    ["tmp.txt"],
    "doc",
    info={
        "user_id": "Tom",
        "session_id": "session2",
    },
)

print(chat_memory)
print(doc_memory)
```
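The exact return shape depends on your MemOS version; in recent releases `get_memory` yields one list of extracted memory items per input scene. A minimal inspection sketch, assuming each item exposes a `.memory` text field as in MemOS's `TextualMemoryItem` (print the raw objects if your version differs):
```python
# Hypothetical inspection loop: assumes items expose a `.memory` text field,
# as in MemOS's TextualMemoryItem; fall back to printing raw objects otherwise.
for scene in chat_memory:
    for item in scene:
        print(item.memory)
```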
#### 3. Use MemOperator to Organize Memory in MemOS
Configure your mem_cube_config.json:
```json
{
  ...,
  "reorganize": true,
  "text_mem": {
    "backend": "tree_text",
    "config": {
      "extractor_llm": {
        "backend": "huggingface",
        "config": {
          "model_name_or_path": "MemTensor/MemOperator-0.6B",
          "temperature": 0.8,
          "max_tokens": 1024,
          "top_p": 0.9,
          "top_k": 50
        }
      },
      "dispatcher_llm": {
        ...
      },
      "graph_db": {
        ...
      },
      "embedder": {
        ...
      }
    }
  },
  "act_mem": {},
  "para_mem": {}
}
```
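Setting `"reorganize": true` enables the clustering-based memory reorganization described above, so related memories in the tree-structured text memory are grouped and integrated over time by the configured MemOperator model.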
#### 4. Initialize MemOS and Register Memory Cube

```python
import json
from memos import GeneralMemCubeConfig, GeneralMemCube, MOSConfig
from memos.mem_os.main import MOS

# Initialize MOS
user_id = 'test'
mos_config_path = "configs/mos_memos_config.json"
with open(mos_config_path) as f:
    mos_config_data = json.load(f)
mos_config = MOSConfig(**mos_config_data)
mos = MOS(mos_config)
mos.create_user(user_id=user_id)

# Configure and initialize memory cube
mem_cube_config_path = "configs/mem_cube_config.json"
with open(mem_cube_config_path) as f:
    mem_cube_config_data = json.load(f)
mem_cube_config = GeneralMemCubeConfig.model_validate(mem_cube_config_data)
mem_cube = GeneralMemCube(mem_cube_config)

# Register memory cube to MOS
storage_path = f"./{user_id}_cube"
try:
    mem_cube.dump(storage_path)
except Exception:
    print(f"Memory cube already exists at {storage_path}; reusing it.")

mos.register_mem_cube(
    mem_cube_name_or_path=storage_path,
    mem_cube_id=user_id,
    user_id=user_id,
)
```
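Once the cube is registered, memories flow through the MemOperator model automatically whenever you add conversations. A minimal sketch, assuming the `mos.add` / `mos.search` interface shown in the MemOS documentation:
```python
# Add a conversation: the registered MemOperator model extracts memories from
# it, and with "reorganize": true related memories are clustered over time.
mos.add(
    messages=[
        {"role": "user", "content": "I like playing football."},
        {"role": "assistant", "content": "That's great! Football is a lot of fun."},
    ],
    user_id=user_id,
)

# Retrieve memories relevant to a query.
results = mos.search(query="What does the user like?", user_id=user_id)
print(results)
```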


## Hugging Face Usage

You can also load the model directly with Hugging Face Transformers, vLLM, or SGLang and perform memory extraction using the preset templates we have configured.
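For example, a minimal sketch with Transformers (the sampling parameters mirror the MemOS config above; the user message is a hypothetical placeholder, since in real use MemOS wraps your conversation or document in its preset extraction template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MemTensor/MemOperator-0.6B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# Hypothetical placeholder: MemOS normally supplies the extraction prompt.
messages = [{"role": "user", "content": "Extract memories from this conversation: ..."}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # Qwen3-style thinking switch, as in the configs above
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```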