codysnider committed
Commit eeafa51 · 1 Parent(s): 5ecbe81

Tighten public README

Files changed (1)
  1. README.md +44 -2
README.md CHANGED
@@ -16,7 +16,7 @@ size_categories:
 
 # FalseMemBench
 
-`FalseMemBench` is a benchmark project for evaluating memory retrieval systems under adversarial distractors.
+`FalseMemBench` is an adversarial benchmark for evaluating memory retrieval systems under heavy distractor pressure.
 
 The goal is to measure whether a system can retrieve the right memory when many nearby but wrong memories are present.
 
@@ -33,6 +33,20 @@ It emphasizes:
 - speaker confusion
 - near-duplicate paraphrases
 
+## Public Surface
+
+The public release is intentionally small:
+
+- `data/cases.jsonl`: canonical benchmark dataset
+- `schema/case.schema.json`: case schema
+- `scripts/validate.py`: dataset validator
+- `scripts/run_tagmem_benchmark.py`: benchmark runner for `tagmem`
+- `scripts/run_mempalace_benchmark.py`: benchmark runner for MemPalace-style retrieval
+- `scripts/run_benchmark.py`: simple keyword baseline
+- `scripts/run_bm25_benchmark.py`: BM25 baseline
+- `scripts/run_dense_benchmark.py`: dense retrieval baseline
+- `docs/`: supporting benchmark notes
+
 ## Layout
 
 - `schema/case.schema.json`: benchmark case schema
@@ -52,6 +66,34 @@ It emphasizes:
 
 There are no public snapshot versions in this repository. Version history is tracked through git.
 
+## Running
+
+Validate the canonical dataset:
+
+```bash
+python3 scripts/validate.py
+```
+
+Run the simple keyword baseline:
+
+```bash
+python3 scripts/run_benchmark.py
+```
+
+Run the `tagmem` benchmark:
+
+```bash
+python3 scripts/run_tagmem_benchmark.py --tagmem-bin tagmem
+```
+
+Run the MemPalace-style benchmark:
+
+```bash
+python3 scripts/run_mempalace_benchmark.py
+```
+
+Optional BM25 and dense baselines use dependencies from `requirements.txt`.
+
 ## Case format
 
 Each case contains:
@@ -100,7 +142,7 @@ Current dataset size:
 
 - `573` cases
 
-## Intended use
+## Intended Use
 
 The benchmark is intended to be:
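The diff lists `scripts/run_benchmark.py` as a simple keyword baseline but does not show its contents. A minimal sketch of what keyword-overlap retrieval over a distractor-heavy case might look like — the case record, its field names, and the scoring are hypothetical illustrations, not taken from the repository (the real schema lives in `schema/case.schema.json`):

```python
import re

# Hypothetical case record for illustration only; the real schema in
# schema/case.schema.json may use entirely different field names.
case = {
    "id": "case-0001",
    "query": "Where did Dana say she parked on Tuesday?",
    "memories": [
        {"id": "m1", "text": "Dana said she parked in the blue garage on Tuesday."},
        {"id": "m2", "text": "Dana said she parked in the blue garage on Thursday."},  # temporal confusion
        {"id": "m3", "text": "Erin said she parked in the blue garage on Tuesday."},   # speaker confusion
    ],
    "answer_id": "m1",
}

def tokenize(text: str) -> set[str]:
    """Lowercase and split on non-alphanumeric characters."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def keyword_score(query: str, text: str) -> int:
    """Count tokens shared between the query and a memory text."""
    return len(tokenize(query) & tokenize(text))

def retrieve(case: dict) -> str:
    """Return the id of the memory with the highest keyword overlap."""
    best = max(case["memories"], key=lambda m: keyword_score(case["query"], m["text"]))
    return best["id"]

print(retrieve(case))  # prints "m1" -- only one shared token ahead of each distractor
```

Note how thin the margin is: the correct memory beats the temporal and speaker distractors by a single overlapping token, which is exactly the failure surface the benchmark's near-duplicate paraphrases are designed to probe.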
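The BM25 baseline's implementation is likewise not shown; assuming it scores memories with standard Okapi BM25, the scoring can be sketched in pure Python. The defaults `k1=1.5` and `b=0.75` are common conventions, not values taken from the repository:

```python
import math
import re

def tokenize(text: str) -> list[str]:
    """Lowercase and split on non-alphanumeric characters."""
    return re.findall(r"[a-z0-9]+", text.lower())

def bm25_scores(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each document against the query with Okapi BM25."""
    toks = [tokenize(d) for d in docs]
    n = len(docs)
    avgdl = sum(len(t) for t in toks) / n
    q_terms = tokenize(query)
    # document frequency of each query term
    df = {term: sum(1 for t in toks if term in t) for term in q_terms}
    scores = []
    for t in toks:
        s = 0.0
        for term in q_terms:
            f = t.count(term)  # term frequency in this document
            if f == 0:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            s += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(s)
    return scores

scores = bm25_scores(
    "cat mat",
    ["the cat sat on the mat", "a dog chased the cat", "stock prices fell sharply"],
)
# The document containing both query terms scores highest; the one with
# neither term scores exactly zero.
```

Unlike plain keyword overlap, BM25's inverse-document-frequency weighting discounts terms that appear across many near-duplicate memories, which is why it is a meaningfully stronger baseline on distractor-heavy cases.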