AuroraProgram committed on
Commit ff10712 · verified · 1 parent: 95f4def

Upload trinity_3/core_backup.py with huggingface_hub

Files changed (1):
  1. trinity_3/core_backup.py +2172 -0
trinity_3/core_backup.py ADDED
@@ -0,0 +1,2172 @@
# === Patch Pattern 0 stubs after class definitions ===
# ===================== PATTERN 0: CANONICAL ONTOLOGICAL SEED =====================
import hashlib
import random
import itertools
from typing import List, Union, Optional, Tuple, Dict

PHI = 0.6180339887

def apply_ethical_constraint(vector, space_id, kb):
    # Placeholder: fetch ethical rules from the KB; default to [-1, -1, -1]
    rules = getattr(kb, 'get_ethics', lambda sid: [-1, -1, -1])(space_id) or [-1, -1, -1]
    return [v ^ r if r != -1 else v for v, r in zip(vector, rules)]

def compute_ethical_signature(cluster):
    base = str([t.nivel_3[0] for t in cluster]).encode()
    return hashlib.sha256(base).hexdigest()

def golden_ratio_select(N, seed):
    step = int(max(1, round(N * PHI)))
    return [(seed + i * step) % N for i in range(3)]

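For reference, `golden_ratio_select` can be exercised on its own; this minimal sketch copies the helper above (with `PHI` as defined there) and shows the indices it yields for a pool of 10 tensors.

```python
# Minimal standalone sketch mirroring golden_ratio_select above.
PHI = 0.6180339887

def golden_ratio_select(N, seed):
    # Stride roughly N * PHI through the pool, wrapping modulo N.
    step = int(max(1, round(N * PHI)))
    return [(seed + i * step) % N for i in range(3)]

print(golden_ratio_select(10, seed=7))  # step = 6 -> [7, 3, 9]
```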
def pattern0_create_fractal_cluster(
    *,
    input_data=None,
    space_id="default",
    num_tensors=3,
    context=None,
    entropy_seed=PHI,
    depth_max=3,
):
    random.seed(entropy_seed * 1e9)
    kb = FractalKnowledgeBase()
    armonizador = Armonizador(knowledge_base=kb)
    pool = TensorPoolManager()

    # 1. Generation / import
    tensors = []
    for i in range(num_tensors):
        if input_data and i < len(input_data):
            vec = apply_ethical_constraint(input_data[i], space_id, kb)
            tensor = FractalTensor(nivel_3=[vec])
        else:
            # If FractalTensor.random supports constraints, pass space_id; else fall back
            try:
                tensor = FractalTensor.random(space_constraints=space_id)
            except TypeError:
                tensor = FractalTensor.random()
        tensors.append(tensor)
        pool.add_tensor(tensor)

    # 2. Recursive harmonization
    def harmonize_fractal(t, depth=0):
        if depth >= depth_max:
            return t
        t.nivel_3[0] = armonizador.harmonize(t.nivel_3[0], space_id=space_id)["output"]
        # Recursively harmonize sublevels if the method exists
        if hasattr(t, 'get_sublevels'):
            for sub in t.get_sublevels():
                harmonize_fractal(sub, depth + 1)
        return t

    tensors = [harmonize_fractal(t) for t in tensors]

    # 3. Optimal trio selection
    idx = golden_ratio_select(len(tensors), int(entropy_seed * 1e6))
    cluster = [tensors[i] for i in idx]

    # 4. Registration in the KB
    signature = compute_ethical_signature(cluster)
    if hasattr(kb, 'register_pattern0'):
        kb.register_pattern0(
            space_id=space_id,
            cluster=cluster,
            entropy_seed=entropy_seed,
            ethical_hash=signature,
        )
    # Attach metadata to each tensor
    for t in cluster:
        if not hasattr(t, 'metadata') or t.metadata is None:
            t.metadata = {}
        t.metadata["ethical_hash"] = signature
        t.metadata["entropy_seed"] = entropy_seed
        t.metadata["space_id"] = space_id
    return cluster

# --- STUBS for Pattern 0 integration (to be implemented in KB and FractalTensor) ---
def _stub_get_sublevels(self):
    # Returns all sublevels (nivel_9 and nivel_27) as FractalTensor if possible
    subs = []
    if hasattr(self, 'nivel_9'):
        subs.extend([FractalTensor(nivel_3=[v]) for v in self.nivel_9])
    if hasattr(self, 'nivel_27'):
        subs.extend([FractalTensor(nivel_3=[v]) for v in self.nivel_27])
    return subs

def _stub_register_pattern0(self, space_id, cluster, entropy_seed, ethical_hash):
    # Placeholder for registering a Pattern 0 cluster in the KB
    if not hasattr(self, 'pattern0_registry'):
        self.pattern0_registry = {}
    self.pattern0_registry[space_id] = {
        'cluster': cluster,
        'entropy_seed': entropy_seed,
        'ethical_hash': ethical_hash,
    }

def _stub_get_ethics(self, space_id):
    # Placeholder: return default ethical rules
    return [-1, -1, -1]

# Patch stubs onto the classes if not present

"""
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

'''
DO NOT DELETE THESE NOTES:
trinity-3 library: classes for the core of the Aurora electronic-intelligence model.
The core performs 3 fundamental operations:
1. Obtain intelligence from the relation between the dimension values of the fractal tensors and their relation to the context.
2. Store that intelligence as a knowledge base, to be used recursively to obtain the intelligence of fractal tensors.
3. Extend the intelligence to new fractal tensors and contexts based on dynamics, and return it as output to the user.

DEVELOPMENT PRINCIPLES:
1. Simplicity. The code must never rely on long chains of if/else. It has to be elegant and, wherever possible, seek recursive/fractal solutions.
2. Self-similarity. The code must ensure that all emergence and rule-learning mechanisms follow similar patterns in each of their components.
3. Triple reversibility. The transcendence, extension, and learning code must share the same logic but in the inverse direction.

Every element of the system must use the trigate as the fundamental atom of ternary logic.

Fractal tensors are the system's input. They are analyzed by the transcender, which takes tensors three at a time and performs a triple action:

1. It obtains the relation between the fractal tensors and their context.
2. It raises the base fractal tensors to a higher level.
3. ?????


The relation information obtained by the transcender passes to the extender, which reconstructs the tensors from it.

Once the tensor is synthesized into a single tensor, it passes to the extender, which extends the fractal tensors using the KB information and the synthesized tensor.

Once the cycle is complete, an integrity and coherence test of the workflow can be run. That is the job of the armonizador, which checks that the system is harmonized and the output tensors are coherent.
If not, it starts a correction (harmonization) process: a recursive trial cycle until the system is coherent:
1. First it looks for a correction of the fractal tensors.
2. If that is not possible, it looks for a correction of the relations.
3. If that is not possible, it looks for a correction of the system's values.

Fractal tensors are what give the system its intelligence. Each is made of ternary vectors of 3 dimensions that represent the relation between the dimension values and their context.
Each dimensional value represents the form, the structure, and the function of the vector.
Each dimensional value is composed of 3 trits (0, 1, None) that represent the relation between the dimension values and their context.

Each dimensional value has a double function: on one hand it represents the value of the dimension, and on the other it identifies the lower dimensional space.
Each dimensional value has its lower vector associated with it. The axioms of the lower space depend on the value of the upper dimension.

The tensor's shape is 1 3 9, where each level is a vector of 3 dimensions. Each of the dimensions represents the form, the structure, and the function of the element.


Extensive documentation to follow in: documentation/documentation.txt


'''



# === Full reversibility in InverseEvolver (hierarchical) ===
class InverseEvolver:
    # ...existing code...
    def reconstruct_fractal(self, synthesized):
        """Reconstructs three fractal tensors from a synthesized one (levels 3, 9, 27)."""
        ms_key = synthesized.nivel_3[0]
        # Deduce A, B using inverse Trigate logic (simplified example)
        A, B = self.reconstruct_vectors(ms_key) if hasattr(self, 'reconstruct_vectors') else (ms_key, ms_key)
        C = [a ^ b if a is not None and b is not None else None for a, b in zip(A, B)]
        # For higher levels, apply similar recursion if they exist
        return [FractalTensor(nivel_3=[A]), FractalTensor(nivel_3=[B]), FractalTensor(nivel_3=[C])]

# === Context-aware imputation of None (weighting fractal levels) ===
from statistics import mode

def impute_none(vec, context, tensor=None):
    """Imputes None entries using the mode of adjacent values in the context.
    `tensor` is optional; when provided, higher fractal levels also contribute candidates."""
    result = []
    for i, v in enumerate(vec):
        if v is not None:
            result.append(v)
        else:
            col = [c[i] for c in context if c[i] is not None]
            # Add values from higher levels if a tensor is available
            if tensor and hasattr(tensor, 'nivel_9') and tensor.nivel_9 and i < len(tensor.nivel_9[0]):
                col.extend([x for x in tensor.nivel_9[i] if x is not None])
            result.append(mode(col) if col else 0)
    return result

# === Centralized validation of ternary inputs ===
def validate_ternary_input(vec, expected_len=3, name="input"):
    if not isinstance(vec, (list, tuple)) or len(vec) != expected_len:
        print(f"Warning: Invalid {name}: {vec}, using default {[0]*expected_len}")
        return [0] * expected_len
    return [None if x is None else int(x) % 2 for x in vec]

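A quick standalone sketch of `validate_ternary_input` above, showing both the pass-through and fallback paths:

```python
# Standalone copy of validate_ternary_input above, for illustration.
def validate_ternary_input(vec, expected_len=3, name="input"):
    if not isinstance(vec, (list, tuple)) or len(vec) != expected_len:
        print(f"Warning: Invalid {name}: {vec}, using default {[0]*expected_len}")
        return [0] * expected_len
    return [None if x is None else int(x) % 2 for x in vec]

print(validate_ternary_input([1, 0, None]))  # [1, 0, None]
print(validate_ternary_input([1, 0]))        # wrong length -> [0, 0, 0]
```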
# === Self-similar refactoring of the Armonizador ===
class AdjustmentStep:
    def apply(self, vec, archetype, kb=None):
        raise NotImplementedError

class MicroShift(AdjustmentStep):
    def apply(self, vec, archetype, kb=None):
        # Example: fills in a value from the archetype where vec is undefined
        return [a if v is None else v for v, a in zip(vec, archetype)]

class Regrewire(AdjustmentStep):
    def apply(self, vec, archetype, kb=None):
        # Example: forces a match when at least 2 of 3 positions agree
        if sum(1 for v, a in zip(vec, archetype) if v == a) >= 2:
            return list(archetype)
        return vec

class Metatune(AdjustmentStep):
    def apply(self, vec, archetype, kb=None):
        # Example: if a kb is present, look up the closest archetype
        if kb is not None:
            matches = kb.find_archetype_by_ms(archetype)
            if matches:
                return matches[0]
        return vec

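The first two adjustment steps are pure functions of the vector and archetype, so they can be run on concrete values; this sketch copies their `apply` bodies from above:

```python
# Minimal standalone copies of the MicroShift and Regrewire steps above.
class MicroShift:
    def apply(self, vec, archetype, kb=None):
        # Fill None positions from the archetype.
        return [a if v is None else v for v, a in zip(vec, archetype)]

class Regrewire:
    def apply(self, vec, archetype, kb=None):
        # Snap to the archetype when at least 2 of 3 positions already agree.
        if sum(1 for v, a in zip(vec, archetype) if v == a) >= 2:
            return list(archetype)
        return vec

arch = [1, 0, 1]
print(MicroShift().apply([1, None, None], arch))  # [1, 0, 1]
print(Regrewire().apply([1, 0, 0], arch))         # two matches -> [1, 0, 1]
print(Regrewire().apply([0, 1, 0], arch))         # no majority -> [0, 1, 0]
```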
# === Selection heuristics: Golden Ratio Skip and Fibonacci Stepping ===
import math

def golden_ratio_skip_indices(N, k, trios=3):
    """Returns a list of indices forming a trio using golden-ratio skips."""
    phi = (1 + math.sqrt(5)) / 2
    skip = max(1, int(N / phi))
    indices = []
    idx = k
    for _ in range(trios):
        indices.append(idx % N)
        idx = (idx + skip) % N
    return indices

def fibonacci(n):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fibonacci_stepping_indices(N, k, trios=3, start_step=0):
    """Returns a list of indices forming a trio using Fibonacci steps."""
    indices = []
    idx = k
    for i in range(start_step, start_step + trios):
        step = fibonacci(i)
        indices.append(idx % N)
        idx = (idx + step) % N
    return indices

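The two stepping heuristics can be compared directly; this standalone sketch copies the functions above and prints the trios they pick from a pool of 10:

```python
# Standalone copies of the stepping heuristics above, with concrete indices.
import math

def golden_ratio_skip_indices(N, k, trios=3):
    phi = (1 + math.sqrt(5)) / 2
    skip = max(1, int(N / phi))       # for N=10: int(6.18) = 6
    indices, idx = [], k
    for _ in range(trios):
        indices.append(idx % N)
        idx = (idx + skip) % N
    return indices

def fibonacci(n):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fibonacci_stepping_indices(N, k, trios=3, start_step=0):
    indices, idx = [], k
    for i in range(start_step, start_step + trios):
        indices.append(idx % N)
        idx = (idx + fibonacci(i)) % N
    return indices

print(golden_ratio_skip_indices(10, 0))   # [0, 6, 2]
print(fibonacci_stepping_indices(10, 0))  # steps 1, 1, 2 -> [0, 1, 2]
```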
# === Usage example: trio formation with a heuristic ===
def formar_trio_golden(tensores, k):
    N = len(tensores)
    idxs = golden_ratio_skip_indices(N, k)
    return [tensores[i] for i in idxs]

def formar_trio_fibonacci(tensores, k, start_step=0):
    N = len(tensores)
    idxs = fibonacci_stepping_indices(N, k, start_step=start_step)
    return [tensores[i] for i in idxs]
# --- Global dependencies ---
import numpy as np
# ===================== AURORA FUNCTIONAL TRIAGE: COMPOSITION AND REVERSIBILITY =====================

import operator


# === HOT-FIX: robust validation utilities for vectors and function sequences ===
def normalize_ternary_vector(vec, default=(0, 0, 0)):
    """Normalizes a vector to a ternary vector of length 3."""
    if not isinstance(vec, (list, tuple)):
        return list(default)
    return [
        None if x is None else int(x) if x in (0, 1) else 0
        for x in list(vec)[:3]
    ] + [0] * (3 - len(vec))

def validate_function_sequence(M, allowed_functions, max_len=2):
    """Validates that M is a list of lists of allowed functions."""
    if not isinstance(M, (list, tuple)) or len(M) != 3:
        return [[f_id] for _ in range(3)]
    return [
        list(seq)[:max_len] if isinstance(seq, (list, tuple)) and all(f in allowed_functions for f in seq) else [f_id]
        for seq in M[:3]
    ] + [[f_id]] * (3 - len(M))

def aurora_apply_sequence(val, sequence):
    """Applies a sequence of functions to a value."""
    for func in sequence:
        val = func(val)
    return val

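The normalization and sequence-application helpers are side-effect-free, so their behavior can be shown directly; this sketch copies them from above:

```python
# Standalone copies of the validation helpers above.
def normalize_ternary_vector(vec, default=(0, 0, 0)):
    if not isinstance(vec, (list, tuple)):
        return list(default)
    return [
        None if x is None else int(x) if x in (0, 1) else 0
        for x in list(vec)[:3]
    ] + [0] * (3 - len(vec))

def aurora_apply_sequence(val, sequence):
    for func in sequence:
        val = func(val)
    return val

print(normalize_ternary_vector([1, None]))  # padded -> [1, None, 0]
print(normalize_ternary_vector("bad"))      # not a list -> [0, 0, 0]
print(aurora_apply_sequence(0, [lambda x: 1 - x, lambda x: 1 - x]))  # 0
```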
def aurora_triage_inferencia(A, B, M):
    """Inference: applies the composition M to A and/or B and returns the emergent result."""
    logger.info("Starting functional inference", extra={'stage': 'inferencia', 'ambiguity': 0})
    allowed_functions = [f_not, f_inc, f_id]
    A = normalize_ternary_vector(A)
    B = normalize_ternary_vector(B)
    M = validate_function_sequence(M, allowed_functions)
    R = []
    for i in range(3):
        rA = aurora_apply_sequence(A[i], M[i])
        rB = aurora_apply_sequence(B[i], M[i])
        if rA is not None and rB is not None:
            R.append(rA + rB)
        else:
            R.append(0)
    logger.info(f"Inference complete: R={R}", extra={'stage': 'inferencia', 'ambiguity': R.count(None)})
    return R

def aurora_triage_aprendizaje(A, B, R, funciones_permitidas, max_len=2):
    """Learning: searches for a per-bit composition of functions that, applied to A and B, yields R."""
    logger.info("Starting functional learning", extra={'stage': 'aprendizaje', 'ambiguity': 0})
    import itertools
    A = normalize_ternary_vector(A)
    B = normalize_ternary_vector(B)
    R = normalize_ternary_vector(R)
    M = []
    for i in range(3):
        found = False
        for l in range(1, max_len + 1):
            for seq in itertools.product(funciones_permitidas, repeat=l):
                rA = aurora_apply_sequence(A[i], seq)
                rB = aurora_apply_sequence(B[i], seq)
                if rA is not None and rB is not None and rA + rB == R[i]:
                    M.append(list(seq))
                    found = True
                    break
            if found:
                break
        if not found:
            M.append([f_id])
            logger.warning(f"No sequence found for bit {i}, using identity", extra={'stage': 'aprendizaje', 'ambiguity': 1})
    logger.info(f"Learning complete: M={M}", extra={'stage': 'aprendizaje', 'ambiguity': sum(len(m) for m in M)})
    return M

def aurora_triage_deduccion(M, R, known, known_is_A=True):
    """Deduction: given M, R, and A (or B), deduces B (or A) by applying the inverses."""
    logger.info("Starting functional deduction", extra={'stage': 'deduccion', 'ambiguity': 0})
    allowed_functions = [f_not, f_inc, f_id]
    R = normalize_ternary_vector(R)
    known = normalize_ternary_vector(known)
    M = validate_function_sequence(M, allowed_functions)
    deduced = []
    for i in range(3):
        val = R[i] - aurora_apply_sequence(known[i], M[i]) if R[i] is not None and known[i] is not None else 0
        for func in reversed(M[i]):
            if hasattr(func, 'inverse'):
                val = func.inverse(val)
            else:
                logger.warning(f"No inverse for function at bit {i}, assuming identity", extra={'stage': 'deduccion', 'ambiguity': 1})
        deduced.append(val if val in (0, 1, None) else 0)
    logger.info(f"Deduction complete: {deduced}", extra={'stage': 'deduccion', 'ambiguity': deduced.count(None)})
    return deduced

# Example of simple ternary functions with an inverse
def f_not(x):
    return 1 - x if x in (0, 1) else 0
def f_not_inv(x):
    return 1 - x if x in (0, 1) else 0
f_not.inverse = f_not_inv

def f_inc(x):
    return (x + 1) % 2 if x in (0, 1) else 0
def f_inc_inv(x):
    return (x - 1) % 2 if x in (0, 1) else 0
f_inc.inverse = f_inc_inv

def f_id(x):
    return x
f_id.inverse = f_id

# Experimental usage example:
# A = [1, 0, 1]
# B = [0, 1, 1]
# M = [[f_not, f_inc], [f_inc], [f_id]]
# R = aurora_triage_inferencia(A, B, M)
# M_learned = aurora_triage_aprendizaje(A, B, R, [f_not, f_inc, f_id])
# B_deduced = aurora_triage_deduccion(M, R, A, known_is_A=True)

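The `inverse` attribute convention above is what makes the deduction pass reversible; this standalone sketch copies the two non-trivial functions and checks the round trips on {0, 1}:

```python
# Standalone check that each ternary function's inverse undoes it.
def f_not(x):
    return 1 - x if x in (0, 1) else 0
f_not.inverse = f_not  # NOT is its own inverse on {0, 1}

def f_inc(x):
    return (x + 1) % 2 if x in (0, 1) else 0
def f_inc_inv(x):
    return (x - 1) % 2 if x in (0, 1) else 0
f_inc.inverse = f_inc_inv

for x in (0, 1):
    assert f_not.inverse(f_not(x)) == x
    assert f_inc.inverse(f_inc(x)) == x
print("round-trips hold")
```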

# ===================== SELF-HEALING: HOT-FIX, REAXIOMATIZATION, AND TERNARY COUNCIL =====================

# Mini-test for the ExpertRelator tuple return
def test_relator_returns_tuple():
    kb = FractalKnowledgeBase()
    ext = Extender(kb)
    ok, rel = ext.relator.contextualizar([1, 0, 1], 'default')
    assert isinstance(ok, bool)
    assert ok is False and rel is None  # empty because the KB is empty
# ===============================================================================
# GROUPED IMPORTS
# ===============================================================================
import random
import time
import warnings
import copy
import math
from typing import List, Dict, Any, Tuple, Optional

# === NOTE ON TESTS AND CONCURRENCY ===
# For real concurrency, protect the KB with locks or use a transactional database.
# Add unit-test cases (e.g. with PyTest) for each main class.
# ===============================================================================
# AURORA TRINITY-3 - COMPLETE, REFACTORED CANONICAL ARCHITECTURE
################################################################################
# AURORA – Armonizador module ##################################################
################################################################################
"""
Armonizador
===========
Self-similar complement for Aurora Trinity-3 that tunes
coherence and corrects ambiguities at three levels:

1. *Vector* – micro-adjusts the Ss/Ms/MetaM coordinates.
2. *Rule*   – re-routes entries in the LUT / Knowledge-Base.
3. *Value*  – tunes global parameters (threshold, weights…).

The module is intended as a *post-hook* for the `Extender`;
call it after each reconstruction to guarantee consonance.
"""
from typing import List, Tuple, Dict, Any, Optional
import itertools
import warnings
import logging

# Central logger for Aurora
logger = logging.getLogger("aurora.arq")
if not logger.hasHandlers():
    handler = logging.StreamHandler()
    formatter = logging.Formatter('[%(levelname)s][%(stage)s][ambig=%(ambiguity)s] %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

Vector = List[Optional[int]]  # Ternary value: 0 | 1 | None

class AmbiguityScore(int):
    """Int subclass → allows attaching metadata if ever needed."""
    pass

class Armonizador:
    """Hierarchical tuner that applies **MicroShift → RegRewire → MetaTune**."""

    def __init__(self, knowledge_base=None, *,
                 tau_1: int = 1, tau_2: int = 2, tau_3: int = 3):
        self.kb = knowledge_base  # May be None if only MicroShift is used
        self.tau_1, self.tau_2, self.tau_3 = tau_1, tau_2, tau_3

    @staticmethod
    def ambiguity_score(t: Vector, a: Vector) -> AmbiguityScore:
        """Sum of ternary differences, counting `None` as a difference."""
        if len(t) != len(a):
            raise ValueError("Vector size mismatch in ambiguity check")
        score = 0
        for x, y in zip(t, a):
            if x is None or y is None:
                score += 1
            elif x != y:
                score += 1
        return AmbiguityScore(score)

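The ambiguity metric is the pivot of all three tuning stages: every position that is `None` on either side, or that mismatches, adds 1. A standalone sketch of the same computation:

```python
# Standalone sketch of the ambiguity metric above.
def ambiguity_score(t, a):
    if len(t) != len(a):
        raise ValueError("Vector size mismatch in ambiguity check")
    # Each None (on either side) or mismatch contributes 1.
    return sum(1 for x, y in zip(t, a) if x is None or y is None or x != y)

print(ambiguity_score([1, 0, 1], [1, 0, 1]))     # 0
print(ambiguity_score([1, None, 0], [1, 0, 1]))  # 2
```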
    _neighbor_mask_cache = {}
    def _microshift(self, vec: Vector, archetype: Vector) -> Vector:
        """
        Recursive microshift with smart pruning and structured logging.
        Explores ternary neighbors of vec, seeking the one with the least
        ambiguity with respect to archetype. Early exit when score == 0.
        Uses a set to avoid repeats; caches neighbor masks by length and
        only explores 0/1 substitutions where the vector has None.
        """
        seen = set()
        best = vec
        best_score = self.ambiguity_score(vec, archetype)

        def neighbor_masks(length):
            if length not in self._neighbor_mask_cache:
                masks = []
                for i in range(length):
                    mask = [0] * length
                    mask[i] = 1
                    masks.append(mask)
                self._neighbor_mask_cache[length] = masks
            return self._neighbor_mask_cache[length]

        def dfs(v):
            nonlocal best, best_score
            v_tuple = tuple(v)
            if v_tuple in seen:
                return
            seen.add(v_tuple)
            score = self.ambiguity_score(v, archetype)
            logger.debug(f"Neighbor: {v} | Score: {score}", extra={'stage': 'microshift', 'ambiguity': score})
            if score < best_score:
                best, best_score = v.copy(), score
            if best_score == 0:
                return
            # Only explore substitutions where there is a None
            for i in range(len(v)):
                if v[i] is not None:
                    continue
                for delta in (-1, 1):
                    nv = v.copy()
                    nv[i] = 0 if delta == -1 else 1
                    dfs(nv)

        dfs(list(vec))
        logger.info(f"Final microshift: {best} | Score: {best_score}", extra={'stage': 'microshift', 'ambiguity': best_score})
        return best

    def _regrewire(self, vec: Vector, space_id: str = "default") -> Vector:
        """Finds all candidate archetypes and selects the closest by ambiguity (nivel_3[0])."""
        if self.kb is None:
            return vec
        matches = self.kb._get_space(space_id).find_archetype_by_ms(vec)
        if matches:
            best_entry = min(matches, key=lambda e: self.ambiguity_score(vec, e.nivel_3[0]))
            return best_entry.nivel_3[0]
        return vec

    def _metatune(self, vec: Vector) -> Vector:
        """Coarse adjustment: if ambiguity persists, apply a φ heuristic."""
        phi = (1 + 5 ** 0.5) / 2
        tuned = []
        for v in vec:
            if v is None:
                tuned.append(None)
            else:
                tuned.append(int(round(v / phi)) % 2)
        return tuned

    def harmonize(self, tensor: Vector, *, archetype: Vector | None = None,
                  space_id: str = "default") -> Dict[str, Any]:
        """Full tuning pass. Returns a dict with tracing info."""
        if archetype is None:
            if self.kb is not None:
                entries = self.kb._get_space(space_id).find_archetype_by_ms(tensor)
                if entries:
                    if isinstance(entries, list):
                        archetype = entries[0].nivel_3[0]
                    elif hasattr(entries, 'nivel_3'):
                        archetype = entries.nivel_3[0]
            archetype = archetype or tensor

        vec_step1 = self._microshift(tensor, archetype)
        score1 = self.ambiguity_score(vec_step1, archetype)
        if score1 <= self.tau_1:
            return {
                "output": vec_step1,
                "stage": "vector",
                "ambiguity": int(score1),
            }

        vec_step2 = self._regrewire(vec_step1, space_id=space_id)
        score2 = self.ambiguity_score(vec_step2, archetype)
        if score2 <= self.tau_2:
            return {
                "output": vec_step2,
                "stage": "regla",
                "ambiguity": int(score2),
            }

        vec_step3 = self._metatune(vec_step2)
        score3 = self.ambiguity_score(vec_step3, archetype)
        if score3 <= self.tau_3:
            stage = "valor"
        else:
            stage = "falla_critica"
            warnings.warn("Armonizador: critical failure – ambiguity could not be reduced")
        return {
            "output": vec_step3,
            "stage": stage,
            "ambiguity": int(score3),
        }
'''
Very important:

Principles that must be applied in developing this library:

Simplicity: the code must never rely on long chains of if/else. It has to be elegant and, wherever possible, seek recursive solutions.
Self-similarity: the code must ensure that all emergence and rule-learning mechanisms follow similar patterns in each of their components.
Inverse solution: the transcendence and extension code must share the same logic but in the inverse direction.

'''




# ===============================================================================
# LEVEL 1: FUNDAMENTAL LOGIC
# ===============================================================================

class TernaryLogic:
    """
    Aurora ternary logic with correct handling of uncertainty.
    Implements Computational Honesty by propagating NULL appropriately.
    """
    NULL = None  # Canonical representation of NULL in Aurora

    @staticmethod
    def ternary_xor(a: Optional[int], b: Optional[int]) -> Optional[int]:
        """Ternary XOR with NULL propagation."""
        if a is TernaryLogic.NULL or b is TernaryLogic.NULL:
            return TernaryLogic.NULL
        return a ^ b

    @staticmethod
    def ternary_xnor(a: Optional[int], b: Optional[int]) -> Optional[int]:
        """Ternary XNOR with NULL propagation."""
        if a is TernaryLogic.NULL or b is TernaryLogic.NULL:
            return TernaryLogic.NULL
        return 1 if a == b else 0

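NULL propagation is the whole contract of `TernaryLogic`: any uncertain operand makes the result uncertain. A standalone sketch of `ternary_xor` showing both paths:

```python
# Standalone sketch of NULL propagation in the ternary gates above.
from typing import Optional

def ternary_xor(a: Optional[int], b: Optional[int]) -> Optional[int]:
    if a is None or b is None:
        return None  # uncertainty propagates
    return a ^ b

print(ternary_xor(1, 0))     # 1
print(ternary_xor(1, None))  # None
```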
# ===============================================================================
# LEVEL 2: BASIC PROCESSING COMPONENTS
# ===============================================================================

# Initialize the LUTs once when the script is loaded
# Trigate is initialized further down in the file

654
+ class Transcender:
655
+ def relate_vectors(self, A: list, B: list, context: dict = None) -> list:
656
+ """
657
+ Calcula un vector de relación Aurora-native entre A y B, incorporando ventana de contexto y relaciones cruzadas si se proveen.
658
+ """
659
+ if len(A) != len(B):
660
+ return [0, 0, 0]
661
+ diff_vector = []
662
+ for i in range(len(A)):
663
+ a_val = A[i] if A[i] is not None else 0
664
+ b_val = B[i] if B[i] is not None else 0
665
+ diff = b_val - a_val
666
+ # Normalize to ternary: 1 if diff > 0, 0 if diff == 0, None if diff < 0
667
+ if diff > 0:
668
+ diff_vector.append(1)
669
+ elif diff == 0:
670
+ diff_vector.append(0)
671
+ else:
672
+ diff_vector.append(None)
673
+ # --- Aurora-native: context window and cross relations ---
+ # If context contains 'prev' and 'next', add cross relations
675
+ if context and 'prev' in context and 'next' in context:
676
+ v_prev = context['prev']
677
+ v_next = context['next']
678
+ rel_cross = []
679
+ for vp, vn in zip(v_prev, v_next):
680
+ vp_val = vp if vp is not None else 0
681
+ vn_val = vn if vn is not None else 0
682
+ diff_cross = vp_val - vn_val
683
+ if diff_cross > 0:
684
+ rel_cross.append(1)
685
+ elif diff_cross == 0:
686
+ rel_cross.append(0)
687
+ else:
688
+ rel_cross.append(None)
689
+ # Concatenate: [diff_vector, rel_cross, A, B]
690
+ return list(diff_vector) + list(rel_cross) + list(A) + list(B)
691
+ return diff_vector
692
+ """
693
+ Synthesis component that implements the hierarchical synthesis
+ of complete Fractal Tensors.
695
+ """
696
+ def __init__(self, fractal_vector: Optional[List[int]] = None):
697
+ self.trigate = Trigate()
698
+ # Kept in case an older test inspects it,
+ # but NOT required for operation.
700
+ self.seed_vector = fractal_vector
701
+
702
+ def compute_vector_trio(self, A: List[int], B: List[int], C: List[int]) -> Dict[str, Any]:
703
+ """Procesa un trío de vectores simples (operación base)."""
704
+ M_AB, _ = self.trigate.synthesize(A, B)
705
+ M_BC, _ = self.trigate.synthesize(B, C)
706
+ M_CA, _ = self.trigate.synthesize(C, A)
707
+ M_emergent, _ = self.trigate.synthesize(M_AB, M_BC)
708
+ M_intermediate, _ = self.trigate.synthesize(M_emergent, M_CA)
709
+ MetaM = [TernaryLogic.ternary_xor(a, b) for a, b in zip(M_intermediate, M_emergent)]
710
+ return {'M_emergent': M_emergent, 'MetaM': MetaM}
711
+
712
+ # ------------------------------------------------------------------
713
+ # “DEEP LEARNING” MODE (compatibility with legacy suites)
714
+ # ------------------------------------------------------------------
715
+ def deep_learning(
716
+ self,
717
+ A: List[int],
718
+ B: List[int],
719
+ C: List[int],
720
+ M_emergent: Optional[List[int]] = None
721
+ ) -> Dict[str, Any]:
722
+ """
723
+ • Computes M_emergent and MetaM as required by the Trinity-3 model.
+ • Generates R_hipotesis = Trigate.infer(A, B, M_emergent).
+ • Returns a dictionary with the keys the integration tests expect.
726
+ """
727
+ trio = self.compute_vector_trio(A, B, C)
728
+
729
+ # If the caller does not supply M_emergent, use the computed one.
730
+ if M_emergent is None:
731
+ M_emergent = trio["M_emergent"]
732
+
733
+ R_hipotesis = self.trigate.infer(A, B, M_emergent)
734
+
735
+ return {
736
+ "M_emergent": M_emergent,
737
+ "MetaM": trio["MetaM"],
738
+ "R_hipotesis": R_hipotesis,
739
+ }
740
+
741
+
742
+
743
+ def compute_full_fractal(self, A: 'FractalTensor', B: 'FractalTensor', C: 'FractalTensor') -> 'FractalTensor':
744
+ """
745
+ Synthesizes three fractal tensors into one, hierarchically.
+ A valid input root takes priority over the synthesized one.
747
+ """
748
+ out = FractalTensor.neutral()
749
+
750
+ def synthesize_trio(vectors: list) -> list:
751
+ # Only use first 3 elements of each vector
752
+ while len(vectors) < 3:
753
+ vectors.append([0, 0, 0])
754
+ trimmed = [v[:3] if isinstance(v, (list, tuple)) else [0,0,0] for v in vectors[:3]]
755
+ r = self.compute_vector_trio(*trimmed)
756
+ m_emergent = r.get('M_emergent', [0, 0, 0])
757
+ return [bit if bit is not None else 0 for bit in m_emergent[:3]]
758
+
759
+ inter_from_27 = []
760
+ for i in range(27):
761
+ context = {'prev': A.nivel_27[i - 1] if i > 0 else [0,0,0], 'next': A.nivel_27[i + 1] if i < 26 else [0,0,0]}
762
+ enriched_a = self.relate_vectors(A.nivel_27[i], B.nivel_27[i], context)[:3]
763
+ enriched_b = self.relate_vectors(B.nivel_27[i], C.nivel_27[i], context)[:3]
764
+ enriched_c = self.relate_vectors(C.nivel_27[i], A.nivel_27[i], context)[:3]
765
+ inter_from_27.append(synthesize_trio([enriched_a, enriched_b, enriched_c]))
766
+ out.nivel_27 = inter_from_27
767
+
768
+ inter_from_9 = [synthesize_trio(inter_from_27[i:i+3]) for i in range(0, 27, 3)]
769
+ out.nivel_9 = inter_from_9
770
+ out.nivel_3 = [synthesize_trio(inter_from_9[i:i+3]) for i in range(0, 9, 3)]
771
+
772
+ # Ensure all nivel_3 vectors are length 3
773
+ out.nivel_3 = [v[:3] if isinstance(v, (list, tuple)) else [0,0,0] for v in out.nivel_3]
774
+
775
+ input_roots = [t.nivel_3[0] for t in (A, B, C) if hasattr(t, 'nivel_3') and t.nivel_3 and t.nivel_3[0] and len(t.nivel_3[0]) == 3]
776
+ valid_roots = [r for r in input_roots if all(bit is not None for bit in r)]
777
+ if valid_roots:
778
+ final_root = [0, 0, 0]
779
+ for i in range(3):
780
+ votes = [r[i] for r in valid_roots]
781
+ final_root[i] = 1 if votes.count(1) > votes.count(0) else 0
782
+ out.nivel_3[0] = final_root
783
+ out.Ms = final_root
784
+ return out
785
+
786
+ # ===============================================================================
787
+ # LEVEL 3: DATA AND KNOWLEDGE STRUCTURES
788
+ # ===============================================================================
789
+
790
+ class FractalTensor:
791
+ """
792
+ Represents a fractal tensor with 3 depth levels (3, 9, 27).
793
+ """
794
+
795
+
796
+
797
+ def __init__(
798
+ self,
799
+ nivel_3=None,
800
+ nivel_9=None,
801
+ nivel_27=None,
802
+ *,
803
+ Ms=None,
804
+ Ss=None,
805
+ dMs=None
806
+ ):
807
+ def norm3(v):
808
+ # Normalize a vector to length 3, fill with 0 if needed
809
+ if not isinstance(v, (list, tuple)):
810
+ return [0, 0, 0]
811
+ return [(0 if x is None else int(x) if x in (0, 1) else 0) for x in list(v)[:3]] + [0] * (3 - len(v))
812
+
813
+ def expand_nivel_3(n3):
814
+ # Always returns a list of 3 vectors of length 3
815
+ if not isinstance(n3, (list, tuple)) or len(n3) == 0:
816
+ return [[0, 0, 0] for _ in range(3)]
817
+ if len(n3) == 1 and isinstance(n3[0], (list, tuple)) and len(n3[0]) == 3:
818
+ # If only one vector, repeat it
819
+ return [list(n3[0]) for _ in range(3)]
820
+ return [norm3(v) for v in list(n3)[:3]] + [[0, 0, 0]] * (3 - len(n3))
821
+
822
+ def expand_nivel_9(n9):
823
+ # Always returns a list of 9 vectors of length 3
824
+ if not isinstance(n9, (list, tuple)) or len(n9) == 0:
825
+ return [[0, 0, 0] for _ in range(9)]
826
+ # If only one vector, repeat it
827
+ if len(n9) == 1 and isinstance(n9[0], (list, tuple)) and len(n9[0]) == 3:
828
+ return [list(n9[0]) for _ in range(9)]
829
+ return [norm3(v) for v in list(n9)[:9]] + [[0, 0, 0]] * (9 - len(n9))
830
+
831
+ def expand_nivel_27(n27):
832
+ # Always returns a list of 27 vectors of length 3
833
+ if not isinstance(n27, (list, tuple)) or len(n27) == 0:
834
+ return [[0, 0, 0] for _ in range(27)]
835
+ if len(n27) == 1 and isinstance(n27[0], (list, tuple)) and len(n27[0]) == 3:
836
+ return [list(n27[0]) for _ in range(27)]
837
+ return [norm3(v) for v in list(n27)[:27]] + [[0, 0, 0]] * (27 - len(n27))
838
+
839
+ # If only nivel_3 is provided, expand to all levels
840
+ if nivel_3 is not None and (nivel_9 is None and nivel_27 is None):
841
+ n3 = expand_nivel_3(nivel_3)
842
+ n9 = [list(n3[i // 3]) for i in range(9)]
843
+ n27 = [list(n3[i // 9]) for i in range(27)]
844
+ elif nivel_9 is not None and nivel_27 is None:
845
+ n9 = expand_nivel_9(nivel_9)
846
+ n3 = [list(n9[i * 3]) for i in range(3)]
847
+ n27 = [list(n9[i // 3]) for i in range(27)]
848
+ elif nivel_27 is not None:
849
+ n27 = expand_nivel_27(nivel_27)
850
+ n9 = [list(n27[i * 3]) for i in range(9)]
851
+ n3 = [list(n27[i * 9]) for i in range(3)]
852
+ else:
853
+ n3 = expand_nivel_3(nivel_3)
854
+ n9 = expand_nivel_9(nivel_9)
855
+ n27 = expand_nivel_27(nivel_27)
856
+
857
+ self.nivel_3 = n3
858
+ self.nivel_9 = n9
859
+ self.nivel_27 = n27
860
+
861
+ self.Ms = Ms if Ms is not None else (self.nivel_3[0] if self.nivel_3 and isinstance(self.nivel_3[0], (list, tuple)) and len(self.nivel_3[0]) == 3 else [0,0,0])
862
+ self.Ss = Ss
863
+ self.dMs = dMs
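When only `nivel_3` is provided, the constructor broadcasts it downward using integer division over the index; the expansion can be checked directly:

```python
n3 = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
n9 = [list(n3[i // 3]) for i in range(9)]    # each root vector repeated 3 times
n27 = [list(n3[i // 9]) for i in range(27)]  # each root vector repeated 9 times

print(n9[0:3])    # -> [[0, 0, 1], [0, 0, 1], [0, 0, 1]]
print(n27[9:12])  # -> [[1, 0, 0], [1, 0, 0], [1, 0, 0]]
```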
864
+
865
+ @staticmethod
866
+ def random():
867
+ """Crea un FractalTensor aleatorio."""
868
+ rand_vec = lambda: [random.choice([0, 1]) for _ in range(3)]
869
+ return FractalTensor(
870
+ nivel_3=[rand_vec() for _ in range(3)],
871
+ nivel_9=[rand_vec() for _ in range(9)],
872
+ nivel_27=[rand_vec() for _ in range(27)]
873
+ )
874
+
875
+ @staticmethod
876
+ def neutral():
877
+ """Crea un FractalTensor neutro (ceros)."""
878
+ zero_vec = lambda: [0, 0, 0]
879
+ return FractalTensor(
880
+ nivel_3=[zero_vec() for _ in range(3)],
881
+ nivel_9=[zero_vec() for _ in range(9)],
882
+ nivel_27=[zero_vec() for _ in range(27)]
883
+ )
884
+
885
+ def __repr__(self):
886
+ def short(vs):
887
+ return vs[:2] + ['...'] if len(vs) > 2 else vs
888
+ return (f"FT(root={self.nivel_3}, "
889
+ f"mid={short(self.nivel_9)}, "
890
+ f"detail={short(self.nivel_27)})")
891
+
892
+ # ===============================================================================
893
+ # LEVEL 4: ABSTRACTION AND LEARNING ENGINE (EVOLVER)
894
+ # ===============================================================================
895
+
896
+ class Evolver:
897
+ """
898
+ Unified fractal-vision engine for Archetypes, Dynamics, and Relators.
899
+ """
900
+ def __init__(self):
901
+ self.base_transcender = Transcender()
902
+
903
+ def _perform_full_tensor_synthesis(self, tensors: List["FractalTensor"]) -> "FractalTensor":
904
+ """
905
+ Fractal synthesis engine: reduces a list of tensors to a single one.
906
+ """
907
+ if not tensors:
908
+ return FractalTensor.neutral()
909
+
910
+ current_level_tensors = list(tensors)
911
+ while len(current_level_tensors) > 1:
912
+ next_level_tensors = []
913
+ for i in range(0, len(current_level_tensors), 3):
914
+ trio = current_level_tensors[i:i+3]
915
+ while len(trio) < 3:
916
+ trio.append(FractalTensor.neutral())
917
+ synthesized_tensor = self.base_transcender.compute_full_fractal(*trio)
918
+ next_level_tensors.append(synthesized_tensor)
919
+ current_level_tensors = next_level_tensors
920
+
921
+ return current_level_tensors[0]
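The reduction loop above generalizes to any ternary combiner: pad each trio with a neutral element and iterate until one value remains. A sketch with integer addition standing in for `compute_full_fractal`:

```python
def reduce_in_trios(items, combine, neutral):
    # Repeatedly group into trios (padded with the neutral element),
    # combine each trio, and recurse until one result remains.
    level = list(items)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 3):
            trio = level[i:i + 3]
            trio += [neutral] * (3 - len(trio))
            nxt.append(combine(*trio))
        level = nxt
    return level[0] if level else neutral

print(reduce_in_trios([1, 2, 3, 4, 5], lambda a, b, c: a + b + c, 0))  # -> 15
```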
922
+
923
+ def compute_fractal_archetype(self, tensor_family: List["FractalTensor"]) -> "FractalTensor":
924
+ """Perspectiva de ARQUETIPO: Destila la esencia de una familia de conceptos."""
925
+ if len(tensor_family) < 2:
926
+ warnings.warn("Se requieren al menos 2 tensores para computar un arquetipo.")
927
+ return FractalTensor.neutral() if not tensor_family else tensor_family[0]
928
+ return self._perform_full_tensor_synthesis(tensor_family)
929
+
930
+ def analyze_fractal_dynamics(
931
+ self,
932
+ temporal_sequence: List["FractalTensor"]
933
+ ) -> "FractalTensor":
934
+ """
935
+ DYNAMICS perspective: synthesizes the evolution pattern of a sequence
+ and computes the logical gradient dMs = Ms_fin XOR Ms_ini.
937
+ """
938
+ if len(temporal_sequence) < 2:
939
+ warnings.warn(
940
+ "Se requiere una secuencia de al menos 2 tensores para analizar dinámicas."
941
+ )
942
+ return (
943
+ FractalTensor.neutral()
944
+ if not temporal_sequence
945
+ else temporal_sequence[0]
946
+ )
947
+
948
+ # ---------- sequence synthesis ----------
949
+ tensor_dyn = self._perform_full_tensor_synthesis(temporal_sequence)
950
+
951
+ # ---------- ➊ new: compute and store dMs ----------
+ Ms_ini = temporal_sequence[0].Ms or temporal_sequence[0].nivel_3[0]
+ Ms_fin = temporal_sequence[-1].Ms or temporal_sequence[-1].nivel_3[0]
+ # NULL-safe XOR: plain `a ^ b` raises TypeError on None components
+ dMs = [TernaryLogic.ternary_xor(a, b) for a, b in zip(Ms_ini, Ms_fin)]
+
+ tensor_dyn.dMs = dMs # temporal gradient
+ tensor_dyn.Ms = Ms_fin # most recent Ms
+ tensor_dyn.nivel_3[0] = Ms_fin # keep the root coherent
959
+
960
+ return tensor_dyn
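The temporal gradient reduces to a component-wise XOR of the first and last root vectors; a NULL-safe version (an assumption here, since `None ^ int` raises in plain Python) can be sketched as:

```python
def ternary_xor(a, b):
    # NULL (None) in either operand propagates to the result.
    return None if a is None or b is None else a ^ b

Ms_ini, Ms_fin = [0, 1, 0], [1, 1, None]
dMs = [ternary_xor(a, b) for a, b in zip(Ms_ini, Ms_fin)]
print(dMs)  # -> [1, 0, None]
```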
961
+
962
+ def analyze_fractal_relations(self, contextual_cluster: List["FractalTensor"]) -> "FractalTensor":
963
+ """Perspectiva de RELATOR: Obtiene el mapa conceptual de un clúster."""
964
+ if len(contextual_cluster) < 2:
965
+ warnings.warn("Se requieren al menos 2 tensores para el análisis relacional.")
966
+ return FractalTensor.neutral() if not contextual_cluster else contextual_cluster[0]
967
+ return self._perform_full_tensor_synthesis(contextual_cluster)
968
+
969
+ @staticmethod
970
+ def fractal_relate(tensor_group: List["FractalTensor"], level: int = 27) -> Optional[List[List[Optional[int]]]]:
971
+ """
972
+ Computes a relational signature by majority vote across a group of tensors.
973
+ """
974
+ if not tensor_group:
975
+ return None
976
+
977
+ # Select the requested tensor level
978
+ try:
979
+ dim_vectors = [getattr(t, f'nivel_{level}') for t in tensor_group]
980
+ except AttributeError:
981
+ raise ValueError(f"El nivel {level} no es válido. Debe ser 3, 9 o 27.")
982
+
983
+ num_vectors = len(dim_vectors[0])
984
+ signature = []
985
+ for pos in range(num_vectors):
986
+ bit_result = []
987
+ for bit in range(3): # Assumes 3-bit vectors
988
+ bit_vals = [t[pos][bit] for t in dim_vectors if t and t[pos] and t[pos][bit] is not None]
989
+ if not bit_vals:
990
+ bit_result.append(None)
991
+ continue
992
+
993
+ # Ternary majority logic
994
+ count_1 = bit_vals.count(1)
995
+ count_0 = bit_vals.count(0)
996
+ if count_1 > count_0: bit_result.append(1)
997
+ elif count_0 > count_1: bit_result.append(0)
998
+ else: bit_result.append(None)
999
+ signature.append(bit_result)
1000
+ return signature
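The per-bit ternary majority used for the signature can be isolated; ties and all-NULL columns both yield None:

```python
def ternary_majority(bits):
    # Majority over {0, 1, None}: None votes are ignored; a tie (or no
    # votes at all) yields None, preserving uncertainty.
    vals = [b for b in bits if b is not None]
    if not vals:
        return None
    ones, zeros = vals.count(1), vals.count(0)
    return 1 if ones > zeros else 0 if zeros > ones else None

print(ternary_majority([1, 1, 0]))     # -> 1
print(ternary_majority([1, 0, None]))  # -> None (tie)
```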
1001
+
1002
+ # ===============================================================================
1003
+ # LEVEL 5: KNOWLEDGE BASE AND EXTENSION
1004
+ # ===============================================================================
1005
+
1006
+ class _SingleUniverseKB:
1007
+ """Gestiona el conocimiento de un único espacio lógico (universo)."""
1008
+ def __init__(self):
1009
+ self.archetypes = []
1010
+ self.ms_index = {}
1011
+ self.name_index = {}
1012
+ self.coherence_violations = 0
1013
+ self.ss_index = {}
1014
+ self.models = {} # New: generic models
1015
+
1016
+ def store_model(self, model_name: str, model_data: dict):
1017
+ """Almacena un modelo de decisión genérico en este universo."""
1018
+ self.models[model_name] = model_data
1019
+ return True
1020
+
1021
+ def get_model(self, model_name: str) -> Optional[dict]:
1022
+ """Recupera un modelo de decisión."""
1023
+ return self.models.get(model_name)
1024
+
1025
+ def add_archetype(self, archetype_tensor: "FractalTensor", Ss: List[int], name: Optional[str] = None, **kwargs) -> bool:
1026
+ """Añade un arquetipo (Tensor Fractal) al universo, almacenando Ss (memoria factual)."""
1027
+ if not isinstance(archetype_tensor, FractalTensor):
1028
+ raise ValueError("La entrada debe ser un objeto FractalTensor.")
1029
+ # Normalize keys to int(0 if x is None else x) for robust lookup
1030
+ ms_key = tuple(int(0 if x is None else x) for x in archetype_tensor.nivel_3[0][:3])
1031
+ # Robustly flatten Ss if it is a list of lists (e.g., [[0,1,1], ...])
1032
+ ss_source = Ss
1033
+ if isinstance(Ss, list) and len(Ss) > 0 and isinstance(Ss[0], list):
1034
+ ss_source = Ss[0]
1035
+ ss_key = tuple(int(0 if x is None else x) for x in (ss_source[:3] if ss_source else archetype_tensor.nivel_3[0][:3]))
1036
+ # Allow multiple archetypes per Ms/Ss key
1037
+ if name and name in self.name_index:
1038
+ warnings.warn(f"Violación de Coherencia: Ya existe un arquetipo con el nombre '{name}'. No se añadió el nuevo.")
1039
+ self.coherence_violations += 1
1040
+ return False
1041
+ metadata = kwargs.copy()
1042
+ if name: metadata['name'] = name
1043
+ setattr(archetype_tensor, 'metadata', metadata)
1044
+ setattr(archetype_tensor, 'timestamp', time.time())
1045
+ setattr(archetype_tensor, 'Ss', list(ss_key))
1046
+ self.archetypes.append(archetype_tensor)
1047
+ if ms_key not in self.ms_index:
1048
+ self.ms_index[ms_key] = []
1049
+ self.ms_index[ms_key].append(archetype_tensor)
1050
+ if ss_key not in self.ss_index:
1051
+ self.ss_index[ss_key] = []
1052
+ self.ss_index[ss_key].append(archetype_tensor)
1053
+ if name: self.name_index[name] = archetype_tensor
1054
+ return True
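Both `ms_index` and `ss_index` normalize their keys the same way; the key function reduces to:

```python
def to_key(vec):
    # Canonical dict key: first 3 components, None coerced to 0.
    return tuple(int(0 if x is None else x) for x in vec[:3])

index = {}
index.setdefault(to_key([1, None, 0]), []).append("archetype-A")
print(index)  # -> {(1, 0, 0): ['archetype-A']}
```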
1055
+
1056
+ def find_archetype_by_ms(self, Ms_query: List[int]) -> list:
1057
+ """Busca arquetipos por su clave Ms (vector raíz, normalizado a 3 ints). Devuelve siempre lista."""
1058
+ res = self.ms_index.get(tuple(Ms_query[:3]))
1059
+ if res is None:
1060
+ return []
1061
+ if isinstance(res, list):
1062
+ return res
1063
+ return [res]
1064
+
1065
+ def find_archetype_by_ss(self, Ss_query: List[int]) -> list:
1066
+ """Busca arquetipos por su clave Ss (memoria factual, normalizado a 3 ints). Devuelve siempre lista."""
1067
+ res = self.ss_index.get(tuple(Ss_query[:3]))
1068
+ if res is None:
1069
+ return []
1070
+ if isinstance(res, list):
1071
+ return res
1072
+ return [res]
1073
+
1074
+ def find_archetype_by_name(self, name: str) -> Optional["FractalTensor"]:
1075
+ """Busca un arquetipo por su nombre asignado."""
1076
+ return self.name_index.get(name)
1077
+
1078
+ def register_patch(self, ms_key, ttl=10_000):
1079
+ """Registra un parche temporal para un vector raíz con TTL."""
1080
+ if not hasattr(self, '_patches'):
1081
+ self._patches = {}
1082
+ self._patches[tuple(ms_key)] = {'ttl': ttl, 'timestamp': time.time()}
1083
+
1084
+ def supersede_axiom(self, ms_key, new_axiom):
1085
+ """Reemplaza el axioma raíz y versiona el anterior."""
1086
+ if not hasattr(self, '_axiom_versions'):
1087
+ self._axiom_versions = {}
1088
+ old = self.ms_index.get(tuple(ms_key))
1089
+ if old:
1090
+ self._axiom_versions[tuple(ms_key)] = old
1091
+ self.ms_index[tuple(ms_key)] = new_axiom
1092
+ # Also update in archetypes if present
1093
+ for i, t in enumerate(self.archetypes):
1094
+ if t.nivel_3[0] == list(ms_key):
1095
+ self.archetypes[i] = new_axiom
1096
+ break
1097
+
1098
+ class FractalKnowledgeBase:
1099
+ def add_archetype(self, space_id: str, name: str, archetype_tensor: "FractalTensor", Ss: list, **kwargs) -> bool:
1100
+ """Delegado: añade un arquetipo fractal al universo correcto."""
1101
+ return self._get_space(space_id).add_archetype(archetype_tensor, Ss, name=name, **kwargs)
1102
+
1103
+ def get_archetype(self, space_id: str, name: str) -> Optional["FractalTensor"]:
1104
+ """Obtiene un arquetipo por space_id y nombre."""
1105
+ return self._get_space(space_id).find_archetype_by_name(name)
1106
+
1107
+ def store_model(self, space_id: str, model_name: str, model_data: dict):
1108
+ return self._get_space(space_id).store_model(model_name, model_data)
1109
+
1110
+ def get_model(self, space_id: str, model_name: str):
1111
+ return self._get_space(space_id).get_model(model_name)
1112
+ """Gestor de múltiples universos de conocimiento fractal."""
1113
+
1114
+
1115
+ def __init__(self):
1116
+ self.universes = {}
1117
+
1118
+ def _get_space(self, space_id: str = 'default'):
1119
+ if space_id not in self.universes:
1120
+ self.universes[space_id] = _SingleUniverseKB()
1121
+ return self.universes[space_id]
1122
+
1123
+
1124
+
1125
+
1126
+ # ===================== INVERSE EVOLVER MODULE =====================
1127
+ class InverseEvolver:
1128
+ def __init__(self):
1129
+ self.trigate = Trigate()
1130
+
1131
+ def infer_inputs_from_meta(self, Ms: list, MetaM: list) -> list:
1132
+ """
1133
+ Given Ms (emergent) and MetaM, deduces compatible M_AB, M_BC, M_CA.
1134
+ """
1135
+ M_intermediate = [TernaryLogic.ternary_xor(m, mm) for m, mm in zip(Ms, MetaM)]
1136
+ # Simple heuristic: replicate M_AB = M_BC = M_CA = M_intermediate
1137
+ return [M_intermediate, M_intermediate, M_intermediate]
1138
+
1139
+ def reconstruct_vectors(self, Ms: list) -> tuple:
1140
+ """
1141
+ Deduces every possible combination of A and B that produces Ms, using the Trigate's inverse logic.
+ Picks the combination with the fewest None values.
1143
+ """
1144
+ import itertools, warnings
1145
+ if not isinstance(Ms, list) or len(Ms) != 3:
1146
+ Ms = [0, 0, 0] # Normalize invalid input
1147
+ possible_pairs = []
1148
+ states = [0, 1, None]
1149
+ # Explore every combination of A and B
1150
+ for a in itertools.product(states, repeat=3):
1151
+ a = list(a)
1152
+ # Deduce B from A and Ms using the LUT
1153
+ b = [self.trigate._LUT_DEDUCE_B.get((a_i, 1, m), None) for a_i, m in zip(a, Ms)]
1154
+ if all(x is not None for x in b): # Only accept if B is valid
1155
+ none_count = a.count(None) + b.count(None)
1156
+ possible_pairs.append((a, b, none_count))
1157
+ if not possible_pairs:
1158
+ warnings.warn("No se encontraron combinaciones válidas para Ms. Devolviendo valores neutros.")
1159
+ return [0, 0, 0], [0, 0, 0]
1160
+ # Pick the pair with the fewest None values (simplicity criterion)
1161
+ best_pair = min(possible_pairs, key=lambda x: x[2])
1162
+ return list(best_pair[0]), list(best_pair[1])
1163
+
1164
+ # ===================== NEW EXTENDER: COUNCIL OF EXPERTS =====================
1165
+
1166
+ class Extender:
1167
+ """
1168
+ Refactored Aurora orchestrator with the experts as internal methods, to
+ simplify scoping and state management.
+
+ Operates inversely to the Evolver: it extends fractal knowledge from
+ simple queries plus context, using the experts for validation, and
+ applies the Trigate inversely to the Transcender.
1174
+ """
1175
+ def __init__(self, knowledge_base: "FractalKnowledgeBase"):
1176
+ self.kb = knowledge_base
1177
+ self.transcender = Transcender() # The relator needs a transcender
1178
+ self._lut_tables = {}
1179
+ self.armonizador = Armonizador(knowledge_base=self.kb)
1180
+
1181
+ # --- Archetype expert as a method ---
1182
+ def _validate_archetype(self, ss_query: list, space_id: str) -> Tuple[bool, Optional['FractalTensor']]:
1183
+ universe = self.kb._get_space(space_id)
1184
+ ss_key = tuple(int(x) if x in (0, 1) else 0 for x in ss_query[:3])
1185
+ print(f"DEBUG: Looking up archetype with ss_key={ss_key} in space={space_id}")
1186
+ # Look up by Ss
1187
+ archi_ss = universe.find_archetype_by_ss(list(ss_key))
1188
+ if archi_ss:
1189
+ print(f"DEBUG: Found archetype by Ss: {archi_ss}")
1190
+ return True, archi_ss
1191
+ # Look up by Ms
1192
+ archi_ms = universe.find_archetype_by_ms(list(ss_key))
1193
+ if archi_ms:
1194
+ print(f"DEBUG: Found archetype by Ms: {archi_ms}")
1195
+ return True, archi_ms
1196
+ print("DEBUG: No archetype found")
1197
+ return False, None
1198
+
1199
+ # --- Dynamics expert as a method ---
1200
+ def _project_dynamics(self, ss_query: list, space_id: str) -> Tuple[bool, Optional['FractalTensor']]:
1201
+ universe = self.kb._get_space(space_id)
1202
+ best, best_sim = None, -1.0
1203
+ for archetype in universe.archetypes:
1204
+ dMs = getattr(archetype, 'dMs', None)
1205
+ if dMs and getattr(archetype, 'Ss', None):
1206
+ sim = sum(1 for a, b in zip(archetype.Ss, ss_query) if a == b) / len(ss_query)
1207
+ if sim > best_sim:
1208
+ best_sim, best = sim, archetype
1209
+ if best and best_sim > 0.7:
1210
+ return True, best
1211
+ return False, None
1212
+
1213
+ # --- Relator expert as a method ---
1214
+ def _contextualize_relations(self, ss_query: list, space_id: str) -> Tuple[bool, Optional['FractalTensor']]:
1215
+ universe = self.kb._get_space(space_id)
1216
+ if not universe.archetypes:
1217
+ print("DEBUG: No archetypes in universe")
1218
+ return False, None
1219
+ best, best_score = None, float('-inf')
1220
+ for archetype in universe.archetypes:
1221
+ if not getattr(archetype, 'Ss', None):
1222
+ continue
1223
+ rel = self.transcender.relate_vectors(ss_query, archetype.Ss)
1224
+ score = sum(1 for bit in rel if bit == 0)
1225
+ if score > best_score:
1226
+ best_score, best = score, archetype
1227
+ if best:
1228
+ # Create a deep copy to avoid modifying the original
1229
+ result = copy.deepcopy(best)
1230
+ result.nivel_3[0] = list(ss_query[:3]) # Explicitly preserve root
1231
+ print(f"DEBUG: Contextualized with score={best_score}, root preserved={result.nivel_3[0]}")
1232
+ return True, result
1233
+ print("DEBUG: No relational match found")
1234
+ return False, None
1235
+
1236
+ # --- Main orchestrator ---
1237
+ def extend_fractal(self, input_ss, contexto: dict) -> dict:
1238
+ log = [f"Extensión Aurora: espacio '{contexto.get('space_id', 'default')}'"]
1239
+ # Validate and normalize ss_query
1240
+ if isinstance(input_ss, FractalTensor):
1241
+ ss_query = getattr(input_ss, 'Ss', input_ss.nivel_3[0])
1242
+ else:
1243
+ ss_query = input_ss
1244
+ # Normalize to a ternary vector of length 3
1245
+ if not isinstance(ss_query, (list, tuple, np.ndarray)):
1246
+ log.append("⚠️ Entrada inválida, usando vector neutro [0,0,0]")
1247
+ ss_query = [0, 0, 0]
1248
+ else:
1249
+ ss_query = [
1250
+ None if x is None else int(x) if x in (0, 1) else 0
1251
+ for x in list(ss_query)[:3]
1252
+ ] + [0] * (3 - len(ss_query))
1253
+ space_id = contexto.get('space_id', 'default')
1254
+ STEPS = [
1255
+ lambda q, s: (self.lookup_lut(s, q) is not None, self.lookup_lut(s, q)),
1256
+ self._validate_archetype,
1257
+ self._project_dynamics,
1258
+ self._contextualize_relations
1259
+ ]
1260
+ METHODS = [
1261
+ "reconstrucción por LUT",
1262
+ "reconstrucción por arquetipo (axioma)",
1263
+ "proyección por dinámica (raíz preservada)",
1264
+ "contextualización por relator (raíz preservada)"
1265
+ ]
1266
+ for step, method in zip(STEPS, METHODS):
1267
+ ok, tensor = step(ss_query, space_id)
1268
+ if ok and tensor is not None:
1269
+ log.append(f"✅ {method}.")
1270
+ # If tensor is a list, pick the closest candidate
1271
+ if isinstance(tensor, list):
1272
+ armonizador = self.armonizador
1273
+ tensor = min(tensor, key=lambda t: armonizador.ambiguity_score(ss_query, t.nivel_3[0]))
1274
+ # For dynamic/relator, preserve root
1275
+ if method.startswith("proyección") or method.startswith("contextualización"):
1276
+ result = copy.deepcopy(tensor)
1277
+ result.nivel_3[0] = ss_query
1278
+ root_vector = result.nivel_3[0]
1279
+ harm = self.armonizador.harmonize(root_vector, archetype=root_vector, space_id=space_id)
1280
+ result.nivel_3[0] = harm["output"]
1281
+ return {
1282
+ "reconstructed_tensor": result,
1283
+ "reconstruction_method": method + " + armonizador",
1284
+ "log": log
1285
+ }
1286
+ tensor_c = copy.deepcopy(tensor)
1287
+ root_vector = tensor_c.nivel_3[0]
1288
+ harm = self.armonizador.harmonize(root_vector, archetype=root_vector, space_id=space_id)
1289
+ tensor_c.nivel_3[0] = harm["output"]
1290
+ return {
1291
+ "reconstructed_tensor": tensor_c,
1292
+ "reconstruction_method": method + " + armonizador",
1293
+ "log": log
1294
+ }
1295
+ # Fallback
1296
+ log.append("🤷 No se encontraron coincidencias. Devolviendo tensor neutro.")
1297
+ tensor_n = FractalTensor.neutral()
1298
+ root_vector = tensor_n.nivel_3[0]
1299
+ harm = self.armonizador.harmonize(root_vector, archetype=root_vector, space_id=space_id)
1300
+ tensor_n.nivel_3[0] = harm["output"]
1301
+ return {
1302
+ "reconstructed_tensor": tensor_n,
1303
+ "reconstruction_method": "tensor neutro (sin coincidencias) + armonizador",
1304
+ "log": log
1305
+ }
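Stripped of the Aurora specifics, `extend_fractal` is a first-match fallback chain over the four experts; a generic sketch (the names here are illustrative, not part of the API):

```python
def first_match(query, steps, fallback):
    # Try each (name, step) in order; return the first successful result,
    # otherwise fall back to a neutral value.
    for name, step in steps:
        ok, result = step(query)
        if ok and result is not None:
            return name, result
    return "fallback", fallback

steps = [
    ("lut", lambda q: (False, None)),            # LUT miss
    ("archetype", lambda q: (True, [q[0], 0, 0])),  # archetype hit
]
print(first_match([1, 1, 1], steps, [0, 0, 0]))  # -> ('archetype', [1, 0, 0])
```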
1306
+
1307
+ # --- LUT methods moved into Extender as proper methods ---
1308
+ def lookup_lut(self, space_id: str, ss_query: list):
1309
+ """
1310
+ Queries the LUT for the given space and the ss_query signature.
1311
+ """
1312
+ lut = getattr(self, '_lut_tables', {}).get(space_id, None)
1313
+ if lut is None:
1314
+ return None
1315
+ key = tuple(ss_query)
1316
+ return lut.get(key, None)
1317
+
1318
+ def learn_lut_from_data(self, space_id: str, data: list):
1319
+ """
1320
+ Learns a self-taught LUT from data [(ss_query, tensor_result)].
+ On conflict, uses majority voting.
1322
+ """
1323
+ lut = {}
1324
+ votes = {}
1325
+ for ss_query, tensor_result in data:
1326
+ # Ensure key is always a tuple of ints (flatten if needed)
1327
+ if isinstance(ss_query, list) and len(ss_query) > 0 and isinstance(ss_query[0], list):
1328
+ key = tuple(ss_query[0])
1329
+ else:
1330
+ key = tuple(ss_query)
1331
+ if key not in votes:
1332
+ votes[key] = []
1333
+ votes[key].append(tensor_result)
1334
+ # Majority vote (on nivel_3[0])
1335
+ for key, tensors in votes.items():
1336
+ # If there is only one, use it
1337
+ if len(tensors) == 1:
1338
+ lut[key] = tensors[0]
1339
+ else:
1340
+ # Majority vote on nivel_3[0]
1341
+ root_votes = [t.nivel_3[0] if hasattr(t, 'nivel_3') else t for t in tensors]
1342
+ # Simple: per-component mode
1343
+ majority = []
1344
+ for i in range(3):
1345
+ vals = [rv[i] for rv in root_votes if rv and len(rv) > i]
1346
+ if vals:
1347
+ count_1 = vals.count(1)
1348
+ count_0 = vals.count(0)
1349
+ if count_1 > count_0:
1350
+ majority.append(1)
1351
+ elif count_0 > count_1:
1352
+ majority.append(0)
1353
+ else:
1354
+ majority.append(None)
1355
+ else:
1356
+ majority.append(None)
1357
+ # Create a neutral tensor and set the voted root
1358
+ tensor_majority = FractalTensor.neutral()
1359
+ tensor_majority.nivel_3[0] = majority
1360
+ lut[key] = tensor_majority
1361
+ self.patch_lut(space_id, lut)
1362
+ return lut
1363
+
1364
+ def patch_lut(self, space_id, lut):
1365
+ """Actualiza o crea la LUT para el espacio dado."""
1366
+ if not hasattr(self, '_lut_tables') or self._lut_tables is None:
1367
+ self._lut_tables = {}
1368
+ self._lut_tables[space_id] = lut
1369
+
1370
+ def vote_candidates(self, candidates: list):
1371
+ """
1372
+ Votes among several candidate tensors and returns the tensor whose root wins the majority.
1373
+ """
1374
+ if not candidates:
1375
+ return FractalTensor.neutral()
1376
+ root_votes = [c.nivel_3[0] if hasattr(c, 'nivel_3') else c for c in candidates]
1377
+ majority = []
1378
+ for i in range(3):
1379
+ vals = [rv[i] for rv in root_votes if rv and len(rv) > i]
1380
+ if vals:
1381
+ count_1 = vals.count(1)
1382
+ count_0 = vals.count(0)
1383
+ if count_1 > count_0:
1384
+ majority.append(1)
1385
+ elif count_0 > count_1:
1386
+ majority.append(0)
1387
+ else:
1388
+ majority.append(None)
1389
+ else:
1390
+ majority.append(None)
1391
+ tensor_majority = FractalTensor.neutral()
1392
+ tensor_majority.nivel_3[0] = majority
1393
+ return tensor_majority
1394
+
1395
+ # Move these expert classes to top-level scope
1396
+ class ExpertArquetipo:
1397
+ def __init__(self, kb):
1398
+ self.kb = kb
1399
+ def validar_axioma(self, ss_query, space_id):
1400
+ """
1401
+ Validates whether an axiom exists. More robust:
+ 1. Looks up by Ss (factual memory) in ss_index.
+ 2. If that fails, looks up by Ms (root) in ms_index.
1404
+ """
1405
+ universe = self.kb._get_space(space_id)
1406
+ # --- FIX: type normalization reinforced with int() ---
1407
+ ss_query_fixed = tuple(int(0 if x is None else x) for x in ss_query[:3])
1408
+ # Primary lookup by Ss/Ms in the index (both now use the same key)
1409
+ exact_match_list = universe.ss_index.get(ss_query_fixed)
1410
+ if exact_match_list:
1411
+ return True, exact_match_list[0]
1412
+ # Fallback lookup (should be redundant if the index is shared)
1413
+ exact_by_ms = universe.find_archetype_by_ms(list(ss_query_fixed))
1414
+ if exact_by_ms:
1415
+ return True, exact_by_ms
1416
+ return False, None
1417
+
1418
+ class ExpertDinamica:
1419
+ def __init__(self, kb):
1420
+ self.kb = kb
1421
+ def proyectar_dinamica(self, ss_query, space_id):
1422
+ # Looks for a tensor with a compatible dMs, or yields a neutral projection
1423
+ universe = self.kb._get_space(space_id)
1424
+ best, best_sim = None, 0.0
1425
+ for archetype in universe.archetypes:
1426
+ dMs = getattr(archetype, 'dMs', None)
1427
+ if dMs:
1428
+ sim = sum(1 for a, b in zip(getattr(archetype, 'Ss', []), ss_query) if a == b) / len(ss_query)
1429
+ if sim > best_sim:
1430
+ best_sim, best = sim, archetype
1431
+ if best and best_sim > 0.7:
1432
+ return True, best
1433
+ return False, None
1434

class ExpertRelator:
    def __init__(self, kb):
        self.kb = kb
        self.transcender = Transcender()

    def contextualizar(self, ss_query, space_id):
        # Look for semantic relations between ss_query and every archetype
        universe = self.kb._get_space(space_id)
        best, best_score = None, float('-inf')
        for archetype in universe.archetypes:
            rel = self.transcender.relate_vectors(ss_query, getattr(archetype, 'Ss', [0, 0, 0]))
            score = -sum(abs(x) if x is not None else 0 for x in rel)
            if score > best_score:
                best_score, best = score, archetype
        if best:
            return True, best
        return False, None


# ===============================================================================
# TENSOR ROTATION MODULE (ARC - Aurean Rotation Cycle)
# ===============================================================================
PHI = (1 + 5 ** 0.5) / 2
PHI_INVERSE = 1 / PHI

class TensorRotor:
    """Generates index sequences for tensor selection."""

    def __init__(self, N: int, mode: str = "hybrid", start_k: int = 0):
        self.N = max(1, N)
        self.k = start_k % self.N
        self.i = 0
        self.mode = mode
        self.phi_step = max(1, round(PHI_INVERSE * self.N))
        self.fib_cache = {n: self._fib(n) for n in range(16)}

    def _fib(self, n: int) -> int:
        if n <= 1:
            return 1
        a, b = 1, 1
        for _ in range(2, n + 1):
            a, b = b, a + b
        return b

    def next(self) -> int:
        """Computes the next index according to the rotation strategy."""
        if self.mode == "phi":
            self.k = (self.k + self.phi_step) % self.N
        elif self.mode == "fibonacci":
            fib_step = self.fib_cache[self.i % 16]
            self.k = (self.k + fib_step) % self.N
        else:  # hybrid
            if self.i % 2 == 0:
                self.k = (self.k + self.phi_step) % self.N
            else:
                fib_step = self.fib_cache[(self.i // 2) % 16]
                self.k = (self.k + fib_step) % self.N
        self.i += 1
        return self.k

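A minimal standalone sketch of the "phi" strategy above (the `phi_indices` helper is hypothetical, written only for this illustration): stepping by `round(N / phi)` visits a pool in a well-spread, low-discrepancy order.

```python
# Minimal sketch of TensorRotor's "phi" mode: a fixed golden-ratio step
# modulo N spreads successive picks across the whole pool.
PHI_INVERSE = 2 / (1 + 5 ** 0.5)

def phi_indices(N, k=0, steps=6):
    step = max(1, round(PHI_INVERSE * N))  # N=8 -> step of 5
    out = []
    for _ in range(steps):
        k = (k + step) % N
        out.append(k)
    return out

print(phi_indices(8))  # [5, 2, 7, 4, 1, 6]
```

Six picks over a pool of eight land on six distinct, well-separated indices.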
class TensorPoolManager:
    """Tensor pool manager with stratified rotation."""

    def __init__(self):
        self.pools: Dict[str, List['FractalTensor']] = {
            'deep27': [], 'mid9': [], 'shallow3': [], 'mixed': []
        }
        self.rotors: Dict[str, TensorRotor] = {
            'deep27': TensorRotor(0, mode="fibonacci"),
            'mid9': TensorRotor(0, mode="hybrid"),
            'shallow3': TensorRotor(0, mode="phi"),
            'mixed': TensorRotor(0, mode="hybrid")
        }

    def add_tensor(self, tensor: 'FractalTensor'):
        """Adds a tensor to the appropriate pool based on its depth."""
        # A tensor is considered "deep" if it has data at level 27
        if any(any(bit is not None for bit in vec) for vec in tensor.nivel_27):
            pool_name = 'deep27'
        elif any(any(bit is not None for bit in vec) for vec in tensor.nivel_9):
            pool_name = 'mid9'
        else:
            pool_name = 'shallow3'

        self.pools[pool_name].append(tensor)
        self.pools['mixed'].append(tensor)
        self.rotors[pool_name].N = len(self.pools[pool_name])
        self.rotors['mixed'].N = len(self.pools['mixed'])

    def get_tensor_trio(self, task_type: str = "arquetipo") -> List['FractalTensor']:
        """Returns a trio of tensors optimized for a specific task."""
        task_to_pool = {
            'arquetipo': 'mixed', 'dinamica': 'shallow3',
            'relator': 'mid9', 'axioma': 'deep27'
        }
        pool_name = task_to_pool.get(task_type, 'mixed')

        # Smart fallback when the preferred pool has too few tensors
        if len(self.pools[pool_name]) < 3:
            fallback_order = ['mixed', 'shallow3', 'mid9', 'deep27']
            for fb_pool_name in fallback_order:
                if len(self.pools[fb_pool_name]) >= 3:
                    pool_name = fb_pool_name
                    break

        pool = self.pools[pool_name]
        rotor = self.rotors[pool_name]

        if len(pool) < 3:
            trio = list(pool)
            while len(trio) < 3:
                trio.append(FractalTensor.neutral())
            return trio

        indices = [rotor.next() for _ in range(3)]
        return [pool[i] for i in indices]

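The depth rule in `add_tensor` can be illustrated with a minimal stand-in for `FractalTensor` (the `T` stub and `classify` helper below are hypothetical, used only to show the classification logic):

```python
# Sketch of the pool-classification rule: a tensor is "deep" if any concrete
# (non-None) bit exists at level 27, "mid" if one exists at level 9,
# otherwise "shallow".
class T:
    def __init__(self, n3, n9, n27):
        self.nivel_3, self.nivel_9, self.nivel_27 = n3, n9, n27

def classify(t):
    if any(any(bit is not None for bit in vec) for vec in t.nivel_27):
        return 'deep27'
    if any(any(bit is not None for bit in vec) for vec in t.nivel_9):
        return 'mid9'
    return 'shallow3'

deep = T([[1, 0, 1]], [[None] * 3] * 9, [[0, 1, 1]] + [[None] * 3] * 26)
mid = T([[1, 0, 1]], [[1, 0, 0]] + [[None] * 3] * 8, [[None] * 3] * 27)
flat = T([[1, 0, 1]], [[None] * 3] * 9, [[None] * 3] * 27)
print(classify(deep), classify(mid), classify(flat))  # deep27 mid9 shallow3
```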
KnowledgeBase = FractalKnowledgeBase


# ===============================================================================
# FULL FRACTAL DEMONSTRATION
# ===============================================================================

if __name__ == "__main__":
    print("🌌 AURORA FRACTAL DEMO: Archetypes, Dynamics and Relators 🌌")
    print("=" * 80)
    print("Knowledge analysis from three perspectives with coherent data.")
    print("=" * 80)

    # === AURORA ECOSYSTEM INITIALIZATION ===
    kb = FractalKnowledgeBase()
    evolver = Evolver()
    extender = Extender(kb)
    pool_manager = TensorPoolManager()

    # === PHASE 1: ARCHETYPE ANALYSIS ===
    print("\n🏛️ PHASE 1: ARCHETYPE ANALYSIS")
    print("-" * 50)
    familia_movimiento = [
        FractalTensor(nivel_3=[[1, 0, 1]], nivel_9=[[1, 0, 0]] * 9, nivel_27=[[0, 0, 1]] * 27),
        FractalTensor(nivel_3=[[1, 0, 1]], nivel_9=[[1, 1, 0]] * 9, nivel_27=[[0, 1, 0]] * 27),
        FractalTensor(nivel_3=[[1, 0, 1]], nivel_9=[[0, 1, 1]] * 9, nivel_27=[[1, 1, 1]] * 27)
    ]
    for t in familia_movimiento:
        pool_manager.add_tensor(t)

    trio_para_arquetipo = pool_manager.get_tensor_trio('arquetipo')
    arquetipo_movimiento = evolver.compute_fractal_archetype(trio_para_arquetipo)
    print(f"• Analyzing {len(trio_para_arquetipo)} 'movement' concepts...")
    print(f"• Resulting ARCHETYPE: {arquetipo_movimiento}")
    # Extract Ss from the archetype's root tensor (e.g. the first level-3 vector)
    Ss_movimiento = arquetipo_movimiento.nivel_3[0] if hasattr(arquetipo_movimiento, 'nivel_3') else [0, 0, 0]
    kb.add_archetype('fisica_conceptual', 'movimiento_universal', arquetipo_movimiento, Ss=Ss_movimiento)
    print("   └─ Archetype stored in the 'fisica_conceptual' space.")
    # Initialize LUT for archetype
    extender.learn_lut_from_data('fisica_conceptual', [([1, 0, 1], arquetipo_movimiento)])
    # Print KB indices for debug
    print("DEBUG: ss_index:", kb._get_space('fisica_conceptual').ss_index)
    print("DEBUG: ms_index:", kb._get_space('fisica_conceptual').ms_index)

    # === PHASE 2: DYNAMICS ANALYSIS ===
    print("\n⚡ PHASE 2: DYNAMICS ANALYSIS")
    print("-" * 50)

    estado_t0 = FractalTensor.random()
    estado_t1 = evolver.base_transcender.compute_full_fractal(estado_t0, estado_t0, FractalTensor.neutral())
    estado_t2 = evolver.base_transcender.compute_full_fractal(estado_t1, estado_t1, FractalTensor.neutral())
    secuencia_temporal_logica = [estado_t0, estado_t1, estado_t2]

    print(f"• Analyzing a temporal sequence of {len(secuencia_temporal_logica)} states.")
    firma_dinamica = evolver.analyze_fractal_dynamics(secuencia_temporal_logica)
    print(f"• Resulting DYNAMICS: {firma_dinamica}")
    Ss_dinamica = firma_dinamica.nivel_3[0] if hasattr(firma_dinamica, 'nivel_3') else [0, 0, 0]
    kb.add_archetype('dinamicas_sistemas', 'evolucion_sistema_X', firma_dinamica, Ss=Ss_dinamica)
    print("   └─ Dynamics stored in 'dinamicas_sistemas'.")

    # === PHASE 3: RELATOR ANALYSIS ===
    print("\n🔗 PHASE 3: RELATOR ANALYSIS")
    print("-" * 50)

    concepto_base = FractalTensor.random()
    concepto_fuerza = evolver.base_transcender.compute_full_fractal(concepto_base, FractalTensor.random(), FractalTensor.neutral())
    concepto_energia = evolver.base_transcender.compute_full_fractal(concepto_base, concepto_fuerza, FractalTensor.neutral())
    cluster_contextual = [concepto_base, concepto_fuerza, concepto_energia]

    print(f"• Analyzing a cluster of {len(cluster_contextual)} related concepts.")
    firma_relacional = evolver.analyze_fractal_relations(cluster_contextual)
    print(f"• Resulting RELATOR: {firma_relacional}")
    Ss_relator = firma_relacional.nivel_3[0] if hasattr(firma_relacional, 'nivel_3') else [0, 0, 0]
    kb.add_archetype('mapas_conceptuales', 'mecanica_basica', firma_relacional, Ss=Ss_relator)
    print("   └─ Relator stored in 'mapas_conceptuales'.")

    # === PHASE 4: KNOWLEDGE-BASED EXTENSION ===
    print("\n🧩 PHASE 4: ARCHETYPE-DRIVEN EXTENSION")
    print("-" * 50)

    # Use the archetype's root vector directly as the query
    query_vector = arquetipo_movimiento.nivel_3[0][:3]
    print(f"• Vector to extend (root only): {query_vector}")

    # Robust extension: the function copies every level of the matched archetype
    resultado_extension = extender.extend_fractal(
        query_vector,
        contexto={'space_id': 'fisica_conceptual'}
    )

    tensor_reconstruido = resultado_extension['reconstructed_tensor']
    print(f"• Reconstruction method: {resultado_extension['reconstruction_method']}")
    print(f"• Reconstructed tensor: {tensor_reconstruido}")
    print("   └─ Levels 3, 9 and 27 have been filled in from the KB.")

    print("\n" + "=" * 80)
    print("🎯 DEMO FINISHED.")
    print("=" * 80)


################################################################################################
# ===================== REVERSIBILITY AND SELF-SIMILARITY INTEGRATION ==========================
################################################################################################

# --- IMPUTATION AND VALIDATION UTILITIES ---
from statistics import mode


def impute_none(vec, context, tensor=None):
    """Imputes None values using context and the tensor's upper levels."""
    result = []
    for i, v in enumerate(vec):
        if v is not None:
            result.append(v)
            continue
        col = [c[i] for c in context if i < len(c) and c[i] is not None]
        if tensor:
            if hasattr(tensor, 'nivel_9') and i < len(tensor.nivel_9):
                col.extend(x for x in tensor.nivel_9[i] if x is not None)
            if hasattr(tensor, 'nivel_3') and i < len(tensor.nivel_3[0]):
                col.append(tensor.nivel_3[0][i % 3])
        result.append(mode(col) if col else 0)
    return result

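The imputation rule above can be sketched standalone; this is a simplified version without the tensor fallback (`impute_none_flat` is a hypothetical name used only for this illustration):

```python
from statistics import mode

def impute_none_flat(vec, context):
    # Fill each None with the mode of that column across the context rows;
    # default to 0 when no evidence is available.
    out = []
    for i, v in enumerate(vec):
        if v is not None:
            out.append(v)
            continue
        col = [row[i] for row in context if i < len(row) and row[i] is not None]
        out.append(mode(col) if col else 0)
    return out

print(impute_none_flat([1, None, 0], [[1, 1, 0], [0, 1, 0]]))  # [1, 1, 0]
```

The middle None becomes 1 because both context rows vote 1 in that column.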
def validate_ternary_input(vec, expected_len=3, name="input"):
    """Validates and normalizes ternary inputs."""
    if not isinstance(vec, (list, tuple)) or len(vec) != expected_len:
        print(f"Warning: Invalid {name}: {vec}, using default {[0] * expected_len}")
        return [0] * expected_len
    return [None if x is None else int(x) % 2 for x in vec]

# --- SELF-SIMILAR SELECTION STRATEGIES ---
def golden_ratio_skip_indices(N, k, trios=3):
    """Returns a list of indices forming a trio using golden-ratio skips."""
    # Reuses the module-level PHI constant (avoids re-deriving it locally)
    skip = max(1, int(N / PHI))
    indices = []
    idx = k
    for _ in range(trios):
        indices.append(idx % N)
        idx = (idx + skip) % N
    return indices


def fibonacci(n):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a


def fibonacci_stepping_indices(N, k, trios=3, start_step=0):
    """Returns a list of indices forming a trio using Fibonacci steps."""
    indices = []
    idx = k
    for i in range(start_step, start_step + trios):
        step = fibonacci(i)
        indices.append(idx % N)
        idx = (idx + step) % N
    return indices

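A standalone sketch of the golden-ratio skip (the `golden_trio` helper is hypothetical, mirroring `golden_ratio_skip_indices`): with N=10 items the skip is `int(10 / phi) = 6`, so a trio starting at k=0 lands on well-separated indices.

```python
# Golden-ratio skipping: a fixed stride of int(N / phi) modulo N picks
# trio members spread across the whole collection.
PHI = (1 + 5 ** 0.5) / 2

def golden_trio(N, k, trios=3):
    skip = max(1, int(N / PHI))
    out, idx = [], k
    for _ in range(trios):
        out.append(idx % N)
        idx = (idx + skip) % N
    return out

print(golden_trio(10, 0))  # [0, 6, 2]
```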
# --- SELF-SIMILAR ADJUSTMENT (OPTIONAL, FOR USE IN THE ARMONIZADOR) ---
class AdjustmentStep:
    def apply(self, vec, archetype, kb=None):
        raise NotImplementedError


class MicroShift(AdjustmentStep):
    def apply(self, vec, archetype, kb=None):
        # Fill unknown (None) positions from the archetype
        return [a if v is None else v for v, a in zip(vec, archetype)]


class Regrewire(AdjustmentStep):
    def apply(self, vec, archetype, kb=None):
        # Snap to the archetype when at least two positions already agree
        if sum(1 for v, a in zip(vec, archetype) if v == a) >= 2:
            return list(archetype)
        return vec


class Metatune(AdjustmentStep):
    def apply(self, vec, archetype, kb=None):
        if kb is not None:
            matches = kb.find_archetype_by_ms(archetype)
            if matches:
                return matches[0]
        return vec

# --- UNIFIED FUNCTIONAL TRIAGE (INFERENCE, LEARNING, DEDUCTION) ---
def f_not(x):
    return 1 - x if x in (0, 1) else 0


def f_not_inv(x):
    return 1 - x if x in (0, 1) else 0


f_not.inverse = f_not_inv


def f_inc(x):
    return (x + 1) % 2 if x in (0, 1) else 0


def f_inc_inv(x):
    return (x - 1) % 2 if x in (0, 1) else 0


f_inc.inverse = f_inc_inv


def f_id(x):
    return x


f_id.inverse = f_id


def aurora_apply_sequence(val, sequence):
    for func in sequence:
        val = func(val)
    return val

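A round-trip sketch of this micro-function machinery, self-contained for illustration: applying a sequence and then its inverses in reverse order recovers the original bit, which is the property the deduction path relies on.

```python
# Apply a function sequence forward, then undo it with the attached inverses.
def f_not(x):
    return 1 - x if x in (0, 1) else 0
f_not.inverse = f_not  # NOT is its own inverse on {0, 1}

def f_id(x):
    return x
f_id.inverse = f_id

def apply_seq(val, seq):
    for f in seq:
        val = f(val)
    return val

seq = [f_not, f_id]
forward = apply_seq(1, seq)   # f_not(1) = 0, then f_id(0) = 0
back = forward
for f in reversed(seq):
    back = f.inverse(back)    # undo f_id, then undo f_not
print(forward, back)  # 0 1
```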
def aurora_triage_inferencia(A, B, M):
    allowed_functions = [f_not, f_inc, f_id]

    def normalize_ternary_vector(vec, default=[0, 0, 0]):
        if not isinstance(vec, (list, tuple)):
            return default.copy()
        return [None if x is None else int(x) if x in (0, 1) else 0 for x in list(vec)[:3]] + [0] * (3 - len(vec))

    def validate_function_sequence(M, allowed_functions, max_len=2):
        if not isinstance(M, (list, tuple)) or len(M) != 3:
            return [[f_id] for _ in range(3)]
        return [list(seq)[:max_len] if isinstance(seq, (list, tuple)) and all(f in allowed_functions for f in seq) else [f_id] for seq in M[:3]] + [[f_id]] * (3 - len(M))

    A = normalize_ternary_vector(A)
    B = normalize_ternary_vector(B)
    M = validate_function_sequence(M, allowed_functions)
    R = []
    for i in range(3):
        rA = aurora_apply_sequence(A[i], M[i])
        rB = aurora_apply_sequence(B[i], M[i])
        if rA is not None and rB is not None:
            R.append(rA + rB)
        else:
            R.append(0)
    return R

+
1770
+ def aurora_triage_aprendizaje(A, B, R, funciones_permitidas, max_len=2):
1771
+ import itertools
1772
+ def normalize_ternary_vector(vec, default=[0,0,0]):
1773
+ if not isinstance(vec, (list, tuple)):
1774
+ return default.copy()
1775
+ return [None if x is None else int(x) if x in (0,1) else 0 for x in list(vec)[:3]] + [0]*(3-len(vec))
1776
+ A = normalize_ternary_vector(A)
1777
+ B = normalize_ternary_vector(B)
1778
+ R = normalize_ternary_vector(R)
1779
+ M = []
1780
+ for i in range(3):
1781
+ found = False
1782
+ for l in range(1, max_len+1):
1783
+ for seq in itertools.product(funciones_permitidas, repeat=l):
1784
+ rA = aurora_apply_sequence(A[i], seq)
1785
+ rB = aurora_apply_sequence(B[i], seq)
1786
+ if rA is not None and rB is not None and rA + rB == R[i]:
1787
+ M.append(list(seq))
1788
+ found = True
1789
+ break
1790
+ if found:
1791
+ break
1792
+ if not found:
1793
+ M.append([f_id])
1794
+ return M
1795
+
1796
+ def aurora_triage_deduccion(M, R, known, known_is_A=True):
1797
+ allowed_functions = [f_not, f_inc, f_id]
1798
+ def normalize_ternary_vector(vec, default=[0,0,0]):
1799
+ if not isinstance(vec, (list, tuple)):
1800
+ return default.copy()
1801
+ return [None if x is None else int(x) if x in (0,1) else 0 for x in list(vec)[:3]] + [0]*(3-len(vec))
1802
+ def validate_function_sequence(M, allowed_functions, max_len=2):
1803
+ if not isinstance(M, (list, tuple)) or len(M) != 3:
1804
+ return [[f_id] for _ in range(3)]
1805
+ return [list(seq)[:max_len] if isinstance(seq, (list, tuple)) and all(f in allowed_functions for f in seq) else [f_id] for seq in M[:3]] + [[f_id]]*(3-len(M))
1806
+ R = normalize_ternary_vector(R)
1807
+ known = normalize_ternary_vector(known)
1808
+ M = validate_function_sequence(M, allowed_functions)
1809
+ deduced = []
1810
+ for i in range(3):
1811
+ val = R[i] - aurora_apply_sequence(known[i], M[i]) if R[i] is not None and known[i] is not None else 0
1812
+ for func in reversed(M[i]):
1813
+ if hasattr(func, 'inverse'):
1814
+ val = func.inverse(val)
1815
+ deduced.append(val if val in (0,1,None) else 0)
1816
+ return deduced
1817
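The per-trit search that `aurora_triage_aprendizaje` performs can be sketched on its own (`find_seq` is a hypothetical helper; the real function works over 3-trit vectors and the caller's allowed-function list): try function sequences of increasing length until applying one to both inputs reproduces the target sum.

```python
# Brute-force search for a micro-function sequence seq such that
# seq(a) + seq(b) == r, trying lengths 1..max_len in order.
import itertools

def f_not(x):
    return 1 - x if x in (0, 1) else 0

def f_id(x):
    return x

def find_seq(a, b, r, funcs=(f_not, f_id), max_len=2):
    for l in range(1, max_len + 1):
        for seq in itertools.product(funcs, repeat=l):
            ra, rb = a, b
            for f in seq:
                ra, rb = f(ra), f(rb)
            if ra + rb == r:
                return [f.__name__ for f in seq]
    return None

print(find_seq(1, 1, 0))  # ['f_not']
```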

# --- INVERSE EVOLVER: FRACTAL REVERSIBILITY ---
class InverseEvolver:
    """Reconstructs original tensors from synthesized ones using inverse logic."""

    def __init__(self, knowledge_base=None):
        self.kb = knowledge_base
        self.trigate = Trigate()
        self.armonizador = Armonizador(knowledge_base=knowledge_base) if knowledge_base else None

    def reconstruct_vectors(self, Ms):
        """Deduces A and B from Ms using the Trigate's inverse logic.

        The inverse is lossy: each trit maps back to one canonical (A, B)
        pair, and unknown trits stay unknown.
        """
        A, B = [], []
        for m in Ms:
            if m == 0:
                A.append(0)
                B.append(0)
            elif m == 1:
                A.append(1)
                B.append(0)
            else:
                A.append(None)
                B.append(None)
        return A, B

    def reconstruct_fractal(self, synthesized):
        """Reconstructs three fractal tensors from a synthesized one (levels 3, 9, 27)."""
        ms_key = synthesized.nivel_3[0]
        A_l3, B_l3 = self.reconstruct_vectors(ms_key)
        C_l3 = [TernaryLogic.ternary_xor(a, b) for a, b in zip(A_l3, B_l3)]

        def reconstruct_level(level_vectors):
            A_vectors, B_vectors, C_vectors = [], [], []
            for vec in level_vectors:
                a, b = self.reconstruct_vectors(vec)
                c = [TernaryLogic.ternary_xor(x, y) for x, y in zip(a, b)]
                A_vectors.append(a)
                B_vectors.append(b)
                C_vectors.append(c)
            return A_vectors, B_vectors, C_vectors

        A_l9, B_l9, C_l9 = reconstruct_level(synthesized.nivel_9)
        A_l27, B_l27, C_l27 = reconstruct_level(synthesized.nivel_27)

        def create_tensor(n3, n9, n27, ss):
            tensor = FractalTensor(nivel_3=n3, nivel_9=n9, nivel_27=n27)
            if self.armonizador:
                harm = self.armonizador.harmonize(
                    tensor.nivel_3[0],
                    archetype=tensor.nivel_3[0],
                    space_id="inverse"
                )
                tensor.nivel_3[0] = harm["output"]
            tensor.Ss = ss
            return tensor

        return [
            create_tensor([A_l3], A_l9, A_l27, ss="A"),
            create_tensor([B_l3], B_l9, B_l27, ss="B"),
            create_tensor([C_l3], C_l9, C_l27, ss="C")
        ]


# ===================== TRIGATE IMPLEMENTATION =====================

# Ternary values
NULL = None
TERNARY_VALUES = [0, 1, NULL]


class Trigate:
    """
    Fundamental Aurora logic module implementing ternary operations.

    Supports three operational modes:
    1. Inference: A + B + M -> R (given inputs and control, compute result)
    2. Learning:  A + B + R -> M (given inputs and result, learn control)
    3. Deduction: M + R + A -> B (given control, result, and one input, deduce other)

    All operations are O(1) using precomputed lookup tables (LUTs).
    """

    # Class-level LUTs (computed once at module load)
    _LUT_INFER: Dict[Tuple, int] = {}
    _LUT_LEARN: Dict[Tuple, int] = {}
    _LUT_DEDUCE_A: Dict[Tuple, int] = {}
    _LUT_DEDUCE_B: Dict[Tuple, int] = {}
    _initialized = False

    def __init__(self):
        """Initialize Trigate and ensure LUTs are computed."""
        if not Trigate._initialized:
            Trigate._initialize_luts()

    @classmethod
    def _initialize_luts(cls):
        """
        Initialize all lookup tables for O(1) operations.

        Based on extended XOR logic with NULL propagation:
        - 0 XOR 0 = 0, 0 XOR 1 = 1, 1 XOR 0 = 1, 1 XOR 1 = 0
        - Any operation with NULL propagates NULL
        - Control bit M determines XOR (1) or XNOR (0)
        """
        print("Initializing Trigate LUTs...")

        # Enumerate every ternary combination; each LUT ignores one of the four
        # variables, so some entries are rewritten with identical values (harmless).
        for a, b, m, r in itertools.product(TERNARY_VALUES, repeat=4):

            # INFERENCE LUT: (a, b, m) -> r
            cls._LUT_INFER[(a, b, m)] = cls._compute_inference(a, b, m)

            # LEARNING LUT: (a, b, r) -> m
            # Find the control M that produces R given A, B
            cls._LUT_LEARN[(a, b, r)] = cls._compute_learning(a, b, r)

            # DEDUCTION LUTs: (m, r, a) -> b and (m, r, b) -> a
            cls._LUT_DEDUCE_B[(m, r, a)] = cls._compute_deduction_b(m, r, a)
            cls._LUT_DEDUCE_A[(m, r, b)] = cls._compute_deduction_a(m, r, b)

        cls._initialized = True
        print(f"Trigate LUTs initialized: {len(cls._LUT_INFER)} entries each")

    @staticmethod
    def _compute_inference(a: Union[int, None], b: Union[int, None], m: Union[int, None]) -> Union[int, None]:
        """
        Compute R given A, B, M using ternary logic.

        Logic:
        - If any input is NULL, result is NULL
        - If M is 1: R = A XOR B
        - If M is 0: R = A XNOR B (NOT(A XOR B))
        """
        if a is NULL or b is NULL or m is NULL:
            return NULL

        if m == 1:  # XOR mode
            return a ^ b
        else:  # XNOR mode (m == 0)
            return 1 - (a ^ b)

    @staticmethod
    def _compute_learning(a: Union[int, None], b: Union[int, None], r: Union[int, None]) -> Union[int, None]:
        """
        Learn control M given A, B, R.

        Logic:
        - If any input is NULL, cannot learn -> NULL
        - If A XOR B == R, then M = 1 (XOR)
        - If A XOR B != R, then M = 0 (XNOR)
        """
        if a is NULL or b is NULL or r is NULL:
            return NULL

        xor_result = a ^ b
        if xor_result == r:
            return 1  # XOR mode produces the correct result
        else:
            return 0  # XNOR mode produces the correct result

    @staticmethod
    def _compute_deduction_a(m: Union[int, None], r: Union[int, None], b: Union[int, None]) -> Union[int, None]:
        """
        Deduce A given M, R, B.

        Logic:
        - If any input is NULL, cannot deduce -> NULL
        - If M is 1: A = R XOR B (since R = A XOR B)
        - If M is 0: A = NOT(R) XOR B (since R = NOT(A XOR B))
        """
        if m is NULL or r is NULL or b is NULL:
            return NULL

        if m == 1:  # XOR mode: A XOR B = R -> A = R XOR B
            return r ^ b
        else:  # XNOR mode: NOT(A XOR B) = R -> A XOR B = NOT(R) -> A = NOT(R) XOR B
            return (1 - r) ^ b

    @staticmethod
    def _compute_deduction_b(m: Union[int, None], r: Union[int, None], a: Union[int, None]) -> Union[int, None]:
        """
        Deduce B given M, R, A.

        Logic: same as _compute_deduction_a but solving for B instead of A.
        """
        if m is NULL or r is NULL or a is NULL:
            return NULL

        if m == 1:  # XOR mode: A XOR B = R -> B = R XOR A
            return r ^ a
        else:  # XNOR mode: NOT(A XOR B) = R -> A XOR B = NOT(R) -> B = NOT(R) XOR A
            return (1 - r) ^ a

    def infer(self, A: List[Union[int, None]], B: List[Union[int, None]], M: List[Union[int, None]]) -> List[Union[int, None]]:
        """
        Inference mode: compute R given A, B, M.

        Args:
            A: First input vector (3 bits)
            B: Second input vector (3 bits)
            M: Control vector (3 bits)

        Returns:
            R: Result vector (3 bits)
        """
        if not (len(A) == len(B) == len(M) == 3):
            raise ValueError("All vectors must have exactly 3 elements")

        return [self._LUT_INFER[(a, b, m)] for a, b, m in zip(A, B, M)]

    def learn(self, A: List[Union[int, None]], B: List[Union[int, None]], R: List[Union[int, None]]) -> List[Union[int, None]]:
        """
        Learning mode: learn control M given A, B, R.

        Args:
            A: First input vector (3 bits)
            B: Second input vector (3 bits)
            R: Target result vector (3 bits)

        Returns:
            M: Learned control vector (3 bits)
        """
        if not (len(A) == len(B) == len(R) == 3):
            raise ValueError("All vectors must have exactly 3 elements")

        return [self._LUT_LEARN[(a, b, r)] for a, b, r in zip(A, B, R)]

    def deduce_a(self, M: List[Union[int, None]], R: List[Union[int, None]], B: List[Union[int, None]]) -> List[Union[int, None]]:
        """
        Deduction mode: deduce A given M, R, B.

        Args:
            M: Control vector (3 bits)
            R: Result vector (3 bits)
            B: Known input vector (3 bits)

        Returns:
            A: Deduced input vector (3 bits)
        """
        if not (len(M) == len(R) == len(B) == 3):
            raise ValueError("All vectors must have exactly 3 elements")

        return [self._LUT_DEDUCE_A[(m, r, b)] for m, r, b in zip(M, R, B)]

    def deduce_b(self, M: List[Union[int, None]], R: List[Union[int, None]], A: List[Union[int, None]]) -> List[Union[int, None]]:
        """
        Deduction mode: deduce B given M, R, A.

        Args:
            M: Control vector (3 bits)
            R: Result vector (3 bits)
            A: Known input vector (3 bits)

        Returns:
            B: Deduced input vector (3 bits)
        """
        if not (len(M) == len(R) == len(A) == 3):
            raise ValueError("All vectors must have exactly 3 elements")

        return [self._LUT_DEDUCE_B[(m, r, a)] for m, r, a in zip(M, R, A)]

    def validate_triangle_closure(self, A: List[Union[int, None]], B: List[Union[int, None]],
                                  M: List[Union[int, None]], R: List[Union[int, None]]) -> bool:
        """
        Validate that A, B, M, R form a valid logical triangle.

        This ensures geometric coherence: the triangle "closes" properly.

        Args:
            A, B, M, R: The four vectors forming the logical triangle

        Returns:
            True if the triangle is valid, False otherwise
        """
        # Compute the expected R from A, B, M
        expected_R = self.infer(A, B, M)

        # Check that the computed R matches the provided R
        for expected, actual in zip(expected_R, R):
            if expected != actual:
                return False

        return True

    def get_truth_table(self, operation: str = "infer") -> str:
        """
        Generate a human-readable truth table for debugging.

        Args:
            operation: "infer", "learn", "deduce_a", or "deduce_b"

        Returns:
            Formatted truth table string
        """
        if operation == "infer":
            lut = self._LUT_INFER
            header = "A | B | M | R"
        elif operation == "learn":
            lut = self._LUT_LEARN
            header = "A | B | R | M"
        elif operation == "deduce_a":
            lut = self._LUT_DEDUCE_A
            header = "M | R | B | A"
        elif operation == "deduce_b":
            lut = self._LUT_DEDUCE_B
            header = "M | R | A | B"
        else:
            raise ValueError(f"Unknown operation: {operation}")

        def format_val(v):
            return "N" if v is NULL else str(v)

        # NULL keys cannot be ordered against ints, so sort NULL first explicitly
        def sort_key(item):
            return tuple(-1 if v is NULL else v for v in item[0])

        lines = [header, "-" * len(header)]

        for key, value in sorted(lut.items(), key=sort_key):
            key_str = " | ".join(format_val(k) for k in key)
            val_str = format_val(value)
            lines.append(f"{key_str} | {val_str}")

        return "\n".join(lines)

    def synthesize(self, A: List[int], B: List[int]) -> Tuple[List[Optional[int]], List[Optional[int]]]:
        """Aurora synthesis: derives M (logic) and S (form) from A and B."""
        M = [TernaryLogic.ternary_xor(a, b) for a, b in zip(A, B)]
        S = [TernaryLogic.ternary_xnor(a, b) for a, b in zip(A, B)]
        return M, S

    def recursive_synthesis(
        self,
        vectors: List[List[int]]
    ) -> Tuple[List[Optional[int]], List[List[Optional[int]]]]:
        """
        Sequentially reduces a list of >= 2 ternary vectors.

        Returns:
            - final result: the M vector after the last combination
            - history: every intermediate result (M-k), for debugging
        """
        if len(vectors) < 2:
            raise ValueError("At least 2 vectors are required")

        history: List[List[Optional[int]]] = []
        current = vectors[0]

        for nxt in vectors[1:]:
            current, _ = self.synthesize(current, nxt)
            history.append(current)

        return current, history

    def __repr__(self) -> str:
        return f"Trigate(initialized={self._initialized}, lut_size={len(self._LUT_INFER)})"
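

# Per trit, the three Trigate modes reduce to a few lines of pure logic. This
# standalone sketch (hypothetical infer/learn/deduce_b free functions mirroring
# the LUT semantics, not the class above) shows the triangle closing:

```python
# None plays the role of NULL; m=1 selects XOR, m=0 selects XNOR.
def infer(a, b, m):
    if None in (a, b, m):
        return None
    return a ^ b if m == 1 else 1 - (a ^ b)

def learn(a, b, r):
    if None in (a, b, r):
        return None
    return 1 if (a ^ b) == r else 0

def deduce_b(m, r, a):
    if None in (m, r, a):
        return None
    return (r ^ a) if m == 1 else (1 - r) ^ a

m = learn(1, 0, 1)           # XOR reproduces r, so m = 1
assert infer(1, 0, m) == 1   # the triangle closes
assert deduce_b(m, 1, 1) == 0
print(m)  # 1
```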