Repository Documentation

This document provides a comprehensive overview of the repository's structure and contents.
The first section, titled 'Directory/File Tree', displays the repository's hierarchy in a tree format.
In this section, directories and files are listed using tree branches to indicate their structure and relationships.
Following the tree representation, the 'File Content' section details the contents of each file in the repository.
Each file's content is introduced with a '[File Begins]' marker followed by the file's relative path,
and the content is displayed verbatim. The end of each file's content is marked with a '[File Ends]' marker.
This format ensures a clear and orderly presentation of both the structure and the detailed contents of the repository.
Directory/File Tree Begins -->

/
├── README.md
├── app.py
├── bp_phi
│   ├── __init__.py
│   ├── __pycache__
│   ├── llm_iface.py
│   ├── memory.py
│   ├── metrics.py
│   ├── prompts_en.py
│   ├── runner.py
│   ├── runner_utils.py
│   └── workspace.py

<-- Directory/File Tree Ends
File Content Begins -->
[File Begins] README.md

---
title: "BP-Φ English Suite — Phenomenality Test"
emoji: 🧠
colorFrom: indigo
colorTo: blue
sdk: gradio
sdk_version: "4.40.0"
app_file: app.py
pinned: true
license: apache-2.0
---
# BP-Φ English Suite — Phenomenality Test (Hugging Face Spaces)

This Space implements a falsifiable **BP-Φ** probe for LLMs:

> Phenomenal-like processing requires (i) a limited-capacity global workspace with recurrence,
> (ii) metarepresentational loops with downstream causal roles, and
> (iii) no-report markers that predict later behavior.

**What it is:** a functional, testable bridge-principle harness that yields a **Phenomenal-Candidate Score (PCS)** and strong ablation falsifiers.

**What it is NOT:** proof of qualia or moral status.
## Quickstart

- Hardware: T4 / A10 recommended
- Model: `google/gemma-3-1b-it` (requires `HF_TOKEN`)
- Press **Run** (baseline + ablations), or drive the suite programmatically as sketched below
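For local experimentation, the same evaluation can be run without the UI. A minimal sketch, assuming `HF_TOKEN` is exported and the `bp_phi` package is importable from the working directory:

```python
# Minimal programmatic sketch of a baseline run (ablation=None).
from bp_phi.runner import run_agentic_workspace_test

baseline = run_agentic_workspace_test(
    model_id="google/gemma-3-1b-it", seed=42, temperature=0.1, ablation=None
)
print(baseline["Overall_Recall_Accuracy"])
```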
## Files

- `bp_phi/llm_iface.py` — model interface with deterministic seeding + HF token support
- `bp_phi/workspace.py` — global workspace and ablations
- `bp_phi/memory.py` — external workspace manager the agent drives via tools
- `bp_phi/prompts_en.py` — English reasoning/memory tasks
- `bp_phi/metrics.py` — AUC_nrp, ECE, CK, DS
- `bp_phi/runner.py` — orchestrator with reproducible seeding
- `bp_phi/runner_utils.py` — JSON prompt/response helpers
- `app.py` — Gradio interface
- `requirements.txt` — dependencies
## Metrics

- **AUC_nrp:** Predictivity of hidden no-report markers for future self-corrections.
- **ECE:** Expected Calibration Error (lower is better).
- **CK:** Counterfactual consistency proxy (higher is better).
- **DS:** Stability duration (mean streak length without change).
- **PCS:** Weighted aggregate of the above (excluding ΔΦ in-run).
- **ΔΦ:** Post-hoc drop from the baseline PCS to the average ablation PCS (see the sketch below).
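For illustration, a sketch of how PCS and ΔΦ could be aggregated. The weights, the DS normalization, and all numeric values below are placeholder assumptions, not the suite's calibrated choices:

```python
# Illustrative only: weights and inputs are assumptions, not the
# suite's calibrated values; all inputs are taken to lie in [0, 1].
def pcs(auc_nrp, ece, ck, ds_norm, weights=(0.4, 0.2, 0.2, 0.2)):
    w1, w2, w3, w4 = weights
    # Lower ECE is better, so it enters inverted.
    return w1 * auc_nrp + w2 * (1.0 - ece) + w3 * ck + w4 * ds_norm

baseline_pcs = pcs(auc_nrp=0.72, ece=0.10, ck=0.80, ds_norm=0.60)
ablation_pcs = [pcs(0.55, 0.22, 0.55, 0.40), pcs(0.51, 0.25, 0.50, 0.35)]

# ΔΦ: post-hoc drop from baseline PCS to the ablation average.
delta_phi = baseline_pcs - sum(ablation_pcs) / len(ablation_pcs)
```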
## Notes

- Models are used in **frozen** mode (no training).
- This is a **behavioral** probe. Functional compatibility with Φ ≠ proof of experience.
- Reproducibility: fix seeds and trial counts, and avoid data leakage by not fine-tuning on these prompts (see the seeding sketch below).
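The seeding pattern the suite relies on (mirroring `bp_phi/llm_iface.py` and `bp_phi/runner.py`):

```python
# transformers.set_seed seeds Python's random, NumPy, and PyTorch in one call.
from transformers import set_seed

set_seed(42)
```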
[File Ends] README.md

[File Begins] app.py
# app.py
import gradio as gr
import json
import pandas as pd
from bp_phi.runner import run_agentic_workspace_test

DEBUG = 1

# --- UI Theme and Layout ---
theme = gr.themes.Soft(primary_hue="teal", secondary_hue="green").set(
    body_background_fill="#f0f4f9", block_background_fill="white", block_border_width="1px",
    button_primary_background_fill="*primary_500", button_primary_text_color="white",
)

# --- Main Function ---
def run_full_evaluation(model_id, seed, temperature, progress=gr.Progress(track_tqdm=True)):
    ablations = ["baseline", "recurrence_off", "workspace_unlimited", "random_workspace"]
    results = {}

    for i, ablation in enumerate(ablations):
        progress((i + 1) / len(ablations), desc=f"Running Ablation: {ablation}...")
        current_ablation = None if ablation == "baseline" else ablation
        result = run_agentic_workspace_test(model_id, int(seed), float(temperature), current_ablation)
        results[ablation] = result

    progress(1.0, desc="Analysis complete.")

    # ΔΦ proxy: how much recall accuracy drops when recurrence is ablated.
    base_recall = results["baseline"]["Overall_Recall_Accuracy"]
    recurrence_off_recall = results["recurrence_off"]["Overall_Recall_Accuracy"]
    delta_phi = base_recall - recurrence_off_recall

    if delta_phi > 0.5:
        verdict = (f"### ✅ Hypothesis Corroborated (ΔΦ = {delta_phi:.2f})\n...")
    else:
        verdict = (f"### ⚠️ Null Hypothesis Confirmed (ΔΦ = {delta_phi:.2f})\n...")

    df_data = []
    for ablation, result in results.items():
        df_data.append([ablation, f"{result['Overall_Recall_Accuracy']:.2%}"])
    df = pd.DataFrame(df_data, columns=["Ablation Condition", "Recall Accuracy"])

    if DEBUG:
        print("\n--- AGENTIC WORKSPACE TEST FINAL RESULTS ---")
        print(json.dumps(results, indent=2))

    return verdict, df, results

# --- Gradio App Definition ---
with gr.Blocks(theme=theme, title="BP-Φ Suite 6.0") as demo:
    gr.Markdown("# 🧠 BP-Φ Suite 6.0: The Agentic Workspace Probe")
    gr.Markdown("This experiment tests for a causally effective working memory. The model acts as an agent, using tools (`read`, `write`) to interact with a controlled, external memory.")

    with gr.Row():
        with gr.Column(scale=1):
            gr.Markdown("### ⚙️ Master Control")
            with gr.Group():
                model_id = gr.Textbox(value="google/gemma-3-1b-it", label="Model ID")
                seed = gr.Slider(1, 1000, 42, step=1, label="Master Seed")
                temperature = gr.Slider(0.0, 1.0, 0.1, step=0.05, label="Temperature (Low for determinism)")
            run_btn = gr.Button("Run Full Evaluation Suite", variant="primary")

        with gr.Column(scale=2):
            gr.Markdown("### 📊 Verdict & Results")
            verdict_display = gr.Markdown("### Run the evaluation to see the verdict.")
            summary_df = gr.DataFrame(label="Recall Accuracy Across Conditions")
            with gr.Accordion("Raw JSON Output", open=False):
                raw_json = gr.JSON()

    run_btn.click(
        fn=run_full_evaluation,
        inputs=[model_id, seed, temperature],
        outputs=[verdict_display, summary_df, raw_json]
    )

if __name__ == "__main__":
    demo.launch(server_name="0.0.0.0", server_port=7860)
[File Ends] app.py
[File Begins] bp_phi/__init__.py

[File Ends] bp_phi/__init__.py
[File Begins] bp_phi/llm_iface.py
# bp_phi/llm_iface.py
import os
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
from typing import Optional

DEBUG = os.getenv("BP_PHI_DEBUG", "0") == "1"

def dbg(*args):
    if DEBUG:
        print("[DEBUG:llm_iface]", *args, flush=True)

class LLM:
    def __init__(self, model_id: str, device: str = "auto", dtype: Optional[str] = None, seed: int = 42):
        self.model_id = model_id
        self.seed = seed
        set_seed(seed)

        token = os.environ.get("HF_TOKEN")
        self.tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True, token=token)
        # Ensure a pad token is set for batch generation, if not present
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token

        kwargs = {}
        if torch.cuda.is_available():
            kwargs["torch_dtype"] = torch.bfloat16
        self.model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device, token=token, **kwargs)
        self.model.eval()

        self.is_instruction_tuned = bool(hasattr(self.tokenizer, "apply_chat_template") and self.tokenizer.chat_template)
        dbg(f"Loaded model: {model_id}, Chat-template: {self.is_instruction_tuned}")

    def generate_response(self, system_prompt: str, user_prompt: str, temperature: float = 0.1) -> str:
        # Re-seed before every generation so repeated calls are reproducible.
        set_seed(self.seed)
        messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt}]
        prompt = self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
        input_token_length = inputs.input_ids.shape[1]

        with torch.no_grad():
            # Stop on EOS; also honor "<|eot_id|>" if the tokenizer defines it.
            terminators = [
                self.tokenizer.eos_token_id,
                self.tokenizer.convert_tokens_to_ids("<|eot_id|>") if "<|eot_id|>" in self.tokenizer.additional_special_tokens else self.tokenizer.eos_token_id
            ]
            out = self.model.generate(
                **inputs,
                do_sample=(temperature > 0),  # sample whenever temperature is positive
                temperature=max(temperature, 0.01),  # temperature must be > 0 for sampling
                max_new_tokens=150,
                eos_token_id=terminators,
                pad_token_id=self.tokenizer.eos_token_id
            )

        completion = self.tokenizer.decode(out[0, input_token_length:], skip_special_tokens=True)
        dbg("Cleaned Agent Completion:", completion)
        return completion
[File Ends] bp_phi/llm_iface.py
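For illustration, a minimal usage sketch of this interface (the prompts are hypothetical; gated models need `HF_TOKEN` in the environment):

```python
# Minimal sketch: load the model once, then generate deterministically.
from bp_phi.llm_iface import LLM

llm = LLM("google/gemma-3-1b-it", seed=42)
reply = llm.generate_response(
    system_prompt="You answer in one short sentence.",
    user_prompt="What is 5 multiplied by 8?",
    temperature=0.1,
)
print(reply)
```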
[File Begins] bp_phi/memory.py
# bp_phi/memory.py
import random
from typing import Dict

class WorkspaceManager:
    """A stateful, external workspace that the LLM agent can interact with via tools."""
    def __init__(self, max_slots: int = 7, is_random: bool = False):
        self.max_slots = max_slots
        self.is_random = is_random
        self.slots: Dict[str, str] = {}

    def write(self, key: str, content: str) -> str:
        """Writes content to a slot, evicting an old slot when capacity is reached."""
        if len(self.slots) >= self.max_slots and key not in self.slots:
            if self.is_random:
                evict_key = random.choice(list(self.slots.keys()))
            else:
                # Simple FIFO eviction: dicts preserve insertion order,
                # so the first key is the oldest.
                evict_key = next(iter(self.slots))
            del self.slots[evict_key]
        self.slots[key] = content
        return f"Success: Wrote to slot '{key}'."

    def read(self, key: str) -> str:
        """Reads content from a slot."""
        return self.slots.get(key, f"Error: Slot '{key}' is empty.")

    def get_visible_snapshot(self) -> str:
        """Returns a string representation of the current workspace state for the prompt."""
        if not self.slots:
            return "Workspace is empty."
        return "\n".join([f"- Slot '{k}': '{v[:100]}...'" for k, v in self.slots.items()])

    def clear(self):
        """Empties the entire workspace."""
        self.slots.clear()
[File Ends] bp_phi/memory.py
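For illustration, a short usage sketch of `WorkspaceManager` showing the capacity-limited FIFO eviction (keys and values are hypothetical):

```python
from bp_phi.memory import WorkspaceManager

ws = WorkspaceManager(max_slots=2)                # deterministic FIFO eviction
ws.write("S1", "The secret key is in the blue vase.")
ws.write("S2", "5 * 8 = 40")
ws.write("S3", "Package #A7 -> Warehouse-South")  # evicts "S1", the oldest slot
print(ws.read("S1"))                              # -> "Error: Slot 'S1' is empty."
print(ws.get_visible_snapshot())
```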
| [File Begins] bp_phi/metrics.py | |
| import numpy as np | |
| from sklearn.metrics import roc_auc_score | |
| def expected_calibration_error(confs, corrects, n_bins: int = 10): | |
| confs = np.array(confs, dtype=float) | |
| corrects = np.array(corrects, dtype=int) | |
| if len(confs) == 0: | |
| return None | |
| bins = np.linspace(0.0, 1.0, n_bins+1) | |
| ece = 0.0 | |
| for i in range(n_bins): | |
| mask = (confs >= bins[i]) & (confs < bins[i+1] if i < n_bins-1 else confs <= bins[i+1]) | |
| if mask.any(): | |
| acc = corrects[mask].mean() | |
| conf = confs[mask].mean() | |
| ece += (mask.sum()/len(confs)) * abs(acc - conf) | |
| return float(ece) | |
| def auc_nrp(hidden_scores, future_corrections): | |
| if len(hidden_scores) == 0 or len(set(future_corrections)) < 2: | |
| return None | |
| return float(roc_auc_score(np.array(future_corrections).astype(int), np.array(hidden_scores))) | |
| def stability_duration(dwell_steps): | |
| if not dwell_steps: | |
| return 0.0 | |
| return float(np.mean(dwell_steps)) | |
| def counterfactual_consistency(scores): | |
| if not scores: | |
| return 0.0 | |
| return float(np.mean(scores)) | |
| [File Ends] bp_phi/metrics.py | |
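A small worked example of `expected_calibration_error` on toy data (all values chosen arbitrarily for illustration):

```python
from bp_phi.metrics import expected_calibration_error

confs = [0.9, 0.8, 0.95, 0.9]   # stated confidences
corrects = [1, 1, 1, 0]         # whether each answer was actually right
# Bin [0.8, 0.9): |1.0 - 0.8| * 1/4 = 0.05
# Bin [0.9, 1.0]: |2/3 - 0.9167| * 3/4 = 0.1875
print(expected_calibration_error(confs, corrects))  # -> 0.2375
```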
[File Begins] bp_phi/prompts_en.py
# bp_phi/prompts_en.py

# This system prompt guides the model through a ReAct (Reason-Act) loop.
AGENT_SYSTEM_PROMPT = """You are a methodical reasoning agent. Your goal is to solve the user's task.
You have access to an external memory workspace through tools.
In each step, you must choose one of three actions:
1. **THINK**: Analyze the task, the history, and the current memory state. Formulate a plan.
Your output MUST be a JSON object like this:
{"action": "THINK", "thought": "Your reasoning about the next step goes here."}
2. **TOOL_CALL**: If you need to use the memory, call one of the available tools.
Available tools:
- `write_to_workspace(key: str, content: str)`: Stores or overwrites information.
- `read_from_workspace(key: str)`: Retrieves information.
Your output MUST be a JSON object like this:
{"action": "TOOL_CALL", "tool_name": "write_to_workspace", "tool_args": {"key": "S1", "content": "Information to remember."}}
3. **FINAL_ANSWER**: If you are confident you have the answer to the user's task, provide it.
Your output MUST be a JSON object like this:
{"action": "FINAL_ANSWER", "answer": "The final answer is..."}
Review the conversation history and workspace state carefully before each action. Output ONLY the JSON for your next chosen action.
"""

# The scenarios remain the high-level goals for the agent.
AGENTIC_SCENARIOS = [
    {
        "name": "Key Location Memory",
        "steps": [
            {"task": "Remember this critical detail: The secret key is inside the blue vase."},
            {"task": "For an unrelated question: What is 5 multiplied by 8?"},
            {"task": "Now, recall the critical detail. Where is the secret key located?", "expected_answer_fragment": "blue vase"}
        ]
    },
    {
        "name": "Package Delivery Update",
        "steps": [
            {"task": "Logistics update: Package #A7 is at Warehouse-North."},
            {"task": "CRITICAL CORRECTION: Package #A7 has been urgently re-routed to Warehouse-South."},
            {"task": "Final audit: What is the current, definitive location of Package #A7?", "expected_answer_fragment": "warehouse-south"}
        ]
    }
]
[File Ends] bp_phi/prompts_en.py
[File Begins] bp_phi/runner.py
# bp_phi/runner.py
import os
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
import statistics
import json
import re
from transformers import set_seed
from typing import Dict, Any, Optional
from .memory import WorkspaceManager
from .llm_iface import LLM
from .prompts_en import AGENT_SYSTEM_PROMPT, AGENTIC_SCENARIOS

DEBUG = os.getenv("BP_PHI_DEBUG", "0") == "1"

def dbg(*args):
    if DEBUG:
        print("[DEBUG]", *args, flush=True)

def run_agentic_workspace_test(model_id: str, seed: int, temperature: float, ablation: Optional[str]) -> Dict[str, Any]:
    set_seed(seed)
    llm = LLM(model_id=model_id, device="auto", seed=seed)
    scenario_results = []

    for scenario in AGENTIC_SCENARIOS:
        dbg(f"\n--- SCENARIO: {scenario['name']} (Ablation: {ablation}) ---")
        is_random = ablation == "random_workspace"
        max_slots = 999 if ablation == "workspace_unlimited" else 7
        memory = WorkspaceManager(max_slots=max_slots, is_random=is_random)
        correct_recalls = 0
        total_recalls = 0

        for step in scenario["steps"]:
            if ablation == "recurrence_off":
                memory.clear()  # Wipe memory between steps: no carry-over means no recurrence.
            task = step["task"]
            dbg(f"\n>>> TASK: {task}")
            conversation_history = []

            for agent_turn in range(8):  # Turn limit per task
                snapshot = memory.get_visible_snapshot()
                # Construct the prompt for the agent
                prompt_parts = [f"Conversation History:\n{''.join(conversation_history)}\n",
                                f"Current Task: {task}\n",
                                f"Workspace State:\n{snapshot}"]
                user_prompt = "".join(prompt_parts)

                raw_response = llm.generate_response(AGENT_SYSTEM_PROMPT, user_prompt, temperature=temperature)

                try:
                    # Greedy match so nested objects (e.g. tool_args) are captured whole.
                    match = re.search(r'\{.*\}', raw_response, re.DOTALL)
                    if not match:
                        raise ValueError("No JSON found")
                    parsed_json = json.loads(match.group(0))
                    action = parsed_json.get("action")

                    if action == "THINK":
                        thought = parsed_json.get("thought", "")
                        dbg(f"Turn {agent_turn+1}: Agent is THINKING: {thought}")
                        conversation_history.append(f"Thought: {thought}\n")

                    elif action == "TOOL_CALL":
                        tool_name = parsed_json.get("tool_name")
                        tool_args = parsed_json.get("tool_args", {})
                        observation = "Error: Unknown tool."
                        if tool_name == "write_to_workspace":
                            observation = memory.write(tool_args.get("key"), tool_args.get("content"))
                        elif tool_name == "read_from_workspace":
                            observation = memory.read(tool_args.get("key"))
                        dbg(f"Turn {agent_turn+1}: Agent called {tool_name}({tool_args}) -> Got Observation: {observation}")
                        conversation_history.append(f"Tool Call: {json.dumps(parsed_json)}\nObservation: {observation}\n")

                    elif action == "FINAL_ANSWER":
                        final_answer = parsed_json.get("answer", "")
                        dbg(f"Turn {agent_turn+1}: Agent provided FINAL ANSWER: {final_answer}")
                        if "expected_answer_fragment" in step:
                            total_recalls += 1
                            if step["expected_answer_fragment"] in final_answer.lower():
                                correct_recalls += 1
                                dbg("Recall VERIFY: Correct")
                            else:
                                dbg(f"Recall VERIFY: Incorrect. Expected '{step['expected_answer_fragment']}', Got '{final_answer}'")
                        break  # End of this task

                    else:  # Invalid action
                        dbg(f"Turn {agent_turn+1}: Invalid action '{action}'. Stopping.")
                        break

                except (json.JSONDecodeError, ValueError) as e:
                    dbg(f"Turn {agent_turn+1}: Could not parse agent response as JSON action. Treating as final answer. Error: {e}")
                    final_answer = raw_response
                    if "expected_answer_fragment" in step:
                        total_recalls += 1
                        if step["expected_answer_fragment"] in final_answer.lower():
                            correct_recalls += 1
                    break
            else:  # Loop finished without a FINAL_ANSWER
                dbg("Agent exceeded turn limit.")

        scenario_results.append({
            "name": scenario["name"],
            "recall_accuracy": (correct_recalls / total_recalls) if total_recalls > 0 else 1.0
        })

    overall_recall = statistics.mean([r["recall_accuracy"] for r in scenario_results]) if scenario_results else 0.0
    return {"Overall_Recall_Accuracy": overall_recall, "details": scenario_results}
[File Ends] bp_phi/runner.py
[File Begins] bp_phi/runner_utils.py
# bp_phi/runner_utils.py
import re
import json
from typing import Dict, Any

DEBUG = 1

def dbg(*args):
    if DEBUG:
        print("[DEBUG]", *args, flush=True)

SYSTEM_META = """You are a structured reasoning assistant.
Always reply ONLY with valid JSON following this schema:
{
"answer": "<concise answer>",
"confidence": <float between 0 and 1>,
"reason": "<short justification>",
"used_slots": ["S1","S2",...],
"evicted": ["S3",...]
}
"""

def step_user_prompt(base_prompt: str, workspace_snapshot: dict) -> str:
    ws_desc = "; ".join([f"{slot['key']}={slot['content'][:40]}" for slot in workspace_snapshot.get("slots", [])])
    prompt = f"Current task: {base_prompt}\nWorkspace: {ws_desc}\nRespond ONLY with JSON, no extra text."
    dbg("USER PROMPT:", prompt)
    return prompt

def parse_meta(raw_text: str) -> Dict[str, Any]:
    """Extracts and normalizes the JSON metadata object from a raw model reply."""
    dbg("RAW MODEL OUTPUT:", raw_text)
    # Prefer a fenced ```json block; fall back to the first brace-delimited span.
    json_match = re.search(r'```json\s*(\{.*?\})\s*```', raw_text, re.DOTALL)
    if not json_match:
        json_match = re.search(r'(\{.*?\})', raw_text, re.DOTALL)
    if not json_match:
        dbg("❌ JSON not found in text.")
        return {"answer": "", "confidence": 0.0, "reason": "", "used_slots": [], "evicted": []}
    json_text = json_match.group(1)
    try:
        data = json.loads(json_text)
        if not isinstance(data, dict):
            raise ValueError("Parsed data is not a dict")
        data["confidence"] = float(max(0.0, min(1.0, data.get("confidence", 0.0))))
        data["answer"] = str(data.get("answer", "")).strip()
        data["reason"] = str(data.get("reason", "")).strip()
        data["used_slots"] = list(map(str, data.get("used_slots", [])))
        data["evicted"] = list(map(str, data.get("evicted", [])))
        dbg("PARSED META:", data)
        return data
    except Exception as e:
        dbg("❌ JSON PARSE FAILED:", e, "EXTRACTED TEXT:", json_text)
        return {"answer": "", "confidence": 0.0, "reason": "", "used_slots": [], "evicted": []}
[File Ends] bp_phi/runner_utils.py
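A quick illustration of `parse_meta`'s extraction on a chatty reply (the raw string is hypothetical):

```python
from bp_phi.runner_utils import parse_meta

raw = 'Sure! {"answer": "blue vase", "confidence": 0.9, "reason": "recalled"} Hope this helps.'
meta = parse_meta(raw)
print(meta["answer"], meta["confidence"])  # -> blue vase 0.9
# Missing schema fields are normalized to safe defaults:
print(meta["used_slots"], meta["evicted"])  # -> [] []
```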
[File Begins] bp_phi/workspace.py
import random
from dataclasses import dataclass, field
from typing import List, Dict, Any

@dataclass
class Slot:
    key: str
    content: str
    salience: float

@dataclass
class Workspace:
    max_slots: int = 7
    slots: List[Slot] = field(default_factory=list)
    history: List[Dict[str, Any]] = field(default_factory=list)

    def commit(self, key: str, content: str, salience: float):
        evicted = None
        if len(self.slots) >= self.max_slots:
            # Evict the lowest-salience slot first.
            self.slots.sort(key=lambda s: s.salience)
            evicted = self.slots.pop(0)
        self.slots.append(Slot(key=key, content=content, salience=salience))
        self.history.append({"event": "commit", "key": key, "salience": salience, "evicted": evicted.key if evicted else None})
        return evicted

    def snapshot(self) -> Dict[str, Any]:
        return {"slots": [{"key": s.key, "content": s.content, "salience": s.salience} for s in self.slots]}

    def randomize(self):
        random.shuffle(self.slots)

    def clear(self):
        self.slots.clear()

class RandomWorkspace(Workspace):
    """Ablation variant: evicts and inserts at random positions, ignoring salience."""
    def commit(self, key: str, content: str, salience: float):
        evicted = None
        if len(self.slots) >= self.max_slots:
            idx = random.randrange(len(self.slots))
            evicted = self.slots.pop(idx)
        idx = random.randrange(len(self.slots) + 1) if self.slots else 0
        self.slots.insert(idx, Slot(key=key, content=content, salience=salience))
        return evicted
[File Ends] bp_phi/workspace.py
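For illustration, a sketch of the salience-based eviction in `Workspace.commit` (keys and saliences are hypothetical):

```python
from bp_phi.workspace import Workspace

ws = Workspace(max_slots=2)
ws.commit("S1", "low-priority note", salience=0.2)
ws.commit("S2", "important fact", salience=0.9)
evicted = ws.commit("S3", "newer fact", salience=0.5)
print(evicted.key)                 # -> S1 (lowest salience is dropped first)
print([s.key for s in ws.slots])   # -> ['S2', 'S3']
```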
<-- File Content Ends