ncncomplete committed (verified)
Commit 6b42632 · Parent: 85ff496

Upload folder using huggingface_hub
Dockerfile ADDED
@@ -0,0 +1,81 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ # Multi-stage build using openenv-base
+ # This Dockerfile is flexible and works for both:
+ # - In-repo environments (with local OpenEnv sources)
+ # - Standalone environments (with openenv from PyPI/Git)
+ # The build script (openenv build) handles context detection and sets appropriate build args.
+
+ ARG BASE_IMAGE=ghcr.io/meta-pytorch/openenv-base:latest
+ FROM ${BASE_IMAGE} AS builder
+
+ WORKDIR /app
+
+ # Ensure git is available (required for installing dependencies from VCS)
+ RUN apt-get update && \
+     apt-get install -y --no-install-recommends git && \
+     rm -rf /var/lib/apt/lists/*
+
+ # Build argument to control whether we're building standalone or in-repo
+ ARG BUILD_MODE=in-repo
+ ARG ENV_NAME=code_review_env
+
+ # Copy environment code (always at root of build context)
+ COPY . /app/env
+
+ # For in-repo builds, openenv is already vendored in the build context
+ # For standalone builds, openenv will be installed via pyproject.toml
+ WORKDIR /app/env
+
+ # Ensure uv is available (for local builds where base image lacks it)
+ RUN if ! command -v uv >/dev/null 2>&1; then \
+         curl -LsSf https://astral.sh/uv/install.sh | sh && \
+         mv /root/.local/bin/uv /usr/local/bin/uv && \
+         mv /root/.local/bin/uvx /usr/local/bin/uvx; \
+     fi
+
+ # Install dependencies using uv sync
+ # If uv.lock exists, use it; otherwise resolve on the fly
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     if [ -f uv.lock ]; then \
+         uv sync --frozen --no-install-project --no-editable; \
+     else \
+         uv sync --no-install-project --no-editable; \
+     fi
+
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     if [ -f uv.lock ]; then \
+         uv sync --frozen --no-editable; \
+     else \
+         uv sync --no-editable; \
+     fi
+
+ # Final runtime stage
+ FROM ${BASE_IMAGE}
+
+ WORKDIR /app
+
+ # Copy the virtual environment from builder
+ COPY --from=builder /app/env/.venv /app/.venv
+
+ # Copy the environment code
+ COPY --from=builder /app/env /app/env
+
+ # Set PATH to use the virtual environment
+ ENV PATH="/app/.venv/bin:$PATH"
+
+ # Set PYTHONPATH so imports work correctly
+ ENV PYTHONPATH="/app/env:$PYTHONPATH"
+
+ # Health check
+ HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
+     CMD curl -f http://localhost:8000/health || exit 1
+
+ # Run the FastAPI server
+ # The module path is constructed to work with the /app/env structure
+ ENV ENABLE_WEB_INTERFACE=true
+ CMD ["sh", "-c", "cd /app/env && uvicorn server.app:app --host 0.0.0.0 --port 8000"]
README.md CHANGED
@@ -1,10 +1,255 @@
  ---
- title: Code Review Env
- emoji: 🐒
- colorFrom: purple
- colorTo: purple
  sdk: docker
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: Code Review Env Environment Server
+ emoji: 🎯
+ colorFrom: pink
+ colorTo: pink
  sdk: docker
  pinned: false
+ app_port: 8000
+ base_path: /web
+ tags:
+   - openenv
  ---

+ # Code Review Env Environment
+
+ A simple test environment that echoes back messages. Perfect for testing the env APIs as well as demonstrating environment usage patterns.
+
+ ## Quick Start
+
+ The simplest way to use the Code Review Env environment is through the `CodeReviewEnv` class:
+
+ ```python
+ from code_review_env import CodeReviewAction, CodeReviewEnv
+
+ try:
+     # Create environment from Docker image
+     code_review_envenv = CodeReviewEnv.from_docker_image("code_review_env-env:latest")
+
+     # Reset
+     result = code_review_envenv.reset()
+     print(f"Reset: {result.observation.echoed_message}")
+
+     # Send multiple messages
+     messages = ["Hello, World!", "Testing echo", "Final message"]
+
+     for msg in messages:
+         result = code_review_envenv.step(CodeReviewAction(message=msg))
+         print(f"Sent: '{msg}'")
+         print(f"  → Echoed: '{result.observation.echoed_message}'")
+         print(f"  → Length: {result.observation.message_length}")
+         print(f"  → Reward: {result.reward}")
+
+ finally:
+     # Always clean up
+     code_review_envenv.close()
+ ```
+
+ That's it! The `CodeReviewEnv.from_docker_image()` method handles:
+ - Starting the Docker container
+ - Waiting for the server to be ready
+ - Connecting to the environment
+ - Container cleanup when you call `close()`
+
+ ## Building the Docker Image
+
+ Before using the environment, you need to build the Docker image:
+
+ ```bash
+ # From project root
+ docker build -t code_review_env-env:latest -f server/Dockerfile .
+ ```
+
+ ## Deploying to Hugging Face Spaces
+
+ You can easily deploy your OpenEnv environment to Hugging Face Spaces using the `openenv push` command:
+
+ ```bash
+ # From the environment directory (where openenv.yaml is located)
+ openenv push
+
+ # Or specify options
+ openenv push --namespace my-org --private
+ ```
+
+ The `openenv push` command will:
+ 1. Validate that the directory is an OpenEnv environment (checks for `openenv.yaml`)
+ 2. Prepare a custom build for Hugging Face Docker space (enables web interface)
+ 3. Upload to Hugging Face (ensuring you're logged in)
+
+ ### Prerequisites
+
+ - Authenticate with Hugging Face: The command will prompt for login if not already authenticated
+
+ ### Options
+
+ - `--directory`, `-d`: Directory containing the OpenEnv environment (defaults to current directory)
+ - `--repo-id`, `-r`: Repository ID in format 'username/repo-name' (defaults to 'username/env-name' from openenv.yaml)
+ - `--base-image`, `-b`: Base Docker image to use (overrides Dockerfile FROM)
+ - `--private`: Deploy the space as private (default: public)
+
+ ### Examples
+
+ ```bash
+ # Push to your personal namespace (defaults to username/env-name from openenv.yaml)
+ openenv push
+
+ # Push to a specific repository
+ openenv push --repo-id my-org/my-env
+
+ # Push with a custom base image
+ openenv push --base-image ghcr.io/meta-pytorch/openenv-base:latest
+
+ # Push as a private space
+ openenv push --private
+
+ # Combine options
+ openenv push --repo-id my-org/my-env --base-image custom-base:latest --private
+ ```
+
+ After deployment, your space will be available at:
+ `https://huggingface.co/spaces/<repo-id>`
+
+ The deployed space includes:
+ - **Web Interface** at `/web` - Interactive UI for exploring the environment
+ - **API Documentation** at `/docs` - Full OpenAPI/Swagger interface
+ - **Health Check** at `/health` - Container health monitoring
+ - **WebSocket** at `/ws` - Persistent session endpoint for low-latency interactions
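
To sanity-check a deployment, you can hit those endpoints directly. A minimal sketch using `requests`, assuming the usual `<owner>-<space-name>.hf.space` host naming for Spaces; the URL below is a placeholder, not taken from this repo:

```python
import requests

# Placeholder host; substitute your own deployed space URL
SPACE_URL = "https://<username>-<space-name>.hf.space"

# Same endpoint the Dockerfile HEALTHCHECK polls inside the container
print("health:", requests.get(f"{SPACE_URL}/health").status_code)

# OpenAPI schema that backs the /docs UI
print("openapi:", requests.get(f"{SPACE_URL}/openapi.json").status_code)
```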
+
+ ## Environment Details
+
+ ### Action
+ **CodeReviewAction**: Contains a single field
+ - `message` (str) - The message to echo back
+
+ ### Observation
+ **CodeReviewObservation**: Contains the echo response and metadata
+ - `echoed_message` (str) - The message echoed back
+ - `message_length` (int) - Length of the message
+ - `reward` (float) - Reward based on message length (length × 0.1)
+ - `done` (bool) - Always False for echo environment
+ - `metadata` (dict) - Additional info like step count
+
+ ### Reward
+ The reward is calculated as: `message_length × 0.1`
+ - "Hi" → reward: 0.2
+ - "Hello, World!" → reward: 1.3
+ - Empty message → reward: 0.0
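
As a quick check of the arithmetic above, here is the documented rule written out in code. This only restates the formula from this README; it is not taken from the server's internal scoring logic:

```python
# Documented rule: 0.1 reward per character of the message
def documented_reward(message: str) -> float:
    return len(message) * 0.1

assert round(documented_reward("Hi"), 1) == 0.2             # 2 characters
assert round(documented_reward("Hello, World!"), 1) == 1.3  # 13 characters
assert documented_reward("") == 0.0                         # empty message
```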
+
+ ## Advanced Usage
+
+ ### Connecting to an Existing Server
+
+ If you already have a Code Review Env environment server running, you can connect directly:
+
+ ```python
+ from code_review_env import CodeReviewEnv
+
+ # Connect to existing server
+ code_review_envenv = CodeReviewEnv(base_url="<ENV_HTTP_URL_HERE>")
+
+ # Use as normal
+ result = code_review_envenv.reset()
+ result = code_review_envenv.step(CodeReviewAction(message="Hello!"))
+ ```
+
+ Note: When connecting to an existing server, `code_review_envenv.close()` will NOT stop the server.
+
+ ### Using the Context Manager
+
+ The client supports context manager usage for automatic connection management:
+
+ ```python
+ from code_review_env import CodeReviewAction, CodeReviewEnv
+
+ # Connect with context manager (auto-connects and closes)
+ with CodeReviewEnv(base_url="http://localhost:8000") as env:
+     result = env.reset()
+     print(f"Reset: {result.observation.echoed_message}")
+     # Multiple steps with low latency
+     for msg in ["Hello", "World", "!"]:
+         result = env.step(CodeReviewAction(message=msg))
+         print(f"Echoed: {result.observation.echoed_message}")
+ ```
+
+ The client uses WebSocket connections for:
+ - **Lower latency**: No HTTP connection overhead per request
+ - **Persistent session**: Server maintains your environment state
+ - **Efficient for episodes**: Better for many sequential steps
+
+ ### Concurrent WebSocket Sessions
+
+ The server supports multiple concurrent WebSocket connections. To enable this,
+ modify `server/app.py` to use factory mode:
+
+ ```python
+ # In server/app.py - use factory mode for concurrent sessions
+ app = create_app(
+     CodeReviewEnvironment,  # Pass class, not instance
+     CodeReviewAction,
+     CodeReviewObservation,
+     max_concurrent_envs=4,  # Allow 4 concurrent sessions
+ )
+ ```
+
+ Then multiple clients can connect simultaneously:
+
+ ```python
+ from code_review_env import CodeReviewAction, CodeReviewEnv
+ from concurrent.futures import ThreadPoolExecutor
+
+ def run_episode(client_id: int):
+     with CodeReviewEnv(base_url="http://localhost:8000") as env:
+         result = env.reset()
+         for i in range(10):
+             result = env.step(CodeReviewAction(message=f"Client {client_id}, step {i}"))
+         return client_id, result.observation.message_length
+
+ # Run 4 episodes concurrently
+ with ThreadPoolExecutor(max_workers=4) as executor:
+     results = list(executor.map(run_episode, range(4)))
+ ```
+
+ ## Development & Testing
+
+ ### Direct Environment Testing
+
+ Test the environment logic directly without starting the HTTP server:
+
+ ```bash
+ # From the server directory
+ python3 server/code_review_env_environment.py
+ ```
+
+ This verifies that:
+ - Environment resets correctly
+ - Step executes actions properly
+ - State tracking works
+ - Rewards are calculated correctly
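
The committed environment module does not show a `__main__` block, so here is a rough sketch of such a direct check. It reuses the hardcoded "perfect" answer from the `/grader` endpoint in `server/app.py` and assumes it is run from the environment root so `models` and `server` are importable:

```python
from models import ReviewAction
from server.code_review_env_environment import CodeReviewEnvironment

env = CodeReviewEnvironment()
obs = env.reset("easy")
print(obs.task_description)
print(obs.code_snippet)

# Same "perfect" answer the /grader endpoint submits for the easy task
obs = env.step(ReviewAction(
    review="Line 1 is missing a colon after the function definition. This is a syntax error.",
    bug_type="syntax",
    line_number=1,
    confidence=0.95,
))
print(obs.previous_feedback)
print("cumulative reward:", env.state.cumulative_reward)
```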
+
+ ### Running Locally
+
+ Run the server locally for development:
+
+ ```bash
+ uvicorn server.app:app --reload
+ ```
+
+ ## Project Structure
+
+ ```
+ code_review_env/
+ ├── .dockerignore                       # Docker build exclusions
+ ├── __init__.py                         # Module exports
+ ├── README.md                           # This file
+ ├── openenv.yaml                        # OpenEnv manifest
+ ├── pyproject.toml                      # Project metadata and dependencies
+ ├── uv.lock                             # Locked dependencies (generated)
+ ├── client.py                           # CodeReviewEnv client
+ ├── models.py                           # Action and Observation models
+ └── server/
+     ├── __init__.py                     # Server module exports
+     ├── code_review_env_environment.py  # Core environment logic
+     ├── app.py                          # FastAPI application (HTTP + WebSocket endpoints)
+     └── Dockerfile                      # Container image definition
+ ```
__init__.py ADDED
@@ -0,0 +1,16 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Code Review Env Environment."""
+
+ from .client import CodeReviewEnv
+ from .models import CodeReviewAction, CodeReviewObservation
+
+ __all__ = [
+     "CodeReviewAction",
+     "CodeReviewObservation",
+     "CodeReviewEnv",
+ ]
baseline.py ADDED
@@ -0,0 +1,145 @@
+ #!/usr/bin/env python3
+ """
+ Standalone baseline inference script.
+ Uses OpenAI gpt-4o-mini to review Python code across 3 difficulty levels.
+ Saves results to baseline_scores.json.
+ """
+
+ import os
+ import json
+ import requests
+ from openai import OpenAI
+
+ # Initialize OpenAI client
+ api_key = os.getenv("OPENAI_API_KEY")
+ if not api_key:
+     raise ValueError("OPENAI_API_KEY environment variable not set")
+
+ client = OpenAI(api_key=api_key)
+
+ # Server endpoint
+ BASE_URL = "http://localhost:8000"
+ TASKS = ["easy", "medium", "hard"]
+
+ def reset_task(task_id: str) -> dict:
+     """Reset environment for a given task_id."""
+     response = requests.post(
+         f"{BASE_URL}/reset",
+         json={"task_id": task_id}
+     )
+     response.raise_for_status()
+     return response.json()
+
+ def step_task(action: dict) -> dict:
+     """Submit action to environment and get observation."""
+     response = requests.post(
+         f"{BASE_URL}/step",
+         json={"action": action}
+     )
+     response.raise_for_status()
+     return response.json()
+
+ def review_code(code_snippet: str) -> dict:
+     """Use GPT-4o-mini to review code snippet."""
+     prompt = f"""Review this Python code. Reply as JSON with keys: review (str), bug_type (syntax/logic/security/none), line_number (int), confidence (float)
+
+ Code:
+ {code_snippet}"""
+
+     response = client.chat.completions.create(
+         model="gpt-4o-mini",
+         messages=[
+             {"role": "user", "content": prompt}
+         ],
+         temperature=0.7
+     )
+
+     content = response.choices[0].message.content
+
+     # Try to extract JSON from response
+     try:
+         # First try direct JSON parsing
+         result = json.loads(content)
+     except json.JSONDecodeError:
+         # Try to find JSON in the response text
+         start = content.find('{')
+         end = content.rfind('}') + 1
+         if start != -1 and end > start:
+             result = json.loads(content[start:end])
+         else:
+             raise ValueError(f"Could not parse JSON from response: {content}")
+
+     return result
+
+ def run_baseline():
+     """Run baseline inference on all tasks."""
+     results = {
+         "scores": {},
+         "details": {}
+     }
+
+     for task_id in TASKS:
+         print(f"\n{'='*60}")
+         print(f"Running task: {task_id}")
+         print('='*60)
+
+         # Reset environment
+         obs = reset_task(task_id)
+         code_snippet = obs.get("code_snippet", "")
+         print(f"Code snippet:\n{code_snippet}\n")
+
+         # Get review from GPT-4o-mini
+         print("Calling GPT-4o-mini for review...")
+         review_result = review_code(code_snippet)
+         print(f"Review result: {review_result}")
+
+         # Prepare action
+         action = {
+             "review": review_result.get("review", ""),
+             "bug_type": review_result.get("bug_type", "none"),
+             "line_number": int(review_result.get("line_number", -1)),
+             "confidence": float(review_result.get("confidence", 0.0))
+         }
+
+         # Submit action to environment
+         print(f"Submitting action: {action}")
+         step_obs = step_task(action)
+
+         # Extract score from observation
+         # The step response should have reward/score information
+         score = step_obs.get("cumulative_reward", 0.0)
+         feedback = step_obs.get("previous_feedback", "")
+
+         print(f"Score: {score}")
+         print(f"Feedback: {feedback}")
+
+         results["scores"][task_id] = score
+         results["details"][task_id] = {
+             "action": action,
+             "feedback": feedback,
+             "score": score
+         }
+
+     # Calculate average
+     scores = list(results["scores"].values())
+     average = sum(scores) / len(scores) if scores else 0.0
+     results["average"] = round(average, 4)
+
+     # Print summary
+     print(f"\n{'='*60}")
+     print("BASELINE RESULTS")
+     print('='*60)
+     for task_id in TASKS:
+         print(f"{task_id:10s}: {results['scores'][task_id]:.4f}")
+     print(f"{'Average':10s}: {results['average']:.4f}")
+     print('='*60 + "\n")
+
+     # Save to file
+     with open("baseline_scores.json", "w") as f:
+         json.dump(results, f, indent=2)
+
+     print("Results saved to baseline_scores.json")
+     return results
+
+ if __name__ == "__main__":
+     run_baseline()
client.py ADDED
@@ -0,0 +1,99 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Code Review Env Environment Client."""
+
+ from typing import Dict
+
+ from openenv.core import EnvClient
+ from openenv.core.client_types import StepResult
+ from openenv.core.env_server.types import State
+
+ from .models import CodeReviewAction, CodeReviewObservation
+
+
+ class CodeReviewEnv(
+     EnvClient[CodeReviewAction, CodeReviewObservation, State]
+ ):
+     """
+     Client for the Code Review Env Environment.
+
+     This client maintains a persistent WebSocket connection to the environment server,
+     enabling efficient multi-step interactions with lower latency.
+     Each client instance has its own dedicated environment session on the server.
+
+     Example:
+         >>> # Connect to a running server
+         >>> with CodeReviewEnv(base_url="http://localhost:8000") as client:
+         ...     result = client.reset()
+         ...     print(result.observation.echoed_message)
+         ...
+         ...     result = client.step(CodeReviewAction(message="Hello!"))
+         ...     print(result.observation.echoed_message)
+
+     Example with Docker:
+         >>> # Automatically start container and connect
+         >>> client = CodeReviewEnv.from_docker_image("code_review_env-env:latest")
+         >>> try:
+         ...     result = client.reset()
+         ...     result = client.step(CodeReviewAction(message="Test"))
+         ... finally:
+         ...     client.close()
+     """
+
+     def _step_payload(self, action: CodeReviewAction) -> Dict:
+         """
+         Convert CodeReviewAction to JSON payload for step message.
+
+         Args:
+             action: CodeReviewAction instance
+
+         Returns:
+             Dictionary representation suitable for JSON encoding
+         """
+         return {
+             "message": action.message,
+         }
+
+     def _parse_result(self, payload: Dict) -> StepResult[CodeReviewObservation]:
+         """
+         Parse server response into StepResult[CodeReviewObservation].
+
+         Args:
+             payload: JSON response data from server
+
+         Returns:
+             StepResult with CodeReviewObservation
+         """
+         obs_data = payload.get("observation", {})
+         observation = CodeReviewObservation(
+             echoed_message=obs_data.get("echoed_message", ""),
+             message_length=obs_data.get("message_length", 0),
+             done=payload.get("done", False),
+             reward=payload.get("reward"),
+             metadata=obs_data.get("metadata", {}),
+         )
+
+         return StepResult(
+             observation=observation,
+             reward=payload.get("reward"),
+             done=payload.get("done", False),
+         )
+
+     def _parse_state(self, payload: Dict) -> State:
+         """
+         Parse server response into State object.
+
+         Args:
+             payload: JSON response from state request
+
+         Returns:
+             State object with episode_id and step_count
+         """
+         return State(
+             episode_id=payload.get("episode_id"),
+             step_count=payload.get("step_count", 0),
+         )
models.py ADDED
@@ -0,0 +1,44 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ Data models for the Code Review Environment.
+ Agent receives Python code snippets and must identify bugs.
+ """
+
+ from __future__ import annotations
+ from typing import Optional
+ from openenv.core.env_server.interfaces import Action, Observation, State
+
+
+ class ReviewAction(Action):
+     """Action taken by the agent to review a code snippet."""
+     review: str        # agent's written analysis
+     bug_type: str      # "syntax" | "logic" | "security" | "none"
+     line_number: int   # which line has the issue, -1 if unknown
+     confidence: float  # agent's confidence 0.0-1.0
+
+
+ class ReviewObservation(Observation):
+     """What the agent sees at each step."""
+     code_snippet: str       # the Python code to review
+     task_description: str   # what the agent is asked to do
+     task_id: str            # "easy" | "medium" | "hard"
+     attempt_number: int     # how many steps taken so far
+     previous_feedback: str  # feedback from last step, empty on reset
+     done: bool              # whether episode is complete
+
+
+ class ReviewState(State):
+     """Internal environment state."""
+     current_task_id: str = "easy"
+     current_snippet: str = ""
+     correct_bug_type: str = ""
+     correct_line_number: int = -1
+     correct_keywords: list = []
+     step_count: int = 0
+     task_episode_id: str = ""
+     cumulative_reward: float = 0.0
openenv.yaml ADDED
@@ -0,0 +1,7 @@
+ spec_version: 1
+ name: code_review_env
+ type: space
+ runtime: fastapi
+ app: server.app:app
+ port: 8000
+
openenv_code_review_env.egg-info/PKG-INFO ADDED
@@ -0,0 +1,9 @@
+ Metadata-Version: 2.4
+ Name: openenv-code_review_env
+ Version: 0.1.0
+ Summary: Code Review Env environment for OpenEnv
+ Requires-Python: >=3.10
+ Requires-Dist: openenv-core[core]>=0.2.2
+ Provides-Extra: dev
+ Requires-Dist: pytest>=8.0.0; extra == "dev"
+ Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
openenv_code_review_env.egg-info/SOURCES.txt ADDED
@@ -0,0 +1,14 @@
+ README.md
+ pyproject.toml
+ ./__init__.py
+ ./client.py
+ ./models.py
+ openenv_code_review_env.egg-info/PKG-INFO
+ openenv_code_review_env.egg-info/SOURCES.txt
+ openenv_code_review_env.egg-info/dependency_links.txt
+ openenv_code_review_env.egg-info/entry_points.txt
+ openenv_code_review_env.egg-info/requires.txt
+ openenv_code_review_env.egg-info/top_level.txt
+ server/__init__.py
+ server/app.py
+ server/code_review_env_environment.py
openenv_code_review_env.egg-info/dependency_links.txt ADDED
@@ -0,0 +1 @@
+
openenv_code_review_env.egg-info/entry_points.txt ADDED
@@ -0,0 +1,2 @@
+ [console_scripts]
+ server = code_review_env.server.app:main
openenv_code_review_env.egg-info/requires.txt ADDED
@@ -0,0 +1,5 @@
+ openenv-core[core]>=0.2.2
+
+ [dev]
+ pytest>=8.0.0
+ pytest-cov>=4.0.0
openenv_code_review_env.egg-info/top_level.txt ADDED
@@ -0,0 +1 @@
+ code_review_env
pyproject.toml ADDED
@@ -0,0 +1,45 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ [build-system]
+ requires = ["setuptools>=45", "wheel"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "openenv-code_review_env"
+ version = "0.1.0"
+ description = "Code Review Env environment for OpenEnv"
+ requires-python = ">=3.10"
+ dependencies = [
+     # Core OpenEnv runtime (provides FastAPI server + HTTP client types)
+     # install from github
+     # "openenv-core[core] @ git+https://github.com/meta-pytorch/OpenEnv.git",
+     "openenv-core[core]>=0.2.2",
+     # Environment-specific dependencies
+     # Add all dependencies needed for your environment here
+     # Examples:
+     # "numpy>=1.19.0",
+     # "torch>=2.0.0",
+     # "gymnasium>=0.29.0",
+     # "openspiel>=1.0.0",
+     # "smolagents>=1.22.0,<2",
+ ]
+
+ [project.optional-dependencies]
+ dev = [
+     "pytest>=8.0.0",
+     "pytest-cov>=4.0.0",
+ ]
+
+ [project.scripts]
+ # Server entry point - enables running via: uv run --project . server
+ # or: python -m code_review_env.server.app
+ server = "code_review_env.server.app:main"
+
+ [tool.setuptools]
+ include-package-data = true
+ packages = ["code_review_env", "code_review_env.server"]
+ package-dir = { "code_review_env" = ".", "code_review_env.server" = "server" }
server/__init__.py ADDED
@@ -0,0 +1,11 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Code Review Env environment server components."""
+
+ from .code_review_env_environment import CodeReviewEnvironment
+
+ __all__ = ["CodeReviewEnvironment"]
server/app.py ADDED
@@ -0,0 +1,150 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ FastAPI server for the Code Review Environment.
+ """
+
+ from models import ReviewAction, ReviewObservation
+ from server.code_review_env_environment import CodeReviewEnvironment
+ from openenv.core.env_server import create_app
+ from fastapi import FastAPI, Query
+ from fastapi.routing import APIRouter
+
+ app = create_app(
+     CodeReviewEnvironment,
+     ReviewAction,
+     ReviewObservation,
+     env_name="code_review_env",
+ )
+
+ @app.get("/tasks")
+ def list_tasks():
+     return {
+         "tasks": [
+             {
+                 "task_id": "easy",
+                 "description": "Identify syntax/runtime errors in Python code",
+                 "difficulty": "easy",
+                 "action_schema": {
+                     "review": "string - your analysis",
+                     "bug_type": "string - syntax | logic | security | none",
+                     "line_number": "int - line with the bug, -1 if unknown",
+                     "confidence": "float - your confidence 0.0 to 1.0"
+                 }
+             },
+             {
+                 "task_id": "medium",
+                 "description": "Identify logic bugs in code that runs but produces wrong output",
+                 "difficulty": "medium",
+                 "action_schema": {
+                     "review": "string - your analysis",
+                     "bug_type": "string - syntax | logic | security | none",
+                     "line_number": "int - line with the bug, -1 if unknown",
+                     "confidence": "float - your confidence 0.0 to 1.0"
+                 }
+             },
+             {
+                 "task_id": "hard",
+                 "description": "Identify security vulnerabilities in Python code",
+                 "difficulty": "hard",
+                 "action_schema": {
+                     "review": "string - your analysis",
+                     "bug_type": "string - syntax | logic | security | none",
+                     "line_number": "int - line with the bug, -1 if unknown",
+                     "confidence": "float - your confidence 0.0 to 1.0"
+                 }
+             }
+         ]
+     }
+
+ @app.get("/grader")
+ def grader(task_id: str = Query("easy"), episode_id: str = Query(None)):
+     """
+     Run a single task with a perfect answer.
+     Query params: task_id (str), episode_id (str, optional)
+     Returns: {"task_id": str, "score": float, "feedback": str}
+     """
+     env = CodeReviewEnvironment()
+     env.reset(task_id)
+
+     # Create perfect answer based on task_id
+     if task_id == "easy":
+         action = ReviewAction(
+             review="Line 1 is missing a colon after the function definition. This is a syntax error.",
+             bug_type="syntax",
+             line_number=1,
+             confidence=0.95
+         )
+     elif task_id == "medium":
+         action = ReviewAction(
+             review="Line 5 has an index error: it should be max_val = numbers[i], not numbers[i - 1]. This is a logic bug.",
+             bug_type="logic",
+             line_number=5,
+             confidence=0.95
+         )
+     else:  # hard
+         action = ReviewAction(
+             review="Line 6 has a SQL injection vulnerability because the username is concatenated directly into the query without parameterized statements.",
+             bug_type="security",
+             line_number=6,
+             confidence=0.95
+         )
+
+     obs = env.step(action)
+     return {
+         "task_id": task_id,
+         "score": env.state.cumulative_reward,
+         "feedback": obs.previous_feedback
+     }
+
+ @app.get("/baseline")
+ def baseline():
+     """
+     Run all 3 tasks (easy, medium, hard) with perfect hardcoded answers.
+     Returns: {"scores": {"easy": float, "medium": float, "hard": float}, "average": float}
+     """
+     scores = {}
+
+     for task_id in ["easy", "medium", "hard"]:
+         env = CodeReviewEnvironment()
+         env.reset(task_id)
+
+         # Create perfect answer based on task_id
+         if task_id == "easy":
+             action = ReviewAction(
+                 review="Line 1 is missing a colon after the function definition. This is a syntax error.",
+                 bug_type="syntax",
+                 line_number=1,
+                 confidence=0.95
+             )
+         elif task_id == "medium":
+             action = ReviewAction(
+                 review="Line 5 has an index error: it should be max_val = numbers[i], not numbers[i - 1]. This is a logic bug.",
+                 bug_type="logic",
+                 line_number=5,
+                 confidence=0.95
+             )
+         else:  # hard
+             action = ReviewAction(
+                 review="Line 6 has a SQL injection vulnerability because the username is concatenated directly into the query without parameterized statements.",
+                 bug_type="security",
+                 line_number=6,
+                 confidence=0.95
+             )
+
+         obs = env.step(action)
+         scores[task_id] = env.state.cumulative_reward
+
+     average = sum(scores.values()) / len(scores)
+     return {
+         "scores": scores,
+         "average": round(average, 4)
+     }
+
+ if __name__ == "__main__":
+     import uvicorn
+     uvicorn.run(app, host="0.0.0.0", port=8000)
server/code_review_env_environment.py ADDED
@@ -0,0 +1,201 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ Code Review Environment — agent finds bugs in Python snippets.
+ 3 tasks: syntax errors (easy) → logic bugs (medium) → security vulns (hard).
+ """
+
+ from __future__ import annotations
+ import uuid
+ from openenv.core.env_server.interfaces import Environment, Action, Observation
+ from models import ReviewAction, ReviewObservation, ReviewState
+
+ # ── Task bank ────────────────────────────────────────────────────────────────
+
+ TASKS = {
+     "easy": {
+         "description": (
+             "Review the following Python code and identify any syntax or "
+             "runtime errors. Specify the bug type, the line number where "
+             "the error occurs, and explain what is wrong."
+         ),
+         "snippet": """\
+ def calculate_average(numbers)
+     total = 0
+     for num in numbers:
+         total += num
+     return total / len(numbers)
+
+ result = calculate_average([10, 20, 30])
+ print(result)
+ """,
+         "correct_bug_type": "syntax",
+         "correct_line_number": 1,
+         "correct_keywords": ["colon", "missing", "def", "syntax"],
+     },
+     "medium": {
+         "description": (
+             "Review the following Python code. It runs without crashing "
+             "but produces incorrect output. Identify the logic bug, "
+             "the line number, and explain why it is wrong."
+         ),
+         "snippet": """\
+ def find_max(numbers):
+     max_val = numbers[0]
+     for i in range(len(numbers)):
+         if numbers[i] > max_val:
+             max_val = numbers[i - 1]
+     return max_val
+
+ print(find_max([3, 7, 2, 9, 4]))
+ """,
+         "correct_bug_type": "logic",
+         "correct_line_number": 5,
+         "correct_keywords": ["index", "i - 1", "off by one", "wrong", "logic"],
+     },
+     "hard": {
+         "description": (
+             "Review the following Python code for security vulnerabilities. "
+             "Identify the vulnerability type, the line number, and explain "
+             "the security risk it introduces."
+         ),
+         "snippet": """\
+ import sqlite3
+
+ def get_user(username):
+     conn = sqlite3.connect('users.db')
+     cursor = conn.cursor()
+     query = "SELECT * FROM users WHERE username = '" + username + "'"
+     cursor.execute(query)
+     return cursor.fetchone()
+
+ user_input = input("Enter username: ")
+ print(get_user(user_input))
+ """,
+         "correct_bug_type": "security",
+         "correct_line_number": 6,
+         "correct_keywords": ["sql injection", "injection", "concatenat", "unsanitized", "parameterized"],
+     },
+ }
+
+ MAX_STEPS = 3
+
+ # ── Reward function ───────────────────────────────────────────────────────────
+
+ def compute_reward(action: ReviewAction, task: dict, attempt: int) -> tuple[float, str]:
+     """
+     Partial progress reward — not binary.
+     Returns (reward_float, feedback_string).
+     """
+     reward = 0.0
+     feedback_parts = []
+
+     # Bug type match (+1.0)
+     if action.bug_type.lower() == task["correct_bug_type"]:
+         reward += 1.0
+         feedback_parts.append("✓ Correct bug type identified.")
+     else:
+         reward -= 0.3
+         feedback_parts.append(
+             f"✗ Wrong bug type. Got '{action.bug_type}', "
+             f"expected '{task['correct_bug_type']}'."
+         )
+
+     # Line number match (+0.5)
+     if action.line_number == task["correct_line_number"]:
+         reward += 0.5
+         feedback_parts.append("✓ Correct line number.")
+     else:
+         feedback_parts.append(
+             f"✗ Wrong line number. Got {action.line_number}, "
+             f"expected {task['correct_line_number']}."
+         )
+
+     # Keyword quality check (+0.5)
+     review_lower = action.review.lower()
+     matched_keywords = [
+         kw for kw in task["correct_keywords"] if kw in review_lower
+     ]
+     if matched_keywords:
+         reward += 0.5
+         feedback_parts.append(f"✓ Good explanation (matched: {matched_keywords}).")
+     else:
+         feedback_parts.append("✗ Explanation missing key concepts.")
+
+     # Retry penalty
+     if attempt > 1:
+         penalty = 0.1 * (attempt - 1)
+         reward -= penalty
+         feedback_parts.append(f"⚠ Retry penalty: -{penalty:.1f}")
+
+     # Clamp to 0.0-1.0 (max raw = 2.0, normalize)
+     normalized = max(0.0, min(1.0, reward / 2.0))
+     return round(normalized, 4), " ".join(feedback_parts)
+
+
+ # ── Environment ───────────────────────────────────────────────────────────────
+
+ class CodeReviewEnvironment(Environment):
+     """
+     Code Review Environment.
+     Agent reviews Python snippets across 3 difficulty tasks.
+     """
+
+     def __init__(self):
+         self._state = ReviewState()
+
+     def reset(self, task_id: str = "easy") -> Observation:
+         if task_id not in TASKS:
+             task_id = "easy"
+         task = TASKS[task_id]
+         self._state = ReviewState(
+             current_task_id=task_id,
+             current_snippet=task["snippet"],
+             correct_bug_type=task["correct_bug_type"],
+             correct_line_number=task["correct_line_number"],
+             correct_keywords=task["correct_keywords"],
+             step_count=0,
+             task_episode_id=str(uuid.uuid4()),
+             cumulative_reward=0.0,
+         )
+         return ReviewObservation(
+             code_snippet=task["snippet"],
+             task_description=task["description"],
+             task_id=task_id,
+             attempt_number=0,
+             previous_feedback="",
+             done=False,
+         )
+
+     def step(self, action: Action) -> Observation:
+         if not isinstance(action, ReviewAction):
+             raise ValueError(f"Expected ReviewAction, got {type(action)}")
+
+         self._state.step_count += 1
+         task = TASKS[self._state.current_task_id]
+
+         reward, feedback = compute_reward(
+             action, task, self._state.step_count
+         )
+         self._state.cumulative_reward += reward
+
+         done = (
+             reward >= 0.75  # good enough answer
+             or self._state.step_count >= MAX_STEPS
+         )
+
+         return ReviewObservation(
+             code_snippet=self._state.current_snippet,
+             task_description=task["description"],
+             task_id=self._state.current_task_id,
+             attempt_number=self._state.step_count,
+             previous_feedback=feedback,
+             done=done,
+         )
+
+     @property
+     def state(self) -> ReviewState:
+         return self._state
server/requirements.txt ADDED
@@ -0,0 +1,9 @@
+ openenv-core>=0.2.3
+ fastapi
+ uvicorn
+ pydantic
+ openai
+ python-dotenv
+
+
+