Add Coding Intelligence MCTS sample (10K traces) with README, SCHEMA, parquet, JSONL
- .gitattributes +1 -0
- README.md +167 -0
- SCHEMA.md +92 -0
- coding_intel_sample.jsonl +3 -0
- coding_intel_sample.parquet +3 -0
.gitattributes
CHANGED
@@ -58,3 +58,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+coding_intel_sample.jsonl filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,167 @@
---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
- reinforcement-learning
language:
- en
- code
tags:
- synthetic
- coding-agent
- mcts
- reasoning-traces
- process-reward-model
- rlhf
- dpo
- agentic-ai
- tool-use
- code-generation
- llm-training
- ucb
- reward-modeling
pretty_name: Coding Agent MCTS Reasoning Trace Pack
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: coding_intel_sample.parquet
---

# Coding Agent MCTS Reasoning Trace Pack (Sample)

**A synthetic Monte Carlo Tree Search reasoning-trace dataset for autonomous coding agents.** Each row is a complete reasoning lifecycle — initial context analysis → draft exploration → test feedback → prune-or-anchor → final outcome — labeled with a reasoning phenotype (TEST_DRIVEN, HACKER, DEEP_THINK, SECURITY_FIRST, REFACTOR_HEAVY) and carrying UCB scores at every non-terminal step and explicit rewards at terminal actions.

Built by [SolsticeAI](https://www.solsticestudio.ai/datasets) as a free sample of a larger commercial pack. 100% synthetic. No real code, no proprietary repos — task titles and descriptions are generic archetypes drawn from canonical library patterns.

## What is included

| File | Rows | Format | Purpose |
|---|---:|---|---|
| `coding_intel_sample.parquet` | 10,000 | Parquet | Columnar, typed, best for analytics and RL training |
| `coding_intel_sample.jsonl` | 10,000 | JSON Lines | Streaming / LLM training friendly |

**Source pack:** 2.5M-trace corpus
**This sample:** 10,000 reasoning traces, stratified 2,000 per reasoning phenotype
**Reasoning phenotypes (5):** `TEST_DRIVEN`, `HACKER`, `DEEP_THINK`, `SECURITY_FIRST`, `REFACTOR_HEAVY`
**Task types (3):** `bugfix`, `feature`, `refactor` (~3,300 each)
**Languages (4):** `python`, `rust`, `go`, `typescript` (~2,500 each)
**Production impact tiers (4):** `LOW`, `MEDIUM`, `HIGH`, `CRITICAL` (~2,500 each)

## Record structure

Each record is one reasoning lifecycle with seven top-level fields:

| Field | Type | Contents |
|---|---|---|
| `schema_version` | string | Pack schema version (`1.0.0-coding-intel-sample`) |
| `event` | struct | `task_id`, `task_type`, `language`, `title`, `description` |
| `risk_context` | struct | `test_coverage_baseline`, `cyclomatic_complexity`, `production_impact` |
| `agent_reasoning` | list<struct> | Ordered reasoning steps: `action` (`analyze_context`, `write_draft`, `run_tests`, `lethe_prune`, `prometheus_anchor`), `depth`, `ucb_score` (`Infinity` at root, `null` at terminal), `reward` (populated on terminal actions only), `thought` (natural-language rationale) |
| `correlated_telemetry` | struct | `linter_warnings_initial`, `linter_warnings_final`, `test_runtime_ms`, `ci_status` |
| `execution_summary` | struct | `files_changed`, `lines_added`, `lines_removed`, `time_to_resolution_sec` |
| `genetic_optimizer_feedback` | struct | `final_reward`, `lethe_prunes_triggered`, `nodes_expanded`, `phenotype_used` |

See [SCHEMA.md](./SCHEMA.md) for the full nested field breakdown.

## Why this dataset is useful

Most public coding datasets (HumanEval, SWE-bench, MBPP) only give you the *final answer* and the task description. They don't capture the reasoning tree the agent walked through — the wrong paths, the prunes, the anchor points. This pack is shaped around what modern agent-training pipelines actually need:

- **Explicit exploration vs exploitation.** Traces include both successful and pruned branches — `lethe_prune` events with negative reward, `prometheus_anchor` events with positive reward. Roughly 30% of traces carry a failed exploration branch before reaching the golden timeline.
- **Reward signals embedded at every step.** UCB scores at each non-terminal step, explicit rewards at terminal actions — directly usable for RL, DPO, and process-reward-model training.
- **Phenotype labels on every trace.** Train a `SECURITY_FIRST` coder specifically; run phenotype-transfer studies; build strategy-aware evaluation harnesses.
- **Correlated telemetry.** Linter-warning deltas, test runtime, and CI status correlated to reasoning outcome — grounds the trace in observable signals.
- **Compact.** Parquet fits in 340 KB, JSONL in 12.5 MB — you can pull this into a notebook in seconds and iterate.
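As a concrete sketch of the process-reward use case, the loop below flattens one trace into per-step training examples, labeling anchored steps positive and pruned steps negative. The `prm_examples` helper and the 0/1 labeling rule are illustrative choices, not part of the pack; only the `agent_reasoning` field layout comes from the schema.

```python
def prm_examples(trace):
    """Flatten one MCTS trace into (context, step, label) PRM examples.

    Hypothetical labeling rule: 1 for prometheus_anchor (golden timeline),
    0 for lethe_prune (failed branch), None for non-terminal steps.
    """
    examples = []
    prefix = []
    for step in trace["agent_reasoning"]:
        if step["action"] == "prometheus_anchor":
            label = 1
        elif step["action"] == "lethe_prune":
            label = 0
        else:
            label = None  # non-terminal: no process label
        examples.append({
            "context": " ".join(prefix),  # thoughts seen so far
            "step": step["thought"],
            "label": label,
        })
        prefix.append(step["thought"])
    return examples

# Minimal synthetic trace in the documented shape
trace = {
    "agent_reasoning": [
        {"action": "analyze_context", "thought": "Read the failing test."},
        {"action": "write_draft", "thought": "Patch the loop bound."},
        {"action": "prometheus_anchor", "thought": "Tests pass; anchor."},
    ]
}
for ex in prm_examples(trace):
    print(ex["label"], ex["step"])
```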
## Typical use cases

- MCTS-based coding agent architecture training
- Process reward model (PRM) training
- Reasoning-chain evaluation benchmarks
- Agent self-improvement via trace replay
- Strategy-conditional code-generation research
- Curriculum learning with task-difficulty ladders
- LLM fine-tuning on structured reasoning narratives
- Benchmarking UCB-based exploration policies
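For the DPO use case, the roughly 30% of traces that contain a pruned branch yield natural preference pairs: steps ending in `lethe_prune` as the rejected path, the remaining steps leading to `prometheus_anchor` as the chosen path. The `dpo_pair` helper and its split-at-the-first-prune rule are a sketch, not an official utility:

```python
def dpo_pair(trace):
    """Split a trace into (chosen, rejected) thought sequences for DPO.

    Illustrative rule: everything up to and including the first lethe_prune
    is the rejected path; the steps after it form the chosen path.
    Returns None when the trace has no pruned branch.
    """
    steps = trace["agent_reasoning"]
    prune_idx = next(
        (i for i, s in enumerate(steps) if s["action"] == "lethe_prune"), None
    )
    if prune_idx is None:
        return None  # golden-timeline-only trace: nothing to reject
    rejected = [s["thought"] for s in steps[: prune_idx + 1]]
    chosen = [s["thought"] for s in steps[prune_idx + 1 :]]
    return {"chosen": chosen, "rejected": rejected}

# Synthetic trace with one failed branch followed by the golden timeline
trace = {
    "agent_reasoning": [
        {"action": "write_draft", "thought": "Try a quick hack."},
        {"action": "lethe_prune", "thought": "Tests fail; abandon."},
        {"action": "write_draft", "thought": "Proper fix."},
        {"action": "prometheus_anchor", "thought": "Tests pass."},
    ]
}
pair = dpo_pair(trace)
print(pair["rejected"])
print(pair["chosen"])
```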
## Quick start

```python
import pandas as pd

df = pd.read_parquet("coding_intel_sample.parquet")

# Phenotype distribution (stratified, balanced)
print(df["genetic_optimizer_feedback"].apply(lambda g: g["phenotype_used"]).value_counts())

# Average final reward by phenotype
df["pheno"] = df["genetic_optimizer_feedback"].apply(lambda g: g["phenotype_used"])
df["reward"] = df["genetic_optimizer_feedback"].apply(lambda g: g["final_reward"])
print(df.groupby("pheno")["reward"].mean().round(2))

# Prune rate by task type
df["task"] = df["event"].apply(lambda e: e["task_type"])
df["prunes"] = df["genetic_optimizer_feedback"].apply(lambda g: g["lethe_prunes_triggered"])
print(df.groupby("task")["prunes"].mean().round(2))

# Pull one full reasoning chain
row = df.iloc[0]
for step in row["agent_reasoning"]:
    print(f"  d={step['depth']:<2} {step['action']:<20} ucb={step['ucb_score']} reward={step['reward']}: {step['thought']}")
```
Streaming form:

```python
import json

with open("coding_intel_sample.jsonl") as f:
    for line in f:
        trace = json.loads(line)
        # one MCTS reasoning trace per line
```
## Notes and limitations

- **Reasoning traces use canned action templates rather than live-executed code.** This pack is designed for agent-architecture training, not end-to-end SWE-bench-style evaluation.
- **`ci_status` is `SUCCESS` for every row in this sample.** The production pack includes `FAILURE` / `FLAKY` / `TIMEOUT` variants; this free sample is restricted to golden-timeline anchored traces to keep a clean reward surface.
- **UCB scores at root nodes use positive infinity** (serialized as the string `"Infinity"` in JSONL), following the standard MCTS convention.
- Phenotype distribution is uniform; production licensing supports custom phenotype mixes.
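Because root-node `ucb_score` values arrive as the string `"Infinity"` in the JSONL serialization, a small normalization pass is useful when loading that format. A sketch (the `normalize_trace` helper is hypothetical; field names follow the record structure above):

```python
import json
import math

def normalize_trace(line: str) -> dict:
    """Parse one JSONL trace and convert "Infinity" strings to float inf."""
    trace = json.loads(line)
    for step in trace.get("agent_reasoning", []):
        if step.get("ucb_score") == "Infinity":
            step["ucb_score"] = math.inf
    return trace

# Minimal JSONL line in the documented shape
line = '{"agent_reasoning": [{"action": "analyze_context", "ucb_score": "Infinity"}]}'
trace = normalize_trace(line)
print(trace["agent_reasoning"][0]["ucb_score"])  # inf
```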
## Responsible use

This dataset is intended for **agent-training, process-reward-model, and MCTS research**. It contains synthesized reasoning narratives and action templates — it does **not** contain real code, real commit history, or proprietary repository content. Models trained on this data will learn reasoning structure and phenotype-conditional behavior; downstream code-generation quality still depends on training with real-code supervision from appropriately licensed corpora.

## License

Released under **CC BY 4.0**. Use freely for research, agent prototyping, education, and commercial development with attribution.

## Get the full pack

This Hugging Face repo is a **10K-trace sample**. The production pack scales to 2.5M+ traces with a wider CI-outcome distribution (`FAILURE` / `FLAKY` / `TIMEOUT`), additional languages (C++, Java, Kotlin, Swift, C#), AST-diff variants, tool-call graph traces, multi-turn user-interaction sequences, custom phenotype mixes, and buyer-specific variants.

**Self-serve (Stripe checkout):**
- [**Sample Scale tier — $5,000**](https://buy.stripe.com/7sY5kD2j85QTfSb5lfeEo03) — ~25K records, one subject, 72-hour delivery.

**Full pack + enterprise scope:**
- [www.solsticestudio.ai/datasets](https://www.solsticestudio.ai/datasets) — per-SKU pricing across Starter / Professional / Enterprise tiers, plus commercial licensing, custom generation, and buyer-specific variants.

**Procurement catalog:**
- [SolsticeAI Data Storefront](https://solsticeai.mydatastorefront.com) — available via Datarade / Monda.

## Citation

```bibtex
@dataset{solstice_coding_intel_pack_2026,
  title     = {Coding Agent MCTS Reasoning Trace Pack (Sample)},
  author    = {SolsticeAI},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/solsticestudioai/coding-intel-pack}
}
```
SCHEMA.md
ADDED
@@ -0,0 +1,92 @@
# Coding Agent MCTS Reasoning Trace Pack — Schema

One row = one complete MCTS reasoning lifecycle. All records share the same seven top-level fields.

Schema version: `1.0.0-coding-intel-sample`

## Top-level fields

### `schema_version` — string

Schema identifier. Constant within a sample release.

### `event` — struct

Task identity and description.

| Field | Type | Notes |
|---|---|---|
| `task_id` | string | Stable task identifier, e.g., `TASK-D8DFB752`. UUID-derived. |
| `task_type` | string | `bugfix`, `feature`, `refactor`. |
| `language` | string | `python`, `rust`, `go`, `typescript`. |
| `title` | string | Generic task title (e.g., `Fix off-by-one error`, `Refactor the parser`, `Add retry logic`). |
| `description` | string | Short user-intent statement. |

### `risk_context` — struct

Baseline code-health signals for the task target.

| Field | Type | Notes |
|---|---|---|
| `test_coverage_baseline` | double | Percent test coverage at task start (0–100). |
| `cyclomatic_complexity` | int | Baseline cyclomatic complexity score. |
| `production_impact` | string | `LOW`, `MEDIUM`, `HIGH`, `CRITICAL`. |

### `agent_reasoning` — list<struct>

Ordered MCTS reasoning steps, one struct per step.

| Field | Type | Notes |
|---|---|---|
| `action` | string | Step type: `analyze_context` (root), `write_draft`, `run_tests`, `lethe_prune` (failed branch), `prometheus_anchor` (golden timeline). |
| `depth` | int | Tree depth of this step (root = 0). |
| `ucb_score` | double | UCB score at this step. `Infinity` at root nodes, `null` at terminal actions. |
| `reward` | double | Terminal reward. Populated only on `lethe_prune` (negative) or `prometheus_anchor` (positive); `null` elsewhere. |
| `thought` | string | Natural-language rationale for the step. |
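The constraints in the step table above can be checked mechanically. A minimal validator sketch, assuming JSONL-style step dicts with `"Infinity"` already converted to a float (the `validate_steps` helper is illustrative, not shipped with the pack):

```python
TERMINAL = {"lethe_prune", "prometheus_anchor"}

def validate_steps(steps):
    """Check the documented reward / ucb_score invariants for one trace."""
    for step in steps:
        if step["action"] in TERMINAL:
            assert step["reward"] is not None, "terminal steps carry a reward"
            assert step["ucb_score"] is None, "ucb_score is null at terminals"
            if step["action"] == "lethe_prune":
                assert step["reward"] < 0, "prune rewards are negative"
            else:
                assert step["reward"] > 0, "anchor rewards are positive"
        else:
            assert step["reward"] is None, "reward is terminal-only"
    return True

# Synthetic trace satisfying the invariants
steps = [
    {"action": "analyze_context", "depth": 0, "ucb_score": float("inf"), "reward": None},
    {"action": "write_draft", "depth": 1, "ucb_score": 1.3, "reward": None},
    {"action": "prometheus_anchor", "depth": 2, "ucb_score": None, "reward": 0.9},
]
print(validate_steps(steps))  # True
```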
### `correlated_telemetry` — struct

Observable signals correlated to the reasoning outcome.

| Field | Type | Notes |
|---|---|---|
| `linter_warnings_initial` | int | Linter warnings before the task. |
| `linter_warnings_final` | int | Linter warnings after the golden-timeline commit. |
| `test_runtime_ms` | int | Runtime of the test suite at anchor time, in ms. |
| `ci_status` | string | `SUCCESS` in this sample (production pack includes `FAILURE`, `FLAKY`, `TIMEOUT`). |

### `execution_summary` — struct

Diff-level metrics at the golden-timeline commit.

| Field | Type | Notes |
|---|---|---|
| `files_changed` | int | Files touched. |
| `lines_added` | int | Lines added. |
| `lines_removed` | int | Lines removed. |
| `time_to_resolution_sec` | double | End-to-end lifecycle duration (seconds). |

### `genetic_optimizer_feedback` — struct

Outer-loop optimizer metrics used to tune future MCTS policies.

| Field | Type | Notes |
|---|---|---|
| `final_reward` | double | Terminal reward attributed to the golden timeline. |
| `lethe_prunes_triggered` | int | Count of `lethe_prune` actions in the trace. |
| `nodes_expanded` | int | Total MCTS nodes expanded during the trace. |
| `phenotype_used` | string | `TEST_DRIVEN`, `HACKER`, `DEEP_THINK`, `SECURITY_FIRST`, `REFACTOR_HEAVY`. |

## Distribution of this sample

- 10,000 traces, stratified 2,000 per reasoning phenotype across all five phenotypes.
- Task type: balanced (~3,300 each across bugfix / feature / refactor).
- Language: balanced (~2,500 each across python / rust / go / typescript).
- Production impact: balanced (~2,500 each across LOW / MEDIUM / HIGH / CRITICAL).
- CI status is `SUCCESS` for every row in this sample; the production pack includes failure variants.

## Sanitization notes

- Task IDs are UUID-derived synthetic identifiers (e.g., `TASK-D8DFB752`).
- Task titles and descriptions are generic archetypes (e.g., `Fix off-by-one error`, `Add retry logic`) — no real commit messages, PRs, or issue descriptions are present.
- No real code content, no diffs, no actual linter output.
- `ucb_score` uses the IEEE-754 infinity representation at root nodes. Parquet preserves this natively; JSONL serializes it as the string `"Infinity"`.

## Relationship to the full pack

The production pack scales to 2.5M+ traces with a wider CI-outcome distribution (`FAILURE`, `FLAKY`, `TIMEOUT`), additional languages (C++, Java, Kotlin, Swift, C#), AST-diff variants, tool-call graph traces, multi-turn user-interaction sequences, and custom phenotype mixes. See the pack card for commercial access.
coding_intel_sample.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45f45fcc7411b192ebcc5d15abb8372d26ea88afa6592ca529a118d63956a855
+size 12525896
coding_intel_sample.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfcd2f90bd6cccb91036983ddb735bdbb297e2d6bc4173c67e42bf2d6d542229
+size 338690