EridanusQ committed on
Commit
35f2452
·
1 Parent(s): e051ec0
README.md CHANGED
@@ -12,221 +12,188 @@ tags:
12
  - mps
13
  - benchmark
14
  size_categories:
15
- - 100<n<1K
16
  ---
17
 
18
-
19
- # Unit Commitment MIP Benchmark Dataset (UCTD)
-
- This document explains how to use the toolchain provided in this repository to generate a standard benchmark dataset for **mixed-integer programming (MIP)** research.
-
- ---
-
- ## 1. Overall Directory Layout
-
- This repository (`UnitCommitment_Trajectory`) contains two top-level subfolders:
-
- ```
UnitCommitment_Trajectory/
- ├── UnitCommitment_Trajectory_Test/ ← code and instances (working directory for the scripts)
- └── UnitCommitment_Trajectory_Dataset/ ← dataset output directory (created automatically by the scripts)
```
-
- ---
-
- ## 2. Code Directory in Detail (`UnitCommitment_Trajectory_Test/`)
-
- ### 2.1 Top-Level Files
-
- | File | Purpose |
- | :--- | :--- |
- | `generate_dataset.jl` | **Core script.** Reads every instance in batch, builds the MIP models, and exports the `.mps` files to the dataset directory. |
- | `create_scuc_mps_files.jl` | Single-instance debugging script, for manually testing the MPS generation flow on one instance; not used for batch production. |
- | `test_main.jl` | Model-comparison experiment script. Compares the solver results of three configurations (the base model, v1 with startup/shutdown trajectories added, and v2 with modified minimum uptime) and produces CSV reports. |
- | `pmax-preprocessing.jl` | Preprocessing function library, called by `test_main.jl`. Implements two steps: (1) filter units by capacity (Pmax) and write startup/shutdown curves (Part 1); (2) modify the minimum uptime of the top 20% largest units (Part 2). |
- | `Project.toml` | Julia package configuration file, declaring all third-party dependencies and their version bounds (JuMP, HiGHS, CodecZlib, etc.). |
- | `Manifest.toml` | Julia dependency lock file, recording the exact version and hash of every dependency so that environments are identical across machines. **Do not edit this file by hand.** |
-
- ### 2.2 `src/` — Core UnitCommitment.jl Source
-
- This is the source directory of the customized version of the original `UnitCommitment.jl`:
-
- | Subdirectory | Purpose |
- | :--- | :--- |
- | `src/instance/` | Data-reading module. Parses `.json.gz` instance files into the in-memory Julia data structure (`UnitCommitmentInstance`). |
- | `src/model/` | Model-building module. `build.jl` is the entry point; based on the given `Formulation` it calls the submodules to assemble the JuMP model in memory. |
- | `src/model/formulations/base/` | Base formulation components, including the standard modules for line constraints (`line.jl`), unit constraints (`unit.jl`), and so on. |
- | `src/model/formulations/xxx2005/` | **The core module added by this project.** Implements generator startup/shutdown power trajectory constraints, which model a unit's per-period output during startup and shutdown with explicit curves; this is the key feature distinguishing this dataset from ordinary UC datasets. |
- | `src/transform/` | Data-transformation module. Contains `convert_to_subhourly`, which converts 1-hour-resolution (24-period) instances to 15-minute resolution (96 periods). |
-
- ### 2.3 `instances/` — Raw Input Instances
-
- ```
- instances/
- └── matpower/
- ├── case14/ ← IEEE 14-bus test system, instance files for 67 dates
- └── case30/ ← IEEE 30-bus test system, instance files for 45 dates
- ```
-
- Each folder holds date-named `.json.gz` files (for example `2017-01-01.json.gz`); each file describes that day's load profile, unit parameters, and grid topology.
-
- ### 2.4 `testdata/` — Large Instance
-
- ```
- testdata/
- └── case2383wp/ ← Polish national grid model (2383 buses, 323 generators), instance files for 4 dates
```
-
- This case approaches a real grid in scale; its MPS files are large (a single file can exceed 50MB) and are used to test the scalability of MIP solvers.
-
- ### 2.5 `test/` — Experiment Output Directory
-
- Created after running `test_main.jl`; holds the intermediate JSON files and the final CSV reports of the comparison experiments. **Not related to dataset generation.**
-
- ---
-
- ## 3. Dataset Directory in Detail (`UnitCommitment_Trajectory_Dataset/`)
-
- After running the generation script, this directory is laid out as follows:
-
- ```
- UnitCommitment_Trajectory_Dataset/
- ├── case14/
- │ ├── hourly_noline/ ← case14 MPS files, 1-hour resolution, no network constraints
- │ ├── hourly_withline/ ← case14 MPS files, 1-hour resolution, with SCUC network constraints
- │ ├── subhourly_noline/ ← case14 MPS files, 15-minute resolution, no network constraints
- │ └── subhourly_withline/ ← case14 MPS files, 15-minute resolution, with SCUC network constraints
- ├── case30/
- │ └── ... (same structure as above)
- └── case2383wp/
- └── ... (same structure as above)
- ```
-
- ### 3.1 MIP Characteristics of the Four Variants
-
- | Variant | Periods | Network constraints | MIP size | Typical use |
- | :--- | :---: | :---: | :--- | :--- |
- | `hourly_noline` | 24 | ✗ | Smallest; only unit on/off (binary) and output (continuous) variables. | Entry-level MIP testing; algorithm prototyping. |
- | `hourly_withline` | 24 | ✓ | Medium; adds many power-flow inequality constraints (DCOPF) on top of `noline`. | Standard SCUC benchmark, well suited to testing cutting planes. |
- | `subhourly_noline` | 96 | ✗ | Larger; four times the variables of the hourly version. | Testing solver convergence on high-dimensional MIPs. |
- | `subhourly_withline` | 96 | ✓ | Largest; high-dimensional variables plus a dense constraint matrix. | The hardest MIPs, closest to real-world dispatch. |
-
- ### 3.2 File Naming Convention
-
- Each `.mps` file is named as:
-
- ```
- {case}_{date}_{resolution}_{variant}.mps
- ```
-
- Example:
-
- ```
- case30_2017-01-01_s_withline.mps
- │ │ │ └── withline: with network constraints / noline: without network constraints
- │ │ └── s: subhourly (15-minute) / h: hourly (1-hour)
- │ └── the load-scenario date this MPS file corresponds to
- └── grid case name
```
-
- ---
-
- ## 4. Reproducing the Dataset from Scratch
-
- ### Step 1: Install Julia
-
- Download and install Julia from [https://julialang.org/downloads/](https://julialang.org/downloads/); **v1.10 or later** is recommended.
-
- After installation, run `julia --version` on the command line; a printed version number confirms that the installation succeeded.
-
- ### Step 2: Clone or Download This Repository
-
- Make sure the following two folders exist on your local disk (the layout must match exactly):
-
- ```
- E:\Project\UnitCommitment_Trajectory\
- ├── UnitCommitment_Trajectory_Test\ ← contains generate_dataset.jl, src/, etc.
- └── UnitCommitment_Trajectory_Dataset\ ← may be an empty folder; the script creates the subdirectories
- ```
-
- > **Note**: `UnitCommitment_Trajectory_Dataset` must sit at the **same level** as `UnitCommitment_Trajectory_Test`; otherwise the relative path `../UnitCommitment_Trajectory_Dataset` used by the script will not resolve to the output directory.
-
- ### Step 3: Initialize the Julia Environment (needed only once)
-
- Open PowerShell, change into the code directory, and run the following to install all dependencies automatically:
-
- ```powershell
- cd E:\Project\UnitCommitment_Trajectory\UnitCommitment_Trajectory_Test
julia --project=. -e "using Pkg; Pkg.instantiate()"
- ```
-
- This reads `Project.toml` and `Manifest.toml` and downloads and installs every pinned dependency (JuMP, HiGHS, CodecZlib, etc.).
-
- > **Troubleshooting**: if the network is slow or SSL errors appear, try setting a China mirror first:
- > ```powershell
- > $env:JULIA_PKG_SERVER = "https://mirrors.tuna.tsinghua.edu.cn/julia"
- > julia --project=. -e "using Pkg; Pkg.instantiate()"
- > ```
-
- ### Step 4: Run the Dataset Generation Script
-
- In the same directory, run:
-
- ```powershell
julia --project=. generate_dataset.jl
- ```
-
- The script processes `case14` (67 files), `case30` (45 files), and `case2383wp` (4 files) in turn; with 4 variants per instance, this yields **464 `.mps` files** in total.
-
- Sample terminal output:
-
- ```
- 🚀 Starting full dataset generation (including the large case)...
-
- >>> Preparing case: case14
- [case14] progress: 1/67 (2017-01-01)
- [case14] progress: 2/67 (2017-01-02)
- ...
- ✅ All cases generated!
- ```
-
- > **Expected runtime**: roughly 10–20 minutes on an ordinary machine (each of the large case2383wp files takes considerably longer).
- >
- > **Memory**: at least 8GB of free memory is recommended; peak usage is about 4GB when generating the subhourly case2383wp variants.
-
- ### Step 5: Verify the Output
-
- After generation completes, check that `UnitCommitment_Trajectory_Dataset/` has the structure below and that each variant folder contains the expected number of `.mps` files:
-
- | Folder | Expected files |
- | :--- | :---: |
- | `case14/hourly_noline/` | 67 |
- | `case14/hourly_withline/` | 67 |
- | `case14/subhourly_noline/` | 67 |
- | `case14/subhourly_withline/` | 67 |
- | `case30/hourly_noline/` | 45 |
- | `case30/hourly_withline/` | 45 |
- | `case30/subhourly_noline/` | 45 |
- | `case30/subhourly_withline/` | 45 |
- | `case2383wp/hourly_noline/` | 4 |
- | `case2383wp/hourly_withline/` | 4 |
- | `case2383wp/subhourly_noline/` | 4 |
- | `case2383wp/subhourly_withline/` | 4 |
- | **Total** | **464** |
-
- ---
-
- ## 5. Dataset Technical Specifications
-
- - **Format**: standard MPS (Mathematical Programming System), compatible with all mainstream solvers such as Gurobi, CPLEX, HiGHS, and SCIP.
- - **Variable names**: generated with `variable_names=true`, so the MPS files use semantic variable names (such as `is_on[generator_name, t]` and `prod_above[s1, generator_name, t]`) rather than anonymous `x1, x2` numbering.
- - **Startup/shutdown trajectories**: every model includes this project's custom generator power trajectory constraints (the `xxx2005` module); compared with the standard UC formulation, the MIP feasible region is modeled more precisely, placing higher demands on solver branching strategies.
-
- ---
-
- ## 6. Citation
-
- This dataset is generated with the following project; if you use it in your research, please cite the original paper:
-
- - **UnitCommitment.jl** — Alinson S. Xavier et al., Argonne National Laboratory
- DOI: [10.5281/zenodo.4269874](https://doi.org/10.5281/zenodo.4269874)
12
  - mps
13
  - benchmark
14
  size_categories:
15
+ - 10K<n<100K
16
  ---
17
 
18
+ # UnitCommitment Trajectory MPS Dataset
+
+ This repository generates standard `.mps` files from UnitCommitment.jl's Matpower unit commitment (UC) instances, for testing and benchmarking solvers for mixed-integer programming (MIP), unit commitment (UC), and security-constrained unit commitment (SCUC) models.
+
+ Repository: [EridanusQ/UnitCommitment_Trajectory · Datasets at Hugging Face](https://huggingface.co/datasets/EridanusQ/UnitCommitment_Trajectory)
+
+ ## 1. Dataset Size
+
+ `UnitCommitment_Trajectory_Test/instances/matpower` holds **26** Matpower cases with **9487** `.json.gz` input instances. Each instance yields 4 `.mps` files, so a full run is expected to produce **37948** files.
27
+
28
+ | Case | Input instances |
29
+ | :-------------- | ---------: |
30
+ | case118 | 365 |
31
+ | case1354pegase | 365 |
32
+ | case13659pegase | 365 |
33
+ | case14 | 365 |
34
+ | case1888rte | 365 |
35
+ | case1951rte | 365 |
36
+ | case2383wp | 365 |
37
+ | case2736sp | 365 |
38
+ | case2737sop | 365 |
39
+ | case2746wop | 365 |
40
+ | case2746wp | 365 |
41
+ | case2848rte | 365 |
42
+ | case2868rte | 365 |
43
+ | case2869pegase | 365 |
44
+ | case30 | 365 |
45
+ | case300 | 365 |
46
+ | case3012wp | 365 |
47
+ | case3120sp | 365 |
48
+ | case3375wp | 365 |
49
+ | case57 | 362 |
50
+ | case6468rte | 365 |
51
+ | case6470rte | 365 |
52
+ | case6495rte | 365 |
53
+ | case6515rte | 365 |
54
+ | case89pegase | 365 |
55
+ | case9241pegase | 365 |
56
+
57
+ ## 2. Directory Layout
58
+
59
+ ```text
60
  UnitCommitment_Trajectory/
61
+ ├── README.md
+ ├── UnitCommitment_Trajectory_Test/
+ │ ├── Project.toml
+ │ ├── Manifest.toml
+ │ ├── generate_dataset.jl # main batch MPS generation script
+ │ ├── create_scuc_mps_files.jl # single-instance debugging script
+ │ ├── instances/
+ │ │ └── matpower/ # raw .json.gz input instances
+ │ ├── benchmark/
+ │ │ └── scripts/
+ │ │     └── download_matpower_instances.py
+ │ ├── src/ # modified UnitCommitment.jl source
+ │ └── ...
+ └── UnitCommitment_Trajectory_Dataset/ # output directory for the .mps files
75
  ```
76
 
77
+ > All commands below are run from the `UnitCommitment_Trajectory_Test` directory, and all paths are relative to it.
78
 
79
+ ## 3. Environment Setup

+ - **Julia**: the 1.12 series is recommended.
+ - **Python 3**: used only by the download script; no extra dependencies required.
83
 
84
+ ```powershell
85
+ cd UnitCommitment_Trajectory\UnitCommitment_Trajectory_Test
86
+ julia --project=. -e "using Pkg; Pkg.instantiate()"
87
+ ```
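+
+ The same setup can also be done from inside Julia (a minimal sketch using only the standard `Pkg` API; run it from `UnitCommitment_Trajectory_Test`):
+
+ ```julia
+ # Activate this project's environment and install the locked dependencies.
+ using Pkg
+ Pkg.activate(".")
+ Pkg.instantiate()
+ # If this line loads without an error, the environment is ready.
+ using UnitCommitment, JuMP
+ ```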
 
 
 
 
88
 
89
+ ## 4. Download the Raw Matpower Input Data
90
 
91
+ The download script `benchmark\scripts\download_matpower_instances.py` fetches data from `https://axavier.org/UnitCommitment.jl/0.4/instances`, using a default date range of `2017-01-01` to `2017-12-31`, and saves it to `instances/matpower`. Files that already exist and are non-empty are skipped automatically.
92
 
93
+ ```powershell
94
+ # download the full year of data (most common)
95
+ python benchmark\scripts\download_matpower_instances.py
 
 
 
 
96
 
97
+ # list the supported cases
98
+ python benchmark\scripts\download_matpower_instances.py --list-cases
99
 
100
+ # specify a date range (example)
101
+ python benchmark\scripts\download_matpower_instances.py --start-date 2017-01-01 --end-date 2017-01-31
102
  ```
 
 
 
 
 
 
 
103
 
104
+ Quick check of the download results:
105
 
106
+ ```powershell
107
+ Get-ChildItem instances\matpower -Recurse -Filter *.json.gz | Measure-Object
 
108
  ```
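+
+ The same check can be done from Julia as well (a minimal sketch using only `Base`, no extra packages):
+
+ ```julia
+ # Count the downloaded .json.gz instances per case under instances/matpower.
+ root = "instances/matpower"
+ for case in sort(filter(d -> isdir(joinpath(root, d)), readdir(root)))
+     n = count(f -> endswith(f, ".json.gz"), readdir(joinpath(root, case)))
+     println(rpad(case, 18), n)
+ end
+ ```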
109
 
110
+ ## 5. MPS Output Layout
111
 
112
+ `generate_dataset.jl` writes its results to `../UnitCommitment_Trajectory_Dataset` (the dataset directory at the repository root). Four variant subdirectories are created under each case:
113
 
114
+ ```text
115
+ case_name/
+ ├── hourly_noline/ # hourly UC, no line constraints
+ ├── hourly_withline/ # hourly SCUC, with line constraints
+ ├── subhourly_noline/ # subhourly UC, no line constraints
+ └── subhourly_withline/ # subhourly SCUC, with line constraints
120
+ ```
121
 
122
+ File naming convention: `{case}_{date}_{resolution}_{network}.mps`
+ For example: `case30_2017-01-01_h_noline.mps` (`h` = hourly, `s` = subhourly).
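+
+ A small Julia sketch for splitting such a name back into its fields (illustrative only; the regex simply mirrors the convention above):
+
+ ```julia
+ # Parse "{case}_{date}_{resolution}_{network}.mps" into named components.
+ function parse_mps_name(name::AbstractString)
+     m = match(r"^(case\w+)_(\d{4}-\d{2}-\d{2})_([hs])_(noline|withline)\.mps$", name)
+     m === nothing && error("unexpected file name: $name")
+     return (case = m.captures[1], date = m.captures[2],
+             resolution = m.captures[3] == "h" ? "hourly" : "subhourly",
+             network = m.captures[4])
+ end
+
+ parse_mps_name("case30_2017-01-01_h_noline.mps")
+ # -> (case = "case30", date = "2017-01-01", resolution = "hourly", network = "noline")
+ ```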
124
 
125
+ ## 6. Single-Instance Test
126
 
127
+ Use this to verify that the environment and the modeling pipeline work:
128
 
129
+ ```powershell
130
+ julia --project=. create_scuc_mps_files.jl
131
  ```
 
 
 
 
 
 
 
 
 
 
 
 
 
132
 
133
+ On success, four test files are written to the current directory: `uc_default_noline.mps`, etc.
 
 
 
 
 
134
 
135
+ ## 7. Generating the Full MPS Dataset
136
 
137
+ ### 7.1 Basic Usage
 
 
 
 
138
 
139
+ Once the input data is ready, simply run:
140
 
141
+ ```powershell
142
+ julia --project=. generate_dataset.jl
 
 
 
 
143
  ```
144
 
145
+ The script automatically scans every case under `instances/matpower` that contains `.json.gz` files. **A full run is very time-consuming and takes a large amount of disk space**, especially for the big cases (such as `case13659pegase` and `case9241pegase`).
 
 
146
 
147
+ ### 7.2 Generating Only Selected Cases
148
 
149
+ ```powershell
150
+ $env:UC_CASES = "case14,case30"
151
+ julia --project=. generate_dataset.jl
152
+ Remove-Item Env:\UC_CASES
153
+ ```
154
 
155
+ A single case works the same way: `$env:UC_CASES = "case118"`. The dry-run switch (`UC_DRY_RUN=1`, which only prints the planned work) can be combined with the case filter.
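+
+ For reference, this is how `generate_dataset.jl` itself interprets the variable: an unset or empty `UC_CASES` means "all cases"; otherwise the value is split on commas.
+
+ ```julia
+ # From generate_dataset.jl: turn the UC_CASES environment variable into a case filter.
+ function _parse_case_filter()
+     raw = strip(get(ENV, "UC_CASES", ""))
+     isempty(raw) && return nothing      # nothing => no filter, generate every case
+     return Set(strip.(split(raw, ",")))
+ end
+ ```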
156
 
157
+ ## 8. Inspecting the MPS Output
158
 
159
+ From the `UnitCommitment_Trajectory_Test` directory, run:
160
 
161
+ ```powershell
162
+ # total number of files
163
+ Get-ChildItem ..\UnitCommitment_Trajectory_Dataset -Recurse -Filter *.mps | Measure-Object
164
  ```
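+
+ Beyond counting files, a generated model can be loaded back and solved as a spot check (a minimal sketch; the file name is illustrative, and it assumes HiGHS is installed in the environment):
+
+ ```julia
+ using JuMP, HiGHS
+
+ # Read one generated MPS file into a JuMP model and solve it.
+ model = read_from_file("../UnitCommitment_Trajectory_Dataset/case30/hourly_noline/case30_2017-01-01_h_noline.mps")
+ set_optimizer(model, HiGHS.Optimizer)
+ optimize!(model)
+ println(termination_status(model), " ", objective_value(model))
+ ```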
 
 
 
 
 
 
 
 
165
 
166
+ ## 9. Full Reproduction Pipeline
167
 
168
  ```powershell
169
+ cd UnitCommitment_Trajectory\UnitCommitment_Trajectory_Test
170
 
171
+ # 1. Initialize the Julia environment
172
  julia --project=. -e "using Pkg; Pkg.instantiate()"
 
 
 
 
 
 
 
 
 
173
 
174
+ # 2. Download the Matpower instances
175
+ python benchmark\scripts\download_matpower_instances.py
176
 
177
+ # 3. Check the number of downloaded instances
178
+ Get-ChildItem instances\matpower -Recurse -Filter *.json.gz | Measure-Object
179
 
180
+ # 4. Generate all MPS files
181
  julia --project=. generate_dataset.jl
 
 
 
182
 
183
+ # 5. Check the final MPS file count
184
+ Get-ChildItem ..\UnitCommitment_Trajectory_Dataset -Recurse -Filter *.mps | Measure-Object
185
  ```
 
186
 
187
+ ## 10. Trajectory Constraints and Preprocessing Notes
188
 
189
+ This repository is based on a modified version of UnitCommitment.jl that adds startup/shutdown trajectory constraints and instance-preprocessing logic. The relevant code lives in:
 
 
190
 
191
+ - `src/model/formulations`: trajectory-constraint modeling
+ - `src/instance/modify.jl`: instance preprocessing
193
 
194
+ For more detail and test examples, see `UnitCommitment_Trajectory_Test/README.md`.
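+
+ A minimal end-to-end sketch (paths and keyword values are illustrative; the two preprocessing functions are exported by the modified package, and `ArrCon2004.PowerTrajectories()` is the trajectory formulation used in `docs/src/tutorials/customizing.jl`):
+
+ ```julia
+ using UnitCommitment
+
+ # Part 1: write startup/shutdown curves into the raw instance data.
+ part1 = UnitCommitment.add_trajectory_curves_to_source_data(
+     "instances/matpower/case30/2017-01-01.json.gz";
+     output_path = "case30-part1.json",
+ )
+
+ # Part 2: scale minimum uptime/downtime by unit category.
+ part2 = UnitCommitment.modify_min_uptime_in_source_data(
+     part1;
+     output_path = "case30-part2.json",
+ )
+
+ # Build a model with the power-trajectory constraints enabled.
+ instance = UnitCommitment.read(part2)
+ model = UnitCommitment.build_model(
+     instance = instance,
+     formulation = UnitCommitment.Formulation(
+         power_trajectories = UnitCommitment.ArrCon2004.PowerTrajectories(),
+     ),
+ )
+ ```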
195
 
196
+ ## 11. Citation
197
 
198
+ Original UnitCommitment.jl DOI:
199
+ [10.5281/zenodo.4269874](https://doi.org/10.5281/zenodo.4269874)
UnitCommitment_Trajectory_Test/Project.toml CHANGED
@@ -6,6 +6,7 @@ version = "0.3.0"
6
  Cbc = "9961bab8-2fa3-5c5a-9d89-47fab24efd76"
7
  CodecZlib = "944b1d66-785c-5afd-91f1-9de20f533193"
8
  DataStructures = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
 
9
  Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b"
10
  Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
11
  GZip = "92fee26a-97fe-5a0c-ad85-20a5f3185b63"
@@ -27,6 +28,7 @@ TimerOutputs = "a759f4b9-e2f1-59dc-863e-4aeb61b1ea8f"
27
  Cbc = "1.3.0"
28
  CodecZlib = "0.7.8"
29
  DataStructures = "0.18.22"
 
30
  Distributed = "1.11.0"
31
  Distributions = "0.25.125"
32
  GZip = "0.5.2"
 
6
  Cbc = "9961bab8-2fa3-5c5a-9d89-47fab24efd76"
7
  CodecZlib = "944b1d66-785c-5afd-91f1-9de20f533193"
8
  DataStructures = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
9
+ Dates = "ade2ca70-3891-5945-98fb-dc099432e06a"
10
  Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b"
11
  Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
12
  GZip = "92fee26a-97fe-5a0c-ad85-20a5f3185b63"
 
28
  Cbc = "1.3.0"
29
  CodecZlib = "0.7.8"
30
  DataStructures = "0.18.22"
31
+ Dates = "1.11.0"
32
  Distributed = "1.11.0"
33
  Distributions = "0.25.125"
34
  GZip = "0.5.2"
UnitCommitment_Trajectory_Test/README.md CHANGED
@@ -1,8 +1,8 @@
1
  <h1 align="center">UnitCommitment.jl (Modified Trajectory Version)</h1>
2
 
3
- > **NOTA BENE**
4
- > This is a customized version of the original `UnitCommitment.jl` package.
5
- >
6
  > **主要修改与新增功能 (Modified Features):**
7
  > 1. **启停轨迹约束 (Power Trajectories)**: 新增了 `xxx2005` 文件夹及其对应的模型实现,支持机组的 Startup / Shutdown 启停轨迹约束 (`xxx2005.PowerTrajectories()`)。
8
  > 2. **预处理修正 (Preprocessing)**: 增加了 `pmax-preprocessing.jl` 脚本,针对特定机组的 Minimum uptime 等参数进行数据筛选与修正逻辑。
@@ -62,23 +62,26 @@ JuMP.optimize!(model)
62
  2. 在该文件夹下打开命令行(或终端),输入 `julia` 进入交互模式。
63
  3. 按下 `]` 键进入 Pkg 包管理模式(提示符会变为 `pkg>`)。
64
  4. 依次执行以下命令:
 
65
  ```julia
66
  # 激活当前目录的独立环境
67
  pkg> activate .
68
-
69
  # 实例化安装原项目自带的依赖
70
  pkg> instantiate
71
-
72
  # 安装本次测试脚本额外需要的依赖包
73
  pkg> add JuMP HiGHS GZip
74
  pkg> update GZip
75
  ```
 
76
  5. 按下 `Backspace` (退格键) 退出 Pkg 模式,回到 `julia>` 提示符,或直接关闭窗口。
77
 
78
  ### 2. 算例配置 (CASES 选择)
79
 
80
  `test_main.jl` 的顶部定义了一个 `CASES` 数组,列出了需要进行对比测试的数据集。
81
  如果您想缩短测试时间,可以打开 `test_main.jl`,利用 `#` 号注释掉暂不需要测试的算例:
 
82
  ```julia
83
  CASES =[
84
  ("testdata/case2383wp/2017-07-28.json.gz", "case2383wp"),
@@ -89,18 +92,23 @@ CASES =[
89
  ### 3. 运行测试
90
 
91
  在命令行中(确保路径为当前文件夹),直接运行:
 
92
  ```bash
93
  julia test_main.jl
94
  ```
 
95
  测试脚本将自动针对每个算例依序运行以下三种情况:
 
96
  * **Base**: 基础模型(未添加启停轨迹的原版逻辑)
97
  * **v1 (Part 1)**: 添加 Startup / Shutdown 启停曲线的模型
98
  * **v2 (Part 2)**: 在 v1 基础上筛选并修正 Minimum uptime 的模型
99
 
100
  ### 4. 测试输出结果说明
 
101
  测试完毕后,您可以在 `./test/` 目录下找到所有独立生成的算例结果文件夹(例如 `./test/case2383wp-2017-07-28/`)。
102
 
103
  每个文件夹内包含:
 
104
  1. **中间 JSON 数据集**:
105
  * `*-part1.json`:动态添加了 Startup/Shutdown curve 的实例。
106
  * `*-part2.json`:在 part1 基础上修正了 Minimum uptime 的实例。
@@ -114,6 +122,7 @@ julia test_main.jl
114
  ---
115
 
116
  ## Original Authors
 
117
  * **Alinson S. Xavier** (Argonne National Laboratory)
118
  * **Aleksandr M. Kazachkov** (University of Florida)
119
  * **Ogün Yurdakul** (Technische Universität Berlin)
@@ -149,4 +158,4 @@ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSE
149
  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
150
  OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
151
  POSSIBILITY OF SUCH DAMAGE.
152
- ```
 
1
  <h1 align="center">UnitCommitment.jl (Modified Trajectory Version)</h1>
2
 
3
+ > **NOTA BENE**
4
+ > This is a customized version of the original `UnitCommitment.jl` package.
5
+ >
6
  > **主要修改与新增功能 (Modified Features):**
7
  > 1. **启停轨迹约束 (Power Trajectories)**: 新增了 `xxx2005` 文件夹及其对应的模型实现,支持机组的 Startup / Shutdown 启停轨迹约束 (`xxx2005.PowerTrajectories()`)。
8
  > 2. **预处理修正 (Preprocessing)**: 增加了 `pmax-preprocessing.jl` 脚本,针对特定机组的 Minimum uptime 等参数进行数据筛选与修正逻辑。
 
62
  2. 在该文件夹下打开命令行(或终端),输入 `julia` 进入交互模式。
63
  3. 按下 `]` 键进入 Pkg 包管理模式(提示符会变为 `pkg>`)。
64
  4. 依次执行以下命令:
65
+
66
  ```julia
67
  # 激活当前目录的独立环境
68
  pkg> activate .
69
+
70
  # 实例化安装原项目自带的依赖
71
  pkg> instantiate
72
+
73
  # 安装本次测试脚本额外需要的依赖包
74
  pkg> add JuMP HiGHS GZip
75
  pkg> update GZip
76
  ```
77
+
78
  5. 按下 `Backspace` (退格键) 退出 Pkg 模式,回到 `julia>` 提示符,或直接关闭窗口。
79
 
80
  ### 2. 算例配置 (CASES 选择)
81
 
82
  `test_main.jl` 的顶部定义了一个 `CASES` 数组,列出了需要进行对比测试的数据集。
83
  如果您想缩短测试时间,可以打开 `test_main.jl`,利用 `#` 号注释掉暂不需要测试的算例:
84
+
85
  ```julia
86
  CASES =[
87
  ("testdata/case2383wp/2017-07-28.json.gz", "case2383wp"),
 
92
  ### 3. 运行测试
93
 
94
  在命令行中(确保路径为当前文件夹),直接运行:
95
+
96
  ```bash
97
  julia test_main.jl
98
  ```
99
+
100
  测试脚本将自动针对每个算例依序运行以下三种情况:
101
+
102
  * **Base**: 基础模型(未添加启停轨迹的原版逻辑)
103
  * **v1 (Part 1)**: 添加 Startup / Shutdown 启停曲线的模型
104
  * **v2 (Part 2)**: 在 v1 基础上筛选并修正 Minimum uptime 的模型
105
 
106
  ### 4. 测试输出结果说明
107
+
108
  测试完毕后,您可以在 `./test/` 目录下找到所有独立生成的算例结果文件夹(例如 `./test/case2383wp-2017-07-28/`)。
109
 
110
  每个文件夹内包含:
111
+
112
  1. **中间 JSON 数据集**:
113
  * `*-part1.json`:动态添加了 Startup/Shutdown curve 的实例。
114
  * `*-part2.json`:在 part1 基础上修正了 Minimum uptime 的实例。
 
122
  ---
123
 
124
  ## Original Authors
125
+
126
  * **Alinson S. Xavier** (Argonne National Laboratory)
127
  * **Aleksandr M. Kazachkov** (University of Florida)
128
  * **Ogün Yurdakul** (Technische Universität Berlin)
 
158
  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
159
  OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
160
  POSSIBILITY OF SUCH DAMAGE.
161
+ ```
UnitCommitment_Trajectory_Test/benchmark/scripts/download_matpower_instances.py ADDED
@@ -0,0 +1,191 @@
1
+ from __future__ import annotations
2
+
3
+ import argparse
4
+ import datetime as dt
5
+ import os
6
+ import re
7
+ import sys
8
+ import time
9
+ import urllib.error
10
+ import urllib.request
11
+ from concurrent.futures import ThreadPoolExecutor, as_completed
12
+ from dataclasses import dataclass
13
+ from pathlib import Path
14
+
15
+ BASE_URL_DEFAULT = "https://axavier.org/UnitCommitment.jl/0.4/instances"
16
+
17
+
18
+ @dataclass(frozen=True)
19
+ class Task:
20
+ case: str
21
+ date: dt.date
22
+ url: str
23
+ out_path: Path
24
+
25
+
26
+ def _iter_dates(start: dt.date, end: dt.date):
27
+ if end < start:
28
+ raise ValueError("end_date must be >= start_date")
29
+ cur = start
30
+ one = dt.timedelta(days=1)
31
+ while cur <= end:
32
+ yield cur
33
+ cur += one
34
+
35
+
36
+ def _default_instances_md_path() -> Path:
37
+ here = Path(__file__).resolve()
38
+ project_root = here.parents[2]
39
+ return project_root / "docs" / "src" / "guides" / "instances.md"
40
+
41
+
42
+ def _parse_matpower_cases_from_instances_md(instances_md_path: Path) -> list[str]:
43
+ text = instances_md_path.read_text(encoding="utf-8")
44
+ pattern = re.compile(r"matpower/(case[^/\s`]+)/\d{4}-\d{2}-\d{2}")
45
+ cases = sorted(set(pattern.findall(text)))
46
+ if not cases:
47
+ raise RuntimeError(f"No MATPOWER cases found in {instances_md_path}")
48
+ return cases
49
+
50
+
51
+ def _download_one(task: Task, timeout_s: float, retries: int, force: bool) -> tuple[str, str | None]:
52
+ task.out_path.parent.mkdir(parents=True, exist_ok=True)
53
+
54
+ if task.out_path.exists() and not force and task.out_path.stat().st_size > 0:
55
+ return ("skipped", None)
56
+
57
+ tmp_path = task.out_path.with_suffix(task.out_path.suffix + ".part")
58
+ req = urllib.request.Request(task.url, headers={"User-Agent": "UnitCommitment downloader"})
59
+
60
+ last_err = None
61
+ for attempt in range(retries + 1):
62
+ try:
63
+ with urllib.request.urlopen(req, timeout=timeout_s) as resp:
64
+ with open(tmp_path, "wb") as f:
65
+ while True:
66
+ chunk = resp.read(1024 * 256)
67
+ if not chunk:
68
+ break
69
+ f.write(chunk)
70
+ os.replace(tmp_path, task.out_path)
71
+ return ("downloaded", None)
72
+ except (urllib.error.URLError, urllib.error.HTTPError, TimeoutError, OSError) as e:
73
+ last_err = e
74
+ try:
75
+ if tmp_path.exists():
76
+ tmp_path.unlink()
77
+ except OSError:
78
+ pass
79
+ if attempt < retries:
80
+ time.sleep(min(10.0, 0.5 * (2**attempt)))
81
+ continue
82
+ return ("failed", f"{type(last_err).__name__}: {last_err}")
83
+
84
+ return ("failed", f"{type(last_err).__name__}: {last_err}")
85
+
86
+
87
+ def _build_tasks(
88
+ base_url: str,
89
+ out_dir: Path,
90
+ cases: list[str],
91
+ start_date: dt.date,
92
+ end_date: dt.date,
93
+ ) -> list[Task]:
94
+ tasks: list[Task] = []
95
+ for case in cases:
96
+ for d in _iter_dates(start_date, end_date):
97
+ date_str = d.isoformat()
98
+ url = f"{base_url}/matpower/{case}/{date_str}.json.gz"
99
+ out_path = out_dir / case / f"{date_str}.json.gz"
100
+ tasks.append(Task(case=case, date=d, url=url, out_path=out_path))
101
+ return tasks
102
+
103
+
104
+ def _parse_date(s: str) -> dt.date:
105
+ return dt.date.fromisoformat(s)
106
+
107
+
108
+ def main(argv: list[str]) -> int:
109
+ parser = argparse.ArgumentParser()
110
+ parser.add_argument("--instances-md", type=Path, default=_default_instances_md_path())
111
+ parser.add_argument("--base-url", type=str, default=BASE_URL_DEFAULT)
112
+ parser.add_argument(
113
+ "--out-dir",
114
+ type=Path,
115
+ default=(Path(__file__).resolve().parents[2] / "instances" / "matpower"),
116
+ )
117
+ parser.add_argument("--start-date", type=_parse_date, default=dt.date(2017, 1, 1))
118
+ parser.add_argument("--end-date", type=_parse_date, default=dt.date(2017, 12, 31))
119
+ parser.add_argument("--workers", type=int, default=min(32, (os.cpu_count() or 4) * 4))
120
+ parser.add_argument("--timeout", type=float, default=60.0)
121
+ parser.add_argument("--retries", type=int, default=3)
122
+ parser.add_argument("--force", action="store_true")
123
+ parser.add_argument("--dry-run", action="store_true")
124
+ parser.add_argument("--list-cases", action="store_true")
125
+ args = parser.parse_args(argv)
126
+
127
+ instances_md_path: Path = args.instances_md
128
+ if not instances_md_path.exists():
129
+ raise FileNotFoundError(str(instances_md_path))
130
+
131
+ cases = _parse_matpower_cases_from_instances_md(instances_md_path)
132
+ if args.list_cases:
133
+ for c in cases:
134
+ print(c)
135
+ return 0
136
+
137
+ out_dir: Path = args.out_dir
138
+ tasks = _build_tasks(
139
+ base_url=args.base_url.rstrip("/"),
140
+ out_dir=out_dir,
141
+ cases=cases,
142
+ start_date=args.start_date,
143
+ end_date=args.end_date,
144
+ )
145
+
146
+ print(f"cases={len(cases)} files={len(tasks)} out_dir={out_dir}")
147
+ if args.dry_run:
148
+ for t in tasks[:10]:
149
+ print(f"{t.url} -> {t.out_path}")
150
+ if len(tasks) > 10:
151
+ print("...")
152
+ return 0
153
+
154
+ downloaded = 0
155
+ skipped = 0
156
+ failed = 0
157
+ total = len(tasks)
158
+ t0 = time.time()
159
+ last_print = 0.0
160
+
161
+ with ThreadPoolExecutor(max_workers=max(1, args.workers)) as ex:
162
+ fut_to_task = {
163
+ ex.submit(_download_one, t, args.timeout, args.retries, args.force): t for t in tasks
164
+ }
165
+ for i, fut in enumerate(as_completed(fut_to_task), start=1):
166
+ status, err = fut.result()
167
+ if status == "downloaded":
168
+ downloaded += 1
169
+ elif status == "skipped":
170
+ skipped += 1
171
+ else:
172
+ failed += 1
173
+ task = fut_to_task[fut]
174
+ sys.stderr.write(f"FAILED {task.url} -> {task.out_path} ({err})\n")
175
+
176
+ now = time.time()
177
+ if now - last_print >= 1.0 or i == total:
178
+ elapsed = max(0.001, now - t0)
179
+ rate = i / elapsed
180
+ print(
181
+ f"{i}/{total} downloaded={downloaded} skipped={skipped} failed={failed} rate={rate:.1f}/s"
182
+ )
183
+ last_print = now
184
+
185
+ if failed:
186
+ return 2
187
+ return 0
188
+
189
+
190
+ if __name__ == "__main__":
191
+ raise SystemExit(main(sys.argv[1:]))
UnitCommitment_Trajectory_Test/create_scuc_mps_files.jl CHANGED
@@ -1,100 +1,46 @@
1
- # =========================================================================
2
- # 脚本名称: create_scuc_mps_files.jl
3
- # 运行环境: D:\e-task\UnitCommitment.jl\dev\
4
- # 功能目标: 生成 4 个版本的机组组合(UC)优化模型的 MPS 文件
5
- # =========================================================================
6
-
7
- # 1. 导入必需的包
8
- using UnitCommitment
9
  using JuMP
 
10
 
11
-
12
- function generate_mps_files()
13
- base_instance_name = "instances/matpower/case30/2017-01-01.json.gz"
14
-
15
- println("==================================================")
16
- println(" 开始生成测试算例 MPS 文件 (基于 $base_instance_name)")
17
- println("==================================================\n")
18
-
19
- # ==========================================================
20
- # 版本 1: Default (Hourly) - 不加线路约束
21
- # ==========================================================
22
- println(">>> 正在生成 版本 1/4: Default (Hourly), 不加线路约束")
23
- instance_v1 = UnitCommitment.read(base_instance_name)
24
-
25
- # 【核心操作】:直接清空内存中的线路数据,确保 build_model 不加线路约束
26
- empty!(instance_v1.scenarios[1].lines)
27
-
28
- model_v1 = UnitCommitment.build_model(
29
- instance=instance_v1,
30
- formulation=UnitCommitment.Formulation(
31
- transmission=UnitCommitment.ShiftFactorsFormulation(
32
- precomputed_isf=zeros(0,0), precomputed_lodf=zeros(0,0)
33
- )
34
  ),
35
- variable_names=true
36
  )
37
- JuMP.write_to_file(model_v1, "uc_default_noline.mps")
38
- println(" -> 成功保存: uc_default_noline.mps\n")
39
-
40
-
41
- # ==========================================================
42
- # 版本 2: Default (Hourly) - 加线路约束 (SCUC)
43
- # ==========================================================
44
- println(">>> 正在生成 版本 2/4: Default (Hourly), 加线路约束")
45
- instance_v2 = UnitCommitment.read(base_instance_name)
46
-
47
- # 因为 case14 默认自带线路数据,这里直接 build 就会包含网络约束
48
- model_v2 = UnitCommitment.build_model(instance=instance_v2, variable_names=true)
49
-
50
- JuMP.write_to_file(model_v2, "uc_default_withline.mps")
51
- println(" -> 成功保存: uc_default_withline.mps\n")
52
-
53
-
54
- # ==========================================================
55
- # 版本 3: Subhourly - 不加线路约束
56
- # ==========================================================
57
- println(">>> 正在生成 版本 3/4: Subhourly, 不加线路约束")
58
- instance_v3_base = UnitCommitment.read(base_instance_name)
59
-
60
- # 使用你找到的本地函数,把相同的 instance 传两次来充当"今天"和"明天"
61
- instance_v3 = UnitCommitment.convert_to_subhourly(instance_v3_base, instance_v3_base)
62
-
63
- # 同样地,清空线路数据以确保无网络约束
64
- empty!(instance_v3.scenarios[1].lines)
65
-
66
- model_v3 = UnitCommitment.build_model(
67
- instance=instance_v3,
68
- formulation=UnitCommitment.Formulation(
69
- transmission=UnitCommitment.ShiftFactorsFormulation(
70
- precomputed_isf=zeros(0,0), precomputed_lodf=zeros(0,0)
71
- )
72
  ),
73
- variable_names=true
74
  )
75
- JuMP.write_to_file(model_v3, "uc_subhourly_noline.mps")
76
- println(" -> 成功保存: uc_subhourly_noline.mps\n")
77
-
78
 
79
- # ==========================================================
80
- # 版本 4: Subhourly - 加线路约束 (SCUC)
81
- # ==========================================================
82
- println(">>> 正在生成 版本 4/4: Subhourly, 加线路约束")
83
- instance_v4_base = UnitCommitment.read(base_instance_name)
84
-
85
- # 转换为 Subhourly 数据
86
- instance_v4 = UnitCommitment.convert_to_subhourly(instance_v4_base, instance_v4_base)
87
-
88
- # 保留线路数据,直接 build 即可包含线路约束
89
- model_v4 = UnitCommitment.build_model(instance=instance_v4, variable_names=true)
90
-
91
- JuMP.write_to_file(model_v4, "uc_subhourly_withline.mps")
92
- println(" -> 成功保存: uc_subhourly_withline.mps\n")
93
 
94
- println("==================================================")
95
- println("🎉 任务圆满完成!所有 4 个 MPS 文件均已生成!")
96
- println("==================================================")
97
  end
98
 
99
- # 执行主函数
100
- generate_mps_files()
 
 
 
 
 
 
 
 
 
1
  using JuMP
2
+ using UnitCommitment
3
 
4
+ function generate_mps_files(; base_instance_path::String = "instances/matpower/case30/2017-01-01.json.gz")
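+ # Read the base instance once, then deepcopy it before mutating, so the
+ # with-line variant keeps its transmission (line) data intact.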
5
+ instance_hourly = UnitCommitment.read(base_instance_path)
6
+ instance_hourly_noline = deepcopy(instance_hourly)
7
+ empty!(instance_hourly_noline.scenarios[1].lines)
8
+
9
+ model_hourly_noline = UnitCommitment.build_model(
10
+ instance = instance_hourly_noline,
11
+ formulation = UnitCommitment.Formulation(
12
+ transmission = UnitCommitment.ShiftFactorsFormulation(
13
+ precomputed_isf = zeros(0, 0),
14
+ precomputed_lodf = zeros(0, 0),
15
+ ),
 
 
 
 
 
 
 
 
 
 
 
16
  ),
17
+ variable_names = true,
18
  )
19
+ JuMP.write_to_file(model_hourly_noline, "uc_default_noline.mps")
20
+
21
+ model_hourly_withline = UnitCommitment.build_model(instance = instance_hourly, variable_names = true)
22
+ JuMP.write_to_file(model_hourly_withline, "uc_default_withline.mps")
23
+
24
+ instance_sub = UnitCommitment.convert_to_subhourly(instance_hourly, instance_hourly)
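+ # ^ convert_to_subhourly expands 24 hourly periods into 96 fifteen-minute
+ # periods; passing the same instance twice supplies the required "next day"
+ # data (as the original version of this script noted).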
25
+ instance_sub_noline = deepcopy(instance_sub)
26
+ empty!(instance_sub_noline.scenarios[1].lines)
27
+
28
+ model_sub_noline = UnitCommitment.build_model(
29
+ instance = instance_sub_noline,
30
+ formulation = UnitCommitment.Formulation(
31
+ transmission = UnitCommitment.ShiftFactorsFormulation(
32
+ precomputed_isf = zeros(0, 0),
33
+ precomputed_lodf = zeros(0, 0),
34
+ ),
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
35
  ),
36
+ variable_names = true,
37
  )
38
+ JuMP.write_to_file(model_sub_noline, "uc_subhourly_noline.mps")
 
 
39
 
40
+ model_sub_withline = UnitCommitment.build_model(instance = instance_sub, variable_names = true)
41
+ JuMP.write_to_file(model_sub_withline, "uc_subhourly_withline.mps")
 
 
 
 
 
 
 
 
 
 
 
 
42
 
43
+ return nothing
 
 
44
  end
45
 
46
+ generate_mps_files()
 
UnitCommitment_Trajectory_Test/docs/src/tutorials/customizing.jl CHANGED
@@ -35,7 +35,7 @@ model = UnitCommitment.build_model(
35
  isf_cutoff = 0.008,
36
  lodf_cutoff = 0.003,
37
  ),
38
- power_trajectories = xxx2005.PowerTrajectories(),
39
  ),
40
  );
41
 
 
35
  isf_cutoff = 0.008,
36
  lodf_cutoff = 0.003,
37
  ),
38
+ power_trajectories = ArrCon2004.PowerTrajectories(),
39
  ),
40
  );
41
 
UnitCommitment_Trajectory_Test/generate_dataset.jl CHANGED
@@ -1,89 +1,167 @@
1
- # =========================================================================
2
- # 脚本名称: generate_dataset.jl (全量增强版)
3
- # 功能: 自动化生成包含 case14, case30, case2383wp 的全量 MIP 优化模型
4
- # 说明: 生成的 .mps 文件包含大量二进制变量,专用于混合整数规划 (MIP) 研究
5
- # 路径: 输出结果将导出至上级目录的 UnitCommitment_Trajectory_Dataset 文件夹
6
- # =========================================================================
7
-
8
- using UnitCommitment
9
  using JuMP
10
- using CodecZlib
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
11
 
12
- # 配置路径映射
13
- config = Dict(
14
- "case14" => "instances/matpower/case14",
15
- "case30" => "instances/matpower/case30",
16
- "case2383wp" => "testdata/case2383wp"
 
17
  )
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
18
 
19
- output_root = "../UnitCommitment_Trajectory_Dataset"
20
- variants = ["hourly_noline", "hourly_withline", "subhourly_noline", "subhourly_withline"]
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
21
 
22
- function run_full_generation()
23
- println("🚀 开始全量数据集生成任务 (包含大型算例)...")
24
-
25
- # 确保根目录存在
26
  mkpath(output_root)
 
 
 
 
 
 
 
 
 
27
 
28
- for (case_name, case_path) in config
29
- println("\n>>> 正在准备算例: $case_name")
30
-
31
- # 创建目录
32
- for v in variants
33
- mkpath(joinpath(output_root, case_name, v))
34
  end
35
-
36
- # 获取目录下所有的 json.gz 文件
37
- all_files = filter(f -> endswith(f, ".json.gz"), readdir(case_path))
38
-
39
- for (i, file_name) in enumerate(all_files)
 
 
 
 
 
 
 
40
  date_tag = split(file_name, ".")[1]
41
- src_path = joinpath(case_path, file_name)
42
- println(" [$case_name] 处理进度: $i/$(length(all_files)) ($date_tag)")
43
-
44
- # --- 建模逻辑 ---
45
-
46
- # 1. Hourly No-Line
47
- instance_v1 = UnitCommitment.read(src_path)
48
- empty!(instance_v1.scenarios[1].lines)
49
- model_v1 = UnitCommitment.build_model(
50
- instance=instance_v1,
51
- formulation=UnitCommitment.Formulation(
52
- transmission=UnitCommitment.ShiftFactorsFormulation(precomputed_isf=zeros(0,0), precomputed_lodf=zeros(0,0))
53
- ),
54
- variable_names=true
55
- )
56
- JuMP.write_to_file(model_v1, joinpath(output_root, case_name, "hourly_noline", "$(case_name)_$(date_tag)_h_noline.mps"))
57
-
58
- # 2. Hourly With-Line
59
- instance_v2 = UnitCommitment.read(src_path)
60
- model_v2 = UnitCommitment.build_model(instance=instance_v2, variable_names=true)
61
- JuMP.write_to_file(model_v2, joinpath(output_root, case_name, "hourly_withline", "$(case_name)_$(date_tag)_h_withline.mps"))
62
-
63
- # 3. Subhourly No-Line
64
- instance_v3_base = UnitCommitment.read(src_path)
65
- instance_v3 = UnitCommitment.convert_to_subhourly(instance_v3_base, instance_v3_base)
66
- empty!(instance_v3.scenarios[1].lines)
67
- model_v3 = UnitCommitment.build_model(
68
- instance=instance_v3,
69
- formulation=UnitCommitment.Formulation(
70
- transmission=UnitCommitment.ShiftFactorsFormulation(precomputed_isf=zeros(0,0), precomputed_lodf=zeros(0,0))
71
- ),
72
- variable_names=true
73
- )
74
- JuMP.write_to_file(model_v3, joinpath(output_root, case_name, "subhourly_noline", "$(case_name)_$(date_tag)_s_noline.mps"))
75
-
76
- # 4. Subhourly With-Line
77
- instance_v4_base = UnitCommitment.read(src_path)
78
- instance_v4 = UnitCommitment.convert_to_subhourly(instance_v4_base, instance_v4_base)
79
- model_v4 = UnitCommitment.build_model(instance=instance_v4, variable_names=true)
80
- JuMP.write_to_file(model_v4, joinpath(output_root, case_name, "subhourly_withline", "$(case_name)_$(date_tag)_s_withline.mps"))
81
-
82
- # 及时释放内存 (Julia 的 GC 有时会有延迟)
83
- model_v1 = model_v2 = model_v3 = model_v4 = nothing
84
  end
85
  end
86
- println("\n✅ 所有算例生成完毕!生成的 MPS 文件存放在: $output_root")
 
 
87
  end
88
 
89
- run_full_generation()
 
 
 
 
 
 
 
 
 
 
 
1
  using JuMP
2
+ using UnitCommitment
3
+
4
+ const DEFAULT_INPUT_ROOT = "instances/matpower"
5
+ const DEFAULT_OUTPUT_ROOT = "../UnitCommitment_Trajectory_Dataset"
6
+ const VARIANTS = ("hourly_noline", "hourly_withline", "subhourly_noline", "subhourly_withline")
7
+
8
+ function _build_noline_formulation()
9
+ return UnitCommitment.Formulation(
10
+ transmission = UnitCommitment.ShiftFactorsFormulation(
11
+ precomputed_isf = zeros(0, 0),
12
+ precomputed_lodf = zeros(0, 0),
13
+ ),
14
+ )
15
+ end
16
+
17
+ function _write_mps(model::JuMP.Model, path::String)
18
+ mkpath(dirname(path))
19
+ JuMP.write_to_file(model, path)
20
+ return nothing
21
+ end
22
+
23
+ function _list_json_gz(case_dir::String)
24
+ files = filter(f -> endswith(f, ".json.gz"), readdir(case_dir))
25
+ sort!(files)
26
+ return files
27
+ end
28
+
29
+ function discover_matpower_cases(input_root::String = DEFAULT_INPUT_ROOT)
30
+ isdir(input_root) || error("Input directory does not exist: $input_root")
31
+
32
+ case_dirs = filter(d -> isdir(joinpath(input_root, d)), readdir(input_root))
33
+ sort!(case_dirs)
34
+
35
+ return [
36
+ (case_name, joinpath(input_root, case_name))
37
+ for case_name in case_dirs
38
+ if !isempty(_list_json_gz(joinpath(input_root, case_name)))
39
+ ]
40
+ end
41
+
42
+ function _parse_case_filter()
43
+ raw = strip(get(ENV, "UC_CASES", ""))
44
+ isempty(raw) && return nothing
45
+ return Set(strip.(split(raw, ",")))
46
+ end
47
+
48
+ function _is_truthy_env(name::String)
49
+ value = lowercase(strip(get(ENV, name, "")))
50
+ return value in ("1", "true", "yes", "y")
51
+ end
52
+
53
+ function _selected_cases(input_root::String)
54
+ cases = discover_matpower_cases(input_root)
55
+ selected = _parse_case_filter()
56
+ selected === nothing && return cases
57
+ return filter(case -> case[1] in selected, cases)
58
+ end
59
 
60
+ function _generate_one_instance!(
61
+ case_name::AbstractString,
62
+ date_tag::AbstractString,
63
+ src_path::AbstractString,
64
+ output_root::AbstractString,
65
+ noline_formulation,
66
  )
67
+ inst_hourly = UnitCommitment.read(src_path)
68
+ inst_hourly_noline = deepcopy(inst_hourly)
69
+ empty!(inst_hourly_noline.scenarios[1].lines)
70
+
71
+ model_hourly_noline = UnitCommitment.build_model(
72
+ instance = inst_hourly_noline,
73
+ formulation = noline_formulation,
74
+ variable_names = true,
75
+ )
76
+ _write_mps(
77
+ model_hourly_noline,
78
+ joinpath(output_root, case_name, "hourly_noline", "$(case_name)_$(date_tag)_h_noline.mps"),
79
+ )
80
+
81
+ model_hourly_withline = UnitCommitment.build_model(
82
+ instance = inst_hourly,
83
+ variable_names = true,
84
+ )
85
+ _write_mps(
86
+ model_hourly_withline,
87
+ joinpath(output_root, case_name, "hourly_withline", "$(case_name)_$(date_tag)_h_withline.mps"),
88
+ )
89
+
90
+ inst_sub = UnitCommitment.convert_to_subhourly(inst_hourly, inst_hourly)
91
+ inst_sub_noline = deepcopy(inst_sub)
92
+ empty!(inst_sub_noline.scenarios[1].lines)
93
+
94
+ model_sub_noline = UnitCommitment.build_model(
95
+ instance = inst_sub_noline,
96
+ formulation = noline_formulation,
97
+ variable_names = true,
98
+ )
99
+ _write_mps(
100
+ model_sub_noline,
101
+ joinpath(output_root, case_name, "subhourly_noline", "$(case_name)_$(date_tag)_s_noline.mps"),
102
+ )
103
 
104
+ model_sub_withline = UnitCommitment.build_model(
105
+ instance = inst_sub,
106
+ variable_names = true,
107
+ )
108
+ _write_mps(
109
+ model_sub_withline,
110
+ joinpath(output_root, case_name, "subhourly_withline", "$(case_name)_$(date_tag)_s_withline.mps"),
111
+ )
112
+
113
+ return nothing
114
+ end
115
+
116
+ function generate_dataset(;
117
+ input_root::String = get(ENV, "UC_INPUT_ROOT", DEFAULT_INPUT_ROOT),
118
+ output_root::String = get(ENV, "UC_OUTPUT_ROOT", DEFAULT_OUTPUT_ROOT),
119
+ )
120
+ cases = _selected_cases(input_root)
121
+ isempty(cases) && error("No cases selected under $input_root. Check UC_CASES or the input directory.")
122
 
 
 
 
 
123
  mkpath(output_root)
124
+ noline_formulation = _build_noline_formulation()
125
+
126
+ total_instances = sum(length(_list_json_gz(case_dir)) for (_, case_dir) in cases)
127
+ println("Input root: $input_root")
128
+ println("Output root: $output_root")
129
+ println("Cases: $(length(cases))")
130
+ println("Instances: $total_instances")
131
+ println("Variants: $(length(VARIANTS))")
132
+ println("MPS files: $(total_instances * length(VARIANTS))")
133
 
134
+ if _is_truthy_env("UC_DRY_RUN")
135
+ println("\nDry run only. Set UC_DRY_RUN=0 or remove it to generate MPS files.")
136
+ for (case_name, case_dir) in cases
137
+ println(" $case_name: $(length(_list_json_gz(case_dir))) instances")
 
 
138
  end
139
+ return nothing
140
+ end
141
+
142
+ for (case_index, (case_name, case_dir)) in enumerate(cases)
143
+ files = _list_json_gz(case_dir)
144
+ println("\n[$case_index/$(length(cases))] $case_name ($(length(files)) instances)")
145
+
146
+ for variant in VARIANTS
147
+ mkpath(joinpath(output_root, case_name, variant))
148
+ end
149
+
150
+ for (i, file_name) in enumerate(files)
151
  date_tag = split(file_name, ".")[1]
152
+ src_path = joinpath(case_dir, file_name)
153
+
154
+ _generate_one_instance!(case_name, date_tag, src_path, output_root, noline_formulation)
155
+
156
+ GC.gc()
157
+ println(" [$case_name] $i/$(length(files)) $date_tag")
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
158
  end
159
  end
160
+
161
+ println("\nDone. output_root=$output_root")
162
+ return nothing
163
  end
164
 
165
+ if abspath(PROGRAM_FILE) == @__FILE__
166
+ generate_dataset()
167
+ end
UnitCommitment_Trajectory_Test/pmax-preprocessing.jl CHANGED
@@ -1,208 +1,7 @@
1
- # preprocessing.jl
2
- # Part 1: 直接从JSON提取Pmax,按 90% 位置计算阈值,对前x%合格机组写入轨迹约束,并重排机组列表输出 JSON_v1
3
- # Part 2: 读取 JSON_v1,对重排后的前10%和10%~20%机组修改 Minimum uptime (h),输出 JSON_v2
4
 
5
- using JSON
6
- using CodecZlib
7
 
8
- # ─────────────────────────────────────────────────────────────────────────────
9
- # Part 1: 生成 JSON_v1 (添加起停轨迹约束)
10
- # 输入:
11
- # json_path : 原始JSON路径
12
- # top_pct : 添加曲线的比例,默认前10%
13
- # output_path : 输出的 JSON_v1 路径
14
- # 输出:
15
- # JSON_v1 路径(字符串)
16
- # ─────────────────────────────────────────────────────────────────────────────
17
- function add_trajectory_curves(
18
- json_path::String;
19
- top_pct::Float64 = 0.10,
20
- output_path::String = replace(json_path, ".json" => "-part1.json"),
21
- )
22
- # ── 读取原始JSON ─────────────────────────────────────────────────────────
23
- json_data = open(json_path) do io
24
- if endswith(json_path, ".gz")
25
- decompressor = GzipDecompressorStream(io)
26
- JSON.parse(decompressor)
27
- else
28
- JSON.parse(io)
29
- end
30
- end
31
-
32
- generators = json_data["Generators"]
33
-
34
- # 仅处理含 "Minimum uptime (h)" 字段的机组
35
- thermal_names = filter(
36
- u -> haskey(generators[u], "Minimum uptime (h)"),
37
- collect(keys(generators))
38
- )
39
- n_total = length(thermal_names)
40
-
41
- # ── 1. 提取每个机组的 Pmax 和 Pmin ───────────────────────────────────────
42
- # 直接读取 "Production cost curve (MW)" 数组的第一个值(Pmin)和最后一个值(Pmax)
43
- Pmax_dict = Dict{String, Float64}()
44
- Pmin_dict = Dict{String, Float64}()
45
-
46
- for u in thermal_names
47
- curve_mw = generators[u]["Production cost curve (MW)"]
48
- Pmin_dict[u] = Float64(curve_mw[1])
49
- Pmax_dict[u] = Float64(curve_mw[end])
50
- end
51
-
52
- # ── 2. 计算下界阈值 (Threshold) ──────────────────────────────────────────
53
- # 将全网 Pmax 降序排列,取 90% 位置的值
54
- all_pmax_desc = sort(collect(values(Pmax_dict)), rev=true)
55
- idx_90 = max(1, ceil(Int, n_total * 0.90))
56
- pmax_90_val = all_pmax_desc[idx_90]
57
-
58
- # 阈值 = max(10, 降序第90%位置的值)
59
- threshold = max(10.0, pmax_90_val)
60
-
61
- println("── Part 1: 筛选与约束添加 ──")
62
- println(" 热机组总数: $n_total")
63
- println(" Pmax 降序第 90% 位置 (第 $idx_90 名) 的值: $pmax_90_val")
64
- println(" Pmax 筛选下界阈值 (Threshold): $threshold")
65
-
66
- # ── 3. 主干排序:主键 Minimum uptime 降序,次键 Pmax 降序 ───────────────
67
- sort!(
68
- thermal_names,
69
- by = u -> (
70
- get(generators[u], "Minimum uptime (h)", 0.0), # 主键:降序
71
- Pmax_dict[u] # 次键:降序
72
- ),
73
- rev = true,
74
- )
75
-
76
- # ── 4. 顺延挑选并重组数组供 Part 2 使用 ──────────────────────
77
- qualified_units = String[]
78
- disqualified_units = String[]
79
-
80
- for u in thermal_names
81
- if Pmax_dict[u] >= threshold
82
- push!(qualified_units, u)
83
- else
84
- push!(disqualified_units, u)
85
- end
86
- end
87
-
88
- # 核心重组逻辑:合格的排在最前面(保持原相对顺序),淘汰的全部扔到末尾
89
- # 这样 Part 2 按照索引 1~n_top 和 n_top+1~n_second 去读时,拿到的全都是合格机组
90
- reordered_units = vcat(qualified_units, disqualified_units)
91
-
92
- # ── 5. 确定名额并向合格的前 n_top 台机组写入轨迹曲线 ─────────────────────
93
- n_top = max(1, ceil(Int, n_total * top_pct))
94
-
95
- # 防止合格机组总数少于 n_top 的极端情况
96
- actual_top_count = min(n_top, length(qualified_units))
97
-
98
- println("\n── 选中并添加轨迹的机组名单 (前 10% 名额: $n_top) ──")
99
- for u in reordered_units[1:actual_top_count]
100
- uptime = get(generators[u], "Minimum uptime (h)", 0)
101
- pmax = Pmax_dict[u]
102
- pmin = Pmin_dict[u]
103
-
104
- # 写入 Startup / Shutdown curve
105
- generators[u]["Startup curve (MW)"] = [pmin / 2.0, pmin]
106
- generators[u]["Shutdown curve (MW)"] = [pmin, pmin / 2.0]
107
-
108
- println(" [写入轨迹] $(rpad(u,10)) Uptime=$uptime Pmax=$(round(pmax, digits=2)) Pmin=$(round(pmin, digits=2))")
109
- end
110
-
111
- println("\n── 被 Pmax 阈值淘汰的机组 (展示前几位) ──")
112
- for u in disqualified_units[1:min(5, length(disqualified_units))]
113
- uptime = get(generators[u], "Minimum uptime (h)", 0)
114
- pmax = Pmax_dict[u]
115
- println(" [不足下界] $(rpad(u,10)) Uptime=$uptime Pmax=$(round(pmax, digits=2)) < 阈值 $threshold")
116
- end
117
-
118
- # ── 6. 将重组后的结果存入元数据,交接给 Part 2 ───────────────────────────
119
- json_data["_sorted_thermal_units"] = reordered_units
120
-
121
- # ── 7. 写出 JSON_v1 ──────────────────────────────────────────────────────
122
- open(output_path, "w") do f
123
- JSON.print(f, json_data, 4)
124
- end
125
- println("\nPart 1 完成 → 输出保存至: $output_path")
126
-
127
- return output_path
128
- end
129
-
130
-
131
- # ─────────────────────────────────────────────────────────────────────────────
132
- # Part 2: 生成 JSON_v2 (修改 Minimum uptime)
133
- # (注:此处代码根据需求完全保持原样,未做任何核心逻辑改动,无缝衔接 Part 1)
134
- # ─────────────────────────────────────────────────────────────────────────────
135
- function modify_min_uptime(
136
- json_v1_path::String;
137
- top_pct::Float64 = 0.10,
138
- output_path::String = replace(json_v1_path, "-part1.json" => "-part2.json"),
139
- )
140
- # 读取 JSON_v1
141
- json_data = JSON.parsefile(json_v1_path)
142
- generators = json_data["Generators"]
143
-
144
- # 读取 Part 1 保存的重排结果
145
- haskey(json_data, "_sorted_thermal_units") ||
146
- error("缺少 _sorted_thermal_units 元数据,请先运行 Part 1(add_trajectory_curves)")
147
-
148
- sorted_units = json_data["_sorted_thermal_units"]
149
- n_total = length(sorted_units)
150
-
151
- # 计算两个区间的索引边界
152
- n_top = max(1, ceil(Int, n_total * top_pct))
153
- n_second = min(n_total, ceil(Int, n_total * top_pct * 2))
154
-
155
- println("\n── Part 2: Uptime 修改 ──")
156
- println(" 机组总数: $n_total")
157
- println(" 前10%区间: 1 ~ $n_top")
158
- println(" 10%~20%区间: $(n_top+1) ~ $n_second")
159
-
160
- skipped_top = String[]
161
- skipped_second = String[]
162
- modified_top = String[]
163
- modified_second= String[]
164
-
165
- # ── 处理前 10% 机组 ──────────────────────────────────────────────────────
166
- for u in sorted_units[1:n_top]
167
- uptime = get(generators[u], "Minimum uptime (h)", 1)
168
- if uptime <= 5
169
- generators[u]["Minimum uptime (h)"] = uptime * 3
170
- push!(modified_top, u)
171
- println("[10% 区间 | ×3 修改] $u uptime: $uptime → $(uptime*3)")
172
- else
173
- push!(skipped_top, u)
174
- println(" [10% 区间 | 跳过] $u uptime=$uptime > 5,不修改")
175
- end
176
- end
177
-
178
- # ── 处理 10% ~ 20% 机组 ──────────────────────────────────────────────────
179
- if n_top < n_second
180
- for u in sorted_units[n_top+1:n_second]
181
- uptime = get(generators[u], "Minimum uptime (h)", 1)
182
- if uptime <= 5
183
- generators[u]["Minimum uptime (h)"] = uptime * 2
184
- push!(modified_second, u)
185
- println("[20% 区间 | ×2 修改] $u uptime: $uptime → $(uptime*2)")
186
- else
187
- push!(skipped_second, u)
188
- println(" [20% 区间 | 跳过] $u uptime=$uptime > 5,不修改")
189
- end
190
- end
191
- end
192
-
193
- # ── 删除元数据字段,不写入最终 JSON_v2 ──────────────────────────────────
194
- delete!(json_data, "_sorted_thermal_units")
195
-
196
- # ── 写出 JSON_v2 ────────────────────────────────────────────────────────
197
- open(output_path, "w") do f
198
- JSON.print(f, json_data, 4)
199
- end
200
-
201
- println("\nPart 2 完成 → 输出保存至: $output_path")
202
- println(" 前10% 已修改(×3): $(length(modified_top)) 个")
203
- println(" 前10% 已跳过(>5): $(length(skipped_top)) 个")
204
- println(" 10~20% 已修改(×2): $(length(modified_second)) 个")
205
- println(" 10~20% 已跳过(>5): $(length(skipped_second)) 个")
206
-
207
- return output_path
208
- end
 
1
+ using UnitCommitment
 
 
2
 
3
+ add_trajectory_curves(args...; kwargs...) =
4
+ UnitCommitment.add_trajectory_curves_to_source_data(args...; kwargs...)
5
 
6
+ modify_min_uptime(args...; kwargs...) =
7
+ UnitCommitment.modify_min_uptime_in_source_data(args...; kwargs...)
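+
+ # Usage sketch (paths illustrative): run Part 1, then feed its output to Part 2.
+ #   part1 = add_trajectory_curves("instances/matpower/case30/2017-01-01.json.gz";
+ #                                 output_path = "case30-part1.json")
+ #   part2 = modify_min_uptime(part1; output_path = "case30-part2.json")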
UnitCommitment_Trajectory_Test/src/UnitCommitment.jl CHANGED
@@ -23,11 +23,13 @@ include("solution/methods/XavQiuWanThi2019/structs.jl")
23
  include("solution/methods/ProgressiveHedging/structs.jl")
24
  include("model/formulations/WanHob2016/structs.jl")
25
  include("solution/methods/TimeDecomposition/structs.jl")
26
- include("model/formulations/xxx2005/structs.jl")
27
 
28
  include("import/egret.jl")
29
  include("instance/read.jl")
30
  include("instance/migrate.jl")
 
 
31
  include("model/build.jl")
32
  include("model/formulations/ArrCon2000/ramp.jl")
33
  include("model/formulations/base/bus.jl")
@@ -48,7 +50,7 @@ include("model/formulations/MorLatRam2013/ramp.jl")
48
  include("model/formulations/MorLatRam2013/scosts.jl")
49
  include("model/formulations/PanGua2016/ramp.jl")
50
  include("model/formulations/WanHob2016/ramp.jl")
51
- include("model/formulations/xxx2005/powertrajectories.jl")
52
  include("model/jumpext.jl")
53
  include("solution/fix.jl")
54
  include("solution/methods/XavQiuWanThi2019/enforce.jl")
@@ -76,4 +78,16 @@ include("lmp/conventional.jl")
76
  include("lmp/aelmp.jl")
77
  include("market/market.jl")
78
 
 
 
 
 
 
 
 
 
 
 
 
 
79
  end
 
23
  include("solution/methods/ProgressiveHedging/structs.jl")
24
  include("model/formulations/WanHob2016/structs.jl")
25
  include("solution/methods/TimeDecomposition/structs.jl")
26
+ include("model/formulations/ArrCon2004/structs.jl")
27
 
28
  include("import/egret.jl")
29
  include("instance/read.jl")
30
  include("instance/migrate.jl")
31
+ include("instance/modify.jl")
32
+ include("instance/subhourly.jl")
33
  include("model/build.jl")
34
  include("model/formulations/ArrCon2000/ramp.jl")
35
  include("model/formulations/base/bus.jl")
 
50
  include("model/formulations/MorLatRam2013/scosts.jl")
51
  include("model/formulations/PanGua2016/ramp.jl")
52
  include("model/formulations/WanHob2016/ramp.jl")
53
+ include("model/formulations/ArrCon2004/powertrajectories.jl")
54
  include("model/jumpext.jl")
55
  include("solution/fix.jl")
56
  include("solution/methods/XavQiuWanThi2019/enforce.jl")
 
78
  include("lmp/aelmp.jl")
79
  include("market/market.jl")
80
 
81
+ const xxx2005 = ArrCon2004
82
+
83
+ using .Modify: add_trajectory_curves_to_source_data, modify_min_uptime_in_source_data
84
+ export add_trajectory_curves_to_source_data, modify_min_uptime_in_source_data
85
+
86
+ export convert_to_subhourly
87
+
88
+ # Provide the file-path based conversion API from `src/instance/subhourly.jl`,
89
+ # without importing/overwriting the existing instance-object methods.
90
+ convert_to_subhourly(instance_path::AbstractString, next_day_path::AbstractString) =
91
+ Subhourly.convert_to_subhourly(instance_path, next_day_path)
92
+
93
  end
UnitCommitment_Trajectory_Test/src/import/egret.jl CHANGED
@@ -2,7 +2,7 @@
2
  # Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
3
  # Released under the modified BSD license. See COPYING.md for more details.
4
 
5
- using DataStructures, JSON, CodecZlib
6
 
7
  """
8
 
 
2
  # Copyright (C) 2020, UChicago Argonne, LLC. All rights reserved.
3
  # Released under the modified BSD license. See COPYING.md for more details.
4
 
5
+ using DataStructures, JSON, GZip
6
 
7
  """
8
 
UnitCommitment_Trajectory_Test/src/instance/modify.jl ADDED
@@ -0,0 +1,242 @@
1
+ module Modify
2
+
3
+ using JSON
4
+ using CodecZlib
5
+
6
+ export add_trajectory_curves_to_source_data, modify_min_uptime_in_source_data
7
+
8
+ function read_json_maybe_gz(path::String)
9
+ open(path) do io
10
+ if endswith(path, ".gz")
11
+ decompressor = GzipDecompressorStream(io)
12
+ return JSON.parse(decompressor)
13
+ end
14
+ return JSON.parse(io)
15
+ end
16
+ end
17
+
18
+ function write_json_pretty(path::String, json_data)
19
+ open(path, "w") do f
20
+ JSON.print(f, json_data, 4)
21
+ end
22
+ end
23
+
24
+ normalize_top_ratio(top_pct::Float64) = clamp(top_pct <= 1.0 ? top_pct : (top_pct / 100.0), 0.0, 1.0)
25
+
26
+ function get_eligible_thermal_units(generators)::Vector{String}
27
+ return filter(
28
+ u -> haskey(generators[u], "Minimum uptime (h)") &&
29
+ haskey(generators[u], "Production cost curve (MW)"),
30
+ collect(keys(generators)),
31
+ )
32
+ end
33
+
34
+ function build_unit_metric_dicts(generators, unit_names::Vector{String})
35
+ uptime_dict = Dict{String, Float64}()
36
+ pmax_dict = Dict{String, Float64}()
37
+ pmin_dict = Dict{String, Float64}()
38
+
39
+ for u in unit_names
40
+ curve_mw = generators[u]["Production cost curve (MW)"]
41
+ uptime_dict[u] = Float64(get(generators[u], "Minimum uptime (h)", 0.0))
42
+ pmin_dict[u] = Float64(curve_mw[1])
43
+ pmax_dict[u] = Float64(curve_mw[end])
44
+ end
45
+ return uptime_dict, pmax_dict, pmin_dict
46
+ end
47
+
48
+ function sort_by_uptime_then_pmax(unit_names::Vector{String}, uptime_dict, pmax_dict)
49
+ return sort(copy(unit_names), by = u -> (uptime_dict[u], pmax_dict[u]), rev = true)
50
+ end
51
+
52
+ function sort_by_pmax_then_uptime(unit_names::Vector{String}, uptime_dict, pmax_dict)
53
+ return sort(copy(unit_names), by = u -> (pmax_dict[u], uptime_dict[u]), rev = true)
54
+ end
55
+
56
+ function top_set(sorted_units::Vector{String}, ratio::Float64)
57
+ n_total = length(sorted_units)
58
+ n_total == 0 && return Set{String}()
59
+ n_top = max(1, ceil(Int, n_total * ratio))
60
+ return Set(sorted_units[1:n_top])
61
+ end
62
+
63
+ function build_four_categories(unit_order::Vector{String}, uptime_sorted::Vector{String}, pmax_sorted::Vector{String})
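+ # Category scheme: cat1 = top-10%-uptime ∩ top-10%-Pmax; cat2 = (top-20% ∩ top-20%) \ cat1;
+ # cat3 = (top-50% ∩ top-50%) \ (cat1 ∪ cat2); cat4 = everything else.
+ # The returned vectors preserve the ordering given by `unit_order`.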
64
+ top_uptime_10 = top_set(uptime_sorted, 0.10)
65
+ top_uptime_20 = top_set(uptime_sorted, 0.20)
66
+ top_uptime_50 = top_set(uptime_sorted, 0.50)
67
+
68
+ top_pmax_10 = top_set(pmax_sorted, 0.10)
69
+ top_pmax_20 = top_set(pmax_sorted, 0.20)
70
+ top_pmax_50 = top_set(pmax_sorted, 0.50)
71
+
72
+ cat1_set = intersect(top_uptime_10, top_pmax_10)
73
+ cat2_set = setdiff(intersect(top_uptime_20, top_pmax_20), cat1_set)
74
+ cat3_set = setdiff(intersect(top_uptime_50, top_pmax_50), union(cat1_set, cat2_set))
75
+ cat4_set = setdiff(Set(unit_order), union(cat1_set, cat2_set, cat3_set))
76
+
77
+ category1 = [u for u in unit_order if u in cat1_set]
78
+ category2 = [u for u in unit_order if u in cat2_set]
79
+ category3 = [u for u in unit_order if u in cat3_set]
80
+ category4 = [u for u in unit_order if u in cat4_set]
81
+
82
+ return category1, category2, category3, category4
83
+ end
84
+
85
+ function add_trajectory_curves_to_source_data(
+     json_path::String;
+     top_pct::Float64 = 10.0,
+     output_path::String = replace(json_path, ".json" => "-part1.json"),
+ )
+     json_data = read_json_maybe_gz(json_path)
+ 
+     generators = json_data["Generators"]
+     thermal_names = get_eligible_thermal_units(generators)
+     n_total = length(thermal_names)
+ 
+     n_total == 0 && begin
+         println("── Part 1: no eligible units found; writing original data unchanged ──")
+         write_json_pretty(output_path, json_data)
+         return output_path
+     end
+ 
+     uptime_dict, pmax_dict, pmin_dict = build_unit_metric_dicts(generators, thermal_names)
+ 
+     top_ratio = normalize_top_ratio(top_pct)
+     top_count = min(n_total, max(1, ceil(Int, n_total * top_ratio)))
+     threshold = 10.0
+ 
+     println("── Part 1: unit screening and constraint insertion ──")
+     println(" Total thermal units: $n_total")
+     println(" Top X% parameter: $top_pct (top count: $top_count)")
+     println(" Pmax lower-bound threshold: $threshold")
+ 
+     uptime_sorted = sort_by_uptime_then_pmax(thermal_names, uptime_dict, pmax_dict)
+     pmax_sorted = sort_by_pmax_then_uptime(thermal_names, uptime_dict, pmax_dict)
+ 
+     top_uptime_set = Set(uptime_sorted[1:top_count])
+     top_pmax_set = Set(pmax_sorted[1:top_count])
+ 
+     qualified_units = String[]
+     disqualified_units = String[]
+ 
+     for u in uptime_sorted
+         if (u in top_uptime_set) && (u in top_pmax_set) && (pmax_dict[u] > threshold)
+             push!(qualified_units, u)
+         else
+             push!(disqualified_units, u)
+         end
+     end
+ 
+     reordered_units = vcat(qualified_units, disqualified_units)
+ 
+     println("\n── Units selected for trajectory curves (in both top sets and Pmax > $threshold) ──")
+     for u in qualified_units
+         uptime = uptime_dict[u]
+         pmax = pmax_dict[u]
+         pmin = pmin_dict[u]
+ 
+         generators[u]["Startup curve (MW)"] = [pmin / 2.0, pmin]
+         generators[u]["Shutdown curve (MW)"] = [pmin, pmin / 2.0]
+ 
+         println(" [curve written] $(rpad(u,10)) Uptime=$uptime Pmax=$(round(pmax, digits=2)) Pmin=$(round(pmin, digits=2))")
+     end
+ 
+     println("\n── Units that failed the screening (first few shown) ──")
+     for u in disqualified_units[1:min(5, length(disqualified_units))]
+         uptime = uptime_dict[u]
+         pmax = pmax_dict[u]
+         in_top_uptime = u in top_uptime_set
+         in_top_pmax = u in top_pmax_set
+         println(" [not selected] $(rpad(u,10)) Uptime=$uptime Pmax=$(round(pmax, digits=2)) TopUptime=$in_top_uptime TopPmax=$in_top_pmax")
+     end
+ 
+     json_data["_sorted_thermal_units"] = reordered_units
+ 
+     write_json_pretty(output_path, json_data)
+     println("\nPart 1 finished → output saved to: $output_path")
+ 
+     return output_path
+ end
+ 
+ function modify_min_uptime_in_source_data(
+     json_v1_path::String;
+     output_path::String = replace(json_v1_path, "-part1.json" => "-part2.json"),
+ )
+     json_data = read_json_maybe_gz(json_v1_path)
+     generators = json_data["Generators"]
+ 
+     sorted_units = get_eligible_thermal_units(generators)
+     n_total = length(sorted_units)
+ 
+     n_total == 0 && begin
+         println("── Part 2: no eligible units found; writing original data unchanged ──")
+         if haskey(json_data, "_sorted_thermal_units")
+             delete!(json_data, "_sorted_thermal_units")
+         end
+         write_json_pretty(output_path, json_data)
+         return output_path
+     end
+ 
+     uptime_dict, pmax_dict, _ = build_unit_metric_dicts(generators, sorted_units)
+     uptime_sorted = sort_by_uptime_then_pmax(sorted_units, uptime_dict, pmax_dict)
+     pmax_sorted = sort_by_pmax_then_uptime(sorted_units, uptime_dict, pmax_dict)
+ 
+     category1, category2, category3, category4 =
+         build_four_categories(uptime_sorted, uptime_sorted, pmax_sorted)
+ 
+     println("\n── Part 2: categorized uptime / downtime modification ──")
+     println(" Total units: $n_total")
+     println(" Category 1 (top10%∩top10%): $(length(category1))")
+     println(" Category 2 (top20%∩top20% \\ cat1): $(length(category2))")
+     println(" Category 3 (top50%∩top50% \\ cat1,2): $(length(category3))")
+     println(" Category 4 (remaining): $(length(category4))")
+ 
+     modified_cat1 = String[]
+     modified_cat2 = String[]
+     modified_cat3 = String[]
+ 
+     function apply_uptime_downtime_multipliers!(u::String, up_mult::Int, down_mult::Int, tag::String)
+         old_up = get(generators[u], "Minimum uptime (h)", nothing)
+         old_down = get(generators[u], "Minimum downtime (h)", nothing)
+ 
+         if old_up !== nothing
+             generators[u]["Minimum uptime (h)"] = old_up * up_mult
+         end
+         if old_down !== nothing
+             generators[u]["Minimum downtime (h)"] = old_down * down_mult
+         end
+ 
+         println("[$tag] $u uptime: $old_up → $(get(generators[u], "Minimum uptime (h)", old_up)) downtime: $old_down → $(get(generators[u], "Minimum downtime (h)", old_down))")
+     end
+ 
+     for u in category1
+         apply_uptime_downtime_multipliers!(u, 4, 3, "Category 1")
+         push!(modified_cat1, u)
+     end
+ 
+     for u in category2
+         apply_uptime_downtime_multipliers!(u, 3, 2, "Category 2")
+         push!(modified_cat2, u)
+     end
+ 
+     for u in category3
+         apply_uptime_downtime_multipliers!(u, 2, 2, "Category 3")
+         push!(modified_cat3, u)
+     end
+ 
+     if haskey(json_data, "_sorted_thermal_units")
+         delete!(json_data, "_sorted_thermal_units")
+     end
+ 
+     write_json_pretty(output_path, json_data)
+ 
+     println("\nPart 2 finished → output saved to: $output_path")
+     println(" Category 1 modified (uptime×4, downtime×3): $(length(modified_cat1)) units")
+     println(" Category 2 modified (uptime×3, downtime×2): $(length(modified_cat2)) units")
+     println(" Category 3 modified (uptime×2, downtime×2): $(length(modified_cat3)) units")
+     println(" Category 4 unmodified: $(length(category4)) units")
+ 
+     return output_path
+ end
+ 
+ end # module
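For orientation, the two entry points above chain as follows. This is a minimal sketch with illustrative output paths; the keyword values mirror the call convention used in `test/test_instance_modification.jl` further down (`top_pct = 0.10` selects the top 10%):

```julia
using UnitCommitment

# Part 1: write startup/shutdown trajectories into the selected large units.
part1_path = UnitCommitment.add_trajectory_curves_to_source_data(
    "testdata/case2383wp/2017-07-27.json.gz";
    top_pct = 0.10,
    output_path = "case2383wp-part1.json",
)

# Part 2: scale minimum uptime/downtime by category (×4/×3, ×3/×2, ×2/×2).
part2_path = UnitCommitment.modify_min_uptime_in_source_data(
    part1_path;
    output_path = "case2383wp-part2.json",
)
```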
UnitCommitment_Trajectory_Test/src/instance/read.jl CHANGED
@@ -6,7 +6,7 @@
  using Printf          # For formatted string printing
  using JSON            # For parsing JSON files
  using DataStructures  # For OrderedDict and other data structures
- using CodecZlib       # For reading gzipped files
+ using GZip            # For reading gzipped files
  import Base: getindex, time # Import specific functions from Base module
 
  # Define constant URL for downloading benchmark instances
@@ -139,7 +139,7 @@ Helper function to read a single scenario from a file path
  function _read_scenario(path::String)::UnitCommitmentScenario
      # Check if file is gzipped and read accordingly
      if endswith(path, ".gz")
-         scenario = _read(GzipDecompressorStream(open(path)))
+         scenario = _read(gzopen(path))
      elseif endswith(path, ".json")
          scenario = _read(open(path))
      else
@@ -164,7 +164,7 @@ Helper function to read JSON from a file path (handles both .json and .gz files)
  function _read_json(path::String)::OrderedDict
      # Open file based on extension (gzipped or plain JSON)
      if endswith(path, ".gz")
-         file = GzipDecompressorStream(open(path))
+         file = GZip.gzopen(path)
      else
          file = open(path)
      end
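The switch from CodecZlib's `GzipDecompressorStream` to `GZip.gzopen` can be exercised on its own. A minimal sketch, assuming GZip.jl and JSON.jl are available and the command is run from the code root:

```julia
using GZip, JSON

# GZip.gzopen yields an IO stream that JSON.parse can consume directly;
# the do-block form closes the stream automatically.
data = GZip.gzopen("instances/matpower/case14/2017-01-01.json.gz") do io
    JSON.parse(io)
end
println("Generators in instance: ", length(data["Generators"]))
```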
UnitCommitment_Trajectory_Test/src/instance/subhourly.jl ADDED
@@ -0,0 +1,271 @@
+ """
+ Subhourly
+ 
+ Module for converting UC instances from 40-minute time periods (36 periods/day)
+ to 20-minute time periods (72 periods/day).
+ """
+ module Subhourly
+ 
+ using Dates
+ import ..UnitCommitment
+ 
+ export convert_to_subhourly, interpolate_values, repeat_values
+ 
+ """
+     convert_to_subhourly(instance_path::AbstractString, next_day_path::AbstractString)
+ 
+ Convert a UC instance from 36 time periods (40 minutes each) to 72 time periods (20 minutes each).
+ 
+ # Arguments
+ - `instance_path::AbstractString`: Path to the current day's instance
+ - `next_day_path::AbstractString`: Path to the next day's instance (needed for interpolation at boundaries)
+ 
+ # Returns
+ - Modified `UnitCommitmentInstance` with 72 time periods
+ 
+ # Details
+ The function performs the following transformations:
+ 1. **Interpolated quantities** (demands, profiled unit outputs): Linear interpolation using next day's first period
+ 2. **Repeated quantities** (max_power, min_power, flow limits): Each value repeated twice
+ 3. **Ramping capacities**: Halved (since time periods are half the duration)
+ 4. **Time period counts** (min_uptime, min_downtime, startup delay): Doubled
+ 5. **Power production costs (not the fixed startup costs)**: Halved
+ 
+ # Example
+ ```julia
+ current_instance = convert_to_subhourly(
+     "matpower/case14/2017-01-01",
+     "matpower/case14/2017-01-02"
+ )
+ println("Total time periods: ", current_instance.time)  # Should be 72
+ ```
+ """
+ function convert_to_subhourly(instance_path::AbstractString, next_day_path::AbstractString)
+     # Read the current-day and next-day instances
+     instance = read_instance(instance_path)
+     next_instance = read_instance(next_day_path)
+ 
+     return convert_to_subhourly(instance, next_instance)
+ end
+ 
+ """
+     convert_to_subhourly(instance::UnitCommitment.UnitCommitmentInstance,
+                          next_instance::UnitCommitment.UnitCommitmentInstance)
+ 
+ Convert a UC instance from 36 time periods to 72 time periods using instance objects.
+ """
+ function convert_to_subhourly(instance, next_instance)
+     sc_current = instance.scenarios[1]
+     sc_next = next_instance.scenarios[1]
+ 
+     # Process buses - interpolate loads
+     for i in 1:length(sc_current.buses)
+         load_current = sc_current.buses[i].load
+         load_next_first = sc_next.buses[i].load[1]
+         sc_current.buses[i].load = interpolate_values(load_current, load_next_first)
+     end
+ 
+     # Process thermal units
+     for i in 1:length(sc_current.thermal_units)
+         unit = sc_current.thermal_units[i]
+         unit_next = sc_next.thermal_units[i]
+ 
+         # Repeat time-dependent vector quantities
+         unit.max_power = repeat_values(unit.max_power)
+         unit.min_power = repeat_values(unit.min_power)
+         unit.must_run = repeat_values(unit.must_run)
+         unit.min_power_cost = repeat_values(unit.min_power_cost)
+ 
+         # Process cost segments: expand to the finer grid, then halve the per-period cost
+         for j in 1:length(unit.cost_segments)
+             unit.cost_segments[j].cost = interpolate_values(unit.cost_segments[j].cost, unit.cost_segments[j].cost[1]) ./ 2.0
+             unit.cost_segments[j].mw = interpolate_values(unit.cost_segments[j].mw, unit.cost_segments[j].mw[1])
+         end
+ 
+         # Repeat commitment status
+         unit.commitment_status = repeat_values(unit.commitment_status)
+ 
+         # Halve ramping capacities per time period (since time periods are half the duration)
+         unit.ramp_up_limit = unit.ramp_up_limit / 2.0
+         unit.ramp_down_limit = unit.ramp_down_limit / 2.0
+         # Note: startup_limit and shutdown_limit are NOT modified (they are power limits, not per-period rates)
+ 
+         # Double time period counts
+         unit.min_uptime = unit.min_uptime * 2
+         unit.min_downtime = unit.min_downtime * 2
+ 
+         # Double startup delays in startup categories
+         for startup_cat in unit.startup_categories
+             startup_cat.delay = startup_cat.delay * 2
+         end
+     end
+ 
+     # Process transmission lines
+     for i in 1:length(sc_current.lines)
+         line = sc_current.lines[i]
+ 
+         # Repeat flow limits
+         line.normal_flow_limit = repeat_values(line.normal_flow_limit)
+         line.emergency_flow_limit = repeat_values(line.emergency_flow_limit)
+         line.flow_limit_penalty = repeat_values(line.flow_limit_penalty)
+     end
+ 
+     # Process reserves - interpolate requirements
+     for i in 1:length(sc_current.reserves)
+         reserve = sc_current.reserves[i]
+         reserve_next = sc_next.reserves[i]
+ 
+         reserve.amount = interpolate_values(reserve.amount, reserve_next.amount[1])
+     end
+ 
+     # Process price-sensitive loads - interpolate demand and revenue
+     for i in 1:length(sc_current.price_sensitive_loads)
+         psl = sc_current.price_sensitive_loads[i]
+         psl_next = sc_next.price_sensitive_loads[i]
+ 
+         psl.demand = interpolate_values(psl.demand, psl_next.demand[1])
+         psl.revenue = interpolate_values(psl.revenue, psl_next.revenue[1])
+     end
+ 
+     # Process profiled units (renewables) - interpolate profiles
+     for i in 1:length(sc_current.profiled_units)
+         pu = sc_current.profiled_units[i]
+         pu_next = sc_next.profiled_units[i]
+ 
+         pu.min_power = interpolate_values(pu.min_power, pu_next.min_power[1])
+         pu.max_power = interpolate_values(pu.max_power, pu_next.max_power[1])
+         pu.cost = interpolate_values(pu.cost, pu_next.cost[1])
+     end
+ 
+     # Process storage units
+     for i in 1:length(sc_current.storage_units)
+         su = sc_current.storage_units[i]
+         su_next = sc_next.storage_units[i]
+ 
+         # Interpolate storage levels
+         su.min_level = interpolate_values(su.min_level, su_next.min_level[1])
+         su.max_level = interpolate_values(su.max_level, su_next.max_level[1])
+ 
+         # Repeat or interpolate the remaining storage parameters
+         su.simultaneous_charge_and_discharge = repeat_values(su.simultaneous_charge_and_discharge)
+         su.charge_cost = interpolate_values(su.charge_cost, su_next.charge_cost[1])
+         su.discharge_cost = interpolate_values(su.discharge_cost, su_next.discharge_cost[1])
+         su.charge_efficiency = repeat_values(su.charge_efficiency)
+         su.discharge_efficiency = repeat_values(su.discharge_efficiency)
+         su.loss_factor = repeat_values(su.loss_factor)
+ 
+         # Repeat rate limits
+         su.min_charge_rate = repeat_values(su.min_charge_rate)
+         su.max_charge_rate = repeat_values(su.max_charge_rate)
+         su.min_discharge_rate = repeat_values(su.min_discharge_rate)
+         su.max_discharge_rate = repeat_values(su.max_discharge_rate)
+     end
+ 
+     # Process scenario-level fields
+     sc_current.power_balance_penalty = repeat_values(sc_current.power_balance_penalty)
+ 
+     # Update time count
+     sc_current.time = 72
+     instance.time = 72
+ 
+     return instance
+ end
+ 
+ """
+     interpolate_values(values::Vector{T}, next_first::T) where T
+ 
+ Interpolate a vector of 36 values to 72 values using linear interpolation.
+ 
+ # Arguments
+ - `values::Vector{T}`: Original 36-element vector
+ - `next_first::T`: First value from the next day (for boundary interpolation)
+ 
+ # Returns
+ - 72-element vector with interpolated values
+ 
+ # Details
+ For each pair of consecutive values, inserts an interpolated midpoint.
+ Uses `next_first` to interpolate the value after the last period.
+ 
+ # Example
+ ```julia
+ interpolate_values([10.0, 20.0, 30.0], 40.0)
+ # Returns: [10.0, 15.0, 20.0, 25.0, 30.0, 35.0]
+ ```
+ """
+ function interpolate_values(values::Vector{T}, next_first::T) where T
+     n = length(values)
+     result = Vector{T}(undef, 2 * n)
+ 
+     for i in 1:n-1
+         result[2*i-1] = values[i]
+         result[2*i] = (values[i] + values[i+1]) / 2
+     end
+ 
+     # Handle the last period using next day's first value
+     result[2*n-1] = values[n]
+     result[2*n] = (values[n] + next_first) / 2
+ 
+     return result
+ end
+ 
+ """
+     repeat_values(values::Vector{T}) where T
+ 
+ Repeat each element of a 36-element vector to create a 72-element vector.
+ 
+ # Arguments
+ - `values::Vector{T}`: Original 36-element vector
+ 
+ # Returns
+ - 72-element vector with each value repeated twice
+ 
+ # Example
+ ```julia
+ repeat_values([1, 2, 3])
+ # Returns: [1, 1, 2, 2, 3, 3]
+ ```
+ """
+ function repeat_values(values::Vector{T}) where T
+     n = length(values)
+     result = Vector{T}(undef, 2 * n)
+ 
+     for i in 1:n
+         result[2*i-1] = values[i]
+         result[2*i] = values[i]
+     end
+ 
+     return result
+ end
+ 
+ """
+     read_instance(path::AbstractString)
+ 
+ Read a UC instance from a local file or benchmark.
+ 
+ # Arguments
+ - `path::AbstractString`: Path to instance file or benchmark name
+ 
+ # Returns
+ - `UnitCommitmentInstance` object
+ """
+ function read_instance(path::AbstractString)
+     # Check if the path exists locally, trying common extensions
+     if isfile(path)
+         return UnitCommitment.read(path)
+     elseif isfile(path * ".json.gz")
+         return UnitCommitment.read(path * ".json.gz")
+     elseif isfile(path * ".json")
+         return UnitCommitment.read(path * ".json")
+     else
+         # Fall back to reading as a named benchmark
+         return UnitCommitment.read_benchmark(path)
+     end
+ end
+ 
+ end # module
+ 
UnitCommitment_Trajectory_Test/src/model/formulations/{xxx2005 → ArrCon2004}/powertrajectories.jl RENAMED
@@ -2,7 +2,7 @@ function _add_power_trajectory_eqs!(
      model::JuMP.Model,
      g::ThermalUnit,
      formulation_prod_vars::Gar1962.ProdVars,
-     formulation_power_trajectories::xxx2005.PowerTrajectories,
+     formulation_power_trajectories::ArrCon2004.PowerTrajectories,
      formulation_status_vars::Gar1962.StatusVars,
      sc::UnitCommitmentScenario,
  )::Nothing
@@ -165,4 +165,4 @@
          )
      end
      return
- end
+ end
UnitCommitment_Trajectory_Test/src/model/formulations/{xxx2005 → ArrCon2004}/structs.jl RENAMED
@@ -1,7 +1,7 @@
- module xxx2005
+ module ArrCon2004
 
  import ..PowerTrajectoriesFormulation
 
  struct PowerTrajectories <: PowerTrajectoriesFormulation end
 
- end
+ end
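After the rename, the trajectory formulation is selected exactly as `build_and_solve` does in the test script below:

```julia
# Selecting the power-trajectory formulation by name when building a model.
formulation = UnitCommitment.Formulation(
    power_trajectories = UnitCommitment.ArrCon2004.PowerTrajectories(),
)
model = UnitCommitment.build_model(
    instance = instance,          # a previously read UnitCommitmentInstance
    optimizer = HiGHS.Optimizer,
    formulation = formulation,
)
```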
UnitCommitment_Trajectory_Test/src/model/formulations/Gar1962/prod.jl CHANGED
@@ -59,7 +59,7 @@ function _add_production_limit_eqs!(
      model::JuMP.Model,
      g::ThermalUnit,
      formulation_prod_vars::Gar1962.ProdVars,
-     formulation_power_trajectories::xxx2005.PowerTrajectories,
+     formulation_power_trajectories::ArrCon2004.PowerTrajectories,
      sc::UnitCommitmentScenario
  )::Nothing
      if isempty(g.startup_curve) || isempty(g.shutdown_curve)
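The guard at the end of this hunk means the trajectory-aware production limits only activate for units whose source data carries both curves. Illustratively (the unit key `"g1"` and the value `Pmin = 120.0` MW are made up), Part 1 of the preprocessing writes exactly such two-period curves:

```julia
# Hypothetical sketch of the data shape that makes the guard pass.
generators = Dict("g1" => Dict{String,Any}())
generators["g1"]["Startup curve (MW)"]  = [60.0, 120.0]   # [Pmin/2, Pmin]
generators["g1"]["Shutdown curve (MW)"] = [120.0, 60.0]   # [Pmin, Pmin/2]
```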
UnitCommitment_Trajectory_Test/test/test_instance_modification.jl ADDED
@@ -0,0 +1,305 @@
+ using UnitCommitment
+ using HiGHS
+ import JuMP
+ 
+ CASES = [
+     ("testdata/case2383wp/2017-07-27.json.gz", "case2383wp"),
+     # ("testdata/case2736sp/2017-07-21.json.gz", "case2736sp")
+ ]
+ 
+ function build_and_solve(instance, use_traj)
+     formulation = use_traj ?
+         UnitCommitment.Formulation(
+             power_trajectories = UnitCommitment.ArrCon2004.PowerTrajectories()
+         ) :
+         UnitCommitment.Formulation()
+     model = UnitCommitment.build_model(
+         instance = instance,
+         optimizer = HiGHS.Optimizer,
+         formulation = formulation,
+     )
+     JuMP.set_silent(model)
+     JuMP.optimize!(model)
+     return model
+ end
+ 
+ function get_actual_power(model, g, t, T, prev_p)
+     uname = g.name
+     UD = length(g.startup_curve)
+     DD = length(g.shutdown_curve)
+     startup_curve = g.startup_curve
+     shutdown_curve = g.shutdown_curve
+     Pmin = g.min_power[t]
+     Pmax = g.max_power[t]
+     RU = g.ramp_up_limit
+ 
+     v = round(JuMP.value(model[:is_on][uname, t]))
+     y = round(JuMP.value(model[:switch_on][uname, t]))
+     pa = JuMP.value(model[:prod_above]["s1", uname, t])
+ 
+     in_su = UD > 0 && any(
+         round(JuMP.value(model[:switch_on][uname, t-i+1])) == 1.0
+         for i in 1:UD if t-i+1 >= 1)
+     in_sd = DD > 0 && any(
+         round(JuMP.value(model[:switch_off][uname, t+i])) == 1.0
+         for i in 1:DD if t+i <= T)
+ 
+     p = 0.0
+     if in_su
+         for i in 1:UD
+             if t-i+1 >= 1 && round(JuMP.value(model[:switch_on][uname, t-i+1])) == 1.0
+                 p = startup_curve[i]; break
+             end
+         end
+     elseif in_sd
+         for i in 1:DD
+             if t+i <= T && round(JuMP.value(model[:switch_off][uname, t+i])) == 1.0
+                 p = shutdown_curve[DD-i+1]; break
+             end
+         end
+     elseif v > 0.5
+         p = pa + Pmin
+     end
+ 
+     status = if v == 0.0; "Offline"
+     elseif y == 1.0; "Startup_t1"
+     elseif in_su; "Startup_traj"
+     elseif in_sd; "Shutdown_traj"
+     else "Normal"
+     end
+ 
+     # Compute the power upper/lower bounds (ub, lb)
+     sum_y = sum(round(JuMP.value(model[:switch_on][uname, t-i+1])) for i in 1:UD if t-i+1 >= 1; init=0.0)
+     sum_z = DD > 0 ? sum(round(JuMP.value(model[:switch_off][uname, t+i])) for i in 1:DD if t+i <= T; init=0.0) : 0.0
+ 
+     ramp_ub = if t == 1
+         g.initial_power + RU
+     elseif sum_y > 0
+         Pmax
+     else
+         prev_p + RU
+     end
+ 
+     if v == 0.0 || (UD == 0 && DD == 0)
+         ub_val = ""
+         lb_val = ""
+     else
+         su_sum = sum(startup_curve[i] * round(JuMP.value(model[:switch_on][uname, t-i+1]))
+                      for i in 1:UD if t-i+1 >= 1; init=0.0)
+         sd_sum = DD > 0 ? sum(shutdown_curve[i] * round(JuMP.value(model[:switch_off][uname, t+DD-i+1]))
+                      for i in 1:DD if t+DD-i+1 >= 1 && t+DD-i+1 <= T; init=0.0) : 0.0
+         ub_val = min(su_sum + Pmax*(v-sum_y), sd_sum + Pmax*(v-sum_z), ramp_ub)
+         lb_val = max(Pmin*(v-sum_z-sum_y)+su_sum, Pmin*(v-sum_z-sum_y)+sd_sum)
+     end
+ 
+     return p, status, ub_val, lb_val
+ end
+ 
+ function export_combined_csv(
+     model_base, model_v1, model_v2,
+     instance_orig, instance_v1, instance_v2,
+     run_name, out_dir
+ )
+     sc_orig = instance_orig.scenarios[1]
+     sc_v1 = instance_v1.scenarios[1]
+     sc_v2 = instance_v2.scenarios[1]
+     T = instance_orig.time
+ 
+     unit_map_v1 = Dict(g.name => g for g in sc_v1.thermal_units)
+     unit_map_v2 = Dict(g.name => g for g in sc_v2.thermal_units)
+ 
+     rows = String[]
+     push!(rows,
+         "run_name,case,unit,t," *
+         "p_base,p_v1,p_v2," *
+         "ub_v1,lb_v1,ub_v2,lb_v2," *
+         "status_v1,status_v2," *
+         "pmin,pmax,startup_limit,shutdown_limit,UD,DD," *
+         "has_startup_v1,has_shutdown_v1,has_startup_v2,has_shutdown_v2," *
+         "min_uptime_orig,min_uptime_v2,has_curve"
+     )
+ 
+     case_name = split(run_name, "-")[1]
+ 
+     for g_orig in sc_orig.thermal_units
+         uname = g_orig.name
+         g_v1 = unit_map_v1[uname]
+         g_v2 = unit_map_v2[uname]
+ 
+         has_curve = !isempty(g_v1.startup_curve)
+         UD = length(g_v1.startup_curve)
+         DD = length(g_v1.shutdown_curve)
+         startup_limit = g_v1.startup_limit
+         shutdown_limit = g_v1.shutdown_limit
+         min_uptime_orig = g_orig.min_uptime
+         min_uptime_v2 = g_v2.min_uptime
+ 
+         # Check whether the unit actually starts up / shuts down anywhere in the horizon
+         su_v1 = has_curve && any(round(JuMP.value(model_v1[:switch_on][uname, t])) == 1.0 for t in 1:T)
+         sd_v1 = has_curve && any(round(JuMP.value(model_v1[:switch_off][uname, t])) == 1.0 for t in 1:T)
+         su_v2 = has_curve && any(round(JuMP.value(model_v2[:switch_on][uname, t])) == 1.0 for t in 1:T)
+         sd_v2 = has_curve && any(round(JuMP.value(model_v2[:switch_off][uname, t])) == 1.0 for t in 1:T)
+ 
+         prev_p_v1 = 0.0
+         prev_p_v2 = 0.0
+ 
+         for t in 1:T
+             Pmin = g_orig.min_power[t]
+             Pmax = g_orig.max_power[t]
+ 
+             # base model
+             pa0 = JuMP.value(model_base[:prod_above]["s1", uname, t])
+             v0 = round(JuMP.value(model_base[:is_on][uname, t]))
+             p_base = pa0 + Pmin * v0
+ 
+             # v1 model
+             p_v1, status_v1, ub_v1, lb_v1 = get_actual_power(model_v1, g_v1, t, T, prev_p_v1)
+             prev_p_v1 = p_v1
+ 
+             # v2 model
+             p_v2, status_v2, ub_v2, lb_v2 = get_actual_power(model_v2, g_v2, t, T, prev_p_v2)
+             prev_p_v2 = p_v2
+ 
+             push!(rows,
+                 "$run_name,$case_name,$uname,$t," *
+                 "$(round(p_base,digits=4)),$(round(p_v1,digits=4)),$(round(p_v2,digits=4))," *
+                 "$ub_v1,$lb_v1,$ub_v2,$lb_v2," *
+                 "$status_v1,$status_v2," *
+                 "$(round(Pmin,digits=4)),$(round(Pmax,digits=4))," *
+                 "$(round(startup_limit,digits=4)),$(round(shutdown_limit,digits=4))," *
+                 "$UD,$DD,$su_v1,$sd_v1,$su_v2,$sd_v2," *
+                 "$min_uptime_orig,$min_uptime_v2,$has_curve"
+             )
+         end
+     end
+ 
+     fname = joinpath(out_dir, "$(run_name)_combined.csv")
+     open(fname, "w") do f
+         for row in rows
+             println(f, row)
+         end
+     end
+     println(" Saved: $fname")
+ 
+     # ---------- Physical-plausibility validation statistics ----------
+     total_curves = 0
+     v1_su_count, v1_sd_count = 0, 0
+     v2_su_count, v2_sd_count = 0, 0
+ 
+     for g_v1 in sc_v1.thermal_units
+         uname = g_v1.name
+         if !isempty(g_v1.startup_curve)
+             total_curves += 1
+             if any(round(JuMP.value(model_v1[:switch_on][uname, t])) == 1.0 for t in 1:T) v1_su_count += 1 end
+             if any(round(JuMP.value(model_v1[:switch_off][uname, t])) == 1.0 for t in 1:T) v1_sd_count += 1 end
+             if any(round(JuMP.value(model_v2[:switch_on][uname, t])) == 1.0 for t in 1:T) v2_su_count += 1 end
+             if any(round(JuMP.value(model_v2[:switch_off][uname, t])) == 1.0 for t in 1:T) v2_sd_count += 1 end
+         end
+     end
+ 
+     println("\n [Plausibility check] Startup/shutdown trajectory activation:")
+     println(" Units with trajectory curves: $total_curves")
+     println(" v1 actual startups: $(v1_su_count)/$total_curves | actual shutdowns: $(v1_sd_count)/$total_curves")
+     println(" v2 actual startups: $(v2_su_count)/$total_curves | actual shutdowns: $(v2_sd_count)/$total_curves")
+     if total_curves > 0 && v1_su_count == 0 && v2_su_count == 0
+         println(" Warning: curves were added, yet no curve-equipped unit actually started up or shut down.")
+     end
+ end
+ 
+ function export_summary_csv(results, run_name, case_name, out_dir)
+     fname = joinpath(out_dir, "$(run_name)_summary.csv")
+     open(fname, "w") do f
+         println(f, "case,model,objective,lower_bound,mip_gap_actual,solve_time,diff_pct")
+         for (tag, obj, lb, gap, stime, diff) in results
+             println(f, "$case_name,$tag,$(round(obj,digits=4)),$(round(lb,digits=4)),$(round(gap,digits=6)),$(round(stime,digits=4)),$(round(diff,digits=6))")
+         end
+     end
+     println(" Saved: $fname")
+ end
+ 
+ println("="^60)
+ 
+ # Master output directory
+ master_dir = "test"
+ mkpath(master_dir)
+ println(">> Starting tests")
+ 
+ for (json_path, case_name) in CASES
+     # Extract the date, e.g. 2017-01-01 (now taken from the instance file name)
+     date_str = split(basename(json_path), ".")[1]
+ 
+     # Build the unified run prefix, e.g. case2383wp-2017-01-01
+     run_name = "$(case_name)-$(date_str)"
+ 
+     # Outputs are stored under master_dir
+     out_dir = joinpath(master_dir, run_name)
+     mkpath(out_dir)
+ 
+     println("\n[Run: $run_name]")
+ 
+     # Read the original instance → solve the base model
+     instance_orig = UnitCommitment.read(json_path)
+     println(" [1/4] Solving base model...")
+     model_base = build_and_solve(instance_orig, false)
+ 
+     obj_base = JuMP.objective_value(model_base)
+     lb_base = JuMP.objective_bound(model_base)
+     time_base = JuMP.solve_time(model_base)
+     gap_base = abs(obj_base - lb_base) / max(1e-10, abs(obj_base))
+     println(" base obj: $(round(obj_base, digits=2)) | gap: $(round(gap_base*100, digits=4))% | solver_time: $(round(time_base, digits=2))s")
+ 
+     println("\n[2/4] Part 1 - adding startup/shutdown curves...")
+     json_v1_path = UnitCommitment.add_trajectory_curves_to_source_data(
+         json_path;
+         top_pct = 0.10,
+         output_path = joinpath(out_dir, "$(run_name)-part1.json"),
+     )
+     instance_v1 = UnitCommitment.read(json_v1_path)
+     println(" Solving traj_v1 model...")
+     model_v1 = build_and_solve(instance_v1, true)
+ 
+     obj_v1 = JuMP.objective_value(model_v1)
+     lb_v1 = JuMP.objective_bound(model_v1)
+     time_v1 = JuMP.solve_time(model_v1)
+     gap_v1 = abs(obj_v1 - lb_v1) / max(1e-10, abs(obj_v1))
+     diff_v1 = (obj_v1 - obj_base) / obj_base * 100
+     println(" v1 obj: $(round(obj_v1, digits=2)) diff=$(round(diff_v1, digits=4))% | gap: $(round(gap_v1*100, digits=4))% | solver_time: $(round(time_v1, digits=2))s")
+ 
+     println("\n [3/4] Part 2 - modifying minimum uptime...")
+     json_v2_path = UnitCommitment.modify_min_uptime_in_source_data(
+         json_v1_path;
+         output_path = joinpath(out_dir, "$(run_name)-part2.json"),
+     )
+     instance_v2 = UnitCommitment.read(json_v2_path)
+     println(" Solving traj_v2 model...")
+     model_v2 = build_and_solve(instance_v2, true)
+ 
+     obj_v2 = JuMP.objective_value(model_v2)
+     lb_v2 = JuMP.objective_bound(model_v2)
+     time_v2 = JuMP.solve_time(model_v2)
+     gap_v2 = abs(obj_v2 - lb_v2) / max(1e-10, abs(obj_v2))
+     diff_v2 = (obj_v2 - obj_base) / obj_base * 100
+     println(" v2 obj: $(round(obj_v2, digits=2)) diff=$(round(diff_v2, digits=4))% | gap: $(round(gap_v2*100, digits=4))% | solver_time: $(round(time_v2, digits=2))s")
+ 
+     println("\n ── Objective summary " * "─"^30)
+     println(" base : obj=$(round(obj_base, digits=2)), gap=$(round(gap_base*100, digits=4))%, solver_time=$(round(time_base, digits=2))s")
+     println(" v1 : obj=$(round(obj_v1, digits=2)), diff=$(round(diff_v1, digits=4))%, gap=$(round(gap_v1*100, digits=4))%, solver_time=$(round(time_v1, digits=2))s")
+     println(" v2 : obj=$(round(obj_v2, digits=2)), diff=$(round(diff_v2, digits=4))%, gap=$(round(gap_v2*100, digits=4))%, solver_time=$(round(time_v2, digits=2))s")
+ 
+     println("\n[4/4] Exporting CSV...")
+     export_combined_csv(
+         model_base, model_v1, model_v2,
+         instance_orig, instance_v1, instance_v2,
+         run_name, out_dir
+     )
+ 
+     results_to_export = [
+         ("base", obj_base, lb_base, gap_base, time_base, 0.0),
+         ("v1", obj_v1, lb_v1, gap_v1, time_v1, diff_v1),
+         ("v2", obj_v2, lb_v2, gap_v2, time_v2, diff_v2)
+     ]
+     export_summary_csv(results_to_export, run_name, case_name, out_dir)
+ end
+ 
+ println("\n" * "="^60)
+ println("Tests finished")
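One detail worth noting: the script recomputes the *actual* MIP gap from the objective and the solver's best bound rather than reading a solver attribute, which is what the `mip_gap_actual` column records. The metric reduces to:

```julia
# Gap metric used for base/v1/v2 above; the 1e-10 floor guards
# against division by zero when the objective is numerically zero.
mip_gap(obj, lb) = abs(obj - lb) / max(1e-10, abs(obj))

mip_gap(100.0, 99.5)   # → 0.005, printed as 0.5%
```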
UnitCommitment_Trajectory_Test/test_main.jl CHANGED
@@ -1,307 +1 @@
- using UnitCommitment
- using HiGHS
- import JuMP
- include("pmax-preprocessing.jl")
- 
- CASES = [
-     ("testdata/case2383wp/2017-01-01.json.gz", "case2383wp"),
-     # ("testdata/case2736sp/2017-07-21.json.gz", "case2736sp")
- ]
- 
- function build_and_solve(instance, use_traj)
-     formulation = use_traj ?
-         UnitCommitment.Formulation(
-             power_trajectories = UnitCommitment.xxx2005.PowerTrajectories()
-         ) :
-         UnitCommitment.Formulation()
-     model = UnitCommitment.build_model(
-         instance = instance,
-         optimizer = HiGHS.Optimizer,
-         formulation = formulation,
-     )
-     JuMP.set_silent(model)
-     JuMP.optimize!(model)
-     return model
- end
- 
- function get_actual_power(model, g, t, T, prev_p)
-     uname = g.name
-     UD = length(g.startup_curve)
-     DD = length(g.shutdown_curve)
-     startup_curve = g.startup_curve
-     shutdown_curve = g.shutdown_curve
-     Pmin = g.min_power[t]
-     Pmax = g.max_power[t]
-     RU = g.ramp_up_limit
- 
-     v = round(JuMP.value(model[:is_on][uname, t]))
-     y = round(JuMP.value(model[:switch_on][uname, t]))
-     pa = JuMP.value(model[:prod_above]["s1", uname, t])
- 
-     in_su = UD > 0 && any(
-         round(JuMP.value(model[:switch_on][uname, t-i+1])) == 1.0
-         for i in 1:UD if t-i+1 >= 1)
-     in_sd = DD > 0 && any(
-         round(JuMP.value(model[:switch_off][uname, t+i])) == 1.0
-         for i in 1:DD if t+i <= T)
- 
-     p = 0.0
-     if in_su
-         for i in 1:UD
-             if t-i+1 >= 1 && round(JuMP.value(model[:switch_on][uname, t-i+1])) == 1.0
-                 p = startup_curve[i]; break
-             end
-         end
-     elseif in_sd
-         for i in 1:DD
-             if t+i <= T && round(JuMP.value(model[:switch_off][uname, t+i])) == 1.0
-                 p = shutdown_curve[DD-i+1]; break
-             end
-         end
-     elseif v > 0.5
-         p = pa + Pmin
-     end
- 
-     status = if v == 0.0; "Offline"
-     elseif y == 1.0; "Startup_t1"
-     elseif in_su; "Startup_traj"
-     elseif in_sd; "Shutdown_traj"
-     else "Normal"
-     end
- 
-     # Compute the power upper/lower bounds (ub, lb)
-     sum_y = sum(round(JuMP.value(model[:switch_on][uname, t-i+1])) for i in 1:UD if t-i+1 >= 1; init=0.0)
-     sum_z = DD > 0 ? sum(round(JuMP.value(model[:switch_off][uname, t+i])) for i in 1:DD if t+i <= T; init=0.0) : 0.0
- 
-     ramp_ub = if t == 1
-         g.initial_power + RU
-     elseif sum_y > 0
-         Pmax
-     else
-         prev_p + RU
-     end
- 
-     if v == 0.0 || (UD == 0 && DD == 0)
-         ub_val = ""
-         lb_val = ""
-     else
-         su_sum = sum(startup_curve[i] * round(JuMP.value(model[:switch_on][uname, t-i+1]))
-                      for i in 1:UD if t-i+1 >= 1; init=0.0)
-         sd_sum = DD > 0 ? sum(shutdown_curve[i] * round(JuMP.value(model[:switch_off][uname, t+DD-i+1]))
-                      for i in 1:DD if t+DD-i+1 >= 1 && t+DD-i+1 <= T; init=0.0) : 0.0
-         ub_val = min(su_sum + Pmax*(v-sum_y), sd_sum + Pmax*(v-sum_z), ramp_ub)
-         lb_val = max(Pmin*(v-sum_z-sum_y)+su_sum, Pmin*(v-sum_z-sum_y)+sd_sum)
-     end
- 
-     return p, status, ub_val, lb_val
- end
- 
- function export_combined_csv(
-     model_base, model_v1, model_v2,
-     instance_orig, instance_v1, instance_v2,
-     run_name, out_dir
- )
-     sc_orig = instance_orig.scenarios[1]
-     sc_v1 = instance_v1.scenarios[1]
-     sc_v2 = instance_v2.scenarios[1]
-     T = instance_orig.time
- 
-     unit_map_v1 = Dict(g.name => g for g in sc_v1.thermal_units)
-     unit_map_v2 = Dict(g.name => g for g in sc_v2.thermal_units)
- 
-     rows = String[]
-     push!(rows,
-         "run_name,case,unit,t," *
-         "p_base,p_v1,p_v2," *
-         "ub_v1,lb_v1,ub_v2,lb_v2," *
-         "status_v1,status_v2," *
-         "pmin,pmax,startup_limit,shutdown_limit,UD,DD," *
-         "has_startup_v1,has_shutdown_v1,has_startup_v2,has_shutdown_v2," *
-         "min_uptime_orig,min_uptime_v2,has_curve"
-     )
- 
-     case_name = split(run_name, "-")[1]
- 
-     for g_orig in sc_orig.thermal_units
-         uname = g_orig.name
-         g_v1 = unit_map_v1[uname]
-         g_v2 = unit_map_v2[uname]
- 
-         has_curve = !isempty(g_v1.startup_curve)
-         UD = length(g_v1.startup_curve)
-         DD = length(g_v1.shutdown_curve)
-         startup_limit = g_v1.startup_limit
-         shutdown_limit = g_v1.shutdown_limit
-         min_uptime_orig = g_orig.min_uptime
-         min_uptime_v2 = g_v2.min_uptime
- 
-         # Check whether the unit actually starts up / shuts down anywhere in the horizon
-         su_v1 = has_curve && any(round(JuMP.value(model_v1[:switch_on][uname, t])) == 1.0 for t in 1:T)
-         sd_v1 = has_curve && any(round(JuMP.value(model_v1[:switch_off][uname, t])) == 1.0 for t in 1:T)
-         su_v2 = has_curve && any(round(JuMP.value(model_v2[:switch_on][uname, t])) == 1.0 for t in 1:T)
-         sd_v2 = has_curve && any(round(JuMP.value(model_v2[:switch_off][uname, t])) == 1.0 for t in 1:T)
- 
-         prev_p_v1 = 0.0
-         prev_p_v2 = 0.0
- 
-         for t in 1:T
-             Pmin = g_orig.min_power[t]
-             Pmax = g_orig.max_power[t]
- 
-             # base model
-             pa0 = JuMP.value(model_base[:prod_above]["s1", uname, t])
-             v0 = round(JuMP.value(model_base[:is_on][uname, t]))
-             p_base = pa0 + Pmin * v0
- 
-             # v1 model
-             p_v1, status_v1, ub_v1, lb_v1 = get_actual_power(model_v1, g_v1, t, T, prev_p_v1)
-             prev_p_v1 = p_v1
- 
-             # v2 model
-             p_v2, status_v2, ub_v2, lb_v2 = get_actual_power(model_v2, g_v2, t, T, prev_p_v2)
-             prev_p_v2 = p_v2
- 
-             push!(rows,
-                 "$run_name,$case_name,$uname,$t," *
-                 "$(round(p_base,digits=4)),$(round(p_v1,digits=4)),$(round(p_v2,digits=4))," *
-                 "$ub_v1,$lb_v1,$ub_v2,$lb_v2," *
-                 "$status_v1,$status_v2," *
-                 "$(round(Pmin,digits=4)),$(round(Pmax,digits=4))," *
-                 "$(round(startup_limit,digits=4)),$(round(shutdown_limit,digits=4))," *
-                 "$UD,$DD,$su_v1,$sd_v1,$su_v2,$sd_v2," *
-                 "$min_uptime_orig,$min_uptime_v2,$has_curve"
-             )
-         end
-     end
- 
-     fname = joinpath(out_dir, "$(run_name)_combined.csv")
-     open(fname, "w") do f
-         for row in rows
-             println(f, row)
-         end
-     end
-     println(" Saved: $fname")
- 
-     # ---------- Physical-plausibility validation statistics ----------
-     total_curves = 0
-     v1_su_count, v1_sd_count = 0, 0
-     v2_su_count, v2_sd_count = 0, 0
- 
-     for g_v1 in sc_v1.thermal_units
-         uname = g_v1.name
-         if !isempty(g_v1.startup_curve)
-             total_curves += 1
-             if any(round(JuMP.value(model_v1[:switch_on][uname, t])) == 1.0 for t in 1:T) v1_su_count += 1 end
-             if any(round(JuMP.value(model_v1[:switch_off][uname, t])) == 1.0 for t in 1:T) v1_sd_count += 1 end
-             if any(round(JuMP.value(model_v2[:switch_on][uname, t])) == 1.0 for t in 1:T) v2_su_count += 1 end
-             if any(round(JuMP.value(model_v2[:switch_off][uname, t])) == 1.0 for t in 1:T) v2_sd_count += 1 end
-         end
-     end
- 
-     println("\n [Plausibility check] Startup/shutdown trajectory activation:")
-     println(" Units with trajectory curves: $total_curves")
-     println(" v1 actual startups: $(v1_su_count)/$total_curves | actual shutdowns: $(v1_sd_count)/$total_curves")
-     println(" v2 actual startups: $(v2_su_count)/$total_curves | actual shutdowns: $(v2_sd_count)/$total_curves")
-     if total_curves > 0 && v1_su_count == 0 && v2_su_count == 0
-         println(" Warning: curves were added, yet no curve-equipped unit actually started up or shut down.")
-     end
- end
- 
- function export_summary_csv(results, run_name, case_name, out_dir)
-     fname = joinpath(out_dir, "$(run_name)_summary.csv")
-     open(fname, "w") do f
-         println(f, "case,model,objective,lower_bound,mip_gap_actual,solve_time,diff_pct")
-         for (tag, obj, lb, gap, stime, diff) in results
-             println(f, "$case_name,$tag,$(round(obj,digits=4)),$(round(lb,digits=4)),$(round(gap,digits=6)),$(round(stime,digits=4)),$(round(diff,digits=6))")
-         end
-     end
-     println(" Saved: $fname")
- end
- 
- println("="^60)
- 
- # Master output directory
- master_dir = "test"
- mkpath(master_dir)
- println(">> Starting tests")
- 
- for (json_path, case_name) in CASES
-     # Extract the date, e.g. 2017-01-01 (now taken from the instance file name)
-     date_str = split(basename(json_path), ".")[1]
- 
-     # Build the unified run prefix, e.g. case2383wp-2017-01-01
-     run_name = "$(case_name)-$(date_str)"
- 
-     # Outputs are stored under master_dir
-     out_dir = joinpath(master_dir, run_name)
-     mkpath(out_dir)
- 
-     println("\n[Run: $run_name]")
- 
-     # Read the original instance → solve the base model
-     instance_orig = UnitCommitment.read(json_path)
-     println(" [1/4] Solving base model...")
-     model_base = build_and_solve(instance_orig, false)
- 
-     obj_base = JuMP.objective_value(model_base)
-     lb_base = JuMP.objective_bound(model_base)
-     time_base = JuMP.solve_time(model_base)
-     gap_base = abs(obj_base - lb_base) / max(1e-10, abs(obj_base))
-     println(" base obj: $(round(obj_base, digits=2)) | gap: $(round(gap_base*100, digits=4))% | solver_time: $(round(time_base, digits=2))s")
- 
-     println("\n[2/4] Part 1 - adding startup/shutdown curves...")
-     json_v1_path = add_trajectory_curves(
-         json_path;
-         top_pct = 0.10,
-         output_path = joinpath(out_dir, "$(run_name)-part1.json"),
-     )
-     instance_v1 = UnitCommitment.read(json_v1_path)
-     println(" Solving traj_v1 model...")
-     model_v1 = build_and_solve(instance_v1, true)
- 
-     obj_v1 = JuMP.objective_value(model_v1)
-     lb_v1 = JuMP.objective_bound(model_v1)
-     time_v1 = JuMP.solve_time(model_v1)
-     gap_v1 = abs(obj_v1 - lb_v1) / max(1e-10, abs(obj_v1))
-     diff_v1 = (obj_v1 - obj_base) / obj_base * 100
-     println(" v1 obj: $(round(obj_v1, digits=2)) diff=$(round(diff_v1, digits=4))% | gap: $(round(gap_v1*100, digits=4))% | solver_time: $(round(time_v1, digits=2))s")
- 
-     println("\n [3/4] Part 2 - modifying minimum uptime...")
-     json_v2_path = modify_min_uptime(
-         json_v1_path;
-         top_pct = 0.10,
-         output_path = joinpath(out_dir, "$(run_name)-part2.json"),
-     )
-     instance_v2 = UnitCommitment.read(json_v2_path)
-     println(" Solving traj_v2 model...")
-     model_v2 = build_and_solve(instance_v2, true)
- 
-     obj_v2 = JuMP.objective_value(model_v2)
-     lb_v2 = JuMP.objective_bound(model_v2)
-     time_v2 = JuMP.solve_time(model_v2)
-     gap_v2 = abs(obj_v2 - lb_v2) / max(1e-10, abs(obj_v2))
-     diff_v2 = (obj_v2 - obj_base) / obj_base * 100
-     println(" v2 obj: $(round(obj_v2, digits=2)) diff=$(round(diff_v2, digits=4))% | gap: $(round(gap_v2*100, digits=4))% | solver_time: $(round(time_v2, digits=2))s")
- 
-     println("\n ── Objective summary " * "─"^30)
-     println(" base : obj=$(round(obj_base, digits=2)), gap=$(round(gap_base*100, digits=4))%, solver_time=$(round(time_base, digits=2))s")
-     println(" v1 : obj=$(round(obj_v1, digits=2)), diff=$(round(diff_v1, digits=4))%, gap=$(round(gap_v1*100, digits=4))%, solver_time=$(round(time_v1, digits=2))s")
-     println(" v2 : obj=$(round(obj_v2, digits=2)), diff=$(round(diff_v2, digits=4))%, gap=$(round(gap_v2*100, digits=4))%, solver_time=$(round(time_v2, digits=2))s")
- 
-     println("\n[4/4] Exporting CSV...")
-     export_combined_csv(
-         model_base, model_v1, model_v2,
-         instance_orig, instance_v1, instance_v2,
-         run_name, out_dir
-     )
- 
-     results_to_export = [
-         ("base", obj_base, lb_base, gap_base, time_base, 0.0),
-         ("v1", obj_v1, lb_v1, gap_v1, time_v1, diff_v1),
-         ("v2", obj_v2, lb_v2, gap_v2, time_v2, diff_v2)
-     ]
-     export_summary_csv(results_to_export, run_name, case_name, out_dir)
- end
- 
- println("\n" * "="^60)
- println("Tests finished")
+ include(joinpath(@__DIR__, "test", "test_instance_modification.jl"))
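With this change `test_main.jl` is a thin shim over the relocated script, so the two invocations below should behave identically. A sketch; the `--project=.` flag assumes you launch from `UnitCommitment_Trajectory_Test/`:

```julia
# Either entry point runs the same base/v1/v2 comparison experiment:
#   julia --project=. test_main.jl
#   julia --project=. test/test_instance_modification.jl
include(joinpath(@__DIR__, "test", "test_instance_modification.jl"))
```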