Improve model card: Add pipeline tag, library name, and link to code

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +358 -0
README.md CHANGED
@@ -4,13 +4,371 @@ frameworks:
  - Pytorch
  tasks:
  - underwater laser imaging
+ pipeline_tag: image-to-3d
+ library_name: pytorch
  ---

  <div align="center"><img src="./assets/streaknet_logo.png" width="400"></div><br>
  <div align="center"><img src="./assets/overview.jpg"></div>

+ **Code:** [https://github.com/BestAnHongjun/StreakNet](https://github.com/BestAnHongjun/StreakNet)
+
  ## Introduction

  In this paper, we introduce StreakNet-Arch, a novel signal-processing architecture designed for Underwater Carrier LiDAR-Radar (UCLR) imaging systems, to address their limitations in scatter suppression and real-time imaging. StreakNet-Arch formulates signal processing as a real-time, end-to-end binary classification task, enabling real-time image acquisition. To achieve this, we leverage Self-Attention networks and propose a novel Double Branch Cross Attention (DBC-Attention) mechanism that surpasses traditional methods. Furthermore, we present a method for embedding streak-tube camera images into attention networks, which effectively acts as a learned bandpass filter. To facilitate further research, we contribute a publicly available streak-tube camera image dataset containing 2,695,168 real-world underwater 3D point cloud samples. These advancements significantly improve UCLR capabilities, enhancing its performance and applicability in underwater imaging tasks.

  For further details, please refer to our [paper](https://arxiv.org/abs/2404.09158).
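+
+ To make the formulation concrete, here is a toy sketch of the idea: treat each pixel's time signal and the shared template signal as two branches, fuse them with cross attention, and emit one binary logit (target vs. scatter). Everything here (`ToyDBCAttention`, the dimensions, the mean-pooling head) is an illustrative assumption of ours, not the paper's architecture:
+
+ ```python
+ # Toy sketch only -- NOT the official StreakNet implementation.
+ import torch
+ import torch.nn as nn
+
+ class ToyDBCAttention(nn.Module):
+     def __init__(self, dim=64, heads=4):
+         super().__init__()
+         self.embed_sig = nn.Linear(1, dim)   # embed each signal sample
+         self.embed_tpl = nn.Linear(1, dim)   # embed each template sample
+         self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
+         self.head = nn.Linear(dim, 1)        # one binary logit per sequence
+
+     def forward(self, signal, template):     # both: (batch, seq_len)
+         s = self.embed_sig(signal.unsqueeze(-1))
+         t = self.embed_tpl(template.unsqueeze(-1))
+         fused, _ = self.cross(s, t, t)       # signal branch queries template branch
+         return self.head(fused.mean(dim=1))  # (batch, 1)
+
+ logit = ToyDBCAttention()(torch.randn(2, 256), torch.randn(2, 256))
+ ```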
+
+ ## Dataset
+ <details>
+ <summary>Introduction</summary>
+
+ **StreakNet-Dataset** is an underwater laser imaging dataset for **UCLR** systems. It comprises streak-tube images captured by a **UCLR** system at distances of 10m, 13m, 15m, and 20m. See the table below for more details of the dataset.
+
+ |Distance|Number of streak-tube images|Resolution of streak-tube images|Data type|Training set|Validation set|Test set|
+ |:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ |10m|400|2048x2048|uint16|315,200|40,800|819,200|
+ |13m|349|2048x2048|uint16|281,992|47,530|714,752|
+ |15m|300|2048x2048|uint16|245,400|39,200|614,400|
+ |20m|267|2048x2048|uint16|229,086|31,240|546,816|
+
+ </details>
+
+ <details>
+ <summary id="datasetdownload">Download</summary>
+
+ You can download **StreakNet-Dataset** for free from [HuggingFace](https://huggingface.co/datasets/Coder-AN/StreakNet-Dataset) or [ModelScope](https://modelscope.cn/datasets/CoderAN/StreakNet-Dataset/) via Git.
+
+ First, install `git-lfs`:
+
+ ```sh
+ curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
+ sudo apt update
+ sudo apt install git-lfs
+ sudo git lfs install --system
+ ```
+
+ Then, download **StreakNet-Dataset** into the working directory of StreakNet.
+
+ * From [HuggingFace](https://huggingface.co/datasets/Coder-AN/StreakNet-Dataset): For Global Users
+
+ ```sh
+ cd StreakNet
+ git clone https://huggingface.co/datasets/Coder-AN/StreakNet-Dataset ./datasets
+ ```
+
+ * From [ModelScope](https://modelscope.cn/datasets/CoderAN/StreakNet-Dataset): For Chinese Users
+
+ ```sh
+ cd StreakNet
+ git clone https://www.modelscope.cn/datasets/CoderAN/StreakNet-Dataset.git ./datasets
+ ```
+
+ </details>
+
+ <details>
+ <summary>Organizational Structure</summary>
+
+ After downloading **StreakNet-Dataset** from [HuggingFace](https://huggingface.co/datasets/Coder-AN/StreakNet-Dataset) or [ModelScope](https://modelscope.cn/datasets/CoderAN/StreakNet-Dataset/), you will see the following directory structure:
+
+ ```sh
+ datasets
+ |- clean_water_10m      # Data captured at a distance of 10m
+ |  |- data              # Original streak images
+ |  |  |- 001.tif
+ |  |  |- 002.tif
+ |  |  |- 003.tif
+ |  |  |- ...
+ |  |
+ |  |- groundtruth.npy   # Ground truth of the final imaging result
+ |  |- preview.jpg       # A preview of the ground truth
+ |
+ |- clean_water_13m      # Data captured at a distance of 13m (same structure as 10m)
+ |- clean_water_15m      # Data captured at a distance of 15m (same structure as 10m)
+ |- clean_water_20m      # Data captured at a distance of 20m (same structure as 10m)
+ |- template.npy         # The 1-D time sequence of the template signal
+ |- test_config.yaml     # Config file of the test set
+ |- train_config.yaml    # Config file of the training set
+ |- valid_config.yaml    # Config file of the validation set
+ ```
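+
+ As a quick sanity check of the download, the following minimal sketch (ours, assuming only the layout above; requires `numpy` and `Pillow`) loads one streak image, the template signal, and a ground truth:
+
+ ```python
+ import numpy as np
+ from PIL import Image
+
+ # One raw streak-tube image (expected: 2048x2048, uint16)
+ img = np.array(Image.open("datasets/clean_water_10m/data/001.tif"))
+ print(img.shape, img.dtype)
+
+ # 1-D template signal and the 10m ground truth
+ template = np.load("datasets/template.npy")
+ gt = np.load("datasets/clean_water_10m/groundtruth.npy")
+ print(template.shape, gt.shape)
+ ```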
+
+ </details>
+
+ ## Quick Start
+ <details>
+ <summary id="quickstartinstallation">Installation</summary>
+
+ * Step1. Set up your conda environment. ([What is Anaconda?](https://www.anaconda.com/download))
+ ```sh
+ conda create -n streaknet python=3.10
+ conda activate streaknet
+ ```
+
+ * Step2. Install StreakNet from source.
+ ```sh
+ git clone https://github.com/BestAnHongjun/StreakNet.git
+ cd StreakNet
+ pip install -e .
+ ```
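+
+ To verify the editable install, a one-liner sketch (the module name `streaknet` is assumed from the repository layout shown in the next section):
+
+ ```python
+ import streaknet  # should import without error after `pip install -e .`
+ print(streaknet.__file__)
+ ```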
+ </details>
+
+ <details>
+ <summary id="preparedataset">Prepare Dataset</summary>
+
+ * Step1. Install the StreakNet module by following the ['*Installation*'](#quickstartinstallation) section.
+
+ * Step2. Download [**StreakNet-Dataset**](#dataset) by following the ['*Download*'](#datasetdownload) section; you will then see the following directory structure.
+
+ ```sh
+ StreakNet
+ |- datasets
+ |  |- clean_water_10m
+ |  |- clean_water_13m
+ |  |- clean_water_15m
+ |  |- ...
+ |
+ |- assets
+ |- exps
+ |- scripts
+ |- streaknet
+ |- ...
+ ```
+
+ </details>
+
+ <details>
+ <summary id="trainmodels">Train Models</summary>
+
+ * Step1. Install the StreakNet module by following the ['*Installation*'](#quickstartinstallation) section.
+
+ * Step2. Prepare the [**StreakNet-Dataset**](#dataset) by following the ['*Prepare Dataset*'](#preparedataset) section.
+
+ * Step3. From the root directory, run one of the following commands to train the corresponding model.
+ ```sh
+ python tools/train_streaknet.py -b 512 -f exps/streaknet/streaknet_s.py --cache
+                                           streaknet_m.py
+                                           streaknet_l.py
+                                           streaknet_x.py
+ ```
+
+ ```sh
+ python tools/train_streaknet.py -b 512 -f exps/streaknetv2/streaknetv2_s.py --cache
+                                           streaknetv2_m.py
+                                           streaknetv2_l.py
+                                           streaknetv2_x.py
+ ```
+ > Arguments: \
+ > **-b**: set the batch size for training. \
+ > **-f**: specify the experiment profile. \
+ > **--cache**: cache the dataset in RAM during training.
+
+ **Attention**:
+
+ (1) When you enable the `--cache` option, the program preloads the dataset into RAM to accelerate training. Please ensure that your machine has at least **25GB** of free RAM before using this option. If your RAM is insufficient, disable `--cache`; the program will then load data directly from disk as needed, which typically makes training about 10 times slower.
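+
+ As a back-of-envelope check of that figure, the raw images alone (counts and sizes from the dataset table above) already occupy about 11 GB before any preprocessing overhead:
+
+ ```python
+ # 400 + 349 + 300 + 267 streak-tube images, 2048x2048, uint16 (2 bytes/pixel)
+ images = 400 + 349 + 300 + 267
+ print(f"{images * 2048 * 2048 * 2 / 1e9:.1f} GB raw")  # ~11.0 GB
+ ```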
+
+ (2) The program uses CUDA to accelerate training. Please ensure that your machine has at least one NVIDIA GPU with more than **2GB** of graphics memory.
+
+ To train without the RAM cache, omit the `--cache` option:
+
+ ```sh
+ python tools/train.py -b 512 -f exps/streaknet/streaknet_s.py
+                                 streaknet_m.py
+                                 streaknet_l.py
+                                 streaknet_x.py
+ ```
+
+ ```sh
+ python tools/train.py -b 512 -f exps/streaknetv2/streaknetv2_s.py
+                                 streaknetv2_m.py
+                                 streaknetv2_l.py
+                                 streaknetv2_x.py
+ ```
+
+ * Step4. Real-time training status is saved to the *StreakNet_outputs* folder. Run *tensorboard* to visualize the training process.
+
+ ```sh
+ tensorboard --logdir=StreakNet_outputs
+ ```
+
+ </details>
+
+ <details>
+ <summary>Demo</summary>
+
+ * Step1. Download a pretrained model from [HuggingFace](https://huggingface.co/Coder-AN/StreakNet-Models) or [ModelScope](https://modelscope.cn/models/CoderAN/StreakNet-Models/summary). Alternatively, you can directly use the model you just trained in the ['*Train Models*'](#trainmodels) section.
+
+ ```sh
+ # From HuggingFace: For Global Users
+ cd StreakNet
+ git clone https://huggingface.co/Coder-AN/StreakNet-Models ./checkpoints
+ ```
+
+ ```sh
+ # From ModelScope: For Chinese Users
+ cd StreakNet
+ git clone https://www.modelscope.cn/CoderAN/StreakNet-Models.git ./checkpoints
+ ```
+
+ * Step2. Run the following command to launch the StreakNet demo:
+
+ ```sh
+ python tools/demo_streaknet.py -b 2 \
+     --path datasets/clean_water_13m \
+     -f exps/streaknet/streaknet_s.py \
+     -c checkpoints/streaknet_s_ckpt.pth \
+     --device "cuda:0" \
+     --cache --real-time
+ ```
+
+ > Arguments: \
+ > **--path**: path to the dataset. \
+ > **-f**: specify the experiment profile. \
+ > **-b**: set the batch size for inference. \
+ > **-c**: specify the model weights for inference. \
+ > **--device**: specify the GPU for inference. \
+ > **--real-time**: enable real-time preview. \
+ > **--save**: save imaging results.
+
+ **Attention**: If you omit the `-c` option, the program automatically uses the '*best_ckpt.pth*' file located in the '*StreakNet_outputs*' directory, which you just trained in the ['*Train Models*'](#trainmodels) section.
+
+ ```sh
+ python tools/demo_streaknet.py -b 2 \
+     --path datasets/clean_water_13m \
+     -f exps/streaknet/streaknet_s.py \
+     --device "cuda:0" \
+     --save
+ ```
+
+ * Step3. Run the following command to launch the traditional bandpass-filter demo:
+
+ ```sh
+ python tools/demo_bandpass.py -b 2 --path datasets/clean_water_13m --device "cuda:0" --cache
+ ```
+
+ > Arguments: \
+ > **--path**: path to the dataset. \
+ > **-b**: set the batch size for inference. \
+ > **--device**: specify the GPU for inference. \
+ > **--save**: save imaging results.
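+
+ For intuition only, here is a rough, self-contained sketch (ours, not the repository's algorithm) of the classical idea behind such filtering: compare a return signal against the template by cross-correlation in the frequency domain, with random arrays standing in for a pixel's time series and `template.npy`:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ signal = rng.normal(size=2048)    # stand-in for one pixel's time series
+ template = rng.normal(size=2048)  # stand-in for the template signal
+
+ # Cross-correlation via FFT; a strong peak suggests a template-like echo
+ response = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(template)))
+ print("peak correlation:", np.abs(response).max())
+ ```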
+
+ * Step4. Use FDEL as an equivalent bandpass filter:
+
+ ```sh
+ python tools/demo_bandpass.py -b 2 \
+     --path datasets/clean_water_13m \
+     -f exps/streaknet/streaknet_s.py \
+     -c checkpoints/streaknet_s_ckpt.pth \
+     --device "cuda:0" --cache
+ ```
+
+ > Arguments: \
+ > **--path**: path to the dataset. \
+ > **-f**: specify the experiment profile. \
+ > **-b**: set the batch size for inference. \
+ > **-c**: specify the model weights for inference. \
+ > **--device**: specify the GPU for inference. \
+ > **--save**: save imaging results.
+
+ </details>
+
+ <details>
+ <summary>Evaluation</summary>
+
+ * Step1. Install the StreakNet module by following the ['*Installation*'](#quickstartinstallation) section.
+
+ * Step2. Prepare the [**StreakNet-Dataset**](#dataset) by following the ['*Prepare Dataset*'](#preparedataset) section.
+
+ * Step3. Train models by following the ['*Train Models*'](#trainmodels) section.
+
+ * Step4. Evaluate StreakNet:
+
+ ```sh
+ python tools/valid_streaknet.py -b 2 \
+     -f exps/streaknet/streaknet_s.py \
+     -c checkpoints/streaknet_s_ckpt.pth \
+     -d "cuda:0" --cache
+ ```
+
+ > Arguments: \
+ > **-f**: specify the experiment profile. \
+ > **-b**: set the batch size for inference. \
+ > **-c**: specify the model weights for inference. \
+ > **-d**: specify the GPU for inference. \
+ > **--save**: save imaging results.
+
+ * Step5. Evaluate the traditional bandpass-filter algorithm:
+
+ ```sh
+ python tools/valid_bandpass.py -b 2 -d "cuda:0" --cache
+ ```
+
+ > Arguments: \
+ > **-b**: set the batch size for inference. \
+ > **-d**: specify the GPU for inference. \
+ > **--save**: save imaging results.
+
+ * Step6. Evaluate the equivalent bandpass filter:
+
+ ```sh
+ python tools/valid_bandpass.py -b 2 \
+     -f exps/streaknet/streaknet_s.py \
+     -c checkpoints/streaknet_s_ckpt.pth \
+     -d "cuda:0" --cache
+ ```
+
+ > Arguments: \
+ > **-f**: specify the experiment profile. \
+ > **-b**: set the batch size for inference. \
+ > **-c**: specify the model weights for inference. \
+ > **-d**: specify the GPU for inference. \
+ > **--save**: save imaging results.
+
+ </details>
+
+ <details>
+ <summary>Test speed benchmark</summary>
+
+ * Step1. Install the StreakNet module by following the ['*Installation*'](#quickstartinstallation) section.
+
+ * Step2. Prepare the [**StreakNet-Dataset**](#dataset) by following the ['*Prepare Dataset*'](#preparedataset) section.
+
+ * Step3. Test the AIT of StreakNets.
+
+ ```sh
+ python tools/benchmark_streaknet.py -f exps/streaknet/streaknet_s.py -d "cuda:0" --save
+ ```
+
+ * Step4. Test the AIT of the traditional bandpass-filter algorithm.
+
+ ```sh
+ python tools/benchmark_bandpass.py -d "cuda:0" --save
+ ```
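+
+ If you want a quick timing loop outside these scripts, a hedged sketch of measuring an average per-batch inference time (the `Linear` layer and shapes are stand-ins, not a StreakNet model):
+
+ ```python
+ import time
+ import torch
+
+ model = torch.nn.Linear(2048, 2)   # stand-in model
+ x = torch.randn(512, 2048)         # stand-in batch
+
+ runs = 100
+ start = time.perf_counter()
+ with torch.no_grad():
+     for _ in range(runs):
+         model(x)
+ print(f"avg: {(time.perf_counter() - start) / runs * 1e3:.2f} ms/batch")
+ ```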
+
+ </details>
+
+ <!-- ## Deployment
+
+ 1. [ONNX export and an ONNXRuntime](./demo/ONNXRuntime/)
+ 2. [TensorRT in C++ and Python](./demo/TensorRT/) -->
+
+ ## Cite StreakNet
+ If you use StreakNet in your research, please cite our work using the following BibTeX entry:
+
+ ```latex
+ @misc{li2024streaknetarch,
+       title={StreakNet-Arch: An Anti-scattering Network-based Architecture for Underwater Carrier LiDAR-Radar Imaging},
+       author={Xuelong Li and Hongjun An and Guangying Li and Xing Wang and Guanghua Cheng and Zhe Sun},
+       year={2024},
+       eprint={2404.09158},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV}
+ }
+ ```
+
+ ## Respect to Predecessors
+ * During the development of this open-source project, we drew inspiration from the excellent engineering architecture of the [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) project by [Megvii](https://www.megvii.com/) Technology. The YOLOX project was led by [Dr. Jian Sun](https://baike.baidu.com/item/%E5%AD%99%E5%89%91/19814032) (1976.10-2022.6.14), a respected scientist who made significant contributions to the advancement of computer vision.🕯️🕯️🕯️
+ * We were deeply saddened to hear of the passing of [Prof. Xiaoou Tang](https://baike.baidu.com/item/%E6%B1%A4%E6%99%93%E9%B8%A5/7200225) (1968.1-2023.12.15) on December 16, 2023, shortly after completing all the preliminary experiments for this project. Prof. Tang devoted his entire life to computer science research and made outstanding contributions to the advancement of computer vision and artificial intelligence. We express our utmost respect to Prof. Tang.🕯️🕯️🕯️
+
+ ## Copyright
+
+ <br>
+ <div align="center"><img src="./assets/iopen.jpg" width="500"></div>
+ <div align="center"><p>Copyright &copy; School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University. <br>All rights reserved.</p></div>