majogamit committed
Commit
50d6ddc
1 Parent(s): 7795538

Upload 18 files
README.md CHANGED
@@ -1,13 +1,70 @@
  ---
- title: Crack Mapping
- emoji: 🐢
- colorFrom: purple
- colorTo: gray
  sdk: gradio
- sdk_version: 4.8.0
  app_file: app.py
  pinned: false
  license: mit
  ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: Crack Segmentation
+ emoji: 🗿
+ colorFrom: yellow
+ colorTo: blue
  sdk: gradio
+ sdk_version: 4.1.1
  app_file: app.py
  pinned: false
  license: mit
  ---
 
+ # Crack Segmentation Web Application
+
+ Welcome to the Crack Segmentation Web Application! This tool segments concrete cracks in images. It is powered by Gradio and deployed on Hugging Face's Spaces platform. The underlying model and utilities are built with Python 3.10.9.
+
+ ## Table of Contents
+
+ - [Features](#features)
+ - [Installation](#installation)
+ - [Usage](#usage)
+ - [Dependencies](#dependencies)
+ - [License](#license)
+
+ ## Features
+
+ - **Interactive UI:** Powered by Gradio for easy upload and visualization of crack segmentation results.
+ - **High-Quality Segmentation:** Uses a trained YOLOv8 segmentation model to provide accurate results.
+ - **Deployed on Hugging Face Spaces:** Access the application anytime, from anywhere.
+
+ ## Installation
+
+ To run this application locally:
+
+ 1. Clone the repository:
+ ```bash
+ git clone https://github.com/cawil-ai/Cawil-Corrosion-Segmentation.git
+ cd Cawil-Corrosion-Segmentation
+ ```
+
+ 2. Install the required dependencies:
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ ## Usage
+
+ 1. Once the dependencies are installed, run the application using the command:
+ ```bash
+ python app.py
+ ```
+
+ 2. Navigate to the URL printed in the console to access the web application.
+
+ 3. Upload an image and click 'Segment' to view the crack segmentation results.
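+
+ If the printed URL is not reachable (for example, when the app runs inside a container), Gradio's standard launch flags can be set on the last line of `app.py`. A minimal sketch, assuming the default Gradio port:
+
+ ```python
+ # Hypothetical tweak: bind all interfaces on a fixed port.
+ demo.launch(server_name="0.0.0.0", server_port=7860)
+ ```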
+
+ ## Dependencies
+
+ The application requires the following Python packages, pinned in the `requirements.txt` file in the repository (the `wkhtmltopdf` system package listed in `packages.txt` is also needed for PDF report generation):
+
+ ```
+ beautifulsoup4==4.12.2
+ gradio==4.8.0
+ imgkit==1.2.3
+ ipython==8.18.1
+ numpy==1.26.2
+ pandas==2.1.3
+ pdfkit==1.0.0
+ Pillow==10.1.0
+ shortuuid==1.0.11
+ torch==2.1.1
+ ultralytics==8.0.222
+ ```
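+
+ The pinned `ultralytics` release is also enough to run the bundled weights outside the UI. A minimal sketch, assuming it is run from the repository root (the image filename is a placeholder):
+
+ ```python
+ from ultralytics import YOLO
+
+ model = YOLO("weights/best.pt")                 # segmentation weights shipped with this Space
+ results = model.predict("crack.jpg", conf=0.2)  # confidence as a fraction, as in app.py
+ for r in results:
+     print(len(r), "crack instance(s) detected")
+     if r.masks is not None:
+         print(r.masks.data.shape)               # (instances, H, W) binary masks
+ ```
+
+ ## License
+
+ This project is released under the MIT License, as declared in the configuration block above.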
__init__.py ADDED
File without changes
app.py ADDED
@@ -0,0 +1,307 @@
+ import cv2
+ import gradio as gr
+ import pandas as pd
+ import shortuuid
+ from ultralytics import YOLO
+ from utils.data_utils import clear_all
+ import torch
+ import numpy as np
+ import os
+ from utils.measure_utils import ContourAnalyzer
+ from PIL import Image
+ import utils.plot as pt
+
+
+ # Clear any previous data and configurations
+ clear_all()
+ model = YOLO('./weights/best.pt')  # load the weights once at startup (predictions reload via load_model())
+ # Define the color scheme/theme for the website
+ theme = gr.themes.Soft(
+     primary_hue="orange",
+     secondary_hue="sky",
+ )
+ # Custom CSS for styling
+ css = """
+ .size {
+     min-height: 400px !important;
+     max-height: 400px !important;
+     overflow: auto !important;
+ }
+ """
+
+ # Create the Gradio interface using the defined theme and CSS
+ with gr.Blocks(theme=theme, css=css) as demo:
+     # Title and description for the app
+     gr.Markdown("# Concrete Crack Detection and Segmentation")
+     gr.Markdown("Upload concrete crack images and get segmented results.")
+     with gr.Tab('Instructions'):
+         gr.Markdown(
+             """**Instructions for the Concrete Crack Detection and Segmentation App:**
+
+ **Input:**
+ - Upload one or more concrete crack images using the "Image Input" section.
+ - Adjust the confidence and distance sliders if needed.\n
+ **Buttons:**
+ - Click "Segment" to perform crack segmentation.
+ - Click "Clear" to reset inputs and outputs.\n
+ **Output:**
+ - View segmented images in the "Image Output" gallery.
+ - Check crack detection results in the "Results" table.
+ - Download the PDF report with detailed information.
+
+ **Additional Information:**
+ - The app uses a trained YOLOv8 model for crack detection with 86.8% accuracy.
+ - Results include the orientation category, the maximum crack width, and the number of cracks per photo.
+
+ **Notes:**
+ - Ensure uploaded images are in a supported format: PNG, JPG, JPEG, or WEBP.
+ - The Remark and Reference Image fields must not be empty.
+
+ **Enjoy detecting and segmenting concrete cracks with the app!**
+ """)
+     # Image tab
+     with gr.Tab("Image"):
+
+         with gr.Row():
+             with gr.Column():
+                 # Input section for uploading images
+                 image_input = gr.File(
+                     file_count="multiple",
+                     file_types=["image"],
+                     label="Image Input",
+                     elem_classes="size",
+                 )
+
+                 # Confidence score and camera distance for prediction
+                 conf = gr.Slider(value=20, step=5, label="Confidence",
+                                  interactive=True)
+                 distance = gr.Slider(value=10, step=1, label="Distance (cm)",
+                                      interactive=True)
+                 image_remark = gr.Textbox(label="Remark for the Batch",
+                                           placeholder='Fifth floor: Wall facing the door')
+                 # Buttons for segmentation and clearing
+                 with gr.Row():
+                     image_button = gr.Button("Segment", variant='primary')
+                     image_clear = gr.ClearButton()
+
+             with gr.Column():
+                 # Display section for segmented images
+                 image_output = gr.Gallery(
+                     label="Image Output",
+                     show_label=True,
+                     elem_id="gallery",
+                     columns=2,
+                     object_fit="contain",
+                     height=400,
+                 )
+                 md_result = gr.Markdown("**Results**", visible=False)
+                 csv_image = gr.File(label='Report', interactive=False, visible=False)
+                 df_image = gr.DataFrame(visible=False)
+
+                 image_reference = gr.File(
+                     file_count="multiple",
+                     file_types=["image"],
+                     label="Reference Image",
+                 )
+
+     def detect_pattern(image_path):
+         """
+         Detect the dominant crack orientation in a binary mask image.
+
+         Parameters:
+             image_path (str): Path to the binary image.
+
+         Returns:
+             tuple: Principal orientation (radians) and orientation category.
+         """
+         image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
+         # Thin the mask with a light erosion before extracting contours
+         skeleton = cv2.erode(image, np.ones((3, 3), dtype=np.uint8), iterations=1)
+         contours, _ = cv2.findContours(skeleton, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+         data_pts = np.vstack([contour.squeeze() for contour in contours])
+         # PCA on the contour points; the first eigenvector gives the principal axis
+         mean, eigenvectors = cv2.PCACompute(data_pts.astype(np.float32), mean=None)
+         principal_orientation = np.arctan2(eigenvectors[0, 1], eigenvectors[0, 0])
+
+         # Classify the angle (in radians); earlier branches take precedence
+         if -0.05 <= principal_orientation <= 0.05:
+             orientation_category = "Horizontal"
+         elif 1 <= principal_orientation <= 1.8:
+             orientation_category = "Vertical"
+         elif -0.99 <= principal_orientation <= 0.99:
+             orientation_category = "Diagonal"
+         else:
+             orientation_category = "Other"
+
+         return principal_orientation, orientation_category
+
+     def load_model():
+         """
+         Load the YOLO model with pre-trained weights.
+
+         Returns:
+             model: Loaded YOLO model.
+         """
+         return YOLO('./weights/best.pt')
+
+     def generate_uuid():
+         """
+         Generates a short unique identifier.
+
+         Returns:
+             str: Unique identifier string.
+         """
+         return str(shortuuid.uuid())
+
+     def preprocess_image(image):
+         """
+         Preprocesses the input image.
+
+         Parameters:
+             image (numpy.array or PIL.Image): Image to preprocess.
+
+         Returns:
+             numpy.array: Resized and converted RGB version of the input image.
+         """
+         image = np.array(image)
+
+         input_image = Image.fromarray(image)
+         input_image = input_image.resize((640, 640))
+         input_image = input_image.convert("RGB")
+
+         return np.array(input_image)
+
+     def predict_segmentation_im(image, conf, reference, remark):
+         """
+         Perform segmentation prediction on a list of images.
+
+         Parameters:
+             image (list): List of uploaded image files to segment.
+             conf (float): Confidence score for prediction, as a percentage.
+             reference (list): Reference image files for the batch.
+             remark (str): Remark describing the batch.
+
+         Returns:
+             tuple: Paths of the processed images, CSV file, DataFrame, and Markdown.
+         """
+         # Check that reference, remark, and image input are all present
+         if not reference:
+             raise gr.Error("Reference Image cannot be empty.")
+         if not remark:
+             raise gr.Error("Batch Remark cannot be empty.")
+         if not image:
+             raise gr.Error("Image input cannot be empty.")
+
+         print("Reference images:", reference)
+         uuid = generate_uuid()
+         image_list = [preprocess_image(Image.open(file.name)) for file in image]
+         filenames = [file.name for file in image]
+         conf = conf * 0.01  # slider gives a percentage; the model expects a fraction
+         model = load_model()
+         results = model.predict(image_list, conf=conf, save=True, project='output', name=uuid, stream=True)
+         processed_image_paths = []
+         output_image_paths = []
+         result_list = []
+         width_list = []
+         orientation_list = []
+         width_interpretations = []
+         # Run inference and collect measurements for each image
+         for i, r in enumerate(results):
+             result_list.append(r)
+             if r.masks is not None and r.masks.data.numel() > 0:
+                 masks = r.masks.data
+                 boxes = r.boxes.data
+                 clss = boxes[:, 5]  # class-id column of the detections
+                 crack_indices = torch.where(clss == 0)  # class 0 is the crack class
+                 crack_masks = masks[crack_indices]
+                 # Merge all crack masks into one binary mask
+                 crack_mask = torch.any(crack_masks, dim=0).int() * 255
+                 processed_image_path = str(model.predictor.save_dir / f'binarize{i}.jpg')
+                 cv2.imwrite(processed_image_path, crack_mask.cpu().numpy())
+                 processed_image_paths.append(processed_image_path)
+
+                 crack_image_path = processed_image_path
+                 principal_orientation, orientation_category = detect_pattern(crack_image_path)
+
+                 print(f"Crack Detection Results for {crack_image_path}:")
+                 print("Principal Component Analysis Orientation:", principal_orientation)
+                 print("Orientation Category:", orientation_category)
+
+                 # Load the original image in color
+                 original_img = cv2.imread(f'output/{uuid}/image{i}.jpg')
+                 orig_image_path = str(model.predictor.save_dir / f'image{i}.jpg')
+                 processed_image_paths.append(orig_image_path)
+                 # Load and resize the binary mask to match the dimensions of the original image
+                 binary_image = cv2.imread(f'output/{uuid}/binarize{i}.jpg', cv2.IMREAD_GRAYSCALE)
+                 binary_image = cv2.resize(binary_image, (original_img.shape[1], original_img.shape[0]))
+
+                 contour_analyzer = ContourAnalyzer()
+                 max_width, thickest_section, thickest_points, distance_transforms = contour_analyzer.find_contours(binary_image)
+
+                 visualized_image = original_img.copy()
+                 cv2.drawContours(visualized_image, [thickest_section], 0, (0, 255, 0), 1)
+
+                 # Mark the thickest point of the crack
+                 contour_analyzer.draw_circle_on_image(visualized_image, (int(thickest_points[0]), int(thickest_points[1])), 5, (57, 255, 20), -1)
+                 print("Max width in pixels:", max_width)
+
+                 # Convert the pixel width to millimetres (calibration constants are fixed here)
+                 width = contour_analyzer.calculate_width(y=10, x=5, pixel_width=max_width, calibration_factor=0.001, distance=150)
+                 print("Max width, converted:", width)
+
+                 prets = pt.classify_wall_damage(width)
+                 width_interpretations.append(prets)
+
+                 visualized_image_path = f'output/{uuid}/visualized_image{i}.jpg'
+                 output_image_paths.append(visualized_image_path)
+                 cv2.imwrite(visualized_image_path, visualized_image)
+
+                 width_list.append(round(width, 2))
+                 orientation_list.append(orientation_category)
+             else:
+                 # No cracks detected: pass the original image through unchanged
+                 original_img = cv2.imread(f'output/{uuid}/image{i}.jpg')
+                 visualized_image_path = f'output/{uuid}/visualized_image{i}.jpg'
+                 output_image_paths.append(visualized_image_path)
+                 cv2.imwrite(visualized_image_path, original_img)
+                 width_list.append('None')
+                 orientation_list.append('None')
+                 width_interpretations.append('None')
+
+         # Delete binarized and intermediate segmented images after processing
+         for path in processed_image_paths:
+             if os.path.exists(path):
+                 os.remove(path)
+
+         csv, df = pt.count_instance(result_list, filenames, uuid, width_list, orientation_list, output_image_paths, reference, remark, width_interpretations)
+
+         csv = gr.File(value=csv, visible=True)
+         df = gr.DataFrame(value=df, visible=True)
+         md = gr.Markdown(visible=True)
+
+         return output_image_paths, csv, df, md
+
+     # Connect the buttons to the prediction function and the clear function
+     image_button.click(
+         predict_segmentation_im,
+         inputs=[image_input, conf, image_reference, image_remark],
+         outputs=[image_output, csv_image, df_image, md_result]
+     )
+
+     image_clear.click(
+         lambda: [
+             None,
+             None,
+             gr.Markdown(visible=False),
+             gr.File(visible=False),
+             gr.DataFrame(visible=False),
+             gr.Slider(value=20),
+             None,
+             None
+         ],
+         outputs=[image_input, image_output, md_result, csv_image, df_image, conf, image_reference, image_remark]
+     )
+
+ # Launch the Gradio app
+ demo.launch()
packages.txt ADDED
@@ -0,0 +1 @@
+ wkhtmltopdf
requirements.txt ADDED
@@ -0,0 +1,11 @@
+ beautifulsoup4==4.12.2
+ gradio==4.8.0
+ imgkit==1.2.3
+ ipython==8.18.1
+ numpy==1.26.2
+ pandas==2.1.3
+ pdfkit==1.0.0
+ Pillow==10.1.0
+ shortuuid==1.0.11
+ torch==2.1.1
+ ultralytics==8.0.222
utils/__init__.py ADDED
File without changes
utils/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (176 Bytes)
utils/__pycache__/data_utils.cpython-310.pyc ADDED
Binary file (1.71 kB)
utils/__pycache__/image_utils.cpython-310.pyc ADDED
Binary file (1.63 kB)
utils/__pycache__/measure_utils.cpython-310.pyc ADDED
Binary file (2.05 kB)
utils/__pycache__/model_utils.cpython-310.pyc ADDED
Binary file (3.62 kB)
utils/__pycache__/plot.cpython-310.pyc ADDED
Binary file (4.75 kB)
utils/data_utils.py ADDED
@@ -0,0 +1,53 @@
+ import os
+ import shutil
+ import shortuuid
+
+
+ def clear_all():
+     """
+     Removes the 'output/' directory along with its content.
+
+     Returns:
+         None
+     """
+     shutil.rmtree('output/', ignore_errors=True)
+
+ def clear_value_tab(path):
+     """
+     Removes a specific sub-directory under 'output/'.
+
+     Parameters:
+         path (str): Sub-directory to remove.
+
+     Returns:
+         None
+     """
+     print(path)
+     shutil.rmtree(os.path.join('output/', path), ignore_errors=True)
+
+ def get_all_file_paths(directory):
+     """
+     Collects all image file paths under a given directory.
+
+     Parameters:
+         directory (str): Directory to search for image files.
+
+     Returns:
+         list: List of image file paths.
+     """
+     allowed_extensions = ('.png', '.jpg', '.jpeg', '.gif', '.bmp')
+     return [
+         os.path.join(root, file)
+         for root, _, files in os.walk(directory)
+         for file in files
+         if file.lower().endswith(allowed_extensions)
+     ]
+
+ def generate_uuid():
+     """
+     Generates a short unique identifier.
+
+     Returns:
+         str: Unique identifier string.
+     """
+     return str(shortuuid.uuid())
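+
+ # Usage sketch (paths and the example UUID are illustrative):
+ #   clear_all()                            # wipe output/ between sessions
+ #   uuid = generate_uuid()                 # e.g. 'NDs2KkprbVvBCrKDAvQFjL'
+ #   get_all_file_paths(f'output/{uuid}')   # -> ['output/<uuid>/image0.jpg', ...]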
utils/image_utils.py ADDED
@@ -0,0 +1,70 @@
+ import os
+ import numpy as np
+ import pandas as pd
+ from PIL import Image
+
+ def preprocess_image(image):
+     """
+     Preprocesses the input image.
+
+     Parameters:
+         image (numpy.array or PIL.Image): Image to preprocess.
+
+     Returns:
+         numpy.array: Resized and converted RGB version of the input image.
+     """
+     # Convert PIL image to numpy array if required
+     if isinstance(image, Image.Image):
+         image = np.array(image)
+
+     # Resize and convert the image to RGB
+     input_image = Image.fromarray(image)
+     input_image = input_image.resize((640, 640))
+     input_image = input_image.convert("RGB")
+
+     return np.array(input_image)
+
+
+ def count_instance(result, filenames, uuid, width_list, orientation_list):
+     """
+     Counts the instances in the result and generates a CSV with the counts.
+
+     Parameters:
+         result (list): List containing results for each instance.
+         filenames (list): Corresponding filenames for each result.
+         uuid (str): Unique ID for the output folder name.
+         width_list (list): List containing width values for each instance.
+         orientation_list (list): List containing orientation values for each instance.
+
+     Returns:
+         tuple: Path to the generated CSV and dataframe with counts.
+     """
+     # Initializing the dataframe
+     data = {
+         'Index': [],
+         'FileName': [],
+         'Orientation': [],
+         'Width': [],
+         'Instance': []
+     }
+     df = pd.DataFrame(data)
+
+     # Populate the dataframe with counts, width, and orientation
+     for i, res in enumerate(result):
+         instance_count = len(res)
+         df.loc[i] = [i, os.path.basename(filenames[i]), orientation_list[i], width_list[i], instance_count]
+
+     # Save the dataframe to a CSV file
+     path = os.path.join('output', uuid)
+     os.makedirs(path, exist_ok=True)
+     csv_filename = os.path.join(path, '_results.csv')
+
+     # Reorder columns
+     df = df[['Index', 'FileName', 'Orientation', 'Width', 'Instance']]
+
+     df.to_csv(csv_filename, index=False)
+
+     return csv_filename, df
utils/measure_utils.py ADDED
@@ -0,0 +1,52 @@
+ import cv2
+ import numpy as np
+ import math
+
+ class ContourAnalyzer:
+     def __init__(self, min_area_threshold=5):
+         self.min_area_threshold = min_area_threshold
+
+     def find_thickest_contour(self, contours, binary_image):
+         max_width = 0
+         thickest_section = None
+         thickest_points = None
+         distance_transforms = []
+
+         for contour in contours:
+             if cv2.contourArea(contour) > self.min_area_threshold:
+                 # Rasterize the contour into its own mask
+                 mask = np.zeros_like(binary_image)
+                 cv2.drawContours(mask, [contour], 0, 255, thickness=cv2.FILLED)
+
+                 # The distance transform peaks at the centre of the widest section;
+                 # twice the peak value approximates the local width
+                 distance_transform = cv2.distanceTransform(mask, cv2.DIST_L2, cv2.DIST_MASK_PRECISE)
+                 distance_transforms.append(distance_transform)
+
+                 _, _, _, max_loc = cv2.minMaxLoc(distance_transform)
+                 width = 2 * distance_transform[max_loc[1], max_loc[0]]
+
+                 if width > max_width:
+                     max_width = width
+                     thickest_section = contour
+                     thickest_points = max_loc
+
+         return max_width, thickest_section, thickest_points, distance_transforms
+
+     def find_contours(self, binary_image):
+         contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+
+         # For debugging
+         print("Number of contours:", len(contours))
+
+         max_width, thickest_section, thickest_points, distance_transforms = self.find_thickest_contour(contours, binary_image)
+
+         return max_width, thickest_section, thickest_points, distance_transforms
+
+     @staticmethod
+     def calculate_width(y, x, pixel_width, calibration_factor, distance):
+         # Empirical pixel-to-millimetre conversion scaled by camera distance
+         angle = math.atan2(y, x)
+         width = angle * pixel_width * distance * calibration_factor
+         return width
+
+     def draw_circle_on_image(self, image, center, radius, color=(57, 255, 20), thickness=-1):
+         cv2.circle(image, center, radius, color, thickness)
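+
+ # Worked example of calculate_width as called from app.py, where
+ # y=10, x=5, calibration_factor=0.001, and distance=150 are fixed
+ # (the calibration factor is an empirical constant):
+ #   angle  = math.atan2(10, 5)    ≈ 1.1071 rad
+ #   factor = angle * 150 * 0.001  ≈ 0.1661 mm per pixel
+ #   so a 30 px maximum width maps to ≈ 4.98 mm.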
utils/model_utils.py ADDED
@@ -0,0 +1,98 @@
+ from ultralytics import YOLO
+ import gradio as gr
+ from PIL import Image
+ import torch
+ import cv2
+ from utils.image_utils import preprocess_image, count_instance
+ from utils.data_utils import get_all_file_paths, generate_uuid
+ import numpy as np
+ import os
+
+ def load_model():
+     """
+     Load the YOLO model with pre-trained weights.
+
+     Returns:
+         model: Loaded YOLO model.
+     """
+     return YOLO('./weights/best.pt')
+
+ def detect_pattern(image_path):
+     """
+     Detect the dominant crack orientation in a binary mask image.
+
+     Parameters:
+         image_path (str): Path to the binary image.
+
+     Returns:
+         tuple: Principal orientation (radians) and orientation category.
+     """
+     image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
+     # Thin the mask with a light erosion before extracting contours
+     skeleton = cv2.erode(image, np.ones((3, 3), dtype=np.uint8), iterations=1)
+     contours, _ = cv2.findContours(skeleton, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+     data_pts = np.vstack([contour.squeeze() for contour in contours])
+     # PCA on the contour points; the first eigenvector gives the principal axis
+     mean, eigenvectors = cv2.PCACompute(data_pts.astype(np.float32), mean=None)
+     principal_orientation = np.arctan2(eigenvectors[0, 1], eigenvectors[0, 0])
+
+     # Classify the angle (in radians); earlier branches take precedence
+     if -0.05 <= principal_orientation <= 0.05:
+         orientation_category = "Horizontal"
+     elif 1 <= principal_orientation <= 1.8:
+         orientation_category = "Vertical"
+     elif -0.99 <= principal_orientation <= 0.99:
+         orientation_category = "Diagonal"
+     else:
+         orientation_category = "Other"
+
+     return principal_orientation, orientation_category
+
+ def predict_segmentation(image, conf):
+     """
+     Perform segmentation prediction on a list of images.
+
+     Parameters:
+         image (list): List of uploaded image files to segment.
+         conf (float): Confidence score for prediction, as a percentage.
+
+     Returns:
+         tuple: Paths of the processed images, CSV file, DataFrame, and Markdown.
+     """
+     uuid = generate_uuid()
+     image_list = [preprocess_image(Image.open(file.name)) for file in image]
+     filenames = [file.name for file in image]
+     conf = conf * 0.01  # slider gives a percentage; the model expects a fraction
+     model = load_model()
+     results = model.predict(image_list, conf=conf, save=True, project='output', name=uuid, stream=True)
+     processed_image_paths = []
+     result_list = []       # materialize the streamed results so they can be counted later
+     width_list = []        # widths are not measured in this helper
+     orientation_list = []
+     for i, r in enumerate(results):
+         result_list.append(r)
+         if r.masks is not None:
+             masks = r.masks.data
+             boxes = r.boxes.data
+             clss = boxes[:, 5]  # class-id column of the detections
+             crack_indices = torch.where(clss == 0)  # class 0 is the crack class
+             crack_masks = masks[crack_indices]
+             # Merge all crack masks into one binary mask
+             crack_mask = torch.any(crack_masks, dim=0).int() * 255
+             processed_image_path = str(model.predictor.save_dir / f'binarize{i}.jpg')
+             cv2.imwrite(processed_image_path, crack_mask.cpu().numpy())
+             processed_image_paths.append(processed_image_path)
+
+             principal_orientation, orientation_category = detect_pattern(processed_image_path)
+             orientation_list.append(orientation_category)
+
+             print(f"Crack Detection Results for {processed_image_path}:")
+             print("Principal Component Analysis Orientation:", principal_orientation)
+             print("Orientation Category:", orientation_category)
+         else:
+             orientation_list.append('None')
+         width_list.append('None')
+
+     csv, df = count_instance(result_list, filenames, uuid, width_list, orientation_list)
+
+     csv = gr.File(value=csv, visible=True)
+     df = gr.DataFrame(value=df, visible=True)
+     md = gr.Markdown(visible=True)
+
+     return get_all_file_paths(f'output/{uuid}'), csv, df, md
utils/plot.py ADDED
@@ -0,0 +1,184 @@
+ import os
+ from collections import Counter
+
+ import imgkit
+ import pandas as pd
+ import pdfkit
+ from bs4 import BeautifulSoup
+ from IPython.display import display, HTML
+
+ def classify_wall_damage(crack_width):
+     """Classify a crack width in millimetres into a damage band."""
+     if crack_width < 0:
+         return "Invalid input"
+     elif crack_width <= 0.1:
+         return "Negligible"
+     elif crack_width <= 1:
+         return "Very slight"
+     elif crack_width <= 5:
+         return "Slight"
+     elif crack_width <= 15:
+         return "Moderate"
+     elif crack_width <= 25:
+         return "Severe"
+     else:
+         return "Very severe"
+
+ def generate_html_summary(crack_list):
+     # Damage levels must match the labels returned by classify_wall_damage
+     damage_levels = ["Negligible", "Very slight", "Slight", "Moderate", "Severe", "Very severe"]
+
+     # Count the occurrences of each damage level
+     string_counts = Counter(crack_list)
+
+     # Build the HTML string
+     html_summary = "<html>\n<body>\n"
+     html_summary += "<h2>Summary of this batch</h2>\n"
+     html_summary += "<p><strong>Number of Cracks Detected:</strong></p>\n"
+     html_summary += "<ul>\n"
+
+     # Append the damage level and count to the HTML string
+     for level in damage_levels:
+         count = string_counts.get(level, 0)
+         html_summary += f"<li>{level} = {count}</li>\n"
+
+     html_summary += "</ul>\n"
+     html_summary += "</body>\n</html>"
+     return html_summary
+
+ def merge_html_files(file1_path, file2_path, output_path):
+     # Read contents of the first HTML file
+     with open(file1_path, 'r', encoding='utf-8') as file1:
+         content1 = file1.read()
+
+     # Read contents of the second HTML file
+     with open(file2_path, 'r', encoding='utf-8') as file2:
+         content2 = file2.read()
+
+     # Concatenate the contents
+     merged_content = content1 + content2
+
+     # Write the merged content to the output file
+     with open(output_path, 'w', encoding='utf-8') as output_file:
+         output_file.write(merged_content)
+
+ def count_instance(result, filenames, uuid, width_list, orientation_list, image_path, reference, remark, damage):
+     """
+     Counts the instances in the result and generates the batch report.
+
+     Parameters:
+         result (list): List containing results for each instance.
+         filenames (list): Corresponding filenames for each result.
+         uuid (str): Unique ID for the output folder name.
+         width_list (list): List containing width values for each instance.
+         orientation_list (list): List containing orientation values for each instance.
+         image_path (list): Paths of the visualized output images.
+         reference (list): Paths of the reference images for the batch.
+         remark (str): Remark describing the batch.
+         damage (list): Damage-level label for each instance.
+
+     Returns:
+         tuple: Path to the generated report image and dataframe with counts.
+     """
+     # Initializing the dataframe
+     data = {
+         'Index': [],
+         'FileName': [],
+         'Orientation': [],
+         'Width (mm)': [],
+         'Instance': [],
+         'Damage Level': []
+     }
+
+     df_ref = pd.DataFrame({'Reference': [f'<img src="{ref}" width="640" >' for ref in reference]})
+
+     df = pd.DataFrame(data)
+
+     # Populate the dataframe with counts, width, and orientation
+     for i, res in enumerate(result):
+         instance_count = len(res)
+         df.loc[i] = [i, os.path.basename(filenames[i]), orientation_list[i], width_list[i], instance_count, damage[i]]
+
+     # Reorder columns
+     df = df[['Index', 'FileName', 'Orientation', 'Width (mm)', 'Damage Level', 'Instance']]
+
+     # Create a new dataframe (df2) with all columns from df, plus the image and remark
+     df2 = df.copy()
+     summary = generate_html_summary(damage)
+     base_path = [os.path.basename(path) for path in image_path]
+     df2['Image'] = base_path
+     df2['Remarks'] = remark
+
+     # Convert image paths to HTML <img> tags
+     def path_to_image_html(path):
+         return '<img src="' + path + '" width="320" >'
+
+     pd.set_option('display.max_colwidth', None)
+
+     image_cols = ['Image']
+     format_dict = {}
+     for image_col in image_cols:
+         format_dict[image_col] = path_to_image_html
+
+     col_widths = [100, 50, 50, 50, 50, 120, 150]
+     df2 = df2.drop(df.columns[0], axis=1)
+
+     # Render the tables to HTML files
+     df2.to_html(f'output/{uuid}/df_batch.html', escape=False, formatters=format_dict, col_space=col_widths, justify='left')
+     df_ref.to_html(f'output/{uuid}/df_ref.html', escape=False, justify='left')
+
+     # Load the reference-table HTML and append the batch summary after the table
+     with open(f'output/{uuid}/df_ref.html', 'r') as file:
+         html_content = file.read()
+
+     soup = BeautifulSoup(html_content, 'html.parser')
+     table = soup.find('table')
+     table.insert_after(BeautifulSoup(summary, 'html.parser'))
+
+     # Save the modified HTML to a new file
+     with open(f'output/{uuid}/df_ref_summary.html', 'w') as file:
+         file.write(str(soup))
+
+     display(HTML(df2.to_html(escape=False)))
+
+     # Merge the two HTML fragments and render the PDF and image reports
+     file1 = f'output/{uuid}/df_ref_summary.html'
+     file2 = f'output/{uuid}/df_batch.html'
+     merge_html_files(file1, file2, f'output/{uuid}/out.html')
+     opt = {"enable-local-file-access": ""}
+     pdfkit.from_file(f'output/{uuid}/df_batch.html', f'output/{uuid}/report_batch.pdf', options=opt)
+     pdfkit.from_file(f'output/{uuid}/df_ref_summary.html', f'output/{uuid}/report_ref.pdf', options=opt)
+
+     imgkit.from_file(f'output/{uuid}/out.html', f'output/{uuid}/out.jpg', options=opt)
+     return f'output/{uuid}/out.jpg', df
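+
+ # Quick check of the damage bands (widths in mm):
+ #   classify_wall_damage(0.05) -> 'Negligible'
+ #   classify_wall_damage(3.2)  -> 'Slight'
+ #   classify_wall_damage(30)   -> 'Very severe'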
weights/best.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:196fcfc2d95d9904c89738b771f5877d07bf44b513fddcc0b831ec45464f24f0
+ size 54816533