This work was conducted by Soeun Han, Wonjun Park, Kyumin Jeong, Taehoon Hong, and Choongwan Koo (corresponding author).
Affiliation: Construction Engineering & Management Lab, Incheon National University (INUCEM).
Han, S., Park, W., Jeong, K., Hong, T., and Koo, C. (2024). “Utilizing synthetic images to enhance the automated recognition of small-sized construction tools”, Automation in Construction, 163, 105415, https://doi.org/10.1016/j.autcon.2024.105415.
Abstract: Previous studies on vision-based classifiers have often overlooked the detection of small-sized construction tools. Given the substantial variations in these tools' size and shape, it is essential to train models with synthetic images that cover diverse viewing angles and distances. This study aimed to improve the performance of classifiers for small-sized construction tools by leveraging synthetic data. Three classifiers were proposed using the YOLOv8 algorithm, varying in data composition: (i) 'Real-4000': 4,000 authentic images; (ii) 'Hybrid-4000': 2,000 authentic and 2,000 synthetic images; and (iii) 'Hybrid-8000': 4,000 authentic and 4,000 synthetic images. To assess practical applicability, a test dataset of 144 samples per tool type was collected directly from construction sites. The results showed that the 'Hybrid-8000' model, which made the greatest use of synthetic images, performed best with an mAP_0.5 of 94.8%, a 15.2% improvement that affirms its practical applicability. These classifiers hold promise for enhancing safety and advancing real-time automation and robotics in construction.
Keywords: Small-sized construction tools; Object detection and classification; Synthetic images; Practical applicability; Construction site
Figure 1. Research framework
Figure 2. Representative examples of the training and test datasets
Figure 3. Inference results - confidence scores of small-sized tools, ‘Hammer’
Figure 4. Inference results - confidence scores of small-sized tools, ‘Tacker’
Category | Description
---|---
ScreenshotCapture.cs | C# script that automatically generates a group of synthetic images by capturing the screen while incrementally rotating a 3D object about its x- and y-axes from 0° to 360° in 15° increments (see the first sketch below this table).
YOLOv8_open.ipynb | Notebook used to develop the proposed vision-based classifiers (i.e., 'Real-4000', 'Hybrid-4000', and 'Hybrid-8000') for the automated recognition of small-sized construction tools (see the second sketch below this table).
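The actual viewing-angle sweep lives in ScreenshotCapture.cs, which runs inside a 3D rendering environment; the short Python sketch below only mirrors its loop logic as described above. The `capture_screenshot` helper is a hypothetical placeholder for the engine's screen-capture call, and stopping at 345° (because the 360° pose repeats the 0° pose) is an assumption.

```python
# Minimal sketch of the angle sweep described for ScreenshotCapture.cs.
# capture_screenshot() is a hypothetical stand-in for the engine's screen-capture call.

STEP_DEG = 15  # 15-degree increment applied to both rotation axes


def capture_screenshot(x_deg: int, y_deg: int) -> str:
    """Hypothetical placeholder: render the object at (x_deg, y_deg) and save a screenshot."""
    return f"synthetic_x{x_deg:03d}_y{y_deg:03d}.png"


def sweep_viewing_angles(step: int = STEP_DEG) -> list[str]:
    """Capture one screenshot per (x, y) rotation, stepping both axes from 0° toward 360°."""
    filenames = []
    for x_deg in range(0, 360, step):      # 0, 15, ..., 345 about the x-axis
        for y_deg in range(0, 360, step):  # 0, 15, ..., 345 about the y-axis
            filenames.append(capture_screenshot(x_deg, y_deg))
    return filenames


if __name__ == "__main__":
    shots = sweep_viewing_angles()
    print(f"{len(shots)} screenshots per 3D object")  # 24 x 24 = 576 poses at 15-degree steps
```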
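YOLOv8_open.ipynb is the authoritative training code; the sketch below only illustrates, under stated assumptions, how one of the proposed classifiers (e.g., 'Hybrid-8000') could be trained and evaluated with the Ultralytics YOLOv8 API. The dataset YAML names (hybrid_8000.yaml, tools_test.yaml), the yolov8s.pt starting weights, and the hyperparameter values are illustrative assumptions, not values taken from the paper.

```python
# Sketch of training and validating a small-tool classifier with Ultralytics YOLOv8.
# Dataset YAML paths, model size, and hyperparameters are illustrative assumptions.
from ultralytics import YOLO

# Start from pretrained YOLOv8 weights (the specific model size is an assumption).
model = YOLO("yolov8s.pt")

# Train on the hybrid dataset (4,000 authentic + 4,000 synthetic images).
# 'hybrid_8000.yaml' is a hypothetical dataset config listing the image folders
# and the small-tool class names.
model.train(data="hybrid_8000.yaml", epochs=100, imgsz=640, batch=16)

# Evaluate on the site-collected test set (144 samples per tool type).
# 'tools_test.yaml' is likewise a hypothetical config pointing at the 576 test images.
metrics = model.val(data="tools_test.yaml")
print(f"mAP@0.5 = {metrics.box.map50:.3f}")
```

Pointing `data=` at a 'real_4000.yaml' or 'hybrid_4000.yaml' config (hypothetical names) would reproduce the other two training conditions under the same setup.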
Some or all of the data or code that support the findings of this study are available from the corresponding author upon reasonable request. Request Form (please fill out the request form and send it to the corresponding author's email: [email protected])
Category | Total | Link | Release Date
---|---|---|---
[2024-07-AUTCON]_Training dataset(synthetic)-images | 4,000 | https://drive.google.com/file/d/1U8sfR3kgLP8RCHV7g7xt5ryhSCtg-IDC/view?usp=sharing | 13 Apr 2024
[2024-07-AUTCON]_Training dataset(synthetic)-labels | 4,000 | https://drive.google.com/file/d/1ochS95h0Vtl12TlLRJ325B__lZ1Fq4HY/view?usp=sharing | 13 Apr 2024
[2024-07-AUTCON]_Test dataset-images | 576 | https://drive.google.com/file/d/1UvEtkHdngPOi-EZBd7RsBvKBJ9m3F0PD/view?usp=sharing | 13 Apr 2024
[2024-07-AUTCON]_Test dataset-labels | 576 | https://drive.google.com/file/d/1ZPdcgjDNNHVWH-jBEStJ04v_09jNMAKi/view?usp=sharing | 13 Apr 2024