Haskell is an architecture, engineering, construction, and consulting firm headquartered in Jacksonville, Florida. Founded in 1965, it is one of the oldest and most established construction companies, with close to 2,000 employees across the globe and annual revenue of 1.45 billion USD in 2021.
We built a mobile app that classifies and segments defects in weld images as a proof of concept for Haskell. It can detect and segment a variety of weld defects, including inclusions, lack of fusion, porosity, undercut, under-fill, and cracks.
Detecting indications (pixels suspected of being defects) in weld images is a challenging computer vision task for several reasons. First, most academic research has focused on identifying defects in specific image modalities, such as X-ray; in this project, however, we relied on ordinary images captured with mobile phones. Second, different defects exhibit different visual properties in shape, size, texture, contrast, and position, which increases the complexity of building a mobile-friendly, lightweight, all-in-one machine learning model that can segment a variety of defects. Moreover, it is vital to filter out the background around the weld surface for accurate defect segmentation. We therefore used a computer vision pipeline with two convolutional neural networks: the first segments the entire weld surface, and the second segments the defects from the weld surface isolated in the first step.
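The two-stage pipeline can be sketched as follows. The models here are hypothetical stand-ins (the actual CNNs were trained on Haskell's proprietary data), and the 0.5 thresholds are illustrative; the sketch only shows how the first network's mask gates the input to the second:

```python
import numpy as np

def segment_weld(image, weld_model):
    """Stage 1: predict a binary mask of the weld surface."""
    prob = weld_model(image)                      # (H, W) probabilities
    return (prob > 0.5).astype(np.uint8)

def segment_defects(image, weld_mask, defect_model):
    """Stage 2: zero out background pixels, then segment defects."""
    focused = image * weld_mask[..., None]        # keep only weld-surface pixels
    return (defect_model(focused) > 0.5).astype(np.uint8)

# Hypothetical stand-ins for the two trained CNNs:
weld_model = lambda img: img.mean(axis=-1)                        # bright regions ~ weld
defect_model = lambda img: img.max(axis=-1) - img.min(axis=-1)    # high contrast ~ defect

image = np.random.rand(64, 64, 3).astype(np.float32)
weld_mask = segment_weld(image, weld_model)
defect_mask = segment_defects(image, weld_mask, defect_model)
```

Masking out the background before the second stage keeps the defect model from wasting capacity on irrelevant texture around the weld.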
We trained the segmentation models on a proprietary dataset provided by Haskell. Once trained and tested, we ported the weld and defect segmentation models to TFLite for edge deployment in the mobile app. Careful strategies in image preprocessing, augmentation, and the segmentation model architecture made the project a success: the customer was impressed by the accuracy and robustness of the AI on images captured at construction sites.
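The exact augmentation policy used in training is not public; the sketch below shows a typical policy for segmentation data, assuming flips, 90-degree rotations, and brightness jitter as hypothetical choices. The key design point is that geometric transforms must be applied identically to the image and its mask, while photometric jitter touches the image only:

```python
import numpy as np

def augment(image, mask, rng):
    """Apply the same geometric transforms to image and mask,
    and photometric jitter to the image only (hypothetical policy)."""
    if rng.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.random() < 0.5:
        image, mask = np.flipud(image), np.flipud(mask)
    k = int(rng.integers(0, 4))                  # rotate by 0/90/180/270 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    gain = rng.uniform(0.8, 1.2)                 # brightness jitter, image only
    image = np.clip(image * gain, 0.0, 1.0)
    return image, mask

rng = np.random.default_rng(42)
image = rng.random((128, 128, 3)).astype(np.float32)
mask = (rng.random((128, 128)) > 0.5).astype(np.uint8)
aug_img, aug_mask = augment(image, mask, rng)
```

Keeping image and mask in lockstep through every geometric transform is what prevents the training labels from drifting off their pixels.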