

Overheating Monitoring Algorithm for Critical Train Components Integrating Temperature-sensing Patches and Computer Vision

SHU Dong1,2, ZHANG Beijia2, YANG Hongtai1
Abstract:
[Objective] Existing temperature sensor-based solutions for monitoring the temperature of critical train components entail high investment and operation-maintenance costs, hindering their widespread adoption in large-scale engineering projects. A low-cost, low-maintenance temperature monitoring solution is therefore urgently needed. [Method] An overheating monitoring algorithm integrating temperature-sensing patches and computer vision is proposed, following a "locate, then segment, then calculate" identification logic. Precise localization of temperature-sensing patches in images is achieved by optimizing the YOLOv3 network with a bisecting k-means clustering algorithm and an attention mechanism. A main-body and boundary separation module is embedded in the U-Net++ network architecture, and a corresponding boundary supervision term is added to the loss function, enhancing boundary segmentation and improving the segmentation precision of the patches in the image. Finally, overheating is determined from the segmented images by computing the relative proportion of color change in the temperature-sensing patches. [Result & Conclusion] Comparative experiments on localization accuracy are conducted for five algorithms: SSD, RetinaNet, YOLOv3, YOLOv4, and the improved YOLOv3, yielding accuracies of 95.32%, 97.15%, 98.09%, 98.36%, and 99.21%, respectively, with the improved YOLOv3 approaching 100%. For segmentation accuracy, comparative experiments among DeepLabV3+, U-Net++, and the improved U-Net++ yield 95.97%, 96.81%, and 98.36%, respectively, with the improved U-Net++ performing best. On a real-world test set, the improved algorithm achieves an accuracy of 99.30%.
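The bisecting (binary) k-means step used to derive anchor priors for YOLOv3 can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it clusters the (width, height) pairs of labeled boxes with plain Euclidean 2-means splits, and the distance metric, cluster count, and helper names (`kmeans2`, `bisecting_kmeans`) are all choices made here.

```python
import numpy as np

def kmeans2(points, iters=20, seed=0):
    """Plain Lloyd's 2-means on an array of (width, height) pairs."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=2, replace=False)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Distance from every point to both centers, shape (N, 2)
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = points[labels == k].mean(axis=0)
    return labels, centers

def bisecting_kmeans(points, n_clusters=9):
    """Repeatedly 2-means-split the cluster with the largest
    within-cluster SSE until n_clusters anchor clusters remain."""
    points = np.asarray(points, dtype=float)
    clusters = [points]
    while len(clusters) < n_clusters:
        sses = [((c - c.mean(axis=0)) ** 2).sum() if len(c) >= 2 else -1.0
                for c in clusters]
        if max(sses) < 0:                      # nothing left to split
            break
        target = clusters.pop(int(np.argmax(sses)))
        labels, _ = kmeans2(target)
        halves = [target[labels == k] for k in (0, 1)]
        if any(len(h) == 0 for h in halves):   # degenerate split, stop
            clusters.append(target)
            break
        clusters += halves
    # Cluster means, sorted by size, serve as the anchor priors
    return np.array(sorted(c.mean(axis=0).tolist() for c in clusters))
```

Splitting only the highest-SSE cluster at each step makes the result less sensitive to initialization than running a single flat k-means, which is the usual motivation for using the bisecting variant for anchor estimation.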
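The final "calculate" step can likewise be sketched. Assuming the segmentation stage yields two binary masks per image, one for the whole patch and one for its color-changed region (both mask names are hypothetical), overheating is flagged when the color-changed fraction of the patch area crosses a threshold; the 0.5 value below is an assumption, not a figure from the paper.

```python
import numpy as np

# Assumed decision threshold: fraction of the patch area that must have
# changed color before the patch is flagged as overheated.
CHANGE_RATIO_THRESHOLD = 0.5

def is_overheated(patch_mask, changed_mask):
    """Flag overheating from two binary masks produced by segmentation:
    `patch_mask` covers the whole temperature-sensing patch and
    `changed_mask` covers its color-changed region. The decision uses
    the relative proportion of color change within the patch."""
    patch = np.asarray(patch_mask, dtype=bool)
    changed = np.asarray(changed_mask, dtype=bool) & patch
    patch_area = patch.sum()
    if patch_area == 0:          # no patch detected in this frame
        return False
    return bool(changed.sum() / patch_area >= CHANGE_RATIO_THRESHOLD)
```

Normalizing by the patch area rather than the full image makes the decision independent of how large the patch appears in the frame, which matters when camera distance varies between inspection points.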