Urban Rail Transit Carbon Strip Thickness Measurement Based on Binocular Vision and an Improved YOLOv7-Pose Model

  • Abstract:
    Objective With the rapid development of urban rail transit, the safety of the current collection system has become an increasingly prominent concern. The shoe-rail current collection system is the core method of third-rail power supply, but its key component, the collector shoe carbon strip, suffers non-uniform wear from long-term friction. This wear increases contact resistance and causes arc discharge, among other problems, and in severe cases can trigger power supply interruption accidents. It is therefore necessary to study the thickness measurement of the collector shoe carbon strip in depth.
    Method A method for measuring collector shoe carbon strip thickness based on binocular vision and an improved YOLOv7-Pose model is proposed. A grid-annotated dataset is constructed so that the minimum-thickness position can be captured under non-uniform wear. A YOLOv7-Pose model incorporating the CBAM (convolutional block attention module) attention mechanism is trained to accurately locate key points on the upper and lower edges of the carbon strip in the left-camera image. Using the disparity map generated by RAFT-Stereo, the corresponding points in the right-camera image are matched under the epipolar constraint. With the camera parameters, the matched key points are transformed into 3D coordinates, and the thickness of the collector shoe carbon strip is obtained as the Euclidean distance between them.
    Results & Conclusion By embedding the CBAM attention mechanism, the improved YOLOv7-Pose model achieves a mean average precision of 94.5% for key-point detection in complex environments over IoU (intersection over union) thresholds of 50% to 95%, which is 0.5 percentage points higher than the original YOLOv7-Pose model. Combined with the binocular vision system, the method achieves a carbon strip thickness measurement error of 0.8 mm ± 0.3 mm.
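The final measurement step described above (disparity → depth → 3D reprojection → Euclidean distance) can be sketched as follows, assuming a rectified stereo pair and a standard pinhole camera model. All function names and parameter values here are illustrative, not taken from the paper:

```python
import math

def pixel_to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    """Reproject a left-image pixel into camera coordinates.

    Standard rectified-stereo relation: Z = fx * baseline / disparity,
    then X, Y follow from the pinhole model. Units of `baseline`
    determine the units of the returned coordinates.
    """
    Z = fx * baseline / disparity
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return (X, Y, Z)

def strip_thickness(upper_px, lower_px, disp_upper, disp_lower,
                    fx, fy, cx, cy, baseline):
    """Thickness as the Euclidean distance between the reprojected
    upper- and lower-edge key points of the carbon strip."""
    p_upper = pixel_to_3d(*upper_px, disp_upper, fx, fy, cx, cy, baseline)
    p_lower = pixel_to_3d(*lower_px, disp_lower, fx, fy, cx, cy, baseline)
    return math.dist(p_upper, p_lower)

# Illustrative numbers only: fx = fy = 1000 px, principal point (640, 360),
# baseline 0.12 m, both edge points at disparity 60 px (depth = 2.0 m).
t = strip_thickness((640, 300), (640, 340), 60, 60,
                    1000, 1000, 640, 360, 0.12)
print(f"thickness: {t * 1000:.1f} mm")
```

In practice the disparity at each key point would be read from the RAFT-Stereo disparity map, and the epipolar constraint reduces the right-image search to the same scanline, so the right-camera correspondence is simply the left coordinate shifted by the disparity.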

     
