JI Kaifeng, GUO Yanhui, MA Yinghao, et al. Urban Rail Transit Carbon Strip Thickness Measurement Based on Binocular Vision and Improved YOLOv7-Pose Model[J]. Urban Mass Transit, 2025, 28(12): 142-148. DOI: 10.16037/j.1007-869x.20253048

Urban Rail Transit Carbon Strip Thickness Measurement Based on Binocular Vision and Improved YOLOv7-Pose Model

  • Objective With the rapid development of urban rail transit, the safety of the current collection system has become an increasingly prominent concern. The shoe-rail current collection system is the core of third-rail power supply, but its key component, the collector shoe carbon strip, suffers non-uniform wear from long-term friction, leading to increased contact resistance, arc discharge, and other problems, and potentially triggering power supply interruption accidents in severe cases. It is therefore necessary to study thickness measurement of the collector shoe carbon strip in depth.
    Method A measurement method for collector shoe carbon strip thickness based on binocular vision and an improved YOLOv7-Pose model is proposed. By constructing a grid-labeled dataset, the method ensures that the minimum-thickness position can be captured under non-uniform wear conditions. A YOLOv7-Pose model integrating the CBAM (convolutional block attention module) attention mechanism is trained to accurately locate key points on the upper and lower edges of the carbon strip in the left-eye image. Using the disparity map generated by RAFT-Stereo, the corresponding points in the right-eye image are matched under epipolar constraints. With the camera parameters, the carbon strip thickness is then obtained as the Euclidean distance between the paired points after transformation to 3D coordinates.
    Result & Conclusion The improved YOLOv7-Pose model embedded with the CBAM attention mechanism achieves an average precision of 94.5% for key point detection in complex environments at IoU thresholds of 0.50 to 0.95, which is 0.5 percentage points higher than the original model. Combined with the binocular vision system, the method measures carbon strip thickness with an error of 0.8 mm ± 0.3 mm.
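The final step of the method above, converting a pair of upper/lower edge keypoints plus their disparities into a thickness value, can be sketched as follows. This is a minimal illustration under a standard pinhole stereo model; the intrinsics `K`, baseline, disparity map, and keypoint coordinates here are all hypothetical placeholders, not values from the paper.

```python
import numpy as np

def pixel_to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a left-image pixel with known disparity to 3D camera coordinates.

    Assumes rectified stereo: depth Z = fx * baseline / disparity.
    """
    Z = fx * baseline / disparity
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

def strip_thickness_mm(upper_px, lower_px, disp_map, K, baseline):
    """Thickness as the Euclidean distance between paired upper/lower edge keypoints.

    upper_px, lower_px: (u, v) pixel coordinates of the two keypoints in the left image.
    disp_map: dense disparity map (e.g. from a stereo matcher), indexed [row, col].
    K: 3x3 left-camera intrinsic matrix; baseline: stereo baseline in metres.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    points = []
    for (u, v) in (upper_px, lower_px):
        d = disp_map[int(round(v)), int(round(u))]  # disparity at the keypoint
        points.append(pixel_to_3d(u, v, d, fx, fy, cx, cy, baseline))
    return float(np.linalg.norm(points[0] - points[1]) * 1000.0)  # metres -> mm
```

With a 1000 px focal length, a 0.12 m baseline, and two keypoints 8 pixels apart vertically at a uniform disparity of 40 px (depth 3 m), this yields a thickness of 24 mm, a plausible order of magnitude for a carbon strip.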