OriginBot Source Code Study: The Camera Driver

Published: 2024-05-06

This post records my study and understanding of OriginBot's camera driver and visualization code; my notes are written as comments inside the code files.

The documentation provides two ways to drive the camera: one, once launched, shows the live image and the results of the body-detection algorithm in a web page in real time; the other simply publishes the image data on a topic.

Launch method that can be viewed in a browser
The documentation explains this clearly; launch it with the following command:

ros2 launch originbot_bringup camera_websoket_display.launch.py

After it starts, open http://IP:8000 in a browser.

The code this command ultimately runs is originbot.originbot_bringup.launch.camera_websoket_display.launch.py, shown below:

import os

from launch import LaunchDescription
from launch_ros.actions import Node
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from ament_index_python import get_package_share_directory
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration


def generate_launch_description():
    mipi_cam_device_arg = DeclareLaunchArgument(
        'device',
        default_value='GC4663',
        description='mipi camera device')

    # This is the node that actually starts the camera; it ultimately runs
    # mipi_cam.launch.py, which is explained separately below.
    mipi_node = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(
                get_package_share_directory('mipi_cam'),
                'launch/mipi_cam.launch.py')),
        launch_arguments={
            'mipi_image_width': '960',
            'mipi_image_height': '544',
            'mipi_io_method': 'shared_mem',
            'mipi_video_device': LaunchConfiguration('device')
        }.items()
    )

    # nv12 -> jpeg
    # This calls TogetheROS.Bot's image codec module to improve performance; see:
    # https://developer.horizon.cc/documents_tros/quick_demo/hobot_codec
    jpeg_codec_node = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(
                get_package_share_directory('hobot_codec'),
                'launch/hobot_codec_encode.launch.py')),
        launch_arguments={
            'codec_in_mode': 'shared_mem',
            'codec_out_mode': 'ros',
            'codec_sub_topic': '/hbmem_img',
            'codec_pub_topic': '/image'
        }.items()
    )

    # web
    # This starts the web display. Behind it is actually an Nginx static server
    # that subscribes to image to show the picture and to smart_topic for the
    # body-detection results. In the end this runs websocket.launch.py, which is
    # explained in more detail below.
    web_smart_topic_arg = DeclareLaunchArgument(
        'smart_topic',
        default_value='/hobot_mono2d_body_detection',
        description='websocket smart topic')

    web_node = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(
                get_package_share_directory('websocket'),
                'launch/websocket.launch.py')),
        launch_arguments={
            'websocket_image_topic': '/image',
            'websocket_smart_topic': LaunchConfiguration('smart_topic')
        }.items()
    )

    # mono2d body detection
    # TogetheROS.Bot's body-detection feature. It subscribes to image data on
    # /image_raw or /hbmem_img, runs detection, and publishes the results on
    # hobot_mono2d_body_detection. I used this module in
    # https://www.guyuehome.com/45835, which also has a fairly detailed introduction.
    # Source code and official docs: https://developer.horizon.cc/documents_tros/quick_demo/hobot_codec
    mono2d_body_pub_topic_arg = DeclareLaunchArgument(
        'mono2d_body_pub_topic',
        default_value='/hobot_mono2d_body_detection',
        description='mono2d body ai message publish topic')

    mono2d_body_det_node = Node(
        package='mono2d_body_detection',
        executable='mono2d_body_detection',
        output='screen',
        parameters=[
            {"ai_msg_pub_topic_name": LaunchConfiguration(
                'mono2d_body_pub_topic')}
        ],
        arguments=['--ros-args', '--log-level', 'warn']
    )

    return LaunchDescription([
        mipi_cam_device_arg,
        # image publish
        mipi_node,
        # image codec
        jpeg_codec_node,
        # body detection
        mono2d_body_pub_topic_arg,
        mono2d_body_det_node,
        # web display
        web_smart_topic_arg,
        web_node
    ])
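As a side note (not part of the OriginBot source), a minimal rclpy subscriber can be used to watch the detection results that the web page consumes. The sketch below assumes the ai_msgs package shipped with TogetheROS.Bot is installed and that /hobot_mono2d_body_detection carries ai_msgs/msg/PerceptionTargets messages; adjust the type if your image differs.

# Hedged sketch: a minimal consumer of the body-detection results.
# Assumes the TogetheROS.Bot ai_msgs package (ai_msgs/msg/PerceptionTargets)
# is available; the field names below follow that message definition.
import rclpy
from rclpy.node import Node
from ai_msgs.msg import PerceptionTargets


class BodyDetectionListener(Node):
    def __init__(self):
        super().__init__('body_detection_listener')
        # Same topic name as the smart_topic default in the launch file above.
        self.subscription = self.create_subscription(
            PerceptionTargets,
            '/hobot_mono2d_body_detection',
            self.callback,
            10)

    def callback(self, msg):
        # Print how many targets were detected and their types.
        types = [target.type for target in msg.targets]
        self.get_logger().info(f'{len(msg.targets)} targets: {types}')


def main():
    rclpy.init()
    node = BodyDetectionListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()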

The launch file above includes mipi_cam.launch.py and websocket.launch.py; let's look at each in turn.

Here is the content of originbot.mipi_cam.launch.mipi_cam.launch.py:

from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        DeclareLaunchArgument(
            'mipi_camera_calibration_file_path',
            default_value='/userdata/dev_ws/src/origineye/mipi_cam/config/SC132GS_calibration.yaml',
            description='mipi camera calibration file path'),
        DeclareLaunchArgument(
            'mipi_out_format',
            default_value='nv12',
            description='mipi camera out format'),
        DeclareLaunchArgument(
            'mipi_image_width',
            default_value='1088',
            description='mipi camera out image width'),
        DeclareLaunchArgument(
            'mipi_image_height',
            default_value='1280',
            description='mipi camera out image height'),
        DeclareLaunchArgument(
            'mipi_io_method',
            default_value='shared_mem',
            description='mipi camera out io_method'),
        DeclareLaunchArgument(
            'mipi_video_device',
            default_value='F37',
            description='mipi camera device'),
        # Start the image-publishing package
        Node(
            package='mipi_cam',
            executable='mipi_cam',
            output='screen',
            parameters=[
                {"camera_calibration_file_path": LaunchConfiguration(
                    'mipi_camera_calibration_file_path')},
                {"out_format": LaunchConfiguration('mipi_out_format')},
                {"image_width": LaunchConfiguration('mipi_image_width')},
                {"image_height": LaunchConfiguration('mipi_image_height')},
                {"io_method": LaunchConfiguration('mipi_io_method')},
                {"video_device": LaunchConfiguration('mipi_video_device')},
                {"rotate_degree": 90},
            ],
            arguments=['--ros-args', '--log-level', 'error']
        )
    ])
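As a usage note, the arguments declared here are exactly what camera_websoket_display.launch.py overrides above. If you want to reuse mipi_cam.launch.py from your own launch file with different settings, the same IncludeLaunchDescription pattern applies. The sketch below is only illustrative: the 640x480 values and the 'ros' io_method are assumptions, not OriginBot defaults.

# Hedged sketch: reusing mipi_cam.launch.py from a custom launch file.
import os

from ament_index_python import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():
    mipi_node = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(
                get_package_share_directory('mipi_cam'),
                'launch/mipi_cam.launch.py')),
        launch_arguments={
            # Illustrative resolution override.
            'mipi_image_width': '640',
            'mipi_image_height': '480',
            # Assumption: with io_method set to 'ros', mipi_cam publishes
            # standard ROS image messages (the /image_raw topic discussed
            # below) instead of using the shared-memory path.
            'mipi_io_method': 'ros',
        }.items()
    )
    return LaunchDescription([mipi_node])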

This launch file is also quite simple, mostly argument declarations. But anyone who has used OriginBot for a while will remember that after the camera starts, the robot publishes image data on a topic called /image_raw, and that topic is not mentioned anywhere here.

That part is implemented in originbot.mipi_cam.src.mipi_cam_node.cpp, starting around line 236; the walkthrough of that function continues in the full post on Guyuehome (古月居).
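In the meantime, a quick way to confirm that /image_raw is indeed being published is to subscribe to it. The following is only a sketch, not part of the OriginBot source; it assumes the topic carries standard sensor_msgs/msg/Image messages (the usual case when mipi_cam runs with io_method set to 'ros'), and with the defaults above the encoding is expected to be nv12, though nothing here depends on that.

# Hedged sketch: verify that /image_raw is being published.
import rclpy
from sensor_msgs.msg import Image


def main():
    rclpy.init()
    node = rclpy.create_node('image_raw_checker')

    def callback(msg):
        # Log the resolution and pixel format reported by the driver.
        node.get_logger().info(f'{msg.width}x{msg.height}, encoding={msg.encoding}')

    node.create_subscription(Image, '/image_raw', callback, 10)
    rclpy.spin(node)
    rclpy.shutdown()


if __name__ == '__main__':
    main()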