Controlling a Real Robot Arm via ROS (18) --- Building an octomap for Collision Detection and Obstacle Avoidance
My senior and fellow labmates often use octomap, the octree map, in their SLAM work. Compared with a point cloud map, an octomap greatly reduces the storage space the map needs. And since octomaps work for navigation and obstacle avoidance, they can naturally also be imported into MoveIt! as obstacles during the arm's path planning, making collision checking between the arm and its surroundings straightforward.
First, a note: MoveIt! does not bundle object recognition or environment modeling modules; it imports data collected by external sensors. My idea is to generate the octomap used for collision checking from the depth and RGB images of a stereo camera. The octomap can also be updated continuously in real time according to Bayes' rule, so the arm can avoid obstacles in the real world.
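For reference, the per-voxel occupancy update that octomap performs is the standard log-odds Bayes filter (this formula comes from the OctoMap literature, not from anything implemented in this post):

$$ L(n \mid z_{1:t}) = L(n \mid z_{1:t-1}) + L(n \mid z_t), \qquad L(n) = \log\frac{P(n)}{1 - P(n)} $$

where $n$ is a voxel, $z_t$ is the measurement at time $t$, and the log-odds value is clamped so the map stays updatable as the scene changes.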
Collision checking is one of the hard problems in motion planning: every sampled state must be tested for validity, so the planner needs an efficient collision-checking algorithm. Although MoveIt! has to obtain point clouds or depth data from external sensors, for the checking itself it integrates FCL (Flexible Collision Library), which can run collision queries against an octomap very quickly. I used this more or less blindly back when I was learning ROS, manually importing object models into MoveIt!. It is also worth mentioning that checking every pair of objects in the scene wastes time, which is why the MoveIt! setup we did earlier generates an ACM (Allowed Collision Matrix) as an optimization.
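To make the FCL and ACM point concrete, here is a minimal sketch (not part of this project's code) of a collision query through the MoveIt! planning-scene API, assuming a robot model is loaded on the standard robot_description parameter:

// Minimal sketch: FCL-backed collision query using the Allowed Collision Matrix.
#include <ros/ros.h>
#include <moveit/robot_model_loader/robot_model_loader.h>
#include <moveit/planning_scene/planning_scene.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "collision_check_demo");
    robot_model_loader::RobotModelLoader loader("robot_description");
    planning_scene::PlanningScene scene(loader.getModel());

    collision_detection::CollisionRequest req;
    collision_detection::CollisionResult res;
    // The ACM generated by the Setup Assistant lets FCL skip link pairs
    // that can never collide (e.g. adjacent links), saving time per query
    const collision_detection::AllowedCollisionMatrix& acm = scene.getAllowedCollisionMatrix();
    scene.checkCollision(req, res, scene.getCurrentState(), acm);
    ROS_INFO("In collision: %s", res.collision ? "yes" : "no");
    return 0;
}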
1. Converting a Depth Image into a PCL Point Cloud in ROS
If you are not working inside ROS, you need to install the PCL library together with its dependencies and configuration; see the official octomap repository https://github.com/OctoMap/octomap and this blog post: https://blog.csdn.net/LOVE1055259415/article/details/79911653
That post reads a .pcd file, converts it into an octomap, and writes the result to a .ot file. That workflow is inconvenient inside ROS, where we instead want to subscribe to a point cloud topic as a ROS message, convert it to an octomap, and publish it for MoveIt!. Moreover, my sensor is a stereo camera, so the raw data are RGB-D images; the first thing we need is a way to convert a depth image into a point cloud. I built a depth2pointcloud package to test this; the back-projection math and the test node follow below.
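For each pixel (u, v) with raw depth d(u, v) stored in millimetres, the code applies the standard pinhole back-projection, with the focal length f and principal point (cx, cy) hard-coded for a 640x480 Kinect-style camera:

$$ z = \frac{d(u,v)}{1000}, \qquad x = \frac{(u - c_x)\,z}{f}, \qquad y = \frac{(v - c_y)\,z}{f} $$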
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <iostream>
#include <string>
#include <stdlib.h>
#include <stdio.h>
#include <sstream>
#include <vector>
#include <opencv2/opencv.hpp>
#include <ros/ros.h>
#include <pcl/point_cloud.h>
#include <pcl_conversions/pcl_conversions.h>
#include <sensor_msgs/PointCloud2.h>

using namespace cv;
using namespace std;

pcl::PointCloud<pcl::PointXYZRGB>::Ptr depth2cloud( cv::Mat rgb_image, cv::Mat depth_image )
{
    // Camera intrinsics (hard-coded for a 640x480 Kinect-style sensor)
    float f = 570.3;
    float cx = 320.0, cy = 240.0;
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_ptr( new pcl::PointCloud<pcl::PointXYZRGB>() );
    cloud_ptr->width = rgb_image.cols;
    cloud_ptr->height = rgb_image.rows;
    cloud_ptr->is_dense = false;
    for ( int y = 0; y < rgb_image.rows; ++y ) {
        for ( int x = 0; x < rgb_image.cols; ++x ) {
            pcl::PointXYZRGB pt;
            if ( depth_image.at<unsigned short>(y, x) != 0 )
            {
                // Pinhole back-projection; depth is stored in millimetres
                pt.z = depth_image.at<unsigned short>(y, x) / 1000.0;
                pt.x = (x - cx) * pt.z / f;
                pt.y = (y - cy) * pt.z / f;
                // OpenCV stores pixels as BGR
                pt.r = rgb_image.at<cv::Vec3b>(y, x)[2];
                pt.g = rgb_image.at<cv::Vec3b>(y, x)[1];
                pt.b = rgb_image.at<cv::Vec3b>(y, x)[0];
                cloud_ptr->points.push_back( pt );
            }
            else
            {
                // No depth reading: push a zero point to keep the cloud organized
                pt.z = 0;
                pt.x = 0;
                pt.y = 0;
                pt.r = 0;
                pt.g = 0;
                pt.b = 0;
                cloud_ptr->points.push_back( pt );
            }
        }
    }
    return cloud_ptr;
}

int main(int argc, char* argv[])
{
    ros::init(argc, argv, "publish_depth");
    ros::NodeHandle nh;
    cv::Mat depth;
    cv::Mat image;
    image = cv::imread("/home/redwall/catkin_ws/src/depth2pointcloud/data/rgb_index/144.ppm");
    depth = cv::imread("/home/redwall/catkin_ws/src/depth2pointcloud/data/dep_index/144.pgm", IMREAD_ANYDEPTH);
    string pcdName("/home/redwall/catkin_ws/src/pointcloud2octomap/data/testpcd.pcd");
    if ( !image.data || !depth.data )  // check that both images loaded successfully
    {
        cout << "no image" << endl;
        return -1;  // exit if loading failed
    }
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
    cloud = depth2cloud(image, depth);
    pcl::io::savePCDFileASCII(pcdName, *cloud);
    cout << "successful" << endl;
    return 0;
}
The package above reads an RGB image and a depth image, uses PCL to build a <PointXYZRGB> point cloud, and writes it to a .pcd file, which can then be turned into an octomap with the method from the blog post linked above.
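For completeness, here is a minimal sketch of that offline .pcd-to-octomap conversion, assuming the standalone octomap library is installed (the file names and the 5 cm resolution are example values, not the original code):

// Minimal sketch: load a .pcd file and write it out as an octomap .bt file
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <octomap/octomap.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZRGB> cloud;
    if (pcl::io::loadPCDFile("testpcd.pcd", cloud) < 0)
        return -1;

    octomap::OcTree tree(0.05);  // 5 cm voxel resolution
    for (const auto& pt : cloud.points)
        if (pt.z != 0)  // skip the zero points inserted for empty depth pixels
            tree.updateNode(octomap::point3d(pt.x, pt.y, pt.z), true);  // mark occupied

    tree.updateInnerOccupancy();
    tree.writeBinary("test.bt");  // viewable with octovis
    return 0;
}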
2. Converting a Depth Image Directly into an octomap in ROS
With the test done, we can now convert depth straight into an octomap. The method above used both RGB and depth to build a PointXYZRGB cloud; for simplicity I drop the RGB information and use only the depth image to generate a <PointXYZ> cloud, and from that the octomap. For details, see the book 《PCL点云库》.
1. Create the ROS package: depth2octomap
2. CMakeLists.txt and package.xml
cmake_minimum_required(VERSION 2.8.3)
project(depth2octomap)

find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
  rostime
  sensor_msgs
  message_filters
)
find_package(PCL REQUIRED)
find_package(OpenCV REQUIRED)
find_package(OpenMP)

catkin_package()

include_directories(${catkin_INCLUDE_DIRS} ${PCL_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS})

add_executable(depth2octomap src/depth2octomap.cpp)
target_link_libraries(depth2octomap ${catkin_LIBRARIES} ${PCL_LIBRARIES} ${OpenCV_LIBS})

install(TARGETS depth2octomap RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
<?xml version="1.0"?>
<package>
<name>depth2octomap</name>
<version>0.0.0</version>
<description>The depth2octomap package</description>
<maintainer email="aaa@qq.com">crp</maintainer>
<license>TODO</license>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>roscpp</build_depend>
<build_depend>rospy</build_depend>
<build_depend>std_msgs</build_depend>
<run_depend>roscpp</run_depend>
<run_depend>rospy</run_depend>
<run_depend>std_msgs</run_depend>
<export>
</export>
</package>
3. src/depth2octomap.cpp
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <iostream>
#include <string>
#include <stdlib.h>
#include <stdio.h>
#include <sstream>
#include <vector>
#include <opencv2/opencv.hpp>
#include <ros/ros.h>
#include <pcl/point_cloud.h>
#include <pcl_conversions/pcl_conversions.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/Image.h>

using namespace cv;
using namespace std;

// Globals filled in by the image callbacks below
cv::Mat image;
cv::Mat depth;

/*** PCL processing: depth image -> PointXYZ cloud (RGB is discarded) ***/
pcl::PointCloud<pcl::PointXYZ>::Ptr depth2octomap( cv::Mat rgb_image, cv::Mat depth_image )
{
    // Camera intrinsics (hard-coded for a 640x480 Kinect-style sensor)
    float f = 570.3;
    float cx = 320.0, cy = 240.0;
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_ptr( new pcl::PointCloud<pcl::PointXYZ>() );
    cloud_ptr->width = rgb_image.cols;
    cloud_ptr->height = rgb_image.rows;
    cloud_ptr->is_dense = false;
    // Use only the depth information; drop the RGB values
    for ( int y = 0; y < rgb_image.rows; ++y ) {
        for ( int x = 0; x < rgb_image.cols; ++x ) {
            pcl::PointXYZ pt;
            if ( depth_image.at<unsigned short>(y, x) != 0 )
            {
                pt.z = depth_image.at<unsigned short>(y, x) / 1000.0;
                pt.x = (x - cx) * pt.z / f;
                pt.y = (y - cy) * pt.z / f;
                cloud_ptr->points.push_back( pt );
            }
            else
            {
                // No depth reading: push a zero point to keep the cloud organized
                pt.z = 0;
                pt.x = 0;
                pt.y = 0;
                cloud_ptr->points.push_back( pt );
            }
        }
    }
    return cloud_ptr;
}

/*** RGB callback ***/
void RGBCallback(const sensor_msgs::ImageConstPtr& msg)
{
    // TODO: convert msg itself (e.g. with cv_bridge); reading a fixed file
    // here is only a placeholder for testing
    image = cv::imread("/home/redwall/catkin_ws/src/depth2octomap/data/rgb_index/144.ppm");
}

/*** Depth callback ***/
void DEPCallback(const sensor_msgs::ImageConstPtr& msg)
{
    depth = cv::imread("/home/redwall/catkin_ws/src/depth2octomap/data/dep_index/144.pgm", IMREAD_ANYDEPTH);
}

int main(int argc, char* argv[])
{
    ros::init(argc, argv, "publish_octomap");
    ros::NodeHandle nh, n;
    std::string topic, frame_id;
    int hz;
    nh.param<std::string>("frame_id", frame_id, "camera");
    nh.param<std::string>("topic", topic, "/pointcloud/output");
    nh.param<int>("hz", hz, 5);
    ros::Publisher pcl_pub = nh.advertise<sensor_msgs::PointCloud2>(topic, 10);
    ///// TODO: subscribe to the camera's real depth topic; mind the message rate
    ros::Subscriber sub_rgb = n.subscribe("/rgb_img", 10, RGBCallback);
    ros::Subscriber sub_dep = n.subscribe("/dep_img", 10, DEPCallback);
    // ROS message type to publish
    sensor_msgs::PointCloud2 output;
    // Loop rate
    ros::Rate loop_rate(hz);
    while (ros::ok())
    {
        ros::spinOnce();  // let the callbacks fill image and depth
        if ( image.data && depth.data )  // skip until both images have arrived
        {
            // Note the Ptr type, hence the dereference in toROSMsg
            pcl::PointCloud<pcl::PointXYZ>::Ptr cloud = depth2octomap(image, depth);
            pcl::toROSMsg(*cloud, output);
            // Add the timestamp and frame information
            output.header.stamp = ros::Time::now();
            output.header.frame_id = frame_id;
            pcl_pub.publish(output);
        }
        loop_rate.sleep();
    }
    return 0;
}
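About the TODO in the callbacks above: the usual way to fill image and depth from the actual ROS image messages is cv_bridge. A sketch of what DEPCallback could look like under that assumption (cv_bridge would also have to be added to the package dependencies, and the encoding depends on your camera driver):

// Hedged sketch: convert the incoming depth message with cv_bridge
#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/opencv.hpp>

cv::Mat depth;  // same global as in the listing above

void DEPCallback(const sensor_msgs::ImageConstPtr& msg)
{
    try
    {
        // 16UC1 matches the unsigned short millimetre depth used above
        depth = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::TYPE_16UC1)->image;
    }
    catch (const cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
    }
}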
4. The depth2octomap node above converts depth into a point cloud; to turn the cloud into an octomap we still need to start the octomap_server node from a launch file: octomaptransform.launch
<launch>
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
    <!-- resolution in meters per voxel -->
    <param name="resolution" value="0.05" />
    <!-- name of the fixed frame, needs to be "/map" for SLAM -->
    <param name="frame_id" type="string" value="/camera" />
    <!-- max range / depth resolution of the kinect in meters -->
    <param name="sensor_model/max_range" value="100.0" />
    <param name="latch" value="true" />
    <!-- max/min height for the occupancy map, in meters -->
    <param name="pointcloud_max_z" value="1000" />
    <param name="pointcloud_min_z" value="0" />
    <!-- topic from which PointCloud2 messages are subscribed -->
    <remap from="/cloud_in" to="/pointcloud/output" />
  </node>
</launch>
5. Finally, everything is started from one top-level launch file: demo.launch
<?xml version="1.0" encoding="UTF-8"?>
<launch>
  <node name="depth2octomap" pkg="depth2octomap" type="depth2octomap">
    <!--param name="path" value="$(find pointcloud2octomap)/data/testpcd.pcd" type="str" /-->
    <param name="frame_id" value="camera" type="str" />
    <param name="topic" value="/pointcloud/output" type="str" />
    <param name="hz" value="2" type="int" />
  </node>
  <!-- Load octomap launch -->
  <include file="$(find depth2octomap)/launch/octomaptransform.launch" />
  <!-- RViz -->
  <node pkg="rviz" type="rviz" name="$(anon rviz)" respawn="false" output="screen" args="-d $(find depth2octomap)/rviz/OctomapShow.rviz"/>
</launch>
3. What Remains Incomplete
The code above does not yet implement the import into MoveIt!. What that needs is a TF transform: the point cloud and octomap messages above are published in the camera frame, while the arm's fixed frame is base_link, so a transform between the two must be added, and its matrix comes from the hand-eye calibration. Only then would the mapping be complete. Right now I am at home riding out the epidemic, unable to try many of these ideas on hardware, and afraid I will forget them after a while, so this post is my way of recording the brainwaves.
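As a sketch of that missing piece, the hand-eye result could be broadcast as a TF like this (the identity rotation and zero translation below are placeholders for the real calibration matrix, not actual values):

// Hedged sketch: publish the hand-eye calibration as a base_link -> camera TF
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "handeye_tf_broadcaster");
    tf::TransformBroadcaster br;

    // Rotation and translation from the hand-eye calibration (placeholders)
    tf::Matrix3x3 R(1, 0, 0,
                    0, 1, 0,
                    0, 0, 1);
    tf::Vector3 t(0.0, 0.0, 0.0);
    tf::Transform T(R, t);

    ros::Rate rate(10);
    while (ros::ok())
    {
        br.sendTransform(tf::StampedTransform(T, ros::Time::now(),
                                              "base_link", "camera"));
        rate.sleep();
    }
    return 0;
}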
If you just want to see the arm and the octomap coexist in MoveIt!, you can add a static transform node to the launch file and set the fixed frame in RViz accordingly; you should then see both, even though it is not of much practical use!
<node pkg="tf" type="static_transform_publisher" name="link_broadcaster" args="0 0 0 0 0 0 0 camera base_link 100" />
Of course, when capturing point cloud data, some noise is unavoidable: device precision, operator experience, environmental factors, variations in the surface properties of the measured object, and the stitching/registration process all contribute. So you should also add point cloud filtering and outlier removal to the node above as needed. Reference: https://blog.csdn.net/qq_34719188/article/details/79179430
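As a sketch of what that could look like on the <PointXYZ> cloud above, using PCL's voxel-grid and statistical outlier filters (the leaf size and outlier thresholds are example values):

// Hedged sketch: downsample and remove outliers before publishing
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/statistical_outlier_removal.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr filterCloud(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
    // Voxel-grid downsampling: keep one point per 1 cm voxel
    pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::VoxelGrid<pcl::PointXYZ> voxel;
    voxel.setInputCloud(cloud);
    voxel.setLeafSize(0.01f, 0.01f, 0.01f);
    voxel.filter(*downsampled);

    // Statistical outlier removal: drop points far from their 50 nearest neighbours
    pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
    sor.setInputCloud(downsampled);
    sor.setMeanK(50);
    sor.setStddevMulThresh(1.0);
    sor.filter(*filtered);
    return filtered;
}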