An analysis of the Apollo 6.0 fusion module entry point (obstacle_multi_sensor_fusion)
The entry point is app/obstacle_multi_sensor_fusion.cc, which is invoked by perception/onboard/component/fusion_component.cc.
1. Walkthrough of the methods
It consists mainly of two methods, Init and Process.
bool ObstacleMultiSensorFusion::Init(
    const ObstacleMultiSensorFusionParam& param) {
  if (fusion_ != nullptr) {
    AINFO << "Already inited";
    return true;
  }
  fusion_ = BaseFusionSystemRegisterer::GetInstanceByName(
      param.fusion_method);  // select the fusion method by name
  FusionInitOptions init_options;
  init_options.main_sensor = param.main_sensor;  // set the main sensor
  // Call Init of the selected fusion method
  if (fusion_ == nullptr || !fusion_->Init(init_options)) {
    AINFO << "Failed to Get Instance or Initialize " << param.fusion_method;
    return false;
  }
  return true;
}
Init takes a parameter that bundles the main-sensor setting with the fusion-method setting. ObstacleMultiSensorFusionParam is defined as follows:
struct ObstacleMultiSensorFusionParam {
  std::string main_sensor;
  std::string fusion_method;
};

struct FusionInitOptions {
  std::string main_sensor;
};
fusion_ is a BaseFusionSystem* (lib/interface/base_fusion_system), the base class of the overall fusion system. Two implementations currently exist, DummyFusionSystem and ProbabilisticFusion; in practice ProbabilisticFusion is the one used (Apollo 2.5 already used this class).
frame is the input and objects holds the fused output; since both store std::shared_ptr<Object>, they share the underlying Object memory.
bool ObstacleMultiSensorFusion::Process(const base::FrameConstPtr& frame,
                                        std::vector<base::ObjectPtr>* objects) {
  FusionOptions options;
  // Call Fuse of the overall fusion system, passing the (empty) options,
  // the sensor frame data, and the output obstacle list.
  return fusion_->Fuse(options, frame, objects);
}
2. Walkthrough of the key structs
The first input parameter, frame, uses the Frame struct (perception/base/frame.h):
struct alignas(16) Frame {
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW

  Frame() { sensor2world_pose.setIdentity(); }

  void Reset() {
    timestamp = 0.0;
    objects.clear();
    sensor2world_pose.setIdentity();
    sensor_info.Reset();
    lidar_frame_supplement.Reset();
    radar_frame_supplement.Reset();
    camera_frame_supplement.Reset();
  }
  // @brief sensor information
  SensorInfo sensor_info;  // intrinsic sensor information
  double timestamp = 0.0;  // frame timestamp
  std::vector<std::shared_ptr<Object>> objects;  // obstacles
  Eigen::Affine3d sensor2world_pose;  // sensor-to-world transform for this frame
                                      // (localization pose composed with the
                                      // sensor extrinsics, so it varies per frame)
  // sensor-specific frame supplements
  LidarFrameSupplement lidar_frame_supplement;  // defined below: an on_use flag
                                                // plus a pointer to the raw data
  RadarFrameSupplement radar_frame_supplement;
  CameraFrameSupplement camera_frame_supplement;
  UltrasonicFrameSupplement ultrasonic_frame_supplement;
};
The SensorInfo struct is defined in perception/base/sensor_meta.h:
struct SensorInfo {
  std::string name = "UNKNONW_SENSOR";  // sensor name (the typo is in the
                                        // original Apollo source)
  SensorType type = SensorType::UNKNOWN_SENSOR_TYPE;  // sensor type
  SensorOrientation orientation = SensorOrientation::FRONT;  // mounting position
  std::string frame_id = "UNKNOWN_FRAME_ID";  // coordinate-frame id string
                                              // (analogous to a ROS tf frame id)
  void Reset() {
    name = "UNKNONW_SENSOR";
    type = SensorType::UNKNOWN_SENSOR_TYPE;
    orientation = SensorOrientation::FRONT;
    frame_id = "UNKNOWN_FRAME_ID";
  }
};
The four FrameSupplement structs are defined in perception/base/frame_supplement.h.
struct alignas(16) LidarFrameSupplement {
  // @brief valid only when on_use = true
  bool on_use = false;
  // @brief only reference of the original cloud in lidar coordinate system
  std::shared_ptr<AttributePointCloud<PointF>> cloud_ptr;
  void Reset() {
    on_use = false;
    cloud_ptr = nullptr;
  }
};

struct alignas(16) RadarFrameSupplement {
  // @brief valid only when on_use = true
  bool on_use = false;
  void Reset() { on_use = false; }
};
struct alignas(16) CameraFrameSupplement {
  // @brief valid only when on_use = true
  bool on_use = false;
  // @brief only reference of the image data
  Image8UPtr image_ptr = nullptr;
  // TODO(guiyilin): modify interfaces of visualizer, use Image8U
  std::shared_ptr<Blob<uint8_t>> image_blob = nullptr;  // Caffe-style blob
                                                        // holding the image data
                                                        // for CPU/GPU access
  void Reset() {
    on_use = false;
    image_ptr = nullptr;
    image_blob = nullptr;
  }
};
struct alignas(16) UltrasonicFrameSupplement {
  // @brief valid only when on_use = true
  bool on_use = false;
  // @brief only reference of the image data
  std::shared_ptr<ImpendingCollisionEdges> impending_collision_edges_ptr;
  void Reset() {
    on_use = false;
    impending_collision_edges_ptr = nullptr;
  }
};
The second parameter, objects, uses the Object struct (perception/base/object.h):
struct alignas(16) Object {
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  Object();
  std::string ToString() const;
  void Reset();

  // @brief object id per frame, required
  int id = -1;

  // @brief convex hull of the object, required
  PointCloud<PointD> polygon;

  // oriented boundingbox information
  // @brief main direction of the object, required
  Eigen::Vector3f direction = Eigen::Vector3f(1, 0, 0);
  /*@brief the yaw angle, theta = 0.0 <=> direction(1, 0, 0),
    currently roll and pitch are not considered,
    make sure direction and theta are consistent, required
  */
  float theta = 0.0f;
  // @brief theta variance, required
  float theta_variance = 0.0f;
  // @brief center of the boundingbox (cx, cy, cz), required
  Eigen::Vector3d center = Eigen::Vector3d(0, 0, 0);
  // @brief covariance matrix of the center uncertainty, required
  Eigen::Matrix3f center_uncertainty;
  /* @brief size = [length, width, height] of boundingbox
     length is the size of the main direction, required
  */
  Eigen::Vector3f size = Eigen::Vector3f(0, 0, 0);
  // @brief size variance, required
  Eigen::Vector3f size_variance = Eigen::Vector3f(0, 0, 0);
  // @brief anchor point, required
  Eigen::Vector3d anchor_point = Eigen::Vector3d(0, 0, 0);
  // @brief object type, required
  ObjectType type = ObjectType::UNKNOWN;
  // @brief probability for each type, required
  std::vector<float> type_probs;
  // @brief object sub-type, optional
  ObjectSubType sub_type = ObjectSubType::UNKNOWN;
  // @brief probability for each sub-type, optional
  std::vector<float> sub_type_probs;
  // @brief existence confidence, required
  float confidence = 1.0f;

  // tracking information
  // @brief track id, required
  int track_id = -1;
  // @brief velocity of the object, required
  Eigen::Vector3f velocity = Eigen::Vector3f(0, 0, 0);
  // @brief covariance matrix of the velocity uncertainty, required
  Eigen::Matrix3f velocity_uncertainty;
  // @brief if the velocity estimation is converged, true by default
  bool velocity_converged = true;
  // @brief velocity confidence, required
  float velocity_confidence = 1.0f;
  // @brief acceleration of the object, required
  Eigen::Vector3f acceleration = Eigen::Vector3f(0, 0, 0);
  // @brief covariance matrix of the acceleration uncertainty, required
  Eigen::Matrix3f acceleration_uncertainty;
  // @brief age of the tracked object, required
  double tracking_time = 0.0;
  // @brief timestamp of latest measurement, required
  double latest_tracked_time = 0.0;
  // @brief motion state of the tracked object, required
  MotionState motion_state = MotionState::UNKNOWN;
  // Tailgating (trajectory of objects)
  std::array<Eigen::Vector3d, 100> drops;
  std::size_t drop_num = 0;
  // CIPV
  bool b_cipv = false;
  // @brief brake light, left-turn light and right-turn light score, optional
  CarLight car_light;
  // @brief sensor-specific object supplements, optional
  LidarObjectSupplement lidar_supplement;
  RadarObjectSupplement radar_supplement;
  CameraObjectSupplement camera_supplement;
  FusionObjectSupplement fusion_supplement;
  // @debug feature to be used for semantic mapping
  // std::shared_ptr<apollo::prediction::Feature> feature;
};
3. Summary tables for the key structs
In summary, of Process's two inputs, the first describes the intrinsic attributes of a sensor frame:
Name | Type | Meaning |
---|---|---|
sensor_info | SensorInfo | intrinsic sensor information |
timestamp | double | timestamp of the frame |
objects | vector of Object pointers | obstacle information |
sensor2world_pose | Eigen::Affine3d | sensor-to-world pose for this frame |
*_frame_supplement (x4) | per-sensor FrameSupplement structs | optional sensor-specific supplementary data |
The second describes the attributes of each obstacle:
Name | Type | Meaning |
---|---|---|
id | int | per-frame object id |
polygon | point cloud | convex hull |
direction | 3-D vector | main direction |
theta | float | yaw angle |
center | 3-D vector | bounding-box center |
center_uncertainty | 3x3 matrix | covariance matrix of the center |
size | 3-D vector | length, width, height |
size_variance | 3-D vector | variance of the size |
type | enum | object type |
type_probs | float vector | probability of each type |
sub_type | enum | object sub-type |
sub_type_probs | float vector | probability of each sub-type |
confidence | float | existence confidence |
track_id | int | track id |
velocity | 3-D vector | velocity |
velocity_uncertainty | 3x3 matrix | covariance matrix of the velocity |
acceleration | 3-D vector | acceleration |
acceleration_uncertainty | 3x3 matrix | covariance matrix of the acceleration |
tracking_time | double | cumulative tracked time |
latest_tracked_time | double | timestamp of the latest measurement |
motion_state | enum | motion state |
drops | array of 100 3-D points | trajectory history |
drop_num | size_t | number of valid trajectory points |
b_cipv | bool | closest-in-path-vehicle (CIPV) flag |
car_light | CarLight | brake/turn light scores |
*_supplement (x4) | per-sensor ObjectSupplement structs | optional sensor-specific supplements |