Analysis of Apollo 6.0 Foreground Matching Logic (the hm_data_association folder)
| Related links | |
|---|---|
| Previous post | Apollo 6.0 Fusion Frame Analysis (the FuseFrame function) |
| Next post | - |
The hm_data_association folder holds the association (matching) code and its reference material.
1. Processing Flow
HMTrackersObjectsAssociation::Associate
This file lives at /fusion/lib/data_association/hm_data_association/hm_tracks_objects_match.cc. The main public entry point is HMTrackersObjectsAssociation::Associate, which runs the following steps:
- IdAssign: first, directly match measurements that were already associated with a track in a previous frame.
  - Front radar targets and camera (front_6mm / front_12mm) targets do not go through this first IdAssign pass.
- ComputeAssociationDistanceMat: build the association distance matrix; each entry takes the minimum of the distances computed against the track's latest objects from the different sensors.
  - 2.1 TrackObjectDistance::Compute computes the distance between a single measurement and a single track.
- MinimizeAssignment: minimize the matching cost; first extract connected subgraphs, then run Hungarian matching on each subgraph.
- PostIdAssign: a second id-based pass over the still-unmatched tracks that were seen only by camera (never by lidar).
- Generate the final unassigned tracks and unassigned measurements.
- Save the association value (minimum distance) between each track and each measurement. (What this is used for downstream is unclear to me for now.)
HMTrackersObjectsAssociation::IdAssign
In plain words, the first step asks: "at which index in the track list can this sensor's track_id be found?" This avoids scanning the whole track list for every single sensor measurement.
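As a minimal illustration of that reverse-index idea (a sketch only, not the Apollo source; the real IdAssign, quoted in section 2, additionally handles the do_nothing and post flags):

#include <map>
#include <utility>
#include <vector>

// Build track_id -> track-index once, then resolve each measurement with a
// single map lookup instead of scanning every track.
std::vector<std::pair<int, int>> IdAssignSketch(
    const std::vector<int>& track_last_ids,    // track_id each track saw last frame
    const std::vector<int>& measurement_ids) { // track_id of each new measurement
  std::map<int, int> id2track;
  for (int i = 0; i < static_cast<int>(track_last_ids.size()); ++i) {
    id2track[track_last_ids[i]] = i;
  }
  std::vector<std::pair<int, int>> assignments;  // (track index, measurement index)
  for (int j = 0; j < static_cast<int>(measurement_ids.size()); ++j) {
    auto it = id2track.find(measurement_ids[j]);
    if (it != id2track.end()) {
      assignments.emplace_back(it->second, j);
    }
  }
  return assignments;
}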
TrackObjectDistance::Compute
This function lives in track_obj_distance.cc. Its main call relationships (originally shown as a figure) are walked through below.
Compute measures the distance between one track and one sensor measurement. It splits into three parts:
- Distances between non-camera sensors mainly use the Euclidean distance between the obstacle center points in the bird's-eye-view (2d) plane, with the coordinates motion-compensated by a constant-velocity (CV) model.
- Lidar-camera distance
  - If the track_id of the lidar/camera object kept by the track differs from the track_id of this (lidar/camera) measurement (IsTrackIdConsistent), dynamic-threshold gating applies (LidarCameraCenterDistanceExceedDynamicThreshold): once the center distance between track and measurement (without CV-model compensation) exceeds the threshold, the maximum value of 4.0 m is returned directly.
  - Determining the threshold: let $d$ be the range of the first point of the obstacle's point cloud (from lidar_supplement); the threshold is then $d_{th} = 5 + 0.15\,d$ (e.g. $d = 40$ m gives $d_{th} = 11$ m).
  - QueryProjectionCacheObject: fetch the projected 2d points and the 2d bbox of the point cloud.
  - BuildProjectionCacheObject: project the point cloud and cache the result
    - 1 Get the lidar2camera projection matrix.
    - 2 Predict the motion offset with the CV model.
    - 3 Create the entry in the projection cache (projection_cache_).
    - 4 If the obstacle cloud has more than 20 points and all 8 vertices of its 3d box project outside the image frustum, skip step 5; otherwise continue with step 5.
    - 5 Build the lidar bbox from the min/max x and y over all projected points.
    - 6 Write all lidar points that project inside the image, together with the projected box, into the cache.
  - ComputePtsBoxSimilarity: compute the fused 2d-bbox similarity, considering both the location and the size of the boxes.
  - If the obstacle has no point cloud, the only option is QueryProjectedVeloCtOnCamera: project the velodyne lidar center onto the camera plane and use the Euclidean distance between that point and the camera bbox center.
- Radar-camera distance
  - Get the World2Camera projection matrix. (The world frame is presumably the frame the radar output is expressed in.)
  - Compensate the radar center point with the CV model and project it onto the image plane.
  - Build the radar 3d bbox from the center point and the length/width/height.
  - ComputeRadarCameraXSimilarity: compare the center x coordinates in image coordinates; the score lies in [0, 0.9].
  - ComputeRadarCameraYSimilarity: compare the center y coordinates in image coordinates; the score lies in [0.5, 0.6].
  - ComputeRadarCameraLocSimilarity: compare the center points in 3d coordinates; the score lies in [0, 0.7].
  - ComputeRadarCameraVelocitySimilarity: compare the velocities in 3d; the score lies in [0, 0.9] (only compared above 2 m/s; otherwise the score is 0.5).
  - FuseMultipleProbabilities: fuse the individual scores into one (see the sketch below).
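FuseMultipleProbabilities itself is not quoted in this post. A minimal sketch, assuming it simply folds the pairwise rule from probabilities.cc (FuseTwoProbabilities, quoted at the end of this post) over the scores; this illustrates the fold, it is not the verified Apollo implementation:

#include <cstddef>
#include <vector>

// Pairwise fusion rule from probabilities.cc.
double FuseTwoProbabilitiesSketch(double p1, double p2) {
  return (p1 * p2) / (2 * p1 * p2 + 1 - p1 - p2);
}

// Assumed left fold over all scores. 0.5 is the neutral element:
// fusing any p with 0.5 returns p unchanged, which matches the
// "score = 0.5 below 2 m/s" convention of the velocity similarity.
double FuseMultipleProbabilitiesSketch(const std::vector<double>& probs) {
  if (probs.empty()) return 0.5;
  double fused = probs[0];
  for (std::size_t i = 1; i < probs.size(); ++i) {
    fused = FuseTwoProbabilitiesSketch(fused, probs[i]);
  }
  return fused;
}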
HMTrackersObjectsAssociation::MinimizeAssignment
File: hm_tracks_objects_match.cc. Reference blog post: Apollo perception fusion感知融合源码分析--匈牙利匹配 (on the Hungarian matching).
2. Walkthrough of Key Code
bool HMTrackersObjectsAssociation::Associate(
const AssociationOptions& options, SensorFramePtr sensor_measurements,
ScenePtr scene, AssociationResult* association_result) {
  // Parameters:
  // 1. [in]  options: the AssociationOptions struct is empty.
  // 2. [in]  sensor_measurements: the sensor measurements.
  // 3. [in]  scene: the foreground and background tracks.
  // 4. [out] association_result: the association result.
  const std::vector<SensorObjectPtr>& sensor_objects =
      sensor_measurements->GetForegroundObjects();  // foreground measurements
  const std::vector<TrackPtr>& fusion_tracks =
      scene->GetForegroundTracks();  // foreground tracks
  std::vector<std::vector<double>> association_mat;  // distance matrix for the Hungarian matching
  // corner case: no tracks or no measurements -- nothing to match, return directly
if (fusion_tracks.empty() || sensor_objects.empty()) {
association_result->unassigned_tracks.resize(fusion_tracks.size());
association_result->unassigned_measurements.resize(sensor_objects.size());
    // std::iota fills in the indices 0, 1, 2, ..., n-1 (n = vector length).
std::iota(association_result->unassigned_tracks.begin(),
association_result->unassigned_tracks.end(), 0);
std::iota(association_result->unassigned_measurements.begin(),
association_result->unassigned_measurements.end(), 0);
return true;
}
  std::string measurement_sensor_id = sensor_objects[0]->GetSensorId();  // sensor id
  double measurement_timestamp = sensor_objects[0]->GetTimestamp();  // timestamp
  track_object_distance_.ResetProjectionCache(measurement_sensor_id,  // drop previously cached projections
                                              measurement_timestamp);
  // the front radar does not go through IdAssign (its obstacle ids are
  // presumably unstable, so everything is re-matched from scratch)
  bool do_nothing = (sensor_objects[0]->GetSensorId() == "radar_front");
  // 1. match by id, reusing the association results of the previous frame
  IdAssign(fusion_tracks, sensor_objects, &association_result->assignments,
           &association_result->unassigned_tracks,
           &association_result->unassigned_measurements, do_nothing, false);
Eigen::Affine3d pose;
sensor_measurements->GetPose(&pose);
  Eigen::Vector3d ref_point = pose.translation();  // translation from the sensor to the map frame
  ADEBUG << "measurement_timestamp@" << measurement_timestamp;
  // 2. compute the association distance matrix association_mat, a num_track x num_meas matrix
ComputeAssociationDistanceMat(fusion_tracks, sensor_objects, ref_point,
association_result->unassigned_tracks,
association_result->unassigned_measurements,
&association_mat);
  // build the index mappings g2l and l2g
int num_track = static_cast<int>(fusion_tracks.size());
int num_measurement = static_cast<int>(sensor_objects.size());
association_result->track2measurements_dist.assign(num_track, 0);
association_result->measurement2track_dist.assign(num_measurement, 0);
  // For tracks: g (global) indexes fusion_tracks, l (local) indexes unassigned_tracks.
  // g2l maps a fusion_tracks index to an unassigned_tracks index; -1 marks an already-assigned track.
std::vector<int> track_ind_g2l;
track_ind_g2l.resize(num_track, -1);
for (size_t i = 0; i < association_result->unassigned_tracks.size(); i++) {
track_ind_g2l[association_result->unassigned_tracks[i]] =
static_cast<int>(i);
}
  // For measurements: g (global) indexes sensor_objects, l (local) indexes unassigned_measurements.
  // g2l maps a sensor_objects index to an unassigned_measurements index; -1 marks an already-assigned measurement.
std::vector<int> measurement_ind_g2l;
measurement_ind_g2l.resize(num_measurement, -1);
std::vector<size_t> measurement_ind_l2g =
association_result->unassigned_measurements;
for (size_t i = 0; i < association_result->unassigned_measurements.size();
i++) {
measurement_ind_g2l[association_result->unassigned_measurements[i]] =
static_cast<int>(i);
}
std::vector<size_t> track_ind_l2g = association_result->unassigned_tracks;
  // if IdAssign already matched everything, return
if (association_result->unassigned_tracks.empty() ||
association_result->unassigned_measurements.empty()) {
return true;
}
  // 3. minimize the matching cost (Hungarian matching)
  // the matrix is indexed locally; l2g converts the results back to global
  // indices before they are inserted into the three result vectors
bool state = MinimizeAssignment(
association_mat, track_ind_l2g, measurement_ind_l2g,
&association_result->assignments, &association_result->unassigned_tracks,
&association_result->unassigned_measurements);
  // start do post assign
  // 4. one more IdAssign over the unassigned tracks (those created from
  // camera-only observations) and the unassigned measurements; this likely
  // mitigates mismatches caused by inaccurate camera range estimates.
std::vector<TrackMeasurmentPair> post_assignments;
PostIdAssign(fusion_tracks, sensor_objects,
association_result->unassigned_tracks,
association_result->unassigned_measurements, &post_assignments);
association_result->assignments.insert(association_result->assignments.end(),
post_assignments.begin(),
post_assignments.end());
  // 5. generate the unassigned tracks and measurements
GenerateUnassignedData(fusion_tracks.size(), sensor_objects.size(),
association_result->assignments,
&association_result->unassigned_tracks,
&association_result->unassigned_measurements);
  // 6. save the track/measurement association values (minimum distances) into association_result
ComputeDistance(fusion_tracks, sensor_objects,
association_result->unassigned_tracks, track_ind_g2l,
measurement_ind_g2l, measurement_ind_l2g, association_mat,
association_result);
AINFO << "association: measurement_num = " << sensor_objects.size()
<< ", track_num = " << fusion_tracks.size()
<< ", assignments = " << association_result->assignments.size()
<< ", unassigned_tracks = "
<< association_result->unassigned_tracks.size()
<< ", unassigned_measuremnets = "
<< association_result->unassigned_measurements.size();
return state;
}
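The g2l / l2g bookkeeping above is worth isolating. A self-contained demo with hypothetical numbers: the Hungarian step works on the compacted lists of unassigned items (local indices), while the results must refer back to the full track/measurement arrays (global indices).

#include <cstdio>
#include <vector>

int main() {
  const int num_track = 6;
  // l2g: local index -> global index (this is just the unassigned list itself)
  std::vector<size_t> unassigned = {1, 3, 4};
  // g2l: global index -> local index; -1 marks tracks already assigned by IdAssign
  std::vector<int> g2l(num_track, -1);
  for (size_t l = 0; l < unassigned.size(); ++l) {
    g2l[unassigned[l]] = static_cast<int>(l);
  }
  // A match on local row 2 refers to global track 4, and global track 3 sits
  // at local row 1 of the association matrix:
  std::printf("local 2 -> global %zu, global 3 -> local %d\n",
              unassigned[2], g2l[3]);  // local 2 -> global 4, global 3 -> local 1
  return 0;
}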
IdAssign
// Match by id: a track re-associates obstacle ids it matched before, mainly
// for lidar and camera; the front radar returns immediately.
// `post` distinguishes the first IdAssign pass from PostIdAssign.
void HMTrackersObjectsAssociation::IdAssign(
const std::vector<TrackPtr>& fusion_tracks,
const std::vector<SensorObjectPtr>& sensor_objects,
std::vector<TrackMeasurmentPair>* assignments,
std::vector<size_t>* unassigned_fusion_tracks,
std::vector<size_t>* unassigned_sensor_objects, bool do_nothing,
bool post) {
size_t num_track = fusion_tracks.size();
size_t num_obj = sensor_objects.size();
if (num_track == 0 || num_obj == 0 || do_nothing) {
unassigned_fusion_tracks->resize(num_track);
unassigned_sensor_objects->resize(num_obj);
std::iota(unassigned_fusion_tracks->begin(),
unassigned_fusion_tracks->end(), 0);
std::iota(unassigned_sensor_objects->begin(),
unassigned_sensor_objects->end(), 0);
return;
}
  const std::string sensor_id = sensor_objects[0]->GetSensorId();  // this frame comes from sensor_id
std::map<int, int> sensor_id_2_track_ind;
  // index the track_ids of the objects stored in the tracks
for (size_t i = 0; i < num_track; i++) {
SensorObjectConstPtr obj = fusion_tracks[i]->GetSensorObject(sensor_id);
    // the camera pipeline does its own pre-matching, so all cameras share one
    // track_id sequence and the latest camera object can be used here.
    // (could the wide- vs. narrow-FOV difference cause field-of-view issues?)
/* when camera system has sub-fusion of obstacle & narrow, they share
* the same track-id sequence. thus, latest camera object is ok for
* camera id assign and its information is more up to date. */
if (sensor_id == "front_6mm" || sensor_id == "front_12mm") {
obj = fusion_tracks[i]->GetLatestCameraObject();
}
if (obj == nullptr) {
continue;
}
    // key of sensor_id_2_track_ind: the sensor track_id matched previously; value: the fusion_tracks index
sensor_id_2_track_ind[obj->GetBaseObject()->track_id] = static_cast<int>(i);
}
std::vector<bool> fusion_used(num_track, false);
std::vector<bool> sensor_used(num_obj, false);
for (size_t i = 0; i < num_obj; i++) {
    int track_id = sensor_objects[i]->GetBaseObject()->track_id;  // track_id of this sensor measurement
    auto it = sensor_id_2_track_ind.find(track_id);  // find the track index for this track_id
// In id_assign, we don't assign the narrow camera object
// with the track which only have narrow camera object
// In post id_assign, we do this.
    // the first IdAssign pass skips camera (front_6mm / front_12mm) objects.
if (!post && (sensor_id == "front_6mm" || sensor_id == "front_12mm"))
continue;
    // if found, append to assignments
    // Q: can last frame's match really be reused directly in this frame? What if it was wrong?
    // A: adding a sanity check here would be advisable.
if (it != sensor_id_2_track_ind.end()) {
sensor_used[i] = true;
fusion_used[it->second] = true;
assignments->push_back(std::make_pair(it->second, i));
}
}
for (size_t i = 0; i < fusion_used.size(); ++i) {
if (!fusion_used[i]) {
      unassigned_fusion_tracks->push_back(i);  // collect the unmatched tracks
}
}
for (size_t i = 0; i < sensor_used.size(); ++i) {
if (!sensor_used[i]) {
      unassigned_sensor_objects->push_back(i);  // collect the unmatched measurements
}
}
}
ComputeAssociationDistanceMat
void HMTrackersObjectsAssociation::ComputeAssociationDistanceMat(
const std::vector<TrackPtr>& fusion_tracks,
const std::vector<SensorObjectPtr>& sensor_objects,
const Eigen::Vector3d& ref_point,
const std::vector<size_t>& unassigned_tracks,
const std::vector<size_t>& unassigned_measurements,
std::vector<std::vector<double>>* association_mat) {
// if (sensor_objects.empty()) return;
TrackObjectDistanceOptions opt;
// TODO(linjian) ref_point
Eigen::Vector3d tmp = Eigen::Vector3d::Zero();
opt.ref_point = &tmp;
association_mat->resize(unassigned_tracks.size());
  // double loop: tracks in the outer loop, measurements in the inner loop
for (size_t i = 0; i < unassigned_tracks.size(); ++i) {
int fusion_idx = static_cast<int>(unassigned_tracks[i]);
(*association_mat)[i].resize(unassigned_measurements.size());
const TrackPtr& fusion_track = fusion_tracks[fusion_idx];
for (size_t j = 0; j < unassigned_measurements.size(); ++j) {
int sensor_idx = static_cast<int>(unassigned_measurements[j]);
const SensorObjectPtr& sensor_object = sensor_objects[sensor_idx];
      double distance = s_match_distance_thresh_;  // = 4.0
double center_dist =
(sensor_object->GetBaseObject()->center -
fusion_track->GetFusedObject()->GetBaseObject()->center)
.norm();
if (center_dist < s_association_center_dist_threshold_) { // < 30
distance =
track_object_distance_.Compute(fusion_track, sensor_object, opt);
} else {
ADEBUG << "center_distance " << center_dist
<< " exceeds slack threshold "
<< s_association_center_dist_threshold_
<< ", track_id: " << fusion_track->GetTrackId()
<< ", obs_id: " << sensor_object->GetBaseObject()->track_id;
}
(*association_mat)[i][j] = distance;
ADEBUG << "track_id: " << fusion_track->GetTrackId()
<< ", obs_id: " << sensor_object->GetBaseObject()->track_id
<< ", distance: " << distance;
}
}
}
Helper functions called by Compute
The following functions are all called by Compute and live in track_obj_distance.cc.
ComputeLidarLidar
// @brief: compute the distance between velodyne64 observation and
// velodyne64 observation
// @return the distance of velodyne64 vs. velodyne64
float TrackObjectDistance::ComputeLidarLidar(  // LidarRadar and RadarRadar below follow similar logic
const SensorObjectConstPtr& fused_object,
const SensorObjectPtr& sensor_object, const Eigen::Vector3d& ref_pos,
int range) {
double center_distance = (sensor_object->GetBaseObject()->center -
fused_object->GetBaseObject()->center)
.head(2)
.norm();
  if (center_distance > s_lidar2lidar_association_center_dist_threshold_) {  // = 10.0
    ADEBUG << "center distance exceed lidar2lidar tight threshold: "
           << "center_dist@" << center_distance << ", "
           << "tight_thresh@"
           << s_lidar2lidar_association_center_dist_threshold_;
return (std::numeric_limits<float>::max)();
}
float distance =
ComputePolygonDistance3d(fused_object, sensor_object, ref_pos, range);
ADEBUG << "ComputeLidarLidar distance: " << distance;
return distance;
}
// @brief: compute polygon distance between fused object and sensor object
// @return 3d distance between fused object and sensor object
float TrackObjectDistance::ComputePolygonDistance3d(
const SensorObjectConstPtr& fused_object,
const SensorObjectPtr& sensor_object, const Eigen::Vector3d& ref_pos,
int range) {
const base::ObjectConstPtr& obj_f = fused_object->GetBaseObject();
  Eigen::Vector3d fused_poly_center(0, 0, 0);  // Eigen type
if (!QueryPolygonDCenter(obj_f, ref_pos, range, &fused_poly_center)) {
return (std::numeric_limits<float>::max());
}
const base::ObjectConstPtr obj_s = sensor_object->GetBaseObject();
Eigen::Vector3d sensor_poly_center(0, 0, 0);
if (!QueryPolygonDCenter(obj_s, ref_pos, range, &sensor_poly_center)) {
return (std::numeric_limits<float>::max());
}
  // CV-model prediction, then return the Euclidean distance
double fusion_timestamp = fused_object->GetTimestamp();
double sensor_timestamp = sensor_object->GetTimestamp();
double time_diff = sensor_timestamp - fusion_timestamp;
fused_poly_center(0) += obj_f->velocity(0) * time_diff;
fused_poly_center(1) += obj_f->velocity(1) * time_diff;
float distance =
ComputeEuclideanDistance(fused_poly_center, sensor_poly_center);
return distance;
}
bool TrackObjectDistance::QueryPolygonDCenter(
const base::ObjectConstPtr& object, const Eigen::Vector3d& ref_pos,
const int range, Eigen::Vector3d* polygon_ct) {
if (object == nullptr) {
return false;
}
const base::PolygonDType& polygon = object->polygon;
  if (!ComputePolygonCenter(polygon, ref_pos, range, polygon_ct)) {  // a BEV 2d point, (c_x, c_y, 0)
return false;
}
return true;
}
// @brief: compute polygon center
// @return true if get center successfully, otherwise return false
bool TrackObjectDistance::ComputePolygonCenter(
const base::PolygonDType& polygon, const Eigen::Vector3d& ref_pos,
int range, Eigen::Vector3d* center) {
base::PolygonDType polygon_part;
std::map<double, int> distance2idx;
for (size_t idx = 0; idx < polygon.size(); ++idx) {
const auto& point = polygon.at(idx);
double distance =
sqrt(pow(point.x - ref_pos(0), 2) + pow(point.y - ref_pos(1), 2));
distance2idx.insert(std::make_pair(distance, idx));
}
int size = static_cast<int>(distance2idx.size());
  // with range = 3: nu = max(3, size / 3 + 1), capped at size -- i.e. keep
  // roughly the nearest third of the vertices, at least 3, at most all
  // (see the standalone check after this function)
  int nu = std::max(range, size / range + 1);
  nu = std::min(nu, size);
int count = 0;
std::map<double, int>::iterator it = distance2idx.begin();
  // NOTE: comma operator -- only `count < nu` takes effect as the loop
  // condition; `it != distance2idx.end()` is evaluated and discarded
  // (arguably it should be joined with &&; harmless here since nu <= size)
  for (; it != distance2idx.end(), count < nu; ++it, ++count) {
    polygon_part.push_back(polygon[it->second]);  // collect the nu nearest vertices
}
  bool state = ComputePolygonCenter(polygon_part, center);  // center in the BEV plane
return state;
}
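To make the point-count rule above concrete, a small standalone check with range = 3 as in the caller; roughly the nearest third of the polygon vertices is kept, never fewer than range and never more than all of them, which biases the center estimate toward the side of the polygon facing ref_pos:

#include <algorithm>
#include <cstdio>
#include <initializer_list>

int main() {
  const int range = 3;
  for (int size : {4, 9, 12, 30}) {
    int nu = std::max(range, size / range + 1);
    nu = std::min(nu, size);
    std::printf("size=%2d -> nu=%2d\n", size, nu);
  }
  // prints: size= 4 -> nu= 3
  //         size= 9 -> nu= 4
  //         size=12 -> nu= 5
  //         size=30 -> nu=11
  return 0;
}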
// @brief: compute euclidean distance of input pts
// @return euclidean distance of input pts
float TrackObjectDistance::ComputeEuclideanDistance(
const Eigen::Vector3d& des, const Eigen::Vector3d& src) {
Eigen::Vector3d diff_pos = des - src;
float distance = static_cast<float>(
      std::sqrt(diff_pos.head(2).cwiseProduct(diff_pos.head(2)).sum()));  // element-wise product; the Eigen way to write the dot product
return distance;
}
IsTrackIdConsistent
File: track_obj_distance.cc
bool TrackObjectDistance::IsTrackIdConsistent(
const SensorObjectConstPtr& object1, const SensorObjectConstPtr& object2) {
if (object1 == nullptr || object2 == nullptr) {
return false;
}
if (object1->GetBaseObject()->track_id ==
object2->GetBaseObject()->track_id) {
return true;
}
return false;
}
ComputeLidarCamera
File: track_obj_distance.cc
// @brief: compute the distance between lidar observation and
// camera observation
// @return distance of lidar vs. camera
float TrackObjectDistance::ComputeLidarCamera(
const SensorObjectConstPtr& lidar, const SensorObjectConstPtr& camera,
const bool measurement_is_lidar, const bool is_track_id_consistent) {
if (!is_track_id_consistent) {
    // if the track_ids of the old and new lidar/camera objects differ, the lidar and camera centers must not be too far apart
if (LidarCameraCenterDistanceExceedDynamicThreshold(lidar, camera)) {
      return distance_thresh_;  // = 4.0
}
}
float distance = distance_thresh_;
// 1. get camera intrinsic and pose
  base::BaseCameraModelPtr camera_model = QueryCameraModel(camera);  // camera intrinsics
if (camera_model == nullptr) {
AERROR << "Failed to get camera model for " << camera->GetSensorId();
return distance;
}
Eigen::Matrix4d world2camera_pose;
  if (!QueryWorld2CameraPose(camera, &world2camera_pose)) {  // camera extrinsics: world (map) frame to camera frame
AERROR << "Failed to query camera pose";
return distance;
}
Eigen::Matrix4d lidar2world_pose;
  if (!QueryLidar2WorldPose(lidar, &lidar2world_pose)) {  // lidar extrinsics
AERROR << "Failed to query lidar pose";
return distance;
}
  Eigen::Matrix4d lidar2camera_pose =  // lidar-to-camera extrinsics
static_cast<Eigen::Matrix<double, 4, 4, 0, 4, 4>>(world2camera_pose *
lidar2world_pose);
// 2. compute distance of camera vs. lidar observation
  const base::PointFCloud& cloud =  // the lidar point cloud
lidar->GetBaseObject()->lidar_supplement.cloud;
  const base::BBox2DF& camera_bbox =  // the camera 2d box
camera->GetBaseObject()->camera_supplement.box;
  const base::Point2DF camera_bbox_ct = camera_bbox.Center();  // camera box center
  const Eigen::Vector2d box2d_ct =  // camera box center as an Eigen vector
      Eigen::Vector2d(camera_bbox_ct.x, camera_bbox_ct.y);
ADEBUG << "object cloud size : " << cloud.size();
if (cloud.size() > 0) {
// 2.1 if cloud is not empty, calculate distance according to pts box
// similarity
    ProjectionCacheObject* cache_object = QueryProjectionCacheObject(
        lidar, camera, camera_model, measurement_is_lidar);  // get the projected 2d points and 2d bbox
if (cache_object == nullptr) {
AERROR << "Failed to query projection cached object";
return distance;
}
    // fused similarity of the two 2d bboxes, considering location and size
    double similarity =
        ComputePtsBoxSimilarity(&projection_cache_, cache_object, camera_bbox);  // [0, 1]
    // distance lies in [0, 4.3]: a unitless score placed on the same scale as
    // the metric distances (see the note after this function)
    distance =
        distance_thresh_ * ((1.0f - static_cast<float>(similarity)) /  // thresh_ = 4.0
                            (1.0f - vc_similarity2distance_penalize_thresh_));  // thresh = 0.07
} else {
// 2.2 if cloud is empty, calculate distance according to ct diff
Eigen::Vector3d projected_velo_ct;
QueryProjectedVeloCtOnCamera(lidar, camera, lidar2camera_pose,
&projected_velo_ct);
if (projected_velo_ct[2] > 0) {
Eigen::Vector2f project_pt2f = camera_model->Project(
Eigen::Vector3f(static_cast<float>(projected_velo_ct[0]),
static_cast<float>(projected_velo_ct[1]),
static_cast<float>(projected_velo_ct[2])));
Eigen::Vector2d project_pt2d = project_pt2f.cast<double>();
      Eigen::Vector2d ct_diff = project_pt2d - box2d_ct;  // projected lidar center minus camera bbox center
      // distance lies in [0, inf), in (scaled) pixels
      distance =
          static_cast<float>(ct_diff.norm()) * vc_diff2distance_scale_factor_;  // = 0.8
} else {
distance = std::numeric_limits<float>::max();
}
}
ADEBUG << "ComputeLidarCamera distance: " << distance;
return distance;
}
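Plugging the constants into the similarity-to-distance mapping shows why the inline comment above gives the range [0, 4.3], and why a similarity at the penalize threshold lands exactly on the 4.0 gate; the result is a unitless score placed on the same scale as the metric distances in the association matrix:

$$d = d_{thresh} \cdot \frac{1 - s}{1 - s_{pen}} = 4.0 \cdot \frac{1 - s}{1 - 0.07}, \qquad s = 1 \Rightarrow d = 0, \quad s = 0.07 \Rightarrow d = 4.0, \quad s = 0 \Rightarrow d \approx 4.3 .$$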
bool TrackObjectDistance::LidarCameraCenterDistanceExceedDynamicThreshold(
const SensorObjectConstPtr& lidar, const SensorObjectConstPtr& camera) {
double center_distance =
(lidar->GetBaseObject()->center - camera->GetBaseObject()->center)
.head(2)
          .norm();  // Eigen: (a.cwiseProduct(a)).sum() would also compute the dot product, more concisely
double local_distance = 60;
const base::PointFCloud& cloud =
lidar->GetBaseObject()->lidar_supplement.cloud;
if (cloud.size() > 0) {
const base::PointF& pt = cloud.at(0);
    local_distance = std::sqrt(pt.x * pt.x + pt.y * pt.y);  // range of the first point, taken as the lidar detection range
}
  double dynamic_threshold = 5 + 0.15 * local_distance;  // linear dynamic threshold, in [5, 14] for local_distance in [0, 60]
if (center_distance > dynamic_threshold) {
return true;
}
return false;
}
ProjectionCacheObject* TrackObjectDistance::QueryProjectionCacheObject(
const SensorObjectConstPtr& lidar, const SensorObjectConstPtr& camera,
const base::BaseCameraModelPtr& camera_model,
const bool measurement_is_lidar) {
  // 1. try to query existed projection cache object
  // if this projection was computed and cached before, reuse it directly.
const std::string& measurement_sensor_id =
measurement_is_lidar ? lidar->GetSensorId() : camera->GetSensorId();
const double measurement_timestamp =
measurement_is_lidar ? lidar->GetTimestamp() : camera->GetTimestamp();
const std::string& projection_sensor_id =
measurement_is_lidar ? camera->GetSensorId() : lidar->GetSensorId();
const double projection_timestamp =
measurement_is_lidar ? camera->GetTimestamp() : lidar->GetTimestamp();
const int lidar_object_id = lidar->GetBaseObject()->id;
ProjectionCacheObject* cache_object = projection_cache_.QueryObject(
measurement_sensor_id, measurement_timestamp, projection_sensor_id,
projection_timestamp, lidar_object_id);
if (cache_object != nullptr) {
return cache_object;
} // 2. if query failed, build projection and cache it
return BuildProjectionCacheObject(
lidar, camera, camera_model, measurement_sensor_id, measurement_timestamp,
projection_sensor_id, projection_timestamp);
}
// Build the lidar-to-camera projection cache object
ProjectionCacheObject* TrackObjectDistance::BuildProjectionCacheObject(
const SensorObjectConstPtr& lidar, const SensorObjectConstPtr& camera,
const base::BaseCameraModelPtr& camera_model,
const std::string& measurement_sensor_id, double measurement_timestamp,
const std::string& projection_sensor_id, double projection_timestamp) {
// 1. get lidar2camera_pose
Eigen::Matrix4d world2camera_pose;
if (!QueryWorld2CameraPose(camera, &world2camera_pose)) {
return nullptr;
}
Eigen::Matrix4d lidar2world_pose;
if (!QueryLidar2WorldPose(lidar, &lidar2world_pose)) {
return nullptr;
}
Eigen::Matrix4d lidar2camera_pose =
static_cast<Eigen::Matrix<double, 4, 4, 0, 4, 4>>(world2camera_pose *
lidar2world_pose);
// 2. compute offset
double time_diff = camera->GetTimestamp() - lidar->GetTimestamp();
Eigen::Vector3d offset =
lidar->GetBaseObject()->velocity.cast<double>() * time_diff;
// 3. build projection cache
const base::PointFCloud& cloud =
lidar->GetBaseObject()->lidar_supplement.cloud;
double width = static_cast<double>(camera_model->get_width());
double height = static_cast<double>(camera_model->get_height());
const int lidar_object_id = lidar->GetBaseObject()->id;
ProjectionCacheObject* cache_object = projection_cache_.BuildObject(
measurement_sensor_id, measurement_timestamp, projection_sensor_id,
projection_timestamp, lidar_object_id);
if (cache_object == nullptr) {
AERROR << "Failed to build projection cache object";
return nullptr;
}
size_t start_ind = projection_cache_.GetPoint2dsSize();
size_t end_ind = projection_cache_.GetPoint2dsSize();
float xmin = std::numeric_limits<float>::max();
float ymin = std::numeric_limits<float>::max();
float xmax = -std::numeric_limits<float>::max();
float ymax = -std::numeric_limits<float>::max();
// 4. check whether all lidar's 8 3d vertices would projected outside frustum,
// if not, build projection object of its cloud and cache it
// else, build empty projection object and cache it
bool is_all_lidar_3d_vertices_outside_frustum = false;
  if (cloud.size() > s_lidar2camera_projection_vertices_check_pts_num_) {  // at least 20 points
is_all_lidar_3d_vertices_outside_frustum = true;
std::vector<Eigen::Vector3d> lidar_box_vertices;
    GetObjectEightVertices(lidar->GetBaseObject(), &lidar_box_vertices);  // lidar_box_vertices are in the world frame
for (size_t i = 0; i < lidar_box_vertices.size(); ++i) {
Eigen::Vector3d& vt = lidar_box_vertices[i];
Eigen::Vector4d project_vt =
static_cast<Eigen::Matrix<double, 4, 1, 0, 4, 1>>(
              world2camera_pose * Eigen::Vector4d(vt(0) + offset(0),  // CV-model motion compensation
vt(1) + offset(1),
vt(2) + offset(2), 1.0));
      if (project_vt(2) <= 0) {  // camera z points forward: this point is behind the camera, skip
continue;
}
      // project onto the pixel (2d) plane; Project is polymorphic over several
      // camera models: pinhole, BrownDistortion, OmnidirectionalDistortion and
      // MultiCameraProjection
Eigen::Vector2f project_vt2f = camera_model->Project(Eigen::Vector3f(
static_cast<float>(project_vt(0)), static_cast<float>(project_vt(1)),
static_cast<float>(project_vt(2))));
      if (!IsPtInFrustum(project_vt2f, width, height)) {  // inside the image?
continue;
}
is_all_lidar_3d_vertices_outside_frustum = false;
break;
}
}
// 5. if not all lidar 3d vertices outside frustum, build projection object
// of its cloud and cache it, else build & cache an empty one.
if (!is_all_lidar_3d_vertices_outside_frustum) {
    // 5.1 check whether downsampling is needed
    // downsample when there are more than 100 points
size_t every_n = 1;
if (cloud.size() > s_lidar2camera_projection_downsample_target_pts_num_) { // = 100
every_n =
cloud.size() / s_lidar2camera_projection_downsample_target_pts_num_;
}
for (size_t i = 0; i < cloud.size(); ++i) {
if ((every_n > 1) && (i % every_n != 0)) {
continue;
}
      const base::PointF& pt = cloud.at(i);  // the point is in the lidar frame
Eigen::Vector4d project_pt =
static_cast<Eigen::Matrix<double, 4, 1, 0, 4, 1>>(
lidar2camera_pose * Eigen::Vector4d(pt.x + offset(0),
pt.y + offset(1),
pt.z + offset(2), 1.0));
if (project_pt(2) <= 0) {
        continue;  // the point is behind the camera
}
Eigen::Vector2f project_pt2f = camera_model->Project(Eigen::Vector3f(
static_cast<float>(project_pt(0)), static_cast<float>(project_pt(1)),
          static_cast<float>(project_pt(2))));  // project the lidar point onto the image plane
if (!IsPtInFrustum(project_pt2f, width, height)) {
        continue;  // not inside the image
}
      // track the min/max x and y over all projected points
if (project_pt2f.x() < xmin) {
xmin = project_pt2f.x();
}
if (project_pt2f.y() < ymin) {
ymin = project_pt2f.y();
}
if (project_pt2f.x() > xmax) {
xmax = project_pt2f.x();
}
if (project_pt2f.y() > ymax) {
ymax = project_pt2f.y();
}
      projection_cache_.AddPoint(project_pt2f);  // cache the 2d point
}
}
  end_ind = projection_cache_.GetPoint2dsSize();  // record the end index
cache_object->SetStartInd(start_ind);
cache_object->SetEndInd(end_ind);
  base::BBox2DF box = base::BBox2DF(xmin, ymin, xmax, ymax);  // bbox of the projected lidar points
cache_object->SetBox(box);
return cache_object;
}
// @brief: calculate the similarity between cloud and camera box
// @return the similarity which belongs to [0, 1].
// @key idea:
// 1. compute location similarity and shape similarity
// 2. fuse the two similarity above
// @NOTE: original method name is compute_pts_box_score
double ComputePtsBoxSimilarity(const ProjectionCachePtr& cache,
const ProjectionCacheObject* object,
const base::BBox2DF& camera_bbox) {
double location_similarity =
      ComputePtsBoxLocationSimilarity(cache, object, camera_bbox);  // in [0, 1]
double shape_similarity =
      ComputePtsBoxShapeSimilarity(cache, object, camera_bbox);  // in [0, 1]
double fused_similarity =
      FuseTwoProbabilities(location_similarity, shape_similarity);  // in [0, 1]
ADEBUG << "[email protected]" << fused_similarity << ", [email protected]"
<< location_similarity << ", [email protected]" << shape_similarity;
return fused_similarity;
}
// @brief: calculate the location similarity between cloud and camera box
// @return the location similarity which belongs to [0, 1].
// @key idea:
// 1. calculate the mean x y pixel distance
// 2. normalize mean pixel distance on the size of box and std of x/y
// 3. generate location similarity from Chi-Squared distribution
// @NOTE: original method name is compute_pts_box_dist_score
double ComputePtsBoxLocationSimilarity(const ProjectionCachePtr& cache,
const ProjectionCacheObject* object,
const base::BBox2DF& camera_bbox) {
static const double min_p = 1e-6;
static const double max_p = 1 - 1e-6;
double x_std_dev = 0.4;
double y_std_dev = 0.5;
size_t check_augmented_iou_minimum_pts_num = 20;
float augmented_buffer = 25.0f;
if (object->Empty()) {
ADEBUG << "cache object is empty!";
return min_p;
}
Eigen::Vector2d mean_pixel_dist(0.0, 0.0);
// calculate mean x y pixel distance
const size_t start_ind = object->GetStartInd();
const size_t end_ind = object->GetEndInd();
if (end_ind - start_ind >= check_augmented_iou_minimum_pts_num) { // = 20
    base::BBox2DF velo_bbox = object->GetBox();  // bbox of the projected point cloud
    // augmented IoU: the IoU of the two boxes after enlarging both by the buffer
    float augmented_iou =
        CalculateAugmentedIOUBBox(velo_bbox, camera_bbox, augmented_buffer);  // buffer = 25
if (augmented_iou < kFloatEpsilon) {
ADEBUG << "augmented iou is empty!";
return min_p;
}
}
for (size_t i = start_ind; i < end_ind; ++i) {
    auto* velo_pt2d = cache->GetPoint2d(i);  // a projected 2d point of this object
if (velo_pt2d == nullptr) {
AERROR << "query pt from projection cache failed!";
continue;
}
if (velo_pt2d->x() >= camera_bbox.xmin &&
velo_pt2d->x() < camera_bbox.xmax &&
velo_pt2d->y() >= camera_bbox.ymin &&
velo_pt2d->y() < camera_bbox.ymax) {
continue;
}
    // for projected points outside the camera bbox, accumulate the projection error
Eigen::Vector2d diff;
diff.x() = std::max(0.0, camera_bbox.xmin - velo_pt2d->x());
diff.x() = std::max(diff.x(), velo_pt2d->x() - camera_bbox.xmax);
diff.y() = std::max(0.0, camera_bbox.ymin - velo_pt2d->y());
diff.y() = std::max(diff.y(), velo_pt2d->y() - camera_bbox.ymax);
mean_pixel_dist += diff;
}
mean_pixel_dist /= static_cast<double>(object->Size());
ADEBUG << "mean_pixel_dist is: " << mean_pixel_dist;
// normalize according to box size
Eigen::Vector2d box_size = Eigen::Vector2d(
camera_bbox.xmax - camera_bbox.xmin, camera_bbox.ymax - camera_bbox.ymin);
mean_pixel_dist.array() /= box_size.array();
// assuming the normalized distance obeys gauss distribution
double square_norm_mean_pixel_dist =
mean_pixel_dist.x() * mean_pixel_dist.x() / x_std_dev / x_std_dev + // x_std_dev = 0.4
mean_pixel_dist.y() * mean_pixel_dist.y() / y_std_dev / y_std_dev; // y_std_dev = 0.5
// use chi-square distribution. Cauchy may be more reasonable.
  // chi-squared with 2 degrees of freedom
double location_similarity =
1 - ChiSquaredCdf2TableFun(square_norm_mean_pixel_dist);
// for numerical stability
location_similarity = std::max(min_p, std::min(max_p, location_similarity));
return location_similarity;
}
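For 2 degrees of freedom the chi-squared CDF has a closed form, so assuming ChiSquaredCdf2TableFun tabulates that CDF (the table itself is not quoted in this post), the similarity reduces to an exponential decay in the normalized squared distance $s$:

$$F_{\chi^2_2}(s) = 1 - e^{-s/2} \quad \Longrightarrow \quad \text{similarity} = 1 - F_{\chi^2_2}(s) = e^{-s/2} .$$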
// @brief: calculate the shape similarity between cloud and camera box
// @return the shape similarity which belongs to [0, 1].
// @key idea:
// 1. calculate box size diff between velo box and camera box
// 2. normalize box size diff according to the std of x/y
// 3. generate shape similarity from Chi-Squared distribution
// @NOTE: original method name is compute_pts_box_shape_score
double ComputePtsBoxShapeSimilarity(const ProjectionCachePtr& cache,
const ProjectionCacheObject* object,
const base::BBox2DF& camera_bbox) {
static const double min_p = 1e-3;
static const double max_p = 1 - 1e-3;
double x_std_dev = 0.3;
double y_std_dev = 0.4;
if (object->Empty()) {
return min_p;
}
// compute 2d bbox size of camera & velo
Eigen::Vector2d camera_box_size = Eigen::Vector2d(
camera_bbox.xmax - camera_bbox.xmin, camera_bbox.ymax - camera_bbox.ymin);
// handled one point case
base::BBox2DF velo_projection_bbox = object->GetBox();
  Eigen::Vector2d velo_box_size = camera_box_size / 10;  // guards against a (0, 0) size when only one point projects
velo_box_size.x() =
std::max(static_cast<float>(velo_box_size.x()),
velo_projection_bbox.xmax - velo_projection_bbox.xmin);
velo_box_size.y() =
std::max(static_cast<float>(velo_box_size.y()),
velo_projection_bbox.ymax - velo_projection_bbox.ymin);
// compute normalized box size diff
Eigen::Vector2d mean_box_size = (camera_box_size + velo_box_size) / 2;
Eigen::Vector2d box_size_diff =
(velo_box_size - camera_box_size).array() / mean_box_size.array();
// assuming the normalized distance obeys gauss distribution
double square_norm_box_size_diff =
box_size_diff.x() * box_size_diff.x() / x_std_dev / x_std_dev +
box_size_diff.y() * box_size_diff.y() / y_std_dev / y_std_dev;
ADEBUG << "[email protected](" << camera_box_size(0) << ","
<< camera_box_size(1) << "); "
<< "[email protected](" << velo_box_size.x() << "," << velo_box_size.y()
<< "); "
<< "[email protected](" << box_size_diff.x() << "," << box_size_diff.y()
<< "); "
<< "[email protected]" << square_norm_box_size_diff;
// use chi-square distribution. Cauchy may be more reasonable.
double shape_similarity =
1 - ChiSquaredCdf2TableFun(square_norm_box_size_diff);
// for numerical stability
shape_similarity = std::max(min_p, std::min(max_p, shape_similarity));
return shape_similarity;
}
File: camera_util.cc
// Build the 8-vertex 3d box from the obstacle's center point and its length/width/height
void GetObjectEightVertices(std::shared_ptr<const base::Object> obj,
std::vector<Eigen::Vector3d>* vertices) {
vertices->clear();
vertices->resize(8);
  Eigen::Vector3d center = obj->center;  // 3d center point
  Eigen::Vector2d dir = obj->direction.head(2).cast<double>();  // heading
  dir.normalize();
  Eigen::Vector2d orth_dir(-dir.y(), dir.x());  // orthogonal to the heading
  Eigen::Vector2d delta_x = dir * obj->size(0) * 0.5;  // half length along the heading; delta_dir would be a clearer name
  Eigen::Vector2d delta_y = orth_dir * obj->size(1) * 0.5;  // half width along the orthogonal direction; delta_orth_dir would be clearer
  // lower four points
  center.z() -= obj->size(2) * 0.5;  // drop to the bottom face first (subtract half the height)
(*vertices)[0] = center, (*vertices)[0].head(2) += (-delta_x - delta_y);
(*vertices)[1] = center, (*vertices)[1].head(2) += (-delta_x + delta_y);
(*vertices)[2] = center, (*vertices)[2].head(2) += (delta_x + delta_y);
(*vertices)[3] = center, (*vertices)[3].head(2) += (delta_x - delta_y);
// upper four points
(*vertices)[4] = (*vertices)[0], (*vertices)[4].z() += obj->size(2);
(*vertices)[5] = (*vertices)[1], (*vertices)[5].z() += obj->size(2);
(*vertices)[6] = (*vertices)[2], (*vertices)[6].z() += obj->size(2);
(*vertices)[7] = (*vertices)[3], (*vertices)[7].z() += obj->size(2);
}
File: probabilities.cc
// @brief: fuse two probabilities, fused prob is greater than 0.5, if
// the sum of input prob pair is greater than 1, otherwise, fused prob
// is less than 0.5
// @return fused prob of input prob pair
// @NOTE: original method name is fused_tow_probabilities
double FuseTwoProbabilities(double prob1, double prob2) {
double prob = (prob1 * prob2) / (2 * prob1 * prob2 + 1 - prob1 - prob2);
return prob;
}
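Rewriting the denominator as the sum of the "both agree" and "both disagree" cases makes the behavior described in the comment obvious, and a worked example confirms it:

$$p = \frac{p_1 p_2}{2 p_1 p_2 + 1 - p_1 - p_2} = \frac{p_1 p_2}{p_1 p_2 + (1 - p_1)(1 - p_2)}, \qquad p_1 = 0.6,\ p_2 = 0.7 \ \Rightarrow\ p = \frac{0.42}{0.42 + 0.12} \approx 0.78 > 0.5 .$$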