
Sensor Fusion - SFND_3D_Object_Tracking Environment Setup


1. Environment:
cmake >= 2.8
All OSes: click here for installation instructions
make >= 4.1 (Linux, Mac), 3.81 (Windows)
Linux: make is installed by default on most Linux distros
Mac: install Xcode command line tools to get make
Windows: Click here for installation instructions
Git LFS
Weight files are handled using LFS
OpenCV >= 4.1
This must be compiled from source using the -D OPENCV_ENABLE_NONFREE=ON cmake flag in order to test the SIFT and SURF detectors (see the verification sketch after this list).
The OpenCV 4.1.0 source code can be found here
gcc/g++ >= 5.4
Linux: gcc / g++ is installed by default on most Linux distros
Mac: same deal as make - install Xcode command line tools
Windows: recommend using MinGW
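
After building OpenCV, a minimal sketch like the one below can verify that the nonfree modules were actually compiled in. It assumes opencv_contrib's xfeatures2d module is installed; the check itself is an illustration, not part of the project:

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/xfeatures2d.hpp> // provided by opencv_contrib

int main()
{
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;
    try
    {
        // SURF::create() throws if OpenCV was built without -D OPENCV_ENABLE_NONFREE=ON
        cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create();
        std::cout << "SURF available, nonfree modules are enabled" << std::endl;
    }
    catch (const cv::Exception &e)
    {
        std::cout << "nonfree modules are disabled: " << e.what() << std::endl;
    }
    return 0;
}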
2. Problems encountered during setup
When setting up opencv 4.1.0, configure opencv_contrib 4.1.0 along with it. I followed this cmake-gui tutorial:
Installing and configuring opencv 3.4.3 + opencv_contrib 3.4.3 on Ubuntu 16.04
The versions differ, but the configuration process is the same; I have verified this myself. I hit a few pitfalls along the way, and since I did not keep screenshots of the errors, I will simply describe them here.
(1) When configuring in the cmake-gui, uncheck all OpenCL-related options. If you leave them enabled, the build fails with a "useOpenCL is not a member of cv::ocl" error, which is hard to fix after configuration, so it is simplest to disable everything OpenCL-related up front.
(2) During compilation you will get errors about missing files. The missing files can be downloaded here:
boostdesc_bgm.i and related files (.tar.gz)
After downloading, place them in the opencv-4.1.0/opencv_contrib-4.1.0/modules/xfeatures2d/test folder.
(3) Header file modification:
The modified part:

#include "test_detectors_regression.impl.hpp"
#include "test_descriptors_regression.impl.hpp"
#include <opencv2/xfeatures2d.hpp> 
#include "opencv2/calib3d.hpp"
using namespace cv::xfeatures2d;

The error here essentially says that the file test_detectors_regression.impl.hpp cannot be found; changing how the headers are included, as shown above, fixes it.
The reason for adding using namespace cv::xfeatures2d is that some functions below are defined in this namespace; without it, you get errors that a parameter of some function is not declared or defined, or that the function itself cannot be found.
If running make again still reports this kind of error for other files, modify the header section of those files in the same way.
(4) After finishing opencv 4.1.0, you can move on to setting up the project itself:

Clone this repo.
Make a build directory in the top level project directory: mkdir build && cd build
Compile: cmake .. && make
Run it: ./3D_object_tracking.

While setting up the project, some errors may come up:
The details are as follows:

#1 : LOAD IMAGE INTO BUFFER done
terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.1.0) /home/wyh/Downloads/opencv-4.1.0/modules/dnn/src/darknet/darknet_importer.cpp:207: error: (-212:Parsing error) Failed to parse NetParameter file: /home/wyh/Downloads/SFND_3D_Object_Tracking/images/dat/yolo/yolov3.cfg in function 'readNetFromDarknet'

Aborted (core dumped)

This error was confusing at first. Tracing it back to its source shows that an input is wrong, but the bad input is not in the cfg file; it is the yolov3.weights file being loaded. Why is it wrong? Check the size of yolov3.weights in the project's dat folder: the correct file is over 200 MB, while this one is only a tiny fraction of that... After re-downloading yolov3.weights from the official site and replacing it, the project builds and runs normally.
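
For reference, this is roughly how such a network is loaded through OpenCV's DNN module. The sketch below uses placeholder paths and only illustrates the failure mode; it is not the project's actual loading code:

#include <iostream>
#include <opencv2/dnn.hpp>

int main()
{
    // placeholder paths; adjust them to where the repo keeps its YOLO files
    std::string cfg = "../images/dat/yolo/yolov3.cfg";
    std::string weights = "../images/dat/yolo/yolov3.weights"; // roughly 240 MB when downloaded correctly

    try
    {
        cv::dnn::Net net = cv::dnn::readNetFromDarknet(cfg, weights);
        std::cout << "YOLOv3 loaded" << std::endl;
    }
    catch (const cv::Exception &e)
    {
        // a truncated or corrupt weights file surfaces as the parsing error shown above
        std::cout << "failed to load the network: " << e.what() << std::endl;
    }
    return 0;
}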

Code:
1. Matching 3D objects
Implement the method matchBoundingBoxes, which takes both the previous and the current data frame as input and outputs the IDs of the matched regions of interest (i.e. the boxID property). Each output match must be the pair of bounding boxes with the highest number of keypoint correspondences.

void matchBoundingBoxes(std::vector<cv::DMatch> &matches, std::map<int, int> &bbBestMatches, DataFrame &prevFrame, DataFrame &currFrame)
{    
    // NOTE: After calling a cv::DescriptorMatcher::match function, each DMatch
    // contains two keypoint indices, queryIdx and trainIdx, based on the order of image arguments to match.
    // https://docs.opencv.org/4.1.0/db/d39/classcv_1_1DescriptorMatcher.html#a0f046f47b68ec7074391e1e85c750cba
    // prevFrame.keypoints is indexed by queryIdx
    // currFrame.keypoints is indexed by trainIdx
    int p = prevFrame.boundingBoxes.size();
    int c = currFrame.boundingBoxes.size();
    // correspondence counts for every (prev box, curr box) pair;
    // use a vector instead of a variable-length array, which is not standard C++
    std::vector<std::vector<int>> pt_counts(p, std::vector<int>(c, 0));
    for (auto it = matches.begin(); it != matches.end(); ++it)
    {
        cv::KeyPoint query = prevFrame.keypoints[it->queryIdx];
        auto query_pt = cv::Point(query.pt.x, query.pt.y);
        bool query_found = false;

        cv::KeyPoint train = currFrame.keypoints[it->trainIdx];
        auto train_pt = cv::Point(train.pt.x, train.pt.y);
        bool train_found = false;

        std::vector<int> query_id, train_id;
        for (int i = 0; i < p; i++) 
        {
            if (prevFrame.boundingBoxes[i].roi.contains(query_pt))            
             {
                query_found = true;
                query_id.push_back(i);
             }
        }
        for (int i = 0; i < c; i++) 
        {
            if (currFrame.boundingBoxes[i].roi.contains(train_pt))            
            {
                train_found = true;
                train_id.push_back(i);
            }
        }

        if (query_found && train_found) 
        {
            for (auto id_prev: query_id)
                for (auto id_curr: train_id)
                     pt_counts[id_prev][id_curr] += 1;
        }
    }
   
    for (int i = 0; i < p; i++)
    {
        int max_count = 0;
        int id_max = 0;
        for (int j = 0; j < c; j++)
        {
            if (pt_counts[i][j] > max_count)
            {
                max_count = pt_counts[i][j];
                id_max = j;
            }
        }
        // only record a best match if at least one correspondence supports it
        if (max_count > 0)
            bbBestMatches[i] = id_max;
    }
}

2. Computing the lidar-based TTC
Compute the time-to-collision in seconds for all matched 3D objects, using only lidar measurements from the matched bounding boxes between the current and the previous frame.
To handle outlier lidar points in a statistically robust way and avoid gross estimation errors, only lidar points within the ego lane are considered, and their mean distance is used to obtain a stable output.
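
The function below implements a constant-velocity model. Writing $d_0$ for the mean x-distance in the previous frame, $d_1$ for the current frame, and $\Delta t = 1/\text{frameRate}$, the relative velocity and TTC follow as

v = \frac{d_0 - d_1}{\Delta t}, \qquad TTC = \frac{d_1}{v} = \frac{d_1 \, \Delta t}{d_0 - d_1}

which corresponds to the last line of the function, TTC = mean_cx * dt / (mean_px - mean_cx).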

void computeTTCLidar(std::vector<LidarPoint> &lidarPointsPrev,
                     std::vector<LidarPoint> &lidarPointsCurr, double frameRate, double &TTC)
{
    double laneWidth = 4.0; // assumed ego lane width in meters
    // only consider lidar points within the ego lane
    std::vector<float> ppx;
    std::vector<float> pcx;
    for (auto it = lidarPointsPrev.begin(); it != lidarPointsPrev.end(); ++it)
    {
        if (std::abs(it->y) < laneWidth / 2.0)
            ppx.push_back(it->x);
    }
    for (auto it = lidarPointsCurr.begin(); it != lidarPointsCurr.end(); ++it)
    {
        if (std::abs(it->y) < laneWidth / 2.0)
            pcx.push_back(it->x);
    }

    float mean_px = 0.0f, mean_cx = 0.0f; // running sums, then means
    int p_size = ppx.size();
    int c_size = pcx.size();
    if (p_size > 0 && c_size > 0)
    {
        for (int i = 0; i < p_size; i++)
        {
            mean_px += ppx[i];
        }

        for (int j = 0; j < c_size; j++)
        {
            mean_cx += pcx[j];
        }
    }
    else
    {
        TTC = NAN;
        return;
    }

    mean_px = mean_px / p_size;
    mean_cx = mean_cx / c_size;
    std::cout << "lidar_mean_px:" << mean_px << std::endl;
    std::cout << "lidar_mean_cx:" << mean_cx << std::endl;

    double dt = 1.0 / frameRate;
    // constant-velocity model; yields +/-inf if the mean distance did not change
    TTC = mean_cx * dt / (mean_px - mean_cx);
}

3. Associating keypoint correspondences with bounding boxes
Prepare the camera-based TTC computation by associating keypoint correspondences with the bounding boxes that enclose them. All matches that satisfy this condition must be added to a vector in the respective bounding box.

// associate a given bounding box with the keypoints it contains
void clusterKptMatchesWithROI(BoundingBox &boundingBox_c, BoundingBox &boundingBox_p, std::vector<cv::KeyPoint> &kptsPrev, std::vector<cv::KeyPoint> &kptsCurr, std::vector<cv::DMatch> &kptMatches)
{
    double dist_mean = 0;
    std::vector<cv::DMatch>  kptMatches_roi;

    float shrinkFactor = 0.15;
    cv::Rect smallerBox_c, smallerBox_p;
    // shrink current bounding box slightly to avoid having too many outlier points around the edges
    smallerBox_c.x = boundingBox_c.roi.x + shrinkFactor * boundingBox_c.roi.width / 2.0;
    smallerBox_c.y = boundingBox_c.roi.y + shrinkFactor * boundingBox_c.roi.height / 2.0;
    smallerBox_c.width = boundingBox_c.roi.width * (1 - shrinkFactor);
    smallerBox_c.height = boundingBox_c.roi.height * (1 - shrinkFactor);

    // shrink previous bounding box slightly to avoid having too many outlier points around the edges
    smallerBox_p.x = boundingBox_p.roi.x + shrinkFactor * boundingBox_p.roi.width / 2.0;
    smallerBox_p.y = boundingBox_p.roi.y + shrinkFactor * boundingBox_p.roi.height / 2.0;
    smallerBox_p.width = boundingBox_p.roi.width * (1 - shrinkFactor);
    smallerBox_p.height = boundingBox_p.roi.height * (1 - shrinkFactor);

    // collect the matches that fall within both the current and the previous bounding box
    for (auto it = kptMatches.begin(); it != kptMatches.end(); ++it)
    {
        cv::KeyPoint train = kptsCurr.at(it->trainIdx);
        auto train_pt = cv::Point(train.pt.x, train.pt.y);

        cv::KeyPoint query = kptsPrev.at(it->queryIdx); 
        auto query_pt = cv::Point(query.pt.x, query.pt.y);

        // check whether the point lies within both shrunken bounding boxes
        if (smallerBox_c.contains(train_pt) && smallerBox_p.contains(query_pt)) 
            kptMatches_roi.push_back(*it);           
    }

    // calculate the mean distance of all the matches within the bounding box
    for (auto it = kptMatches_roi.begin(); it != kptMatches_roi.end(); ++it)
    {
        dist_mean += cv::norm(kptsCurr.at(it->trainIdx).pt - kptsPrev.at(it->queryIdx).pt); 
    }
    if(kptMatches_roi.size() > 0)
        dist_mean = dist_mean/kptMatches_roi.size();
    else return;

    // keep only the matches with distance < dist_mean * 1.5
    double threshold = dist_mean * 1.5;
    for  (auto it = kptMatches_roi.begin(); it != kptMatches_roi.end(); ++it)
    {
       float dist = cv::norm(kptsCurr.at(it->trainIdx).pt - kptsPrev.at(it->queryIdx).pt);
       if (dist < threshold)
           boundingBox_c.kptMatches.push_back(*it);
    }

    std::cout<<"curr_bbx_matches_size: "<<boundingBox_c.kptMatches.size()<<std::endl;
}

4. Computing the camera-based TTC
Compute the time-to-collision in seconds for all matched 3D objects, using only keypoint correspondences from the matched bounding boxes between the current and the previous frame.
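
The function below uses the median ratio of keypoint distances between frames. Under the pinhole projection model, a distance $h$ between two keypoints on the vehicle scales with $1/d$, so the ratio $r = h_1/h_0$ equals $d_0/d_1$; combined with the constant-velocity model this gives

TTC = \frac{d_1}{v} = \frac{d_1 \, \Delta t}{d_0 - d_1} = \frac{\Delta t}{d_0/d_1 - 1} = \frac{-\Delta t}{1 - r}

which matches the final line, TTC = -dT / (1 - medDistRatio).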

// Compute time-to-collision (TTC) based on keypoint correspondences in successive images
void computeTTCCamera(std::vector<cv::KeyPoint> &kptsPrev, std::vector<cv::KeyPoint> &kptsCurr, 
                      std::vector<cv::DMatch> kptMatches, double frameRate, double &TTC, cv::Mat *visImg)
{
    std::vector<double> distRatios; // stores the distance ratios for all keypoints between curr. and prev. frame
    for (auto it1 = kptMatches.begin(); it1 != kptMatches.end() - 1; ++it1)
    {
        cv::KeyPoint kpOuterCurr = kptsCurr.at(it1->trainIdx);
        cv::KeyPoint kpOuterPrev = kptsPrev.at(it1->queryIdx);
       
        for (auto it2 = it1 + 1; it2 != kptMatches.end(); ++it2)
        {  
            double minDist = 100.0; // min. required distance
            cv::KeyPoint kpInnerCurr = kptsCurr.at(it2->trainIdx);
            cv::KeyPoint kpInnerPrev = kptsPrev.at(it2->queryIdx);
            // compute distances and distance ratios
            double distCurr = cv::norm(kpOuterCurr.pt - kpInnerCurr.pt);
            double distPrev = cv::norm(kpOuterPrev.pt - kpInnerPrev.pt);
            if (distPrev > std::numeric_limits<double>::epsilon() && distCurr >= minDist)
            { // avoid division by zero
                double distRatio = distCurr / distPrev;
                distRatios.push_back(distRatio);
            }
        }
    }  
    // only continue if list of distance ratios is not empty
    if (distRatios.size() == 0)
    {
        TTC = NAN;
        return;
    }

    std::sort(distRatios.begin(), distRatios.end());

    /* alternative: trimmed mean of the distance ratios (drop the lowest and highest 10%)
    int num_ratios = distRatios.size();
    int crop_head_tail = floor(distRatios.size() / 10.0);
    double medDistRatio = 0;
    for (auto it = distRatios.begin() + crop_head_tail; it != distRatios.end() - crop_head_tail; ++it)
    {
        medDistRatio += *it;
    }
    medDistRatio = medDistRatio / (num_ratios - 2 * crop_head_tail);
    */

    long medIndex = floor(distRatios.size() / 2.0);
    double medDistRatio = distRatios.size() % 2 == 0 ? (distRatios[medIndex - 1] + distRatios[medIndex]) / 2.0 : distRatios[medIndex]; // compute median dist. ratio to remove outlier influence

    double dT = 1 / frameRate;
    TTC = -dT / (1 - medDistRatio);
}

5. Performance evaluation 1
Find examples where the lidar-based TTC estimate seems implausible. Describe your observations and provide sound arguments for why you think this happens.
The lidar TTC estimate can be off because there are outliers and unstable points coming from the preceding vehicle's rear-view mirror, which need to be filtered out. Here a larger shrinkFactor = 0.2 is used to obtain more reliable and stable lidar points, which then gives more accurate results.
6. Performance evaluation 2
Run several detector/descriptor combinations and look at the differences in the TTC estimates. Find out which methods perform best, and also include some examples where the camera-based TTC estimation is way off. As with lidar, describe your observations again and investigate potential causes.
clusterKptMatchesWithROI can deliver a stable camera-based TTC when it produces a robust cluster of matches. When the results become unstable, poor keypoint matching is the likely cause.

As our best choices for detecting keypoints on vehicles, the TOP3 detector/descriptor combinations are:
SHITOMASI/BRISK
SHITOMASI/BRIEF
SHITOMASI/ORB
