
Recording Datasets with the MYNT EYE Camera

程序员文章站 2022-04-16 23:35:52

1 Purpose

Recording datasets under different conditions in different scenes is mainly intended to make system testing convenient. The recordings serve as a benchmark, providing the same test environment for different algorithms.

2 Method

2.1 SDK

First, install the MYNT EYE camera SDK.

Then use the sample program provided with the SDK to record a dataset.

cd ~/MYNT-EYE-S-SDK/samples/_output/bin
./record

While recording, the terminal prints the number of images and IMU samples saved so far.

By default, the program creates a dataset folder in the current directory to store the data. The dataset includes:

  • the left folder: images from the left camera, encoded as png
  • the disparity folder: disparity maps
  • the depth folder: depth maps
  • motion.txt: IMU data, in the format seq, flag, timestamp, accel_x, accel_y, accel_z, gyro_x, gyro_y, gyro_z, temperature
  • left and the other image folders also contain a stream.txt with per-frame information, in the format seq, frame_id, timestamp, exposure_time
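Since motion.txt and stream.txt are plain comma-separated text files with one header line, they can be parsed with the Python standard library alone. A minimal sketch, assuming only the motion.txt field layout listed above (the load_motion helper is hypothetical):

```python
import csv


def load_motion(path):
    """Parse motion.txt rows: seq, flag, timestamp, accel_xyz, gyro_xyz, temperature."""
    records = []
    with open(path) as f:
        # skipinitialspace handles the ", "-separated values written by the recorder
        reader = csv.reader(f, skipinitialspace=True)
        next(reader)  # skip the header line
        for row in reader:
            records.append({
                'seq': int(row[0]),
                'flag': int(row[1]),
                'timestamp': float(row[2]),
                'accel': [float(v) for v in row[3:6]],
                'gyro': [float(v) for v in row[6:9]],
                'temperature': float(row[9]),
            })
    return records
```

The same pattern works for stream.txt, whose columns are seq, frame_id, timestamp, exposure_time.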

To produce a dataset in KITTI, TUM, or EuRoC format, the code needs some modification. The changes should be small, mainly to paths and file naming.
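As a sketch of the kind of change involved, the script below copies the sequentially numbered left images to TUM-style timestamp file names using the timestamps recorded in stream.txt. The to_tum_names helper is hypothetical; the six-digit zero-padded source names match the IMAGE_FILENAME_WIDTH used in dataset.cc below.

```python
import csv
import os
import shutil


def to_tum_names(left_dir, out_dir):
    """Copy 000000.png, 000001.png, ... to <timestamp>.png using stream.txt."""
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    with open(os.path.join(left_dir, 'stream.txt')) as f:
        reader = csv.reader(f, skipinitialspace=True)
        next(reader)  # header: seq, frame_id, timestamp, exposure_time
        for row in reader:
            seq, timestamp = int(row[0]), row[2]
            src = os.path.join(left_dir, '%06d.png' % seq)
            if os.path.exists(src):
                shutil.copy(src, os.path.join(out_dir, timestamp + '.png'))
```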

2.2 ROS bag

2.2.1 Recording a bag

First, launch the camera.

source ~/MYNT-EYE-S-SDK/wrappers/ros/devel/setup.bash
roslaunch mynt_eye_ros_wrapper display.launch

Then record a bag containing the left image, right image, and IMU topics.

rosbag record /mynteye/left/image_raw /mynteye/right/image_raw /mynteye/imu/data_raw 

2.2.2 Extracting data from a bag file

If only a bag file was saved but you want the data in the layout produced by the method above, you can extract the images and IMU data from the bag file.

2.2.2.1 Extracting images

Open a terminal and extract the images from the bag file:

rosrun image_view extract_images _sec_per_frame:=0.01 image:=/mynteye/left/image_raw

Note: if the number of extracted images does not match the count reported by rosbag info, reduce the value of the _sec_per_frame parameter.

Open another terminal and play the bag:

rosbag play data.bag

Images extracted this way are named frame0000.jpg, frame0001.jpg, and so on in ascending order. To name each image by its timestamp instead, use the following script.

Create extract_images.py:

#!/usr/bin/env python
# coding:utf-8

# Extract images from a bag file, naming each image by its timestamp.

import os

import cv2
import rosbag
import rospy
from cv_bridge import CvBridge, CvBridgeError

# Output paths for the left/right images; created automatically if missing.
left_path = '/home/wb/MYNT-EYE-S-SDK/dataset/left/'
right_path = '/home/wb/MYNT-EYE-S-SDK/dataset/right/'


class ImageCreator(object):
    def __init__(self):
        self.bridge = CvBridge()
        for path in (left_path, right_path):
            if not os.path.isdir(path):
                os.makedirs(path)
        # Set the correct path to your bag file here.
        with rosbag.Bag('/home/wb/MYNT-EYE-S-SDK/2020-08-05-23-05-50.bag', 'r') as bag:
            for topic, msg, t in bag.read_messages():
                if topic == "/mynteye/left/image_raw":  # left image topic
                    try:
                        cv_image = self.bridge.imgmsg_to_cv2(msg, "bgr8")
                    except CvBridgeError as e:
                        print(e)
                        continue
                    # %.6f keeps 6 decimal places; adjust for the precision you need.
                    timestr = "%.6f" % msg.header.stamp.to_sec()
                    image_name = timestr + ".png"  # file name: timestamp.png
                    cv2.imwrite(left_path + image_name, cv_image)
                elif topic == "/mynteye/right/image_raw":  # right image topic
                    try:
                        cv_image = self.bridge.imgmsg_to_cv2(msg, "bgr8")
                    except CvBridgeError as e:
                        print(e)
                        continue
                    timestr = "%.6f" % msg.header.stamp.to_sec()
                    image_name = timestr + ".png"
                    cv2.imwrite(right_path + image_name, cv_image)


if __name__ == '__main__':
    try:
        image_creator = ImageCreator()
    except rospy.ROSInterruptException:
        pass

After setting the paths, run:

python extract_images.py

2.2.2.2 Extracting IMU data

rostopic echo -b 2020-08-05-23-05-50.bag -p /mynteye/imu/data_raw > imu_data.csv

You can also extend the Python script above to customize which IMU fields are exported.
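For instance, a small post-processing step can reduce the exported CSV to just the time, gyroscope, and accelerometer columns. The column names below are the ones rostopic echo -p typically emits for sensor_msgs/Imu; check them against the header of your own file before relying on them. The simplify_imu_csv helper is hypothetical:

```python
import csv


def simplify_imu_csv(src_path, dst_path):
    """Keep only the time, angular-velocity, and linear-acceleration columns."""
    wanted = ['%time',
              'field.angular_velocity.x', 'field.angular_velocity.y',
              'field.angular_velocity.z',
              'field.linear_acceleration.x', 'field.linear_acceleration.y',
              'field.linear_acceleration.z']
    with open(src_path) as src, open(dst_path, 'w') as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        writer.writerow(wanted)
        for row in reader:
            writer.writerow([row[c] for c in wanted])
```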

3 Core code

3.1 record.cc

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include "mynteye/logger.h"
#include "mynteye/device/device.h"
#include "mynteye/device/utils.h"
#include "mynteye/util/times.h"

#include "dataset.h"

MYNTEYE_USE_NAMESPACE

int main(int argc, char *argv[]) {
  glog_init _(argc, argv);

  auto &&api = API::Create(argc, argv);
  if (!api) return 1;

  auto request = api->GetStreamRequest();

//   struct StreamRequest {
//   /** Stream width in pixels */
//   std::uint16_t width;
//   /** Stream height in pixels */
//   std::uint16_t height;
//   /** Stream pixel format */
//   Format format;
//   /** Stream frames per second */
//   std::uint16_t fps;
//   }

  // request.fps = 10;
  api->ConfigStreamRequest(request);
  api->EnableMotionDatas();

  api->EnableStreamData(Stream::DEPTH);

  api->Start(Source::ALL);

  const char *outdir;
  if (argc >= 2) {
    outdir = argv[1];   // the dataset output path can be set here
  } else {
    outdir = "./dataset";
  }
  tools::Dataset dataset(outdir);

  cv::namedWindow("frame");

  std::size_t img_count = 0;
  std::size_t imu_count = 0;
  auto &&time_beg = times::now();
  while (true) {
    api->WaitForStreams();

    auto &&left_datas = api->GetStreamDatas(Stream::LEFT);
    auto &&right_datas = api->GetStreamDatas(Stream::RIGHT);
    auto &&depth_data = api->GetStreamData(Stream::DEPTH);
    auto &&disparity_data = api->GetStreamData(Stream::DISPARITY);
    img_count += left_datas.size();

    auto &&motion_datas = api->GetMotionDatas();
    imu_count += motion_datas.size();
    cv::Mat img;
    if (left_datas.size() > 0 && right_datas.size() > 0) {
      auto &&left_frame = left_datas.back().frame_raw;
      auto &&right_frame = right_datas.back().frame_raw;
      if (right_frame->data() && left_frame->data()) {
        if (left_frame->format() == Format::GREY) {
          cv::Mat left_img(
              left_frame->height(), left_frame->width(), CV_8UC1,
              left_frame->data());
          cv::Mat right_img(
              right_frame->height(), right_frame->width(), CV_8UC1,
              right_frame->data());
          cv::hconcat(left_img, right_img, img);
        } else if (left_frame->format() == Format::YUYV) {
          cv::Mat left_img(
              left_frame->height(), left_frame->width(), CV_8UC2,
              left_frame->data());
          cv::Mat right_img(
              right_frame->height(), right_frame->width(), CV_8UC2,
              right_frame->data());
          cv::cvtColor(left_img, left_img, cv::COLOR_YUV2BGR_YUY2);
          cv::cvtColor(right_img, right_img, cv::COLOR_YUV2BGR_YUY2);
          cv::hconcat(left_img, right_img, img);
        } else if (left_frame->format() == Format::BGR888) {
          cv::Mat left_img(
              left_frame->height(), left_frame->width(), CV_8UC3,
              left_frame->data());
          cv::Mat right_img(
              right_frame->height(), right_frame->width(), CV_8UC3,
              right_frame->data());
          cv::hconcat(left_img, right_img, img);
        } else {
          return -1;
        }
        cv::imshow("frame", img);
      }
    }

    if (img_count > 10 && imu_count > 50) {  // save
      // save Stream::LEFT
      for (auto &&left : left_datas) {
        dataset.SaveStreamData(Stream::LEFT, left);
      }
      // save Stream::RIGHT
      // for (auto &&right : right_datas) {
      //   dataset.SaveStreamData(Stream::RIGHT, right);
      // }

      // save Stream::DEPTH
      if (!depth_data.frame.empty()) {
        dataset.SaveStreamData(Stream::DEPTH, depth_data);
      }

      // save Stream::DISPARITY
      if (!disparity_data.frame.empty()) {
        dataset.SaveStreamData(Stream::DISPARITY, disparity_data);
      }

      for (auto &&motion : motion_datas) {
        dataset.SaveMotionData(motion);
      }

      std::cout << "\rSaved " << img_count << " imgs"
                << ", " << imu_count << " imus" << std::flush;
    }

    char key = static_cast<char>(cv::waitKey(1));
    if (key == 27 || key == 'q' || key == 'Q') {  // press ESC or Q to quit and stop recording
      break;
    }
  }
  std::cout << " to " << outdir << std::endl;
  auto &&time_end = times::now();

  api->Stop(Source::ALL);

  float elapsed_ms =
      times::count<times::microseconds>(time_end - time_beg) * 0.001f;
  LOG(INFO) << "Time beg: " << times::to_local_string(time_beg)
            << ", end: " << times::to_local_string(time_end)
            << ", cost: " << elapsed_ms << "ms";
  LOG(INFO) << "Img count: " << img_count
            << ", fps: " << (1000.f * img_count / elapsed_ms);
  LOG(INFO) << "Imu count: " << imu_count
            << ", hz: " << (1000.f * imu_count / elapsed_ms);
  return 0;
}

3.2 dataset.h

#ifndef MYNTEYE_TOOLS_DATASET_H_  // NOLINT
#define MYNTEYE_TOOLS_DATASET_H_
#pragma once

#include <fstream>
#include <map>
#include <memory>
#include <string>

#include "mynteye/mynteye.h"
#include "mynteye/api/api.h"
#include "mynteye/device/callbacks.h"

MYNTEYE_BEGIN_NAMESPACE

namespace tools {

class Dataset {
 public:
  struct Writer {
    std::ofstream ofs;
    std::string outdir;
    std::string outfile;
  };

  using writer_t = std::shared_ptr<Writer>;

  explicit Dataset(std::string outdir);
  ~Dataset();

  void SaveStreamData(const Stream &stream, const device::StreamData &data);
  void SaveMotionData(const device::MotionData &data);

  void SaveStreamData(const Stream &stream, const api::StreamData &data);
  void SaveMotionData(const api::MotionData &data);

 private:
  writer_t GetStreamWriter(const Stream &stream);
  writer_t GetMotionWriter();

  std::string outdir_;

  std::map<Stream, writer_t> stream_writers_;
  writer_t motion_writer_;

  std::map<Stream, std::size_t> stream_counts_;
  std::size_t motion_count_;
  std::size_t accel_count_;
  std::size_t gyro_count_;
};

}  // namespace tools

MYNTEYE_END_NAMESPACE

#endif  // MYNTEYE_TOOLS_DATASET_H_ NOLINT

3.3 dataset.cc

#include "dataset.h"

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include <iomanip>
#include <limits>
#include <stdexcept>
#include <utility>

#include "mynteye/logger.h"
#include "mynteye/util/files.h"

#define FULL_PRECISION \
  std::fixed << std::setprecision(std::numeric_limits<double>::max_digits10)

#define IMAGE_FILENAME_WIDTH 6

MYNTEYE_BEGIN_NAMESPACE

namespace tools {

Dataset::Dataset(std::string outdir) : outdir_(std::move(outdir)) {
  VLOG(2) << __func__;
  if (!files::mkdir(outdir_)) {
    LOG(FATAL) << "Create directory failed: " << outdir_;
  }
}

Dataset::~Dataset() {
  VLOG(2) << __func__;
  for (auto &&it = stream_writers_.begin(); it != stream_writers_.end(); it++) {
    if (it->second) {
      it->second->ofs.flush();
      it->second->ofs.close();
    }
  }
  if (motion_writer_) {
    motion_writer_->ofs.flush();
    motion_writer_->ofs.close();
  }
}

void Dataset::SaveStreamData(
    const Stream &stream, const device::StreamData &data) {
  auto &&writer = GetStreamWriter(stream);
  auto seq = stream_counts_[stream];
  writer->ofs << seq << ", " << data.img->frame_id << ", "
              << data.img->timestamp << ", " << data.img->exposure_time
              << std::endl;
  if (data.frame) {
    std::stringstream ss;
    ss << writer->outdir << MYNTEYE_OS_SEP << std::dec
       << std::setw(IMAGE_FILENAME_WIDTH) << std::setfill('0') << seq << ".png";
    if (data.frame->format() == Format::GREY) {
      cv::Mat img(
          data.frame->height(), data.frame->width(), CV_8UC1,
          data.frame->data());
      cv::imwrite(ss.str(), img);
    } else if (data.frame->format() == Format::YUYV) {
      cv::Mat img(
          data.frame->height(), data.frame->width(), CV_8UC2,
          data.frame->data());
      cv::cvtColor(img, img, cv::COLOR_YUV2BGR_YUY2);
      cv::imwrite(ss.str(), img);
    } else if (data.frame->format() == Format::BGR888) {
      cv::Mat img(
          data.frame->height(), data.frame->width(), CV_8UC3,
          data.frame->data());
      cv::imwrite(ss.str(), img);
    } else {
      cv::Mat img(
          data.frame->height(), data.frame->width(), CV_8UC1,
          data.frame->data());
      cv::imwrite(ss.str(), img);
    }
  }
  ++stream_counts_[stream];
}

void Dataset::SaveMotionData(const device::MotionData &data) {
  auto &&writer = GetMotionWriter();
  // auto seq = data.imu->serial_number;
  auto seq = motion_count_;
  writer->ofs << seq << ", " << static_cast<int>(data.imu->flag) << ", "
              << data.imu->timestamp << ", " << data.imu->accel[0] << ", "
              << data.imu->accel[1] << ", " << data.imu->accel[2] << ", "
              << data.imu->gyro[0] << ", " << data.imu->gyro[1] << ", "
              << data.imu->gyro[2] << ", " << data.imu->temperature
              << std::endl;
  ++motion_count_;
  /*
  if(motion_count_ != seq) {
    LOG(INFO) << "motion_count_ != seq !" << " motion_count_: " << motion_count_
  << " seq: " << seq;
    motion_count_ = seq;
  }
  */
}

void Dataset::SaveStreamData(
    const Stream &stream, const api::StreamData &data) {
  auto &&writer = GetStreamWriter(stream);
  auto seq = stream_counts_[stream];
  writer->ofs << seq << ", " << data.img->frame_id << ", "
              << data.img->timestamp << ", " << data.img->exposure_time
              << std::endl;
  if (!data.frame.empty()) {
    std::stringstream ss;
    ss << writer->outdir << MYNTEYE_OS_SEP << std::dec
       << std::setw(IMAGE_FILENAME_WIDTH) << std::setfill('0') << seq << ".png";
    cv::imwrite(ss.str(), data.frame);
  }
  ++stream_counts_[stream];
}

void Dataset::SaveMotionData(const api::MotionData &data) {
  auto &&writer = GetMotionWriter();
  // auto seq = data.imu->serial_number;
  auto seq = motion_count_;
  writer->ofs << seq << ", " << static_cast<int>(data.imu->flag) << ", "
                << data.imu->timestamp << ", " << data.imu->accel[0] << ", "
                << data.imu->accel[1] << ", " << data.imu->accel[2] << ", "
                << data.imu->gyro[0] << ", " << data.imu->gyro[1] << ", "
                << data.imu->gyro[2] << ", " << data.imu->temperature
                << std::endl;

  motion_count_++;
  /*
  if(motion_count_ != seq) {
    LOG(INFO) << "motion_count_ != seq !" << " motion_count_: " << motion_count_
  << " seq: " << seq;
    motion_count_ = seq;
  }
  */
}

Dataset::writer_t Dataset::GetStreamWriter(const Stream &stream) {
  try {
    return stream_writers_.at(stream);
  } catch (const std::out_of_range &e) {
    writer_t writer = std::make_shared<Writer>();
    switch (stream) {
      case Stream::LEFT: {
        writer->outdir = outdir_ + MYNTEYE_OS_SEP "left";
      } break;
      case Stream::RIGHT: {
        writer->outdir = outdir_ + MYNTEYE_OS_SEP "right";
      } break;
      case Stream::DEPTH: {
        writer->outdir = outdir_ + MYNTEYE_OS_SEP "depth";
      } break;
      case Stream::DISPARITY: {
        writer->outdir = outdir_ + MYNTEYE_OS_SEP "disparity";
      } break;
      case Stream::RIGHT_RECTIFIED: {
        writer->outdir = outdir_ + MYNTEYE_OS_SEP "right_rect";
      } break;
      case Stream::LEFT_RECTIFIED: {
        writer->outdir = outdir_ + MYNTEYE_OS_SEP "left_rect";
      } break;
      case Stream::DISPARITY_NORMALIZED: {
         writer->outdir = outdir_ + MYNTEYE_OS_SEP "disparity_norm";
      } break;
      default:
        LOG(FATAL) << "Unsupported stream: " << stream;
    }
    writer->outfile = writer->outdir + MYNTEYE_OS_SEP "stream.txt";

    files::mkdir(writer->outdir);
    writer->ofs.open(writer->outfile, std::ofstream::out);
    writer->ofs << "seq, frame_id, timestamp, exposure_time" << std::endl;
    writer->ofs << FULL_PRECISION;

    stream_writers_[stream] = writer;
    stream_counts_[stream] = 0;
    return writer;
  }
}

Dataset::writer_t Dataset::GetMotionWriter() {
  if (motion_writer_ == nullptr) {
    writer_t writer = std::make_shared<Writer>();
    writer->outdir = outdir_;
    writer->outfile = writer->outdir + MYNTEYE_OS_SEP "motion.txt";

    files::mkdir(writer->outdir);
    writer->ofs.open(writer->outfile, std::ofstream::out);
    writer->ofs << "seq, flag, timestamp, accel_x, accel_y, accel_z, "
                   "gyro_x, gyro_y, gyro_z, temperature"
                << std::endl;
    writer->ofs << FULL_PRECISION;

    motion_writer_ = writer;
    motion_count_ = 0;
    accel_count_ = 0;
    gyro_count_ = 0;
  }
  return motion_writer_;
}

}  // namespace tools

MYNTEYE_END_NAMESPACE

4 References

  1. https://github.com/slightech/MYNT-EYE-S-SDK
  2. https://mynt-eye-s-sdk-docs-zh-cn.readthedocs.io/zh_CN/latest/
  3. https://www.cnblogs.com/arkenstone/p/6676203.html
  4. https://blog.csdn.net/corfox_liu/article/details/50920332?locationNum=15
  5. https://blog.csdn.net/Draonly/article/details/74642747