Baxter Learning Notes 5: Kinect Camera Calibration (Intrinsics and Extrinsics)

1: Kinect Intrinsic Calibration

Reference: https://github.com/code-iai/iai_kinect2/tree/master/kinect2_calibration

1.1 Notes:

1. Print your calibration pattern (for the examples we used chess5x7x0.03, i.e. a chessboard with 5x7 inner corners and 0.03 m squares) and glue it to a flat object
2. Get two tripods: one for holding the calibration pattern, and another one for holding the kinect2 sensor
3. When recording images for all the steps indicated below (RGB, IR, SYNC), start the recording program, then press the spacebar to record each image
4. It is recommended to take images that show the calibration pattern in all areas of the image, with different orientations of the pattern (tilting the pattern relative to the image plane), and at least two distances, so you can easily reach 100 images per calibration set
5. We normally start at a short distance, where the calibration pattern covers most of the image, and take several pictures tilting the calibration pattern vertically, then horizontally
6. Then we move the calibration pattern further away and, for different orientations (tilts) of the pattern, take many images so that the calibration pattern appears in most areas of the camera image: for example, first in the upper-left corner, then in the upper middle, then in the upper-right corner, then some images with the pattern in the vertical middle, and so on

1.2 Procedure:

1. Limit the frame rate: rosrun kinect2_bridge kinect2_bridge _fps_limit:=2
    (on a well-equipped machine the limit can be dropped, i.e.: rosrun kinect2_bridge kinect2_bridge)
2. Create a directory for your calibration data files, for example:     
    mkdir ${DIR_FOR_KINECT}/iai_kinect2/kinect2_cal_data

3. rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 record color 
  then press the spacebar in the image window to record each picture
4. rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 calibrate color

5. rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 record ir
  then press the spacebar in the image window to record each picture
6. rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 calibrate ir

7. rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 record sync 
  then press the spacebar in the image window to record each picture
8. rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 calibrate sync
9. rosrun kinect2_calibration kinect2_calibration chess5x7x0.03 calibrate depth

10. Find out the serial number of your kinect2 by looking at the first lines printed out by the kinect2_bridge. The line looks like this: device serial: 015643252847
11. Create the calibration results directory in kinect2_bridge/data/$serial: 
        roscd kinect2_bridge/data; mkdir 015643252847
12. Copy the following files from your calibration directory ( ${DIR_FOR_KINECT}/iai_kinect2/kinect2_cal_data ) into the directory you just created: calib_color.yaml, calib_depth.yaml, calib_ir.yaml, calib_pose.yaml
13. At this point the Kinect intrinsic calibration is complete, with the result files stored in the folder named after the device serial. Whenever the computer connects to a Kinect, it automatically reads the parameter files from the folder matching that device's serial number, so every new Kinect that is plugged in has to be calibrated again and its configuration files saved under its own serial-number folder. (For reference, the camera model behind these files is summarized below.)
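
The intrinsics stored in these files follow the standard pinhole camera model: with the focal lengths f_x, f_y and the principal point (c_x, c_y) taken from the calibrated camera matrix, a pixel (u, v) of the registered depth image with depth z back-projects to the 3D point

    x = (u - c_x) · z / f_x,    y = (v - c_y) · z / f_y,    z unchanged

This back-projection is what the viewer used in section 2 performs when it builds the point cloud from the depth image.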

2: Kinect Extrinsic Calibration

For scenarios where 3D coordinates captured in the Kinect frame must be mapped into the robot frame, you have to know the 3D coordinates of the Kinect's base origin in the robot frame, or those of the robot's base origin in the Kinect frame. In my scenario, what I care about is transforming the 3D coordinates of objects in the Kinect depth image into the robot frame, so that the robot can reach the objects or use the coordinates in later computations.
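Concretely, what we are estimating is a 4x4 homogeneous transform T from the Kinect frame to the robot frame, so that every sampled point satisfies

    (x_r, y_r, z_r, 1)^T = T · (x_k, y_k, z_k, 1)^T

The sampling procedure below collects pairs of such corresponding points, and T is computed from them at the end of this post.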
[Figure: calibration scene (Kinect's viewpoint; the Kinect stands directly in front of the robot)]
First, a quick breakdown of the official viewer at https://github.com/code-iai/iai_kinect2/blob/master/kinect2_viewer/src/viewer.cpp :
1: Subscribes to the Kinect topics published under ROS and updates imageColor and imageDepth in real time
2: imageViewer() displays the color image, and supports saving the current image and quitting
3: cloudViewer() displays the point cloud, and supports saving the current image and quitting
4: Display-quality options: qhd, hd, ir, sd
(see the source code for implementation details)
For this calibration task we need to add one feature on top of it: capturing the clicked mouse coordinates and their corresponding 3D point.
First, in the Receiver class, add a publisher as a private member:

ros::Publisher mouseEventPosePub = nh.advertise<geometry_msgs::PoseStamped>("/mouse_event_pose", 10);

Then register the mouse callback in imageViewer() or cloudViewer():

visualizer->registerMouseCallback(&Receiver::mouseEventOccurred, *this);

This is very similar to the built-in keyboard callback registration:

visualizer->registerKeyboardCallback(&Receiver::keyboardEvent, *this);

Finally, implement the callback function itself (analogous to the keyboard callback, keyboardEvent):

  void mouseEventOccurred (const pcl::visualization::MouseEvent &event, void*)
  {
    if (event.getButton () == pcl::visualization::MouseEvent::LeftButton &&
        event.getType () == pcl::visualization::MouseEvent::MouseButtonRelease)
    {
      std::cout << "Left mouse button released at position (" << event.getX () << ", " << event.getY () << ")" << std::endl;
      int x = event.getX();
      int y = event.getY();
      int height = depth.rows;
      int width = depth.cols;
      // The PCL visualizer reports mouse coordinates with the origin at the
      // bottom-left, while the organized cloud is stored row-major from the
      // top-left, so the row index must be flipped; height - 1 - y keeps the
      // index in range for y == 0.
      int point_num = (height - 1 - y) * width + x;

      std::cout << height << " " << width << " " << point_num << std::endl;

      // Look up the 3D point under the cursor in the organized cloud.
      pcl::PointXYZRGBA pt = tmp_cloud->points[point_num];
      std::cout << "x, " << pt.x << " y, " << pt.y << " z, " << pt.z << std::endl;

      // Publish it as a stamped pose in the Kinect frame.
      geometry_msgs::PoseStamped pose_arm;
      pose_arm.pose.position.x = pt.x;
      pose_arm.pose.position.y = pt.y;
      pose_arm.pose.position.z = pt.z;
      pose_arm.header.frame_id = "kinect2_link";
      pose_arm.header.stamp = ros::Time::now();
      mouseEventPosePub.publish(pose_arm);
      ros::spinOnce();
    }
  }

This completes the mouse-click capture. Returning to the calibration task, what we need is to sample one-to-one corresponding points in the Kinect frame and the robot frame. On the robot side this mainly means reading the endpoint state, i.e., the 3D position of the robot's end effector; on Baxter this is published on the topic /robot/limb/left/endpoint_state. To ease automated calibration later on, we also subscribe to Baxter's joint values on /robot/joint_states.
First, set up the subscribers and publishers:

  // add the publishers
  ros::Publisher leftBtnPointPub = nh.advertise<geometry_msgs::PointStamped>
                                      ("/kinect2/click_point/left", 1);
  ros::Publisher rightBtnPointPub = nh.advertise<geometry_msgs::PointStamped>
                                      ("/kinect2/click_point/right", 1);
  // add the subscribers
  ros::Subscriber lefthand_endpoint_stateSub = nh.subscribe("/robot/limb/left/endpoint_state",
                                                            1, left_hand_endpoint_state_callback);
  ros::Subscriber left_limb_state = nh.subscribe("/robot/joint_states",
                                                            1, left_limb_state_callback);

To make saving the data convenient, create output file streams:

std::ofstream fout_kinect;
std::ofstream fout_robot;
std::ofstream fout_joint_states;

// files for saving the calibration sample data
// note: std::ofstream does not expand "~", so use an absolute path here
std::string filename_kinect = "~/kinect_calibration.data";
std::string filename_robot  = "~/robot_calibration.data";
std::string filename_joint_states  = "~/robot_states.data";

Create the callback functions:

void onMouse(int event, int x, int y, int flags, void* ustc){
    // std::cout << "onMouse: " << x << " " << y << std::endl;
    mouseX  = x;
    mouseY  = y;
    mouseBtnType = event;
}
// record the left hand pose
void left_hand_endpoint_state_callback(const baxter_core_msgs::EndpointState& data){
    left_hand_pose.pose =  data.pose;
}
// record the left_limb joint values
void left_limb_state_callback(const sensor_msgs::JointState& data){
    // store the latest position of every joint, keyed by joint name
    // (std::map::operator[] inserts the key if it is not present yet)
    for (size_t i = 0; i < data.name.size(); ++i)
    {
      joint_states_value[data.name[i]] = data.position[i];
    }
}

Finally, modify imageViewer() as follows, and the feature is complete.

void imageViewer()
  {

    cv::Mat color, depth, depthDisp, combined;
    std::chrono::time_point<std::chrono::high_resolution_clock> start, now;
    double fps = 0;
    size_t frameCount = 0;
    std::ostringstream oss;
    std::ostringstream ossXYZ;
    const cv::Point pos(5, 15);
    const cv::Scalar colorText = CV_RGB(255, 0, 0); //text color is red
    const double sizeText = 0.5;
    const int lineText = 1;
    const int font = cv::FONT_HERSHEY_SIMPLEX;

    //init the img_x img_y to record the mouse position
    int img_x = mouseX;
    int img_y = mouseY;

    const std::string window_name = "Image viewer";
    cv::namedWindow(window_name);
    cv::setMouseCallback(window_name, onMouse, nullptr);
    oss << "starting calibration...";

    // init the message
    geometry_msgs::PointStamped ptMsg;
    ptMsg.header.frame_id = "kinect2_link"; // keep the frame id consistent with the one used above

    start = std::chrono::high_resolution_clock::now();
    for(; running && ros::ok();)
    {
      if(updateImage)
      {
        //read current image (lock current Kinect video)
        lock.lock();
        color = this->color;
        depth = this->depth;
        updateImage = false;
        lock.unlock();

        // read the current mouse position
        img_x = mouseX;
        img_y = mouseY;

        createCloud(depth, color, cloud);
        // OpenCV mouse coordinates use a top-left origin, matching the row
        // order of the organized cloud, so no vertical flip is needed here.
        const pcl::PointXYZRGBA& target_pt = cloud->points[img_y * depth.cols + img_x];

        ptMsg.point.x = target_pt.x;
        ptMsg.point.y = target_pt.y;
        ptMsg.point.z = target_pt.z;

        // Save the mouse position and the robot left-hand position to file,
        // and publish the left/right button positions on the topics
        // /kinect2/click_point/left and /kinect2/click_point/right
        switch (mouseBtnType) {
        case cv::EVENT_LBUTTONUP:
            // publish the leftBtnPoint
            ptMsg.header.stamp = ros::Time::now();
            leftBtnPointPub.publish(ptMsg);
            // for calibration, save the point positions into the data files
            // IMPORTANT: the click must be on the robot's left hand, since the
            // robot-side sample is the left end-effector position
            fout_kinect << ptMsg.point.x << "; " << ptMsg.point.y <<"; " << ptMsg.point.z << "; " << std::endl;
            fout_robot << left_hand_pose.pose.position.x << "; " << left_hand_pose.pose.position.y <<"; " << left_hand_pose.pose.position.z << "; " << std::endl;
            for (auto& x: joint_states_value)
              fout_joint_states << x.first << ": " << x.second << std::endl;
            ros::spinOnce();
            break;
        case cv::EVENT_RBUTTONUP:
            // publish the rightBtnPoint
            ptMsg.header.stamp = ros::Time::now();
            rightBtnPointPub.publish(ptMsg);
            ros::spinOnce();
            break;
        default:
            break;
        }
        mouseBtnType = cv::EVENT_MOUSEMOVE;

        // calculate the fps
        ++frameCount;
        now = std::chrono::high_resolution_clock::now();
        double elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(now - start).count() / 1000.0;
        if(elapsed >= 1.0)
        {
          fps = frameCount / elapsed;
          oss.str("");
          oss << "fps: " << fps << " ( " << elapsed / frameCount * 1000.0 << " ms)";
          start = now;
          frameCount = 0;
        }

        //show the fps
        cv::putText(color, oss.str(), pos, font, sizeText, colorText, lineText, CV_AA);

        // show the 3D position next to the mouse cursor
        ossXYZ.str("");
        ossXYZ << "( " << ptMsg.point.x << ", " << ptMsg.point.y << ", " << ptMsg.point.z << " )";
        cv::putText(color, ossXYZ.str(), cv::Point(img_x, img_y), font, 1, colorText, 3, CV_AA);
        cv::imshow(window_name, color);
        cv::waitKey(1);
      }

      int key = cv::waitKey(1);
      // press 'q' or ESC to quit
      switch(key & 0xFF)
      {
      case 27:
      case 'q':
        running = false;
        break;
      }
    }
    cv::destroyAllWindows();
    cv::waitKey(100);
  }

Notes:
1: Since this is one source file inside a ROS package, the package's CMakeLists.txt must be updated accordingly (add the new executable and link its dependencies).
2: To subscribe to /robot/limb/left/endpoint_state and the other robot-related topics, the machine must be properly connected to Baxter at launch time; a quick check is sketched below.
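
For note 2, a quick way to verify the connection is to check that the endpoint-state topic is actually delivering messages. A minimal Python sketch (the node name is arbitrary and the snippet is only a sanity check, not part of the viewer):

    import rospy
    from baxter_core_msgs.msg import EndpointState

    rospy.init_node('check_baxter_connection')
    # blocks until one message arrives; raises rospy.ROSException after the timeout
    msg = rospy.wait_for_message('/robot/limb/left/endpoint_state',
                                 EndpointState, timeout=5.0)
    print(msg.pose.position)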

Once everything compiles, in this setup the robot's left palm (left_hand_pose) is used as the sampling point, and the very same point is clicked in the Kinect view. This establishes one-to-one point correspondences between the Kinect and robot frames. (See the scene figure above for the calibration setup.)
The collected samples look like this:

kinect_calibration.data:
-0.0423108; -0.15503; 0.71; 
0.0582223; -0.0984413; 0.673; 
0.269309; -0.0427475; 0.696; 
0.166621; -0.255596; 0.784; 
0.0234459; -0.349472; 0.761; 
-0.181031; -0.226408; 0.683; 
-0.0563763; -0.152473; 0.684; 
-0.0350522; -0.132116; 0.637; 
-0.0361221; -0.171701; 0.874; 
-0.123561; -0.216335; 0.855; 
0.0903724; -0.106467; 0.954; 
-0.104762; -0.168514; 0.666; 

robot_calibration.data:
0.775974; -0.138468; 0.217299; 
0.839093; -0.037254; 0.256112; 
0.895594; 0.175993; 0.2428; 
0.669936; 0.0821183; 0.168346; 
0.574983; -0.06055; 0.196522; 
0.706441; -0.277943; 0.24417; 
0.782715; -0.153396; 0.247191; 
0.808589; -0.136154; 0.292246; 
0.740126; -0.124856; 0.0591615; 
0.694805; -0.214193; 0.0789651; 
0.802439; 0.00907258; -0.00754669; 
0.766809; -0.20514; 0.268391; 

We collected a dozen or so samples; more is better, but there must be at least four: the transform has 12 unknown entries (the last row of the 4x4 matrix is fixed), and each point pair only contributes 3 equations, so at least 4 non-coplanar point pairs are needed.
Finally, the transformation matrix is computed with the following formula:

kinect_to_robot_matrix = np.dot(robot_matrix ,pinv(kinect_matrix))

where:
robot_matrix and kinect_matrix are the augmented (homogeneous) matrices built from the sampled data, one column per sample of the form (x, y, z, 1)^T, in the robot frame and the Kinect frame respectively.
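
A minimal numpy sketch of this computation (load_points is a hypothetical helper; it assumes the exact "x; y; z; " line format written by the viewer above):

    import numpy as np
    from numpy.linalg import pinv

    def load_points(filename):
        # parse "x; y; z; " lines into a 4xN homogeneous matrix, one point per column
        points = []
        with open(filename) as f:
            for line in f:
                fields = [s for s in line.split(';') if s.strip()]
                if len(fields) >= 3:
                    points.append([float(s) for s in fields[:3]] + [1.0])
        return np.array(points).T  # shape (4, N)

    kinect_matrix = load_points('kinect_calibration.data')  # points in the Kinect frame
    robot_matrix  = load_points('robot_calibration.data')   # same points in the robot frame

    # least-squares solution of robot_matrix = T * kinect_matrix:
    # the pseudoinverse minimizes ||T * kinect_matrix - robot_matrix||_F
    kinect_to_robot_matrix = np.dot(robot_matrix, pinv(kinect_matrix))

    # map a clicked Kinect-frame point into the robot frame
    p_kinect = np.array([-0.0423108, -0.15503, 0.71, 1.0])
    print(np.dot(kinect_to_robot_matrix, p_kinect)[:3])

Note that this plain least-squares fit does not constrain T to be a rigid transform; with noisy samples the rotation block can pick up a little scale or shear, so spreading the samples across the workspace (and keeping them non-coplanar) keeps that error small. With the transformation matrix in hand, the follow-up work can begin.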
