Computer Vision (CV): CMT Tracking Algorithm Analysis, Part 4

1 Introduction

In the previous part we got as far as computing the scale and rotation of the feature points. Here, to finish, we analyze how the bad feature points are removed.

2 Analysis of the Final Steps

The basic idea of the voting is that each feature point's offset from the object center, once scale and rotation are taken into account, should stay the same; in other words, in the next frame every feature point should still point to the same center.
However, because the image itself changes, the relative positions will never be exactly identical: some votes land close to the center while others deviate a lot. The author therefore clusters the votes and keeps the largest cluster as the good feature points; the rest are discarded.
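Concretely, each matched keypoint votes for the object center by subtracting its scaled and rotated initial offset from its current position. A minimal sketch of that computation (illustrative names, not the author's code; the rotation is assumed to be the usual 2D rotation matrix):

#include <opencv2/core/core.hpp>
#include <cmath>

// Rotate a 2D point by 'rad' radians around the origin (standard rotation matrix)
static cv::Point2f rotate2d(const cv::Point2f & v, float rad)
{
    const float s = std::sin(rad), c = std::cos(rad);
    return cv::Point2f(c * v.x - s * v.y, s * v.x + c * v.y);
}

// A keypoint currently at 'current', whose normalized offset from the center in
// the initial frame was 'rel_initial', votes for the center at
//     vote = current - scale * R(rotation) * rel_initial
// If scale and rotation are estimated well, all correct matches vote for almost
// the same location, so the good votes form one tight cluster.
static cv::Point2f voteForCenter(const cv::Point2f & current,
                                 const cv::Point2f & rel_initial,
                                 float scale, float rotation)
{
    return current - scale * rotate2d(rel_initial, rotation);
}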

The figure above should make this process easy to follow; the figures on the author's own website illustrate the same idea.
For the clustering itself, the author uses an external clustering library in the code; I have not analyzed it in depth:
void Consensus::findConsensus(const vector<Point2f> & points, const vector<int> & classes,
        const float scale, const float rotation,
        Point2f & center, vector<Point2f> & points_inlier, vector<int> & classes_inlier)
{

    //If no points are available, return NaN
    if (points.size() == 0)
    {
        center.x = numeric_limits<float>::quiet_NaN();
        center.y = numeric_limits<float>::quiet_NaN();

        return;
    }

    //Compute votes: each point subtracts its normalized initial offset from the center, after applying the estimated rotation and scale, so that consistent points all vote for (roughly) the same center
    vector<Point2f> votes(points.size());
    for (size_t i = 0; i < points.size(); i++)
    {
        votes[i] = points[i] - scale * rotate(points_normalized[classes[i]], rotation);
    }

    t_index N = points.size();

    float * D = new float[N*(N-1)/2]; //This is a lot of memory, so we put it on the heap
    cluster_result Z(N-1);

    //Compute pairwise distances between the votes
    int index = 0;
    for (size_t i = 0; i < points.size(); i++)
    {
        for (size_t j = i+1; j < points.size(); j++)
        {
            //TODO: This index calculation is correct, but is it a good thing?
            //int index = i * (points.size() - 1) - (i*i + i) / 2 + j - 1;
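            //(For reference: D stores the upper triangle of the N x N distance matrix
            // row by row; row i holds the N-1-i entries with j > i, so entry (i, j)
            // sits at index i*(N-1) - (i*i + i)/2 + j - 1, which is the closed form
            // above. The running counter below produces exactly the same layout.)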
            // Distance between vote i and vote j
            D[index] = norm(votes[i] - votes[j]);
            index++;
        }
    }

    //Run hierarchical clustering on the votes: MST_linkage_core (from the bundled
    //clustering library) builds a single-linkage dendrogram via a minimum spanning tree
    MST_linkage_core(N,D,Z);

    union_find nodes(N);

    //Sort linkage by distance ascending
    std::stable_sort(Z[0], Z[N-1]);

    //S are cluster sizes
    int * S = new int[2*N-1];
    //TODO: Why does this loop go to 2*N-1? Shouldn't it be simply N? Everything > N gets overwritten later
    for(int i = 0; i < 2*N-1; i++)
    {
        S[i] = 1;
    }

    t_index parent = 0; //After the loop ends, parent contains the index of the last cluster
    for (node const * NN=Z[0]; NN!=Z[N-1]; ++NN)
    {
        // Get two data points whose clusters are merged in step i.
        // Find the cluster identifiers for these points.
        t_index node1 = nodes.Find(NN->node1);
        t_index node2 = nodes.Find(NN->node2);

        // Merge the nodes in the union-find data structure by making them
        // children of a new node
        // if the distance is appropriate
        if (NN->dist < thr_cutoff)
        {
            parent = nodes.Union(node1, node2);
            S[parent] = S[node1] + S[node2];
        }
    }

    //Get cluster labels
    int * T = new int[N];
    for (t_index i = 0; i < N; i++)
    {
        T[i] = nodes.Find(i);
    }

    //Find largest cluster
    int S_max = distance(S, max_element(S, S + 2*N-1));

    //Find inliers, compute center of votes
    points_inlier.reserve(S[S_max]);
    classes_inlier.reserve(S[S_max]);
    center.x = center.y = 0;

    for (size_t i = 0; i < points.size(); i++)
    {
        //If point is in consensus cluster
        if (T[i] == S_max)
        {
            points_inlier.push_back(points[i]);
            classes_inlier.push_back(classes[i]);
            center.x += votes[i].x;
            center.y += votes[i].y;
        }

    }

    center.x /= points_inlier.size();
    center.y /= points_inlier.size();

    delete[] D;
    delete[] S;
    delete[] T;

}

This procedure yields the inlier points.
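For reference, a hypothetical call site could look like the following (the variable names are illustrative; the actual wiring inside CMT's frame-processing code may differ):

//Hypothetical usage sketch, not the author's exact code
Point2f center;
vector<Point2f> points_inlier;
vector<int> classes_inlier;
consensus.findConsensus(points_fused, classes_fused, scale, rotation,
        center, points_inlier, classes_inlier);
//points_inlier / classes_inlier now hold the largest vote cluster,
//and center is the mean of that cluster's votes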

Then, in the code, the author performs another matching pass, matchLocal. In my view its goal is the same as findConsensus: use the relative position of each point to decide whether it is a feature we want, and then run descriptor matching on those candidates, keeping the ones that match. Finally, the inlier points and the matchLocal points are merged to form the final set of feature points (a rough sketch of that merge follows after the code below).
The code of matchLocal is as follows:
void Matcher::matchLocal(const vector<KeyPoint> & keypoints, const Mat descriptors,
        const Point2f center, const float scale, const float rotation,
        vector<Point2f> & points_matched, vector<int> & classes_matched)
{

    if (keypoints.size() == 0) {
        return;
    }

    //Transform initial points
    vector<Point2f> pts_fg_trans;
    pts_fg_trans.reserve(pts_fg_norm.size());
    for (size_t i = 0; i < pts_fg_norm.size(); i++)
    {
        // Likewise, compute the expected relative position (scaled and rotated)
        pts_fg_trans.push_back(scale * rotate(pts_fg_norm[i], -rotation));
    }

    //Perform local matching
    for (size_t i = 0; i < keypoints.size(); i++)
    {
        //Normalize keypoint with respect to center
        Point2f location_rel = keypoints[i].pt - center;

        //Find potential indices for matching
        vector<int> indices_potential;
        for (size_t j = 0; j < pts_fg_trans.size(); j++)
        {
            // Positional deviation between the expected and observed offsets
            float l2norm = norm(pts_fg_trans[j] - location_rel);

            // Keep only candidates within a distance threshold
            if (l2norm < thr_cutoff) {
                indices_potential.push_back(num_bg_points + j);
            }

        }

        //If there are no potential matches, continue
        if (indices_potential.size() == 0) continue;

        //Build descriptor matrix and classes from potential indices
        Mat database_potential = Mat(indices_potential.size(), database.cols, database.type());
        for (size_t j = 0; j < indices_potential.size(); j++) {
            database.row(indices_potential[j]).copyTo(database_potential.row(j));
        }

        //Find distances between descriptors
        vector<vector<DMatch> > matches;
        // Run descriptor matching against the selected candidate features
        bfmatcher->knnMatch(descriptors.row(i), database_potential, matches, 2);

        vector<DMatch> m = matches[0];

        float distance1 = m[0].distance / desc_length;
        float distance2 = m.size() > 1 ? m[1].distance / desc_length : 1;

        //Reject the match if the best descriptor distance is too large, or if it is
        //not clearly better than the second-best match (a Lowe-style ratio test)
        if (distance1 > thr_dist) continue;
        if (distance1/distance2 > thr_ratio) continue;

        int matched_class = classes[indices_potential[m[0].trainIdx]];

        points_matched.push_back(keypoints[i].pt);
        classes_matched.push_back(matched_class);
    }

}
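The merge of the consensus inliers with the locally matched points happens elsewhere in CMT; conceptually it just concatenates the two sets while keeping at most one point per feature class. A rough sketch of that idea (illustrative names, not the author's actual fusion code):

#include <opencv2/core/core.hpp>
#include <set>
#include <vector>
using namespace cv;
using namespace std;

//Fuse two point sets, keeping at most one point per class; points from the
//first set take precedence. A sketch of the idea only, not CMT's own code.
void fusePreferFirst(const vector<Point2f> & points_first, const vector<int> & classes_first,
        const vector<Point2f> & points_second, const vector<int> & classes_second,
        vector<Point2f> & points_fused, vector<int> & classes_fused)
{
    set<int> classes_seen;

    for (size_t i = 0; i < points_first.size(); i++)
    {
        if (classes_seen.insert(classes_first[i]).second) //true if class not seen yet
        {
            points_fused.push_back(points_first[i]);
            classes_fused.push_back(classes_first[i]);
        }
    }

    for (size_t i = 0; i < points_second.size(); i++)
    {
        if (classes_seen.insert(classes_second[i]).second)
        {
            points_fused.push_back(points_second[i]);
            classes_fused.push_back(classes_second[i]);
        }
    }
}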

Well, due to time constraints, this is where the analysis of the CMT algorithm ends. It has many shortcomings, and parts of the analysis may be inadequate or even wrong, so criticism and corrections are welcome.

This article is original content; when reposting, please credit the source: blog.csdn.net/songrotek

