Matlab Image Recognition/Retrieval Series (9): Open-Source Tools for Image Recognition - reco_toolbox

Features are one of the keys to image recognition and image retrieval, and feature extraction is crucial to how well both perform. It has gone through several stages: low-level features (color, texture, shape, etc.), local features (SIFT, SURF, etc.), term-frequency vectors (the result of encoding an image against a bag-of-words (BOW) vocabulary built from the image set, computed on top of local features and usable directly as an image feature), and features extracted by deep neural networks. Deep-network features perform well in many scenarios and have become mainstream, but in specific environments and settings, low-level feature extraction combined with other techniques (spatial pyramids, sparse learning, LBP, etc.) may outperform a deep neural network at a much lower cost. After all, getting the best performance out of a deep network requires tuning its parameters and adjusting its architecture, and reaching that point directly is very difficult.
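To make the BOW encoding mentioned above concrete, here is a minimal plain-Matlab sketch (conceptual only, not the toolbox's own implementation). It assumes descrs is a d-by-n matrix of local descriptors (e.g. dense SIFT) from one image, and vocab is a d-by-K visual vocabulary learned beforehand with k-means; both names are placeholders.

    %squared distances between every visual word and every descriptor
    dist2     = bsxfun(@plus , sum(vocab.^2 , 1)' , sum(descrs.^2 , 1)) - 2*(vocab'*descrs);
    %assign each descriptor to its nearest visual word
    [~ , a]   = min(dist2 , [] , 1);
    %term-frequency (word count) vector, L1-normalized
    tf        = accumarray(a(:) , 1 , [size(vocab , 2) , 1]);
    tf        = tf/sum(tf);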
This post introduces an open-source image recognition tool, reco_toolbox, which its author calls a Scenes/Objects classification toolbox; the datasets it uses include scene_15 and skinan. The toolbox provides a variety of feature extraction, feature processing and dictionary learning functions that can be combined freely, and it also employs techniques such as sparse learning, spatial pyramid pooling, LBP and fast k-means. Its main functions are:

 A) Patch functions            
        denseCOLOR                                 Compute histogram of color projection on a regular dense grid
        denseMBLBP                                  Extract histograms of Multi-Block LBP on a regular dense grid, computed on image I after color projection
        denseMBLDP                                  Extract histograms of Multi-Block LDP on a regular dense grid, computed on image I after color projection
        densePATCH                                   Extract patches of pixels after color projection on a regular dense grid
        denseSIFT                                       Compute SIFT (Scale-invariant feature transform) descriptors on a regular dense grid

 B) Direct descriptors

        mlhmslbp_spyr                               Color Multi-Level Histogram of Multi-Scale Local Binary Pattern with Spatial Pyramid
        mlhmsldp_spyr                               Color Multi-Level Histogram of Multi-Scale Local Derivative Pattern with Spatial Pyramid
        mlhmslsd_spyr                               Color Multi-Level Histogram of Multi-Scale Line Segment Detector with Spatial Pyramid
        mlhoee_spyr                                  Color Multi-Level Histogram of Oriented Edge Energy with Spatial Pyramid

 C)  Dictionary learning

        yael_kmeans                                Fast K-means algorithm to learn the codebook
        mexTrainDL                                  Sparse dictionary learning algorithm
        mexTrainDL_Memory                   Faster sparse dictionary learning algorithm, but more memory-consuming
        mexLasso                                     Lasso algorithm to compute the alpha weights (sparse codes)

 D)  Spatial pyramidal pooling

        mlhbow_spyr                                 Histogram of color visual words with a Multi-Level Spatial pyramid
        dl_spyr                                          Pooling with a multi-Level spatial pyramid
        mlhlcc_spyr                                  Pooling with a multi-Level spatial pyramid and Locality-Constraint Linear Coding 

 E) Classifiers

        homker_pegasos_train                Pegasos solver with Homogeneous additive kernel transform included
        homker_predict                            Predict new instances with trained model
        train_dense                                  Liblinear training algorithm for dense data
        svmtrain                                       Train SVM model via Libsvm for dense data
        svmpredict                                   Predict new instances with trained model
        pegasos_train                              Pegasos solver
        predict_dense                              Liblinear predict algorithm for dense data

As the functions above show, the toolbox already covers spatial pyramids, sparse learning, BOW dictionary construction and more, and it is broadly representative of the best image feature extraction and recognition methods from before deep learning took over image recognition. Even a low-level descriptor such as mlhmslbp_spyr can exceed the recognition accuracy achieved by some deep neural networks. See the toolbox's Readme.txt for details.
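As an illustration of the spatial pyramid pooling idea used by descriptors such as mlhbow_spyr, here is a conceptual plain-Matlab sketch (not the toolbox's interface). It assumes a holds the visual-word index of each local descriptor, xy holds their normalized [x;y] positions in [0,1], and K is the codebook size; all three names are placeholders.

    H = [];
    for lev = 0:1                                %pyramid levels: 1x1 and 2x2 grids
        nc      = 2^lev;                         %number of cells per dimension
        cx      = min(floor(xy(1,:)*nc) , nc-1);
        cy      = min(floor(xy(2,:)*nc) , nc-1);
        cell_id = cy*nc + cx + 1;                %spatial cell of each descriptor
        for q = 1:nc*nc
            idx = a(cell_id == q);
            h   = zeros(K , 1);
            if(~isempty(idx))
                h = accumarray(idx(:) , 1 , [K , 1]);
            end
            H   = [H ; h/max(sum(h) , 1)];       %stack the per-cell BOW histograms
        end
    end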
The low-level code of the toolbox is written in C, so it executes quickly and uses little memory. The author provides compiled mexw files for 64-bit and 32-bit Windows; if they fail to run, recompile them on your own machine with the mex command. If the paths have not been added, extract the mexw64 or mexw32 files you need and copy them into the same directory as the m-file being run.
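A rough sketch of such a rebuild, assuming the C sources sit next to the mex binaries in the core directory (the actual source file names and compile options should be taken from the toolbox's own compilation script):

    cd(fullfile(pwd , 'core'));
    mex -O denseSIFT.c                       %rebuild one mex function from its C source
    mex -O mlhmslbp_spyr.c
    cd('..');
    addpath(fullfile(pwd , 'core'));         %make the freshly built mex files visible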
Below is a walkthrough of part of the demo script simple_train.m; since the file is long, only an excerpt is given. Before running it in the toolbox, first run extract_bag_of_features.m or extract_direct_features.m to extract the features and save them to files under the expected path, and set the matching choice_descriptors there as well; otherwise simple_train.m will report that the feature file cannot be found.
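A quick way to check this before running the demo (assumption: the features are stored as <dataset>_<descriptor>.mat under the features directory, which is what the load call further below implies):

    feat_file = fullfile(pwd , 'features' , 'scenes15_densePATCH_dl_spyr.mat');
    if(~exist(feat_file , 'file'))
        error('Run extract_bag_of_features.m with choice_descriptors = [8] first.');
    end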

    clc,close all, clear ,drawnow
    database_name               = {'scenes15' , 'skinan' , 'agemen'};
    database_ext                = {'jpg' , 'jpg' , 'png'};
    descriptors_name            = {'denseSIFT_mlhbow_spyr' , 'denseSIFT_dl_spyr' , 'denseSIFT_mlhlcc_spyr' ,...
                                                 'denseCOLOR_mlhbow_spyr' , 'denseCOLOR_dl_spyr' , 'denseCOLOR_mlhlcc_spyr' , ...
                                                'densePATCH_mlhbow_spyr' , 'densePATCH_dl_spyr' , 'densePATCH_mlhlcc_spyr' , ...
                                                 'denseMBLBP_mlhbow_spyr' , 'denseMBLBP_dl_spyr' , 'denseMBLBP_mlhlcc_spyr' , ...
                                                 'denseMBLDP_mlhbow_spyr' , 'denseMBLDP_dl_spyr' , 'denseMBLDP_mlhlcc_spyr' , ...
                                                'mlhoee_spyr' , 'mlhmslsd_spyr' , 'mlhmslbp_spyr' , 'mlhmsldp_spyr'};
    classifier                  = {'liblinear' , 'pegasos' , 'libsvm'};
    %select the dataset by index: scenes15=1/skinan=2/agemen=3
    choice_database             = [1]; 
    %select the descriptor by index: 8 = densePATCH_dl_spyr
    choice_descriptors          = [8]; 
    %select the classifier by index: liblinear=1/pegasos=2/libsvm=3
    choice_classifier           = [1]; 

    data_name                   = database_name{choice_database(1)};
    im_ext                          = database_ext{choice_database(1)};
    %current directory, image set directory, core code directory, feature file directory and model directory
    rootbase_dir                = pwd;
    images_dir                  = fullfile(pwd , 'images' , data_name);
    core_dir                       = fullfile(pwd , 'core');
    feat_dir                        = fullfile(pwd , 'features');
    models_dir                  = fullfile(pwd , 'models');
    addpath(core_dir)
    %list the image set directory
    dir_image                   = dir(images_dir);
    %number of image classes, i.e. the number of subfolders
    nb_topic                    = length(dir_image) - 2;
    %name of each image class
    classe_name                 = cellstr(char(dir_image(3:nb_topic+2).name));

    %training parameters; K is the number of cross-validation runs (folds)
    K                                 = 1;   
    seed_value                 = 5489;
    post_norm                  = 0;
    do_weightinglearning = [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1];
    uselogic                      = 1;
    fusion_method            = 1;   %max=0/mean=1
    nbin                             = 100; % for ROC curves

    %array holding the number of images in each class (one element per class)
    nb_images_per_topic  = zeros(1 , nb_topic);
    %class label vector
    y                                   = [];
    for i = 1:nb_topic
            nb_images_per_topic(i) = length(dir(fullfile(pwd , 'images' , data_name , dir_image(i+2).name , ['*.' , im_ext])));
            y                                       = [y , i*ones(1 , nb_images_per_topic(i))];
    end
    %total number of images
    N                                           = sum(nb_images_per_topic);
    %run config_databases.m to set up the database configuration
    config_databases;
    %run the classifier configuration script named [data_name , '_config_classifier']
    eval([data_name , '_config_classifier']);
    %%
    s                                         = RandStream.create('mt19937ar','seed',seed_value);
    RandStream.setDefaultStream(s);
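    %Note: RandStream.setDefaultStream is deprecated in newer Matlab releases;
    %use RandStream.setGlobalStream(s) (or simply rng(seed_value)) there instead.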
    %preallocate the arrays holding the training and test image indices
    Itrain                                  = zeros(K , sum(base{choice_database}.maxperclasstrain));
    Itest                                   = zeros(K , sum(base{choice_database}.maxperclasstest));

    for j = 1:K
            cotrain                        = 1;
            cotest                         = 1;
            for i = 1:nb_topic
                    %indices of the images belonging to class i
                    indi                        = find(y==i);
                    %randomly permute the image indices within this class
                    tempind                 = randperm(nb_images_per_topic(i));
                    %take the configured number of training image indices
                    indtrain                  = tempind(1:base{choice_database}.maxperclasstrain(i));
                    indtest                    = tempind(base{choice_database}.maxperclasstrain(i)+1:base{choice_database}.maxperclasstrain(i)+base{choice_database}.maxperclasstest(i));
                    %training image indices for this cross-validation run
                    Itrain(j , cotrain:cotrain+base{choice_database}.maxperclasstrain(i)-1)  = indi(indtrain);
                    %test image indices for this cross-validation run
                    Itest(j  , cotest:cotest+base{choice_database}.maxperclasstest(i)-1)     = indi(indtest);
                    cotrain                      = cotrain + base{choice_database}.maxperclasstrain(i);
                    cotest                       = cotest  + base{choice_database}.maxperclasstest(i);
            end
    end
%loop over the chosen descriptors
for d = 1 : nb_descriptors    
    cdescriptors                 = choice_descriptors(d);
    base_descriptor           = descriptors_name{cdescriptors};
    %loop over the chosen classifiers
    for c = 1:nb_classifier        
        ccurrent                    = choice_classifier(c);
        base_classifier          = classifier{ccurrent};
        base_name                = [data_name , '_' , base_descriptor];
        base_name_model    = [data_name , '_' ,base_descriptor , '_' , base_classifier];        
        fprintf('\nLoad descriptor %s for classifier = %s\n\n' , base_name , base_classifier );        
        drawnow
        clear X y        
        %load the feature file for this descriptor
        load(fullfile(feat_dir , base_name ));
        %optionally apply 1-norm or 2-norm normalization to the features
        if(post_norm == 1)          
            sumX                   = sum(X , 1) + 10e-8;
            X                          = X./sumX(ones(size(X , 1) , 1) , :);           
        end
        if(post_norm == 2)          
            sumX                   = sum(X , 1);
            temp                    = sqrt(sumX.*sumX + 10e-8);
            X                          = X./temp(ones(size(X , 1) , 1) , :);           
        end        
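        %The homogeneous kernel map expands each feature vector so that a linear
        %classifier trained on the expanded data approximates an additive-kernel
        %(e.g. chi-square or histogram-intersection) SVM at a fraction of the cost.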
        if(param_classif{cdescriptors,ccurrent}.n > 0)
            fprintf('Homogeneous Feature Kernel Map with n = %d, L = %4.2f, kernel = %d\n\n', param_classif{cdescriptors,ccurrent}.n , param_classif{cdescriptors,ccurrent}.L , param_classif{cdescriptors,ccurrent}.kerneltype);
            drawnow
            X                         = homkermap(X , param_classif{cdescriptors,ccurrent});
        end        
        for k = 1:K             
            %training features selected by index
            Xtrain                  = X(: , Itrain(k , :));
            %training labels selected by index
            ytrain                   = y(Itrain(k , :));
            fprintf('\nLearn train data for classifier = %s and descriptor = %s\n\n' ,  base_classifier , base_name);
            drawnow        
            for t = 1:nb_topic     
                ind_topic            = (ytrain==t);
                ytopic                 = double(ind_topic);
                ytopic(ytopic==0) = -1;

                if((strcmp(base_classifier , 'liblinear'))  )                    
                    fprintf('cv = %d/%d, learn topic = %s (%d/%d), h1 = %10.5f for classifier = %s and descriptor = %s \n' , k , K , classe_name{t} , t , nb_topic  , param_classif{cdescriptors,ccurrent}.c , base_classifier , base_descriptor)
                    drawnow                    
                    if(do_weightinglearning(c))
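                        %re-weight the positive class by nneg/npos so the one-vs-rest
                        %training stays balanced despite having far fewer positives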
                        npos                                    = sum(ytopic==1);
                        nneg                                    = length(ytopic) - npos;
                        wpos                                    = nneg/npos;
                        options                                 = ['-q -s ' num2str(param_classif{cdescriptors,ccurrent}.s) ' -B ' num2str(param_classif{cdescriptors,ccurrent}.B) ' -w1 ' num2str(wpos)  ' -c ' num2str(param_classif{cdescriptors,ccurrent}.c)];   
                    else
                        options                                 = ['-q -s ' num2str(param_classif{cdescriptors,ccurrent}.s) ' -B ' num2str(param_classif{cdescriptors,ccurrent}.B)   ' -c ' num2str(param_classif{cdescriptors,ccurrent}.c)];
                    end
                    %train the one-vs-rest model for class t
                    model{t}                                    = train_dense(ytopic' , Xtrain , options , 'col');
                    %predict back on the training data
                    [ytopic_est ,  accuracy_test  , ftopic]     = predict_dense(ytopic' , Xtrain , model{t} , '-b 0' , 'col'); % test the training data                    
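                    %calibrate the raw decision values ftopic into class probabilities:
                    %either fit a logistic model on them with liblinear (-s 0), or fit
                    %Platt's sigmoid via sigmoid_train/sigmoid_predict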
                    if(uselogic)
                        options                                 = ['-q -s 0  -B ' num2str(param_classif{cdescriptors,ccurrent}.B)  ' -c ' num2str(param_classif{cdescriptors,ccurrent}.c)];
                        model{t}.logist                      = train_dense(ytopic' , ftopic' , options , 'col');
                    else
                        [A , B]                                 = sigmoid_train(ytopic , ftopic');
                        ptopic                                  = sigmoid_predict(ftopic' , A , B);                        
                        model{t}.A                              = A;
                        model{t}.B                              = B;
                    end        
                end
            end    %end of the per-class training loop
            clear Xtrain ytrain;
            Xtest                                      = X(: , Itest(k , :));
            ytest                                      = y(Itest(k , :));            
            fprintf('\nPredict test data for classifier = %s and descriptor = %s\n\n' ,  base_classifier , base_name);
            drawnow                   
            for t = 1:nb_topic
                ind_topic                              = (ytest==t);
                ytopic                                   = double(ind_topic);
                ytopic(ytopic==0)                 = -1;                
                fprintf('cv = %d, predict topic = %s (%d/%d) for classifier = %s and descriptor = %s\n' , k , classe_name{t} , t , nb_topic ,  base_classifier , base_descriptor);
                drawnow                
                if((strcmp(base_classifier , 'liblinear'))  )
                    [ytopic_est ,  accuracy_test , ftopic ]       = predict_dense(ytopic' , Xtest , model{t} , '-b 0' , 'col'); % test the training data
                    if(uselogic)     
                        [l2,a2,d2]                                = predict_dense(ytopic' , ftopic , model{t}.logist , '-b 1');
                        ptopic                                    = d2(:,find(model{t}.logist.Label==1))';
                    else
                        ptopic                                    = sigmoid_predict(ftopic' , model{t}.A , model{t}.B);
                    end    
            end        
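            %... (the rest of simple_train.m is omitted here; see the full file in the toolbox)

As a conceptual illustration of the score-fusion step controlled by fusion_method above (not shown in the excerpt), assume P is a matrix holding one row of class probabilities per descriptor for the same test images; P is a placeholder name:

    if(fusion_method == 1)
        pfused = mean(P , 1);       %mean fusion across descriptors
    else
        pfused = max(P , [] , 1);   %max fusion across descriptors
    end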