Decoding EEG Rhythms During Action Observation, Motor Imagery, and Execution for Standing and Sitting

 

 

Event-related desynchronization/synchronization (ERD/S) and movement-related cortical potentials (MRCPs) play an important role in brain-computer interfaces (BCIs) for lower-limb rehabilitation, particularly for standing and sitting. However, little is known about the differences in cortical activity between standing and sitting, especially how the brain's intention modulates pre-movement sensorimotor rhythms. In this study, the researchers aimed to decode continuous EEG rhythms during action observation (AO), motor imagery (MI), and motor execution (ME) for standing and sitting. They developed a behavioral task in which participants were instructed to perform AO and MI/ME of sit-to-stand and stand-to-sit movements. The results showed pronounced ERD during AO, whereas ERS was typical in the alpha band over the sensorimotor area during MI. Offline and classifier-testing analyses were performed by combining the filter bank common spatial pattern (FBCSP) with a support vector machine (SVM). The offline analysis showed that classification of AO versus MI reached the highest mean accuracy of 82.73±2.54% for the stand-to-sit transition. Through the classifier-testing analysis, the researchers demonstrated that the MI paradigm outperformed the ME paradigm in decoding neural intention.

 

To investigate the feasibility of decoding MI signals (including ERD/S) and MRCPs during motor execution from continuous EEG recordings, the experiment consisted of two sessions: MI and ME. Each session comprised 3 runs (5 trials per run), for a total of 30 trials. The experiment began in a sitting posture, after which sit-to-stand and stand-to-sit trials alternated for 5 repetitions. Fig 1 shows the sequence of the four states in each trial: R, AO, idle, and task execution (MI or ME). During the R state, a black screen was displayed on the monitor for up to 6 s, and participants were asked to remain relaxed and still. To avoid ambiguity in the instructions, a video stimulus of the sit-to-stand or stand-to-sit task, lasting 4–5 s, was presented to guide participants through the AO state. Participants were asked to perform the task in both sessions immediately after hearing an audio cue. In the ME session, participants completed the self-paced movements after the audio cue; in the MI session, they began imagining the specified movement upon hearing the audio cue. During MI, movement onset can be obtained from the audio or visual cues, whereas during ME it can be obtained from the EMG signals.

Fig 1. Timing of each trial

Timing scheme of each trial, showing the four states: rest (0–4 s), AO (4–8 s), idle (8–9 s), and task execution (MI or ME, 9–13 s).
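The four-state trial timeline above can be sketched as a simple lookup table (timings taken from the caption; the names and helper below are purely illustrative):

```python
# Four states per trial, with (start, end) times in seconds as in Fig 1.
TRIAL_STATES = {
    "R": (0, 4),       # rest: black screen, participant relaxed and still
    "AO": (4, 8),      # action observation: video cue of the movement
    "idle": (8, 9),    # short gap before the audio cue
    "task": (9, 13),   # task execution: MI or ME after the audio cue
}

def state_at(t):
    """Return the state active at time t (in seconds) within a trial."""
    for name, (start, end) in TRIAL_STATES.items():
        if start <= t < end:
            return name
    return None  # outside the 13 s trial

print(state_at(5.0))   # falls inside the AO video interval
```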

 

Data Acquisition

A sensing system was set up to record EEG, EOG, and EMG signals simultaneously throughout the experiment, as shown in Fig 2.

Fig 2

Channel configuration of the international 10-20 system (11 EEG and 2 EOG recording electrodes). The left panel shows the position of each electrode; the right panel shows the indices.

Fig 3

EEG and EOG signals

  • A g.USBamp RESEARCH amplifier was used to record the EEG and EOG signals, as shown in the figure above.

  • The sampling rate was set to 1200 Hz.

  • EEG: 11 electrodes were placed at FCz, C3, Cz, C4, CP3, CPz, CP4, P3, Pz, P4, and POz.

  • EOG: 2 electrodes were used, one below the right eye for the vertical channel (VEOG) and one for the horizontal channel (HEOG).

  • Electrode impedances for the EEG and EOG signals were kept below 10 kΩ throughout the experiment.

  • EEG and EOG signals were recorded in both the MI and ME sessions.

  • For each sit-to-stand/stand-to-sit transition in the MI/ME sessions, the raw EEG and EOG data have dimensions participants×runs×trials×channels×timepoints (8×3×5×6×16800).

Fig 4. Channel configuration of the 6 EMG recording electrodes, showing the index and position of each electrode

EMG signals

  • An OpenBCI board was used to record the EMG signals.

  • The sampling rate was set to 250 Hz.

  • 6 electrodes were placed on the rectus femoris (RF), tibialis anterior (TA), and gastrocnemius lateralis (GL) of both legs, as shown in the figure above.

  • EMG signals were recorded only in the ME session.

  • The raw EMG data for each sit-to-stand/stand-to-sit transition have dimensions participants×runs×trials×channels×timepoints (8×3×5×6×3500).
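The dimensions quoted above are consistent with a 14 s segment around each transition at each device's sampling rate; a quick sanity check (the variable names here are illustrative, not from the authors' code):

```python
import numpy as np

# Dimensions: participants x runs x trials x channels x timepoints
eeg_eog_shape = (8, 3, 5, 6, 16800)   # EEG/EOG recorded at 1200 Hz
emg_shape     = (8, 3, 5, 6, 3500)    # EMG recorded at 250 Hz

# Both time axes correspond to the same 14 s window at their sampling rates.
eeg_seconds = eeg_eog_shape[-1] / 1200   # 16800 / 1200
emg_seconds = emg_shape[-1] / 250        # 3500 / 250

# Allocating a matching zero array is a cheap way to validate indexing code.
raw_emg = np.zeros(emg_shape)
print(eeg_seconds, emg_seconds, raw_emg.shape)
```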

Data Preprocessing

Offline signal processing was performed with the MNE-Python package (version 0.20.0). For the MI and ME sessions, preprocessing was divided into two main pipelines: EEG-based MI and EEG-based MRCP. The figure below illustrates the processing of the EEG, EOG, and EMG data.

Fig 5. Overview of EEG, EOG, and EMG data preprocessing.

Overview of the EEG, EOG, and EMG data preprocessing:

  1. the preprocessing applied to the EEG and EOG data for MI;

  2. the preprocessing steps for extracting MRCPs from ME.

Motor imagery: a notch filter was set at 50 Hz to reduce line noise. The recorded EEG and EOG signals were band-pass filtered between 1–40 Hz with a 2nd-order non-causal Butterworth filter. Both signals were then downsampled to 250 Hz.
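The filtering chain just described (50 Hz notch, zero-phase 2nd-order Butterworth band-pass at 1–40 Hz, downsampling to 250 Hz) can be sketched with SciPy; `filtfilt` provides the non-causal, zero-phase behaviour. This is a generic sketch, not the authors' MNE-based implementation, and the notch quality factor `Q=30` is an assumed value:

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt, resample

fs = 1200       # original sampling rate (Hz)
new_fs = 250    # target sampling rate (Hz)

def preprocess(sig, fs=fs, new_fs=new_fs):
    """Notch at 50 Hz, band-pass 1-40 Hz (zero-phase), then downsample."""
    # 50 Hz notch to suppress line noise (Q=30 is an assumed quality factor)
    b_notch, a_notch = iirnotch(w0=50, Q=30, fs=fs)
    sig = filtfilt(b_notch, a_notch, sig)
    # 2nd-order Butterworth band-pass, applied forward-backward (non-causal)
    b_bp, a_bp = butter(N=2, Wn=[1, 40], btype="bandpass", fs=fs)
    sig = filtfilt(b_bp, a_bp, sig)
    # resample to 250 Hz
    n_out = int(len(sig) * new_fs / fs)
    return resample(sig, n_out)

x = np.random.randn(14 * fs)   # 14 s of simulated single-channel EEG
y = preprocess(x)
print(y.shape)                 # (3500,)
```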

An independent component analysis (ICA)-based eye-movement artifact correction algorithm was applied, using the EOG signals to identify the artifact components to be removed from the EEG data. The EEG signals were segmented into 4 s epochs from the onset of each class (R, AO, MI), as shown in the figure below, and then processed with a 2 s sliding window shifted by 0.2 s.

The processed data for each class of each participant form a set of trials×windows×channels×timepoints (15×11×11×500).
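The windowing above (4 s epochs, 2 s window, 0.2 s shift at 250 Hz) yields exactly 11 windows per epoch, matching the shape quoted. A minimal numpy sketch (`sliding_epoch` is an illustrative helper, not the package's `sliding_window2`):

```python
import numpy as np

sfreq = 250
win_len = 2 * sfreq       # 2 s window  -> 500 samples
step = int(0.2 * sfreq)   # 0.2 s shift -> 50 samples

def sliding_epoch(epoch, win_len=win_len, step=step):
    """Cut a (channels, timepoints) epoch into overlapping windows."""
    n_win = (epoch.shape[-1] - win_len) // step + 1
    return np.stack([epoch[:, i*step : i*step + win_len] for i in range(n_win)])

epoch = np.random.randn(11, 4 * sfreq)   # 11 channels, 4 s epoch
windows = sliding_epoch(epoch)
print(windows.shape)                      # (11, 11, 500): windows x channels x samples
```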

 

Subject-dependent classification was implemented with scikit-learn using leave-one-trial-out cross-validation (LOOCV) over the 15 trials, as shown in Fig 6. During training, the training set was first preprocessed as shown in Fig 5. The filter bank common spatial pattern (FBCSP) was used to extract spatial features from the downsampled training set and generate the feature vectors for classification. Notably, FBCSP generally performs well in MI classification tasks; it was introduced as an extension of the common spatial pattern (CSP) that autonomously selects discriminative EEG features from multiple filter banks. In this study, the MI filter bank comprised 9 band-pass filters of 4 Hz bandwidth covering 4 to 40 Hz (4–8 Hz, 8–12 Hz, …, 36–40 Hz). For the MRCPs, a filter bank of 6 band-pass filters was constructed, in which the first band had a bandwidth of 0.4 Hz and the remaining bands 0.5 Hz (0.1–0.5 Hz, 0.5–1 Hz, …, 2.5–3 Hz).
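The two filter banks described above can be generated programmatically; a quick sketch confirming the band counts (band edges as given in the text):

```python
# MI filter bank: nine 4 Hz-wide bands covering 4-40 Hz
mi_bands = [(low, low + 4) for low in range(4, 40, 4)]
# [(4, 8), (8, 12), ..., (36, 40)]

# MRCP filter bank: one 0.4 Hz-wide band, then five 0.5 Hz-wide bands up to 3 Hz
mrcp_bands = [(0.1, 0.5)] + [(0.5 + 0.5*i, 1.0 + 0.5*i) for i in range(5)]
# [(0.1, 0.5), (0.5, 1.0), ..., (2.5, 3.0)]

print(len(mi_bands), len(mrcp_bands))
```

Each band would then be band-pass filtered, passed through CSP, and the resulting features concatenated before feature selection and SVM classification, which is the usual FBCSP arrangement.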

Fig 6.

Fig 6 shows the leave-one-trial-out cross-validation (LOOCV) architecture of the binary classification models with grid-search-based hyperparameter selection. LOOCV was performed for each subject separately.

Fig 7.

Fig 7 shows the flow of the pseudo-online analysis used in the MI session. Once action observation has been detected 5 consecutive times (AO >= 5), the action observation versus motor imagery (AO vs. MI) classification model, determined with the aid of a grid search algorithm, is applied.
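The AO >= 5 trigger in the pseudo-online flow amounts to a run-length counter over the classifier's window-by-window decisions; a minimal sketch (the threshold and labels are as described above, while the function name is illustrative):

```python
def trigger_index(decisions, target="AO", threshold=5):
    """Return the index at which `target` has been detected `threshold`
    times in a row, or None if that never happens."""
    streak = 0
    for i, d in enumerate(decisions):
        streak = streak + 1 if d == target else 0  # reset on any other label
        if streak >= threshold:
            return i
    return None

# Window-wise classifier outputs for one hypothetical trial:
stream = ["R", "R", "AO", "AO", "R", "AO", "AO", "AO", "AO", "AO", "MI"]
print(trigger_index(stream))   # fires on the 5th consecutive "AO"
```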

Fig 8. Neural responses during the MI task. Event-related spectral perturbations (ERSPs) between 4–40 Hz, grand-averaged over whole trials, for the (a) sit-to-stand and (b) stand-to-sit tasks, relative to the R-state baseline (−1 to 0 s). The interval 0–4 s corresponds to the R state, 4–8 s to the AO state, 8–9 s to the idle state, and the interval from 9 s onward to the execution state. For visualization, the sampling rate was set to 600 Hz. All ERSP values shown are statistically significant relative to baseline (p = 0.05).

Fig 8.

Fig 9 shows the grand-average MRCP waveforms (11 channels) of the 8 participants for the (a) sit-to-stand and (b) stand-to-sit transitions, from −1.5 to 1 s relative to actual movement onset.

Fig 9.

The scalp topographies show the spatial representation of the MRCP amplitude over time.

In the task developed in this study, participants were instructed to perform AO and MI/ME of sit-to-stand and stand-to-sit movements. The results showed pronounced ERD during AO and typical alpha-band ERS over the sensorimotor area during MI. Offline and classifier-testing analyses were performed by combining the filter bank common spatial pattern (FBCSP) with a support vector machine (SVM). The offline analysis showed that AO vs. MI classification reached the highest mean accuracy of 82.73±2.54% for the stand-to-sit transition, and the classifier-testing analysis demonstrated that the MI paradigm outperformed the ME paradigm in decoding neural intention.

 

These observations point to the promise of using the developed task, which integrates AO and MI, to build future exoskeleton-based rehabilitation systems.

Project Example Code

Source code for processing motor imagery data with FBCSP

run_fbcsp_mi

import numpy as np
import os
import sys
import csv
from sklearn.model_selection import KFold


from pysitstand.model import fbcsp
from pysitstand.utils import sliding_window, sliding_window2
from pysitstand.eeg_preprocessing import apply_eeg_preprocessing
"""
Binary classification model.
We apply FBCSP-SVM (9 subbands from 4-40 Hz) on the subject-dependent scheme (leave a single trial for testing) for EEG-based MI classification.
x sec window size with y% step (0.1 means overlap 90%)


# How to run


>> python run_fbcsp_mi.py <window_size> <step> <filter order> <performing task> <prediction model> <artifact remover>


>> python run_fbcsp_mi.py 2 0.1 4 stand R_vs_AO rASR
>> python run_fbcsp_mi.py 2 0.1 4 sit R_vs_AO rASR
>> python run_fbcsp_mi.py 2 0.1 4 stand AO_vs_MI rASR
>> python run_fbcsp_mi.py 2 0.1 4 sit AO_vs_MI rASR


>> python run_fbcsp_mi.py 2 0.1 4 stand AO_vs_MI ICA && python run_fbcsp_mi.py 2 0.1 4 stand R_vs_AO ICA
>> python run_fbcsp_mi.py 2 0.1 4 sit AO_vs_MI ICA && python run_fbcsp_mi.py 2 0.1 4 sit R_vs_AO ICA
"""


def load_data(subject, task, prediction_model, artifact_remover, filter_order, window_size, step, sfreq):
    # load the data and apply preprocessing


    # filter params
    notch = {'f0': 50}
    bandpass = {'lowcut': 1, 'highcut': 40, 'order': filter_order}
    ica = {'new_sfreq': sfreq, 'save_name': None, 'threshold': 2}
    rASR = {'new_sfreq': sfreq}


    # preprocessing steps are applied in this order
    if artifact_remover == 'ICA':
        filter_medthod = {'notch_filter': notch, 
                    'butter_bandpass_filter': bandpass,
                    'ica': ica}
    elif artifact_remover == 'rASR':
        filter_medthod = {'notch_filter': notch, 
                    'butter_bandpass_filter': bandpass,
                    'rASR': rASR}


    
    # apply the filters and the chosen artifact remover (ICA or rASR)
    data = apply_eeg_preprocessing(subject_name=subject, session='mi', task=task, filter_medthod=filter_medthod)


    # data : 15 sec 
    # define data
    R_class = data[:,:,int(2*sfreq):int(6*sfreq)]          # rest: 2 to 6 s
    AO_class = data[:,:,int(6*sfreq):int(10*sfreq)]        # action observation: 6 to 10 s
    MI_class = data[:,:,int(11*sfreq):int(15*sfreq)]       # motor imagery: 11 to 15 s (beep at 11 s)


    len_data_point = R_class.shape[-1]
    win_len_point = int(window_size*sfreq)  # samples per window (also defined globally in __main__)
    num_windows = int(((len_data_point-win_len_point)/(win_len_point*step))+1)


    # define class
    if prediction_model == 'R_vs_AO':
        # sliding windows
        R_class_slided = np.zeros([15, num_windows, 11, window_size*sfreq])
        AO_class_slided = np.zeros([15, num_windows, 11, window_size*sfreq])
        for i, (R,AO) in enumerate(zip(R_class, AO_class)):
            R_class_slided[i,:,:,:] = np.copy(sliding_window2(np.array([R]), win_sec_len=window_size, step=step, sfreq=sfreq))
            AO_class_slided[i,:,:,:] = np.copy(sliding_window2(np.array([AO]), win_sec_len=window_size, step=step, sfreq=sfreq))
        X0 = np.copy(R_class_slided)
        X1 = np.copy(AO_class_slided)
    elif prediction_model == 'AO_vs_MI':
        # sliding windows
        AO_class_slided = np.zeros([15, num_windows, 11, window_size*sfreq])
        MI_class_slided = np.zeros([15, num_windows, 11, window_size*sfreq])
        for i, (AO,MI) in enumerate(zip(AO_class, MI_class)):
            AO_class_slided[i,:,:,:] = np.copy(sliding_window2(np.array([AO]), win_sec_len=window_size, step=step, sfreq=sfreq))
            MI_class_slided[i,:,:,:] = np.copy(sliding_window2(np.array([MI]), win_sec_len=window_size, step=step, sfreq=sfreq))
        X0 = np.copy(AO_class_slided)
        X1 = np.copy(MI_class_slided)
            
    del data, R_class, AO_class, MI_class


    y0 = np.zeros([X0.shape[0], X0.shape[1]])
    y1 = np.ones([X1.shape[0], X1.shape[1]])
    assert len(X0) == len(y0)
    assert len(X1) == len(y1)
    return X0, y0, X1, y1


if __name__ == "__main__":


    window_size = int(sys.argv[1]) # 1,2,3 sec.
    step = float(sys.argv[2]) # 0.1 --> overlap(90%)
    filter_order = int(sys.argv[3]) # order of all filters (e.g. 2)
    task = sys.argv[4] # stand, sit
    prediction_model = sys.argv[5] # R_vs_AO, AO_vs_MI
    artifact_remover = sys.argv[6] # ICA, rASR
    sfreq = 250 # new sampling rate [max = 1200 Hz]
    win_len_point = int(window_size*sfreq)


    for x in sys.argv:
        print("Argument: ", x)
    
    subjects = [ 'S01', 'S02', 'S03', 'S04', 'S05', 'S06', 'S07', 'S08']


    if task == 'stand':
        save_name = 'sit_to_stand_mi'
    elif task == 'sit':
        save_name = 'stand_to_sit_mi'


    if prediction_model == 'R_vs_AO':
        save_path = 'MI-v2-'+artifact_remover+'-FBCSP-cv-'+str(window_size)+'s_'+task+'_'+prediction_model+'_filter_order_'+str(filter_order)
    elif prediction_model == 'AO_vs_MI':
        save_path = 'MI-v2-'+artifact_remover+'-FBCSP-cv-'+str(window_size)+'s_'+task+'_'+prediction_model+'_filter_order_'+str(filter_order)


    header = [ 'fold', 'accuracy', 
                '0.0 f1-score', '1.0 f1-score', 'average f1-score',
                '0.0 recall', '1.0 recall', 'average recall',
                '0.0 precision', '1.0 precision', 'average precision',
                'sensitivity', 'specificity'
            ]


    from joblib import dump  # used to save the trained classifier of each fold

    sum_value_all_subjects = []
    for subject in subjects:
        print('===================='+subject+'==========================')


        for directory in [save_path, save_path+'/model', save_path+'/y_slice_wise']:
            if not os.path.exists(directory):
                os.makedirs(directory)


        # load the data and apply preprocessing
        X0, y0, X1, y1 = load_data(subject=subject, task=task, 
                                    prediction_model=prediction_model, 
                                    artifact_remover=artifact_remover,
                                    filter_order=filter_order, 
                                    window_size=window_size, 
                                    step=step, 
                                    sfreq=sfreq)


        with open(save_path+'/'+save_path+'_result.csv', 'a') as csvFile:
            writer = csv.writer(csvFile)
            writer.writerow([str(subject)])
            writer.writerow(header)


        kf = KFold(n_splits=15, shuffle=False) # Define the split - into 15 folds 
        print(kf)
        accuracy_sum, precision_0_sum, recall_0_sum, f1_0_sum, precision_1_sum, recall_1_sum, f1_1_sum, precision_sum, recall_sum, f1_sum = [], [], [], [], [], [], [], [], [], []
        sen_sum, spec_sum = [], []
        predict_result = []
        X_csp_com = []
        for index_fold, (train_idx, test_idx) in enumerate(kf.split(X0)):
            print("=============fold {:02d}==============".format(index_fold))
            print('fold: {}, train_index: {}, test_index: {}'.format(index_fold, train_idx, test_idx))


            X0_train, X1_train = X0[train_idx], X1[train_idx]
            y0_train, y1_train = y0[train_idx], y1[train_idx]
            X0_test, X1_test = X0[test_idx], X1[test_idx]
            y0_test, y1_test = y0[test_idx], y1[test_idx]


            X_train = np.concatenate((X0_train.reshape(-1, X0_train.shape[-2], X0_train.shape[-1]), 
                        X1_train.reshape(-1, X1_train.shape[-2], X1_train.shape[-1])), axis=0)
            y_train = np.concatenate((y0_train.reshape(-1), y1_train.reshape(-1)), axis=0)


            X_test = np.concatenate((X0_test.reshape(-1, X0_test.shape[-2], X0_test.shape[-1]), 
                        X1_test.reshape(-1, X1_test.shape[-2], X1_test.shape[-1])), axis=0)
            y_test = np.concatenate((y0_test.reshape(-1), y1_test.reshape(-1)), axis=0)
            
            print("Dimension of training set is: {} and label is: {}".format(X_train.shape, y_train.shape))
            print("Dimension of testing set is: {} and label is: {}".format(X_test.shape, y_test.shape))
        
            # classification
            accuracy, report, sen, spec, X_test_csp, y_true, y_pred, classifier = fbcsp(X_train=X_train, y_train=y_train,
                                                                                        X_test=X_test, y_test=y_test, 
                                                                                        filter_order=filter_order, session='mi')
            dump(classifier, save_path+'/model/'+subject+save_name+'_'+str(index_fold+1).zfill(2)+'.gz') 
            
            # saving
            precision_0 = report['0.0']['precision']
            recall_0 = report['0.0']['recall']
            f1_0 = report['0.0']['f1-score']


            precision_1 = report['1.0']['precision']
            recall_1 = report['1.0']['recall']
            f1_1 = report['1.0']['f1-score']
            
            precision = report['weighted avg']['precision']
            recall = report['weighted avg']['recall']
            f1 = report['weighted avg']['f1-score']


            accuracy_sum.append(accuracy)


            precision_0_sum.append(precision_0)
            recall_0_sum.append(recall_0)
            f1_0_sum.append(f1_0)


            precision_1_sum.append(precision_1)
            recall_1_sum.append(recall_1)
            f1_1_sum.append(f1_1)


            precision_sum.append(precision)
            recall_sum.append(recall)
            f1_sum.append(f1)
            sen_sum.append(sen)
            spec_sum.append(spec)


            row = [index_fold+1, accuracy,
                f1_0, f1_1, f1,
                recall_0, recall_1, recall,
                precision_0, precision_1, precision,
                sen, spec]




            predict_result.append([y_true, y_pred])
            X_csp_com.append(X_test_csp)


            with open(save_path+'/'+save_path+'_result.csv', 'a') as csvFile:
                writer = csv.writer(csvFile)
                writer.writerow(row)


            print(subject, 'save DONE!!!!')
            print('***************************************')
            print('***************************************')
            print('***************************************')
            print('***************************************')


        mean_value = [np.mean(accuracy_sum),
        np.mean(f1_0_sum), np.mean(f1_1_sum), np.mean(f1_sum),
        np.mean(recall_0_sum), np.mean(recall_1_sum), np.mean(recall_sum),
        np.mean(precision_0_sum), np.mean(precision_1_sum), np.mean(precision_sum),
        np.mean(sen_sum), np.mean(spec_sum)]


        sum_value_all_subjects.append(mean_value)


        np.savez(save_path+'/y_slice_wise/'+subject+save_name+'.npz', x = np.array(X_csp_com), y = np.array(predict_result))


        with open(save_path+'/'+save_path+'_result.csv', 'a') as csvFile:
            writer = csv.writer(csvFile)
            writer.writerow(['mean', mean_value[0], 
            mean_value[1], mean_value[2], mean_value[3], 
            mean_value[4], mean_value[5], mean_value[6], 
            mean_value[7], mean_value[8], mean_value[9],
            mean_value[10], mean_value[11]])
            writer.writerow([])
    
    mean_all = np.mean(sum_value_all_subjects, axis=0)
    print(mean_all)


    with open(save_path+'/'+save_path+'_result.csv', 'a') as csvFile:
        writer = csv.writer(csvFile)
        writer.writerow(['accuracy', 
                    '0.0 f1-score', '1.0 f1-score', 'average f1-score',
                    '0.0 recall', '1.0 recall', 'average recall',
                    '0.0 precision', '1.0 precision', 'average precision',
                    'sensitivity', 'specificity'
                    ])
        writer.writerows(sum_value_all_subjects)
        writer.writerow(mean_all)


Paper Information

Decoding EEG Rhythms During Action Observation, Motor Imagery, and Execution for Standing and Sitting
