DAIC-WOZ Dataset (2) --- Visual Signals

**Visual signals**
Visual signals from the face-to-face data show that several features can serve as indicators of depression, anxiety, and PTSD (Scherer et al., 2013b; Scherer et al., 2014). Specifically, these forms of psychological distress are predicted by a more downward gaze angle, less intense smiles and shorter average smile durations, as well as longer self-touches and longer average fidgeting with both the hands (e.g. rubbing, stroking) and the legs (e.g. tapping, shaking). Moreover, the predictive ability of these indicators is moderated by gender (Stratou et al., 2013). A crossover interaction was observed between gender and distress level on emotional displays such as frowning, contempt, and disgust. For example, men who scored positively for depression tended to display more frowning than men who did not, whereas women who scored positively for depression tended to display less frowning than those who did not. Other features, such as the variability of facial expressions, show a main effect of gender (women tend to be more expressive than men), while still others, such as head-rotation variation, were entirely gender-independent.
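
The crossover interaction reported by Stratou et al. (2013) corresponds to a gender × feature interaction term in a regression model. Below is a minimal sketch with statsmodels; the table layout and column names (frown_intensity, gender, phq_positive) are hypothetical placeholders, not fields shipped with the corpus.

```python
# Sketch: testing a gender x expression crossover interaction of the kind
# reported in Stratou et al. (2013). Column names and the input file are
# hypothetical placeholders, not part of the released corpus.
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per participant:
#   frown_intensity : mean frown intensity over the interview
#   gender          : "male" / "female"
#   phq_positive    : 1 if the participant screened positive for depression
df = pd.read_csv("participant_features.csv")  # hypothetical file

# A significant frown_intensity:gender coefficient with opposite-signed
# simple effects in the two groups is the crossover pattern described above.
model = smf.logit("phq_positive ~ frown_intensity * C(gender)", data=df).fit()
print(model.summary())
```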


**Non-verbal behavior annotation**
Several non-verbal behaviors were annotated (Waxer, 1974; Hall et al., 1995): gaze directionality (up, down, left, right, towards the interviewer), listening smiles (smiles while not speaking), self-adaptors (self-touches on the hand, body, and head), fidgeting behaviors, and foot-tapping or shaking behaviors. Each behavior was annotated in a separate tier in ELAN. Four student annotators participated in the annotation; each tier was assigned to a pair of annotators, who first went through a training phase until their inter-rater agreement (Krippendorff's alpha) exceeded 0.7. Following training, each video was annotated by a single annotator; to monitor reliability, every 10–15 videos each pair was assigned the same video and inter-rater agreement was re-checked. Annotators were informed that their reliability was being measured but did not know which videos were used for cross-checking (Wildman et al., 1975; Harris and Lahey, 1982).
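
The agreement criterion above can be checked with the open-source krippendorff Python package; a minimal sketch follows, assuming the two annotators' gaze codes have been exported from ELAN and aligned segment-by-segment (the label-to-integer mapping and the data are illustrative).

```python
# Sketch: checking inter-rater agreement on a nominal annotation tier,
# assuming aligned label sequences exported from ELAN.
# Requires the `krippendorff` package (pip install krippendorff).
import numpy as np
import krippendorff

# Gaze codes mapped to integers (illustrative mapping):
# 0=down, 1=up, 2=left, 3=right, 4=towards interviewer.
# np.nan marks a segment one annotator skipped.
annotator_a = [0, 0, 4, 2, 0, np.nan, 1]
annotator_b = [0, 0, 4, 2, 1, 1,      1]

alpha = krippendorff.alpha(
    reliability_data=[annotator_a, annotator_b],
    level_of_measurement="nominal",
)
print(f"Krippendorff's alpha = {alpha:.3f}")
# The annotation protocol required alpha to exceed 0.7 before training ended.
```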


In addition, automatic annotation of non-verbal features was carried out using a multimodal sensor fusion framework called MultiSense, whose multithreading architecture enables different face- and body-tracking technologies to run in parallel and in real time. Output from MultiSense was used to estimate head orientation, eye-gaze direction, smile level, and smile duration. Further, we automatically analyzed voice characteristics, including speakers' prosody (e.g. fundamental frequency and voice intensity) and voice quality characteristics on a breathy-to-tense dimension (Scherer et al., 2013a).
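
MultiSense itself is not publicly distributed, so the sketch below stands in for it with off-the-shelf audio tooling only: it estimates two of the prosodic features named above, fundamental frequency and voice intensity, using librosa. The audio path is a placeholder, and this is an illustrative stand-in rather than the corpus pipeline.

```python
# Sketch: estimating fundamental frequency (F0) and voice intensity with
# librosa. Illustrative only; not the MultiSense / DAIC-WOZ pipeline.
import librosa
import numpy as np

y, sr = librosa.load("participant_audio.wav", sr=16000)  # placeholder path

# F0 via probabilistic YIN, restricted to a typical speech range.
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Frame-level intensity as root-mean-square energy, converted to dB.
rms = librosa.feature.rms(y=y)[0]
intensity_db = librosa.amplitude_to_db(rms, ref=np.max)

print(f"mean F0 (voiced frames): {np.nanmean(f0[voiced_flag]):.1f} Hz")
print(f"mean intensity: {intensity_db.mean():.1f} dB (rel. to peak)")
```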


**Applications**

The corpus supports the automated agent's interactive capabilities: it was used to develop custom acoustic and language models for speech recognition, to train classifiers for natural language understanding, and to inform the design of the dialogue policy; see DeVault et al. (2014) for details. The corpus is also used to support the agent's distress-detection capabilities, which draw on several types of information, including visual signals, voice quality, and dialogue-level features.
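
As a rough picture of such a distress detector, the sketch below fuses per-interview features from the three modalities mentioned and trains a standard classifier with scikit-learn; all feature arrays are randomly generated placeholders, not the corpus's actual feature set or the agent's deployed model.

```python
# Sketch: a simple multimodal distress classifier in the spirit described
# above. Features are random placeholders, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 120  # hypothetical number of interviews

visual = rng.normal(size=(n, 4))    # e.g. gaze angle, smile level/duration, fidgeting
voice = rng.normal(size=(n, 3))     # e.g. F0 statistics, intensity, breathy-tense score
dialogue = rng.normal(size=(n, 2))  # e.g. response latency, turn length

X = np.hstack([visual, voice, dialogue])  # early (feature-level) fusion
y = rng.integers(0, 2, size=n)            # 1 = screened positive for distress

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```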

