Discriminative Models and Generative Models

http://blog.sciencenet.cn/home.php?mod=space&uid=248173&do=blog&id=227964

http://www.gooseeker.com/cn/node/knowledgebase/discriminative_generative_model

Article 1

【Abstract】
- Generative model: unlimited samples ==> probability density model = generative model ==> prediction
- Discriminative model: finite samples ==> discriminant function = predictive model ==> prediction

【Introduction】
Simply put, let o be an observation and q the model (class).
If we model P(o|q), we have a generative model. The basic idea is to first build a probability density model of the samples and then use that model for inference and prediction. This requires the sample size to be effectively unlimited, or at least as large as possible.
This approach generally rests on statistical mechanics and Bayesian theory.
If we model the conditional (posterior) probability P(q|o), we have a discriminative model. The basic idea is to build a discriminant function under finite-sample conditions, studying the prediction model directly without modeling how the samples are generated. The representative theory is statistical learning theory.
The two approaches overlap considerably today.

【Discriminative Model】——inter-class probabilistic description

Also called a conditional model or conditional probability model, it estimates the conditional distribution p(class|context).
Using positive and negative examples together with their class labels, it focuses on the decision boundary between classes; the objective function corresponds directly to classification accuracy.

- Main characteristics:
Seeks the optimal separating surface between classes, capturing the differences between data of different classes.
- Advantages:
The classification boundary is more flexible and more expressive than what a pure probabilistic or generative approach yields.
Can clearly identify the features that distinguish several classes, or one class from the rest.
Works well under clutter, viewpoint changes, partial occlusion, and scale variations.
Suitable for recognition tasks with many classes.
Discriminative models are simpler than generative models and easier to learn.
- Disadvantages:
Cannot reflect the properties of the training data itself. Its capability is limited: it can tell you whether an example is class 1 or class 2, but it cannot describe the whole scene.
Lacks the elegance of generative models: priors, structure, uncertainty.
Requires alternative notions of penalty functions, regularization, and kernels.
Black-box operation: the relationships between variables are unclear and cannot be visualized.

- Common examples (a minimal sketch of one of these follows the list):
logistic regression
SVMs
traditional neural networks
Nearest neighbor
Conditional random fields (CRF): a newer, popular model that originated in NLP and is spreading to ASR and CV.
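
To make the discriminative idea concrete, here is a minimal sketch of the first model above, logistic regression, which parameterizes p(y|x) directly and never models p(x). The data, learning rate, and iteration count are all invented for illustration.

    # Minimal logistic regression: a discriminative model of p(y|x).
    # Data and hyperparameters below are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    # Two synthetic 1-D classes: y=0 centered at -1, y=1 centered at +1.
    x = np.concatenate([rng.normal(-1.0, 0.5, 50), rng.normal(1.0, 0.5, 50)])
    y = np.concatenate([np.zeros(50), np.ones(50)])

    w, b = 0.0, 0.0                             # p(y=1|x) = sigmoid(w*x + b)
    for _ in range(1000):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # predicted p(y=1|x)
        w -= 0.1 * np.mean((p - y) * x)         # gradient step on the log-loss
        b -= 0.1 * np.mean(p - y)

    # The model only learns where p(y=1|x) = 0.5 (the decision boundary);
    # it says nothing about how x itself is distributed.
    print("decision boundary at x =", -b / w)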

- Main applications:
Image and document classification
Biosequence analysis
Time series prediction

【Generative Model】——intra-class probabilistic description

Also called a productive model, it estimates the joint probability distribution p(class, context) = p(class|context) * p(context).

A generative model describes how observations are randomly generated, typically given some hidden parameters. In machine learning it is used either to model data directly (modeling the observed draws with a probability density function) or as an intermediate step toward a conditional probability density function: by Bayes' rule, p(class|context) = p(context|class) * p(class) / p(context), so the conditional distribution can be recovered from the generative model.

If the observed data really are generated by the assumed generative model, then it makes sense to fit the model's parameters so as to maximize the likelihood of the data. But data are rarely generated exactly by such a model, so it is often more accurate to model the conditional density directly, i.e., to use classification or regression analysis.
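
As a toy illustration of "fitting the model's parameters to maximize the likelihood of the data", here is the closed-form maximum-likelihood fit of a 1-D Gaussian; the sample values are hypothetical.

    # Maximum-likelihood fit of a 1-D Gaussian generative model p(x).
    import numpy as np

    samples = np.array([1.9, 2.1, 2.4, 1.7, 2.0])   # made-up observations
    mu = samples.mean()         # MLE of the mean
    var = samples.var()         # MLE of the variance (divides by n, not n-1)

    # Log-likelihood of the data under the fitted density:
    loglik = -0.5 * np.sum(np.log(2 * np.pi * var)
                           + (samples - mu) ** 2 / var)
    print(mu, var, loglik)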

This differs from a descriptive model, in which all variables are measured directly.

- Main characteristics:
Models the class-conditional (and hence joint) distribution, representing how the data are distributed from a statistical perspective and reflecting the similarity of data within the same class.
It attends only to its own class (the probability mass inside that class's own region) and does not care where the decision boundary lies.
- Advantages:
In practice it carries richer information than a discriminative model.
More flexible than a discriminative model for single-class problems.
The model can be learned incrementally.
Can be used when data are incomplete (missing data).
Modular construction of composed solutions to complex problems.
Prior knowledge can easily be taken into account.
Robust to partial occlusion and viewpoint changes.
Can tolerate significant intra-class variation of object appearance.
- Disadvantages:
Tends to produce a significant number of false positives, particularly for object classes that share high visual similarity, such as horses and cows.
Learning and inference are relatively complex.

- Common examples:
Gaussians, Naive Bayes, Mixtures of multinomials
Mixtures of Gaussians, Mixtures of experts, HMMs
Sigmoidal belief networks, Bayesian networks
Markov random fields

The generative models listed above can also be trained discriminatively, e.g., GMMs or HMMs, using EBW (Extended Baum-Welch) or the Large Margin method recently proposed by Fei Sha.
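
For contrast with the logistic-regression sketch earlier, here is a hedged sketch of one generative model from the list, a 1-D Gaussian naive Bayes classifier: it fits p(x|class) and p(class) by maximum likelihood and classifies through Bayes' rule. The data and class structure are invented.

    # Gaussian naive Bayes: a generative classifier that models p(x|class)
    # and p(class), then classifies via Bayes' rule. Data are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    x0 = rng.normal(-1.0, 0.7, 60)          # samples from class 0
    x1 = rng.normal(1.5, 0.7, 40)           # samples from class 1

    params = [(x0.mean(), x0.var()), (x1.mean(), x1.var())]  # MLE Gaussians
    n = len(x0) + len(x1)
    priors = [len(x0) / n, len(x1) / n]     # p(class) from class frequencies

    def log_density(x, mu, var):
        return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

    def predict(x):
        # Bayes' rule: argmax_c p(x|c) * p(c); the evidence p(x) cancels.
        scores = [log_density(x, mu, var) + np.log(p)
                  for (mu, var), p in zip(params, priors)]
        return int(np.argmax(scores))

    print(predict(0.2), predict(-2.0))      # classify two new points

Unlike the discriminative sketch, this model also defines p(x) (as a mixture of the two class densities), so it could generate new samples as well as classify.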

- Main applications:
NLP:
Traditional rule-based or Boolean logic systems (Dialog and Lexis-Nexis) are giving way to statistical approaches (Markov models and stochastic context-free grammars)
Medical diagnosis:
The QMR knowledge base, initially a heuristic expert system for reasoning about diseases and symptoms, has been augmented with a decision-theoretic formulation
Genomics and bioinformatics:
Sequences represented as generative HMMs

【The Relationship Between the Two】
A discriminative model can be obtained from a generative model, but a generative model cannot be obtained from a discriminative model.
Can performance of SVMs be combined elegantly with flexible Bayesian statistics?
Maximum Entropy Discrimination marries both methods: Solve over a distribution of parameters (a distribution over solutions)

Article 2

Let's say you have input data x and you want to classify the data into labels y. A generative model learns the joint probability distribution p(x,y) and a discriminative model learns the conditional probability distribution p(y|x) - which you should read as 'the probability of y given x'.

Here's a really simple example. Suppose you have the following data in the form (x,y):

       (1,0), (1,0), (2,0), (2, 1)

p(x,y) is

             y=0   y=1
            -----------
       x=1 | 1/2   0
       x=2 | 1/4   1/4

p(y|x) is

             y=0   y=1
            -----------
       x=1 | 1     0
       x=2 | 1/2   1/2

If you take a few minutes to stare at those two matrices, you will understand the difference between the two probability distributions.

The distribution p(y|x) is the natural distribution for classifying a given example x into a class y, which is why algorithms that model this directly are called discriminative algorithms. Generative algorithms model p(x,y), which can be transformed into p(y|x) by applying Bayes' rule and then used for classification. However, the distribution p(x,y) can also be used for other purposes. For example, you could use p(x,y) to generate likely (x,y) pairs.
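
As a sketch grounded in the four data points above, the following lines reproduce both tables, recover p(y|x) from p(x,y) by normalizing each row by the marginal p(x), and use the joint to generate new pairs.

    # Reconstruct p(x,y) and p(y|x) from the four (x,y) pairs above,
    # then sample new pairs from the joint. Illustration only.
    import numpy as np

    data = [(1, 0), (1, 0), (2, 0), (2, 1)]
    xs, ys = [1, 2], [0, 1]

    # Empirical joint p(x,y): count each pair, divide by the total.
    joint = np.array([[sum(1 for d in data if d == (x, y)) for y in ys]
                      for x in xs]) / len(data)
    print(joint)    # [[0.50 0.00]
                    #  [0.25 0.25]]  -- the p(x,y) table

    # p(y|x) = p(x,y) / p(x): normalize each row by its marginal p(x).
    cond = joint / joint.sum(axis=1, keepdims=True)
    print(cond)     # [[1.0 0.0]
                    #  [0.5 0.5]]   -- the p(y|x) table

    # Only the joint supports generation: draw likely (x,y) pairs from it.
    rng = np.random.default_rng(0)
    idx = rng.choice(4, size=5, p=joint.ravel())
    print([(xs[i // 2], ys[i % 2]) for i in idx])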

From the description above you might be thinking that generative models are more generally useful and therefore better, but it's not as simple as that. This paper is a very popular reference on the subject of discriminative vs. generative classifiers, but it's pretty heavy going. The overall gist is that discriminative models generally outperform generative models in classification tasks.

Another explanation, excerpted below:

  • Discriminative Model: also called a conditional model or conditional probability model. It estimates the conditional distribution p(class|context).
  • Generative Model: also called a productive model. It estimates the joint probability distribution p(class, context) = p(class|context) * p(context).