Supervised Classification
Classification is called supervised when the classifier is built from training corpora that contain the correct label for each input. The framework is shown in the figure below:
Gender Identification
The following feature extractor function builds a dictionary:
>>> def gender_features(word):
...     return {'last_letter': word[-1]}
>>> gender_features('Shrek')
{'last_letter': 'k'}
The dictionary returned by this function is called a feature set.
>>> import nltk
>>> import random
>>> from nltk.corpus import names
>>> names = ([(name, 'male') for name in names.words('male.txt')] +
...          [(name, 'female') for name in names.words('female.txt')])
>>> random.shuffle(names)
Next, we use the feature extractor to process the name data, and divide the resulting list of feature sets into a training set and a test set. The training set is used to train a new "naive Bayes" classifier.
>>> featuresets = [(gender_features(n), g) for (n,g) in names]
>>> train_set, test_set = featuresets[500:], featuresets[:500]
>>> classifier = nltk.NaiveBayesClassifier.train(train_set)
Now let's test the classifier on some names that did not appear in its training data:
>>> classifier.classify(gender_features('Neo'))
'male'
>>> classifier.classify(gender_features('Trinity'))
'female'
Evaluate the accuracy on the test set:
>>> nltk.classify.accuracy(classifier, test_set)
0.758
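nltk.classify.accuracy simply measures the fraction of test instances that the classifier labels correctly. A minimal sketch of that computation, using a toy stand-in classifier and hand-built feature sets (the names manual_accuracy and LastLetterStub are ours, not NLTK's):

```python
def manual_accuracy(classifier, test_set):
    # Fraction of (featureset, label) pairs that the classifier labels correctly
    correct = sum(1 for feats, label in test_set
                  if classifier.classify(feats) == label)
    return correct / len(test_set)

class LastLetterStub:
    """Toy stand-in for a trained classifier: predicts 'female'
    exactly when the last letter is 'a'."""
    def classify(self, feats):
        return 'female' if feats['last_letter'] == 'a' else 'male'

test_set = [({'last_letter': 'a'}, 'female'),
            ({'last_letter': 'k'}, 'male'),
            ({'last_letter': 'n'}, 'female')]
print(manual_accuracy(LastLetterStub(), test_set))  # 2 of 3 correct
```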
Finally, we can examine the classifier to determine which features it found most effective for distinguishing the genders of names:
>>> classifier.show_most_informative_features(5)
Most Informative Features
             last_letter = 'a'            female : male   =     38.3 : 1.0
             last_letter = 'k'              male : female =     31.4 : 1.0
             last_letter = 'f'              male : female =     15.3 : 1.0
             last_letter = 'p'              male : female =     10.6 : 1.0
             last_letter = 'w'              male : female =     10.6 : 1.0
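The ratios in this listing are likelihood ratios: roughly, how much more probable a feature value is under one label than under the other. A rough sketch of the idea on a toy name list (NLTK's actual estimates are additionally smoothed, so the numbers would differ):

```python
from collections import Counter

def likelihood_ratio(labeled_names, letter):
    """Approximate P(last_letter == letter | female) /
    P(last_letter == letter | male) from raw counts."""
    by_label = {'male': Counter(), 'female': Counter()}
    totals = Counter()
    for name, label in labeled_names:
        by_label[label][name[-1].lower()] += 1
        totals[label] += 1
    p_female = by_label['female'][letter] / totals['female']
    p_male = by_label['male'][letter] / totals['male']
    return p_female / p_male

toy = [('Anna', 'female'), ('Maria', 'female'), ('Linda', 'female'),
       ('Rita', 'female'), ('Mark', 'male'), ('John', 'male'),
       ('Erik', 'male'), ('Joshua', 'male')]
print(likelihood_ratio(toy, 'a'))  # 4/4 female vs 1/4 male -> 4.0
```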
Choosing the Right Features
The feature extractor below returns feature sets containing a large number of specific features, which leads to overfitting on the relatively small Names corpus:
def gender_features2(name):
    features = {}
    features["first_letter"] = name[0].lower()
    features["last_letter"] = name[-1].lower()
    for letter in 'abcdefghijklmnopqrstuvwxyz':
        features["count({})".format(letter)] = name.lower().count(letter)
        features["has({})".format(letter)] = (letter in name.lower())
    return features
If you provide too many features, the algorithm will rely heavily on idiosyncrasies of your training data and will not generalize well to new examples. This problem is known as overfitting, and it is especially problematic when working on small training sets.
Training a naive Bayes classifier with the feature extractor shown above overfits this relatively small training set, resulting in an accuracy about 1% lower than that of a classifier that considers only the last letter of each name:
>>> featuresets = [(gender_features2(n), g) for (n,g) in names]
>>> train_set, test_set = featuresets[500:], featuresets[:500]
>>> classifier = nltk.NaiveBayesClassifier.train(train_set)
>>> print(nltk.classify.accuracy(classifier, test_set))
0.748
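The overfitting risk is easy to see just by counting features: gender_features2 emits 54 features per name (first letter, last letter, plus a count and a flag for each of the 26 letters), whereas the original extractor emitted one. A quick check:

```python
def gender_features2(name):
    # Same extractor as above: first/last letter plus per-letter counts and flags
    features = {}
    features["first_letter"] = name[0].lower()
    features["last_letter"] = name[-1].lower()
    for letter in 'abcdefghijklmnopqrstuvwxyz':
        features["count({})".format(letter)] = name.lower().count(letter)
        features["has({})".format(letter)] = (letter in name.lower())
    return features

print(len(gender_features2('John')))  # 2 + 26 + 26 = 54 features
```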
Once an initial feature set has been chosen, a very productive method for refining it is error analysis. First, we select a development set, containing the corpus data used for creating the model. This development set is then subdivided into a training set and a dev-test set:
>>> train_names = names[1500:]
>>> devtest_names = names[500:1500]
>>> test_names = names[:500]
The training set is used to train the model, the dev-test set is used for error analysis, and the test set is reserved for the final evaluation of the system. The figure below shows how the corpus data is divided into these subsets.
>>> train_set = [(gender_features(n), g) for (n,g) in train_names]
>>> devtest_set = [(gender_features(n), g) for (n,g) in devtest_names]
>>> test_set = [(gender_features(n), g) for (n,g) in test_names]
>>> classifier = nltk.NaiveBayesClassifier.train(train_set)
>>> nltk.classify.accuracy(classifier, devtest_set)
0.765
Using the dev-test set, we can generate a list of the errors the classifier makes when predicting name genders:
>>> errors = []
>>> for (name, tag) in devtest_names:
...     guess = classifier.classify(gender_features(name))
...     if guess != tag:
...         errors.append((tag, guess, name))
We can then inspect individual error cases in errors and try to determine what additional information would allow the classifier to make the right decision (or which existing pieces of information are leading it to the wrong decision). The feature set can then be adjusted accordingly.
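As an illustration of where such an analysis might lead: many errors involve names whose final letter is misleading on its own but informative together with the preceding letter (e.g. names ending in 'yn' tend to be female even though final 'n' skews male). One possible refinement, then, is an extractor that looks at two-letter suffixes as well (the name gender_features3 is ours, for illustration):

```python
def gender_features3(word):
    # One refinement error analysis might suggest: include the
    # two-letter suffix alongside the final letter.
    return {'suffix1': word[-1:].lower(),
            'suffix2': word[-2:].lower()}

print(gender_features3('Kathryn'))  # {'suffix1': 'n', 'suffix2': 'yn'}
```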
Document Classification
We will use the Movie Reviews corpus, classifying each review as positive or negative:
>>> from nltk.corpus import movie_reviews
>>> documents = [(list(movie_reviews.words(fileid)), category)
... for category in movie_reviews.categories()
... for fileid in movie_reviews.fileids(category)]
>>> random.shuffle(documents)
Next, we define a feature extractor for documents: for each word, a feature indicating whether the document contains that word. To start, we construct a list of the 2000 most frequent words in the overall corpus. We then define a feature extractor that simply checks whether each of these words is present in a given document.
all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())
word_features = list(all_words)[:2000]

def document_features(document):
    document_words = set(document)
    features = {}
    for word in word_features:
        features['contains({})'.format(word)] = (word in document_words)
    return features
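To see what the extractor produces without loading the full corpus, here is a small self-contained variant with the vocabulary passed in explicitly (a sketch; the real word_features list holds 2000 corpus words):

```python
def document_features(document, word_features):
    # Variant of the extractor above, with the vocabulary as a parameter.
    # Converting the document to a set makes the membership tests fast.
    document_words = set(document)
    features = {}
    for word in word_features:
        features['contains({})'.format(word)] = (word in document_words)
    return features

vocab = ['outstanding', 'wasted', 'plot']  # toy stand-in for the 2000-word list
review = 'an outstanding film with a clever plot'.split()
print(document_features(review, vocab))
```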
Now that we have defined our feature extractor, we can use it to train a classifier:
featuresets = [(document_features(d), c) for (d,c) in documents]
train_set, test_set = featuresets[100:], featuresets[:100]
classifier = nltk.NaiveBayesClassifier.train(train_set)
After checking the accuracy on the test set, we can use show_most_informative_features() to find out which features the classifier found most informative:
>>> print(nltk.classify.accuracy(classifier, test_set))
0.81
>>> classifier.show_most_informative_features(5)
Most Informative Features
   contains(outstanding) = True              pos : neg    =     11.1 : 1.0
        contains(seagal) = True              neg : pos    =      7.7 : 1.0
   contains(wonderfully) = True              pos : neg    =      6.8 : 1.0
         contains(damon) = True              pos : neg    =      5.9 : 1.0
        contains(wasted) = True              neg : pos    =      5.8 : 1.0
Exploring Context
Below, we pass in the entire (untagged) sentence, along with the index of the target word:
def pos_features(sentence, i):
    features = {"suffix(1)": sentence[i][-1:],
                "suffix(2)": sentence[i][-2:],
                "suffix(3)": sentence[i][-3:]}
    if i == 0:
        features["prev-word"] = "<START>"
    else:
        features["prev-word"] = sentence[i-1]
    return features
>>> from nltk.corpus import brown
>>> pos_features(brown.sents()[0], 8)
{'suffix(3)': 'ion', 'prev-word': 'an', 'suffix(2)': 'on', 'suffix(1)': 'n'}
>>> tagged_sents = brown.tagged_sents(categories='news')
>>> featuresets = []
>>> for tagged_sent in tagged_sents:
...     untagged_sent = nltk.tag.untag(tagged_sent)
...     for i, (word, tag) in enumerate(tagged_sent):
...         featuresets.append((pos_features(untagged_sent, i), tag))
>>> size = int(len(featuresets) * 0.1)
>>> train_set, test_set = featuresets[size:], featuresets[:size]
>>> classifier = nltk.NaiveBayesClassifier.train(train_set)
>>> nltk.classify.accuracy(classifier, test_set)
0.78915962207856782
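Once trained, such a classifier can tag a fresh sentence by classifying each word independently from its contextual features. A sketch of that loop, using a toy stand-in for the trained classifier so the example is self-contained (SuffixStub and tag_sentence are our names, not NLTK's):

```python
def pos_features(sentence, i):
    # Same contextual extractor as defined above
    features = {"suffix(1)": sentence[i][-1:],
                "suffix(2)": sentence[i][-2:],
                "suffix(3)": sentence[i][-3:]}
    if i == 0:
        features["prev-word"] = "<START>"
    else:
        features["prev-word"] = sentence[i-1]
    return features

class SuffixStub:
    """Toy stand-in for the trained NaiveBayesClassifier."""
    def classify(self, feats):
        if feats["suffix(1)"] == "s":
            return "NNS"
        if feats["prev-word"] == "the":
            return "NN"
        return "X"

def tag_sentence(classifier, sentence):
    # Tag each word independently from its contextual features
    return [(w, classifier.classify(pos_features(sentence, i)))
            for i, w in enumerate(sentence)]

print(tag_sentence(SuffixStub(), ['the', 'dogs', 'ran']))
```

Note that each word is classified in isolation; the tag chosen for one word cannot inform the tag of the next, which is one limitation of this per-word approach.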