Association Rule Mining: Apriori and Its Optimizations (Python Implementation)

Association Rule Mining

Introduction

The concept of association rules was first introduced by Agrawal et al. in their 1993 paper Mining association rules between sets of items in large databases. Association rule mining (association analysis) is used to discover relationships and regularities hidden in large datasets. With the rapid growth of the data industry, the datasets we face keep getting larger, and there is growing interest in mining the association knowledge hidden in such massive data.

Research Directions

The main research directions in association rule mining currently include:

  1. The classic approach: the Apriori algorithm
  2. Serial algorithms
    · The hash-based frequent-itemset generation algorithm proposed by Park et al.
    · Partition-based algorithms
    · The sampling-based association rule algorithm proposed by Toivonen
    · The FP-Growth algorithm proposed by Han et al., which generates no candidate sets
  3. Parallel and distributed algorithms
    · The CD, DD, and CaD parallel algorithms proposed by Agrawal et al.
    · The PDM algorithm proposed by Park et al.
    · The APM parallel algorithm proposed by Cheung et al., based on the DIC idea
    · The IDD and HD algorithms, introduced as optimizations of the DD algorithm
  4. Data streams
    · The FP-Stream algorithm proposed by Giannella et al.
    · The Moment algorithm proposed by Chi et al. (based on sliding windows)
    · The Sticky Sampling and Lossy Counting algorithms proposed by Manku et al.

  5. Graph mining
    · AGM, FSG (breadth-first based)
    · gSpan, FFSM, closeGraph (FP-Growth based)
    · EDFS, an uncertain frequent-subgraph mining technique (partition-based, mixing depth-first and breadth-first search)
  6. Sequences
    · SPADE, proposed by Zaki et al.
    · The projection-based PrefixSpan
    · MEMISP, proposed by Lin et al.

The list above covers some of the known association rule mining algorithms; it is by no means exhaustive, just what I found in an hour of searching. Next I will focus on how to implement two of the classics, Apriori and FP-Growth.

The Apriori Algorithm

Theory

Core idea: every subset of a frequent itemset must itself be frequent. Conversely, if an itemset is infrequent, then every superset of it must also be infrequent.
Algorithm details: see 关联规则—Apriori算法—FPTree
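This downward-closure property is what lets Apriori prune the search space: a candidate k-itemset only needs to be counted if every one of its (k-1)-subsets was already found frequent. Below is a minimal sketch of that pruning step (illustrative only, not the implementation used later in this post; prune_candidates is a hypothetical helper name):

from itertools import combinations

def prune_candidates(candidates, prev_frequent):
    """Keep only the candidate k-itemsets whose every (k-1)-subset is frequent."""
    pruned = []
    for cand in candidates:
        k = len(cand)
        # Apriori property: if any (k-1)-subset is infrequent,
        # the candidate itself cannot be frequent
        if all(frozenset(sub) in prev_frequent for sub in combinations(cand, k - 1)):
            pruned.append(cand)
    return pruned

# Toy example: {2, 3} is not frequent, so the candidate {1, 2, 3} is pruned
prev_frequent = {frozenset({1, 2}), frozenset({1, 3})}
print(prune_candidates([frozenset({1, 2, 3})], prev_frequent))  # -> []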

Code Implementation

Hand-written Apriori (ultra-compact version)

import pandas as pd
import numpy as np
from itertools import combinations
from operator import itemgetter
from time import time
import warnings
warnings.filterwarnings("ignore")
# Load the market-basket data
dataset = pd.read_csv('retail.csv', usecols=['items'])
# Define our own Apriori algorithm
def my_aprior(data, support_count):
    """
    Aprior关联规则挖掘
    @data: 数据
    @support_count: 项集的频度, 最小支持度计数阈值
    """
    start = time()
    # Preprocess the data: strip extra whitespace
    for index, row in data.iterrows():
        data.loc[index, 'items'] = row['items'].strip()
    # Find all frequent 1-itemsets
    single_items = (data['items'].str.split(" ", expand = True)).apply(pd.value_counts) \
    .sum(axis = 1).where(lambda value: value > support_count).dropna()
    print("找到所有频繁一项集")
    # Build the frequent-itemset lookup table
    apriori_data = pd.DataFrame({'items': single_items.index.astype(int), 'support_count': single_items.values, 'set_size': 1})
    # Reshape the dataset
    data['set_size'] = data['items'].str.count(" ") + 1
    data['items'] = data['items'].apply(lambda row: set(map(int, row.split(" "))))
    single_items_set = set(single_items.index.astype(int))
    # Iterate over increasing itemset sizes to find frequent itemsets
    for length in range(2, len(single_items_set) + 1):
        data = data[data['set_size'] >= length]
        d = data['items'] \
            .apply(lambda st: pd.Series(s if set(s).issubset(st) else None for s in combinations(single_items_set, length))) \
            .apply(lambda col: [col.dropna().unique()[0], col.count()] if col.count() >= support_count else None).dropna()
        if d.empty:
            break
        apriori_data = pd.concat([apriori_data, pd.DataFrame(
            {'items': list(map(itemgetter(0), d.values)), 'support_count': list(map(itemgetter(1), d.values)),
             'set_size': length})], ignore_index=True)
    print("结束搜索,总耗时%s"%(time() - start))
    return apriori_data

Run

my_aprior(dataset, 5000)

Result

Found all frequent 1-itemsets
Search finished, total time: 94.51256704330444 seconds
	items			support_count	set_size
0	32				15167.0			1
1	38				15596.0			1
2	39				50675.0			1
3	41				14945.0			1
4	48				42135.0			1
5	(32, 39)		8455.0			2
6	(32, 48)		8034.0			2
7	(38, 39)		10345.0			2
8	(38, 48)		7944.0			2
9	(39, 41)		11414.0			2
10	(39, 48)		29142.0			2
11	(41, 48)		9018.0			2
12	(32, 39, 48)	5402.0			3
13	(38, 39, 48)	6102.0			3
14	(39, 41, 48)	7366.0			3

Apriori Using the apyori Package

# Analysis with the apyori package
from apyori import apriori
dataset = pd.read_csv('retail.csv', usecols=['items'])
def create_dataset(data):
    for index, row in data.iterrows():
        data.loc[index, 'items'] = row['items'].strip()
    data = data['items'].str.split(" ", expand = True)
    # Store each transaction as a list of item strings
    output = []
    for i in range(data.shape[0]):
        output.append([str(data.values[i, j]) for j in range(data.shape[1])])
    return output

dataset = create_dataset(dataset)
association_rules = apriori(dataset, min_support = 0.05, min_confidence = 0.7, min_lift = 1.2, min_length = 2)
association_result = list(association_rules)
association_result

Result

[RelationRecord(items=frozenset({'41', '39'}), support=0.12946620993171662, ordered_statistics=[OrderedStatistic(items_base=frozenset({'41'}), items_add=frozenset({'39'}), confidence=0.7637336901973905, lift=1.3287082307880087)]),
 RelationRecord(items=frozenset({'38', '39', '48'}), support=0.06921349334180259, ordered_statistics=[OrderedStatistic(items_base=frozenset({'38', '48'}), items_add=frozenset({'39'}), confidence=0.7681268882175226, lift=1.336351311673078)]),
 RelationRecord(items=frozenset({'41', '39', '48'}), support=0.0835507361448243, ordered_statistics=[OrderedStatistic(items_base=frozenset({'41', '48'}), items_add=frozenset({'39'}), confidence=0.8168108227988469, lift=1.4210493489806006)]),
 RelationRecord(items=frozenset({'None', '41', '39'}), support=0.12946620993171662, ordered_statistics=[OrderedStatistic(items_base=frozenset({'41'}), items_add=frozenset({'None', '39'}), confidence=0.7637336901973905, lift=1.3287082307880087), OrderedStatistic(items_base=frozenset({'41', 'None'}), items_add=frozenset({'39'}), confidence=0.7637336901973905, lift=1.3287082307880087)]),
 RelationRecord(items=frozenset({'38', 'None', '39', '48'}), support=0.06921349334180259, ordered_statistics=[OrderedStatistic(items_base=frozenset({'38', '48'}), items_add=frozenset({'None', '39'}), confidence=0.7681268882175226, lift=1.336351311673078), OrderedStatistic(items_base=frozenset({'38', 'None', '48'}), items_add=frozenset({'39'}), confidence=0.7681268882175226, lift=1.336351311673078)]),
 RelationRecord(items=frozenset({'None', '41', '39', '48'}), support=0.0835507361448243, ordered_statistics=[OrderedStatistic(items_base=frozenset({'41', '48'}), items_add=frozenset({'None', '39'}), confidence=0.8168108227988469, lift=1.4210493489806006), OrderedStatistic(items_base=frozenset({'41', 'None', '48'}), items_add=frozenset({'39'}), confidence=0.8168108227988469, lift=1.4210493489806006)])]
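The frozensets above are not very readable, and the 'None' items are an artifact of create_dataset: str.split(" ", expand = True) pads shorter transactions with None, which then becomes the string 'None'. The support values line up with the hand-written version; for instance, support 0.1295 for {39, 41} is just the support_count of 11414 from the table above divided by the total number of transactions. Below is a small sketch, relying only on the RelationRecord / OrderedStatistic fields visible in the output above, that skips the padding artifact and prints the rules in a more readable form:

# Print the apyori results as readable rules, skipping the 'None' padding artifact
for record in association_result:
    for stat in record.ordered_statistics:
        lhs = set(stat.items_base) - {'None'}
        rhs = set(stat.items_add) - {'None'}
        if not lhs or not rhs:
            continue
        print("%s -> %s (support=%.4f, confidence=%.4f, lift=%.4f)"
              % (sorted(lhs), sorted(rhs), record.support, stat.confidence, stat.lift))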

The FP-Growth Algorithm

When processing large datasets, Apriori incurs a heavy I/O load. FP-Growth improves on Apriori: it scans the dataset only twice, compresses the data into an FP-Tree, and needs no candidate generation, which greatly reduces the computational cost. For the details of the algorithm, see 关联规则—Apriori算法—FPTree.
Implementation:

# FP-Growth reference blog: https://blog.csdn.net/songbinxu/article/details/80411388?utm_medium=distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-3.nonecase&depth_1-utm_source=distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-3.nonecase
class treeNode:
    def __init__(self, nameValue, numOccur, parentNode):
        self.name = nameValue  # node name (the item it represents)
        self.count = numOccur  # occurrence count
        self.nodeLink = None  # link to the next node holding the same item
        self.parent = parentNode  # parent node, used for backtracking
        self.children = {}  # child nodes

    def inc(self, numOccur):
        self.count += numOccur

    def disp(self, ind=1):
        # For debugging: print the subtree rooted at this node
        print('  '*ind, self.name, ' ', self.count)
        for child in self.children.values():
            child.disp(ind+1)

def updateHeader(nodeToTest, targetNode):
    """
    Append targetNode to the end of the node-link chain starting at nodeToTest
    @nodeToTest: current head of the node-link chain
    @targetNode: new node to link in
    """
    while nodeToTest.nodeLink != None:
        nodeToTest = nodeToTest.nodeLink
    nodeToTest.nodeLink = targetNode

def updateFPtree(items, inTree, headerTable, count):
    """
    Insert one (filtered and sorted) transaction into the FP-Tree
    @items: list of items from one transaction
    @inTree: the tree built so far
    @headerTable: header table indexing the node-link chains
    @count: occurrence count of this transaction
    """
    if items[0] in inTree.children:
        # items[0] is already a child of the current node: just increment its count
        inTree.children[items[0]].inc(count)
    else:
        # Otherwise create a new branch
        inTree.children[items[0]] = treeNode(items[0], count, inTree)
        if headerTable[items[0]][1] == None:
            headerTable[items[0]][1] = inTree.children[items[0]]
        else:
            updateHeader(headerTable[items[0]][1], inTree.children[items[0]])
    # Recurse on the remaining items
    if len(items) > 1:
        updateFPtree(items[1::], inTree.children[items[0]], headerTable, count)

def createFPtree(dataSet, minSup=1):
    """
    Build the FP-Tree
    @dataSet: dict mapping each transaction (as a frozenset) to its count
    @minSup: minimum support count
    """
    headerTable = {}
    for trans in dataSet:
        for item in trans:
            headerTable[item] = headerTable.get(item, 0) + dataSet[trans]
    for k in list(headerTable.keys()):
        if headerTable[k] < minSup:
            del(headerTable[k]) # drop items that do not meet the minimum support
    freqItemSet = set(headerTable.keys()) # items that meet the minimum support
    if len(freqItemSet) == 0:
        return None, None
    for k in headerTable:
        headerTable[k] = [headerTable[k], None] # element: [count, head of node-link chain]
    
    retTree = treeNode('Null Set', 1, None)
    for tranSet, count in dataSet.items():
        # dataSet: {transaction itemset: count}
        localD = {}
        for item in tranSet:
            if item in freqItemSet: # keep only this transaction's items that meet the minimum support
                localD[item] = headerTable[item][0] # element : global count
        if len(localD) > 0:
            # Sort this transaction's items by descending global frequency
            # orderedItem = [v[0] for v in sorted(localD.iteritems(), key=lambda p:(p[1], -ord(p[0])), reverse=True)]
            orderedItem = [v[0] for v in sorted(localD.items(), key=lambda p:(p[1], int(p[0])), reverse=True)]
            # Update the tree with the filtered and sorted transaction
            updateFPtree(orderedItem, retTree, headerTable, count)
    return retTree, headerTable

def ascendFPtree(leafNode, prefixPath):
    """
    Backtrack from a node up to the root, collecting the path
    @leafNode: the node to start from
    @prefixPath: list that accumulates the node names along the path
    """
    if leafNode.parent != None:
        prefixPath.append(leafNode.name)
        ascendFPtree(leafNode.parent, prefixPath)

def findPrefixPath(basePat, myHeaderTab):
    """
    Find the conditional pattern bases of basePat
    @basePat: the base item
    @myHeaderTab: header table indexing the node-link chains
    """
    treeNode = myHeaderTab[basePat][1] # first node holding basePat in the FP-tree
    condPats = {}
    while treeNode != None:
        prefixPath = []
        ascendFPtree(treeNode, prefixPath) # prefixPath runs backwards, from treeNode up to the root
        if len(prefixPath) > 1:
            condPats[frozenset(prefixPath[1:])] = treeNode.count # weight the prefix path by treeNode's count
        treeNode = treeNode.nodeLink # next node holding basePat
    return condPats

def mineFPtree(inTree, headerTable, minSup, preFix, freqItemList):
    """
    Recursively mine the FP-Tree for frequent itemsets
    @inTree: the (conditional) FP-tree to mine
    @headerTable: header table of inTree
    @minSup: minimum support count
    @preFix: the frequent itemset accumulated so far
    @freqItemList: output list collecting every frequent itemset
    """
    # The initial frequent items are the entries of headerTable
    bigL = [v[0] for v in sorted(headerTable.items(), key=lambda p:p[1][0])] # sort items by ascending total count
    for basePat in bigL: # for each frequent item
        newFreqSet = preFix.copy()
        newFreqSet.add(basePat)
        freqItemList.append(newFreqSet)
        condPattBases = findPrefixPath(basePat, headerTable) # conditional pattern bases of the current frequent item
        myCondTree, myHead = createFPtree(condPattBases, minSup) # build the conditional FP-tree for the current frequent item
        if myHead != None:
            # print 'conditional tree for: ', newFreqSet
            # myCondTree.disp(1)
            mineFPtree(myCondTree, myHead, minSup, newFreqSet, freqItemList) # recursively mine the conditional FP-tree

def createInitSet(dataSet):
    """
    Convert the raw transactions into the input format expected by createFPtree
    @dataSet: iterable of transactions (each a list of items)
    """
    retDict={}
    for trans in dataSet:
        key = frozenset(trans)
        if key in retDict:
            retDict[key] += 1
        else:
            retDict[key] = 1
    return retDict

def calSuppData(headerTable, freqItemList, total):
    """
    Compute the (relative) support of each frequent itemset
    @headerTable: header table of the global FP-tree
    @freqItemList: list of frequent itemsets
    @total: total number of transactions
    """
    suppData = {}
    for Item in freqItemList:
        # Sort so that the least frequent item (deepest in the tree) comes first
        Item = sorted(Item, key=lambda x:headerTable[x][0])
        base = findPrefixPath(Item[0], headerTable)
        # Accumulate the support from the conditional pattern bases
        support = 0
        for B in base:
            if frozenset(Item[1:]).issubset(set(B)):
                support += base[B]
        # Direct children of the root have no conditional pattern base
        if len(base)==0 and len(Item)==1:
            support = headerTable[Item[0]][0]
            
        suppData[frozenset(Item)] = support/float(total)
    return suppData

def aprioriGen(Lk, k):
    """
    Generate candidate k-item sets by joining (k-1)-item sets that share
    their first k-2 elements (the same join step as in Apriori)
    @Lk: list of frozensets of size k-1
    @k: size of the candidates to generate
    """
    retList = []
    lenLk = len(Lk)
    for i in range(lenLk):
        for j in range(i+1, lenLk):
            L1 = list(Lk[i])[:k-2]; L2 = list(Lk[j])[:k-2]
            L1.sort(); L2.sort()
            if L1 == L2: 
                retList.append(Lk[i] | Lk[j])
    return retList

def calcConf(freqSet, H, supportData, br1, minConf=0.7):
    """
    Compute confidences for candidate rules (rule evaluation)
    @freqSet: a frequent itemset, superset of each element of H
    @H: list of candidate rule consequents
    @supportData: dict mapping frequent itemsets to their support
    @br1: output list of accepted rules
    @minConf: minimum confidence threshold
    """
    prunedH = []
    for conseq in H:
        conf = supportData[freqSet] / supportData[freqSet - conseq]
        if conf >= minConf:
            print("{0} --> {1} conf:{2}".format(freqSet - conseq, conseq, conf))
            br1.append((freqSet - conseq, conseq, conf))
            prunedH.append(conseq)
    return prunedH

def rulesFromConseq(freqSet, H, supportData, br1, minConf=0.7):
    """
    Here H holds candidate consequents (subsets of freqSet); through recursion the consequent
    size grows from 2 up to len(freqSet) - 1. Parameters have the same meaning as in calcConf.
    """
    m = len(H[0])
    if len(freqSet) > m+1:
        Hmp1 = aprioriGen(H, m+1)
        Hmp1 = calcConf(freqSet, Hmp1, supportData, br1, minConf)
        if len(Hmp1)>1:
            rulesFromConseq(freqSet, Hmp1, supportData, br1, minConf)

def generateRules(freqItemList, supportData, minConf=0.7):
    """
    Main entry point for association rule generation
    @freqItemList: list of frequent itemsets (frozensets)
    @supportData: dict mapping frequent itemsets to their support
    @minConf: minimum confidence threshold
    Rules can only be built from itemsets containing at least two elements
    """
    bigRuleList = []
    for freqSet in freqItemList:
        if len(freqSet) < 2:
            continue  # no rule can be derived from a single item
        H1 = [frozenset([item]) for item in freqSet]
        if len(freqSet) > 2:
            rulesFromConseq(freqSet, H1, supportData, bigRuleList, minConf)
        else:
            calcConf(freqSet, H1, supportData, bigRuleList, minConf)
    return bigRuleList

Usage:

# Read the data
dataset = pd.read_csv('retail.csv', usecols=['items'])
for index, row in dataset.iterrows():
    dataset.loc[index, 'items'] = row['items'].strip()
dataset = dataset['items'].str.split(" ")
start = time()
initSet = createInitSet(dataset.values)
# Build the FP-tree from the dataset with a minimum support count of 5000
myFPtree, myHeaderTab = createFPtree(initSet, 5000)
freqItems = []
mineFPtree(myFPtree, myHeaderTab, 5000, set([]), freqItems)
print("Search finished, total time: %s seconds" % (time() - start))
for x in freqItems:
    print(x)

Output:

Search finished, total time: 3.236400842666626 seconds
{'41'}
{'41', '48'}
{'41', '39', '48'}
{'41', '39'}
{'32'}
{'48', '32'}
{'39', '48', '32'}
{'39', '32'}
{'38'}
{'38', '48'}
{'38', '39', '48'}
{'38', '39'}
{'48'}
{'39', '48'}
{'39'}

Compared with the hand-written Apriori implementation above, the running time drops dramatically (about 3 seconds versus roughly 95 seconds).
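Note that calSuppData and generateRules are defined above but never called in the snippet. Below is a minimal sketch of how they might be wired up to turn the mined itemsets into association rules, assuming the dataset, myHeaderTab and freqItems variables from the run above (generateRules and calcConf index the support dictionary with the itemsets themselves, so the mined sets are first converted to hashable frozensets):

# Derive association rules from the FP-Growth frequent itemsets (sketch)
total = len(dataset)                                   # number of transactions
suppData = calSuppData(myHeaderTab, freqItems, total)  # relative supports
freqItemsFrozen = [frozenset(x) for x in freqItems]    # dict keys must be hashable
rules = generateRules(freqItemsFrozen, suppData, minConf=0.7)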
