============================================================================
Original work; reprinting is permitted. When reprinting, please credit the original source with a hyperlink, together with this notice.
Please note it was reproduced from: http://yunjianfei.iteye.com/blog/
============================================================================
When developing multi-process applications, we inevitably run into situations where several processes access the same resource (a critical resource). Access must then be synchronized through a global lock, so that only one process touches the resource at any given time.
An example:
Suppose we use MySQL to implement a task queue. The setup is as follows:
1. Create a jobs table in MySQL to hold the queued tasks:
create table jobs (
    id int auto_increment not null primary key,
    message text not null,
    job_status tinyint not null default 0
);
message holds the task payload; job_status marks the task state. Assume there are only two states, 0: queued, 1: dequeued.
2. A producer process inserts new rows into the jobs table, enqueuing tasks.
3. Assume there are multiple consumer processes that fetch queued messages from the jobs table; the operation to perform is:
update jobs set job_status = 1 where id = ?;  -- id is the record id just fetched
4. Without a cross-process lock, two consumer processes can fetch the same message at the same time, so one message gets consumed more than once. Since that is not what we want, we need to implement a cross-process lock.
Here is a very good article on this topic for reference:
https://blog.engineyard.com/2011/5-subtle-ways-youre-using-mysql-as-a-queue-and-why-itll-bite-you
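The race in step 4 can be reproduced in miniature without MySQL. In the sketch below, an in-memory list stands in for the jobs table, threads stand in for consumer processes, and threading.Lock plays the role the cross-process lock plays later (all names here are illustrative, not part of the demo that follows):

```python
import threading

# In-memory analogue of the race in step 4: a shared list stands in for the
# jobs table, threads for consumer processes, and threading.Lock for the
# cross-process lock.
jobs = list(range(100))   # queued job ids
consumed = []             # what the consumers took
lock = threading.Lock()

def consumer():
    while True:
        with lock:        # serialize the check-then-take step
            if not jobs:
                return
            consumed.append(jobs.pop(0))

threads = [threading.Thread(target=consumer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every job was taken exactly once.
assert sorted(consumed) == list(range(100))
```

Without the lock, a consumer can check the queue and then act on stale information; with it, the check and the take happen as one atomic step, which is exactly what the MySQL-backed lock below gives us across processes.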
============================================================================
Speaking of cross-process lock implementations, the main options are:
1. Semaphores
2. File locks (fcntl)
3. Sockets (binding a port number)
4. Signals
Each of these has its pros and cons. On the whole the first two are probably the more common; I won't go into detail here, you can look them up yourself.
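For a taste of option 2, here is a minimal sketch of an advisory file lock taken with flock(2) via Python's fcntl module. The lock-file path is arbitrary; any path the cooperating processes agree on will do:

```python
import fcntl
import os
import tempfile

# Minimal sketch of a file lock: any path the cooperating processes agree
# on can serve as the lock file.
fd, path = tempfile.mkstemp()
os.close(fd)

holder = open(path, 'w')
fcntl.flock(holder, fcntl.LOCK_EX | fcntl.LOCK_NB)   # first claimant: acquired

# A second open file description (normally in another process) cannot take
# the lock while the first one holds it; LOCK_NB makes the attempt fail
# immediately instead of blocking.
contender = open(path, 'w')
try:
    fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
    blocked = False
except (IOError, OSError):
    blocked = True

fcntl.flock(holder, fcntl.LOCK_UN)                   # release for the next claimant
holder.close()
contender.close()
os.unlink(path)
```

Note that flock locks are advisory: they only exclude processes that also ask for the lock, which is the same cooperative model as the MySQL lock functions discussed next.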
While looking things up I found that MySQL itself ships lock functions. They suit scenarios where performance demands are modest; heavily concurrent distributed access may hit a bottleneck. The link is below:
http://dev.mysql.com/doc/refman/5.0/fr/miscellaneous-functions.html
I implemented a demo in Python, shown below.
File name: glock.py
#!/usr/bin/env python2.7
# -*- coding:utf-8 -*-
#
# Author : yunjianfei
# E-mail : [email protected]
# Date   : 2014/02/25
# Desc   : Cross-process lock built on MySQL's GET_LOCK()/RELEASE_LOCK()
#
import logging
import time

import MySQLdb


class Glock:
    def __init__(self, db):
        self.db = db

    def _execute(self, sql, args=None):
        """Run a single-row query; return the row, or None on error."""
        cursor = self.db.cursor()
        try:
            cursor.execute(sql, args)
            if cursor.rowcount != 1:
                logging.error("Unexpected row count from MySQL lock function.")
                return None
            return cursor.fetchone()
        except Exception as ex:
            logging.error("Execute sql \"%s\" failed! Exception: %s", sql, str(ex))
            return None
        finally:
            cursor.close()

    def lock(self, lockstr, timeout):
        # GET_LOCK() returns 1 on success, 0 on timeout, NULL on error.
        ret = self._execute("SELECT GET_LOCK(%s, %s)", (lockstr, timeout))
        if ret is None:
            logging.error("Error occurred!")
            return None
        if ret[0] == 1:
            logging.debug("The lock '%s' was obtained successfully.", lockstr)
            return True
        if ret[0] == 0:
            logging.debug("Another client has previously locked '%s'.", lockstr)
            return False
        logging.error("Error occurred!")
        return None

    def unlock(self, lockstr):
        # RELEASE_LOCK() returns 1 if released, 0 if the lock is held by
        # another connection, NULL if the lock does not exist.
        ret = self._execute("SELECT RELEASE_LOCK(%s)", (lockstr,))
        if ret is None:
            logging.error("Error occurred!")
            return None
        if ret[0] == 1:
            logging.debug("The lock '%s' was released.", lockstr)
            return True
        if ret[0] == 0:
            logging.debug("The lock '%s' was not released (it was not established by this connection).", lockstr)
            return False
        logging.error("The lock '%s' did not exist.", lockstr)
        return None


def init_logging():
    sh = logging.StreamHandler()
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s -%(module)s:%(filename)s-L%(lineno)d-%(levelname)s: %(message)s')
    sh.setFormatter(formatter)
    logger.addHandler(sh)
    logging.info("Current log level is : %s", logging.getLevelName(logger.getEffectiveLevel()))


def main():
    init_logging()
    db = MySQLdb.connect(host='localhost', user='root', passwd='')
    lock_name = 'queue'

    l = Glock(db)
    ret = l.lock(lock_name, 10)
    if ret is not True:
        logging.error("Can't get lock! exit!")
        quit()
    time.sleep(10)
    logging.info("You can do some synchronization work across processes!")
    ##TODO
    ## you can do something in here ##
    l.unlock(lock_name)


if __name__ == "__main__":
    main()
In main(), in l.lock(lock_name, 10), the 10 means the timeout is 10 seconds: if the lock still cannot be obtained after 10 seconds, GET_LOCK() returns and the code after it runs.
In this demo, the consumer logic that takes messages from the jobs table belongs at the spot marked TODO, i.e. step 3 from above the divider:
3. Assume there are multiple consumer processes that fetch queued messages from the jobs table; the operation to perform is:
update jobs set job_status = 1 where id = ?;  -- id is the record id just fetched
This way, multiple processes access the critical resource one at a time, keeping the data consistent.
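As an illustration of what could go at the TODO marker, here is a sketch of that dequeue step. It uses sqlite3 in place of MySQL purely so it runs standalone; the table and column names follow the jobs table above, while the dequeue helper name is made up:

```python
import sqlite3

# Stand-in for the MySQL jobs table, so the sketch runs without a server.
db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE jobs ("
           "id INTEGER PRIMARY KEY AUTOINCREMENT, "
           "message TEXT NOT NULL, "
           "job_status INTEGER NOT NULL DEFAULT 0)")
db.executemany("INSERT INTO jobs (message) VALUES (?)",
               [('task-1',), ('task-2',)])

def dequeue(db):
    """Code to run between lock() and unlock(): take the oldest queued job."""
    row = db.execute("SELECT id, message FROM jobs "
                     "WHERE job_status = 0 ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None                       # queue is empty
    db.execute("UPDATE jobs SET job_status = 1 WHERE id = ?", (row[0],))
    return row[1]

print(dequeue(db))   # task-1
print(dequeue(db))   # task-2
print(dequeue(db))   # None -- queue drained
```

Because the select and the update both happen while the process holds the lock, no other consumer can grab the same row in between.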
When testing, start two instances of glock.py. The results are as follows:
[@tj-10-47 test]# ./glock.py
2014-03-14 17:08:40,277 -glock:glock.py-L70-INFO: Current log level is : DEBUG
2014-03-14 17:08:40,299 -glock:glock.py-L43-DEBUG: The lock 'queue' was obtained successfully.
2014-03-14 17:08:50,299 -glock:glock.py-L81-INFO: You can do some synchronization work across processes!
2014-03-14 17:08:50,299 -glock:glock.py-L56-DEBUG: The lock 'queue' was released.
You can see that the first glock.py released the lock at 17:08:50 and the second one (output below) acquired it at 17:08:50, which confirms that this approach works:
2014-03-14 17:08:46,873 -glock:glock.py-L70-INFO: Current log level is : DEBUG
2014-03-14 17:08:50,299 -glock:glock.py-L43-DEBUG: The lock 'queue' was obtained successfully.
2014-03-14 17:09:00,299 -glock:glock.py-L81-INFO: You can do some synchronization work across processes!
2014-03-14 17:09:00,300 -glock:glock.py-L56-DEBUG: The lock 'queue' was released.
[@tj-10-47 test]#