I recently ran into some high-concurrency scenarios at work that needed locking to keep the business logic correct, with the added requirement that locking must not hurt performance too much. My initial idea was to lock on keys derived from the data, such as timestamps and ids, so that records of different types could still be processed concurrently. The locks provided by the Java API itself are too coarse-grained to satisfy these requirements at the same time, so I wrote a few simple extensions myself...
1. Segment lock
Borrowing the segmentation idea from ConcurrentHashMap: create a fixed number of locks up front, then map each key to one of them when locking. This is the simplest and fastest of the implementations here, and the strategy that was ultimately adopted. The code:
import java.util.HashMap;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Segment lock: pre-creates a fixed number of underlying locks and maps each
 * key to one of them by hash code.
 * Warning: if a key's hash code changes while it is locked, the lock may
 * never be released!
 */
public class SegmentLock<T> {
    private Integer segments = 16; // default number of segments
    private final HashMap<Integer, ReentrantLock> lockMap = new HashMap<>();

    public SegmentLock() {
        init(null, false);
    }

    public SegmentLock(Integer counts, boolean fair) {
        init(counts, fair);
    }

    private void init(Integer counts, boolean fair) {
        if (counts != null) {
            segments = counts;
        }
        for (int i = 0; i < segments; i++) {
            lockMap.put(i, new ReentrantLock(fair));
        }
    }

    public void lock(T key) {
        // >>> 1 keeps the index non-negative even for negative hash codes
        ReentrantLock lock = lockMap.get((key.hashCode() >>> 1) % segments);
        lock.lock();
    }

    public void unlock(T key) {
        ReentrantLock lock = lockMap.get((key.hashCode() >>> 1) % segments);
        lock.unlock();
    }
}
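To illustrate how the class is used, here is a self-contained sketch (the demo class name and the key string are made up, and a trimmed-down copy of the segment lock is inlined so the listing compiles on its own): eight threads hammer a shared counter under the lock for the same key, so the segment lock serializes them and no increments are lost.

```java
import java.util.concurrent.locks.ReentrantLock;

public class SegmentLockDemo {
    // Minimal inline version of the SegmentLock above (array instead of map).
    static class SegmentLock<T> {
        private final ReentrantLock[] locks;
        SegmentLock(int segments) {
            locks = new ReentrantLock[segments];
            for (int i = 0; i < segments; i++) locks[i] = new ReentrantLock();
        }
        private ReentrantLock lockFor(T key) {
            return locks[(key.hashCode() >>> 1) % locks.length];
        }
        void lock(T key)   { lockFor(key).lock(); }
        void unlock(T key) { lockFor(key).unlock(); }
    }

    static int counter = 0;

    // Eight threads increment a shared counter 1000 times each, all under the
    // lock for one key, so every increment is serialized and none are lost.
    static int runDemo() throws InterruptedException {
        counter = 0;
        SegmentLock<String> lock = new SegmentLock<>(16);
        Thread[] workers = new Thread[8];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    lock.lock("order-42");      // same key -> same segment lock
                    try { counter++; }
                    finally { lock.unlock("order-42"); }
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo()); // expected: 8000
    }
}
```

Note that two different keys may still hash to the same segment; the segment lock only bounds contention, it does not give every key its own lock.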
2. Hash lock
A second strategy built on top of the segment lock above, aiming for truly fine-grained locking: every object with a distinct hash value gets its own independent lock. In my tests, when the guarded code runs very quickly, this is roughly 30% slower than the segment lock; with long-running critical sections I would expect it to compare more favorably. The code:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class HashLock<T> {
    private boolean isFair = false;
    private final SegmentLock<T> segmentLock = new SegmentLock<>(); // guards lock creation/removal
    private final ConcurrentHashMap<T, LockInfo> lockMap = new ConcurrentHashMap<>();

    public HashLock() {
    }

    public HashLock(boolean fair) {
        isFair = fair;
    }

    public void lock(T key) {
        LockInfo lockInfo;
        segmentLock.lock(key);
        try {
            lockInfo = lockMap.get(key);
            if (lockInfo == null) {
                lockInfo = new LockInfo(isFair);
                lockMap.put(key, lockInfo);
            } else {
                lockInfo.count.incrementAndGet();
            }
        } finally {
            segmentLock.unlock(key);
        }
        lockInfo.lock.lock();
    }

    public void unlock(T key) {
        LockInfo lockInfo = lockMap.get(key);
        if (lockInfo.count.get() == 1) {
            // Double-checked removal: re-verify under the segment lock so a
            // concurrent lock() cannot increment the count mid-removal.
            segmentLock.lock(key);
            try {
                if (lockInfo.count.get() == 1) {
                    lockMap.remove(key);
                }
            } finally {
                segmentLock.unlock(key);
            }
        }
        lockInfo.count.decrementAndGet();
        lockInfo.unlock();
    }

    private static class LockInfo {
        public final ReentrantLock lock;
        public final AtomicInteger count = new AtomicInteger(1); // holders plus waiters

        private LockInfo(boolean fair) {
            this.lock = new ReentrantLock(fair);
        }

        public void lock() {
            this.lock.lock();
        }

        public void unlock() {
            this.lock.unlock();
        }
    }
}
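A quick sketch of the reference-counting behaviour (the demo class is hypothetical, and a single guard lock stands in for the SegmentLock to keep the listing short; the unlock path is also slightly simplified, decrementing and removing in one guarded step): once every holder has released the lock for a key, its entry is removed from the map again.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class HashLockDemo {
    // Compact inline variant of the HashLock above; a single guard lock
    // replaces the SegmentLock purely to keep the example short.
    static class HashLock<T> {
        private final ReentrantLock guard = new ReentrantLock();
        final ConcurrentHashMap<T, LockInfo> lockMap = new ConcurrentHashMap<>();

        void lock(T key) {
            LockInfo info;
            guard.lock();
            try {
                info = lockMap.get(key);
                if (info == null) {
                    info = new LockInfo();         // first holder: count starts at 1
                    lockMap.put(key, info);
                } else {
                    info.count.incrementAndGet();  // register as holder/waiter
                }
            } finally {
                guard.unlock();
            }
            info.lock.lock();
        }

        void unlock(T key) {
            LockInfo info = lockMap.get(key);
            guard.lock();
            try {
                // Count and map are always updated together under the guard,
                // so the last releaser removes the entry without racing lock().
                if (info.count.decrementAndGet() == 0) {
                    lockMap.remove(key);
                }
            } finally {
                guard.unlock();
            }
            info.lock.unlock();
        }
    }

    static class LockInfo {
        final ReentrantLock lock = new ReentrantLock();
        final AtomicInteger count = new AtomicInteger(1);
    }

    static int runDemo() throws InterruptedException {
        HashLock<String> hashLock = new HashLock<>();
        int[] counter = {0};
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 500; i++) {
                    hashLock.lock("user-7");
                    try { counter[0]++; }
                    finally { hashLock.unlock("user-7"); }
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        // Every entry was reference-counted away once fully released.
        return hashLock.lockMap.isEmpty() ? counter[0] : -1;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo()); // expected: 2000
    }
}
```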
3. Weak reference lock
The hash lock relies on the segment lock to synchronize lock creation and destruction, which always felt slightly inelegant, so I wrote a third lock in search of better performance and even finer granularity. The idea is to hold the locks through Java weak references and delegate lock destruction to the JVM's garbage collector, avoiding the extra bookkeeping.
Somewhat regrettably, since ConcurrentHashMap serves as the lock container, segment locking is never truly left behind. This lock is about 10% faster than HashLock. The code:
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Weak reference lock: provides an independent lock for each distinct hash value.
 * Callers must hold a strong reference to the returned lock for the entire
 * critical section, or the lock may be garbage-collected while in use.
 */
public class WeakHashLock<T> {
    private final ConcurrentHashMap<T, WeakLockRef<T, ReentrantLock>> lockMap = new ConcurrentHashMap<>();
    private final ReferenceQueue<ReentrantLock> queue = new ReferenceQueue<>();

    public ReentrantLock get(T key) {
        if (lockMap.size() > 1000) {
            clearEmptyRef();
        }
        WeakReference<ReentrantLock> lockRef = lockMap.get(key);
        ReentrantLock lock = (lockRef == null ? null : lockRef.get());
        while (lock == null) {
            lockMap.putIfAbsent(key, new WeakLockRef<>(new ReentrantLock(), queue, key));
            lockRef = lockMap.get(key);
            lock = (lockRef == null ? null : lockRef.get());
            if (lock != null) {
                return lock;
            }
            clearEmptyRef(); // drop cleared entries, then retry
        }
        return lock;
    }

    @SuppressWarnings("unchecked")
    private void clearEmptyRef() {
        Reference<? extends ReentrantLock> ref;
        while ((ref = queue.poll()) != null) {
            WeakLockRef<T, ? extends ReentrantLock> weakLockRef = (WeakLockRef<T, ? extends ReentrantLock>) ref;
            // Two-argument remove: only drop the mapping if it still points at
            // this cleared reference, never a fresh lock created for the same key.
            lockMap.remove(weakLockRef.key, weakLockRef);
        }
    }

    private static final class WeakLockRef<T, K> extends WeakReference<K> {
        final T key;

        private WeakLockRef(K referent, ReferenceQueue<? super K> q, T key) {
            super(referent, q);
            this.key = key;
        }
    }
}
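A usage sketch (the names are made up, and a compact variant of the class is inlined so it runs standalone). The important rule is that each caller keeps a strong reference to the returned ReentrantLock for as long as it needs mutual exclusion, since only strong reachability prevents the GC from collecting the lock mid-use:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class WeakHashLockDemo {
    // Compact inline variant of the WeakHashLock above.
    static class WeakHashLock<T> {
        private final ConcurrentHashMap<T, WeakLockRef<T>> lockMap = new ConcurrentHashMap<>();
        private final ReferenceQueue<ReentrantLock> queue = new ReferenceQueue<>();

        ReentrantLock get(T key) {
            WeakLockRef<T> ref = lockMap.get(key);
            ReentrantLock lock = (ref == null) ? null : ref.get();
            while (lock == null) {
                // Installs a new lock only if no (possibly stale) entry exists.
                lockMap.putIfAbsent(key, new WeakLockRef<>(new ReentrantLock(), queue, key));
                ref = lockMap.get(key);
                lock = (ref == null) ? null : ref.get();
                if (lock == null) {
                    clearEmptyRefs(); // evict cleared entries, then retry
                }
            }
            return lock;
        }

        private void clearEmptyRefs() {
            Reference<? extends ReentrantLock> polled;
            while ((polled = queue.poll()) != null) {
                WeakLockRef<?> stale = (WeakLockRef<?>) polled;
                lockMap.remove(stale.key, stale); // two-arg remove: never evict a live lock
            }
        }
    }

    static final class WeakLockRef<T> extends WeakReference<ReentrantLock> {
        final T key;
        WeakLockRef(ReentrantLock referent, ReferenceQueue<ReentrantLock> q, T key) {
            super(referent, q);
            this.key = key;
        }
    }

    static int counter = 0;

    static int runDemo() throws InterruptedException {
        counter = 0;
        WeakHashLock<String> locks = new WeakHashLock<>();
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(() -> {
                // Fetch once and hold strongly for the whole task; this pins
                // the lock so all four threads are guaranteed the same one.
                ReentrantLock lock = locks.get("row-99");
                for (int i = 0; i < 500; i++) {
                    lock.lock();
                    try { counter++; }
                    finally { lock.unlock(); }
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo()); // expected: 2000
    }
}
```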
Postscript
My first attempt was to build these fine-grained locks directly on LockSupport and AQS, but partway through I realized what I was writing barely differed from Java's built-in locks, so I abandoned it in favor of wrapping the built-in locks, wasting quite a bit of time along the way.
Having implemented these fine-grained locks, new ideas have come up; for example, the same segmentation idea could be used to hand data off to dedicated worker threads, which should cut down the time a large number of threads spends blocked. Something to explore another day...