Common Queue Classes Explained, BlockingQueue (Part 1): PriorityBlockingQueue, DelayQueue and DelayedWorkQueue

Related posts in this series:
Common Queue Classes Explained: PriorityQueue
Common Queue Classes Explained: ConcurrentLinkedQueue

I. Introduction

BlockingQueue is an interface for concurrent queues in the java.util.concurrent package, commonly called a blocking queue.
Unlike ConcurrentLinkedQueue, which achieves concurrency with lock-free CAS operations, BlockingQueue implementations coordinate threads by blocking and waiting.
The JDK ships quite a few BlockingQueue implementations. They are mainly used in producer-consumer scenarios; thread pools, for example, use a BlockingQueue to hold pending tasks.
The following sections walk through several of these implementations.
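
Before looking at individual implementations, a minimal producer-consumer sketch (using ArrayBlockingQueue; the class name and sizes are illustrative) shows the blocking behaviour the rest of the article is about:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        // A bounded blocking queue: put() blocks when full, take() blocks when empty.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i);              // blocks while the queue is full
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println("consumed " + queue.take()); // blocks while empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}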

1. Class diagram of the common Queue types

[Figure: class diagram of the common Queue types]

2. BlockingQueue core methods

[Figure: summary of the BlockingQueue methods]
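
The figure is not reproduced here; for reference, the method matrix from the BlockingQueue javadoc groups the operations by how they react to a full or empty queue:

           Throws exception   Special value   Blocks    Times out
Insert     add(e)             offer(e)        put(e)    offer(e, time, unit)
Remove     remove()           poll()          take()    poll(time, unit)
Examine    element()          peek()          n/a       n/a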

II. PriorityBlockingQueue

As the name suggests, PriorityBlockingQueue is the concurrent, blocking counterpart of PriorityQueue.
Its data structure is the same array-based min-heap. Enqueuing never blocks, because the queue is logically unbounded; an OutOfMemoryError is thrown only when the backing array can no longer grow (see tryGrow below). Dequeuing from an empty queue blocks the calling thread until an enqueue operation wakes it up.
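
A minimal usage sketch (values are illustrative) of that behaviour:

import java.util.concurrent.PriorityBlockingQueue;

public class PriorityBlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Unbounded: offer() never blocks; take() blocks until an element is available.
        PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();
        queue.offer(3);
        queue.offer(1);
        queue.offer(2);
        // Elements come out in priority (natural) order: 1, 2, 3
        System.out.println(queue.take());
        System.out.println(queue.take());
        System.out.println(queue.take());
    }
}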

1. Fields

/**
 * Priority queue represented as a balanced binary heap: the two
 * children of queue[n] are queue[2*n+1] and queue[2*(n+1)].  The
 * priority queue is ordered by comparator, or by the elements'
 * natural ordering, if comparator is null: For each node n in the
 * heap and each descendant d of n, n <= d.  The element with the
 * lowest value is in queue[0], assuming the queue is nonempty.
 */
private transient Object[] queue;

/**
 * The number of elements in the priority queue.
 */
private transient int size;

/**
 * The comparator, or null if priority queue uses elements'
 * natural ordering.
 */
private transient Comparator<? super E> comparator;

/**
 * Lock used for all public operations
 */
private final ReentrantLock lock;

/**
 * Condition for blocking when empty
 */
private final Condition notEmpty;

/**
 * Spinlock for allocation, acquired via CAS.
 */
private transient volatile int allocationSpinLock;

queue, size and comparator mean exactly the same as they do in PriorityQueue.
What we mainly care about are the three concurrency-related fields that follow: lock, notEmpty and allocationSpinLock.
Their initialization in the constructor is simply:

this.lock = new ReentrantLock();
this.notEmpty = lock.newCondition();

lock is taken in every public queue method to guarantee thread safety.
notEmpty signals that the queue has become non-empty: a blocking dequeue that finds the queue empty parks on notEmpty.await(), and an enqueue that adds a new element calls notEmpty.signal() to wake a blocked thread.
allocationSpinLock is a spin lock used while growing the array; 0 and 1 indicate whether the operation may proceed, and the flag is flipped with CAS.
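
The lock/notEmpty pair is the classic guarded-wait idiom. A stripped-down sketch of that collaboration (a hypothetical GuardedBuffer class, not JDK code):

import java.util.ArrayDeque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of how lock and notEmpty cooperate; not the real queue.
class GuardedBuffer<E> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final ArrayDeque<E> items = new ArrayDeque<>();

    void put(E e) {
        lock.lock();
        try {
            items.add(e);
            notEmpty.signal();        // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    E take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty())
                notEmpty.await();     // releases the lock and waits for a signal
            return items.poll();
        } finally {
            lock.unlock();
        }
    }
}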

2. PriorityBlockingQueue#offer(Object)

public boolean offer(E e) {
    if (e == null)
        throw new NullPointerException();
    final ReentrantLock lock = this.lock;
    lock.lock();
    int n, cap;
    Object[] array;
    while ((n = size) >= (cap = (array = queue).length))
        // grow the backing array if it is full
        tryGrow(array, cap);
    try {
        Comparator<? super E> cmp = comparator;
        if (cmp == null)
            siftUpComparable(n, e, array);
        else
            siftUpUsingComparator(n, e, array, cmp);
        size = n + 1;
        // wake up a blocked consumer thread
        notEmpty.signal();
    } finally {
        lock.unlock();
    }
    return true;
}

In PriorityBlockingQueue, put, add and offer do exactly the same thing.
As you can see, apart from taking the lock and calling notEmpty.signal(), this is no different from PriorityQueue#offer().
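
This is also why put and add behave identically to offer: in the JDK 8 source they simply delegate to it.

public boolean add(E e) {
    return offer(e);
}

public void put(E e) {
    offer(e); // never need to block
}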

3. PriorityBlockingQueue#tryGrow(Object[], int)

private void tryGrow(Object[] array, int oldCap) {
    lock.unlock(); // must release and then re-acquire main lock
    Object[] newArray = null;
    if (allocationSpinLock == 0 &&
        UNSAFE.compareAndSwapInt(this, allocationSpinLockOffset,
                                 0, 1)) {
        try {
            int newCap = oldCap + ((oldCap < 64) ?
                                   (oldCap + 2) : // grow faster if small
                                   (oldCap >> 1));
            if (newCap - MAX_ARRAY_SIZE > 0) {    // possible overflow
                int minCap = oldCap + 1;
                if (minCap < 0 || minCap > MAX_ARRAY_SIZE)
                    throw new OutOfMemoryError();
                newCap = MAX_ARRAY_SIZE;
            }
            if (newCap > oldCap && queue == array)
                newArray = new Object[newCap];
        } finally {
            allocationSpinLock = 0;
        }
    }
    if (newArray == null) // back off if another thread is allocating
        Thread.yield();
    lock.lock();
    if (newArray != null && queue == array) {
        queue = newArray;
        System.arraycopy(array, 0, newArray, 0, oldCap);
    }
}

The main logic of growing has two steps: allocating the new array newArray and copying the elements over.
Allocation may proceed only when allocationSpinLock is 0 and the thread wins the CAS that flips it to 1; otherwise another thread is already allocating.
If another thread is already growing the array, the current thread does nothing this round except call Thread.yield() to back off.
Once this thread has allocated the new array, it copies the elements across and replaces the old array.
Concurrent allocation is coordinated by allocationSpinLock rather than by lock, so tryGrow first releases lock and re-acquires it after the allocation step; that way other threads can still dequeue from the old array while the new one is being allocated. The growth factor itself is spelled out in the small helper below.
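
Below a capacity of 64 the array grows by oldCap + 2 (slightly more than doubling), and from 64 upward it grows by 50%, capped at MAX_ARRAY_SIZE. An illustrative helper (not part of the JDK) makes the arithmetic concrete:

// Capacity growth rule used by tryGrow (ignoring the MAX_ARRAY_SIZE cap).
static int grownCapacity(int oldCap) {
    return oldCap + ((oldCap < 64) ? (oldCap + 2)    // small arrays: a bit more than double
                                   : (oldCap >> 1)); // large arrays: grow by 50%
}
// grownCapacity(11) == 24, grownCapacity(24) == 50, grownCapacity(64) == 96, grownCapacity(96) == 144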

4. PriorityBlockingQueue#poll(long, TimeUnit)

public E poll(long timeout, TimeUnit unit) throws InterruptedException {
    long nanos = unit.toNanos(timeout);
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    E result;
    try {
        while ( (result = dequeue()) == null && nanos > 0)
            nanos = notEmpty.awaitNanos(nanos);
    } finally {
        lock.unlock();
    }
    return result;
}

private E dequeue() {
    int n = size - 1;
    if (n < 0)
        return null;
    else {
        Object[] array = queue;
        E result = (E) array[0];
        E x = (E) array[n];
        array[n] = null;
        Comparator<? super E> cmp = comparator;
        if (cmp == null)
            siftDownComparable(0, x, array, n);
        else
            siftDownUsingComparator(0, x, array, n, cmp);
        size = n;
        return result;
    }
}

dequeue() corresponds to PriorityQueue's poll(): it removes the head, moves the last element to the root and sifts it down to restore the heap.
If the queue is empty, the dequeuing thread blocks (for the timed poll above, it waits at most the given timeout).
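
For completeness, the untimed take() in the JDK 8 source is the same loop, just waiting without a deadline:

public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    E result;
    try {
        while ( (result = dequeue()) == null)
            notEmpty.await();   // park until offer() signals notEmpty
    } finally {
        lock.unlock();
    }
    return result;
}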

III. DelayQueue

DelayQueue is a delay queue: its elements must implement the Delayed interface. Elements are ordered by their delay, and an element can only be taken from the queue once its delay has expired.
DelayQueue holds a PriorityQueue field and delegates all data-structure work to it, so its underlying structure is identical to PriorityQueue's.
Enqueuing never blocks; like PriorityBlockingQueue, the queue is unbounded and fails with an OutOfMemoryError only when it can no longer grow.
When the queue is empty, or no element's delay has expired yet, the dequeuing thread blocks until an enqueue (or the head element's expiring delay) wakes it up.
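
A minimal sketch of such an element and of the blocking behaviour, assuming a hypothetical DelayedTask class (names and delays are illustrative):

import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Hypothetical element type: expires 'delayMillis' after construction.
class DelayedTask implements Delayed {
    private final String name;
    private final long expireAtNanos;

    DelayedTask(String name, long delayMillis) {
        this.name = name;
        this.expireAtNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
    }

    @Override
    public long getDelay(TimeUnit unit) {
        // Remaining time until expiry, converted to the requested unit.
        return unit.convert(expireAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        // Order by remaining delay so the soonest-expiring element sits at the head.
        return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
    }

    @Override
    public String toString() { return name; }
}

public class DelayQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedTask> queue = new DelayQueue<>();
        queue.put(new DelayedTask("late", 2000));
        queue.put(new DelayedTask("early", 500));
        // take() blocks until the earliest element's delay has expired.
        System.out.println(queue.take()); // "early", after ~0.5s
        System.out.println(queue.take()); // "late",  after ~2s
    }
}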

1. Fields

private final transient ReentrantLock lock = new ReentrantLock();
private final PriorityQueue<E> q = new PriorityQueue<E>();

/**
 * Thread designated to wait for the element at the head of
 * the queue.  This variant of the Leader-Follower pattern
 * (http://www.cs.wustl.edu/~schmidt/POSA/POSA2/) serves to
 * minimize unnecessary timed waiting.  When a thread becomes
 * the leader, it waits only for the next delay to elapse, but
 * other threads await indefinitely.  The leader thread must
 * signal some other thread before returning from take() or
 * poll(...), unless some other thread becomes leader in the
 * interim.  Whenever the head of the queue is replaced with
 * an element with an earlier expiration time, the leader
 * field is invalidated by being reset to null, and some
 * waiting thread, but not necessarily the current leader, is
 * signalled.  So waiting threads must be prepared to acquire
 * and lose leadership while waiting.
 */
private Thread leader = null;

/**
 * Condition signalled when a newer element becomes available
 * at the head of the queue or a new thread may need to
 * become leader.
 */
private final Condition available = lock.newCondition();

lock and available play the same roles as in other BlockingQueue implementations: they guarantee thread safety and let threads signal each other.
leader records the one blocked thread that is designated to wait for the head element to expire; it is a variant of the leader-follower pattern and is explained in detail with the poll method below.

2. DelayQueue#offer(Object)

public boolean offer(E e) {
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        q.offer(e);
        if (q.peek() == e) {
            leader = null;
            available.signal();
        }
        return true;
    } finally {
        lock.unlock();
    }
}

In DelayQueue, too, put, add and offer are completely equivalent.
The method takes the lock and delegates to q.offer(e). If the new element ends up at the head of the queue (it is the one expiring soonest), the current leader's wait time is no longer valid, so leader is reset to null and available.signal() wakes a waiting thread so that a new leader can be chosen.

3. DelayQueue#poll(long, TimeUnit)

public E poll(long timeout, TimeUnit unit) throws InterruptedException {
    long nanos = unit.toNanos(timeout);
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        for (;;) {
            E first = q.peek();
            if (first == null) {
                if (nanos <= 0)
                    return null;
                else
                    nanos = available.awaitNanos(nanos);
            } else {
                long delay = first.getDelay(NANOSECONDS);
                if (delay <= 0)
                    return q.poll();
                if (nanos <= 0)
                    return null;
                first = null; // don't retain ref while waiting
                if (nanos < delay || leader != null)
                    nanos = available.awaitNanos(nanos);
                else {
                    Thread thisThread = Thread.currentThread();
                    leader = thisThread;
                    try {
                        long timeLeft = available.awaitNanos(delay);
                        nanos -= delay - timeLeft;
                    } finally {
                        if (leader == thisThread)
                            leader = null;
                    }
                }
            }
        }
    } finally {
        if (leader == null && q.peek() != null)
            available.signal();
        lock.unlock();
    }
}

The interesting part is the handling of leader; the rest is ordinary BlockingQueue dequeue logic.
Tracking a leader minimizes unnecessary timed waiting.
How long a thread blocks depends on two things: the timeout passed to poll (unbounded for take) and the remaining delay of the head element.
The leader blocks for the smaller of those two values; every other blocked thread simply waits out its poll timeout (or indefinitely in take).
When the leader finishes waiting and claims the head element, it must signal another blocked thread (the finally block does this); the woken thread either takes the new head or blocks again, and a new leader is chosen.
Enqueuing an element also sends a signal. Because the new element may become the new head, the leader's wait time may need to be recomputed, so offer resets leader to null; among the threads that wake up and block again, one becomes the new leader.
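
For comparison, the untimed take() in the JDK 8 source follows the same leader protocol without the timeout bookkeeping: the leader waits only for the head element's remaining delay, while every other thread waits indefinitely until it is signalled.

public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        for (;;) {
            E first = q.peek();
            if (first == null)
                available.await();                   // empty: wait for an offer
            else {
                long delay = first.getDelay(NANOSECONDS);
                if (delay <= 0)
                    return q.poll();                 // head already expired
                first = null; // don't retain ref while waiting
                if (leader != null)
                    available.await();               // follower: wait until signalled
                else {
                    Thread thisThread = Thread.currentThread();
                    leader = thisThread;
                    try {
                        available.awaitNanos(delay); // leader: wait just for the head's delay
                    } finally {
                        if (leader == thisThread)
                            leader = null;
                    }
                }
            }
        }
    } finally {
        if (leader == null && q.peek() != null)
            available.signal();                      // hand off to another waiter
        lock.unlock();
    }
}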

IV. DelayedWorkQueue

DelayedWorkQueue is a static inner class of ScheduledThreadPoolExecutor and works essentially the same way as DelayQueue.
For the details, see the companion article on the ScheduledThreadPoolExecutor source code.
