AggregateEventHandler.java
A wrapper around a list of EventHandlers that behaves like a single EventHandler; it also manages the lifecycle via onStart and onShutdown.
Sequence.java
Cache line padded sequence counter. Both the RingBuffer and BatchEventProcessor use this class for counting.
Padding layout:

public long p1, p2, p3, p4, p5, p6, p7; // cache line padding (padding1)
private volatile long cursor = INITIAL_CURSOR_VALUE;
public long p8, p9, p10, p11, p12, p13, p14; // cache line padding (padding2)
Case 1: object header (0~8 bytes) + padding1,
cursor + padding2
Case 2: padding1 + cursor,
padding2 + the next object
Either way, different Sequence instances are guaranteed to sit on different cache lines.
Reference: http://mechanical-sympathy.blogspot.com/2011/07/false-sharing.html
A cache line is 64 bytes. A Hotspot JVM object header consists of two parts: the first is the mark word, made up of a 24-bit hash code and 8 bits of state flags such as the lock state; the second is a reference to the object's class. Arrays carry one additional word recording the array length. For performance, every object is aligned on an 8-byte boundary. To pack objects more efficiently, fields are reordered from their declaration order (by size in bytes) as follows:
1. doubles (8) and longs (8)
2. ints (4) and floats (4)
3. shorts (2) and chars (2)
4. booleans (1) and bytes (1)
5. references (4/8)
6. <repeat for sub-class fields>
So to pad out a cache line, we insert 7 longs (8 bytes each) on either side of the hot field.
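The padding idea can be sketched as a minimal stand-alone class following the layout above (the name `PaddedLong` and the initial value are illustrative, not Disruptor's actual API):

```java
// Minimal sketch of a cache-line-padded counter.
// The p1..p14 fields are never read; they exist only to keep `value`
// on a cache line that no other hot field shares.
public class PaddedLong {
    static final long INITIAL_VALUE = -1L; // illustrative initial value

    public long p1, p2, p3, p4, p5, p6, p7;      // padding before the hot field
    private volatile long value = INITIAL_VALUE; // the only field actually used
    public long p8, p9, p10, p11, p12, p13, p14; // padding after the hot field

    public long get() { return value; }
    public void set(final long v) { value = v; }
}
```

Note that some JVMs may eliminate unused fields during optimization, which is one reason later Disruptor versions changed their padding technique, and JDK 8 added a @Contended annotation for this purpose.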
BatchEventProcessor.java: fetches events from the RingBuffer in batches and delegates them to an EventHandler.
Key code:
public void run()
{
    if (!running.compareAndSet(false, true))
    {
        throw new IllegalStateException("Thread is already running");
    }
    sequenceBarrier.clearAlert();
    notifyStart();

    T event = null;
    long nextSequence = sequence.get() + 1L;
    while (true)
    {
        try
        {
            final long availableSequence = sequenceBarrier.waitFor(nextSequence);
            // batch processing; what happens as nextSequence grows without bound?
            while (nextSequence <= availableSequence)
            {
                event = ringBuffer.get(nextSequence);
                eventHandler.onEvent(event, nextSequence, nextSequence == availableSequence);
                nextSequence++;
            }
            sequence.set(nextSequence - 1L); // note the -1: it marks sequence (nextSequence - 1) as fully consumed
        }
        catch (final AlertException ex)
        {
            if (!running.get())
            {
                break;
            }
        }
        catch (final Throwable ex)
        {
            exceptionHandler.handleEventException(ex, nextSequence, event); // hand the failure to the exception handler
            sequence.set(nextSequence); // skip the sequence that caused the exception
            nextSequence++;
        }
    }

    notifyShutdown();
    running.set(false);
}
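As a usage sketch, here is a handler that buffers work and flushes at the end of each batch, which is exactly the signal the inner loop above passes via `nextSequence == availableSequence` (the `TradeEvent` type and field names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// A handler shaped like EventHandler.onEvent(event, sequence, endOfBatch):
// it accumulates events and does one expensive flush per batch.
public class BatchingHandler {
    // Illustrative event type; not part of Disruptor.
    static class TradeEvent { long id; }

    private final List<Long> pending = new ArrayList<>();
    private int flushes = 0;

    public void onEvent(final TradeEvent event, final long sequence, final boolean endOfBatch) {
        pending.add(event.id);
        if (endOfBatch) {
            flushes++;       // e.g. write all pending items to disk in one go
            pending.clear();
        }
    }

    public int flushCount() { return flushes; }
}
```

Batching like this is why BatchEventProcessor can amortize the cost of I/O or other per-flush work over many events.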
ClaimStrategy.java
The strategy contract inside the Sequencer that event publishers use to claim event sequences.
It has 3 implementations:
SingleThreadedClaimStrategy.java: single-threaded publisher strategy; only usable when a single thread acts as the publisher.
Key methods:
// availableCapacity: the number of slots being requested
// dependentSequences: the gating sequences this claim depends on
public boolean hasAvailableCapacity(final int availableCapacity, final Sequence[] dependentSequences)
{
    // sequences already claimed for publishing (not yet consumed) + requested count - bufferSize
    final long wrapPoint = (claimSequence.get() + availableCapacity) - bufferSize;
    if (wrapPoint > minGatingSequence.get())
    {
        // take the smallest of the dependent sequences (i.e. the slowest consumer)
        long minSequence = getMinimumSequence(dependentSequences);
        minGatingSequence.set(minSequence);
        if (wrapPoint > minSequence)
        {
            // the target position is beyond the smallest dependent sequence,
            // so those slots have not been consumed yet and nothing can be
            // handed to the publisher
            return false;
        }
    }
    return true;
}
private void waitForFreeSlotAt(final long sequence, final Sequence[] dependentSequences)
{
    final long wrapPoint = sequence - bufferSize;
    if (wrapPoint > minGatingSequence.get())
    {
        long minSequence;
        while (wrapPoint > (minSequence = getMinimumSequence(dependentSequences)))
        {
            LockSupport.parkNanos(1L); // park for ~1 nanosecond
        }
        minGatingSequence.set(minSequence);
    }
}
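The wrap-point arithmetic shared by both methods can be checked with a tiny stand-alone sketch (the names here are mine, not Disruptor's):

```java
public class WrapPointDemo {
    // wrapPoint = candidate claim position minus the buffer size.
    // If it exceeds the slowest consumer's sequence, the claim would
    // overwrite a slot that has not been consumed yet.
    static boolean hasCapacity(long claimed, int requested, int bufferSize, long minConsumed) {
        final long wrapPoint = (claimed + requested) - bufferSize;
        return wrapPoint <= minConsumed;
    }

    public static void main(String[] args) {
        // Buffer of 8 slots, publisher at 7, slowest consumer at 3:
        // claiming 4 more slots gives wrapPoint 3 (== minConsumed) -> still fits;
        // claiming 5 gives wrapPoint 4 (> 3) -> would wrap, no capacity.
        System.out.println(hasCapacity(7, 4, 8, 3)); // true
        System.out.println(hasCapacity(7, 5, 8, 3)); // false
    }
}
```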
MultiThreadedClaimStrategy.java
@Override
public long incrementAndGet(final Sequence[] dependentSequences)
{
    final MutableLong minGatingSequence = minGatingSequenceThreadLocal.get();
    waitForCapacity(dependentSequences, minGatingSequence); // what is the trick here?
    final long nextSequence = claimSequence.incrementAndGet();
    waitForFreeSlotAt(nextSequence, dependentSequences, minGatingSequence);
    return nextSequence;
}

@Override
public long incrementAndGet(final int delta, final Sequence[] dependentSequences)
{
    final long nextSequence = claimSequence.addAndGet(delta);
    waitForFreeSlotAt(nextSequence, dependentSequences, minGatingSequenceThreadLocal.get());
    return nextSequence;
}
@Override
public void serialisePublishing(final long sequence, final Sequence cursor, final int batchSize)
{
    int counter = RETRIES;
    while (sequence - cursor.get() > pendingPublication.length())
    {
        if (--counter == 0)
        {
            Thread.yield();
            counter = RETRIES;
        }
    }

    long expectedSequence = sequence - batchSize;
    for (long pendingSequence = expectedSequence + 1; pendingSequence <= sequence; pendingSequence++)
    {
        pendingPublication.set((int) pendingSequence & pendingMask, pendingSequence);
    }

    long cursorSequence = cursor.get();
    if (cursorSequence >= sequence)
    {
        return;
    }

    expectedSequence = Math.max(expectedSequence, cursorSequence);
    long nextSequence = expectedSequence + 1;
    while (cursor.compareAndSet(expectedSequence, nextSequence))
    {
        expectedSequence = nextSequence;
        nextSequence++;
        // What does this check mean? The values only differ once nextSequence
        // goes beyond what has been recorded in pendingPublication.
        if (pendingPublication.get((int) nextSequence & pendingMask) != nextSequence)
        {
            break;
        }
    }
}
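The `(int) sequence & pendingMask` indexing above folds every sequence into a fixed ring of pending slots, so a slot only "matches" while it still holds the sequence most recently written there. A stand-alone sketch of that mapping (the array size and values are illustrative):

```java
public class PendingMaskDemo {
    // pendingPublication is a power-of-two ring; the mask turns any
    // sequence into an index, so sequences that differ by a multiple
    // of the ring size share the same slot.
    static int slot(long sequence, int pendingBufferSize) {
        final int pendingMask = pendingBufferSize - 1; // requires a power-of-two size
        return (int) sequence & pendingMask;
    }

    public static void main(String[] args) {
        System.out.println(slot(5, 8));  // 5
        System.out.println(slot(13, 8)); // 5 again: 13 = 5 + 8 wraps onto the same slot
    }
}
```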
MultiThreadedLowContentionClaimStrategy.java
The difference from MultiThreadedClaimStrategy.java lies in:
@Override
public void serialisePublishing(final long sequence, final Sequence cursor, final int batchSize)
{
    final long expectedSequence = sequence - batchSize;
    while (expectedSequence != cursor.get()) // could this spin forever?
    {
        // busy spin
    }
    cursor.set(sequence);
}
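A sketch of why the busy spin terminates: publishers claim strictly increasing sequences, and each publisher advances the cursor to its own sequence, which is exactly the value the next publisher is spinning on (AtomicLong stands in for Disruptor's Sequence here; the demo names are mine):

```java
import java.util.concurrent.atomic.AtomicLong;

public class SerialisePublishingDemo {
    // Same shape as the method above: wait until the previous batch is
    // published, then move the cursor forward to our own sequence.
    static void serialisePublishing(long sequence, AtomicLong cursor, int batchSize) {
        final long expectedSequence = sequence - batchSize;
        while (expectedSequence != cursor.get()) {
            // busy spin until the preceding publisher has published
        }
        cursor.set(sequence);
    }

    public static void main(String[] args) throws InterruptedException {
        final AtomicLong cursor = new AtomicLong(0);
        Thread second = new Thread(() -> serialisePublishing(2, cursor, 1));
        second.start();                    // spins until the cursor reaches 1
        serialisePublishing(1, cursor, 1); // publishes 1, unblocking the second thread
        second.join();
        System.out.println(cursor.get());  // 2
    }
}
```

So it only livelocks if an earlier publisher never completes; under normal operation each publisher hands the cursor to the next in sequence order.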
EventPublisher.java
Event publisher. Key code:
private void translateAndPublish(final EventTranslator<E> translator, final long sequence)
{
    try
    {
        // the supplied translator fills the event at `sequence` before it is published
        translator.translateTo(ringBuffer.get(sequence), sequence);
    }
    finally
    {
        ringBuffer.publish(sequence);
    }
}
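A sketch of what callers supply to this method. `SimpleTranslator` is a minimal stand-in for Disruptor's EventTranslator contract, and `ValueEvent` is made up for illustration:

```java
public class TranslatorDemo {
    // Stand-in for Disruptor's EventTranslator: fill a pre-allocated
    // event in place, given the claimed sequence.
    interface SimpleTranslator<T> {
        void translateTo(T event, long sequence);
    }

    // Illustrative event type; not part of Disruptor.
    static class ValueEvent { long value; }

    // A translator that derives the event's payload from its sequence.
    static final SimpleTranslator<ValueEvent> STAMP_SEQUENCE =
        (event, sequence) -> event.value = sequence * 10;

    public static void main(String[] args) {
        ValueEvent e = new ValueEvent();   // plays the role of ringBuffer.get(sequence)
        STAMP_SEQUENCE.translateTo(e, 7L); // mutate the slot in place: no allocation on the hot path
        System.out.println(e.value);       // 70
    }
}
```

Filling a pre-allocated slot rather than allocating a new event per publish is the point of the translator design.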
WaitStrategy.java
Defines the strategy an EventProcessor uses while waiting on the cursor sequence. There are 4 implementations:
/**
 * Blocking strategy that uses a lock and condition variable for {@link EventProcessor}s waiting on a barrier.
 *
 * This strategy can be used when throughput and low-latency are not as important as CPU resource.
 */
BlockingWaitStrategy.java: uses a lock, so it only suits scenarios where throughput and low latency matter less.
/**
 * Busy Spin strategy that uses a busy spin loop for {@link com.lmax.disruptor.EventProcessor}s waiting on a barrier.
 *
 * This strategy will use CPU resource to avoid syscalls which can introduce latency jitter. It is best
 * used when threads can be bound to specific CPU cores.
 */
BusySpinWaitStrategy.java: burns CPU and never calls yield().
/**
 * Sleeping strategy that initially spins, then uses a Thread.yield(), and eventually parks for the minimum number of nanos
 * the OS and JVM will allow while the {@link com.lmax.disruptor.EventProcessor}s are waiting on a barrier.
 *
 * This strategy is a good compromise between performance and CPU resource. Latency spikes can occur after quiet periods.
 */
SleepingWaitStrategy.java: keeps a counter; only once it drops below 100 does it call yield(), and once it reaches 0 it calls LockSupport.parkNanos(1L).
/**
 * Yielding strategy that uses a Thread.yield() for {@link com.lmax.disruptor.EventProcessor}s waiting on a barrier
 * after an initial spin.
 *
 * This strategy is a good compromise between performance and CPU resource without incurring significant latency spikes.
 */
YieldingWaitStrategy.java: only calls yield() once the counter reaches 0.
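The tiered back-off that SleepingWaitStrategy performs can be sketched as a pure counter function (the thresholds follow the description above; the real class's constants and method names may differ):

```java
import java.util.concurrent.locks.LockSupport;

public class SleepingBackoffDemo {
    // One step of the sleeping strategy's back-off ladder:
    // spin while the counter is high, yield in the middle band,
    // and park for a nanosecond once the counter is exhausted.
    static int applyWaitMethod(int counter) {
        if (counter > 100) {
            return counter - 1;        // busy spin
        } else if (counter > 0) {
            Thread.yield();            // give up the time slice
            return counter - 1;
        } else {
            LockSupport.parkNanos(1L); // smallest sleep the OS/JVM allows
            return counter;            // stay at the floor
        }
    }

    public static void main(String[] args) {
        int counter = 200; // illustrative starting value
        for (int i = 0; i < 250; i++) {
            counter = applyWaitMethod(counter);
        }
        System.out.println(counter); // 0: the last 50 iterations all parked
    }
}
```

The same ladder with the park rung removed gives the shape of YieldingWaitStrategy: spin first, then yield on every iteration once the counter hits 0.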