Netty source code analysis (28) - PooledByteBufAllocator

The previous section analyzed UnpooledByteBufAllocator, covering how heap and direct memory are allocated and how the underlying data is actually accessed.
This section analyzes PooledByteBufAllocator to see how it manages Pooled memory.

  • Entry points: PooledByteBufAllocator#newHeapBuffer() and PooledByteBufAllocator#newDirectBuffer()
    Heap and direct allocation follow the same fixed pattern:
  1. Get the thread-local cache PoolThreadCache
  2. Get the arena of the matching type
  3. Use that arena to allocate the memory
    @Override
    protected ByteBuf newHeapBuffer(int initialCapacity, int maxCapacity) {
        //get the thread-local cache
        PoolThreadCache cache = threadCache.get();
        //get the heapArena
        PoolArena<byte[]> heapArena = cache.heapArena;

        final ByteBuf buf;
        if (heapArena != null) {
            //allocate from the heapArena
            buf = heapArena.allocate(cache, initialCapacity, maxCapacity);
        } else {
            buf = PlatformDependent.hasUnsafe() ?
                    new UnpooledUnsafeHeapByteBuf(this, initialCapacity, maxCapacity) :
                    new UnpooledHeapByteBuf(this, initialCapacity, maxCapacity);
        }

        return toLeakAwareBuffer(buf);
    }

    @Override
    protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
        //get the thread-local cache
        PoolThreadCache cache = threadCache.get();
        //get the directArena
        PoolArena<ByteBuffer> directArena = cache.directArena;

        final ByteBuf buf;
        if (directArena != null) {
            //allocate from the directArena
            buf = directArena.allocate(cache, initialCapacity, maxCapacity);
        } else {
            buf = PlatformDependent.hasUnsafe() ?
                    UnsafeByteBufUtil.newUnsafeDirectByteBuf(this, initialCapacity, maxCapacity) :
                    new UnpooledDirectByteBuf(this, initialCapacity, maxCapacity);
        }

        return toLeakAwareBuffer(buf);
    }
  • Tracing threadCache.get()
    This calls FastThreadLocal#get(). So threadCache is itself a FastThreadLocal, which can be thought of as the JDK's ThreadLocal with a faster implementation. get() mainly calls the initialization method initialize.
    public final V get() {
        InternalThreadLocalMap threadLocalMap = InternalThreadLocalMap.get();
        Object v = threadLocalMap.indexedVariable(index);
        if (v != InternalThreadLocalMap.UNSET) {
            return (V) v;
        }
        //call the initialization method
        V value = initialize(threadLocalMap);
        registerCleaner(threadLocalMap);
        return value;
    }
private final PoolThreadLocalCache threadCache;
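What makes FastThreadLocal faster is that each instance owns a fixed array index, so get() is a constant-time indexed load instead of the JDK ThreadLocal's hash-based probing. The class below is a rough, simplified sketch of that idea (fixed slot count, UNSET sentinel); the real storage is InternalThreadLocalMap, which grows on demand:

```java
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Rough sketch of the FastThreadLocal idea: constant-time array indexing.
// Simplified stand-in; not Netty's InternalThreadLocalMap implementation.
public class FastLocalSketch<V> {
    private static final AtomicInteger nextIndex = new AtomicInteger();
    private static final Object UNSET = new Object();   // "no value yet" marker
    private static final ThreadLocal<Object[]> slots = ThreadLocal.withInitial(() -> {
        Object[] a = new Object[64];        // fixed size to keep the sketch short
        Arrays.fill(a, UNSET);
        return a;
    });

    private final int index = nextIndex.getAndIncrement();  // assigned once, at construction
    private final Supplier<V> initialValue;

    public FastLocalSketch(Supplier<V> initialValue) {
        this.initialValue = initialValue;
    }

    @SuppressWarnings("unchecked")
    public final V get() {
        Object[] map = slots.get();
        Object v = map[index];              // constant-time indexed lookup
        if (v != UNSET) {
            return (V) v;
        }
        V value = initialValue.get();       // lazy init, like initialize(threadLocalMap)
        map[index] = value;
        return value;
    }
}
```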

The logic of initialValue() is as follows:

  1. From the pre-created heapArenas and directArenas, pick the least-used arena of each type
  2. Instantiate a PoolThreadCache with the chosen arenas and return it
    final class PoolThreadLocalCache extends FastThreadLocal<PoolThreadCache> {
        private final boolean useCacheForAllThreads;

        PoolThreadLocalCache(boolean useCacheForAllThreads) {
            this.useCacheForAllThreads = useCacheForAllThreads;
        }

        @Override
        protected synchronized PoolThreadCache initialValue() {
            /**
             * an arena is the "competition ground" where all memory-allocation logic takes place
             */
            //get the heapArena: take the least-used arena from the heapArenas array
            final PoolArena<byte[]> heapArena = leastUsedArena(heapArenas);
            //get the directArena: take the least-used arena from the directArenas array
            final PoolArena<ByteBuffer> directArena = leastUsedArena(directArenas);

            Thread current = Thread.currentThread();
            if (useCacheForAllThreads || current instanceof FastThreadLocalThread) {
                //create the PoolThreadCache: this cache ends up being used by one thread
                //it manages the two memory regions, heap and direct, through heapArena and directArena
                //and maintains ByteBuf cache lists of reusable memory blocks via tinyCacheSize, smallCacheSize and normalCacheSize
                return new PoolThreadCache(
                        heapArena, directArena, tinyCacheSize, smallCacheSize, normalCacheSize,
                        DEFAULT_MAX_CACHED_BUFFER_CAPACITY, DEFAULT_CACHE_TRIM_INTERVAL);
            }
            // No caching so just use 0 as sizes.
            return new PoolThreadCache(heapArena, directArena, 0, 0, 0, 0, 0);
        }

      //省略代碼......

      }
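The leastUsedArena() selection above simply scans for the arena serving the fewest thread caches. A self-contained sketch of that selection, where Arena is a stand-in class but the counter name matches PoolArena's numThreadCaches field:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of leastUsedArena(): pick the arena bound to the fewest threads.
// Arena is a stand-in; in Netty the counter is PoolArena.numThreadCaches.
public class LeastUsed {
    static class Arena {
        final AtomicInteger numThreadCaches = new AtomicInteger();
    }

    static Arena leastUsedArena(Arena[] arenas) {
        if (arenas == null || arenas.length == 0) {
            return null;
        }
        Arena least = arenas[0];
        for (int i = 1; i < arenas.length; i++) {
            Arena a = arenas[i];
            // strict less-than: earlier arenas win ties
            if (a.numThreadCaches.get() < least.numThreadCaches.get()) {
                least = a;
            }
        }
        return least;
    }
}
```

With arena counts [2, 1, 3] the second arena is returned, and initialValue() then increments its counter so the next thread is steered elsewhere.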

Looking at PoolThreadCache, it maintains two memory-allocation strategies: one through the heapArena and directArena it holds, as shown above, and another through the tiny, small and normal cache lists that recycle frequently used memory blocks.

final class PoolThreadCache {

    private static final InternalLogger logger = InternalLoggerFactory.getInstance(PoolThreadCache.class);

    //memory managed through the arenas
    final PoolArena<byte[]> heapArena;
    final PoolArena<ByteBuffer> directArena;

    // Hold the caches for the different size classes, which are tiny, small and normal.
    private final MemoryRegionCache<byte[]>[] tinySubPageHeapCaches;
    private final MemoryRegionCache<byte[]>[] smallSubPageHeapCaches;
    private final MemoryRegionCache<ByteBuffer>[] tinySubPageDirectCaches;
    private final MemoryRegionCache<ByteBuffer>[] smallSubPageDirectCaches;
    private final MemoryRegionCache<byte[]>[] normalHeapCaches;
    private final MemoryRegionCache<ByteBuffer>[] normalDirectCaches;

    // Used for bitshifting when calculate the index of normal caches later
    private final int numShiftsNormalDirect;
    private final int numShiftsNormalHeap;
    private final int freeSweepAllocationThreshold;
    private final AtomicBoolean freed = new AtomicBoolean();

    private int allocations;

    // TODO: Test if adding padding helps under contention
    //private long pad0, pad1, pad2, pad3, pad4, pad5, pad6, pad7;

    PoolThreadCache(PoolArena<byte[]> heapArena, PoolArena<ByteBuffer> directArena,
                    int tinyCacheSize, int smallCacheSize, int normalCacheSize,
                    int maxCachedBufferCapacity, int freeSweepAllocationThreshold) {
        checkPositiveOrZero(maxCachedBufferCapacity, "maxCachedBufferCapacity");
        this.freeSweepAllocationThreshold = freeSweepAllocationThreshold;

        //manage allocation the arena way, through the heapArena and directArena handles
        this.heapArena = heapArena;
        this.directArena = directArena;

        //create the cache lists of each type from tinyCacheSize, smallCacheSize and normalCacheSize and store them in fields
        if (directArena != null) {
            tinySubPageDirectCaches = createSubPageCaches(
                    tinyCacheSize, PoolArena.numTinySubpagePools, SizeClass.Tiny);
            smallSubPageDirectCaches = createSubPageCaches(
                    smallCacheSize, directArena.numSmallSubpagePools, SizeClass.Small);

            numShiftsNormalDirect = log2(directArena.pageSize);
            normalDirectCaches = createNormalCaches(
                    normalCacheSize, maxCachedBufferCapacity, directArena);

            directArena.numThreadCaches.getAndIncrement();
        } else {
            // No directArea is configured so just null out all caches
            tinySubPageDirectCaches = null;
            smallSubPageDirectCaches = null;
            normalDirectCaches = null;
            numShiftsNormalDirect = -1;
        }
        if (heapArena != null) {
            // Create the caches for the heap allocations
            //create the size-normalized cache queues
            tinySubPageHeapCaches = createSubPageCaches(
                    tinyCacheSize, PoolArena.numTinySubpagePools, SizeClass.Tiny);
            //create the size-normalized cache queues
            smallSubPageHeapCaches = createSubPageCaches(
                    smallCacheSize, heapArena.numSmallSubpagePools, SizeClass.Small);

            numShiftsNormalHeap = log2(heapArena.pageSize);
            //create the size-normalized cache queues
            normalHeapCaches = createNormalCaches(
                    normalCacheSize, maxCachedBufferCapacity, heapArena);

            heapArena.numThreadCaches.getAndIncrement();
        } else {
            // No heapArea is configured so just null out all caches
            tinySubPageHeapCaches = null;
            smallSubPageHeapCaches = null;
            normalHeapCaches = null;
            numShiftsNormalHeap = -1;
        }

        // Only check if there are caches in use.
        if ((tinySubPageDirectCaches != null || smallSubPageDirectCaches != null || normalDirectCaches != null
                || tinySubPageHeapCaches != null || smallSubPageHeapCaches != null || normalHeapCaches != null)
                && freeSweepAllocationThreshold < 1) {
            throw new IllegalArgumentException("freeSweepAllocationThreshold: "
                    + freeSweepAllocationThreshold + " (expected: > 0)");
        }
    }

    private static <T> MemoryRegionCache<T>[] createSubPageCaches(
            int cacheSize, int numCaches, SizeClass sizeClass) {
        if (cacheSize > 0 && numCaches > 0) {
            //MemoryRegionCache is the object that holds one cache
            @SuppressWarnings("unchecked")
            MemoryRegionCache<T>[] cache = new MemoryRegionCache[numCaches];
            for (int i = 0; i < cache.length; i++) {
                // TODO: maybe use cacheSize / cache.length
                //each MemoryRegionCache (tiny, small, normal) is a queue for one memory size (one spec)
                cache[i] = new SubPageMemoryRegionCache<T>(cacheSize, sizeClass);
            }
            return cache;
        } else {
            return null;
        }
    }

    private static <T> MemoryRegionCache<T>[] createNormalCaches(
            int cacheSize, int maxCachedBufferCapacity, PoolArena<T> area) {
        if (cacheSize > 0 && maxCachedBufferCapacity > 0) {
            int max = Math.min(area.chunkSize, maxCachedBufferCapacity);
            int arraySize = Math.max(1, log2(max / area.pageSize) + 1);
            //MemoryRegionCache is the object that holds one cache
            @SuppressWarnings("unchecked")
            MemoryRegionCache<T>[] cache = new MemoryRegionCache[arraySize];
            for (int i = 0; i < cache.length; i++) {
                //each MemoryRegionCache (tiny, small, normal) is a queue for one memory size (one spec)
                cache[i] = new NormalMemoryRegionCache<T>(cacheSize);
            }
            return cache;
        } else {
            return null;
        }
    }

......
}
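The arraySize computation in createNormalCaches() decides how many normal-size queues exist: one per power-of-two size from pageSize up to min(chunkSize, maxCachedBufferCapacity). A worked sketch of just that arithmetic, using the common defaults pageSize 8192 and chunkSize 16 MiB, and assuming 32 KiB for maxCachedBufferCapacity:

```java
// Worked sketch of createNormalCaches() sizing: one queue per power-of-two
// size from pageSize up to min(chunkSize, maxCachedBufferCapacity).
public class NormalCacheSizing {
    // integer log2, same as the helper used in PoolThreadCache
    static int log2(int v) {
        return 31 - Integer.numberOfLeadingZeros(v);
    }

    static int arraySize(int pageSize, int chunkSize, int maxCachedBufferCapacity) {
        int max = Math.min(chunkSize, maxCachedBufferCapacity);
        return Math.max(1, log2(max / pageSize) + 1);
    }
}
```

With these values max = 32768, 32768 / 8192 = 4, log2(4) = 2, so arraySize is 3: one queue each for 8 KiB, 16 KiB and 32 KiB buffers.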

Looking at the cache-creation method PoolThreadCache#createSubPageCaches() shows that the cache-list object MemoryRegionCache actually maintains a Queue<Entry<T>> queue.

    private abstract static class MemoryRegionCache<T> {
        private final int size;
        private final Queue<Entry<T>> queue;
        private final SizeClass sizeClass;
        private int allocations;

        MemoryRegionCache(int size, SizeClass sizeClass) {
            //simple normalization of the size
            this.size = MathUtil.safeFindNextPositivePowerOfTwo(size);
            //the cache queue holding entries of this spec
            queue = PlatformDependent.newFixedMpscQueue(this.size);
            this.sizeClass = sizeClass;
        }
     ......
     }
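The "simple normalization" rounds the requested queue size up to a power of two, which the fixed-size MPSC queue requires for its capacity. A sketch of that rounding, mirroring MathUtil.safeFindNextPositivePowerOfTwo's behavior as I read it (non-positive inputs become 1, inputs above 2^30 are capped at 2^30):

```java
// Sketch of the power-of-two normalization applied to the queue size.
// Behavior assumed from MathUtil.safeFindNextPositivePowerOfTwo.
public class Pow2 {
    static int safeNextPowerOfTwo(int value) {
        if (value <= 0) {
            return 1;                       // clamp non-positive sizes to 1
        }
        if (value >= 0x40000000) {
            return 0x40000000;              // cap at 2^30
        }
        // round up: highest set bit of (value - 1), shifted left once
        return 1 << (32 - Integer.numberOfLeadingZeros(value - 1));
    }
}
```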
  • The pre-created memory arenas heapArenas and directArenas are held by PooledByteBufAllocator and initialized when the allocator is instantiated.
    private final PoolArena<byte[]>[] heapArenas;
    private final PoolArena<ByteBuffer>[] directArenas;
    
    //lengths of the three cache lists
    private final int tinyCacheSize;
    private final int smallCacheSize;
    private final int normalCacheSize;

Tracing the initialization shows that heapArenas and directArenas are both of type PoolArena[]. PoolArena defines two inner classes, PoolArena.HeapArena and PoolArena.DirectArena, representing the heap and the direct memory arena respectively.

    public PooledByteBufAllocator(boolean preferDirect, int nHeapArena, int nDirectArena, int pageSize, int maxOrder,
                                  int tinyCacheSize, int smallCacheSize, int normalCacheSize,
                                  boolean useCacheForAllThreads, int directMemoryCacheAlignment) {
        super(preferDirect);
        threadCache = new PoolThreadLocalCache(useCacheForAllThreads);
        this.tinyCacheSize = tinyCacheSize;
        this.smallCacheSize = smallCacheSize;
        this.normalCacheSize = normalCacheSize;
        chunkSize = validateAndCalculateChunkSize(pageSize, maxOrder);

        checkPositiveOrZero(nHeapArena, "nHeapArena");
        checkPositiveOrZero(nDirectArena, "nDirectArena");

        checkPositiveOrZero(directMemoryCacheAlignment, "directMemoryCacheAlignment");
        if (directMemoryCacheAlignment > 0 && !isDirectMemoryCacheAlignmentSupported()) {
            throw new IllegalArgumentException("directMemoryCacheAlignment is not supported");
        }

        if ((directMemoryCacheAlignment & -directMemoryCacheAlignment) != directMemoryCacheAlignment) {
            throw new IllegalArgumentException("directMemoryCacheAlignment: "
                    + directMemoryCacheAlignment + " (expected: power of two)");
        }

        int pageShifts = validateAndCalculatePageShifts(pageSize);

        //create the two PoolArena arrays used for allocation: heapArenas and directArenas
        if (nHeapArena > 0) {
            //create the heapArenas memory arena (actually a PoolArena[])
            //nHeapArena: array size
            heapArenas = newArenaArray(nHeapArena);
            List<PoolArenaMetric> metrics = new ArrayList<PoolArenaMetric>(heapArenas.length);
            for (int i = 0; i < heapArenas.length; i ++) {
                //heap: the PoolArena[] holds the HeapArena instances below it
                PoolArena.HeapArena arena = new PoolArena.HeapArena(this,
                        pageSize, maxOrder, pageShifts, chunkSize,
                        directMemoryCacheAlignment);
                heapArenas[i] = arena;
                metrics.add(arena);
            }
            heapArenaMetrics = Collections.unmodifiableList(metrics);
        } else {
            heapArenas = null;
            heapArenaMetrics = Collections.emptyList();
        }

        if (nDirectArena > 0) {
            //create the directArenas memory arena (actually a PoolArena[])
            directArenas = newArenaArray(nDirectArena);
            List<PoolArenaMetric> metrics = new ArrayList<PoolArenaMetric>(directArenas.length);
            for (int i = 0; i < directArenas.length; i ++) {
                //direct: the PoolArena[] holds the DirectArena instances below it
                PoolArena.DirectArena arena = new PoolArena.DirectArena(
                        this, pageSize, maxOrder, pageShifts, chunkSize, directMemoryCacheAlignment);
                directArenas[i] = arena;
                metrics.add(arena);
            }
            directArenaMetrics = Collections.unmodifiableList(metrics);
        } else {
            directArenas = null;
            directArenaMetrics = Collections.emptyList();
        }
        metric = new PooledByteBufAllocatorMetric(this);
    }
    private static <T> PoolArena<T>[] newArenaArray(int size) {
        //create the PoolArena array
        return new PoolArena[size];
    }

The default size of the arena arrays is defaultMinNumArena, twice the number of CPU cores, so that at runtime each thread can have an arena to itself and no locking is needed during allocation.

    public PooledByteBufAllocator(boolean preferDirect) {
        this(preferDirect, DEFAULT_NUM_HEAP_ARENA, DEFAULT_NUM_DIRECT_ARENA, DEFAULT_PAGE_SIZE, DEFAULT_MAX_ORDER);
    }
        //twice the CPU core count; Arena arrays of this size are created by default
        // (the same number as the NioEventLoop array, so every thread can own a dedicated arena and the arenas need no locking during allocation)
        final int defaultMinNumArena = NettyRuntime.availableProcessors() * 2;
        final int defaultChunkSize = DEFAULT_PAGE_SIZE << DEFAULT_MAX_ORDER;
        DEFAULT_NUM_HEAP_ARENA = Math.max(0,
                SystemPropertyUtil.getInt(
                        "io.netty.allocator.numHeapArenas",
                        (int) Math.min(
                                defaultMinNumArena,
                                runtime.maxMemory() / defaultChunkSize / 2 / 3)));
        DEFAULT_NUM_DIRECT_ARENA = Math.max(0,
                SystemPropertyUtil.getInt(
                        "io.netty.allocator.numDirectArenas",
                        (int) Math.min(
                                defaultMinNumArena,
                                PlatformDependent.maxDirectMemory() / defaultChunkSize / 2 / 3)));
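Plugging in the common defaults shows where the arena count comes from. The sketch below redoes the DEFAULT_NUM_HEAP_ARENA arithmetic with pageSize 8192 and maxOrder 11 (so a 16 MiB chunk); the 8-core / 4 GiB figures in the test are illustrative assumptions, not Netty defaults:

```java
// Worked sketch of the DEFAULT_NUM_HEAP_ARENA formula: twice the cores,
// capped so the arenas' chunks use at most ~1/6 of the available memory.
public class ArenaCount {
    static int defaultNumArena(int cores, long maxMemory, int pageSize, int maxOrder) {
        long defaultChunkSize = (long) pageSize << maxOrder;  // 8192 << 11 = 16 MiB
        long defaultMinNumArena = cores * 2L;
        return (int) Math.max(0, Math.min(defaultMinNumArena,
                maxMemory / defaultChunkSize / 2 / 3));
    }
}
```

With 8 cores and a 4 GiB heap the memory cap is 4096 MiB / 16 MiB / 6 = 42, so the core-based value 16 wins; with a 128 MiB heap the cap drops to 1 and dominates instead.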

  • Overall allocation architecture, as shown in the figure.
    Suppose 4 NioEventLoops (an array of 4 threads) are initialized and the default CPU core count is 2. Then the allocator PooledByteBufAllocator also holds 4 arenas. Creating a ByteBuf proceeds as follows:
  • First, an arena is obtained through PoolThreadCache. The job of PoolThreadCache is to use a ThreadLocal to stash one arena from the allocator's arena array (the least-used one) into one of its own fields.
  • Then, whenever a thread calls get() on threadCache, it obtains its own underlying arena: the first thread gets the first arena, the second thread the second, and so on. This binds each thread to one arena.
  • Besides allocating from the memory its arena manages, PoolThreadCache can also allocate from the ByteBuf cache lists it maintains. PooledByteBufAllocator holds tinyCacheSize, smallCacheSize and normalCacheSize; when threadCache.get() instantiates the PoolThreadCache during allocation, these are passed to its constructor to create the corresponding cache lists.