Glide's DiskCache (Disk Cache)

Only by reading other people's excellent source code do you realize how poor your own earlier code was.

In Glide, many objects are created through factory classes, and DiskCache is no exception.

First, look at GlideBuilder's build method:


  @NonNull
  Glide build(@NonNull Context context) {
    if (sourceExecutor == null) {
      sourceExecutor = GlideExecutor.newSourceExecutor();
    }

    if (diskCacheExecutor == null) {
      diskCacheExecutor = GlideExecutor.newDiskCacheExecutor();
    }

    ....
    // a factory for the disk cache is instantiated here
    if (diskCacheFactory == null) {
      diskCacheFactory = new InternalCacheDiskCacheFactory(context);
    }

    if (engine == null) {
      engine =
          new Engine(
              memoryCache,
              diskCacheFactory, // the factory is passed on to Engine
              diskCacheExecutor,
              sourceExecutor,
              GlideExecutor.newUnlimitedSourceExecutor(),
              GlideExecutor.newAnimationExecutor(),
              isActiveResourceRetentionAllowed);
    }

    ....
 }

InternalCacheDiskCacheFactory builds the cache directory under the app's private storage directory.

You can also supply your own DiskCache.Factory from outside instead of using the built-in one.

Now look at the Engine constructor:

  Engine(MemoryCache cache,
      DiskCache.Factory diskCacheFactory,
      GlideExecutor diskCacheExecutor,
      GlideExecutor sourceExecutor,
      GlideExecutor sourceUnlimitedExecutor,
      GlideExecutor animationExecutor,
      Jobs jobs,
      EngineKeyFactory keyFactory,
      ActiveResources activeResources,
      EngineJobFactory engineJobFactory,
      DecodeJobFactory decodeJobFactory,
      ResourceRecycler resourceRecycler,
      boolean isActiveResourceRetentionAllowed) {
    this.cache = cache;
    this.diskCacheProvider = new LazyDiskCacheProvider(diskCacheFactory);
    ...
}

It just wraps diskCacheFactory; nothing special. A brief note: DiskCache.Factory is wrapped into LazyDiskCacheProvider, which implements DecodeJob.DiskCacheProvider. From this structure we can tell that DiskCacheProvider is an interface defined inside DecodeJob, and the interaction happens through that interface, which is why the wrapper is needed.


  private static class LazyDiskCacheProvider implements DecodeJob.DiskCacheProvider {

    private final DiskCache.Factory factory;
    private volatile DiskCache diskCache;

    LazyDiskCacheProvider(DiskCache.Factory factory) {
      this.factory = factory;
    }

    @VisibleForTesting
    synchronized void clearDiskCacheIfCreated() {
      if (diskCache == null) {
        return;
      }
      diskCache.clear();
    }

    @Override
    public DiskCache getDiskCache() {
      if (diskCache == null) {
        synchronized (this) {
          if (diskCache == null) {
            diskCache = factory.build();
          }
          if (diskCache == null) {
            diskCache = new DiskCacheAdapter();
          }
        }
      }
      return diskCache;
    }
  }
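The lazy-initialization idiom above, a volatile field checked twice (once without the lock), plus a no-op fallback when the factory returns null, can be illustrated in isolation. The following is a minimal sketch using hypothetical Cache, Factory, and LazyProvider types, not Glide's real classes:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of LazyDiskCacheProvider's double-checked locking.
// Cache/Factory/LazyProvider are hypothetical stand-ins for illustration.
public class LazyProviderSketch {
    interface Cache { String name(); }
    interface Factory { Cache build(); }

    static class LazyProvider {
        private final Factory factory;
        private volatile Cache cache; // volatile is required for safe publication

        LazyProvider(Factory factory) { this.factory = factory; }

        Cache get() {
            if (cache == null) {              // first check, no lock taken
                synchronized (this) {
                    if (cache == null) {      // second check, under the lock
                        cache = factory.build();
                    }
                    if (cache == null) {      // the factory may return null:
                        cache = () -> "no-op"; // degrade to a no-op cache
                    }
                }
            }
            return cache;
        }
    }

    public static void main(String[] args) {
        AtomicInteger builds = new AtomicInteger();
        LazyProvider p = new LazyProvider(() -> {
            builds.incrementAndGet();
            return () -> "real";
        });
        System.out.println(p.get().name()); // real
        p.get();
        System.out.println(builds.get());   // 1 -- the factory ran only once

        // A factory that fails: the provider degrades to the no-op cache,
        // mirroring Glide's DiskCacheAdapter fallback.
        LazyProvider broken = new LazyProvider(() -> null);
        System.out.println(broken.get().name()); // no-op
    }
}
```

Glide's real DiskCacheAdapter plays the same role as the no-op lambda here: callers always get a usable DiskCache, even if building the real one failed.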

The diskCacheProvider is passed in when DecodeJob is constructed, and from there it is passed on to DecodeHelper, which has the following method:

  DiskCache getDiskCache() {
    return diskCacheProvider.getDiskCache();
  }

Looking at LazyDiskCacheProvider's code, factory.build() ends up calling DiskLruCacheFactory's build method (the default InternalCacheDiskCacheFactory extends DiskLruCacheFactory):

package com.bumptech.glide.load.engine.cache;

import java.io.File;

/**
 * Creates an {@link com.bumptech.glide.disklrucache.DiskLruCache} based disk cache in the specified
 * disk cache directory.
 *
 * <p>If you need to make I/O access before returning the cache directory use the {@link
 * DiskLruCacheFactory#DiskLruCacheFactory(CacheDirectoryGetter, long)} constructor variant.
 */
// Public API.
@SuppressWarnings("unused")
public class DiskLruCacheFactory implements DiskCache.Factory {
  private final long diskCacheSize;
  private final CacheDirectoryGetter cacheDirectoryGetter;

  /**
   * Interface called out of UI thread to get the cache folder.
   */
  public interface CacheDirectoryGetter {
    File getCacheDirectory();
  }

  public DiskLruCacheFactory(final String diskCacheFolder, long diskCacheSize) {
    this(new CacheDirectoryGetter() {
      @Override
      public File getCacheDirectory() {
        return new File(diskCacheFolder);
      }
    }, diskCacheSize);
  }

  public DiskLruCacheFactory(final String diskCacheFolder, final String diskCacheName,
                             long diskCacheSize) {
    this(new CacheDirectoryGetter() {
      @Override
      public File getCacheDirectory() {
        return new File(diskCacheFolder, diskCacheName);
      }
    }, diskCacheSize);
  }

  /**
   * When using this constructor {@link CacheDirectoryGetter#getCacheDirectory()} will be called out
   * of UI thread, allowing to do I/O access without performance impacts.
   *
   * @param cacheDirectoryGetter Interface called out of UI thread to get the cache folder.
   * @param diskCacheSize        Desired max bytes size for the LRU disk cache.
   */
  // Public API.
  @SuppressWarnings("WeakerAccess")
  public DiskLruCacheFactory(CacheDirectoryGetter cacheDirectoryGetter, long diskCacheSize) {
    this.diskCacheSize = diskCacheSize;
    this.cacheDirectoryGetter = cacheDirectoryGetter;
  }

  @Override
  public DiskCache build() {
    File cacheDir = cacheDirectoryGetter.getCacheDirectory();

    if (cacheDir == null) {
      return null;
    }

    if (!cacheDir.mkdirs() && (!cacheDir.exists() || !cacheDir.isDirectory())) {
      return null;
    }

    return DiskLruCacheWrapper.create(cacheDir, diskCacheSize);
  }
}
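One subtlety in build() above: File.mkdirs() returns false both when creation fails and when the directory already exists, so the extra exists()/isDirectory() checks are needed to tell those cases apart. A small standalone sketch of the same guard (ensureDir is a hypothetical helper, the logical complement of build()'s rejection condition):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Demonstrates why DiskLruCacheFactory.build() cannot rely on mkdirs() alone:
// mkdirs() returns false when the directory already exists, not only on failure.
public class MkdirsCheck {
    // Accept the directory if we created it just now, or if it already
    // exists and really is a directory (not a regular file).
    static boolean ensureDir(File dir) {
        return dir.mkdirs() || (dir.exists() && dir.isDirectory());
    }

    public static void main(String[] args) throws IOException {
        File base = Files.createTempDirectory("cache-demo").toFile();

        File fresh = new File(base, "disk_cache");
        System.out.println(ensureDir(fresh));  // true: created now
        System.out.println(fresh.mkdirs());    // false: already exists
        System.out.println(ensureDir(fresh));  // true: exists and is a directory

        // A regular file occupying the cache path must be rejected.
        File asFile = new File(base, "not_a_dir");
        Files.createFile(asFile.toPath());
        System.out.println(ensureDir(asFile)); // false: exists but is not a directory
    }
}
```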
Now look at DiskLruCacheWrapper.create(cacheDir, diskCacheSize).

DiskLruCacheWrapper implements the DiskCache interface, and create returns a DiskLruCacheWrapper instance.

All the classes and interfaces above are Glide-side plumbing; no actual disk caching has happened yet. The real work starts in DiskLruCacheWrapper's implementation, which wraps DiskLruCache, a well-known library in the Android world that Google also recommends. In principle DiskLruCache could be swapped out for a different implementation, which is exactly how the upper layers stay decoupled from the underlying storage.

Now let's focus on DiskLruCache.

The underlying idea is not complicated. First, it relies on LinkedHashMap with accessOrder set to true, which moves the most recently accessed entry to the tail of the list.
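This accessOrder behavior is easy to verify with plain JDK code:

```java
import java.util.LinkedHashMap;

// With accessOrder=true, get()/put() move the touched entry to the tail,
// so iteration order is least-recently-used first -- the property
// DiskLruCache relies on for eviction.
public class AccessOrderDemo {
    public static void main(String[] args) {
        LinkedHashMap<String, String> lru =
                new LinkedHashMap<>(16, 0.75f, /* accessOrder = */ true);
        lru.put("a", "1");
        lru.put("b", "2");
        lru.put("c", "3");

        lru.get("a"); // touching "a" moves it to the tail

        // The head of the iteration is now the least recently used entry.
        System.out.println(lru.keySet()); // [b, c, a]
    }
}
```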

  @Override
  public void put(Key key, Writer writer) {
    // We want to make sure that puts block so that data is available when put completes. We may
    // actually not write any data if we find that data is written by the time we acquire the lock.
    String safeKey = safeKeyGenerator.getSafeKey(key);
    writeLocker.acquire(safeKey);
    try {
      if (Log.isLoggable(TAG, Log.VERBOSE)) {
        Log.v(TAG, "Put: Obtained: " + safeKey + " for Key: " + key);
      }
      try {
        // We assume we only need to put once, so if data was written while we were trying to get
        // the lock, we can simply abort.
        DiskLruCache diskCache = getDiskCache();
        Value current = diskCache.get(safeKey);
        if (current != null) {
          return;
        }

        DiskLruCache.Editor editor = diskCache.edit(safeKey);
        if (editor == null) {
          throw new IllegalStateException("Had two simultaneous puts for: " + safeKey);
        }
        try {
          File file = editor.getFile(0);
          if (writer.write(file)) {
            editor.commit();
          }
        } finally {
          editor.abortUnlessCommitted();
        }
      } catch (IOException e) {
        if (Log.isLoggable(TAG, Log.WARN)) {
          Log.w(TAG, "Unable to put to disk cache", e);
        }
      }
    } finally {
      writeLocker.release(safeKey);
    }
  }

When a file needs to be stored, a key is generated together with an Entry. The Entry wraps the key, a long array, and two File arrays. The length of these arrays is the number of values a single key can map to (fixed when DiskLruCache is instantiated): the cleanFiles array holds the locations of the committed cache files, the dirtyFiles array holds the locations of the temporary in-progress files, and the long array records each cached file's size.

Once the key and Entry are created, they are put into lruEntries (a LinkedHashMap). The Entry is wrapped in an Editor (the two reference each other), and the Editor is exposed to the caller; at this point a DIRTY record is written to the journal, marking the start of a write. Through the Editor the caller obtains the target file and writes the data into it (the write itself happens outside DiskLruCache, which is flexible). The crucial step is commit: it measures the file's size, fills in the long array, and updates the total used size (at this point a CLEAN record is written to the journal). Finally, and importantly, if size exceeds maxSize, entries are removed from the head of the list until size drops back below maxSize.
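The eviction logic just described (track total bytes, evict from the head of an access-ordered LinkedHashMap until size is at most maxSize) can be sketched without any of the journal or Editor machinery. LruSizeSketch below is a hypothetical simplification for illustration, not DiskLruCache's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of DiskLruCache's size-based eviction: entries live in a
// LinkedHashMap with accessOrder=true, each entry tracks its byte size, and
// trimToSize() evicts from the head (least recently used) until under maxSize.
// The real DiskLruCache also maintains a journal and dirty/clean files.
public class LruSizeSketch {
    private final LinkedHashMap<String, Long> entries =
            new LinkedHashMap<>(16, 0.75f, true);
    private final long maxSize;
    private long size; // total bytes currently cached

    LruSizeSketch(long maxSize) { this.maxSize = maxSize; }

    void put(String key, long byteCount) {
        Long old = entries.put(key, byteCount);
        size += byteCount - (old == null ? 0 : old);
        trimToSize();
    }

    Long get(String key) { return entries.get(key); } // also refreshes LRU order

    private void trimToSize() {
        while (size > maxSize) {
            // Head of the iteration = least recently used entry.
            Map.Entry<String, Long> eldest = entries.entrySet().iterator().next();
            entries.remove(eldest.getKey());
            size -= eldest.getValue();
        }
    }

    public static void main(String[] args) {
        LruSizeSketch cache = new LruSizeSketch(100);
        cache.put("a", 40);
        cache.put("b", 40);
        cache.get("a");       // "a" becomes the most recently used entry
        cache.put("c", 40);   // 120 > 100: evicts "b", the LRU entry
        System.out.println(cache.get("b")); // null
        System.out.println(cache.get("a")); // 40
    }
}
```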

get is simpler: it looks up the File by key.

There are plenty of articles online covering this topic; feel free to dig deeper on your own.

 

Topics for further study:

(1) How is DecodeJob instantiated?

(2) How are SafeKeyGenerator and DiskCacheWriteLocker implemented, and what are they for?

(3) How does LinkedHashMap work internally?

 
