Volley Framework Source Code Analysis

Volley is an asynchronous networking and image-loading framework that Google announced at Google I/O 2013. It is very well designed and highly extensible, which makes it well worth studying. This article focuses on analyzing its source code; I won't cover how to use Volley here — if you have questions, refer to the official examples and documentation.

Although Volley's codebase is not large, it is easy to lose track of how the classes relate to each other as you read. A class diagram of the framework was attached here; refer back to it whenever the class relationships become unclear later on.


Most of the time we start using Volley from the methods of the Volley.java class, but I think it's worth first understanding the structure of a request and a response, which will make the rest of the code easier to follow. So let's start with the Request class, which represents a request:

Request

First, let's look at its fields:

    // Default encoding for POST and PUT parameters
    private static final String DEFAULT_PARAMS_ENCODING = "UTF-8";

    // The HTTP method of this request
    private final int mMethod;

    // HTTP methods supported by Volley
    public interface Method {
        int DEPRECATED_GET_OR_POST = -1;
        int GET = 0;
        int POST = 1;
        int PUT = 2;
        int DELETE = 3;
        int HEAD = 4;
        int OPTIONS = 5;
        int TRACE = 6;
        int PATCH = 7;
    }

    // URL of this request
    private final String mUrl;

    // Redirect URL used when a 3xx response is received
    private String mRedirectUrl;

    // Unique identifier (ID) of the request
    private String mIdentifier;

    // Default tag for traffic-stats accounting
    private final int mDefaultTrafficStatsTag;

    // Listener invoked when the response carries an error
    private final Response.ErrorListener mErrorListener;

    // Sequence number of the request
    private Integer mSequence;

    // The request queue this request belongs to
    private RequestQueue mRequestQueue;

    // Whether responses to this request may be cached
    private boolean mShouldCache = true;

    // Whether this request has been canceled
    private boolean mCanceled = false;

    // Whether a response has already been delivered for this request
    private boolean mResponseDelivered = false;

    // Retry policy for this request
    private RetryPolicy mRetryPolicy;

    // Cache entry held for this request
    private Cache.Entry mCacheEntry = null;

    // A tag used to cancel this request
    private Object mTag;

I have omitted some members here, and some of the ones shown will not appear in our main walkthrough; feel free to dig deeper into the code if you're interested. For now, just get a general impression of these fields. Next, the constructor:
public Request(int method, String url, Response.ErrorListener listener) {
        mMethod = method;
        mUrl = url;
        mIdentifier = createIdentifier(method, url);
        mErrorListener = listener;
        setRetryPolicy(new DefaultRetryPolicy());

        mDefaultTrafficStatsTag = findDefaultTrafficStatsTag(url);
    }

All other constructors end up calling this one, directly or indirectly; it stores the essential fields. The createIdentifier method generates a unique identifier from the request method, the URL, the current time, and a few other elements. The retry policy is not central to our walkthrough, so we won't dig into it; the default traffic-stats tag is the hash code of the URL's host, which is also off our main path. Most of the remaining methods in this class are getters and setters for the fields above; the ones worth a closer look are these:
abstract protected Response<T> parseNetworkResponse(NetworkResponse response);

As the name suggests, this parses the data of a network response. Only a subclass knows what to parse, so the work is forced onto subclasses by making the method abstract.
abstract protected void deliverResponse(T response);

Subclasses implement this to dispatch the result to the listener registered with them; again, only the subclass knows how to handle the result, so this method is abstract too.
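To make this template-method idea concrete, here is a minimal plain-Java sketch. The Fake* classes are simplified stand-ins of my own, not the real Volley classes: the abstract base defers parsing and delivery to the subclass, just as Request does.

```java
import java.nio.charset.StandardCharsets;

// Simplified stand-in for Volley's NetworkResponse, for illustration only.
class FakeNetworkResponse {
    final byte[] data;
    FakeNetworkResponse(byte[] data) { this.data = data; }
}

// Mirrors the two abstract hooks on Request: parsing and delivery
// are deferred to subclasses.
abstract class FakeRequest<T> {
    abstract T parseNetworkResponse(FakeNetworkResponse response);
    abstract void deliverResponse(T result);
}

// A string request: parse the bytes as UTF-8, hand the result to a "listener".
class FakeStringRequest extends FakeRequest<String> {
    String delivered;

    @Override
    String parseNetworkResponse(FakeNetworkResponse response) {
        return new String(response.data, StandardCharsets.UTF_8);
    }

    @Override
    void deliverResponse(String result) {
        delivered = result; // a real request would invoke its Listener here
    }
}

class TemplateDemo {
    static String run(String body) {
        FakeStringRequest req = new FakeStringRequest();
        String parsed = req.parseNetworkResponse(
                new FakeNetworkResponse(body.getBytes(StandardCharsets.UTF_8)));
        req.deliverResponse(parsed);
        return req.delivered;
    }
}
```

The framework drives the generic flow (parse, then deliver) while each subclass supplies the type-specific pieces.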
 protected Map<String, String> getParams() throws AuthFailureError {
        return null;
    }

When a POST or PUT request needs to carry data, override this method and return a map representing that data.
 public byte[] getBody() throws AuthFailureError {
        Map<String, String> params = getParams();
        if (params != null && params.size() > 0) {
            return encodeParameters(params, getParamsEncoding());
        }
        return null;
    }

This uses the return value of getParams and produces an encoded byte array.
 private byte[] encodeParameters(Map<String, String> params, String paramsEncoding) {
        StringBuilder encodedParams = new StringBuilder();
        try {
            for (Map.Entry<String, String> entry : params.entrySet()) {
                encodedParams.append(URLEncoder.encode(entry.getKey(), paramsEncoding));
                encodedParams.append('=');
                encodedParams.append(URLEncoder.encode(entry.getValue(), paramsEncoding));
                encodedParams.append('&');
            }
            return encodedParams.toString().getBytes(paramsEncoding);
        } catch (UnsupportedEncodingException uee) {
            throw new RuntimeException("Encoding not supported: " + paramsEncoding, uee);
        }
    }

This is how the parameters are encoded.
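The scheme is easy to try outside of Android. The sketch below reproduces the same logic in plain Java; the demo parameters are my own, and note the trailing '&' that Volley also leaves in place.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

class ParamEncoder {
    // Same scheme as Volley's encodeParameters: url-encode each key and value,
    // join with '=' and '&', then convert the whole string to bytes.
    static byte[] encodeParameters(Map<String, String> params, String paramsEncoding) {
        StringBuilder encoded = new StringBuilder();
        try {
            for (Map.Entry<String, String> entry : params.entrySet()) {
                encoded.append(URLEncoder.encode(entry.getKey(), paramsEncoding));
                encoded.append('=');
                encoded.append(URLEncoder.encode(entry.getValue(), paramsEncoding));
                encoded.append('&'); // a trailing '&' is left in place, as in Volley
            }
            return encoded.toString().getBytes(paramsEncoding);
        } catch (UnsupportedEncodingException uee) {
            throw new RuntimeException("Encoding not supported: " + paramsEncoding, uee);
        }
    }

    static String demo() {
        Map<String, String> params = new LinkedHashMap<>(); // predictable order
        params.put("q", "a b");   // the space becomes '+'
        params.put("lang", "en");
        return new String(encodeParameters(params, "UTF-8"), StandardCharsets.UTF_8);
    }
}
```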

There is also the fairly important finish method; I've removed the logging statements here:

void finish(final String tag) {
        if (mRequestQueue != null) {
            mRequestQueue.finish(this);
        }
    }

As you can see, it simply calls the request queue's finish method, which we will come back to later.

That covers the Request class in broad strokes; a general impression is enough. Now let's look at the Response class, which represents a response.

Response

public final T result;

    public final Cache.Entry cacheEntry;

    public final VolleyError error;

The Response class is very simple; the only interesting members are the three above: the parsed result data, the cache entry, and an object carrying the request's error information. We won't go deeper here.

Now that we have a rough picture of requests and responses, let's look at the Volley class, our usual entry point:

Volley

public static RequestQueue newRequestQueue(Context context) {
        return newRequestQueue(context, null);
    }

This is the overload we use most often; it delegates to the following chain of factory methods:
public static RequestQueue newRequestQueue(Context context, HttpStack stack)
    {
    	return newRequestQueue(context, stack, -1);
    }


 public static RequestQueue newRequestQueue(Context context, int maxDiskCacheBytes) {
        return newRequestQueue(context, null, maxDiskCacheBytes);
    }


 public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
    	// getCacheDir() returns the /data/data/<application package>/cache directory
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }
        Network network = new BasicNetwork(stack);
        
        RequestQueue queue;
        
        if (maxDiskCacheBytes <= -1)
        {
        	// No maximum size specified
        	queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        }
        else
        {
        	// Disk cache size specified
        	queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
        }

        queue.start();

        return queue;
    }

All of them eventually call the last one. A quick explanation of its parameters: HttpStack encapsulates the strategy used to access the network, and maxDiskCacheBytes is the maximum size of the local disk cache. In our call, stack is null and maxDiskCacheBytes is -1. Looking at the method body: it first creates a network stack appropriate for the SDK version. Below API 9 it uses HttpClientStack which, as the name suggests, wraps HttpClient; on API 9 and above it uses HurlStack, which wraps HttpURLConnection. Feel free to dig into those two classes, but we won't focus on them here. It then creates the Network that performs network requests and the DiskBasedCache that serves the local cache, and constructs a RequestQueue from these two objects. Let's take a quick look at Network and DiskBasedCache:

public NetworkResponse performRequest(Request<?> request) throws VolleyError;

The Network interface is very simple: its single method processes a request and returns a NetworkResponse object, which we will see again later.

public interface Cache {
    /**
     * Retrieves an entry from the cache.
     * @param key Cache key
     * @return An {@link Entry} or null in the event of a cache miss
     */
    public Entry get(String key);

    /**
     * Adds or replaces an entry to the cache.
     * @param key Cache key
     * @param entry Data to store and metadata for cache coherency, TTL, etc.
     */
    public void put(String key, Entry entry);

    /**
     * Performs any potentially long-running actions needed to initialize the cache;
     * will be called from a worker thread.
     */
    public void initialize();

    /**
     * Invalidates an entry in the cache.
     * @param key Cache key
     * @param fullExpire True to fully expire the entry, false to soft expire
     */
    public void invalidate(String key, boolean fullExpire);

    /**
     * Removes an entry from the cache.
     * @param key Cache key
     */
    public void remove(String key);

    /**
     * Empties the cache.
     */
    public void clear();

    /**
     * Data and metadata for an entry returned by the cache.
     */
    public static class Entry {
        /** The data returned from cache. */
        public byte[] data;

        /** ETag for cache coherency. */
        public String etag;

        /** Date of this response as reported by the server. */
        public long serverDate;

        /** The last modified date for the requested object. */
        public long lastModified;

        /** TTL for this record. */
        public long ttl;

        /** Soft TTL for this record. */
        public long softTtl;

        /** Immutable response headers as received from server; must be non-null. */
        public Map<String, String> responseHeaders = Collections.emptyMap();

        /** True if the entry is expired. */
        public boolean isExpired() {
            return this.ttl < System.currentTimeMillis();
        }

        /** True if a refresh is needed from the original data source. */
        public boolean refreshNeeded() {
            return this.softTtl < System.currentTimeMillis();
        }
    }

}

DiskBasedCache implements the Cache interface, so skimming the interface's methods is enough: it is a simple contract for cache operations, plus an inner class representing a cache entry.
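For illustration, here is a toy in-memory analog of that contract. It is my own sketch, not how DiskBasedCache works (that class persists entries to disk), and Entry is reduced to data plus a hard TTL:

```java
import java.util.HashMap;
import java.util.Map;

// A toy in-memory analog of Volley's Cache interface, for illustration only.
class MemoryCache {
    static class Entry {
        byte[] data;
        long ttl; // absolute expiry time in milliseconds

        boolean isExpired() {
            return this.ttl < System.currentTimeMillis();
        }
    }

    private final Map<String, Entry> map = new HashMap<>();

    synchronized Entry get(String key)            { return map.get(key); }
    synchronized void put(String key, Entry entry) { map.put(key, entry); }
    synchronized void remove(String key)           { map.remove(key); }
    synchronized void clear()                      { map.clear(); }
}
```

The point of the interface is exactly this narrow contract: the dispatchers only ever call get/put/remove, so any storage backend can be swapped in.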

Back to the Volley class: it finally calls start on the RequestQueue it created. Looking at the class as a whole, the design is excellent and wide open to extension, programming against interfaces throughout. If we want to add our own way of making network requests, we only need to implement HttpStack, and we can easily build a customized request queue.

RequestQueue

The Volley class creates a RequestQueue object, so let's start from its constructors:

public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }


 public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }


public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

All the constructors we call eventually funnel into the third one. A quick explanation of the parameters: Cache and Network are already familiar; threadPoolSize is the number of network dispatch threads, with DEFAULT_NETWORK_THREAD_POOL_SIZE defaulting to 4; and the ResponseDelivery interface represents the delivery of a response. Its methods are:

    // Parses a response from the network or cache and delivers it
    public void postResponse(Request<?> request, Response<?> response);

    // Same as above, but runs the given Runnable after delivery
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable);

    // Posts an error for the given request
    public void postError(Request<?> request, VolleyError error);

Here we use its implementation ExecutorDelivery, which is covered later in this article.

The Volley class calls RequestQueue's start method; let's look at it now:

 public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

The method is short: it creates one cache dispatcher and four network dispatchers (by default). Both classes extend Thread, and start launches the threads immediately.
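The dispatcher pattern — a thread blocking on take() of a PriorityBlockingQueue — can be modeled in plain Java. In this sketch, integers stand in for requests, and -1 is a poison pill of my own convention (Volley's quit() uses interrupt() instead):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.PriorityBlockingQueue;

class DispatcherDemo {
    // A worker thread blocks on take() and "processes" whatever arrives,
    // until it sees the -1 poison pill.
    static List<Integer> run() {
        PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();
        List<Integer> processed = new CopyOnWriteArrayList<>();

        Thread dispatcher = new Thread(() -> {
            while (true) {
                try {
                    int item = queue.take(); // blocks while the queue is empty
                    if (item == -1) return;  // quit signal (our convention)
                    processed.add(item);
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        dispatcher.start();

        queue.add(3);
        queue.add(1);
        queue.add(2);
        try {
            Thread.sleep(200);   // give the dispatcher time to drain the queue
            queue.add(-1);       // then stop it
            dispatcher.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed;
    }
}
```

This is why producers (RequestQueue.add) and consumers (the dispatchers) need no explicit signaling: the blocking queue does the hand-off.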

Normally, after creating a RequestQueue, we add requests to it. Before looking at the add method, a few fields:

    // If a request is in flight and cacheable, subsequent requests for the
    // same cache key are staged in this map
    private final Map<String, Queue<Request<?>>> mWaitingRequests =
            new HashMap<String, Queue<Request<?>>>();

    // The set of all requests currently being processed by this queue
    private final Set<Request<?>> mCurrentRequests = new HashSet<Request<?>>();

    // Unbounded priority queue of requests served from the local cache
    private final PriorityBlockingQueue<Request<?>> mCacheQueue =
        new PriorityBlockingQueue<Request<?>>();

    // Unbounded priority queue of requests going out over the network
    private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
        new PriorityBlockingQueue<Request<?>>();

Then the add method:


 public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }

It proceeds in the following steps:

1. Inject the current request queue into the request.

2. Add the request to the mCurrentRequests set.

3. If the request is not cacheable, put it straight onto the network queue.

4. If it is cacheable, check whether mWaitingRequests contains the request's cache key. If it does not, put a key/null entry into mWaitingRequests to mark the key as in flight, and add the request to the cache queue.

5. If mWaitingRequests does contain the key, take the queue stored as its value (creating one if it doesn't exist yet) and enqueue the current request there.

So add merely places the request onto the cache queue or the network queue. How does the request actually get executed? For that, we turn to NetworkDispatcher and CacheDispatcher.
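The staging logic of add() and the corresponding release in finish() can be sketched in plain Java. Strings stand in for requests here; this is only a model of the bookkeeping, not the real class:

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

// Models RequestQueue's de-duplication: the first request for a cache key
// goes to the cache queue; later requests with the same key are parked
// until the first one finishes.
class WaitingRequestsDemo {
    private final Map<String, Queue<String>> waiting = new HashMap<>();
    private final Queue<String> cacheQueue = new LinkedList<>();

    // Returns true if the request was dispatched, false if it was parked.
    synchronized boolean add(String cacheKey, String request) {
        if (waiting.containsKey(cacheKey)) {
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) staged = new LinkedList<>();
            staged.add(request);
            waiting.put(cacheKey, staged);
            return false;
        }
        waiting.put(cacheKey, null); // null marks "a request for this key is in flight"
        cacheQueue.add(request);
        return true;
    }

    // Models RequestQueue.finish(): release parked duplicates to the cache
    // queue, and return how many were released.
    synchronized int finish(String cacheKey) {
        Queue<String> staged = waiting.remove(cacheKey);
        if (staged != null) {
            cacheQueue.addAll(staged);
            return staged.size();
        }
        return 0;
    }
}
```

The payoff: N identical in-flight requests cost one network round-trip; the duplicates are later served from the cache the first request primed.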

NetworkDispatcher

public NetworkDispatcher(BlockingQueue<Request<?>> queue,
            Network network, Cache cache,
            ResponseDelivery delivery) {
        mQueue = queue;
        mNetwork = network;
        mCache = cache;
        mDelivery = delivery;
    }

First the constructor: the four parameters were injected when the NetworkDispatcher was created; refer back to the earlier sections if any of them are unclear.

Since it extends Thread, the method we care about most is run:

 public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            Request<?> request;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }

Let's walk through its main flow:

1. Take a request from mQueue, which is the RequestQueue's network queue. Because it is a blocking queue, run blocks when there is nothing to take.

2. Check whether the request has been canceled; if so, call the request's finish method, which indirectly calls RequestQueue.finish.

3. Use the Network implementation to perform the network request and wrap the result in a NetworkResponse.

4. If the server returned 304 and a response has already been delivered, just finish.

5. Parse the NetworkResponse into the Response we looked at earlier.

6. If the request is cacheable and the response carries a non-null cache entry, store it via the Cache implementation.

7. Have the ResponseDelivery instance deliver the result.

That class is now largely clear; next, CacheDispatcher's run method.

CacheDispatcher

public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }

            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }

The method is fairly long, so let's take it step by step:

1. As before, check whether the request was canceled, and finish it if so.

2. Try to fetch the request's cache entry; on a miss, put the request onto the network queue.

3. On a hit, check whether the entry has fully expired; if it has, put the request onto the network queue.

4. If it hasn't expired, build a NetworkResponse from the cache entry and parse it into a Response object.

5. Still within the unexpired case, check whether a refresh is needed (soft expiry). If so, deliver the cached response as an intermediate result and also put the request onto the network queue so it gets refreshed.

6. If no refresh is needed, simply deliver the response via the ResponseDelivery instance.
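The three states the dispatcher distinguishes — fully expired, soft-expired (needs refresh), and fresh — come from Entry.isExpired() (hard TTL) and Entry.refreshNeeded() (soft TTL). A tiny sketch of that decision, with an explicit "now" parameter of my own for testability:

```java
// Models the cache-state decision made in CacheDispatcher.run():
// ttl/softTtl are absolute timestamps, as in Cache.Entry.
class TtlDemo {
    static String classify(long now, long ttl, long softTtl) {
        if (ttl < now) return "expired";            // go to the network
        if (softTtl < now) return "needs-refresh";  // deliver cache, then refresh
        return "fresh";                             // deliver cache only
    }
}
```

The soft TTL is what gives Volley its "show stale data immediately, update in the background" behavior.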

Both NetworkDispatcher and CacheDispatcher use ResponseDelivery; here its concrete instance is ExecutorDelivery. Let's take a quick look:

ExecutorDelivery

 @Override
    public void postResponse(Request<?> request, Response<?> response) {
        postResponse(request, response, null);
    }

    @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }

As you can see, all delivery goes through mResponsePoster; here is how it is defined:

public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }

The object is created in the class's constructor. Despite the Executor interface, it is not a thread pool: its execute simply calls the handler's post method. Remember where that handler came from? Right, it was created in RequestQueue's constructor with the main thread's Looper, so its post method runs work on the main thread. Per the code above, what gets executed is a ResponseDeliveryRunnable; here is its definition:
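This "Executor that is really just handler.post" trick can be modeled in plain Java, with a queue standing in for the main thread's message loop (a sketch, not Android code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executor;
import java.util.concurrent.LinkedBlockingQueue;

class PostingExecutorDemo {
    static String run() {
        // Stands in for the main thread's Handler/Looper message queue.
        BlockingQueue<Runnable> mainLoop = new LinkedBlockingQueue<>();
        StringBuilder out = new StringBuilder();

        // Like ExecutorDelivery: execute() does not run the task itself,
        // it just hands it to the other loop (handler.post(command)).
        Executor poster = mainLoop::add;

        poster.execute(() -> out.append("delivered on main loop"));

        // The "main thread" drains its queue and runs the posted task.
        Runnable task = mainLoop.poll();
        if (task != null) task.run();
        return out.toString();
    }
}
```

The design choice is neat: by hiding the Handler behind the Executor interface, delivery can be redirected anywhere (a test thread, an immediate executor) without changing the dispatchers.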

 private class ResponseDeliveryRunnable implements Runnable {
        private final Request mRequest;
        private final Response mResponse;
        private final Runnable mRunnable;

        public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
            mRequest = request;
            mResponse = response;
            mRunnable = runnable;
        }

        @SuppressWarnings("unchecked")
        @Override
        public void run() {
            // If this request has canceled, finish it and don't deliver.
            if (mRequest.isCanceled()) {
                mRequest.finish("canceled-at-delivery");
                return;
            }

            // Deliver a normal response or error, depending.
            if (mResponse.isSuccess()) {
                mRequest.deliverResponse(mResponse.result);
            } else {
                mRequest.deliverError(mResponse.error);
            }

            // If this is an intermediate response, add a marker, otherwise we're done
            // and the request can be finished.
            if (mResponse.intermediate) {
                mRequest.addMarker("intermediate-response");
            } else {
                mRequest.finish("done");
            }

            // If we have been provided a post-delivery runnable, run it.
            if (mRunnable != null) {
                mRunnable.run();
            }
       }
    }

The key line: when the response is successful, it calls the request's deliverResponse method. Recall that this is an abstract method; let's take JsonRequest as a concrete subclass to see what it actually does.

JsonRequest

 @Override
    protected void deliverResponse(T response) {
        mListener.onResponse(response);
    }

So it calls mListener's method. Where does mListener come from?

 public JsonRequest(int method, String url, String requestBody, Listener<T> listener,
            ErrorListener errorListener) {
        super(method, url, errorListener);
        mListener = listener;
        mRequestBody = requestBody;
    }

It's passed in through the constructor — a callback. At this point the whole path is clear, except for one loose end: RequestQueue's finish method, which the request's finish method calls indirectly. Let's look at it.

RequestQueue

<T> void finish(Request<T> request) {
        // Remove from the set of requests currently being processed.
        synchronized (mCurrentRequests) {
            mCurrentRequests.remove(request);
        }
        synchronized (mFinishedListeners) {
          for (RequestFinishedListener<T> listener : mFinishedListeners) {
            listener.onRequestFinished(request);
          }
        }

        if (request.shouldCache()) {
            synchronized (mWaitingRequests) {
                String cacheKey = request.getCacheKey();
                Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
                if (waitingRequests != null) {
                    if (VolleyLog.DEBUG) {
                        VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                                waitingRequests.size(), cacheKey);
                    }
                    // Process all queued up requests. They won't be considered as in flight, but
                    // that's not a problem as the cache has been primed by 'request'.
                    mCacheQueue.addAll(waitingRequests);
                }
            }
        }
    }

This mainly removes the request from mCurrentRequests. The important part, though, is mWaitingRequests, which holds the staged duplicate requests: on finish, they are all released onto the cache queue, where they can read the cache the finished request just primed.


That concludes the walkthrough of Volley's main source path. The code is beautifully written: programmed against interfaces and extremely extensible.

(The original post attached Volley's overall architecture diagram and request flow diagram here.)



