Volley Internals: The Network Request Layer

Preface

Google released Volley back in 2013. As someone who likes using this networking framework, I figured it was about time to dig into how it actually works.

Initialization

Everyone knows that initializing Volley means calling Volley.newRequestQueue(), so let's trace through the source starting there.
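
Before diving in, a quick reminder of what typical usage looks like (a minimal sketch: the context variable, the URL, and the listener bodies are placeholders):

    RequestQueue queue = Volley.newRequestQueue(context);
    StringRequest request = new StringRequest(Request.Method.GET, "http://example.com",
            new Response.Listener<String>() {
                @Override
                public void onResponse(String response) {
                    // Delivered on the main thread.
                }
            },
            new Response.ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    // Also delivered on the main thread.
                }
            });
    queue.add(request);

With that in mind, here is what newRequestQueue() actually does: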


     /**
     * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
     * You may set a maximum size of the disk cache in bytes.
     *
     * @param context A {@link Context} to use for creating the cache dir.
     * @param stack An {@link HttpStack} to use for the network, or null for default.
     * @param maxDiskCacheBytes the maximum size of the disk cache, in bytes. Use -1 for default size.
     * @return A started {@link RequestQueue} instance.
     */
    public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                // HurlStack is essentially a wrapper around HttpURLConnection
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                // HttpClientStack wraps Apache HttpClient
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue;
        if (maxDiskCacheBytes <= -1)
        {
            // No maximum size specified
            queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        }
        else
        {
            // Disk cache size specified
            queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
        }
        // Note: this is where the queue gets started
        queue.start();

        return queue;
    }

The key points in the code above are:
1. A BasicNetwork is created. The HTTP stack it wraps is chosen according to the SDK version; this is the class the framework actually uses to perform requests, and underneath it still relies on HttpURLConnection or HttpClient.
2. A RequestQueue is created. This is the queue that dispatches requests; its constructor defaults the number of network dispatcher threads to 4 and also creates an ExecutorDelivery, the interface in charge of handling responses and passing them back to the main thread (a sketch of the relevant constructors follows below).
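
For reference, the relevant RequestQueue constructors look roughly like this (paraphrased from the Volley source; exact code may differ slightly between versions):

    /** Default number of network request dispatcher threads. */
    private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

    public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }

    public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        // Responses are delivered through a Handler bound to the main looper,
        // which is how results end up back on the UI thread.
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }

Back in newRequestQueue(), queue.start() is then called, and that is where the dispatcher threads are spun up: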

    /**
     * Starts the dispatchers in this queue.
     */
    public void start() {
        // stop() quits any previously created cache and network dispatcher threads
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        // Four dispatcher threads by default; note that no thread pool is used here
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }
  1. CacheDispatcher and NetworkDispatcher both extend Thread, so five threads are started in total: one cache thread and four network worker threads.
  2. Both are handed the mNetworkQueue parameter, whose concrete type is PriorityBlockingQueue. Its Javadoc notes:

     * <p>{@code BlockingQueue} implementations are thread-safe. All
     * queuing methods achieve their effects atomically using internal
     * locks or other forms of concurrency control.

As the Javadoc says, BlockingQueue implementations are thread-safe, which is why a plain Queue is not used for the dispatch queues. RequestQueue also keeps mCurrentRequests, a plain HashSet that records the requests currently in the queue so that cancelAll() can do its job; since HashSet itself is not thread-safe, every access to it is wrapped in a synchronized block.
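
Here is how that guard looks in practice; the snippet is paraphrased from RequestQueue.cancelAll() in the Volley source, so minor details may differ between versions:

    public void cancelAll(RequestFilter filter) {
        // The HashSet is never touched outside this lock.
        synchronized (mCurrentRequests) {
            for (Request<?> request : mCurrentRequests) {
                if (filter.apply(request)) {
                    request.cancel();
                }
            }
        }
    }
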
At this point, Volley is fully set up and ready to go.

Making a Request

We hand our request over to Volley by calling RequestQueue.add():

    /**
     * Adds a Request to the dispatch queue.
     * @param request The request to service
     * @return The passed-in request
     */
    public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        // The sequence number is taken from an AtomicInteger's increment
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            // Once added here, the request gets picked up by the polling worker threads.
            // By default shouldCache() is true, so requests normally do not skip the cache.
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            // A request with a new cache key goes into the cache queue; duplicates of an in-flight request are parked in the waiting map instead.
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }

A request is not dropped straight into mNetworkQueue for the worker threads to execute; it first goes into mCacheQueue, and only after a cache miss is it moved into mNetworkQueue, where the actual network request is performed.
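
For completeness, here is the cache-lookup path inside CacheDispatcher.run(), simplified from the Volley source (the refresh-needed handling is omitted, and details may vary between versions):

    // Inside the CacheDispatcher loop, simplified:
    final Request<?> request = mCacheQueue.take();
    Cache.Entry entry = mCache.get(request.getCacheKey());
    if (entry == null || entry.isExpired()) {
        // Cache miss or stale entry: hand the request over to the network
        // dispatchers via the shared PriorityBlockingQueue.
        request.addMarker(entry == null ? "cache-miss" : "cache-hit-expired");
        mNetworkQueue.put(request);
    } else {
        // Cache hit: parse the cached bytes and deliver them without touching
        // the network at all.
        request.addMarker("cache-hit");
        Response<?> response = request.parseNetworkResponse(
                new NetworkResponse(entry.data, entry.responseHeaders));
        mDelivery.postResponse(request, response);
    }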

Executing the Request

Now let's look at the worker thread, NetworkDispatcher, and its polling loop:

        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            // release previous request object to avoid leaking request object when mQueue is drained.
            request = null;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                // The core network call; mNetwork is the BasicNetwork created during initialization
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                // Parse the response here; note this calls the parseNetworkResponse() overridden in your Request subclass
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    // Note: this is where the response gets written to the cache
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                // Hand the response to the main-thread Handler via the delivery
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }

Once the network call returns and a response is obtained, mDelivery.postResponse() hands the response back to the Request, which ends up calling the deliverResponse() method of the concrete request subclass. That subclass then invokes the listener's onResponse() callback, completing the round trip. The clever part is that ExecutorDelivery takes care of the cross-thread communication, switching from the worker thread to the UI thread.
The delivery mechanism relies on an Executor that simply posts every task to a Handler:

    /**
     * Creates a new response delivery interface.
     * @param handler {@link Handler} to post responses on
     */
    public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }
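
Since the Handler passed in here is the main-looper Handler created in the RequestQueue constructor, everything that Executor runs lands on the UI thread. The rest of the delivery looks roughly like this (paraphrased from ExecutorDelivery in the Volley source; the runnable's cancel and error handling is trimmed):

    @Override
    public void postResponse(Request<?> request, Response<?> response) {
        request.markDelivered();
        request.addMarker("post-response");
        // The runnable below is executed on the main thread via handler.post().
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, null));
    }

    // Inside ResponseDeliveryRunnable.run(), simplified:
    if (mResponse.isSuccess()) {
        // e.g. StringRequest.deliverResponse() simply calls mListener.onResponse(result)
        mRequest.deliverResponse(mResponse.result);
    } else {
        mRequest.deliverError(mResponse.error);
    }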

Summary

The whole Volley request pipeline should be clear by now: in essence, a handful of threads poll blocking task queues to achieve parallelism, thread safety and cross-thread communication are handled exactly where they are needed, and the cache-first scheduling of requests is what makes Volley efficient.
It does leave me with a small question, though: would concurrency be better if a thread pool were used instead? I'll need to analyze that properly. I also plan to keep digging into NetworkImageView, and everyone is welcome to join the discussion.
