Managing Your App's Memory (Translation)

How Android Manages Memory


Android does not provide swap space; it manages memory through paging and memory-mapping (mmapping). This means that any object you allocate with new, and any memory-mapped data (such as an opened file), stays resident in RAM. The only way to release that memory is to release the references your app holds to those objects and files so that the garbage collector can reclaim it. (The one exception is memory that is mmapped in without modification, such as code: the system can page it out of RAM and reload it later if it needs the memory elsewhere.)

Sharing Memory

In order to fit everything it needs in RAM, Android tries to share RAM pages across processes. It can do so in the following ways:

  • Each app process is forked from an existing process called Zygote. The Zygote process starts when the system boots and loads common framework code and resources (such as activity themes). To start a new app process, the system forks the Zygote process then loads and runs the app's code in the new process. This allows most of the RAM pages allocated for framework code and resources to be shared across all app processes. (Translator's note: fork here is the Linux fork mechanism; put simply, the common framework code and resources exist only once in RAM and are shared, which greatly reduces overall memory use.)
  • Most static data is mmapped into a process. This not only allows that same data to be shared between processes but also allows it to be paged out when needed. Example static data include: Dalvik code (by placing it in a pre-linked .odex file for direct mmapping), app resources (by designing the resource table to be a structure that can be mmapped and by aligning the zip entries of the APK), and traditional project elements like native code in .so files. (Translator's note: aligning the APK's zip entries is the zipalign optimization that anyone familiar with the build process will recognize.)
  • In many places, Android shares the same dynamic RAM across processes using explicitly allocated shared memory regions (either with ashmem or gralloc). For example, window surfaces use shared memory between the app and screen compositor, and cursor buffers use shared memory between the content provider and client.

Due to the extensive use of shared memory, determining how much memory your app is using requires care. Techniques to properly determine your app's memory use are discussed in Investigating Your RAM Usage.
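As a rough illustration of measuring this at runtime (a hypothetical sketch, not from the original guide): the Proportional Set Size (PSS) figure discussed later in this article can be read with ActivityManager.getProcessMemoryInfo(). The helper class below is illustrative and assumes a Context is available.

    import android.app.ActivityManager;
    import android.content.Context;
    import android.os.Debug;
    import android.os.Process;

    // Reads the current process's PSS, the shared-memory-aware figure the
    // system treats as the app's physical footprint.
    public final class MemoryFootprint {

        private MemoryFootprint() {}

        public static int currentPssKb(Context context) {
            ActivityManager am =
                    (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
            Debug.MemoryInfo[] info =
                    am.getProcessMemoryInfo(new int[] { Process.myPid() });
            // getTotalPss() reports the proportional set size in kilobytes.
            return info[0].getTotalPss();
        }
    }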

Allocating and Reclaiming App Memory

Here are some facts about how Android allocates then reclaims memory from your app:

  • The Dalvik heap for each process is constrained to a single virtual memory range. This defines the logical heap size, which can grow as it needs to (but only up to a limit that the system defines for each app).
  • The logical size of the heap is not the same as the amount of physical memory used by the heap. When inspecting your app's heap, Android computes a value called the Proportional Set Size (PSS), which accounts for both dirty and clean pages that are shared with other processes—but only in an amount that's proportional to how many apps share that RAM. This (PSS) total is what the system considers to be your physical memory footprint. For more information about PSS, see the Investigating Your RAM Usage guide.
  • The Dalvik heap does not compact the logical size of the heap, meaning that Android does not defragment the heap to close up space. Android can only shrink the logical heap size when there is unused space at the end of the heap. But this doesn't mean the physical memory used by the heap can't shrink. After garbage collection, Dalvik walks the heap and finds unused pages, then returns those pages to the kernel using madvise (a call that advises the kernel on how a mapped memory region will be used, so the pages can be reclaimed). So, paired allocations and deallocations of large chunks should result in reclaiming all (or nearly all) the physical memory used. However, reclaiming memory from small allocations can be much less efficient because the page used for a small allocation may still be shared with something else that has not yet been freed.
  • Restricting App Memory
     

    To maintain a functional multi-tasking environment, Android sets a hard limit on the heap size for each app. The exact heap size limit varies between devices based on how much RAM the device has available overall. If your app has reached the heap capacity and tries to allocate more memory, it will receive an OutOfMemoryError.

    In some cases, you might want to query the system to determine exactly how much heap space you have available on the current device—for example, to determine how much data is safe to keep in a cache. You can query the system for this figure by calling getMemoryClass(). This returns an integer indicating the number of megabytes available for your app's heap. This is discussed further below, under Check how much memory you should use.

    (Translator's note: knowing this limit lets you make sensible decisions, such as how much data to cache or whether to load high-resolution or standard-resolution images.)
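    For instance, a memory cache could be sized from this value. The sketch below is illustrative only; the one-eighth fraction and the commented-out cache are assumptions, not advice from the original text.

        import android.app.Activity;
        import android.app.ActivityManager;
        import android.content.Context;
        import android.os.Bundle;

        // Queries the per-app heap limit (in megabytes) and derives a cache budget from it.
        public class CacheSizingActivity extends Activity {

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);

                ActivityManager am =
                        (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
                int heapLimitMb = am.getMemoryClass(); // e.g. 48, 128, ... depending on the device

                // Hypothetical sizing: use roughly one eighth of the limit for an in-memory cache.
                int cacheSizeBytes = (heapLimitMb * 1024 * 1024) / 8;
                // bitmapCache = new LruCache<String, Bitmap>(cacheSizeBytes);
            }
        }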

  • Switching Apps

    Instead of using swap space when the user switches between apps, Android keeps processes that are not hosting a foreground ("user visible") app component in a least-recently used (LRU) cache. For example, when the user first launches an app, a process is created for it, but when the user leaves the app, that process does not quit. The system keeps the process cached, so if the user later returns to the app, the process is reused for faster app switching.

    If your app has a cached process and it retains memory that it currently does not need, then your app—even while the user is not using it—is constraining the system's overall performance. So, as the system runs low on memory, it may kill processes in the LRU cache beginning with the process least recently used, but also giving some consideration toward which processes are most memory intensive. To keep your process cached as long as possible, follow the advice in the following sections about when to release your references.

    More information about how processes are cached while not running in the foreground and how Android decides which ones can be killed is available in the Processes and Threads guide.

  • How Your App Should Manage Memory


    You should consider RAM constraints throughout all phases of development, including during app design (before you begin development). There are many ways you can design and write code that lead to more efficient results, through aggregation of the same techniques applied over and over.

    You should apply the following techniques while designing and implementing your app to make it more memory efficient.

    Use services sparingly

    If your app needs a service to perform work in the background, do not keep it running unless it's actively performing a job. Also be careful to never leak your service by failing to stop it when its work is done.

    When you start a service, the system prefers to always keep the process for that service running. This makes the process very expensive because the RAM used by the service can’t be used by anything else or paged out. This reduces the number of cached processes that the system can keep in the LRU cache, making app switching less efficient. It can even lead to thrashing (a condition in which memory or other resources are so exhausted or constrained that work cannot get done, which severely degrades system performance) in the system when memory is tight and the system can’t maintain enough processes to host all the services currently running.

    The best way to limit the lifespan of your service is to use an IntentService, which finishes itself as soon as it's done handling the intent that started it. For more information, read Running in a Background Service.
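    A minimal sketch of this pattern follows; the class name and the work done in onHandleIntent() are hypothetical, not taken from the original guide.

        import android.app.IntentService;
        import android.content.Intent;

        // Handles each incoming Intent on a worker thread and stops itself
        // automatically once the work queue is empty, so its process does not
        // stay expensive any longer than necessary.
        public class UploadService extends IntentService {

            public UploadService() {
                super("UploadService"); // name used for the worker thread
            }

            @Override
            protected void onHandleIntent(Intent intent) {
                // Hypothetical unit of background work; when this method returns
                // and no more intents are queued, the service stops itself.
                // uploadPendingData(intent);
            }
        }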

    Leaving a service running when it’s not needed is one of the worst memory-management mistakes an Android app can make. So don’t be greedy by keeping a service for your app running. Not only will it increase the risk of your app performing poorly due to RAM constraints, but users will discover such misbehaving apps and uninstall them.


  • Release memory when your user interface becomes hidden

    When the user navigates to a different app and your UI is no longer visible, you should release any resources that are used by only your UI. Releasing UI resources at this time can significantly increase the system's capacity for cached processes, which has a direct impact on the quality of the user experience.

    To be notified when the user exits your UI, implement the onTrimMemory() callback in your Activity classes (the callback is defined by the ComponentCallbacks2 interface, added in API level 14). You should use this method to listen for the TRIM_MEMORY_UI_HIDDEN level, which indicates your UI is now hidden from view and you should free resources that only your UI uses. A minimal sketch of this pattern appears at the end of this section.

    Notice that your app receives the onTrimMemory() callback with TRIM_MEMORY_UI_HIDDEN only when all the UI components of your app process become hidden from the user. This is distinct from the onStop() callback, which is called when an Activity instance becomes hidden, which occurs even when the user moves to another activity in your app. So although you should implement onStop() to release activity resources such as a network connection or to unregister broadcast receivers, you usually should not release your UI resources until you receive onTrimMemory(TRIM_MEMORY_UI_HIDDEN). This ensures that if the user navigates back from another activity in your app, your UI resources are still available to resume the activity quickly.


  • Translator's note: onStop() is called whenever an individual Activity becomes hidden, even when the user simply moves to another activity in your app, while onTrimMemory(TRIM_MEMORY_UI_HIDDEN) fires only once all of your app's UI is hidden. So do not release your UI resources in onStop(); wait for TRIM_MEMORY_UI_HIDDEN, so the UI can be redisplayed quickly when the user returns.
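    A minimal sketch of this callback (assuming API level 14 or higher; the Activity name and the releaseUiCaches() helper are hypothetical):

        import android.app.Activity;
        import android.content.ComponentCallbacks2;

        // Frees UI-only resources once the whole app's UI is hidden from the user.
        public class MainActivity extends Activity {

            @Override
            public void onTrimMemory(int level) {
                super.onTrimMemory(level);
                if (level == ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN) {
                    // All of this app's UI components are now hidden:
                    // release resources that only the UI needs.
                    releaseUiCaches();
                }
            }

            // Hypothetical helper: drop bitmap caches, view buffers, and so on.
            private void releaseUiCaches() {
                // e.g. imageCache.evictAll();
            }
        }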

  • Release memory as memory becomes tight

    During any stage of your app's lifecycle, the onTrimMemory() callback also tells you when the overall device memory is getting low. You should respond by further releasing resources based on the following memory levels delivered by onTrimMemory():

    • TRIM_MEMORY_RUNNING_MODERATE

      Your app is running and not considered killable, but the device is running low on memory and the system is actively killing processes in the LRU cache.

    • TRIM_MEMORY_RUNNING_LOW

      Your app is running and not considered killable, but the device is running much lower on memory so you should release unused resources to improve system performance (which directly impacts your app's performance).

    • TRIM_MEMORY_RUNNING_CRITICAL 

      Your app is still running, but the system has already killed most of the processes in the LRU cache, so you should release all non-critical resources now. If the system cannot reclaim sufficient amounts of RAM, it will clear all of the LRU cache and begin killing processes that the system prefers to keep alive, such as those hosting a running service.

    Also, when your app process is currently cached, you may receive one of the following levels from onTrimMemory():

    • TRIM_MEMORY_BACKGROUND

      The system is running low on memory and your process is near the beginning of the LRU list. Although your app process is not at a high risk of being killed, the system may already be killing processes in the LRU cache. You should release resources that are easy to recover so your process will remain in the list and resume quickly when the user returns to your app.

    • TRIM_MEMORY_MODERATE

      The system is running low on memory and your process is near the middle of the LRU list. If the system becomes further constrained for memory, there's a chance your process will be killed.

    • TRIM_MEMORY_COMPLETE

      The system is running low on memory and your process is one of the first to be killed if the system does not recover memory now. You should release everything that's not critical to resuming your app state. (Translator's note: for example, keep the data the user has entered but release the images the input UI was using.)

    Because the onTrimMemory() callback was added in API level 14, you can use the onLowMemory() callback as a fallback for older versions, which is roughly equivalent to the TRIM_MEMORY_COMPLETE event.

    Note: When the system begins killing processes in the LRU cache, although it primarily works bottom-up, it does give some consideration to which processes are consuming more memory and will thus provide the system more memory gain if killed. So the less memory you consume while in the LRU list overall, the better your chances are to remain in the list and be able to quickly resume.
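    As a rough illustration of handling these levels together (an assumption-laden sketch, not code from the original guide; the helper methods are hypothetical placeholders for whatever caches your app actually maintains):

        import android.app.Application;
        import android.content.ComponentCallbacks2;

        // Application subclass (registered via android:name in the manifest) that
        // releases progressively more as the reported trim level gets more severe.
        public class MyApplication extends Application {

            @Override
            public void onTrimMemory(int level) {
                super.onTrimMemory(level);
                switch (level) {
                    case ComponentCallbacks2.TRIM_MEMORY_RUNNING_MODERATE:
                    case ComponentCallbacks2.TRIM_MEMORY_RUNNING_LOW:
                        // Still running; trim expendable caches to help the system.
                        trimExpendableCaches();
                        break;
                    case ComponentCallbacks2.TRIM_MEMORY_RUNNING_CRITICAL:
                        // Still running, but the system is under severe pressure.
                        releaseNonCriticalResources();
                        break;
                    case ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN:
                        // All UI hidden; UI-only resources are handled in the Activity sketch above.
                        break;
                    case ComponentCallbacks2.TRIM_MEMORY_BACKGROUND:
                        // Just entered the LRU list: drop resources that are cheap to rebuild.
                        dropEasilyRecoveredResources();
                        break;
                    case ComponentCallbacks2.TRIM_MEMORY_MODERATE:
                        // Around the middle of the LRU list.
                        shrinkCachesAggressively();
                        break;
                    case ComponentCallbacks2.TRIM_MEMORY_COMPLETE:
                        // First in line to be killed: keep only what is needed to restore state.
                        releaseEverythingNonCritical();
                        break;
                    default:
                        break;
                }
            }

            @Override
            public void onLowMemory() {
                super.onLowMemory();
                // Pre-API 14 fallback, roughly equivalent to TRIM_MEMORY_COMPLETE.
                releaseEverythingNonCritical();
            }

            private void trimExpendableCaches() { /* ... */ }
            private void releaseNonCriticalResources() { /* ... */ }
            private void dropEasilyRecoveredResources() { /* ... */ }
            private void shrinkCachesAggressively() { /* ... */ }
            private void releaseEverythingNonCritical() { /* ... */ }
        }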
To be continued.






