My Firefox suddenly became sluggish and then froze. I opened Process Explorer to see what was going on and noticed the main thread of firefox.exe was stuck in the kernel function NtAllocateVirtualMemory. At that time, the process was only using 1.5GB of virtual memory space, and I had more than 1GB of commit limit free and at least 1GB of RAM free. I thought the memory space of Firefox might have become too fragmented, so I killed it.
Then I got a surprising graph like the one below.
As you can see, there was free RAM during the entire period, but I seem to have hit the commit limit anyhow. The page files are set to system managed and the system drive has more than 17GB free, so I have no idea how I could hit the limit. Any thoughts on this?
System is Windows 10 build 10586. I have 8GB of RAM.
(It seems Firefox or something related to it has claimed a hidden 3-4 GB of virtual memory space. I think it could be the display driver, but why did the system not expand the page file?)
-
Are you running a 32-bit version of Firefox? – spherical_dog, Jan 16, 2016 at 0:06
-
Yes, it is 32-bit, but I can normally go up to 1.9GB of virtual memory use with no problem whatsoever. – billc.cn, Jan 16, 2016 at 0:09
-
Well, the virtual memory limit for a single 32-bit process is 2GB if it is not large-address-aware. 32-bit Firefox is supposed to be large-address-aware, which doesn't explain the crash, but you might want to test the 64-bit version and see if it crashes. – spherical_dog, Jan 16, 2016 at 0:13
1 Answer
Physical memory and commit limit are distinct resources. You can run out of one even though you have plenty of the other left. You most likely need a larger page file to raise the commit limit.
Physical memory is very much like cash in the bank. Commit limit is very much like checks that you've already written. Even if you have lots of cash in the bank, if you've written a lot of checks, you may be unable to write more checks.
Say you have a system with 3GB of free RAM and no page file. And say an application asks for 2GB of memory. The system will say "yes" and raise the commit limit by 2GB. The system still has 3GB of free RAM, because the application hasn't used any yet. But if another application requests 2GB of memory, the OS will have to refuse. It has 3GB in the bank but has written a check for 2GB, so it can't write another check for 2GB.
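The accounting in this example can be sketched as a toy model. This is illustrative only; the class and its names are hypothetical, not a real Windows API — the point is that a commit request checks only the commit charge against the limit, never the amount of free RAM.

```python
# Toy model of Windows commit-charge accounting (hypothetical names,
# not a real API): commit limit = RAM + page file, and a request is
# granted or refused based on the running commit charge alone.

class MemoryManager:
    def __init__(self, ram_gb, pagefile_gb):
        self.commit_limit = ram_gb + pagefile_gb  # total backing store
        self.committed = 0                        # "checks written" so far

    def commit(self, gb):
        """Grant the request only if the charge stays under the limit.
        Free RAM is irrelevant: a commit is a promise, not a use."""
        if self.committed + gb > self.commit_limit:
            return False  # the real VirtualAlloc(MEM_COMMIT) would fail here
        self.committed += gb
        return True

mm = MemoryManager(ram_gb=3, pagefile_gb=0)
print(mm.commit(2))  # True: 2GB committed, 0GB of RAM actually touched
print(mm.commit(2))  # False: 2+2 > 3GB limit, despite 3GB of free RAM
```

With no page file, the second 2GB request is refused even though no physical memory has been consumed at all.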
-
Okay, this I don't get (even though all the evidence points to it). In my naive computer-science-y understanding, the commit limit should be pageable RAM + page file. How could it not be able to use all the RAM? Any pointers to reading I can do? Also, this does not explain why the OS failed to expand the page file, as it is set to system managed. – billc.cn, Jan 16, 2016 at 0:20
-
I get the argument in your example, but at the point when I killed Firefox, it could at most have requested 500MB more memory before running out of process address space. However, the system was 1GB below the commit limit. (I've realised the commit graph has a scaling issue, because the limit nearly halved after FF was killed.) – billc.cn, Jan 16, 2016 at 0:33
-
@billc.cn Taking your questions in order: it can use all the RAM (for example, to fulfill existing obligations); it just can't reserve any more memory, so requests to reserve memory fail. The OS didn't expand the page file because the memory was reserved but not used; the page file won't be expanded until the memory is actually used. Likely, at least part of your issue was also that the 32-bit process's virtual address space was fragmented. – David Schwartz, Jan 17, 2016 at 0:22
-
@billc.cn The point that I think is not clear is that even though total RAM is part of the commit limit, RAM is not marked as "used" just because virtual memory has been committed. If I commit 1 GB, that uses 1 GB of the system commit limit, but it doesn't actually use any RAM at all until I store something in that region. And then it only uses as much as is needed to store what I've written. After a while it may use even less than that, if I don't reference it much and the OS decides to move some of it to the pagefile to free RAM for some other, higher-activity process. – Jamie Hanrahan, Apr 7, 2018 at 18:22
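The distinction in this last comment — commit charge rises at commit time, while RAM is consumed only when pages are actually touched (demand-zero paging) — can be sketched the same way. Again, a toy model with hypothetical names, not a real API.

```python
# Sketch: committing charges the system commit limit immediately,
# but physical pages are consumed only when the region is written to.
# Hypothetical names for illustration, not a real Windows API.

class Process:
    def __init__(self):
        self.committed_gb = 0  # counts against the system commit limit
        self.resident_gb = 0   # working set: RAM actually in use

    def commit(self, gb):
        self.committed_gb += gb  # raises commit charge, touches no RAM

    def touch(self, gb):
        # Writing into a committed region faults pages into RAM,
        # but never more than what was committed.
        self.resident_gb = min(self.committed_gb, self.resident_gb + gb)

p = Process()
p.commit(1)
print(p.committed_gb, p.resident_gb)  # 1 0 — 1GB charged, no RAM used yet
p.touch(0.25)
print(p.resident_gb)                  # 0.25 — RAM used only for pages written
```

This is why the page file on the asker's machine was never expanded: the committed-but-untouched memory raised the commit charge without ever generating the page-file usage that triggers expansion.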