The Out-of-Memory Syndrome, or: Why Do I Still Need a Pagefile?

Windows’ memory management—specifically its use of RAM and the pagefile—has been a subject of concern and confusion since NT 3.1 first shipped. To be sure, there is some reason for concern. We worry about RAM because we know if there isn’t enough, the system will slow down, and will page more to disk. And that’s why we worry about the page file.

(There is also reason for confusion. Memory management in any modern operating system is a complex subject. It has not been helped by Microsoft’s ever-changing choice of nomenclature in displays like Task Manager.)

Today, RAM is just unbelievably cheap by previous standards. And Task Manager’s displays have gotten a lot better. That “memory” graph really does show RAM usage now (in Vista and 7 they made it even more clear: “Physical Memory Usage”), and people are commonly seeing their systems with apparently plenty of what Windows calls “available” RAM. (More on that in a later article.) So users and admins, always in pursuit of the next performance boost, are wondering (not for the first time) if they can delete that pesky old page file. After all, keeping everything in RAM just has to be faster than paging to disk, right? So getting rid of the page file should speed things up! Right?

You don’t get any points for guessing that I’m going to say “No, that’s not right.”

You see, eliminating the page file won’t eliminate paging to disk. It likely won’t even reduce the amount of paging to disk. That is because the page file is not the only file involved in virtual memory! Far from it.

Types of virtual memory

There are three categories of “things” (code and data) in virtual memory. Windows tries to keep as much of all of them in RAM as it can.

Nonpageable virtual memory

The operating system defines a number of uses of virtual memory that are nonpageable. As noted above, this is not stuff that Windows “tries to keep in RAM”—Windows has no choice; all of it must be in RAM at all times. These have names like “nonpaged pool,” “PFN database,” “OS and driver code that runs at IRQL 2 or above,” and other kernel mode data and code that has to be accessed without incurring page faults. It is also possible for suitably privileged applications to create some nonpageable memory, in the form of AWE allocations. (We’ll have another blog post explaining AWE.) On most systems, there is not much nonpageable memory.

(“Not much” is relative. The nonpageable memory alone on most Windows systems today is larger than the total RAM size in the Windows 2000 era!)

You may be wondering why it’s called “virtual memory” if it can’t ever be paged out. The answer is that virtual memory isn’t solely about paging between disk and RAM. “Virtual memory” includes a number of other mechanisms, all of which do apply here. The most important of these is probably address translation: The physical—RAM—addresses of things in nonpageable virtual memory are not the same as their virtual addresses. Other aspects of “virtual memory” like page-level access protection, per-process address spaces vs. the system-wide kernel mode space, etc., all do apply here. So this stuff is still part of “virtual memory,” and it lives in “virtual address space,” even though it’s always kept in RAM.

Pageable virtual memory

The other two categories are pageable, meaning that if there isn’t enough RAM for everything to stay in RAM all at once, parts of the memory in these categories (generally, the parts that were referenced longest ago) can be kept or left out on disk. When it’s accessed, the OS will automatically bring it into RAM, possibly pushing something else out to disk to make room. That’s the essence of paging. It’s called “paging,” by the way, because it’s done in terms of memory “pages,” which are normally just 4K bytes… although most paging I/O operations move many pages at once.
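If you want to confirm the page size on your own machine, here is a minimal sketch using GetSystemInfo, which reports it directly:

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    // dwPageSize is the hardware page size: 4096 bytes on x86/x64.
    printf("Page size: %lu bytes\n", si.dwPageSize);
    return 0;
}
```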

Collectively, the places where virtual memory contents are kept when they’re not in RAM are called “backing store.” The second and third categories of virtual memory are distinguished from each other by two things: how the virtual address space is requested by the program, and where the backing store is.

Committed memory

One of these categories is called “committed” memory in Windows. Or “private bytes,” or “committed bytes,” or “private commit,” depending on where you look. (On the Windows XP Task Manager’s Performance tab it was called “PF usage,” short for “page file usage,” possibly the most misleading nomenclature in any Windows display of all time.) In Windows 8 and Windows 10’s Task Manager “Details” tab it’s called “Commit size.”

Whatever it’s called, this is virtual memory that a) is private to each process, and b) for which the pagefile is the backing store. This is the pagefile’s function: it’s where the system keeps the part of committed memory that can’t all be kept in RAM.

Applications can create this sort of memory by calling VirtualAlloc, or malloc(), or new(), or HeapAlloc, or any of a number of similar APIs. It’s also the sort of virtual memory that’s used for each thread’s user mode stack.
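As a minimal sketch of what such an allocation looks like (the sizes here are arbitrary), each of these calls creates private committed memory:

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Reserve and commit 1 MB of private memory in one call; this adds 1 MB
    // to the process's commit size and to the system-wide commit charge.
    void* p = VirtualAlloc(nullptr, 1 << 20, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!p) {
        printf("VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }

    // Heap allocations (and malloc/new, which sit on top of the process heap)
    // draw on committed memory as well.
    void* q = HeapAlloc(GetProcessHeap(), 0, 4096);

    HeapFree(GetProcessHeap(), 0, q);
    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```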

By the way, the sum of all committed memory in all processes, together with operating-system defined data that is also backed by the pagefile (the largest such allocation is the paged pool), is called the “commit charge.” (Except in PerfMon, where it’s called “Committed Bytes” under the “Memory” object.) On the Windows XP Task Manager display, that “PF usage” graph was showing the commit charge, not the pagefile usage.

A good way to think of the commit charge is that if everything that was in RAM that’s backed by the pagefile had to be written to the pagefile, that’s how much pagefile space it would need.

So you could think of it as the worst case pagefile usage. But that almost never happens; large portions of the committed memory are usually in RAM, so commit charge is almost never the actual amount of pagefile usage at any given moment.

Mapped memory

The other category of pageable virtual memory is called “mapped” memory. When a process (an application, or anything else that runs as a process) creates a region of this type, it specifies to the OS a file that becomes the region’s backing store. In fact, one of the ways a program creates this stuff is an API called MapViewOfFile. The name is apt: the file contents (or a subset) are mapped, byte for byte, into a range of the process’s virtual address space.
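Here is a minimal sketch of that API in use; the file name “example.dat” is just a placeholder:

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Open a file and map it read-only; the file itself is the backing store.
    HANDLE file = CreateFileA("example.dat", GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
    if (!mapping) { CloseHandle(file); return 1; }

    // The view is paged in on demand: touching a byte may cause a page fault
    // that reads from example.dat, not from the pagefile.
    const char* view = (const char*)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
    if (view) {
        printf("First byte: %c\n", view[0]);
        UnmapViewOfFile(view);
    }
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```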

Another way to create mapped memory is to simply run a program. When you run an executable file the file is not “read,” beginning to end, into RAM. Rather it is simply mapped into the process’s virtual address space. The same is done for DLLs. (If you’re a programmer and have ever called LoadLibrary, this does not “load” the DLL in the usual sense of that word; again, the DLL is simply mapped.) The file then becomes the backing store—in effect, the page file—for the area of address space to which it is mapped. If all of the contents of all of the mapped files on the system can’t be kept in RAM at the same time, the remainder will be in the respective mapped files.

This “memory mapping” of files is done for data file access too, typically for larger files. And it’s done automatically by the Windows file cache, which is typically used for smaller files. Suffice it to say that there’s a lot of file mapping going on.

With a few exceptions (like modified pages of copy-on-write memory sections) the page file is not involved with mapped files; it is used only for private committed virtual memory. When executing code tries to access part of a mapped file that’s currently paged out, the memory manager simply pages in the code or data from the mapped file. If it is ever pushed out of memory, it can be written back to the mapped file it came from. If it hasn’t been written to, which is usually the case for code, it isn’t written back to the file. Either way, if it’s ever needed again it can be read back in from the same file.

A typical Windows system might have hundreds of such mapped files active at any given time, all of them being the backing stores for the areas of virtual address space they’re mapped to. You can get a look at them with the SysInternals Process Explorer tool by selecting a process in the upper pane, then switching the lower pane view to show DLLs.

So…

Now we can see why eliminating the page file does not eliminate paging to and from disk. It only eliminates paging to and from the pagefile. In other words, it only eliminates paging to and from disk for private committed memory. All those mapped files? All the virtual memory they’re mapped into? The system is still paging from and to them…if it needs to. (If you have plenty of RAM, it won’t need to.)

The following diagram shows, in greatly oversimplified and not-necessarily-to-scale fashion, the relationship between virtual address space, RAM, and the various backing stores. All of nonpageable virtual space is, of course, in RAM. Some portion of the private committed address space is in RAM (“resident”); the remainder is in the pagefile. Some portion of the mapped address space is also in RAM; the remainder being in all the files to which that address space is mapped. The three mapped files—one .dat, one .dll, one .exe—are, of course, representative of the hundreds of mapped files in a typical Windows system.

A matter of balance

So that’s why removing the pagefile doesn’t eliminate paging. (Nor does it turn off or otherwise get rid of virtual memory.) But removing the pagefile can actually make things worse. Reason: you are forcing the system to keep all private committed address space in RAM. And, sorry, but that’s a stupid way to use RAM.

One of the justifications, the reason for existence, of virtual memory is the “90-10” rule (or the 80-20 rule, or whatever): programs (and your system as a whole) spend most of their time accessing only a small part of the code and data they define. A lot of processes start up, initialize themselves, and then basically sit idle for quite a while until something interesting happens. Virtual memory allows the RAM they’re sitting on to be reclaimed for other purposes until they wake up and need it back (provided the system is short on RAM; if not, there’s no point).

But running without a pagefile means the system can’t do this for committed memory. If you don’t have a page file, then all private committed memory in every process, no matter how long ago accessed, no matter how long the process has been idle, has to stay in RAM—because there is no other place to keep the contents.

That leaves less room for code and data from mapped files. And that means that the mapped memory will be paged more than it would otherwise be. More-recently-accessed contents from mapped files may have to be paged out of RAM, in order to have enough room to keep all of the private committed stuff in. Compare this diagram with the one previous:

Now that all of the private committed v.a.s. has to stay resident, no matter how long ago it was accessed, there’s less room in RAM for mapped file contents. Granted, there’s no pagefile I/O, but there’s correspondingly more I/O to the mapped files. Since the old stale part of committed memory is not being accessed, keeping it in RAM doesn’t help anything. There’s also less room for a “cushion” of available RAM. This is a net loss.

You might say “But I have plenty of RAM now. I even have a lot of free RAM. However much of that long-ago-referenced private virtual memory there is, it must not be hurting me. So why can’t I run without a page file?”

“Low on virtual memory”; “Out of virtual memory”

Well, maybe you can. But there’s a second reason to have a pagefile:

Not having a pagefile can cause the “Windows is out of virtual memory” error, even if your system seems to have plenty of free RAM.

That error pop-up happens when a process tries to allocate more committed memory than the system can support. The amount the system can support is called the “commit limit.” It’s the sum of the size of your RAM (minus a bit to allow for the nonpageable stuff) plus the current size of your page file.

All processes’ private commit allocations together, plus some of the same stuff from the operating system (things like the paged pool), are called the “commit charge.” Here’s where you can quickly see the commit charge and commit limit on Windows 8 and 10:

 

Note: In Performance Monitor, these counters are called Memory\Committed bytes and Memory\Commit Limit. Each process’s contribution to the commit charge is in Process\(process)\Private Bytes. The latter is the same counter that Task Manager’s Processes tab (Windows 7) or Details tab (Windows 8 through 10) calls Commit Size.
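If you’d rather read these numbers programmatically, here is a minimal sketch using GlobalMemoryStatusEx; its ullTotalPageFile and ullAvailPageFile fields report (approximately) the commit limit and the remaining commit headroom:

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    MEMORYSTATUSEX ms = { sizeof(ms) };  // dwLength must be set before the call
    if (GlobalMemoryStatusEx(&ms)) {
        // ullTotalPageFile is (roughly) the commit limit;
        // the commit charge is the limit minus what's still available.
        ULONGLONG limitMB = ms.ullTotalPageFile >> 20;
        ULONGLONG availMB = ms.ullAvailPageFile >> 20;
        printf("Commit limit:  %llu MB\n", limitMB);
        printf("Commit charge: %llu MB (approx.)\n", limitMB - availMB);
    }
    return 0;
}
```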

When any process tries to allocate private virtual address space, Windows checks the size of the requested allocation plus the current commit charge against the commit limit. If the commit limit is larger than that sum, the allocation succeeds; if the commit limit is smaller than that sum, then the allocation cannot be immediately granted. But if the pagefile can be expanded (in other words, if you have not set its initial and maximum sizes to the same value), and the allocation request can be accommodated by expanding the pagefile, the pagefile is expanded and the allocation succeeds. (This is where you would see the “system is running low on virtual memory” pop-up. And if you checked it before and after, you’d see that the commit limit is increased.)

If the pagefile cannot be expanded enough to satisfy the request (either because it’s already at its upper size limit, or there is not enough free space on the disk), or if you have no pagefile at all, then the allocation attempt fails. And that’s when you see the “system is out of virtual memory” error. (Changed to simply “out of memory” in Windows 10. Not an improvement, Microsoft!)

The reason for this has to do with the term “commit.” The OS will not allow a process to allocate virtual address space, even though that address space may not all be used for a while (or ever), unless it has a place to keep the contents. Once the allocation has been granted, the OS has committed to make that much storage available.

For private committed address space, if it can’t be in RAM, then it has to be in the pagefile. So the “commit limit” is the size of RAM (minus the bit of RAM that’s occupied by nonpageable code and data) plus the current size of the pagefile. Whereas virtual address space that’s mapped to files automatically comes with a place to be stored, and so is not part of “commit charge” and does not have to be checked against the “commit limit.”

Remember, these “out of memory” errors have nothing to do with how much free RAM you have. Let’s say you have 8 GB RAM and no pagefile, so your commit limit is 8 GB. And suppose your current commit charge is 3 GB. Now a process requests 6 GB of virtual address space. (A lot, but not impossible on a 64-bit system.) 3 GB + 6 GB = 9 GB, over the commit limit, so the request fails and you see the “out of virtual memory” error.
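A minimal sketch of that failing request (assuming a 64-bit build; the 6 GB figure is taken from the example above):

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Try to commit 6 GB of private memory in one request.
    SIZE_T sixGB = 6ull << 30;
    void* p = VirtualAlloc(nullptr, sixGB, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!p) {
        // If the commit limit can't absorb 6 GB more, this typically fails with
        // ERROR_COMMITMENT_LIMIT (1455), "The paging file is too small for this
        // operation to complete" -- regardless of how much RAM is free.
        printf("Commit failed, error %lu\n", GetLastError());
        return 1;
    }
    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```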

But when you look at the system, everything will look ok! Your commit charge (3 GB) will be well under the limit (8 GB)… because the allocation failed, so it didn’t use up anything. And you can’t tell from the error message how big the attempted allocation was.

Note that the amount of free (or “available”) RAM didn’t enter into the calculation at all.

So for the vast majority of Windows systems, the advice is still the same: don’t remove your pagefile.

If you have one and don’t need it, there is no cost. Having a pagefile will not “encourage” more paging than otherwise; paging is purely a result of how much virtual address space is being referenced vs. how much RAM there is.

If you do need one and don’t have it, applications will fail to allocate the virtual memory they need, and the result (depending on how carefully the apps were written) may well be unexpected process failures and consequent data loss.

Your choice.

What about the rest? Those not in the vast majority? This would apply to systems that are always running a known, unchanging workload, with no changes to the application mix and no significant changes to the data being handled. An embedded system would be a good example. On such systems, if you’re running without a pagefile and you haven’t seen “out of virtual memory” for a long time, you’re unlikely to see it tomorrow. But there’s still no benefit to removing the pagefile.

What questions do you have about Windows memory management? Ask us in the comments! We’ll of course be discussing these and many related issues in our public Windows Internals seminars, coming up in May and July. 

 

13 responses on “The Out-of-Memory Syndrome, or: Why Do I Still Need a Pagefile?”

  1. Mike Blaszczak on July 29, 2014 at 7:01 pm

    Stack space is initially reserved then committed as necessary. See http://msdn.microsoft.com/en-us/library/windows/desktop/ms686774%28v=vs.85%29.aspx

    1. Jamie Hanrahan (Post author) on July 29, 2014 at 9:19 pm

      Thank you for the comment! That is absolutely correct, and when we talk about VirtualAlloc and committed vs. reserved v.a.s. in our internals seminars (shameless plug!) we do use the user mode stack as an example.

      But for the purposes of this article I chose not to address that, or several other details for that matter; one is always trying to keep articles as short as possible, and I decided that those details would not have made the argument for the conclusion any stronger.

  2. Mike Blaszczak on July 30, 2014 at 5:44 am

    Thing is, stack space is germane to this discussion. With a page file, stack space can be reserved and not committed. Without a page file, all stack space has to be committed at the start of the thread, whether it is used or not. In that state, creating a thread is a touch more likely to fail; and requires all the stack memory to be committed immediately, whether it is used or not. Lots of threads would mean lots of memory is being committed but never used.

    1. Jamie Hanrahan (Post author) on July 30, 2014 at 8:10 am

      Sorry, but no… reserving v.a.s. (for the stack or otherwise) does not require a pagefile, nor does it affect commit charge. A reserved region simply needs a Virtual Address Descriptor that says “this range of Virtual Page Numbers is reserved.” No pagefile space is needed. This is easily demonstrated with testlimit.
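      A minimal sketch of that demonstration (the 32 GB figure is arbitrary; assumes a 64-bit build):

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Reserve (but do not commit) 32 GB of address space. This consumes no
    // pagefile space and adds nothing to the commit charge; it only creates
    // a Virtual Address Descriptor covering the range.
    void* p = VirtualAlloc(nullptr, 32ull << 30, MEM_RESERVE, PAGE_NOACCESS);
    printf("Reserve %s\n", p ? "succeeded" : "failed");

    // Touching the reserved-but-uncommitted pages would raise an access
    // violation; they must be committed (MEM_COMMIT) before use.
    if (p) VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```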

  3. Bryan on December 18, 2014 at 1:27 pm

    OK, so in this age of SSD (which is costly, so people size as low as they feel they can get by with), how much freespace, relative to installed RAM, would you recommend people leave available for pagefile and hiberfil?

    For context, I’m getting questions like “If I have 16GB of RAM and I relocate my user profile directory and all data storage to a second drive, can I get away with a 32GB SSD for Windows?”

    1. Jamie Hanrahan (Post author) on December 21, 2014 at 2:34 pm

      For the hibernate file, you don’t really have a choice: It needs to be the size of RAM. That’s what the OS will allocate for it if you enable hibernation. If you don’t want that much space taken up by the hibernate file, your only option is to not enable hibernation.

      For the pagefile, my recommendation has long been that your pagefile’s default or initial size should be large enough that the performance counter Paging File \ % Usage Peak is kept below 25%. My rationale for this is that the memory manager tries to aggregate pagefile writes into large clusters, the clusters have to be virtually contiguous within the pagefile, and internal space in the pagefile is managed like a heap; having plenty of free space in the page file is the only thing we can do to increase the likelihood of large contiguous runs of blocks being available within the pagefile.

      The above is not a frequently expressed opinion; I should probably expand it to a blog post.

      Re “relative to installed RAM”: sizing the pagefile to 1.5x or 1x the size of RAM is simply what Windows does at installation time. It was never intended to be more than an estimate that would almost always result in a pagefile that’s large enough, with no concern that it might be much larger than it needed to be. Note that the only cost of a pagefile of initial size “much larger than it needs to be” is in the disk (or SSD) space occupied. It was not that long ago that hard drives cost (in $ per GB) about what SSDs do now, so I don’t see that the cost of SSD is a factor.

      I’m not sure how free space on the disk enters into it, except where allowing pagefile expansion is concerned. The above suggestion is for the default or initial size. I see no reason to limit the maximum size at all.

      1. Bryan on December 21, 2014 at 3:01 pm

        I do agree that SSD becomes more affordable every day. Still, I often see people trying to use the least amount of SSD possible. (For context, I help a lot of people in an IRC channel about Windows.) So I’m trying to develop a rule of thumb for them.

        Given what you said, it seems like the answer would be something like this: 1) A default installation of Windows 8.1 will typically use around 14GB of space, but with updates and so on could reasonably grow to 25GB. 2) the hiberfil will be the size of RAM and 3) you should leave at least 1.5x RAM disk space available for pagefile.

        So. If we have 16GB RAM, then allow 1) 25GB for Windows 2) 16GB for hiberfil and 3) 24GB for pagefile. Which means one should set aside at least a 65GB partition for Windows’ C: drive – and this is before thinking about how much space will be needed for applications and data.

        Or to put it another way. If (at default pagefile settings) freespace + hiberfil + pagefile is less than 2.5x amount of RAM in the system, “out of virtual memory” errors are just one memory-hungry application away. The likelihood of this error goes down, the more freespace one leaves on the disk.

        1. Jamie Hanrahan (Post author) on December 21, 2014 at 6:15 pm

          To clarify, I was not defending or promoting the “1.5x RAM” idea for pagefile initial size, just explaining it. Windows’ use of it at installation time (it’s actually 1x in later versions) is based on the notion that installed RAM will be approximately scaled to workload: Few people will buy 16 GB RAM for a machine to be used for light Office and web browsing use, and few will install just 2 GB RAM where the workload will include 3d modeling or video editing.

          But my experience is that if you suggest “some factor times size of RAM” as a rule to be followed, you will get pushback: “But with more RAM you should need less pagefile space, not more!” And if the workload is the same, that’s completely true.

          I would also phrase things differently re. leaving disk space “available” for the pagefile. One should set the initial pagefile size to a value that will be large enough. This allocates disk space to the pagefile, it does not leave it “available.” As stated before, my metric for “large enough” is “large enough that no more than 25% pagefile space is used under maximum actual workload”.

          The only way free space on the disk should be involved or considered w.r.t. the pagefile size is in enabling pagefile expansion, i.e. setting the maximum size larger than the initial. Now, if the initial size is large enough, the pagefile will never have to be expanded, so enabling expansion would seem to do nothing. But it provides a zero-cost safety net, which will save you in case your initial size turns out to be not large enough. And of course pagefile expansion is ultimately limited by the free space on the disk.

          1. Bryan on December 22, 2014 at 9:48 am

            Thanks for your thoughts on the matter, Jamie!

            Just to clarify the intent of the question a little, our general advice about pagefile settings is to leave them alone. System-managed all the way. Our hope is that this will remove the urge to limit or remove the pagefile completely. Your idea of setting an initial size but no maximum is interesting; we’ll consider changing our advice! We do heavily stress that aside from (potential) disk space usage, there’s no downside to allowing the pagefile to grow to whatever size it wants. As I’m sure you’re aware, this is somewhat counterintuitive to quite a few people!

            So, given that and the basic question “how much disk space should I allow for the OS?” I wanted to be able to give a relatively safe rule of thumb for sizing the original OS partition. I’ll still say something like “sure, you can probably get away with less, but the smaller you make it, the more likely you’ll later find yourself in a pickle”.

  4. Todd Martin on February 3, 2015 at 1:15 am

    I know more about the craters on the moon than I know about the memory issues on my computer.

    So, hopefully someone out there can help me understand this and maybe suggest a fix.

    I have Windows 7 on my Dell laptop. I have a 750GB hard drive. A month or so ago I checked the used space on my hard drive and I had used just shy of 50% of the space.

    Now, I am down to less than 50MB! I have no idea where all the memory went. Lately, every time I boot the laptop I’m getting the message that the system has created a paging file, and as I’m on the laptop the error message pops up saying low disk space (it actually just popped up).

    I’ve off-loaded maybe 5GB of files, only to have the low disk space message pop up an hour later.

    I have not loaded anything new on the laptop (not that I know of) prior to the memory loss.

    I have run multiple virus scans, but they have come up empty.

    It’s difficult to even be on email at this point.

    I don’t know enough to have changed any settings that could have led to the vanishing memory.

    The only thing that I have done – as suggested on other blog sites – is to delete old restore points. That didn’t do anything.

    What ate over 300GB of memory? How do I stop it and how do I get that memory back?

    Any guidance would be greatly appreciated.

    Thank you.

    1. Jamie Hanrahan (Post author) on February 5, 2015 at 5:10 am

      Hi. First, let us say that we sympathize – this sort of thing can be very frustrating.

      This article doesn’t really address hard drive space, but rather virtual address space and physical memory (i.e. RAM). It sounds as if something in your system is furiously writing to your hard drive – other than the creation of the pagefile. The space on the hard drive is not usually thought of as “memory.”

      To track this sort of thing down, my first stop would be Task Manager. Right-click on an empty part of your taskbar and click “Start Task Manager”. Select the “Processes” tab. Then go to the View menu, and click “Select Columns”. Check the box for “I/O Writes”. OK. Oh, and click the “Show processes from all users” button at the bottom. Finally, click on the “I/O Writes” column head so that this column is sorted with the largest value at the top. Unfortunately this shows the total number of writes, not the rate. But it’s a start. If you see one of these ticking up rapidly, that’s a process to look at.

      A better tool might be the “Resource Monitor”, which you can get to from Task Manager’s “Performance” tab. Click the “Resource Monitor” button near the bottom. In Resource Monitor, select the “Disk” tab. In this display you already have columns for read and write rates, in bytes/sec. Click the “Write (B/sec)” column head so that the largest values in this column are at the top. Now, the process at the top might be “System”; if so, that is due to how the Windows file cache works. But the thing to look for is the non-“System” processes that are doing a lot of writes, even when you think your system should be quiet.

      Still in Resource Monitor: If you expand the “Disk Activity” portion of the display you’ll see the I/O rates broken down by file.

      There are some utilities out there, some free, some not, to help you find where all the space is going. The first one that came up in my Google search for “disk space analyzer” is “TreeSize Free”, which gives an Explorer-like display of the tree of directories, but with each annotated with the total size at and below that point. Another is “WinDirStat”, which gives a much more graphical view. This seems to be something a lot of people want help with; the search results show two articles at LifeHacker in the last few years on such software. Try a few of the free ones and see what they tell you.

      Finally, I would not so much look for malware like viruses (malware these days tries pretty hard to avoid notice, and filling up your disk space is something most people notice), but just buggy software. (Of course, malware can be buggy…) I recently traced a similar problem – not filling up the hard drive, but writing to it incessantly, thereby using up the drive’s I/O bandwidth – to the support software for a fancy mouse. Naturally I pulled the plug on that mouse and uninstalled its software. For your case… if the problem has been going on for a month, what have you added to the system in the last month? From Control Panel, you can go to “Uninstall a program”, and the table you’ll see there has clickable column heads for sorting. Sort by installation date and see what’s new.

      Hope this helps! – Jamie Hanrahan

  5. Bryan on July 7, 2015 at 3:10 pm

    Jamie, today I was watching this Channel 9/MVA video about Windows Performance: https://channel9.msdn.com/Series/Windows-Performance/02

    The section on physical and virtual memory, starting around 17:00, strikes me as something you could improve greatly.

    1. Jamie Hanrahan (Post author) on August 21, 2015 at 9:32 pm

      Indeed, that section blurred a lot of terms. However, I feel it necessary to point out that really explaining “Windows memory management” takes a significant amount of time. There’s no way anyone could do much better in the amount of time that was offered there.




Understanding Virtual Memory

by Perris Calderon

May, 2004

 

First off, let us get a couple of things out of the way:

• XP is a virtual memory operating system

• There is nothing you can do to prevent virtual memory in the NT kernel

 

No matter your configuration, with any given amount of RAM, you cannot reduce the amount of paging by adjusting any user interface in these virtual memory operating systems. You can redirect operating system paging, and you can circumvent virtual memory strategy, but you cannot reduce the amount of paging in the NT family of kernels.

 

To elaborate:

We have to realize that paging is how everything gets brought into memory in the first place! It's quite obvious that anything in memory either came from your disk, or will become part of your disk when your work is done. To quote the Microsoft Knowledge Base:

 

"Windows NT REQUIRES 'backing storage' for EVERYTHING it keeps in RAM. If Windows NT requires more space in RAM, it must be able to swap out code and data to either the paging file or the original executable file."

 

Here's what actually happens:

Once information is brought into memory (it must be paged in), the operating system chooses a memory reclamation strategy for that process. In one form of this memory reclamation (or paging, to be clear), the kernel can mark data to be released or unloaded without a hard write. The OS will retrieve said information directly from the .exe or the .dll that the information came from if it's referenced again. This is accomplished by simply "unloading" portions of the .dll or .exe, and reloading those portions when needed again. Nice!

 

Note: For the most part, this paging does not take place in the page file; this form of paging takes place within the direct location of the .exe or the .dll

 

The "page file" is another form of paging, and this is what most people are talking about when they refer to the system paging. The page file is there to provide space for whatever portion of virtual memory has been modified since it was initially allocated. In a conversation I had with Mark Russinovich this is stated quite eloquently:

 

"When a process allocates a piece of private virtual memory (memory not backed by an image or data file on disk, which is considered sharable memory), the system charges the allocation against the commit limit. The commit limit is the sum of most of physical memory and all paging files. In the background the system will write these pages out to the paging file if a paging file exists and there is space in the paging file. This is an optimization only."

 

See this? Modified information cannot use the original file or .exe as backing store, SINCE it's modified*; this is obvious once told, isn't it.

 

Different types of things are paged to different files. You can't page "private writable committed" memory to .exe or .dll files, and you don't page code to the paging file.*

 

With this understanding we realize:

HAVING A PAGE FILE THAT DOESN'T MATCH THE PHYSICAL MEMORY YOU HAVE IN USE WILL AT TIMES INHIBIT THE PAGING OF PRIVATE WRITABLE VIRTUAL ADDRESS SPACE AND FORCE THE UNNECESSARY UNLOADING OF POSSIBLY RECENTLY ACCESSED .DLLS AND .EXES!

 

You see now, in a situation such as this, when memory needs to be reclaimed, you'll be paging and unloading other things in order to take up the necessary slack you've lost by having a page file smaller than the memory in use (the private writable pages can no longer be backed if you've taken away their page file area).

 

The effect? Stacks, heaps, program global storage, etc. will all have to stay in physical memory, NO MATTER HOW LONG AGO ANY OF IT WAS REFERENCED!!! This is very important for any given workload and ANY amount of RAM, since the OS would like to mark memory available when it hasn't been called for a long time. You have impeded this strategy if you have a page file smaller than the amount of RAM in use.

 

The hits? More paging or backing of executable code, cache data maps and the like, even though they were referenced far more recently than, for argument's sake, the bottom-most pages of a thread's stack. See? Those bottom-most pages are what we want paged, not .exe's or .dlls that were recently referenced.

 

You thwart this good strategy when there is a smaller amount of page file than there is memory in use.

 

**All memory seen under the NT family of OS's is virtual memory (processes access memory through their virtual memory address space); there is no way to address RAM directly!!

 

And so we see, if memory is in use, it has either come from the hard drive or it will go to the hard drive...THERE MUST BE HARD DRIVE AREA FOR EVERYTHING YOU HAVE IN MEMORY...(self evident, isn't it).

 

Now that that's out of the way, let's go further:

When the operating system needs to claim memory (because all memory is currently in use, and you are launching new apps, or loading more info into existing work), the OS obviously has to get the necessary RAM from somewhere. Something in memory will (must) be unloaded to suit your new work. No one knows what will be unloaded until the time comes, as XP will unload whatever is least likely to come into use again.

 

Memory reclamation in XP even goes further than this to make the process as seamless as possible, using more algorithms than most can appreciate. For instance, there is a "first in, first out" (FIFO) policy for page faults, there is a "least recently used" (LRU) policy, and a combination of those with others to determine just what will not be noticed when it's released. Remarkable! There is also a "standby list". When information hasn't been used in a while but nothing is claiming the memory as yet, it becomes available, both written on disk (possibly the page file) and still in memory. Oh, did I forget to say? ALL AT THE SAME TIME ('til the memory is claimed)! Sweet!!! If this information is called for before the memory is claimed by a new process, it will be brought in without needing anything from the hard drive! This is what's known as a "soft fault": memory available and loaded, also at the ready for new use at the same time!

 

Why so much trouble with today's amount of RAM?

You have to realize; most programs are written with the 90/10 rule - they spend 90% of the time bringing 10% of their code or data into use by any given user. The rest of a program can (should) be kept out on disk. This will obviously make available more physical memory to be in use for other more immediate and important needs. You don't keep memory waiting around if it's not likely to be used; you try to have your memory invested in good purpose and function. The unused features of these programs will simply be paged in (usually from the .exe) if they are ever called by the user...HA!!!...no page file used for this paging (unloading and reloading of .exe's and .dlls).

 

To sum everything up:

If you are not short of hard drive space, reducing the size of the page file below the default is counterproductive, and will in fact impede the memory strategies of XP if you ever do increase your workload and put your memory under pressure.

Here's why:

"Mapped" addresses are ranges for which the backing store is an .exe, .dll, or some data file explicitly mapped by the programmer (for instance, the swap file in Photoshop).

"Committed" addresses are backed by the paging file.

None, some, or all of the "mapped" and "committed" virtual space might actually still be resident in the process address space. Simply speaking, this means that it's still in RAM and referenceable without raising a page fault.

The remainder (ignoring the in-memory page caches, or soft page faults) obviously has to be on disk somewhere. If it's "mapped", the place on the disk is the .exe, .dll, or whatever the mapped file is. If it's "committed", the place on the disk is the paging file.

 

Why Does The Page File Need To Be Bigger Than The Information Written To It?

 

**Memory allocation in NT is a two-step process--virtual memory addresses are reserved first, and committed second...The reservation process is simply a way NT tells the Memory Manager to reserve a block of virtual memory pages to satisfy other memory requests by the process...There are many cases in which an application will want to reserve a large block of its address space for a particular purpose (keeping data in a contiguous block makes the data easy to manage) but might not want to use all of the space.
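A minimal sketch of the two-step process (sizes chosen arbitrarily for illustration):

```cpp
#include <windows.h>

int main() {
    // Step 1: reserve a contiguous 1 GB block of address space.
    // No commit charge yet; just the address range is claimed.
    char* base = (char*)VirtualAlloc(nullptr, 1ull << 30, MEM_RESERVE, PAGE_NOACCESS);
    if (!base) return 1;

    // Step 2: commit only the first 64 KB. Only now is backing store charged,
    // and only for the pages actually committed.
    if (!VirtualAlloc(base, 64 * 1024, MEM_COMMIT, PAGE_READWRITE)) return 1;

    base[0] = 42;          // fine: this page is committed
    // base[1 << 20] = 1;  // would crash: reserved but never committed

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}
```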

 

This is simplest to explain using the following analogy:

If you were to look at any 100% populated apartment building in Manhattan, you would see that at any given time throughout the day, fewer than 25% of the residents are in the building at once!

 

Does this mean the apartment building can be 75% smaller?

Of course not, you could do it, but man would that make things tough. For best efficiency, every resident in this building needs their own address. Even those that have never shown up at all need their own address, don't they? We can't assume that they never will show up, and we need to keep space available for everybody.

 

512 residents will need 512 beds...plus they will need room to toss and turn.

For reasons similar to this analogy, you couldn't have various pieces of memory sharing the same virtual address, could you?

 

Now, for users that do not put their memory under pressure: if you are certain you won't be adding additional workload, you will not likely take a hit if you decide to lower the default setting of the page file. For this, if you need the hard drive area, you are welcome to save some space on the drive by decreasing the initial minimum. Mark tells me the rule of thumb, for monitoring whether you have a hard drive issue, is as follows: "You can see the commit peak in task manager or process explorer. To be safe, size your paging files to double that amount, (expansion enabled)". He goes on to say that if a user increases physical memory without increasing the workload, a smaller page file is an option to save hard drive area. Once again, we repeat, however: it's necessary to have at least as much page file as the amount of memory you have in use.

 

Let's move on

 

!!!!!!!!!!!!!!!!!!!! IMPORTANT!!!!!!!!!!!!!!!!!!!!

 

ONCE THE PAGE FILE IS CONTIGUOUS, IT CANNOT BECOME FRAGMENTED ON A HEALTHY DRIVE.

THIS INCLUDES PAGE FILES THAT ARE "DYNAMIC"

 

Any "expert" that has told you the page file becomes fragmented due to "expansion" has an incomplete understanding of what the page file is, what the page file does, and how the page file functions. To make this as simple as possible, here's what actually happens, and exactly how the "fragmented page file" myth got started:

 

First, we need to point out that the page file is a different type of file than most of the files on your computer. The page file is a "container" file. Most files are like bladders that fill with water; they are small, taking no space on the hard drive at all until there is information written; the boundaries of the file will form and change as information is written; the boundaries grow, shrink and expand around and in between the surrounding area and the surrounding files like a balloon or bladder would.

 

The page file is different. The page file is not like a bladder. It's like a can or container. Even if nothing is written to the page file, its physical size and location remain constant and fixed. Other files will form around the page file, even when nothing at all is written to it (once the page file is contiguous).

 

For instance, suppose you have a contiguous page file that has an initial minimum of 256MB. Even if there is absolutely nothing written to that page file, the file will still be 256MB. The 256MB will not move in location on the hard drive and nothing but page file activity will enter the page file area. With no information written to the page file, it is like an empty can, which remains the same size whether it's full or empty.

 

Compare this again to a common file on your hard drive. These files behave more like a bladder than a container. If there is nothing written to a common file, other information will form in proximity. This will affect the final location of these common files; not so with the page file. Once you make the page file contiguous, this extent will remain identical on a healthy drive even if expansion is invoked.

 

Here's how the "fragmented page file due to a dynamic page file" myth got started:

Suppose, for argument's sake, your computing episode requires more virtual memory than your settings accommodate. The operating system will try to keep you working by expanding the page file. This is good. If this doesn't happen you will freeze, slow down, stall, or crash. Now, it's true, the added portion of the page file in this situation is not going to be near the original extent. You now have a fragmented page file, and this is how that "fragmented page file due to expansion" myth was started. HOWEVER, IT IS INCORRECT...simple to see also...the added portion of the page file is eliminated on reboot. The original page file absolutely has to return to the original condition and the original location that it was in when you re-boot. If the page file was contiguous before expansion, it is absolutely contiguous after expansion when you reboot.

 

(blue is data, green is page file)

What a normal page file looks like: [image]

What an expanded page file looks like: [image]

What the page file looks like after rebooting: [image]

 

What Causes the Expansion of a Page File?

Your operating system will seek more virtual memory when the "commit charge" approaches the "commit limit".

 

What does that mean? In the simplest terms this is when your work is asking for more virtual memory (commit charge) than what the OS is prepared to deliver (commit limit).

 

For the technical terms: the "commit charge" is the total of the private (non-shared) virtual address space of all of your processes. This excludes, however, all the address space that's holding code, mapped files, et cetera.

 

For best performance, you need to make your page file so large that the operating system never needs to expand it, so that the commit charge (virtual memory requested) is never larger than the commit limit (virtual memory available). In other words, your virtual memory must be more abundant than the OS will request (soooo obvious, isn't it). This will be known as your initial minimum.

 

Then, for good measure, you need to leave expansion available to about three times this initial minimum. Thus the OS will be able to keep you working in case your needs grow, i.e. you start using some of those very sophisticated programs that get written more and more every day, or you create more user accounts (user accounts invoke the page file for fast user switching), or whatever; there is no penalty for leaving expansion enabled.

 

NOW YOU HAVE THE BEST OF BOTH WORLDS. A page file that is static, because you have made the initial minimum so large the OS will never need to expand it, and, expansion enabled just in case you are wrong in your evaluation of what kind of power user you are or become.

 

USUALLY THE DEFAULT SETTINGS OF XP ACCOMPLISH THIS GOAL. Most users do not need to be concerned with, or proactive about, setting their virtual memory. In other words, leave it alone.

 

HOWEVER, SOME USERS NEED TO USE A HIGHER INITIAL MINIMUM THAN THE DEFAULT. These are the users who have experienced an episode where the OS has expanded the page file, or has claimed to be short of virtual memory.

 

USERS THAT ARE NOT SHORT OF HARD DRIVE SPACE SHOULD NEVER LOWER THE DEFAULT SETTINGS OF THE PAGE FILE.

Fact!

 

Different types of things are paged to different files. You can't page "private writable committed" memory to .exe or .dll files, and you don't page code to the paging file.

Jamie Hanrahan of Kernel Mode Systems, the web's "root directory" for Windows NT and Windows 2000 (aka jeh of 2cpu.com), has corrected my statement on this matter with the following caveat:

 

There's one not-unheard-of occasion where code IS paged to the paging file: If you're debugging, you're likely setting breakpoints in code. That's done by overwriting an opcode with an INT 3. Voilà! Code is modified. Code is normally mapped in sections with the "copy on write" attribute, which means that it's nominally read-only and everyone using it shares just one copy in RAM, and if it's dropped from RAM it's paged back in from the .exe or .dll - BUT - if someone writes to it, they instantly get their own process-private copy of the modified page, and that page is thenceforth backed by the paging file.

Copy-on-write actually applies to data regions defined in EXEs and .DLLs also. If I'm writing a program and I define some global locations, those are normally copy-on-write. If multiple instances of the program are running, they share those pages until they write to them - from then on they're process-private.
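
To make the caveat concrete, here is a small C sketch of the mechanism Jamie describes. Patching your own process's code is purely illustrative (a real debugger writes into the debuggee, e.g. via WriteProcessMemory), and the protection juggling shown is one straightforward way to do it, not any particular debugger's implementation:

    /* Sketch of a software breakpoint, after the caveat above. */
    #include <windows.h>
    #include <stdio.h>

    static int target(void) { return 42; }   /* code we will "patch" */

    int main(void)
    {
        unsigned char *code = (unsigned char *)target;
        DWORD oldProtect;

        printf("before: target() = %d\n", target());

        /* Code pages are mapped copy-on-write from the .exe, nominally
           read/execute only. Make this one writable first. */
        if (!VirtualProtect(code, 1, PAGE_EXECUTE_READWRITE, &oldProtect))
            return 1;

        unsigned char saved = *code;
        *code = 0xCC;   /* INT 3: this write gives the process a private copy
                           of the page, and that copy is now backed by the
                           paging file, not by the .exe */
        *code = saved;  /* restore the opcode; the page remains private */

        VirtualProtect(code, 1, oldProtect, &oldProtect);
        printf("after:  target() = %d (page is now pagefile-backed)\n", target());
        return 0;
    }

A single written byte is enough: the memory manager gives the process a private copy of the whole page, and from then on that page, unlike its unmodified neighbors, can only be paged to the paging file.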

 

 

Credits and Contributions:

 

Perris Calderon

Concept and Creation

 

Eric Vaughan

Editing

 

*Jamie Hanrahan

Kernel Mode Systems (...)

 

**Inside Memory Management, Part 1, Part 2

by Mark Russinovich
