Memory access times have been measured in nanoseconds ever since the original IBM PC.
512MB is an OK amount for XP. However, running a number of big programs (such as Illustrator, Photoshop, etc.) at the same time means that they can't all fit in physical RAM at once; once allocated memory exceeds available physical memory, Windows needs to prioritize what remains in physical RAM and what gets swapped out. It's a fairly involved algorithm based on which code segments run most often as well as which data is accessed; for example, if you use a certain command often, the code behind it will remain in memory, whereas other portions of the process's code segment might be swapped to disk.
The way that Windows loads an executable is important in determining exactly what effect virtual memory has on it, as well as what effect it has on virtual memory.
What Windows does is essentially "map" the executable file into memory. This doesn't actually copy the file's data into allocated memory; rather, it arranges for the file to be accessed via ordinary memory reads instead of explicit file I/O from disk, which is far easier both code-wise and performance-wise. Rather than copying code into memory and running it, Windows can simply point execution at the WinMain() function within the mapped program file (of course it's not quite that simple, since external links have to be resolved and so on, but some things need to be omitted for brevity's sake). This way, the parts of the program itself are kept in memory or swapped out based on the same logic that governs data: frequently accessed pages remain in RAM, whereas infrequently used pages are more likely to be swapped out when memory gets low.
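To make "mapping" concrete, here is a minimal sketch using the same Win32 primitives the loader builds on (CreateFile, CreateFileMapping, MapViewOfFile). The file name is purely illustrative; the point is that no data is read from disk until a mapped page is actually touched:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE file = CreateFileA("example.bin", GENERIC_READ, FILE_SHARE_READ,
                                  NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        /* Create a read-only mapping object backed by the file itself,
         * not by the page file. */
        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (!mapping) { CloseHandle(file); return 1; }

        /* Map a view of the whole file. Nothing is read yet; pages are
         * faulted in from the file only when we actually access them. */
        const unsigned char *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (view) {
            printf("first byte: 0x%02x\n", view[0]); /* this access triggers the read */
            UnmapViewOfFile(view);
        }
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }

When the loader maps an executable this same way, "loading" a 10MB program costs almost nothing up front; the cost is paid page by page as the code is actually executed.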
"When memory get's low" is a key phrase here; while many pages in a program file may be marked as "disposable" or swappable, Windows will still keep those bits out of Virtual memory for as long as possible; it is only when all physical RAM is committed and a program attempts to commit some more (or if a there is no contiguous block of free RAM that is large enough for the requested size a program wants) that windows takes a good hard look at the loaded pages. Basically the logic boils down to, "will I need this right away?" and makes a sort of guess based on how often it is accessed as well as the flags set on that particular memory page. For example pages can be set to never leave physical RAM, often programs will flag critical parts of their code with this to increase performance. It is obviously discouraged to do so with vast swaths of a program, since it restricts the abilities of the cache manager to do it's job.
This is where the mapped file I/O comes in: the file I/O itself is done on the fly, as needed. In effect, when an executable is loaded it is treated very nearly as a small, read-only swap file, in that data can be loaded from the executable but never written back to it; if a modified page has to be swapped out, it goes to the page file instead. Some have asked why the executable is never modified, and the reasons range from the obvious (do you really want your executables writing to themselves?) to the more precise: data loaded from the executable is often "template" data that gets changed, frequently by the program itself setting hooks or patching code. The technique that handles this is called "copy on write", and it is used quite extensively.
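Copy-on-write is available to user code through the same mapping API. A hedged sketch (the file name is illustrative): a view mapped with FILE_MAP_COPY reads from the shared, file-backed pages, but the first write to a page transparently duplicates it for this process, leaving the file on disk untouched.

    #include <windows.h>

    int main(void)
    {
        HANDLE file = CreateFileA("template.dat", GENERIC_READ, FILE_SHARE_READ,
                                  NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_WRITECOPY, 0, 0, NULL);
        if (!mapping) { CloseHandle(file); return 1; }

        unsigned char *view = MapViewOfFile(mapping, FILE_MAP_COPY, 0, 0, 0);
        if (view) {
            unsigned char original = view[0]; /* read: still the shared page     */
            view[0] = original ^ 0xFF;        /* write: the page is duplicated
                                                 for this process; the file on
                                                 disk is never modified          */
            UnmapViewOfFile(view);
        }
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }

This is the same mechanism the loader relies on for "template" data in executables: every process starts out sharing the pristine pages, and only the pages a process actually modifies are copied.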
As a final note, the RAM usage that Task Manager reports for two instances of the same application is quite misleading; the "private bytes" are not necessarily private, since the two instances share the same code and data; their address spaces map the same physical pages... until one instance changes code or data, at which point the touched page is duplicated, the write is committed to the copy, and that process's memory map is updated to refer to the "changed" page rather than the "template" one.
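This sharing can be observed directly with QueryWorkingSetEx from psapi, which reports per-page attributes including whether a page is currently shared with other processes. A rough sketch (link with psapi.lib); inspecting one of our own code pages is just a convenient demonstration target, since image code pages are mapped shared between instances:

    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    static void report(void *address)
    {
        PSAPI_WORKING_SET_EX_INFORMATION info = { 0 };
        info.VirtualAddress = address;
        if (QueryWorkingSetEx(GetCurrentProcess(), &info, sizeof(info)))
            printf("page %p: valid=%u shared=%u sharecount=%u\n",
                   address,
                   (unsigned)info.VirtualAttributes.Valid,
                   (unsigned)info.VirtualAttributes.Shared,
                   (unsigned)info.VirtualAttributes.ShareCount);
    }

    int main(void)
    {
        /* A code page of our own image: while untouched, it typically
         * reports shared=1, i.e. the same physical page backs every
         * running instance of this executable. */
        report((void *)&report);
        return 0;
    }

Run two copies of an application and compare: the shared pages are counted against both processes by simple per-process tallies, which is why adding up Task Manager's numbers overstates actual RAM consumption.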