You can set the virtual memory size, yes, and it's recommended you set it to a fixed size, but Windows by default resizes it to suit what it thinks your computer needs.
Quote from Wikipedia:
"However, the page file only expands when it has been filled, which, in its default configuration, is 150% the total amount of physical memory." (that is for the Windows NT series, e.g. NT Workstation, 2000 and XP)
so let's see if I'm right... 50% of 768 is 384, so 150% is 768+384 = 1152 MB... which is 2 MB off what Windows recommends to me.
for 256 MB... 50% is 128, so 150% = 256+128 = 384 MB... which fits what I told you: the computers at school ask for only 300-something.
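that maths is easy to check; here's a quick Python sketch of the 1.5x rule (the 150% multiplier is the NT-series default from the quote above):

```python
def recommended_pagefile_mb(ram_mb):
    # Windows NT-series default: initial page file size is 150% of physical RAM
    return int(ram_mb * 1.5)

print(recommended_pagefile_mb(768))  # 1152
print(recommended_pagefile_mb(256))  # 384
```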
and yes, the 32 bits per cycle is correct, because that is how 'wide' the data path is.
Say you had a stretch of road four lanes wide. One full cycle would be four cars coming on as four cars come off, and if we make the cycle time one second, the road would be "4-bit", as it can only take four cars on to "process" each second.
A bit is either a 1 or a 0, right? So, in saying that, our road can take four 1's or 0's at once during each second.
Now, our 32-bit CPU takes 32 1's or 0's per cycle, and the cycle time depends on the speed of the CPU... for instance, 1 GHz is 1000 MHz, and 1 MHz is 10^6 Hz, so 1 GHz = 1000 x 10^6 = 1,000,000,000 cycles per second.
Which means in total... our 32-bit 1 GHz CPU can take 32 x 1,000,000,000 = 32,000,000,000 1's or 0's per second.
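if you want to check that arithmetic, here's the same calculation in Python (treating the CPU as moving one full word per cycle, which is the simplification I'm using above):

```python
word_size_bits = 32
clock_hz = 1000 * 10**6  # 1 GHz = 1000 MHz, and 1 MHz = 10^6 Hz

# one word (32 bits) per cycle, clock_hz cycles per second
bits_per_second = word_size_bits * clock_hz
print(bits_per_second)  # 32000000000
```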
Now, with your fetch-decode-execute thing... I believe the CPU's own clock only really governs decode and execute, because different machines' FSB speeds will give different fetch/throughput speeds. (The CPU clock actually runs at a multiple of the FSB clock, not at the same rate.) If you compared two machines, one with a slower FSB than the other... let's take older ones to show the difference: a 100 MHz FSB computer compared to a 133 MHz FSB computer.
The only difference you will notice is if you exceed what the 100 MHz bus can feed in per second, in which case the 133 MHz FSB computer will be better, as its bus runs 33 MHz faster and can feed the CPU data at a higher rate. Now, in saying this, you would have to have identical processors to see this effect.
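to put rough numbers on the FSB comparison, here's a sketch of theoretical peak throughput, assuming a 64-bit-wide bus and one transfer per clock cycle (both are my assumptions for illustration, not figures from above):

```python
def fsb_throughput_mb_s(clock_mhz, bus_width_bits=64):
    # theoretical peak: one transfer per cycle, bus_width_bits / 8 bytes per transfer
    return clock_mhz * bus_width_bits // 8

print(fsb_throughput_mb_s(100))  # 800 MB/s
print(fsb_throughput_mb_s(133))  # 1064 MB/s
```

so on those assumptions the 133 MHz bus can feed about 264 MB/s more.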
Now, the virtual memory stuff... virtual memory speed will be limited to the speed of the drive it is stored on. If you didn't know, hard drives transfer data at different speeds from the CPU and everything else, ranging from 25 MB/s (UDMA mode 1) back then to 300 MB/s (SATA II) nowadays.
I'm guessing this is an older article. 2^32 is 4,294,967,296, which is a count of addresses (bytes), not bits or a speed, so converting it to "MHz a sec" doesn't make sense... that figure had me lost too.
I'll do some quick research now...
Copied from Wikipedia. It makes a bit of sense to me; it might make more to you:
Contents of a memory location
Each memory location, in both ROM and RAM memory, holds a generic binary number of some sort. How it is interpreted, its type, meaning, and usage depend only on the context of the instructions which retrieve and manipulate it. Each such coded item has a unique physical position which is described by another unique binary number, the address of that single word, much like each house on a street has a unique number. A pointer is an address itself stored, as data, in some other memory location.
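(my own toy illustration of the pointer idea, not part of the article: model memory as a flat list, addresses as indices into it, and a pointer as an address stored at another address)

```python
# toy model: memory is a flat list, an address is an index into it
memory = [0] * 16

memory[3] = 42   # data stored at address 3
memory[7] = 3    # address 7 holds a pointer: the address of the data above

pointer = memory[7]      # read the pointer out of memory...
value = memory[pointer]  # ...then dereference it
print(value)  # 42
```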
The interesting concept about items stored in memory: not only can they be interpreted as data—text data, binary numeric data, and so forth—but also as instructions themselves, in a uniform manner. This uniformity was introduced with the von Neumann architecture and has been prevalent in computers since the 1950s.
Instructions in a storage address are contextually interpreted as command words to the system's main processing unit, and data is retrieved by such instructions placed in an internal and isolated memory structure called a storage register, where the subsequent instruction can manipulate it in conjunction with data retrieved into other internal memory locations (or internal addresses). Registers are the memory addresses within the part of the central processing unit known as the arithmetic logic unit (ALU), which responds to binary instructions (machine code) fetched into instruction registers selecting combinatorial logic determining which data registers should be added, subtracted, circulated (shifted), and so forth at the low machine language level of binary manipulation of data.
Word size versus address size
A word size is characteristic to a given computer architecture. It denotes the number of bits that a CPU can process at one time. Historically it has been sized in multiples of four and eight bits (nibbles and bytes, respectively), so sizes of 4, 8, 12, 16, 24, 32, 48, 64, and larger came into vogue with technological advances.
Very often, when referring to the word size of a modern computer, one is also describing the size of address space on that computer. For instance, a computer said to be "32-bit" also usually allows 32-bit memory addresses; a byte-addressable 32-bit computer can address 2^32 = 4,294,967,296 bytes of memory, or 4 gibibytes (GiB). This seems logical and useful, as it allows one address to be efficiently stored in one word.
However, this is not always the case. Computers often have memory addresses larger or smaller than their word size. For instance, almost all 8-bit processors, such as the 6502, supported 16-bit addresses, or else they would have been limited to a mere 256-byte capacity. Similarly, the 16-bit Intel 8086 supported 20-bit addressing, allowing it to access 1 MiB rather than 64 KiB of memory. Also, popular Pentium processors have, since the introduction of Physical Address Extension (PAE), supported 36-bit physical addresses while generally having only a 32-bit word.
A modern byte-addressable 64-bit computer—with proper OS support—has the capability of addressing 2^64 bytes (or 16 exbibytes), which as of 2007 is considered practically unlimited, being far more than the total amount of RAM ever manufactured.
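(again a quick check of my own, not from the article: the 2^n address-space sizes mentioned above)

```python
def address_space_bytes(address_bits):
    # a byte-addressable machine with n-bit addresses can reach 2**n bytes
    return 2 ** address_bits

print(address_space_bytes(16))           # 65536 bytes = 64 KiB (8-bit CPUs like the 6502)
print(address_space_bytes(20))           # 1048576 bytes = 1 MiB (the 8086)
print(address_space_bytes(32) // 2**30)  # 4, i.e. 4 GiB for a 32-bit machine
```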
Virtual memory versus physical memory
Main article: Virtual memory
Virtual memory is a mapping of real memory to page tables. The purpose of virtual memory is to abstract memory allocation, allowing the physical space to be allocated as is best for the hardware (that is, usually in non-contiguous blocks), and still be seen as contiguous from a program perspective. Virtual memory is supported by some operating systems (for example, Windows but not DOS) in conjunction with the hardware. It is possible to think of virtual memory as a filter, or an alternate set of memory addresses (that are mapped to real addresses) that allow programs (and by extension, programmers) to read from memory as quickly as possible without requiring that memory to be specifically ordered. Programs use these contiguous virtual addresses, rather than real, and often fragmented, physical addresses, to store instructions and data. When the program is actually executed, the virtual addresses are translated on the fly into real memory addresses. Logical address is a synonym of virtual address.
Virtual memory also allows enlarging the address space, the set of addresses a program can utilize, and thus allows computers to make use of secondary storage that looks, to programs, like main memory. For example, the virtual address space might contain twice as many addresses as main memory, with the extra addresses mapped to hard disk space in the form of a swap file (also known as a page file). The operating system copies pages back into main memory (called swapping) as soon as they are needed. These movements are performed in the background, invisible to programs.
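(one last toy sketch from me, not the article: how a page table translates a virtual address to a physical one; the 4096-byte page size and the table contents are made-up illustration values)

```python
PAGE_SIZE = 4096  # bytes per page; a common size, assumed here for illustration

# toy page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 9}

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[page]  # a missing entry here would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 2*4096 + 4 = 8196
```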
hope all of this helps
Kurtis