Understanding the data types.
You can do a search on Google or Yahoo or some other search engine on the subject "Data Types" and find numerous references. Many of these will be overwhelming.
Think of personal computers as tools or appliances that have to be assembled from parts. Because of size, cost, efficiency, and other factors, those parts come in different sizes. Of course, the parts used inside the computer are not visible to the eye, but the relationship between cost and efficiency still holds, even though the size of these items is not apparent.
The early personal computers made in the 1970s were either 4-bit or 8-bit machines. (There were also some 6-bit designs, but let's not talk about those.) In an 8-bit computer the processor can quickly load eight bits of data in a single operation, and it can perform both arithmetic and logic operations on the bits in that one byte of memory.
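A couple of lines of C show the idea; the values 200 and 100 here are arbitrary, chosen only to make the 8-bit behavior visible:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 200, b = 100;
    uint8_t sum  = (uint8_t)(a + b); /* arithmetic wraps at 8 bits: 300 mod 256 = 44 */
    uint8_t both = a & b;            /* logic operates on all eight bits at once: 64 */
    printf("%d %d\n", sum, both);    /* prints: 44 64 */
    return 0;
}
```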
Now to get to the point being made here. In these early computers there was no simple, quick way to access a single bit. Yes, you could get at one bit, but in doing so you read the seven other bits alongside it whether they interested you or not. If you wanted to change that one bit, you certainly could, but to write it back to memory you had to write all eight bits back. The hardware simply had no provision for reading or writing one bit by itself without reading or writing the others.
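Here is a minimal C sketch of that read-modify-write pattern; the function name and the choice of bit 3 are purely illustrative:

```c
#include <stdint.h>

/* "mem" stands in for one byte of memory; to change bit 3 you must
   read all eight bits and write all eight bits back. */
void set_bit3(uint8_t *mem)
{
    uint8_t b = *mem;          /* read the whole byte: bit 3 and its 7 neighbors */
    b |= (uint8_t)(1u << 3);   /* set just the one bit of interest in the copy */
    *mem = b;                  /* write the whole byte back to memory */
}
```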
Now, it is true that there are instructions, or sequences of instructions, that will let you test any bit in an 8-bit memory location. Yet the hardware itself never actually reads just one bit, nor does it write just one bit. In fact, it takes more effort to pick out a single bit and operate on it than it does to manipulate all eight bits at once. This is basically a hardware characteristic. The term BYTE came into use as the name for the eight bits that reside in a single memory location.
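A matching sketch of a bit test, again with a made-up function name: the whole byte gets loaded either way, and the shift-and-mask is the extra work just described.

```c
#include <stdbool.h>
#include <stdint.h>

/* Testing bit n still loads the full byte; masking off the other
   seven bits is work on top of the plain 8-bit load. */
bool bit_is_set(uint8_t b, unsigned n)
{
    return (b & (1u << n)) != 0;
}
```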
Now let's bring this up to the present. The operating systems we use today are fundamentally 32-bit systems; in a somewhat abstract sense, they are designed for a computer that accesses 32 bits at a time. Of course, our modern-day processors can access 64 bits, but the most popular systems still in use are 32-bit. So the quick and efficient way to access data in memory is simply to pull in 32 bits at a time, operate on them, and write them back out. There is no great advantage in trying to pick out individual bits among the 32 that are available, unless you can imagine some special case.
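To make that concrete, here is a hedged C sketch contrasting a plain 32-bit word with packed bits; the struct flags layout is hypothetical, and actual compiler output will vary:

```c
#include <stdint.h>

/* Plain 32-bit value: one load, one add, one store. */
void bump_word(uint32_t *p)
{
    *p += 1;
}

/* Hypothetical bit-field layout, purely for illustration: the compiler
   must still load the whole 32-bit word, mask, shift, and store it back. */
struct flags {
    uint32_t ready : 1;
    uint32_t count : 31;
};

void bump_field(struct flags *f)
{
    f->count += 1;   /* looks like one operation, compiles to read-modify-write */
}
```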
Now, as to the registry: it is a database manager, and we would like our database manager to be reasonably efficient in both speed and memory usage. It would seem that specifying a byte value instead of a double word would be a better use of memory when only a small number is needed. A byte can represent a value from 0 up to 255, or, if you choose, a signed value ranging from -128 up to +127. However, the hardware does not work any better picking out just one byte than it does picking up the entire word, double word, or quadruple word, as the case may be. The CPU instruction set, of course, has specific instructions for doing bit test, set, and reset. You could use those; however, they actually take more time than operating on the entire set of bits all at once.
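As an illustration, here is a minimal sketch of storing a small number in the registry with the RegSetValueExW API; the key path Software\Example and the value name SampleValue are made up for this example:

```c
#include <windows.h>

int main(void)
{
    HKEY hKey;
    DWORD value = 42;   /* even a tiny number is stored as a full 32-bit DWORD */

    /* Open (or create) a scratch key under the current user's hive. */
    if (RegCreateKeyExW(HKEY_CURRENT_USER, L"Software\\Example", 0, NULL, 0,
                        KEY_SET_VALUE, NULL, &hKey, NULL) == ERROR_SUCCESS) {
        RegSetValueExW(hKey, L"SampleValue", 0, REG_DWORD,
                       (const BYTE *)&value, sizeof(value));
        RegCloseKey(hKey);
    }
    return 0;
}
```

Even though 42 would fit comfortably in a byte, it travels through the API and sits in memory as a full double word, which matches how the hardware prefers to move data anyway.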
My apologies for not making something like this more interesting or easier to understand. If it were something visible that you could see, it would be rather obvious. In the Windows operating system, a design decision was made some time ago to use the double word as the most common data type in the registry. This is not really a dependency on a specific programming language; it has to do with the underlying capabilities of the hardware. Remember that MS-DOS and Windows started out on a 16-bit computer that had an 8-bit data path and a somewhat strange 20-bit address path: the old 8088, a variant of the 8086 with an 8-bit external bus. With the advent of the Intel 386, the idea of having a 32-bit processor instead of a 16-bit processor became a reality for desktop computers.
So, basically, that is the best I can do to explain why they use the double word. I am an old, old assembly language programmer, and I used to work with an operating system that Intel had available long before DOS came on the scene. We did our work in assembly language, and if we wanted speed we accessed memory at the byte level rather than testing individual bits. Of course, we could write a program to test individual bits, but as I said before, there is a speed disadvantage, and it actually takes more code to access just one bit.
To the moderator: edit or strike out any of this that you do not think is relevant. This is just the way I try to explain things to people who don't do this kind of low-level, pedal-to-the-metal work.