GPUs don't consume a fixed amount of power. Actual draw varies with the specific system configuration. And while many cards are built around the same GPU, they often carry different supporting circuitry with different electrical requirements; differences in bus-mastering circuitry alone can shift each card's precise power consumption.
The reason vendors don't publish these values is that no single true figure exists. The measurements from hardware.fr apply to one specific configuration. In some instances the measured draw even exceeded the card's rated TDP (the 9800 GT being one example). But that was only one particular 9800 GT: the chip may be an NVIDIA 9800 GT, but the surrounding circuitry typically differs from vendor to vendor. The recommended minimum requirements are usually calculated from the GPU specifications NVIDIA provides to board partners, plus the power draw of the added circuitry, plus a generous padding value, and then rounded up to a standard power supply size.
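To make that last step concrete, here is a minimal sketch of how such a recommendation might be derived. Every number in it is made up for illustration (the circuitry allowance, the rest-of-system estimate, the padding factor, and the list of PSU sizes are all assumptions, not vendor figures):

```python
# Hypothetical illustration of deriving a "recommended minimum PSU" figure:
# take the GPU's rated power, add an allowance for board circuitry and the
# rest of the system, apply padding, then round up to a common PSU size.
# All values are invented for the example.

COMMON_PSU_SIZES = [300, 350, 400, 450, 500, 550, 600, 650, 750, 850, 1000]

def recommended_psu(gpu_rated_w, circuitry_w=20, system_w=250, padding=0.25):
    """Return the smallest common PSU size covering the padded estimate."""
    estimated = (gpu_rated_w + circuitry_w + system_w) * (1 + padding)
    for size in COMMON_PSU_SIZES:
        if size >= estimated:
            return size
    raise ValueError("estimated draw exceeds largest listed PSU size")

# e.g. a card rated around 105 W (roughly 9800 GT class):
print(recommended_psu(105))  # (105 + 20 + 250) * 1.25 = 468.75 -> 500
```

This is why two vendors can quote different PSU requirements for cards using the same GPU: each plugs in its own circuitry and padding figures before rounding up.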
Asking "Why don't graphics card manufacturers give exact numbers for power usage?" is like asking why application software requirements don't give precise values for memory and disk space. Different configurations yield different measurements. You can no more say "this graphics card uses exactly XXX.XX watts" than you can state that a given application will always use a specific amount of memory.