The specification itself doesn't say anything about reducing input latency. It's journalists' interpretations of it that make that claim, usually while trying to explain VRR.
Most input latency lives in the application, not in the time between frames. VRR only "reduces latency" in the sense that a variable refresh rate lets a new image be displayed as soon as it's ready, rather than waiting for the next fixed refresh, without tearing. But the cases where that helps are largely ones where the framerate the application is pushing falls below a standard refresh rate, so in practice it mostly just eliminates screen tearing. The absolute time from one frame of input to the next is actually greater, and none of this has anything to do with how quickly the software receives input.
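To put rough numbers on that, here's a toy sketch (my own illustration, not from any spec) assuming a 60 Hz fixed display and an application rendering at 48 fps. With vsync on a fixed refresh, a finished frame waits for the next refresh boundary; with VRR, the display refreshes when the frame is ready:

```python
import math

FIXED_HZ = 60.0           # assumed fixed refresh rate
REFRESH = 1.0 / FIXED_HZ  # ~16.67 ms per refresh
FPS = 48.0                # assumed application render rate, below 60 Hz
FRAME = 1.0 / FPS         # ~20.83 ms per rendered frame

def vsync_present(t_done):
    """Fixed refresh + vsync: a finished frame waits for the next
    refresh boundary before it can be shown without tearing."""
    return math.ceil(t_done / REFRESH) * REFRESH

def vrr_present(t_done):
    """VRR: the display refreshes when the frame is ready, so the
    finished frame is shown immediately."""
    return t_done

for i in range(1, 4):
    t_done = i * FRAME  # time the i-th frame finishes rendering
    print(f"frame {i}: done {t_done*1000:6.2f} ms  "
          f"vsync shows at {vsync_present(t_done)*1000:6.2f} ms  "
          f"vrr shows at {vrr_present(t_done)*1000:6.2f} ms")
```

VRR shaves off the wait-for-refresh portion (frame 1 appears at ~20.8 ms instead of ~33.3 ms), but the interval between displayed frames is still ~20.8 ms, longer than the 16.67 ms of a steady 60 Hz stream, and nothing here changes when the application samples input.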