If I understand the thrust, it is emphasizing the current ability to recover lost passwords.
No. It is emphasizing the ability to crack them. Lost passwords should, by definition, be unrecoverable.
Most of it is rife with inaccuracies, even after the edit. For example, the LM hash, to my recollection, hasn't been used extensively since NT4. And being able to crack an NTLM password (or even an LM one, really) requires root-level or physical access to the machine in order to read the SAM file, at which point you can simply remove the password anyway.
The quote regarding 14-character passwords falling in 6 minutes was for LM-hashed passwords.
I always see articles and such referring to password security and how long brute-forcing takes, and they always seem to use the LM hash. Despite what the article says, the last version to use LM by default was Windows 2000, to my recollection, if not NT4. 2000 and XP support LM hashes, but only when networked with NT4 or 2000 machines that rely on them, despite what the article and the edit claim (that it's used on XP).
The typical method of storing passwords is, of course, to never store the password at all. Instead, the password is sent through a one-way hash. You store the hash, and when you want to verify a password you hash the input and compare the result to the stored value to see if they match. The idea is to limit the damage in the event that the database of stored hashes is acquired by malevolent parties.
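The store-hash / compare-hash idea above can be sketched in a few lines of Python. (SHA-256 is used here purely for illustration; it's far too fast for real password storage, where you'd want a deliberately slow hash.)

```python
import hashlib

def hash_password(password: str) -> str:
    # One-way hash: store this value, never the plaintext.
    return hashlib.sha256(password.encode()).hexdigest()

def verify(password: str, stored_hash: str) -> bool:
    # Hash the attempt and compare it against the stored value.
    return hash_password(password) == stored_hash

stored = hash_password("hunter2")
print(verify("hunter2", stored))  # True
print(verify("wrong", stored))    # False
```

Note that verification never needs the original password back; only the hashes are ever compared.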
Usually, the passwords are also salted in some way. This is done essentially to add entropy, and the salt needs to be something that will remain the same for a given user. Some authentication systems designed for use on only one machine will use that machine's network MAC address; others will salt the password using the username or user ID as it exists in the system, and so on. The purpose is to ensure that even if two users have the same password, their stored hashes will not be identical.
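Using the username as the salt, as mentioned above, that looks something like this (again a toy sketch with SHA-256, not a production scheme):

```python
import hashlib

def salted_hash(username: str, password: str) -> str:
    # Mix the per-user salt (here, the username) into the input,
    # so the same password yields a different hash for each user.
    return hashlib.sha256((username + ":" + password).encode()).hexdigest()

a = salted_hash("alice", "hunter2")
b = salted_hash("bob", "hunter2")
print(a != b)  # True: identical passwords, different stored hashes
```

This is also what breaks precomputed-table attacks: a table built for unsalted hashes is useless against salted ones.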
This is a cryptographically secure system, as long as the hash algorithm is cryptographically secure.
Most attacks on cryptographically secure systems are done by brute-forcing the password, which is, essentially, checking every possible password-and-salt combination and seeing if the resulting hash matches. For this attack to be feasible you <need> the hashes, so the database of the website or service has to be compromised first. Otherwise, your only way to check guesses would be through their API or login service, and I've personally yet to see a service that doesn't lock you out after repeated attempts to log in with the wrong password.
With access to the hashes, the passwords are still relatively safe, but it is possible to attack a hashed value using rainbow tables. These are gigantic precomputed tables mapping hashes back to the passwords that produced them, covering huge ranges of character combinations. Each table has to be tailored to the individual salting method applied by the service in question, and they are often upwards of 8GB in size. An attacking PC can calculate these hashes on the fly, but a massive table of already-calculated values helps because hash algorithms are typically very processor intensive, so such a table lets an attack proceed a lot faster, particularly if the machine in question has enough memory to keep most or all of the table in RAM.
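The simplest form of this trade-off is a plain lookup table: pay the hashing cost once up front, then every leaked hash falls to a single dictionary lookup. (A true rainbow table additionally uses hash chains to shrink the table at the cost of some recomputation, but the space-for-time idea is the same.) A toy version over a tiny keyspace:

```python
import hashlib
from itertools import product

# Precompute hash -> password for the whole (tiny) keyspace once...
charset = "ab"
table = {}
for length in range(1, 5):
    for combo in product(charset, repeat=length):
        pw = "".join(combo)
        table[hashlib.sha256(pw.encode()).hexdigest()] = pw

# ...then cracking a leaked (unsalted) hash is a single lookup.
leaked = hashlib.sha256(b"abba").hexdigest()
print(table.get(leaked))  # abba
```

This also shows why salting matters: a salt that wasn't baked into the table makes every one of its entries worthless.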
There are already rainbow tables available for NT hashes, making it possible to crack a good percentage of people's Windows XP, Vista, 7, and 8 passwords.
However, the actual hash data is not something Windows just hands to any program that asks for it. Usually you need to reboot into a LiveCD or another OS, or run a program with LocalSystem privileges, to get read/write access to the files where the hashes are stored. So if a hash is in a position to be attacked with rainbow tables, there has already been a breach.
Regarding SSL and encrypted websites: they use public/private key (asymmetric) cryptography to negotiate a symmetric session key, which means that in order to get the data needed to attack the encryption by brute force you would need to perform a man-in-the-middle attack of some sort. The design of SSL connections, however, makes such an attack difficult, because there are checks in place (certificate verification, for one) that try to detect when something is fishy (within the confines of TCP, that is).