> The processing power required to test data is so little that no performance loss would be noticed, and a test condition for the type or size of the data usually only requires a few more lines of code.
This isn't always the case! It will depend a lot on how the validation is performed.
For example, when entering an item number to sell in our "Point of Sale" software, it (obviously!) needs to know whether what was entered is valid.
Loading every single item into memory is not an option, however, so it cannot realistically just use, say, a dictionary of all the valid item numbers. What if somebody just created, deleted, or renumbered an item? They'd expect the entry to work, but it wouldn't until all of that data was pulled into the dictionary again.
Of course, hitting the database isn't usually expensive. One can check whether an item is valid by "merely" checking a particular database view to see if that item number is associated with an item ID. If not, the item number isn't valid; if it is, then we also know the item ID, which is what's used to load the item in.
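In sketch form, that check looks something like the Python below. The schema, the `item_lookup` view, and the column names are all made up for illustration (with an in-memory SQLite database standing in for Postgres); the point is just that every lookup hits the database, so freshly created, deleted, or renumbered items are seen immediately with no cache to refresh.

```python
import sqlite3

# Hypothetical schema: a view mapping item numbers to item IDs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (item_id INTEGER PRIMARY KEY, item_number TEXT);
    CREATE VIEW item_lookup AS
        SELECT item_number, item_id FROM items;
    INSERT INTO items VALUES (1, 'A100'), (2, 'B200');
""")

def resolve_item(item_number):
    """Return the item ID if the number is valid, else None."""
    row = conn.execute(
        "SELECT item_id FROM item_lookup WHERE item_number = ?",
        (item_number,),
    ).fetchone()
    return row[0] if row else None
```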
But customers are a pretty tricky bunch. Just being able to accept an item number isn't enough. They want to get the item loaded by scanning one of its barcodes, or by typing in a serial number, or perhaps by typing in the item number that one of their vendors uses for the item rather than their own. That can mean that verifying an entry is invalid becomes quite the undertaking computationally. And even when it is valid, the software might look in three or four different places before finding what the entry really is and being able to associate it with the item.
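A toy sketch of what that multi-source lookup amounts to (again with invented table names and an in-memory SQLite stand-in, not our actual code). Each extra identifier type the customer wants becomes one more place to check, and an invalid entry is the worst case because it has to fail every lookup before we can say so:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items          (item_id INTEGER PRIMARY KEY, item_number TEXT);
    CREATE TABLE barcodes       (barcode TEXT, item_id INTEGER);
    CREATE TABLE serials        (serial TEXT, item_id INTEGER);
    CREATE TABLE vendor_numbers (vendor_number TEXT, item_id INTEGER);
    INSERT INTO items VALUES (1, 'A100');
    INSERT INTO barcodes VALUES ('0123456789', 1);
    INSERT INTO serials VALUES ('SN-42', 1);
    INSERT INTO vendor_numbers VALUES ('VND-7', 1);
""")

# Each (source, query) pair is one place the operator's entry might match.
LOOKUPS = [
    ("item number",   "SELECT item_id FROM items WHERE item_number = ?"),
    ("barcode",       "SELECT item_id FROM barcodes WHERE barcode = ?"),
    ("serial number", "SELECT item_id FROM serials WHERE serial = ?"),
    ("vendor number", "SELECT item_id FROM vendor_numbers WHERE vendor_number = ?"),
]

def resolve_entry(entry):
    """Try each source in turn; invalid input has to miss all of them."""
    for source, query in LOOKUPS:
        row = conn.execute(query, (entry,)).fetchone()
        if row:
            return row[0], source
    return None, None
```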
The really fun part we're finding is that while many of the things we want to cross-reference to get the relevant items are available via database views, joining on database views is incredibly slow on Postgres.
There are some additional performance issues in that it then needs to load up that item in order to know how to function with it (e.g. serial items need to prompt for the serial number, fuel needs to ask for a pump, and some other stuff like that). That is (arguably) part of validation too, but it's rather problematic: when it comes to loading item information, we apparently decided "in for a penny, in for a pound", and it loads pretty much everything associated with the item. That makes sense for the easy stuff that's pretty much flat in the database. But when selling an item means waiting for it to bring in the full quantity-change history and every single price change since forever, it's no surprise we're seeing performance issues now that customers have accumulated more historic data. Kind of funny that this never occurred to us when we added it.
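One way to avoid that cost is lazy loading: fetch the cheap, flat columns up front and defer the history until something actually asks for it. A toy sketch of the idea, with invented tables and an in-memory SQLite stand-in rather than our real model:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (item_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE quantity_history (item_id INTEGER, change INTEGER);
    INSERT INTO items VALUES (1, 'Widget');
    INSERT INTO quantity_history VALUES (1, 5), (1, -2);
""")

class Item:
    """Loads the flat columns eagerly; defers the expensive history."""

    def __init__(self, conn, item_id):
        self.conn = conn
        self.item_id = item_id
        # Cheap, flat data: fine to load at sale time.
        self.name = conn.execute(
            "SELECT name FROM items WHERE item_id = ?", (item_id,)
        ).fetchone()[0]
        self._history = None  # deliberately NOT loaded at sale time

    @property
    def history(self):
        # The expensive query runs only on first access (say, when a
        # history screen is opened), not on every sale.
        if self._history is None:
            self._history = self.conn.execute(
                "SELECT change FROM quantity_history WHERE item_id = ?",
                (self.item_id,),
            ).fetchall()
        return self._history

# Selling the item only touches the flat data; no history query yet.
item = Item(conn, 1)
```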