A billion and a million are different.
Not in this case. What difference would it make to the results? Sure, the numbers will be larger for a billion than for a million, but it's not the actual numbers that matter; it's how the two results compare.
ST performed two tests: one with a smaller file, and one with a larger file. The two tests revealed that with a larger amount of data to read, my method causes a large IO bottleneck. Two points of reference are enough for a crude line-chart comparison of the two, and while it may not be entirely accurate, it can still reveal the general trend of each function. For example, we can see that my routine seems to grow at something like O((n/4)^2), i.e. roughly quadratically, whereas his is linear: the time taken grows in proportion to the length of the file. Mine doesn't scale that way because of the additional overhead the system needs to manage the larger amount of memory used to store the entire string.
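To make the shape of that two-point comparison concrete, here is a minimal sketch. The file names, sizes, and processing functions are hypothetical stand-ins of my own, not the actual routines that were benchmarked; the point is only that the same input is fed to both approaches at each size.

```python
import os
import time

def read_whole_file(path):
    # Load the entire file into one string, then count newlines in memory.
    with open(path, "r") as f:
        text = f.read()
    return text.count("\n")

def read_streaming(path):
    # Process the file a line at a time; memory use stays roughly constant.
    count = 0
    with open(path, "r") as f:
        for _ in f:
            count += 1
    return count

def make_test_file(path, lines):
    # Generate a throwaway input file of the requested length.
    with open(path, "w") as f:
        for i in range(lines):
            f.write(f"record {i}\n")

# Two input sizes give the two reference points; since both functions are
# fed the same file each time, the comparison between them stays fair.
for lines in (100_000, 1_000_000):
    path = f"test_{lines}.txt"
    make_test_file(path, lines)
    for fn in (read_whole_file, read_streaming):
        start = time.perf_counter()
        fn(path)
        print(f"{fn.__name__}, {lines} lines: {time.perf_counter() - start:.3f}s")
    os.remove(path)
```

With only two sizes you can't pin down the exact growth rate, but you can see whether one approach's time grows much faster than the other's between the small and large inputs, which is all the crude line-chart comparison claims to show.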
What is important here is that we are comparing the programs. As long as the inputs are the same, the comparison is valid.
If you test program A and program B with input C, it's a fair comparison between A and B as long as C is the same for both.
It doesn't matter if there was a mix-up over the exact size of C; the comparison was between A and B.
If you compare a quick sort with a merge sort, whether you test with a million or a billion elements is largely beside the point; what's important is the comparison. If there was confusion over the layout of the data (such as how a naive quick sort takes longer than a merge sort on a nearly sorted array) and it was relevant, then yes, I would agree. But while there is indeed some ambiguity here, it's irrelevant.
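As a hedged illustration of that last point (the implementations and sizes below are my own examples, not anything from the earlier tests): a naive quick sort that always picks the first element as its pivot degrades on nearly sorted input, while a merge sort does not, so the layout of the data would matter in that comparison even when the element count does not.

```python
import random
import sys
import time

sys.setrecursionlimit(10000)  # naive quick sort recurses deeply on nearly sorted input

def quick_sort(a):
    # Naive quick sort: first element as pivot; degrades toward O(n^2)
    # when the input is already (or nearly) sorted.
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

def merge_sort(a):
    # Standard top-down merge sort: O(n log n) regardless of input order.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

def time_sort(sort_fn, data):
    # Time one sort on a copy of the data so both sorts see identical input.
    start = time.perf_counter()
    sort_fn(list(data))
    return time.perf_counter() - start

n = 3000
random_order = random.sample(range(n), n)
nearly_sorted = list(range(n))
nearly_sorted[100], nearly_sorted[2000] = nearly_sorted[2000], nearly_sorted[100]

for name, data in (("random", random_order), ("nearly sorted", nearly_sorted)):
    print(f"{name}: quick sort {time_sort(quick_sort, data):.3f}s, "
          f"merge sort {time_sort(merge_sort, data):.3f}s")
```

Both sorts see exactly the same lists, so the comparison between them is fair either way; the nearly sorted case is simply the one situation where the shape of the input, rather than its size, changes the result.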