I love to experiment with web stuff. My new goal (sort of) is to make a web crawler that will go through a bunch of pages, find URLs, and follow them.
I would like to make a web crawler that can collect a large number of URLs and save them in a simple text file.
How would I do this?
(Not looking for anything fancy, I've got limited bandwidth, you know.)
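For what it's worth, here is a minimal sketch of the kind of thing you're describing, using only Python's standard library (no Scrapy or BeautifulSoup needed). The seed URL, the `max_pages` cap, and the `urls.txt` filename are just placeholders; swap in your own. The cap also keeps bandwidth usage bounded:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects href values from <a> tags as the HTML is parsed."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=50):
    """Breadth-first crawl from seed; returns the set of URLs found."""
    seen = {seed}          # URLs already queued, to avoid revisiting
    queue = deque([seed])  # pages still to fetch
    found = set()          # every URL discovered so far
    while queue and len(found) < max_pages:
        url = queue.popleft()
        try:
            with urlopen(url, timeout=10) as resp:
                if "text/html" not in resp.headers.get("Content-Type", ""):
                    continue  # skip images, PDFs, etc.
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip pages that fail to load
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            # Resolve relative links like "/about" against the current page
            absolute = urljoin(url, href)
            if urlparse(absolute).scheme in ("http", "https"):
                found.add(absolute)
                if absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
    return found

# Example usage (hits the network; use your own seed URL):
# urls = crawl("https://example.com", max_pages=20)
# with open("urls.txt", "w") as f:
#     f.write("\n".join(sorted(urls)))
```

A couple of caveats: this ignores robots.txt and doesn't rate-limit requests, both of which you'd want to add before crawling anyone else's site at scale.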