I've used Perl to take a list of numbers and generate URLs out of them, such as
Item #'s
11111
22222
33333
44444
and write to text file as
https://www.somedomainname.com/item=11111
https://www.somedomainname.com/item=22222
https://www.somedomainname.com/item=33333
https://www.somedomainname.com/item=44444

I also wrote one that reads in a list of items and places each one into the middle of a URL path. When farming information online, I know the unique identifier I'm targeting, and I just need to prepend and append text to turn it into a URL.
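For anyone wanting to do the same, here's a minimal sketch of that prepend/append step. I used Perl, but the idea is the same in any language; this is a Python version, with the domain and filename as placeholders matching the example above:

```python
def make_urls(items, prefix, suffix=""):
    """Prepend and append text around each item number to form a URL."""
    return [prefix + item.strip() + suffix for item in items]

# Placeholder domain from the example above.
urls = make_urls(["11111", "22222", "33333", "44444"],
                 "https://www.somedomainname.com/item=")

# Write one URL per line to a text file.
with open("urls.txt", "w") as out:
    out.write("\n".join(urls) + "\n")
```

In a real run the item numbers would be read from the input file line by line instead of a literal list.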
Looks like you're all set, though, with what you have. Many different scripts/languages can get to the same result.
When gathering information on 37,000 items, this was sweet. It took a while to read in each line and write out the prepended and appended info (the hard drive was very busy), but it was far better than my previous approach: a macro loop editing a text file to add information to each line, which took forever. The Perl method was much faster and not as buggy as the macro, which would, for no apparent reason, go berserk and start writing information in the wrong places, making a huge mess while running as a TSR on the system. The Perl script is flawless and precise compared to a keyboard macro looping for x-many iterations with a flaw somewhere that fouled everything up.
After getting my full list of proper URLs, I had a team of systems gathering information on all of these items unattended, which took a while. One computer would have taken about 154.17 hours at one item's information gathered every 15 seconds. I used 5 systems and split the list into 5 shorter lists to get it done in around 31 hours. The 15-second interval was because my gathering automation had to ensure the webpage was fully loaded before grabbing the information and moving on to the next item. The page had lots of elements and was not fast to load even on a broadband connection, so some of those 15 seconds were wasted time: the delay padded out the full page load before the quick gather-and-move-on.
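The back-of-the-envelope math and the list splitting above can be sketched like this (numbers straight from my run):

```python
def split_list(items, parts):
    """Split a list into `parts` roughly equal consecutive sub-lists."""
    size, extra = divmod(len(items), parts)
    chunks, start = [], 0
    for i in range(parts):
        end = start + size + (1 if i < extra else 0)
        chunks.append(items[start:end])
        start = end
    return chunks

items_total = 37_000
seconds_per_item = 15      # fixed delay to let each page fully load
machines = 5

one_machine_hours = items_total * seconds_per_item / 3600
split_hours = one_machine_hours / machines

print(f"1 machine:  {one_machine_hours:.2f} hours")    # ~154.17
print(f"{machines} machines: {split_hours:.2f} hours")  # ~30.83
```

Each of the 5 sub-lists then goes to its own machine.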
With all that info gathered, I could dig into it quickly and locally, generate reports, graphs, and such, and look for patterns and other items of interest in the data, which is public domain, free for all.
A professional programmer who could specifically target a page's element could probably gather the information a lot faster: read exactly the information from a specific location in the page, confirm it has populated, and move on, with no delay padding to ensure it's there before gathering. But that's beyond my programming skills for gathering information from websites.
So my methods are Rube Goldberg masterpieces that work, just not as efficiently.
One of my C# books had a section with browser hooks which I suppose could be used for parsing information from websites, but I never dug deep into this. Maybe some day since I still have the book.