Yes, it is confusing, Matt. There needs to be a common anchor in the text that sed can use to limit the search. sed's regular expressions are 'greedy': a pattern like .* matches the longest string it can, so it runs to the LAST match on the line rather than stopping at the NEXT one, unless some surrounding text anchors the pattern to a single entry.
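To see what I mean, here is a minimal example (the sample line is made up, not from your page): .* swallows everything up to the last closing tag, while a negated character class like [^<]* cannot cross a '<', so it stops at the next cell.

```shell
# Sample line with two table cells (hypothetical data)
line='<td>one</td><td>two</td>'

# Greedy: .* runs to the LAST </td>, so the whole line is replaced
echo "$line" | sed 's/<td>.*<\/td>/MATCH/'
# → MATCH

# Anchored: [^<]* stops at the next '<', so only the FIRST cell is replaced
echo "$line" | sed 's/<td>[^<]*<\/td>/MATCH/'
# → MATCH<td>two</td>
```

That negated-class trick is the usual way to fake a non-greedy match in sed.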
I think it is also time to consider that whenever someone changes the web page, this kind of HTML parsing with sed is going to break, and it will be difficult for you to fix.
You can still use batch files, but I think the way to go is to download the web page as you are doing, run it through an HTML-to-text converter, and then use batch files to parse the plain text. The text would be similar to what you get from the 'Save As text' option in a web browser, and some converters can add delimiters as well.
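As a rough sketch of the two-step approach (the page name and the field being extracted are made up for illustration; any HTML-to-text converter would do for step 1):

```shell
# Step 1: convert the downloaded page to plain text.
# 'lynx -dump page.htm > page.txt' is one widely available way to do it;
# HTMstrip or a similar tool works the same way.
# Simulate the converter's output here so the example is self-contained:
printf 'Name: Widget\nPrice: 19.95\n' > page.txt

# Step 2: parse the plain text instead of the HTML.
# Labelled fields survive page redesigns far better than raw tags do.
sed -n 's/^Price: //p' page.txt
# → 19.95
```

The point is that the converter gives you stable, labelled lines to key on, so a page layout change is much less likely to break the parsing.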
HTMstrip by Bruce Guthrie is a rather dated free MS-DOS command-line tool and is limited to short (8.3) filenames. There may be better modern tools.
Try this on the web page and see if it gives you the information you need; the output format can be changed.
htmstrip /border=t page.htm