Micky, I don't know of any universities that would use static pages to host web content. It's usually served dynamically through PHP, Java or ASP. Anyone who knows anything about web programming doesn't store information in static HTML files anymore.
If you really need to get the information, there are several ways to go about it.
1) Ask the IT department to provide you with the content, or at least find out what technologies they are using to host the website. There are many different languages and databases that can be used to serve web content, and professionally built sites keep as much information in databases as possible, since database access is faster and handles volume better. If you have (legitimate) access to the internal network the data is hosted on, you could possibly dump the data you want with a simple one-line command (see the example at the end of this post). If you know what services are being used, you'll know how to go about extracting the information; otherwise you're shooting in the dark.
2) Failing that, get on the same internal network as the web server if possible and run an Nmap scan against it to see what services and versions are running. You can also learn a lot from the URLs of the pages you're visiting, which usually give away the scripting language in use, and if you're lucky you might come across a lazy admin who has credentials hard-coded in the website's db_connect string. You may also want to track down a program similar to this:
http://binhgiang.sou...l%20system.html This dumps databases and rips websites for you. (If all you really need is the public pages themselves, see the mirroring example at the end of this post.)
3) Social engineering: call IT from within the university and ask them, on behalf of the IT director, to hand over the last known good web backups for the server before it gets decommissioned. Make it seem like this should have been done last week.
4) Failing the above three steps: DDoS their network, crash their firewalls, and run in and take the data like a boss.
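
To put an actual command behind point 1: assuming the site's content lives in a MySQL database and the IT department has handed you credentials for it (both are assumptions on my part, so swap in whatever they actually run), the "simple one-line command" would look something like:

mysqldump -h dbhost -u youruser -p university_site > site_content.sql

The host, user and database name there are placeholders. The point is that one dump pulls every table the account can read into a single file, which is usually far easier than scraping rendered pages.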
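And on the website-ripping side of point 2: if what you're after is just the rendered public pages rather than the database behind them, a plain mirroring tool does that job without any special access. A rough sketch with wget, assuming the site is publicly reachable and you're allowed to crawl it (the URL is a placeholder):

wget --mirror --convert-links --page-requisites --wait=1 http://www.example-university.edu/department/

--mirror recurses through the site, --convert-links rewrites the links so the copy works offline, --page-requisites grabs the images and CSS each page needs, and --wait=1 keeps you from hammering their server.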