I am very interested in the new Tcl archive search engine being used.
You state that it uses a delivery-based system. Can you explain this in more detail?
When I do a search, it's as if the search result is already available. Is this not true, or does it search, create the page on the fly, and point the user to it?
Actually, I stated that it uses a new delivery system, not a delivery-based system (whatever that would mean).
The search results page is written on the fly when a search is performed, and the user is redirected to that page with a Location: header. If an identical search was performed previously, the engine doesn't need to load the database, execute the search, and write the page; it simply redirects the user to the page that already exists, which effectively acts as a cache (a rough sketch of the idea follows the list below). The system works around server limitations present on most web hosts:
- There is no Tcl interface to MySQL, so the database is stored as a text file. Fewer accesses to it are a good thing (although the database is only 102KB at the moment, so it doesn't make a huge difference on the user's end).
- mod_gzip doesn't work on CGI-delivered pages -- only html and php. Delivering the search results as a mod_gzipped html page rather than straight from the CGI makes loading faster (especially for dial-up users) and saves bandwidth.
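To make the flow concrete, here is a minimal sketch of the redirect-and-cache idea in Tcl. The file names (archive.db, the results/ directory), the one-record-per-line database format, and the substring matching are all assumptions for illustration, not details of the actual engine, and URL decoding of the query is skipped for brevity:

    #!/usr/bin/env tclsh
    # Hypothetical CGI sketch: redirect to a cached results page,
    # generating it first only if this exact search is new.

    set query $env(QUERY_STRING)
    # Derive a safe, deterministic file name from the raw query.
    regsub -all {[^A-Za-z0-9]} $query "_" safe
    set cacheFile "results/$safe.html"

    if {![file exists $cacheFile]} {
        # Cache miss: read the flat-file database (assumed here to
        # be one record per line) and write the results page once.
        set db [open "archive.db" r]
        set records [split [read $db] "\n"]
        close $db

        set out [open $cacheFile w]
        puts $out "<html><body><ul>"
        foreach rec $records {
            if {[string match -nocase "*$query*" $rec]} {
                puts $out "<li>$rec</li>"
            }
        }
        puts $out "</ul></body></html>"
        close $out
    }

    # Either way, send the client to the static page; the server
    # can then deliver it as a compressed .html file.
    puts "Location: /results/$safe.html\n"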
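As for the mod_gzip point, a shared host configured along these lines (an illustration of the limitation, not the actual server's configuration) would compress .html and .php responses while leaving CGI output alone:

    mod_gzip_on            Yes
    mod_gzip_item_include  file     \.html$
    mod_gzip_item_include  file     \.php$
    mod_gzip_item_exclude  handler  ^cgi-script$

So by landing the user on a plain .html file, the results page falls on the compressed side of that rule.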