Also, I propose creating a read-only instance of each Wikimedia project (e.g. the domain "en.wikipedia.org" becomes "read.en.wikipedia.org"), connected to a proxy that only delivers the content: no access to the various surrounding tools, no preferences, no logon, just a basic search bar at the top of the page, no edits, and a sidebar limited to visiting some read-only portals or pointing to other related websites not covered by this restriction.
This website would just keep a local standard cache of generated pages with a long expiration time (about one week as a minimum), so it would be semi-live. But at least we would continue delivering the content.
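To make the idea more concrete, here is a minimal sketch (not a definitive implementation) of such a read-only caching proxy, assuming Python with Flask and requests; the "read.en.wikipedia.org" domain, the upstream URL and the one-week TTL are just the placeholders from above:

    # Minimal sketch of the read-only caching proxy; Flask and requests are
    # assumed to be installed, and the upstream URL is a placeholder.
    import time
    import requests
    from flask import Flask, Response, abort

    app = Flask(__name__)
    UPSTREAM = "https://en.wikipedia.org"      # real project behind the proxy
    CACHE_TTL = 7 * 24 * 3600                  # one week, as a minimum
    _cache = {}                                # path -> (timestamp, html)

    @app.route("/wiki/<path:title>")
    def read_only_page(title):
        path = "/wiki/" + title
        hit = _cache.get(path)
        if hit is None or time.time() - hit[0] > CACHE_TTL:
            upstream = requests.get(UPSTREAM + path, timeout=10)
            if upstream.status_code != 200:
                abort(upstream.status_code)
            _cache[path] = (time.time(), upstream.text)
            hit = _cache[path]
        resp = Response(hit[1], mimetype="text/html")
        resp.headers["Cache-Control"] = "public, max-age=%d" % CACHE_TTL
        return resp

    # Login, preferences and edit URLs simply have no route here,
    # so the proxy delivers the content only.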
And to preserve privacy, this read-only edition should offer to produce a large navigable extract (e.g. by collecting pages from a start page, plus some randomly selected start pages, up to some depth level or a maximum downloadable file size) that can be browsed offline (with the existing offline Wikipedia reader apps). Users could then navigate freely from some well-known Wikipedia portals or from any subject they are interested in. This extraction can be built on top of the caching proxy, maximizing the use of its cache (filled by content requested by other random visitors); see the sketch below.
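As an illustration only, a rough sketch of that depth- and size-limited collection, assuming it runs against the hypothetical read-only proxy above and that requests and BeautifulSoup (bs4) are available; the limits and the "Portal:Contents" start page are arbitrary examples:

    # Sketch of the extraction: breadth-first collection of /wiki/ pages from
    # some start titles, bounded by depth and total downloaded size.
    import os
    import requests
    from bs4 import BeautifulSoup

    PROXY = "https://read.en.wikipedia.org"    # placeholder read-only domain
    MAX_DEPTH = 2                              # "some level of depth"
    MAX_BYTES = 50 * 1024 * 1024               # maximum downloadable size

    def collect(start_titles, out_dir="extract"):
        os.makedirs(out_dir, exist_ok=True)
        queue = [(t, 0) for t in start_titles]
        seen, total = set(), 0
        while queue:
            title, depth = queue.pop(0)
            if title in seen or depth > MAX_DEPTH or total >= MAX_BYTES:
                continue
            seen.add(title)
            html = requests.get(PROXY + "/wiki/" + title, timeout=10).text
            total += len(html.encode("utf-8"))
            name = title.replace("/", "_").replace(":", "_") + ".html"
            with open(os.path.join(out_dir, name), "w", encoding="utf-8") as f:
                f.write(html)
            # follow internal /wiki/ links one level deeper
            for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                href = a["href"]
                if href.startswith("/wiki/") and ":" not in href:
                    queue.append((href[len("/wiki/"):], depth + 1))

    # e.g. collect(["Portal:Contents"]) to start from a well-known portal.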
Which format would be used for the download? Basically a collection of prerendered HTML pages, plus images and some common CSS stylesheets, with no JavaScript at all (or only a minimal script that is not strictly necessary for using it), packed in an archive that is freely installable on any web server or in a desktop storage folder. The metadata of these pages would only contain the date of production, no history at all. It could also contain the licence info (and to list the contributors, one would have to browse the history online with a decently secured browser; we would just display the URL to follow to get that history list online).
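And a possible packaging step, again only a sketch: the metadata.json name, its fields and the history URL pattern are my own assumptions, not an existing format:

    # Sketch: pack the extracted pages into one archive, with only a
    # production date, licence info and the URL pattern for online history.
    import json, os, zipfile
    from datetime import datetime, timezone

    def pack(extract_dir="extract", archive="wikipedia-extract.zip"):
        meta = {
            "produced": datetime.now(timezone.utc).strftime("%Y-%m-%d"),
            "licence": "CC BY-SA",   # licence info shipped in the archive
            # no history included; contributors must be looked up online:
            "history_url_pattern":
                "https://en.wikipedia.org/w/index.php?title={title}&action=history",
        }
        with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as z:
            z.writestr("metadata.json", json.dumps(meta, indent=2))
            for root, _dirs, files in os.walk(extract_dir):
                for name in files:
                    path = os.path.join(root, name)
                    z.write(path, os.path.relpath(path, extract_dir))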