This is a tricky problem in general, and partial, special-purpose solutions abound, especially unidirectional ones. For example, you can get read-only versions of Wikipedia for offline use in your remote mountain village; but there is no easy way to contribute your edits back to the version on the main internet.
Also, you should cache the internet for offline use even while the net is fine, because nation states are war-gaming ways to destroy the internet, and us little people will suffer when that happens and we can't get our YouTube instructional videos on how to survive the apocalypse.
Offline automatic filesync
ArchiveBox takes a list of website URLs you want to archive, and creates a local, static, browsable HTML clone of the content from those websites (it saves HTML, JS, media files, PDFs, images and more).
You can use it to preserve access to websites you care about by storing them locally offline. ArchiveBox imports lists of URLs, renders the pages in a headless, authenticated, user-scriptable browser, and then archives the content in multiple redundant common formats (HTML, PDF, PNG, WARC) that will last long after the originals disappear off the internet. It automatically extracts assets and media from pages and saves them in easily-accessible folders, with out-of-the-box support for extracting git repositories, audio, video, subtitles, images, PDFs, and more.
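The basic workflow above looks something like this in practice. A minimal sketch, assuming ArchiveBox is installed (e.g. via `pip install archivebox`); the folder name and `urls.txt` file are hypothetical examples:

```shell
# Create a folder to hold the archive collection and initialize it
mkdir -p ~/archive && cd ~/archive
archivebox init

# Archive a single URL
archivebox add 'https://example.com'

# Or import a list of URLs, one per line (urls.txt is a hypothetical file)
archivebox add < urls.txt

# Browse the archived snapshots locally in a web UI
archivebox server
```

Each snapshot ends up in its own timestamped folder containing the redundant output formats (HTML, PDF, screenshot, WARC), so the archive stays browsable as plain files even without the server running.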
AFAICT there is no way to contribute upstream. But a reasonably simple and well-curated option is the Kiwix offline Wikipedia, which can give you everything, everything minus pictures, only "medical" articles, only "school" articles, and so on.
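Serving one of those Kiwix bundles locally is a one-liner. A sketch, assuming `kiwix-tools` is installed and you have downloaded a ZIM file from download.kiwix.org; the exact ZIM filename below is a hypothetical placeholder, as real filenames include a date stamp:

```shell
# Serve an offline Wikipedia ZIM file on a local port
# (wikipedia_en_medicine.zim is a placeholder filename)
kiwix-serve --port=8080 wikipedia_en_medicine.zim
```

Then point any browser on the local network at port 8080 and you have a read-only Wikipedia with no internet connection at all, which is exactly the unidirectional trade-off described above.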