When installing the I2P router one also gets the command eepget as a replacement for wget or curl for use within I2P. It's quite limited though, so I am not sure what the point is.
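
As far as I can tell its basic use is simply fetching a single URL to a local file, something like this (the address is only a placeholder):

eepget http://example.b32.i2p/index.html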

I have used wget before to mirror sites. Called with the --mirror --convert-links --page-requisites options, it will basically download a whole site with all the files linked in it and at the same time convert those links to work within the downloaded copy. Pretty amazing.
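
For reference, such a call looks roughly like this (example.org stands in for the real address):

wget --mirror --convert-links --page-requisites https://example.org/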

Now I checked the man page of eepget and expected to see the same options there, but they are not. So I am wondering what the point is. Should wget not work properly over I2P? There currently is a static copy of an old wiki that has become unreachable but has important content. The page says it will be made unavailable soon, so I thought I'd give it a try and see if I can get it mirrored.

Now I already have my wget running over my Privoxy proxy anyway, but until now I have only used that in conjunction with Tor. That's actually my default way to download stuff like videos. I do that with a wrapper script that defines the proxies like this:

#!/bin/bash

# define proxies to use (local Privoxy instance) ...
export https_proxy="http://192.168.2.1:8118"
export http_proxy="http://192.168.2.1:8118"

# quote the URL argument so special characters don't break the call
wget "$1"

I usually have quite a few more options in there, but it also works in this simple form. So I went and tried to download just the main page of that mirrored wiki by giving this the b32 address, and it worked. So I took the mirroring version of my script (roughly the sketch below) and gave it the address of the wiki. It's still running now and I haven't verified that the conversion of the links works out, but the directory is getting filled in a promising way. Will update this later with a report on the results.
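
For the record, the mirroring version boils down to something like this (same Privoxy address as above; the placeholder argument is the wiki's b32 address in the actual run):

#!/bin/bash

# route all traffic through the local Privoxy instance
export https_proxy="http://192.168.2.1:8118"
export http_proxy="http://192.168.2.1:8118"

# mirror the whole site and rewrite the links for offline use
wget --mirror --convert-links --page-requisites "$1"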

From reading the man page of eepget I don't get what it's needed for, though. But maybe I'll find out some day ;)

Update

About an hour later it was done with it. Looked like this:

[...]
Converted 334 files in 3,0 seconds.

real    41m13.039s
user    0m0.508s
sys     0m0.312s
user@host:~/eepgettest# du -hs
7,0M    .

I opened index.html in the browser and could indeed surf around the site with only the local copy being used. It seems relative links are used, which should make it easy to put that site “online” again. Will try that someday …
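
A quick way to test that would probably be to serve the directory with any static file server, e.g. with Python 3 (assuming it is installed):

cd ~/eepgettest
python3 -m http.server 8080

Then the mirrored wiki should show up at http://localhost:8080/. Making it reachable again within I2P would of course be a separate step.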