LIST of MIRRORS
GOALS of MIRRORING
The technical goal of mirroring is to give users faster, more redundant
access to the content of our journals.
As the success of a journal increases, the bandwidth
required of its servers must also increase.
Mirroring is the scalable answer to this problem:
mirroring capacity is likely to grow in proportion to the
success of the journals, and it therefore enables
free online journals to prosper.
Mirroring is not just a practical convenience;
it also allows people with poor Internet connections to
enjoy part of the treasure of knowledge offered by the Web.
For commercial journals, mirroring
amounts to a theft of intellectual property
(which, by the way, is given to them freely by authors).
In contrast, we authorize and encourage mirroring of our journals,
without the need for any formal authorization.
The legalese is not yet finalized, but
it is our intent that our publication contents follow
a "copyleft" policy or the GPL licence.
You are therefore free to mirror and duplicate our journals,
but you are not allowed to sell their contents.
Since the overall size (about 30 MByte) is not very big
compared to the average size of current hard disks, and
in order to simplify everybody's task, mirroring
is performed on the whole tree of the MDPI site.
This includes the presentation of the MDPI foundation and its
three current journals.
Our site has been built purposely as a site of static HTML pages and
permanent PDF documents (no PHP, no database),
so that mirroring is technically feasible.
We are committed to keeping a static file structure,
even as we plan to add an internal search engine.
We encourage two types of mirroring:
- Institutional Mirroring : Institutions may help
not only their own members, but also neighbouring scientists,
to have faster and more reliable access
to MDPI journals. For institutions, this is a tradeoff:
they save bandwidth on the inbound traffic of their own members,
while serving more outbound traffic to outside readers.
One positive aspect is that sites supporting mirrors become more
visited and better known.
We are going to maintain a list of supporting
institutional mirror sites, which will be presented
in an extremely visible fashion on the
welcome pages of each journal, so that all MDPI readers can access the nearest
site.
- Personal Mirroring : With hard disks becoming larger
and cheaper, it is no longer unreasonable to set up your
own personal mirror, with all the information at your fingertips!
An automated procedure, running at night, keeps your personal mirror
always up to date (an example is sketched in the UNIX TOOLS section
below). This is extremely convenient.
You may keep this mirror to yourself, or open it to your colleagues;
you may do as you wish!
Of course, all readers, and more specifically librarians, are encouraged to burn
a CD of their own mirrors, both for archival and convenience purposes.
With the success of its journals, MDPI has decided to operate
a dedicated server machine at the University of Basel
hosting the .net extension, in addition to a
commercial web site hosting the .org extension.
This machine has now been
fully operational for more than one year without a major glitch.
Please use the address http://www.mdpi.net
for mirroring. (You may keep the numeric IP address
131.152.105.26 in your
current scripts, if you wish.)
Do not use the URL http://www.mdpi.org for mirroring, since
you will not be able to retrieve the whole site from it;
use http://www.mdpi.net instead.
UNIX TOOLS
If you are using a Unix operating system, such as the now popular
Linux system, the mirroring
procedure is very easy.
Just type the following command:
wget -v -m -l13 -L http://www.mdpi.net
Explanations:
-m --mirror
Turn on mirroring options. This will set recursion and time-stamping, combining -r
and -N.
-r --recursive
Recursive web-suck. According to the protocol of the URL, this can mean two things.
Recursive retrieval of a HTTP URL means that Wget will download the URL you want,
parse it as an HTML document (if an HTML document it is), and retrieve the files
this document is referring to, down to a certain depth (default 5; change it with
-l). Wget will create a hierarchy of directories locally, corresponding to the one
found on the HTTP server.
-N --timestamping
Use the so-called time-stamps to determine whether to retrieve a file. If the
last-modification date of the remote file is equal to or older than that of the
local file, and the sizes of the files are equal, the remote file will not be
retrieved. This option is useful for weekly mirroring of HTTP or FTP sites, since
it will not permit downloading of the same file twice.
-l depth --level=depth
Set recursion depth level to the specified level. Default is 5. After the given
recursion level is reached, the sucking will proceed from the parent. Thus
specifying -r -l1 should equal a recursion-less retrieve from file. Setting the
level to zero makes recursion depth (theoretically) unlimited.
-L --relative
Follow only relative links. Useful for retrieving a specific homepage without any
distractions, not even those from the same host.
Interesting options in case of a poor connection:
-nc Do not clobber existing files when saving to directory hierarchy within recursive
retrieval of several files. This option is extremely useful when you wish to
continue where you left off with retrieval. If the files are .html or (yuck) .htm,
they will be loaded from the disk and parsed as if they had been retrieved from
the Web.
-t num --tries=num
Set number of retries to num. Specify 0 for infinite retrying.
Other interesting options:
-v --verbose
Verbose output, with all the available data. The default output consists only of
saving updates and error messages. If the output is stdout, verbose is default.
-o logfile --output-file=logfile
Log messages to logfile, instead of the default stdout. Verbose output is the
default when logging to a file. If you do not wish it, use -nv (non-verbose).
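If your connection is poor, the options above can be combined for the
initial download. A minimal sketch, with an assumed logfile name mdpi.log;
note that -nc cannot be combined with the time-stamping that -m implies,
so a resumable first pass uses plain -r instead of -m:

# Resumable first download: keep already-fetched files (-nc),
# retry forever on failures (-t 0), log progress to mdpi.log (-o).
wget -r -l13 -L -nc -t 0 -o mdpi.log http://www.mdpi.net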
At the directory level where you typed the command,
wget will create a subdirectory named after the host
(www.mdpi.net, or 131.152.105.26 if you used the numeric address in the URL),
containing all the files of the site.
It is that simple!
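To automate the nightly update mentioned under Personal Mirroring, the same
command can be scheduled with cron. A minimal sketch of a crontab entry
(installed with crontab -e); the mirror directory and logfile name here are
assumptions, adapt them to your setup:

# Refresh the mirror every night at 03:00; -nv keeps the logfile short.
0 3 * * * cd $HOME/mirrors && wget -m -l13 -L -nv -o mdpi-mirror.log http://www.mdpi.net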
We encourage the use of free Unix operating systems such as Linux or FreeBSD,
since this agrees very well with the spirit of our free online journals.
Most Linux distributions include wget by default; if you
do not have it, you may download it from the following sites:
More links
TOOLS for WINDOWS
Not many WWW servers run Windows, but a lot of
scientists are using the Windows operating system.
A wget version for Windows is freely available
at the following sites:
Otherwise, you may use one of the many commercial "sucker" or "ripper"
programs:
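Whichever wget build you install, the command line is the same as under
Unix. A sketch, assuming wget.exe is on your PATH, typed at a command
prompt:

wget -v -m -l13 -L http://www.mdpi.net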