How to use: First, we need to get all the pages relevant to what we're looking for. Let's suppose we want to get all the pages from Danbooru. We use this command line:

dan-pages danbooru.donmai.us

To get only the first 10 pages of the top level we use:

dan-pages danbooru.donmai.us _null_ 10

The first argument is the server (other examples: nekobooru.net, konachan.com). The second argument is the tag; _null_ tells the program not to use any tags. The third argument is the number of pages. The program can automatically detect when there are no more pages.

Examples:

Download all files tagged with tohno_akiha:
dan-pages danbooru.donmai.us tohno_akiha

Download all files tagged with tohno_akiha and highres:
dan-pages danbooru.donmai.us tohno_akiha+highres
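For anyone curious what this step amounts to (I don't have the dan-pages source in front of me, so this is only a rough sketch): it boils down to walking the listing pages one by one until an empty page comes back or the page limit is hit. The /post/index?page=N&tags=... URL pattern and the "no more posts" check are my assumptions about the site layout, not necessarily what dan-pages does internally. A minimal Python equivalent:

import urllib.request

def fetch_pages(server, tag=None, max_pages=None):
    # Walk the booru listing pages and save each one to pageNNNN.html.
    # The URL pattern below is an assumption, not dan-pages' actual internals.
    page = 1
    while max_pages is None or page <= max_pages:
        url = "http://%s/post/index?page=%d&tags=%s" % (server, page, tag or "")
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        if "/data/" not in html:   # crude "no more posts" check (also an assumption)
            break
        with open("page%04d.html" % page, "w", encoding="utf-8") as f:
            f.write(html)
        page += 1

# Roughly what "dan-pages danbooru.donmai.us tohno_akiha" does:
fetch_pages("danbooru.donmai.us", "tohno_akiha")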
Once this is done, run dan-extract to extract the URLs from the pages. You can pass it the number of pages you want to extract from, but it can detect them on its own anyway. The program generates urls.txt, which contains all the URLs that could be found (ideal to use with wget), and urls####.txt files, which contain the same list split every 1000 files (some graphical download managers, e.g. FlashGet, become rather sluggish when adding too many files at once).
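If you want to roll your own extractor instead, this is roughly the idea. Again just a sketch: the regex is a guess at the page markup, not the pattern dan-extract actually matches, and the output naming only mimics the urls####.txt scheme described above.

import glob, re

urls, seen = [], set()
for path in sorted(glob.glob("page*.html")):
    html = open(path, encoding="utf-8", errors="replace").read()
    # Grab anything that looks like a full-size image URL (assumed markup).
    for u in re.findall(r'https?://[^"\'<> ]+/data/[^"\'<> ]+\.(?:jpg|jpeg|png|gif)', html):
        if u not in seen:
            seen.add(u)
            urls.append(u)

# Full list, handy for wget -i urls.txt
with open("urls.txt", "w") as f:
    f.write("\n".join(urls) + "\n")

# Split into chunks of 1000 so download managers like FlashGet stay responsive.
for i in range(0, len(urls), 1000):
    with open("urls%04d.txt" % (i // 1000 + 1), "w") as f:
        f.write("\n".join(urls[i:i + 1000]) + "\n")

From there, wget -i urls.txt pulls everything down in one go.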
Enjoy.