Web Crawler For Linux
Do you use Linux as your desktop?
Do you want to index a website for offline perusal?
It's really quite easy.
On the command line, issue:
wget -rk -l 0 https://web-address-to-crawl.com
If the web address is password protected, add your credentials:
wget -rk -l 0 --user=username --password=password https://web-address-to-crawl.com
Note that a password given on the command line is visible in your shell history and in the process list; wget's --ask-password option prompts for it interactively instead.
It's that simple. This will save the website and all of its resources to the current working directory, in a folder named after the host (here, web-address-to-crawl.com). The -r flag makes the retrieval recursive, -k converts the links so the local copy browses correctly offline, and -l 0 removes the recursion depth limit.
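As a sketch, here is the same crawl written with wget's long-form options so each flag documents itself. The URL is the placeholder from the article; --page-requisites and --adjust-extension are two additional real wget flags (not in the command above) that are commonly useful for offline mirrors. The command is printed rather than executed, so you can review it before running it.

```shell
#!/bin/sh
# Placeholder URL from the article; replace with the site you want to archive.
URL="https://web-address-to-crawl.com"

# --recursive        : follow links, downloading the whole site      (-r)
# --level=0          : place no limit on recursion depth             (-l 0)
# --convert-links    : rewrite links so the local copy works offline (-k)
# --page-requisites  : also fetch the images, CSS, and scripts pages need
# --adjust-extension : save HTML pages with a .html suffix
CMD="wget --recursive --level=0 --convert-links --page-requisites --adjust-extension $URL"

# Print the command for review instead of running it.
echo "$CMD"
```

Paste the printed command back into your terminal (or drop the echo) once you are happy with it.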
If you are a Windows user, visit this link.