"scrawler" = "scraper" + "crawler"
Provides functionality for the automatic collection of website data
(web scraping) and
following links to map an entire domain
(crawling). It can
handle these tasks individually, or process several websites/domains in
parallel using asyncio
and multithreading
.
This project was initially developed while working at the Fraunhofer Institute for Systems and Innovation Research. Many thanks for the opportunity and support!
You can install scrawler from PyPI:
```
pip install scrawler
```
Note: Alternatively, you can find the .whl and .tar.gz files for each release on GitHub.
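The parallel processing mentioned above follows a common asyncio-plus-thread-pool pattern. The sketch below illustrates that general pattern using only the Python standard library; it is not scrawler's API, and the URL list, `fetch` helper, and worker count are made-up examples. See the Getting Started Guide for the library's actual interface.

```python
# Illustration of the asyncio + thread-pool pattern for fetching several
# sites concurrently. This is NOT scrawler's API -- just the underlying idea.
import asyncio
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical example URLs
URLS = ["https://example.com", "https://www.python.org"]

def fetch(url: str) -> tuple[str, int]:
    # Blocking download; runs inside a worker thread.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return url, len(resp.read())

async def main() -> None:
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Schedule one blocking fetch per URL on the thread pool
        # and gather all results concurrently.
        tasks = [loop.run_in_executor(pool, fetch, url) for url in URLS]
        for url, size in await asyncio.gather(*tasks):
            print(f"{url}: {size} bytes")

asyncio.run(main())
```

Running the blocking downloads in a thread pool while asyncio coordinates the tasks lets many sites be fetched concurrently without rewriting the I/O layer around async sockets.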
Check out the Getting Started Guide.
Documentation is available at Read the Docs.