
spiro

Tornado Web Crawler (Distributed)

I really just wanted a "simple" web crawler: something that could fetch, say, 100,000 pages without breaking a sweat and save them to some storage (MongoDB or Riak). This is what I threw together.

Currently you are required to have MongoDB and Redis installed (the Riak store isn't complete). MongoDB is used both for the settings portion of the UI and for storing pages after they're crawled.
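
For reference, storing a crawled page in MongoDB boils down to a single document insert. This is only a sketch, not spiro's actual code; the database/collection names (`spiro`, `pages`) and the document fields are assumptions, not the crawler's real schema.

```python
# Sketch: persisting a fetched page into MongoDB with pymongo.
# Database/collection names and document fields here are illustrative only.
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient()            # assumes a local mongod on the default port
pages = client["spiro"]["pages"]  # hypothetical database/collection names

def save_page(url, status_code, body):
    """Insert one crawled page as a document."""
    pages.insert_one({
        "url": url,
        "status": status_code,
        "body": body,
        "fetched_at": datetime.now(timezone.utc),
    })
```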

Alpha - This is a work in progress; the goal is to add functionality based on people's real usage. The core of the crawler - robots.txt parsing, delays, and other "friendly" factors - should all work just fine.
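
To illustrate the "friendly" side, robots.txt parsing and crawl-delay checks in Python usually look something like the sketch below. This is the standard-library approach rather than spiro's own implementation, and the user agent string is made up.

```python
# Sketch: standard-library robots.txt parsing and crawl-delay lookup.
# Not spiro's implementation -- just the general shape of a "friendly" fetch check.
from urllib.robotparser import RobotFileParser

USER_AGENT = "spiro-bot"  # hypothetical user agent string

def allowed_and_delay(site_root, path):
    """Return (can_fetch, crawl_delay_seconds) for a path on a site."""
    rp = RobotFileParser()
    rp.set_url(site_root + "/robots.txt")
    rp.read()  # fetches and parses robots.txt
    delay = rp.crawl_delay(USER_AGENT) or 1.0  # fall back to a polite default
    return rp.can_fetch(USER_AGENT, site_root + path), delay
```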

Usage

It runs like a standard Tornado app:

./main.py --debug --port=8000

Point your web browser at it (http://localhost:8000 with the command above).
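
The --debug and --port flags suggest the entry point is wired up with Tornado's option parsing, roughly as in the sketch below. The handler and routes are placeholders, not spiro's actual code.

```python
#!/usr/bin/env python
# Sketch: how a Tornado entry point with --debug/--port flags is typically wired.
# Handler names and routes here are placeholders, not spiro's real ones.
import tornado.ioloop
import tornado.web
from tornado.options import define, options, parse_command_line

define("port", default=8000, type=int, help="HTTP port to listen on")
define("debug", default=False, type=bool, help="enable Tornado debug mode")

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("spiro crawler UI would be served here")

if __name__ == "__main__":
    parse_command_line()
    app = tornado.web.Application([(r"/", MainHandler)], debug=options.debug)
    app.listen(options.port)
    tornado.ioloop.IOLoop.current().start()
```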

Example map process on the crawled data (scan pages in MongoDB and do "something")

python -m tools.map 
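
The map tool presumably iterates over the stored pages and applies some function to each. A minimal sketch of that pattern, reusing the same hypothetical spiro/pages collection as above:

```python
# Sketch: iterating over crawled pages in MongoDB and applying a function to each.
# Collection name and fields are assumptions carried over from the storage sketch.
from pymongo import MongoClient

def map_pages(fn):
    """Run fn(page_document) over every stored page."""
    pages = MongoClient()["spiro"]["pages"]  # hypothetical collection
    for doc in pages.find():
        fn(doc)

if __name__ == "__main__":
    # Example "something": print each page's URL and body size.
    map_pages(lambda doc: print(doc.get("url"), len(doc.get("body", ""))))
```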

Basic Design

Much of the design is inspired by these blog posts:

MongoDB is used to store basic settings (crawler on/off, allowed domains, etc.). Most of the crawler processing is managed via a Redis-based queue, which is sliced and locked by domain name; once a specific instance has claimed a domain's queue, it will crawl it as needed.
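
A rough sketch of what per-domain slicing and locking in Redis can look like, under assumed key names (`lock:<domain>`, `queue:<domain>`) that are not necessarily the ones spiro uses:

```python
# Sketch: per-domain queue claiming with a Redis lock.
# Key names, lock TTL, and the fetcher id are illustrative assumptions.
import redis

r = redis.Redis()  # assumes a local Redis on the default port

def claim_domain(domain, fetcher_id, ttl=60):
    """Try to take the lock on a domain's queue; True if this fetcher now owns it."""
    return bool(r.set("lock:%s" % domain, fetcher_id, nx=True, ex=ttl))

def next_url(domain):
    """Pop the next URL from the domain's queue (None if empty)."""
    raw = r.lpop("queue:%s" % domain)
    return raw.decode() if raw else None
```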

What I'm trying to avoid is the way [1] uses Gearman to spawn jobs; instead, Redis controls what's happening via locks, which lets the fetchers be self-sufficient. If link finding is enabled, all links that are found are re-inserted into Redis, via a priority-sorted list, to be crawled at a later point.
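
That priority-sorted re-insertion maps naturally onto a Redis sorted set, along the lines of the sketch below; the key name and the scoring convention are assumptions, not spiro's actual ones.

```python
# Sketch: re-inserting discovered links into a priority-sorted Redis structure.
# The key name and "lower score = higher priority" convention are assumptions.
import redis

r = redis.Redis()

def enqueue_link(url, priority):
    """Add a found link with a priority score (lower = fetched sooner)."""
    r.zadd("frontier", {url: priority})

def pop_next_link():
    """Take the highest-priority link off the frontier, or None if empty."""
    popped = r.zpopmin("frontier", count=1)  # requires Redis >= 5.0
    return popped[0][0].decode() if popped else None
```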

Other References

Technologies Used