# Spider

Multithreaded web crawler/indexer written in Rust.

## Dependencies

On Linux, you may need the OpenSSL development headers installed, since the underlying HTTP client links against OpenSSL.

## Example

This is a basic blocking example crawling a web page. Add spider to your `Cargo.toml`:

```toml
[dependencies]
spider = "1.10.9"
```

And then the code:

```rust,no_run
extern crate spider;

use spider::website::Website;

fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    website.crawl();

    for page in website.get_pages() {
        println!("- {}", page.get_url());
    }
}
```
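If you also need the fetched document and not just its URL, each `Page` keeps the raw HTML body. A minimal sketch, assuming a `get_html` accessor on `Page` in this version:

```rust,no_run
extern crate spider;

use spider::website::Website;

fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    website.crawl();

    // Report the size of each fetched HTML document alongside its URL.
    for page in website.get_pages() {
        println!("- {} ({} bytes)", page.get_url(), page.get_html().len());
    }
}
```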

You can use the `Configuration` object to configure your crawler:

```rust
// ..
let mut website: Website = Website::new("https://choosealicense.com");
website.configuration.blacklist_url.push("https://choosealicense.com/licenses/".to_string());
website.configuration.respect_robots_txt = true;
website.configuration.subdomains = true;
website.configuration.delay = 2000; // Defaults to 250 ms
website.configuration.concurrency = 10; // Defaults to number of cpus available * 4
website.configuration.user_agent = "myapp/version".to_string(); // Defaults to spider/x.y.z, where x.y.z is the library version
website.on_link_find_callback = |s| { println!("link target: {}", s); s }; // Callback to run on each link found

website.crawl();
```
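The link callback must hand back the URL to crawl, so it can also rewrite links before they are queued. A short sketch, assuming the field accepts any plain `fn(String) -> String`; the `normalize` helper below is hypothetical and not part of the crate:

```rust,no_run
extern crate spider;

use spider::website::Website;

// Hypothetical helper: log each discovered link and strip a trailing
// slash before it is queued for crawling.
fn normalize(link: String) -> String {
    println!("link target: {}", link);
    link.trim_end_matches('/').to_string()
}

fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    website.on_link_find_callback = normalize;
    website.crawl();
}
```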

## Regex Blacklisting

There is an optional "regex" feature that can be enabled:

```toml
[dependencies]
spider = { version = "1.10.9", features = ["regex"] }
```

```rust,no_run
extern crate spider;

use spider::website::Website;

fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    website.configuration.blacklist_url.push("/licenses/".to_string());
    website.crawl();

    for page in website.get_pages() {
        println!("- {}", page.get_url());
    }
}
```
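With the feature enabled, blacklist entries behave as regular expressions rather than literal URL prefixes, so a single pattern can exclude several routes at once. A sketch under that assumption; the pattern below is illustrative only:

```rust,no_run
extern crate spider;

use spider::website::Website;

fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    // Assumed regex semantics: skip any path matching /licenses or /about.
    website.configuration.blacklist_url.push("/(licenses|about)".to_string());
    website.crawl();

    for page in website.get_pages() {
        println!("- {}", page.get_url());
    }
}
```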

## Features

Currently we have two optional feature flags: regex blacklisting and randomized User-Agents.

```toml
[dependencies]
spider = { version = "1.10.9", features = ["regex", "ua_generator"] }
```
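As a sketch of the latter, the assumption is that with `ua_generator` enabled, leaving `configuration.user_agent` unset sends a randomized User-Agent instead of the spider/x.y.z default:

```rust,no_run
extern crate spider;

use spider::website::Website;

fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    // Assumption: with the `ua_generator` feature, an unset user_agent is
    // replaced by a randomized, realistic User-Agent string per crawler.
    website.crawl();
}
```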