# Spider

Multithreaded web crawler/indexer using isolates and IPC channels for communication.

## Dependencies

On Linux

## Example

This is a basic example that crawls a web page and blocks until the crawl completes. Add spider to your `Cargo.toml`:

```toml
[dependencies]
spider = "1.19"
```

And then the code:

```rust,no_run
extern crate spider;

use spider::website::Website;
use spider::tokio;

#[tokio::main]
async fn main() {
    let url = "https://choosealicense.com";
    let mut website: Website = Website::new(&url);
    website.crawl().await;

    for link in website.get_links() {
        println!("- {:?}", link.as_ref());
    }
}
```

You can use the `Configuration` object to configure your crawler:

```rust
// ..
let mut website: Website = Website::new("https://choosealicense.com");

website.configuration.blacklist_url.push("https://choosealicense.com/licenses/".to_string());
website.configuration.respect_robots_txt = true;
website.configuration.subdomains = true;
website.configuration.tld = false;
website.configuration.delay = 0; // Defaults to 0 ms due to concurrency handling
website.configuration.request_timeout = None; // Defaults to 15000 ms
website.configuration.channel_buffer = 100; // Defaults to 50 - tune this depending on on_link_find_callback
website.configuration.user_agent = "myapp/version".to_string(); // Defaults to spider/x.y.z, where x.y.z is the library version
website.on_link_find_callback = Some(|s| { println!("link target: {}", s); s }); // Callback to run on each link find

website.crawl().await;
```

## Regex Blacklisting

There is an optional "regex" feature that can be enabled:

```toml
[dependencies]
spider = { version = "1.22.8", features = ["regex"] }
```

```rust,no_run
extern crate spider;

use spider::website::Website;
use spider::tokio;

#[tokio::main]
async fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    website.configuration.blacklist_url.push("/licenses/".to_string());
    website.crawl().await;

    for link in website.get_links() {
        println!("- {:?}", link.as_ref());
    }
}
```

## Features

Currently we have four optional feature flags: regex blacklisting, the jemalloc backend, decentralization, and randomized User-Agents.

```toml
[dependencies]
spider = { version = "1.22.8", features = ["regex", "ua_generator"] }
```

Jemalloc performs better under concurrency and releases memory back to the system more readily.

This changes the program's global allocator, so test accordingly to measure the impact.

```toml
[dependencies]
spider = { version = "1.22.8", features = ["jemalloc"] }
```

## Blocking

If you need a blocking synchronous implementation, use a version prior to v1.12.0.
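
For example, a minimal sketch pinning the dependency to an older line (assuming the 1.11 series fits your needs; any release before 1.12.0 keeps the blocking API):

```toml
[dependencies]
# Hypothetical pin to a pre-1.12.0 release for the blocking/sync implementation.
spider = "1.11"
```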

## Pause, Resume, and Shutdown

If you are performing large workloads, you may need to control the crawler using the following:

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;
use std::time::Duration;

#[tokio::main]
async fn main() {
    use spider::utils::{pause, resume};
    let url = "https://choosealicense.com/";
    let mut website: Website = Website::new(&url);

    tokio::spawn(async move {
        pause(url).await;
        // wait 5 seconds before resuming the crawl
        tokio::time::sleep(Duration::from_millis(5000)).await;
        resume(url).await;
    });

    website.crawl().await;
}
```

### Shutdown crawls

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;
use std::time::Duration;

#[tokio::main]
async fn main() {
    use spider::utils::shutdown;
    let url = "https://choosealicense.com/";
    let mut website: Website = Website::new(&url);

    tokio::spawn(async move {
        // really long crawl force shutdown ( 30 is a long time for most websites )
        tokio::time::sleep(Duration::from_secs(30)).await;
        shutdown(url).await;
    });

    website.crawl().await;
}
```

## Scrape/Gather HTML

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    use std::io::{Write, stdout};

    let url = "https://choosealicense.com/";
    let mut website: Website = Website::new(&url);

    website.scrape().await;

    let mut lock = stdout().lock();

    let separator = "-".repeat(url.len());

    for page in website.get_pages() {
        writeln!(
            lock,
            "{}\n{}\n\n{}\n\n{}",
            separator,
            page.get_url(),
            page.get_html(),
            separator
        )
        .unwrap();
    }
}
```

## Decentralized [Experimental]

  1. Install the worker: `cargo install spider_worker`.
  2. Start the worker: `spider_worker`.
  3. Run the example against it: `SPIDER_WORKER=http://127.0.0.1:3030 cargo run --example example --features decentralized`

Use the `SPIDER_WORKER` env variable to point the crawler at a spider worker, or at a load balancer in front of several workers. The proxy needs to match the transport type for the request to be fulfilled. Support for handling http and https transparent proxies is a work in progress.
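
To use the decentralized transport from your own crate rather than the bundled example, the feature flag from step 3 is enabled like the other flags; a minimal sketch using the same version as the examples above:

```toml
[dependencies]
# Enables the decentralized transport; set SPIDER_WORKER at runtime to your worker or load balancer address.
spider = { version = "1.22.8", features = ["decentralized"] }
```

Then run your binary with the worker address set, e.g. `SPIDER_WORKER=http://127.0.0.1:3030 cargo run` (the address shown is the local worker started in step 2).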