Spider


A multithreaded async crawler/indexer that uses isolates and IPC channels for communication, with the ability to run decentralized.

Dependencies

On Linux

Example

This is a basic async example crawling a web page. Add spider to your Cargo.toml:

```toml
[dependencies]
spider = "1.40.5"
```

And then the code:

```rust,no_run
extern crate spider;

use spider::website::Website;
use spider::tokio;

#[tokio::main]
async fn main() {
    let url = "https://choosealicense.com";
    let mut website: Website = Website::new(&url);
    website.crawl().await;

    for link in website.get_links() {
        println!("- {:?}", link.as_ref());
    }
}
```

You can use the `Configuration` object to configure your crawler:

```rust
// ..
let mut website: Website = Website::new("https://choosealicense.com");

website.configuration.respect_robots_txt = true;
website.configuration.subdomains = true;
website.configuration.tld = false;
website.configuration.delay = 0; // Defaults to 0 ms due to concurrency handling
website.configuration.request_timeout = None; // Defaults to 15000 ms
website.configuration.http2_prior_knowledge = false; // Enable if you know the webserver supports http2
website.configuration.user_agent = Some("myapp/version".into()); // Defaults to using a random agent
website.on_link_find_callback = Some(|s, html| { println!("link target: {}", s); (s, html) }); // Callback to run on each link find
website.configuration.blacklist_url.get_or_insert(Default::default()).push("https://choosealicense.com/licenses/".into());
website.configuration.proxies.get_or_insert(Default::default()).push("socks5://10.1.1.1:12345".into()); // Defaults to none - proxy list.

website.crawl().await;
```

The builder pattern is also available in v1.33.0 and up:

```rust
let mut website = Website::new("https://choosealicense.com");

website
    .with_respect_robots_txt(true)
    .with_subdomains(true)
    .with_tld(false)
    .with_delay(0)
    .with_request_timeout(None)
    .with_http2_prior_knowledge(false)
    .with_user_agent(Some("myapp/version".into()))
    .with_on_link_find_callback(Some(|link, html| { println!("link target: {}", link.inner()); (link, html) }))
    .with_headers(None)
    .with_blacklist_url(Some(Vec::from(["https://choosealicense.com/licenses/".into()])))
    .with_proxies(None);
```

Features

We have a few optional feature flags: regex blacklisting, the jemalloc backend, globbing, fs temp storage, decentralization, serde, gathering full assets, and randomizing user agents.

```toml
[dependencies]
spider = { version = "1.40.5", features = ["regex", "ua_generator"] }
```

  1. ua_generator: Enables auto generating a random real User-Agent.
  2. regex: Enables blacklisting paths with regex.
  3. jemalloc: Enables the jemalloc memory backend.
  4. decentralized: Enables decentralized processing of IO, requires the spider_worker to be started before crawls.
  5. sync: Enables subscribing to changes for async Page data processing.
  6. control: Enables the ability to pause, start, and shutdown crawls on demand.
  7. full_resources: Enables gathering all content that relates to the domain, like CSS, JS, etc.
  8. serde: Enables serde serialization support.
  9. socks: Enables socks5 proxy support.
  10. glob: Enables url glob support.
  11. fs: Enables storing resources to disk for parsing (may greatly increase performance at the cost of temp storage). Enabled by default.
  12. js: Enables javascript parsing links created with the alpha jsdom crate.
  13. time: Enables duration tracking per page.
  14. chrome: Enables chrome headless rendering [experimental].
  15. chrome_headed: Enables chrome headful rendering [experimental].
  16. chrome_cpu: Disables GPU usage for the chrome browser.
  17. chrome_stealth: Enables stealth mode to make it harder to be detected as a bot.
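
Since fs is enabled by default, turning it off means opting out of default features in Cargo.toml. A minimal sketch using standard Cargo syntax (the extra feature shown is illustrative):

```toml
[dependencies]
spider = { version = "1.40.5", default-features = false, features = ["regex"] }
```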

Decentralization

Moving processing to a worker drastically increases performance, even if the worker is on the same machine, due to the efficient runtime split of IO work.

```toml
[dependencies]
spider = { version = "1.40.5", features = ["decentralized"] }
```

```sh
# install the worker
cargo install spider_worker

# start the worker [set the worker on another machine in prod]
RUST_LOG=info SPIDER_WORKER_PORT=3030 spider_worker

# start rust project as normal with SPIDER_WORKER env variable
SPIDER_WORKER=http://127.0.0.1:3030 cargo run --example example --features decentralized
```

The SPIDER_WORKER env variable takes a comma separated list of urls to set the workers. If the scrape feature flag is enabled, use the SPIDER_WORKER_SCRAPER env variable to determine the scraper worker.
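
For example, spreading a crawl across two workers might look like this (both hosts are hypothetical; this is a config fragment, not a complete setup):

```sh
# comma separated list of worker urls
export SPIDER_WORKER="http://127.0.0.1:3030,http://192.168.1.5:3030"

# the crawl command itself is unchanged
cargo run --example example --features decentralized
```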

Subscribe to changes

Use the subscribe method to get a broadcast channel.

```toml
[dependencies]
spider = { version = "1.40.5", features = ["sync"] }
```

```rust,no_run
extern crate spider;

use spider::website::Website;
use spider::tokio;

#[tokio::main]
async fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    let mut rx2 = website.subscribe(16).unwrap();

    let join_handle = tokio::spawn(async move {
        while let Ok(res) = rx2.recv().await {
            println!("{:?}", res.get_url());
        }
    });

    website.crawl().await;
}
```

Regex Blacklisting

Allow regex for blacklisting routes:

```toml
[dependencies]
spider = { version = "1.40.5", features = ["regex"] }
```

```rust,no_run
extern crate spider;

use spider::website::Website;
use spider::tokio;

#[tokio::main]
async fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    website.configuration.blacklist_url.push("/licenses/".into());
    website.crawl().await;

    for link in website.get_links() {
        println!("- {:?}", link.as_ref());
    }
}
```

Pause, Resume, and Shutdown

If you are performing large workloads you may need to control the crawler by enabling the control feature flag:

```toml
[dependencies]
spider = { version = "1.40.5", features = ["control"] }
```

```rust
extern crate spider;

use spider::tokio;
use spider::tokio::time::{sleep, Duration};
use spider::website::Website;

#[tokio::main]
async fn main() {
    use spider::utils::{pause, resume, shutdown};
    let url = "https://choosealicense.com/";
    let mut website: Website = Website::new(&url);

    tokio::spawn(async move {
        pause(url).await;
        sleep(Duration::from_millis(5000)).await;
        resume(url).await;
        // perform shutdown if crawl takes longer than 15s
        sleep(Duration::from_millis(15000)).await;
        shutdown(url).await;
    });

    website.crawl().await;
}
```

Scrape/Gather HTML

```rust
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    use std::io::{Write, stdout};

    let url = "https://choosealicense.com/";
    let mut website: Website = Website::new(&url);

    website.scrape().await;

    let mut lock = stdout().lock();

    let separator = "-".repeat(url.len());

    for page in website.get_pages().unwrap() {
        writeln!(
            lock,
            "{}\n{}\n\n{}\n\n{}",
            separator,
            page.get_url(),
            page.get_html(),
            separator
        )
        .unwrap();
    }
}
```


Blocking

If you need a blocking sync implementation, use a version prior to v1.12.0.
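
A version requirement like the following keeps Cargo below the async rewrite without asserting a specific patch release (check crates.io for the exact last pre-1.12 version):

```toml
[dependencies]
spider = "<1.12.0"
```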