Multithreaded web crawler/indexer using isolates and IPC channels for communication.
This is a basic async example crawling a web page; add spider to your Cargo.toml:
```toml
[dependencies]
spider = "1.19.4"
```
And then the code:
```rust,no_run
extern crate spider;

use spider::website::Website;
use spider::tokio;

#[tokio::main]
async fn main() {
    let url = "https://choosealicense.com";
    let mut website: Website = Website::new(&url);
    website.crawl().await;

    for page in website.get_pages() {
        println!("- {}", page.get_url());
    }
}
```
You can use the `Configuration` object to configure your crawler:
```rust
// ..
let mut website: Website = Website::new("https://choosealicense.com");
website.configuration.blacklist_url.push("https://choosealicense.com/licenses/".to_string());
website.configuration.respect_robots_txt = true;
website.configuration.subdomains = true;
website.configuration.tld = false;
website.configuration.delay = 0; // Defaults to 0 ms due to concurrency handling
website.configuration.channel_buffer = 100; // Defaults to 50 - tune this depending on on_link_find_callback
website.configuration.user_agent = "myapp/version".to_string(); // Defaults to spider/x.y.z, where x.y.z is the library version
website.on_link_find_callback = |s| { println!("link target: {}", s); s }; // Callback to run on each link find

website.crawl().await;
```
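The link-find callback is just a function from the discovered link back to the link that gets queued, so it can log, rewrite, or pass links through unchanged. A minimal stdlib-only sketch of that shape (the URLs below are purely illustrative):

```rust
fn main() {
    // Same shape as the on_link_find_callback above: receive the found
    // link, optionally log or rewrite it, and return what to queue.
    let on_link_find_callback = |s: String| {
        println!("link target: {}", s);
        s
    };

    let found = vec![
        "https://choosealicense.com/licenses/mit/".to_string(),
        "https://choosealicense.com/licenses/apache-2.0/".to_string(),
    ];

    // Each discovered link passes through the callback before it is queued.
    let queued: Vec<String> = found.into_iter().map(on_link_find_callback).collect();
    println!("queued {} links", queued.len());
}
```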
There is an optional "regex" crate that can be enabled:
```toml
[dependencies]
spider = { version = "1.19.4", features = ["regex"] }
```
```rust,no_run
extern crate spider;

use spider::website::Website;
use spider::tokio;

#[tokio::main]
async fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    website.configuration.blacklist_url.push("/licenses/".to_string());
    website.crawl().await;

    for page in website.get_pages() {
        println!("- {}", page.get_url());
    }
}
```
Currently there are three optional feature flags: regex blacklisting, the jemalloc backend, and randomized User-Agents.
```toml
[dependencies]
spider = { version = "1.19.4", features = ["regex", "ua_generator"] }
```
Jemalloc performs better under concurrency and releases memory back to the system more readily.
This changes the program's global allocator, so test accordingly to measure the impact.
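For context, swapping the global allocator is a single declaration in Rust; the jemalloc feature does the equivalent internally. The sketch below uses the stdlib `System` allocator as a stand-in, since jemalloc itself is an external crate:

```rust
use std::alloc::System;

// A #[global_allocator] declaration reroutes every heap allocation in
// the program; enabling the "jemalloc" feature performs a swap like this.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // All of these allocations go through the allocator declared above.
    let v: Vec<u64> = (0..1_000).collect();
    println!("sum: {}", v.iter().sum::<u64>());
}
```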
```toml
[dependencies]
spider = { version = "1.19.4", features = ["jemalloc"] }
```
If you need a blocking sync implementation, use a version prior to v1.12.0.
If you are performing large workloads, you may need to control the crawler using the following:
```rust
use spider::tokio;
use spider::website::Website;
use spider::utils::{pause, resume};
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let url = "https://choosealicense.com/";
    let mut website: Website = Website::new(&url);

    tokio::spawn(async move {
        pause(url).await;
        sleep(Duration::from_millis(5000)).await;
        resume(url).await;
    });

    website.crawl().await;
}
```
```rust
use spider::tokio;
use spider::website::Website;
use spider::utils::shutdown;
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let url = "https://choosealicense.com/";
    let mut website: Website = Website::new(&url);

    tokio::spawn(async move {
        // Force shutdown of a really long crawl (30s is a long time for most websites).
        sleep(Duration::from_secs(30)).await;
        shutdown(url).await;
    });

    website.crawl().await;
}
```