A high-level API for programmatically interacting with web pages through WebDriver.
This crate uses the [WebDriver protocol] to drive a conforming (potentially headless) browser through relatively high-level operations such as "click this element", "submit this form", etc.
Most interactions are driven by using [CSS selectors]. With most WebDriver-compatible browsers being fairly recent, the more expressive levels of the CSS standard are also supported, giving fairly powerful selector operators to work with.
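For instance, a minimal sketch of clicking an element matched by a compound attribute selector might look like this (the URL and selector below are purely illustrative, not taken from a real page):

```rust,no_run
use fantoccini::Client;

// a minimal sketch: click an element matched by a compound CSS selector
// (the URL and selector are illustrative)
let mut c = Client::new("http://localhost:4444").unwrap();
c.goto("https://example.com/").unwrap();
// attribute selectors, descendant combinators, and the like all work here
c.click("nav a[href='/about']").unwrap();
```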
Forms are managed by first calling Client::form, and then using the methods on Form to manipulate the form's fields and eventually submit it.
For low-level access to the page, Client::source can be used to fetch the full page HTML source code, and Client::raw_client_for to build a raw HTTP request for a particular URL.
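As a quick sketch of the former (assuming Client::source yields the page's HTML as a Result-wrapped String; the URL is illustrative), fetching and inspecting the source might look like this:

```rust,no_run
use fantoccini::Client;

// a minimal sketch: grab the raw HTML of the current page with Client::source
// (assumes source() returns Result<String, _>)
let mut c = Client::new("http://localhost:4444").unwrap();
c.goto("https://www.wikipedia.org/").unwrap();
let html = c.source().unwrap();
assert!(html.contains("<html"));
```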
These examples all assume that you have a [WebDriver compatible] process running on port 4444. A quick way to get one is to run [geckodriver] at the command line. The code also has partial support for the legacy WebDriver protocol used by chromedriver and ghostdriver.
The examples will be using `unwrap` generously; you should probably not do that in your code, and instead deal with errors when they occur. This is particularly true for methods that you expect might fail, such as lookups by CSS selector.
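For instance, a sketch of handling a failed lookup without unwrapping could look like the following (the selector is made up, the lookup is assumed to return a Result-wrapped Option as in the logo example further down, and the error type is assumed to implement Debug):

```rust,no_run
use fantoccini::Client;

// a minimal sketch: deal with a lookup that may fail instead of unwrapping
// (the selector is illustrative and probably matches nothing)
let mut c = Client::new("http://localhost:4444").unwrap();
c.goto("https://www.wikipedia.org/").unwrap();
match c.lookup_attr("#no-such-element", "href") {
    Ok(Some(href)) => println!("found link target: {}", href),
    Ok(None) => println!("element exists, but has no href attribute"),
    Err(e) => eprintln!("lookup failed: {:?}", e),
}
```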
Let's start out clicking around on Wikipedia:
```rust,no_run
let mut c = Client::new("http://localhost:4444").unwrap();
// go to the Wikipedia page for Foobar
c.goto("https://en.wikipedia.org/wiki/Foobar").unwrap();
assert_eq!(c.current_url().unwrap().as_ref(), "https://en.wikipedia.org/wiki/Foobar");
// click "Foo (disambiguation)"
c.click(".mw-disambig").unwrap();
// click "Foo Lake"
c.click_by_text("Foo Lake").unwrap();
assert_eq!(c.current_url().unwrap().as_ref(), "https://en.wikipedia.org/wiki/Foo_Lake");
```
How did we get to the Foobar page in the first place? We did a search! Let's make the program do that for us instead:
```rust,no_run
// go to the Wikipedia frontpage this time
c.goto("https://www.wikipedia.org/").unwrap();
// find, fill out, and submit the search form
{
    let mut f = c.form("#search-form").unwrap();
    f.set_by_name("search", "foobar").unwrap();
    f.submit().unwrap();
}
// we should now have ended up in the right place
assert_eq!(c.current_url().unwrap().as_ref(), "https://en.wikipedia.org/wiki/Foobar");
```
What if we want to download a raw file? Fantoccini has you covered:
```rust,no_run
// go back to the frontpage
c.goto("https://www.wikipedia.org/").unwrap();
// find the source for the Wikipedia globe
let img = c.lookup_attr("img.central-featured-logo", "src")
    .expect("image should be on page")
    .expect("image should have a src");
// now build a raw HTTP client request (which also has all current cookies)
let raw = c.raw_client_for(fantoccini::Method::Get, &img).unwrap();
// this is a RequestBuilder from hyper, so we could also add POST data here
// but for this we just send the request
let mut res = raw.send().unwrap();
// we then read out the image bytes
use std::io::prelude::*;
let mut pixels = Vec::new();
res.read_to_end(&mut pixels).unwrap();
// and voilà, we now have the bytes for the Wikipedia logo!
assert!(pixels.len() > 0);
println!("Wikipedia logo is {}b", pixels.len());
```