Imagine you need to use the following API to find the most upvoted comment under a blog post.
```rust
struct GetCommentsRequest {
    blog_post_id: BlogPostId,
    page_number: u32,
}

struct GetCommentsResponse {
    comments: Vec<Comment>,
    more_comments_available: bool,
}
```
In order to do that you would need to write a hairy loop that checks the
`more_comments_available` flag, increments `page_number`, and accumulates the
results in a variable. This crate helps to abstract away any sort of pagination
and allows you to work with such APIs uniformly with the help of async streams.
All you need to do is to implement the [PageTurner] trait for the client that
sends `GetCommentsRequest`.
In [PageTurner] you specify what items you query and what errors may occur,
then you implement the `turn_page` method, where you describe how to query a
single page and how to prepare a request for the next one.
```rust
use async_trait::async_trait;
use page_turner::prelude::*;

#[async_trait]
impl PageTurner<GetCommentsRequest> for OurBlogClient {
    type PageItem = Comment;
    type PageError = OurClientError;

    async fn turn_page(
        &self,
        mut request: GetCommentsRequest,
    ) -> PageTurnerOutput<Self, GetCommentsRequest> {
        let response = self.get_comments(request.clone()).await?;

        if response.more_comments_available {
            request.page_number += 1;
            Ok(TurnedPage::next(response.comments, request))
        } else {
            Ok(TurnedPage::last(response.comments))
        }
    }
}
```
[PageTurner] then provides default implementations for the [PageTurner::pages]
and [PageTurner::into_pages] methods that you can use to get a stream of pages
and, optionally, to turn it into a stream of items if you need. Now we can use
our client to find the most upvoted comment like this:
```rust
let client = OurBlogClient::new();

let most_upvoted_comment = client
    .pages(GetCommentsRequest { blog_post_id, page_number: 1 })
    .items()
    .try_fold(None::<Comment>, |most_upvoted, next_comment| async move {
        match most_upvoted {
            Some(comment) if comment.upvotes >= next_comment.upvotes => Ok(Some(comment)),
            _ => Ok(Some(next_comment)),
        }
    })
    .await?
    .unwrap();

assert_eq!(most_upvoted_comment.text, "Yeet");
assert_eq!(most_upvoted_comment.upvotes, 5);

// Or we can process whole pages if needed.
let mut comment_pages = client.pages(GetCommentsRequest { blog_post_id, page_number: 1 });

while let Some(comment_page) = comment_pages.try_next().await? {
    detect_spam(comment_page);
}
```
Notice that with this API we don't actually need any info from the response to
construct the next valid request. We can take advantage of such requests by
implementing the [RequestAhead] trait for them. For requests that implement
[RequestAhead], [PageTurner] provides two additional methods,
[PageTurner::pages_ahead] and [PageTurner::pages_ahead_unordered], that allow
you to query multiple pages concurrently.
```rust
impl RequestAhead for GetCommentsRequest {
    fn next_request(&self) -> Self {
        Self {
            blog_post_id: self.blog_post_id,
            page_number: self.page_number + 1,
        }
    }
}

let client = OurBlogClient::new();

// Now instead of querying pages one by one we make up to 4 concurrent requests
// for multiple pages under the hood, but besides using a different PageTurner
// method nothing changes in the user code.
let most_upvoted_comment = client
    .pages_ahead(4, Limit::None, GetCommentsRequest { blog_post_id, page_number: 1 })
    .items()
    .try_fold(None::<Comment>, |most_upvoted, next_comment| async move {
        match most_upvoted {
            Some(comment) if comment.upvotes >= next_comment.upvotes => Ok(Some(comment)),
            _ => Ok(Some(next_comment)),
        }
    })
    .await?
    .unwrap();

assert_eq!(most_upvoted_comment.text, "Yeet");
assert_eq!(most_upvoted_comment.upvotes, 5);

// In the example above the order of pages being returned corresponds to the
// order of requests, which means the stream is blocked until the first page is
// ready even if the second and the third pages have already been received. For
// this use case we don't really care about the order of the comments, so we
// can use pages_ahead_unordered to unblock the stream as soon as we receive a
// response to any of the concurrent requests.
let most_upvoted_comment = client
    .pages_ahead_unordered(4, Limit::None, GetCommentsRequest { blog_post_id, page_number: 1 })
    .items()
    .try_fold(None::<Comment>, |most_upvoted, next_comment| async move {
        match most_upvoted {
            Some(comment) if comment.upvotes >= next_comment.upvotes => Ok(Some(comment)),
            _ => Ok(Some(next_comment)),
        }
    })
    .await?
    .unwrap();

assert_eq!(most_upvoted_comment.text, "Yeet");
assert_eq!(most_upvoted_comment.upvotes, 5);
```
## Release notes

- 0.8.1: Fixed `pages_ahead` and `pages_ahead_unordered` not actually querying pages concurrently due to an effective `chunk_size = 1` in previous version. (0.8.0 yanked)
- 0.8.0: Added [RequestAhead] and [PageTurner::pages_ahead], [PageTurner::pages_ahead_unordered] for concurrent page querying.
- 0.7.0: Adjusted the `impl` [PagesStream] for `T` and `E`. (0.6.0 yanked)

## License

Licensed under either of Apache License, Version 2.0 or MIT license at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this crate by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.