Leptos Query is a robust asynchronous state management library for Leptos, heavily inspired by Tanstack Query.
Leptos Query was built to simplify your data fetching process and keep your application's state effortlessly synchronized and up-to-date. Here's how it's done:
- Configurable Caching & SWR: Queries are cached by default, ensuring quick access to your data. You can configure stale and cache times per query with a Stale-While-Revalidate (SWR) system.
- Reactivity at Its Core: Leptos Query deeply integrates with Leptos' reactive system to transform asynchronous query fetchers into reactive signals.
- Server-Side Rendering (SSR) Compatibility: Fetch your queries on the server and smoothly serialize them to the client, just as you would with a Leptos Resource.
- Efficient De-duplication: No unnecessary fetches here! If you make multiple queries with the same key, Leptos Query smartly fetches only once.
- Manual Invalidation: Control when your queries should be invalidated and refetched, for ultimate flexibility.
- Scheduled Refetching: Set up your queries to refetch on a custom schedule, keeping your data as fresh as you need.
```bash
cargo add leptos_query --optional
```
Then add the relevant feature(s) to your `Cargo.toml`:
```toml
[features]
hydrate = [
    "leptos_query/hydrate",
    # ...
]
ssr = [
    "leptos_query/ssr",
    # ...
]
```
In the root of your App, provide a query client:
```rust
use leptos::*;
use leptos_query::*;

#[component]
pub fn App(cx: Scope) -> impl IntoView {
    // Provides the Query Client for the entire app.
    provide_query_client(cx);

    // Rest of App...
}
```
Then make a query function.
NOTE: A query is unique per key `K`. A key type `K` must only correspond to ONE unique value type `V`; `K` cannot correspond to multiple `V` types.

TLDR: Wrap your key in a Newtype when needed to ensure uniqueness.
```rust
use leptos::*;
use leptos_query::*;
use std::time::Duration;
use serde::*;

// Data type.
#[derive(Clone, Deserialize, Serialize)]
struct Monkey {
    name: String,
}

// Create a Newtype for MonkeyId.
#[derive(Clone, PartialEq, Eq, Hash)]
struct MonkeyId(String);

// Monkey fetcher.
async fn get_monkey(id: MonkeyId) -> Monkey {
    todo!()
}

// Query for a Monkey.
fn use_monkey_query(cx: Scope, id: impl Fn() -> MonkeyId + 'static) -> QueryResult<Monkey, impl RefetchFn> {
    leptos_query::use_query(
        cx,
        id,
        get_monkey,
        QueryOptions {
            default_value: None,
            refetch_interval: None,
            resource_option: ResourceOption::NonBlocking,
            stale_time: Some(Duration::from_secs(5)),
            cache_time: Some(Duration::from_secs(60)),
        },
    )
}
```
Now you can use the query in any component in your app.
```rust
#[component]
fn MonkeyView(cx: Scope, id: MonkeyId) -> impl IntoView {
    let query = use_monkey_query(cx, move || id.clone());
    let QueryResult {
        data,
        is_loading,
        is_fetching,
        is_stale,
        ..
    } = query;

    view! { cx,
// You can use the query result data here.
// Everything is reactive.
<div>
<div>
<span>"Loading Status: "</span>
<span>{move || { if is_loading.get() { "Loading..." } else { "Loaded" } }}</span>
</div>
<div>
<span>"Fetching Status: "</span>
<span>
{move || { if is_fetching.get() { "Fetching..." } else { "Idle" } }}
</span>
</div>
<div>
<span>"Stale Status: "</span>
<span>
{move || { if is_stale.get() { "Stale" } else { "Fresh" } }}
</span>
</div>
// Query data should be read inside a Transition/Suspense component.
<Transition
fallback=move || {
view! { cx, <h2>"Loading..."</h2> }
}>
{move || {
data.get()
.map(|monkey| {
view! { cx, <h2>{monkey.name}</h2> }
})
}}
</Transition>
</div>
}
}
```
For a complete working example, see the example directory.
A Query uses a resource under the hood, but provides additional functionality like caching, de-duplication, and invalidation.
Resources are individually bound to the `Scope` they are created in. Queries are all bound to the `QueryClient` they are created in, meaning that once you have a `QueryClient` in your app, you can access the value for a query anywhere in your app.

With a resource, you have to manually lift it to a higher scope if you want to preserve it, which can be cumbersome if you have many resources.

Queries are also stateful on a per-key basis: if you use the same query with the same key in multiple places, only one request is made and all of them share the same state.
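For example, in the sketch below (reusing the `use_monkey_query` and `MonkeyId` definitions from above; the component names are only illustrative), both components ask for the same key, so a single request is made and they observe the same reactive state.

```rust
use leptos::*;
use leptos_query::*;

// Both components read the query for the same key, so only one request is
// made and both share the same cached, reactive state.
#[component]
fn MonkeyName(cx: Scope) -> impl IntoView {
    let QueryResult { data, .. } = use_monkey_query(cx, || MonkeyId("1".to_string()));
    view! { cx,
        <Transition fallback=move || view! { cx, <span>"Loading..."</span> }>
            {move || data.get().map(|monkey| view! { cx, <span>{monkey.name}</span> })}
        </Transition>
    }
}

#[component]
fn MonkeyStatus(cx: Scope) -> impl IntoView {
    // Same key as `MonkeyName`: no extra fetch, shared state.
    let QueryResult { is_fetching, .. } = use_monkey_query(cx, || MonkeyId("1".to_string()));
    view! { cx,
        <span>{move || if is_fetching.get() { "Fetching..." } else { "Idle" }}</span>
    }
}
```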
What's the difference between `stale_time` and `cache_time`?

`stale_time` is the duration until a query transitions from fresh to stale. As long as a query is fresh, data will always be read from the cache only. When a query is stale, it will be refetched on its next usage.

`cache_time` is the duration until inactive queries are removed from the cache.

The default `stale_time` is 0 seconds, and the default `cache_time` is 5 minutes. Both can be configured per-query using `QueryOptions`. If you want an infinite cache/stale time, set `stale_time` and `cache_time` to `None`.
NOTE: `stale_time` can never be greater than `cache_time`. If `stale_time` is greater than `cache_time`, `stale_time` will be set to `cache_time`.
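As a sketch, here is a helper that builds such an "infinite" configuration, reusing the `Monkey` type and the `QueryOptions` field names from the `use_monkey_query` example above (the helper name is illustrative):

```rust
use leptos_query::*;

// Options for a query that never goes stale and is never evicted from the cache.
// Field names follow the `QueryOptions` used in `use_monkey_query` above.
fn immortal_options() -> QueryOptions<Monkey> {
    QueryOptions {
        default_value: None,
        refetch_interval: None,
        resource_option: ResourceOption::NonBlocking,
        // `None` means the query never transitions from fresh to stale.
        stale_time: None,
        // `None` means the cached entry is never removed.
        cache_time: None,
    }
}
```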
A `QueryClient` allows you to interact with the query cache. You can invalidate queries, prefetch them, and introspect the query cache.

`use_query_client()` will return the `QueryClient` for the current scope.
Sometimes you can't wait for a query to become stale before you refetch it. `QueryClient` has an `invalidate_query` method that lets you intelligently mark queries as stale and potentially refetch them too!

When a query is invalidated, the following happens:

- It is marked as `invalid`. This `invalid` state overrides any `stale_time` configuration.
- The query will be refetched the next time it is used.
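A rough sketch of manual invalidation follows; the exact `invalidate_query` signature (generic parameters, key by value vs. by reference) and whether `use_query_client` takes the `Scope` can differ between versions, so treat the call forms below as assumptions rather than the definitive API.

```rust
use leptos::*;
use leptos_query::*;

#[component]
fn RefreshMonkey(cx: Scope) -> impl IntoView {
    // Grab the client provided at the root of the app.
    let client = use_query_client(cx);

    view! { cx,
        <button on:click=move |_| {
            // Mark the cached entry for this key as invalid so it gets refetched.
            // Generic parameters and argument form are assumptions; check your
            // version's documentation for the exact signature.
            client.invalidate_query::<MonkeyId, Monkey>(MonkeyId("1".to_string()));
        }>
            "Refetch monkey"
        </button>
    }
}
```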
What's the difference between `is_loading` and `is_fetching`?

`is_fetching` is true whenever the query is in the process of fetching data.

`is_loading` is true only when the query is fetching data FOR THE FIRST TIME.
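For example, here is a sketch of a background-refresh indicator built from the two signals (reusing the monkey query from above; the component name is illustrative): it shows only during refetches that happen after the initial load.

```rust
use leptos::*;
use leptos_query::*;

#[component]
fn MonkeyRefreshIndicator(cx: Scope) -> impl IntoView {
    let QueryResult { is_loading, is_fetching, .. } =
        use_monkey_query(cx, || MonkeyId("1".to_string()));

    // True only for background refetches: fetching, but not the very first load.
    let refreshing = move || is_fetching.get() && !is_loading.get();

    view! { cx,
        <span>{move || if refreshing() { "Refreshing..." } else { "" }}</span>
    }
}
```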