Sometimes we have no idea what the distribution of a test statistic is, yet we still want to perform hypothesis tests, and we are willing to assume that the samples we have are representative of the population.
This is where bootstrap hypothesis testing comes in. The idea is to generate a large number of resamples under the null distribution (the distribution the samples would follow if H0 were true, i.e. if both samples came from the same population) and to compute the test statistic for each of them. This yields the sampling distribution of the test statistic under H0.
We can then compute the p-value as the proportion of resampled test statistics that are more 'extreme' than the test statistic computed on our initial samples.
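To make the procedure concrete, here is a minimal from-scratch sketch of a two-sample bootstrap test. It uses pooled resampling (one common way to resample under H0) with the absolute difference of means as the test statistic, and a tiny xorshift generator stands in for a real RNG so the example needs no external crates; all names here are illustrative, not from any particular library.

```rust
// Minimal two-sample bootstrap hypothesis test, from scratch.
// Under H0 both samples come from the same population, so we pool them
// and resample with replacement to build the null distribution.

// Tiny xorshift PRNG so the example needs no external crates.
// (Illustrative only; use a proper RNG crate in real code.)
struct XorShift64(u64);

impl XorShift64 {
    fn next_u64(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }

    // Uniform-ish index in 0..n (modulo bias is fine for a sketch).
    fn next_index(&mut self, n: usize) -> usize {
        (self.next_u64() % n as u64) as usize
    }
}

fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

// Test statistic: absolute difference between the two sample means.
fn statistic(a: &[f64], b: &[f64]) -> f64 {
    (mean(a) - mean(b)).abs()
}

fn bootstrap_p_value(a: &[f64], b: &[f64], iterations: usize, rng: &mut XorShift64) -> f64 {
    let observed = statistic(a, b);
    // Pool both samples: this is what "both samples come from the same
    // population" looks like operationally.
    let pooled: Vec<f64> = a.iter().chain(b.iter()).copied().collect();
    let mut more_extreme = 0usize;
    for _ in 0..iterations {
        // Resample both groups, with replacement, from the pooled data.
        let resampled_a: Vec<f64> =
            (0..a.len()).map(|_| pooled[rng.next_index(pooled.len())]).collect();
        let resampled_b: Vec<f64> =
            (0..b.len()).map(|_| pooled[rng.next_index(pooled.len())]).collect();
        if statistic(&resampled_a, &resampled_b) >= observed {
            more_extreme += 1;
        }
    }
    // p-value: proportion of resampled statistics at least as extreme
    // as the one observed on the original samples.
    more_extreme as f64 / iterations as f64
}

fn main() {
    let mut rng = XorShift64(42);
    // Synthetic data: sample b is shifted upward, so H0 should be rejected.
    let a: Vec<f64> = (0..50).map(|i| (i % 10) as f64).collect();
    let b: Vec<f64> = (0..50).map(|i| (i % 10) as f64 + 3.0).collect();
    let p = bootstrap_p_value(&a, &b, 10_000, &mut rng);
    println!("p-value: {}", p);
    assert!(p < 0.05);
}
```

The crate-based example below wraps exactly this loop behind a single function call; the sketch only makes the resampling and the counting step explicit.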
# References

- Bootstrap Hypothesis Testing
- P-value
- Stats 102A Lesson 9-2 Bootstrap Hypothesis Tests, Miles Chen
```rust
use bootstrap_ht::prelude::*;
use itertools::Itertools;
use rand::prelude::Distribution;
use rand::SeedableRng;
use rand_chacha::ChaCha8Rng;
use rand_distr::StandardNormal;

fn main() {
    let mut rng = ChaCha8Rng::seed_from_u64(42);

    // First sample: 100 draws from a standard normal distribution.
    let a = StandardNormal
        .sample_iter(&mut rng)
        .take(100)
        .collect::<Vec<f64>>();

    // Second sample (reconstructed; its definition was missing from the
    // original snippet): another 100 draws from the standard normal.
    let b = StandardNormal
        .sample_iter(&mut rng)
        .take(100)
        .collect::<Vec<f64>>();

    // Test statistic: absolute difference between the two sample maxima.
    let test_statistic_fn = |a: &[f64], b: &[f64]| {
        let a_max = a.iter().copied().fold(f64::NAN, f64::max);
        let b_max = b.iter().copied().fold(f64::NAN, f64::max);
        (a_max - b_max).abs()
    };

    let p_value = bootstrap::two_samples_non_parametric_ht(
        &mut rng,
        &a,
        &b,
        test_statistic_fn,
        bootstrap::PValueType::OneSidedRightTail,
        10000,
    )
    .unwrap();
    // Expected value from the original example (depends on the exact
    // samples and seed).
    assert_eq!(p_value, 0.0021);
}
```