* SIMD for Humans
Easy, powerful, portable, absurdly fast numerical calculations. Includes static dispatch with inlining based on your platform and vector types, zero-allocation iteration, vectorized loading/storing, and support for uneven collections.

It looks something like this:

#+BEGIN_SRC rust
let lots_of_3s = (&[-123.456f32; 128][..]).simd_iter()
    .simd_map(f32s(0.0), |v| {
        f32s(9.0) * v.abs().sqrt().rsqrt().ceil().sqrt() - f32s(4.0) - f32s(2.0)
    })
    .scalar_collect();
#+END_SRC

Which is analogous to this scalar code:

#+BEGIN_SRC rust
let lots_of_3s = (&[-123.456f32; 128][..]).iter()
    .map(|v| {
        9.0 * v.abs().sqrt().sqrt().recip().ceil().sqrt() - 4.0 - 2.0
    })
    .collect::<Vec<f32>>();
#+END_SRC
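The fill value passed to the iterator (=f32s(0.0)= above) is what lets faster handle uneven collections: a final partial vector can be padded out and still take the same vector code path. The padding idea can be sketched in plain Rust (the 4-lane width and the =padded_chunks= helper are illustrative, not faster's internals):

```rust
// Illustrative sketch of fill-value padding for uneven collections.
// The final partial chunk is padded to a full vector's worth of lanes.
fn padded_chunks(data: &[f32], fill: f32) -> Vec<[f32; 4]> {
    data.chunks(4)
        .map(|chunk| {
            let mut lanes = [fill; 4];
            lanes[..chunk.len()].copy_from_slice(chunk);
            lanes
        })
        .collect()
}

fn main() {
    // 6 elements don't divide evenly into 4-lane vectors;
    // the last chunk is padded with the fill value.
    let chunks = padded_chunks(&[1.0, 2.0, 3.0, 4.0, 5.0, 6.0], 0.0);
    assert_eq!(chunks[1], [5.0, 6.0, 0.0, 0.0]);
    println!("{:?}", chunks);
}
```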

The vector size is entirely determined by the machine you're compiling for - faster attempts to use the largest vector size your target supports, and works on any platform or architecture (see below for details).
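As a hypothetical illustration of what that compile-time selection implies (the function and the mapping from feature to lane count below are illustrative, not faster's API; the feature names assume an x86 target):

```rust
// Illustrative compile-time width selection: cfg! is resolved at compile
// time, so only one branch survives in the generated code.
fn native_f32_lanes() -> usize {
    if cfg!(target_feature = "avx512f") {
        16 // 512-bit vectors hold 16 f32 lanes
    } else if cfg!(target_feature = "avx") {
        8 // 256-bit vectors
    } else if cfg!(target_feature = "sse") {
        4 // 128-bit vectors
    } else {
        1 // scalar fallback
    }
}

fn main() {
    println!("f32 lanes per vector: {}", native_f32_lanes());
}
```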

Compare this to traditional explicit SIMD:

#+BEGIN_SRC rust
use std::mem::transmute;
use stdsimd::{f32x4, f32x8};

let lots_of_3s = &mut [-123.456f32; 128][..];

if cfg!(all(not(target_feature = "avx"), target_feature = "sse")) {
    for ch in lots_of_3s.chunks_mut(4) {
        let mut v = f32x4::load(ch, 0);
        let scalar_abs_mask = unsafe { transmute::<u32, f32>(0x7fffffff) };
        let abs_mask = f32x4::splat(scalar_abs_mask);
        // There isn't actually an absolute value intrinsic for floats - you
        // have to look at the IEEE 754 spec and do some bit flipping
        v = unsafe { _mm_and_ps(v, abs_mask) };
        v = unsafe { _mm_sqrt_ps(v) };
        v = unsafe { _mm_rsqrt_ps(v) };
        v = unsafe { _mm_ceil_ps(v) };
        v = unsafe { _mm_sqrt_ps(v) };
        v = unsafe { _mm_mul_ps(v, f32x4::splat(9.0)) };
        v = unsafe { _mm_sub_ps(v, f32x4::splat(4.0)) };
        v = unsafe { _mm_sub_ps(v, f32x4::splat(2.0)) };
        v.store(ch, 0);
    }
} else if cfg!(all(not(target_feature = "avx512"), target_feature = "avx")) {
    for ch in lots_of_3s.chunks_mut(8) {
        let mut v = f32x8::load(ch, 0);
        let scalar_abs_mask = unsafe { transmute::<u32, f32>(0x7fffffff) };
        let abs_mask = f32x8::splat(scalar_abs_mask);
        v = unsafe { _mm256_and_ps(v, abs_mask) };
        v = unsafe { _mm256_sqrt_ps(v) };
        v = unsafe { _mm256_rsqrt_ps(v) };
        v = unsafe { _mm256_ceil_ps(v) };
        v = unsafe { _mm256_sqrt_ps(v) };
        v = unsafe { _mm256_mul_ps(v, f32x8::splat(9.0)) };
        v = unsafe { _mm256_sub_ps(v, f32x8::splat(4.0)) };
        v = unsafe { _mm256_sub_ps(v, f32x8::splat(2.0)) };
        v.store(ch, 0);
    }
}
#+END_SRC

Even with all of that boilerplate, this still only supports x86-64 machines with SSE or AVX - and you have to look up each intrinsic to ensure it's usable for your compilation target.

* Upcoming Features
A rewrite of the iterator API is upcoming, as well as internal changes to better match the direction Rust is taking with explicit SIMD.

* Compatibility
Faster currently supports any architecture with floating point support, although hardware acceleration is only enabled on machines with x86's vector extensions.

** Performance
Here are some extremely unscientific benchmarks which, at least, prove that this isn't any worse than scalar iterators. Even on ancient CPUs, a lot of performance can be extracted out of SIMD.

#+BEGIN_SRC shell
$ RUSTFLAGS="-C target-cpu=ivybridge" cargo bench # host is ivybridge; target has AVX
test tests::base100_enc_scalar    ... bench: 1,307 ns/iter (+/- 45)
test tests::base100_enc_simd      ... bench:   332 ns/iter (+/- 10)
test tests::determinant2_scalar   ... bench:   486 ns/iter (+/- 8)
test tests::determinant2_simd     ... bench:   215 ns/iter (+/- 3)
test tests::determinant3_scalar   ... bench:   389 ns/iter (+/- 6)
test tests::determinant3_simd     ... bench:   209 ns/iter (+/- 3)
test tests::map_fill_simd         ... bench:   835 ns/iter (+/- 12)
test tests::map_scalar            ... bench: 6,963 ns/iter (+/- 117)
test tests::map_simd              ... bench:   879 ns/iter (+/- 18)
test tests::map_uneven_simd       ... bench:   884 ns/iter (+/- 10)
test tests::nop_scalar            ... bench:    49 ns/iter (+/- 0)
test tests::nop_simd              ... bench:    34 ns/iter (+/- 0)
test tests::reduce_scalar         ... bench: 6,905 ns/iter (+/- 107)
test tests::reduce_simd           ... bench:   839 ns/iter (+/- 13)
test tests::reduce_uneven_simd    ... bench:   838 ns/iter (+/- 11)
test tests::zip_nop_scalar        ... bench:   824 ns/iter (+/- 18)
test tests::zip_nop_simd          ... bench:   231 ns/iter (+/- 5)
test tests::zip_scalar            ... bench:   901 ns/iter (+/- 29)
test tests::zip_simd              ... bench: 1,128 ns/iter (+/- 12)

$ RUSTFLAGS="-C target-cpu=x86-64" cargo bench # host is ivybridge; target has SSE2
test tests::base100_enc_scalar    ... bench:   760 ns/iter (+/- 11)
test tests::base100_enc_simd      ... bench:   492 ns/iter (+/- 2)
test tests::determinant2_scalar   ... bench:   477 ns/iter (+/- 3)
test tests::determinant2_simd     ... bench:   277 ns/iter (+/- 1)
test tests::determinant3_scalar   ... bench:   380 ns/iter (+/- 3)
test tests::determinant3_simd     ... bench:   285 ns/iter (+/- 2)
test tests::map_fill_simd         ... bench: 1,797 ns/iter (+/- 8)
test tests::map_scalar            ... bench: 7,237 ns/iter (+/- 51)
test tests::map_simd              ... bench: 1,879 ns/iter (+/- 12)
test tests::map_uneven_simd       ... bench: 1,878 ns/iter (+/- 9)
test tests::nop_scalar            ... bench:    47 ns/iter (+/- 0)
test tests::nop_simd              ... bench:    34 ns/iter (+/- 0)
test tests::reduce_scalar         ... bench: 7,021 ns/iter (+/- 39)
test tests::reduce_simd           ... bench: 1,801 ns/iter (+/- 8)
test tests::reduce_uneven_simd    ... bench: 1,734 ns/iter (+/- 9)
test tests::zip_nop_scalar        ... bench:   803 ns/iter (+/- 9)
test tests::zip_nop_simd          ... bench:   257 ns/iter (+/- 1)
test tests::zip_scalar            ... bench:   988 ns/iter (+/- 6)
test tests::zip_simd              ... bench:   629 ns/iter (+/- 5)

$ RUSTFLAGS="-C target-cpu=pentium" cargo bench # host is ivybridge; this only runs the polyfills!
test tests::bench_determinant2_scalar   ... bench:   427 ns/iter (+/- 2)
test tests::bench_determinant2_simd     ... bench:   402 ns/iter (+/- 1)
test tests::bench_determinant3_scalar   ... bench:   354 ns/iter (+/- 1)
test tests::bench_determinant3_simd     ... bench:   593 ns/iter (+/- 1)
test tests::bench_map_scalar            ... bench: 7,195 ns/iter (+/- 28)
test tests::bench_map_simd              ... bench: 6,271 ns/iter (+/- 22)
test tests::bench_map_uneven_simd       ... bench: 6,288 ns/iter (+/- 22)
test tests::bench_nop_scalar            ... bench:    38 ns/iter (+/- 0)
test tests::bench_nop_simd              ... bench:    69 ns/iter (+/- 0)
test tests::bench_reduce_scalar         ... bench: 7,004 ns/iter (+/- 17)
test tests::bench_reduce_simd           ... bench: 6,063 ns/iter (+/- 17)
test tests::bench_reduce_uneven_simd    ... bench: 6,107 ns/iter (+/- 11)
test tests::bench_zip_nop_scalar        ... bench:   623 ns/iter (+/- 2)
test tests::bench_zip_nop_simd          ... bench:   289 ns/iter (+/- 1)
test tests::bench_zip_scalar            ... bench:   972 ns/iter (+/- 3)
test tests::bench_zip_simd              ... bench:   621 ns/iter (+/- 3)
#+END_SRC
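The pentium target above exercises only the polyfills: on a machine without a given vector instruction, the operation falls back to an equivalent per-lane scalar loop. A minimal sketch of the idea (not faster's actual code; the =rsqrt_polyfill= name and 4-lane width are illustrative):

```rust
// Scalar polyfill sketch: when no hardware reciprocal-square-root
// instruction exists, compute 1/sqrt(x) lane by lane.
fn rsqrt_polyfill(v: [f32; 4]) -> [f32; 4] {
    let mut out = [0.0f32; 4];
    for (o, x) in out.iter_mut().zip(v.iter()) {
        *o = 1.0 / x.sqrt();
    }
    out
}

fn main() {
    let r = rsqrt_polyfill([1.0, 4.0, 16.0, 64.0]);
    assert_eq!(r, [1.0, 0.5, 0.25, 0.125]);
    println!("{:?}", r);
}
```

Note that a polyfill keeps the code portable but not fast, which is consistent with the pentium numbers above: the simd benchmarks land near (or behind) their scalar counterparts.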