What is this?


[luminance] is an effort to make graphics rendering simple and elegant.

The aims of [luminance] are:

An important point before starting: people often ask how to easily render something. [luminance] is flexible enough to let you do rendering the way you want and, thus, doesn’t include default shaders or similar built-ins. If you are looking for something with everything already embedded, you’re not looking at the right crate.

[luminance] is shipped with almost no data — i.e. no default shader, tessellations, etc. You can probably find crates adding those, though. ;)

What’s included?

[luminance] is a rendering crate, not a 3D engine nor a video game framework. As such, it doesn’t include specific concepts such as lights, materials, asset management or scene description. It only provides a rendering library you can plug into whatever you want.

There are several so-called 3D-engines out there on crates.io. Feel free to have a look around.

However, [luminance] comes with several features that might interest you.

Feature set

How to dig in?

[luminance] is written to be fairly simple. The documentation is very transparent about what the library does, and several articles will appear as development goes on. Stay tuned! The online documentation is also a good link to keep around.

Current implementation

Currently, [luminance] is powered by OpenGL 3.3 by default. That version of OpenGL is old enough to support a wide range of devices out there. However, your device might be older, or you might target the Web or Android / iOS. In that case, have a look at the set of feature flags, which make it possible to compile [luminance] for several platforms.

Feature flags

Windowing

[luminance] does not provide a way to create windows, because it’s important that it not depend on windowing libraries, so that end-users can use whatever they like. Furthermore, such libraries typically implement windowing and event handling, which have nothing to do with our initial purpose.

Nevertheless, an ecosystem effort exists around [luminance]: [luminance-windowing]. That crate provides a windowing API that other crates, such as [luminance-glfw], implement. You don’t have to use them, though. If you’re interested in how to set up windowing for [luminance], this very documentation explains it in the [GraphicsContext] section.

User-guide and contributor-guide

If you only plan to use [luminance], just read the User-guide section.

If you plan to contribute to [luminance] (by writing a windowing crate or hacking on [luminance] directly), feel free to read the Contributor-guide section after the User-guide section.

User-guide

Creating a context

In order to get started, you need to create an object whose type implements [GraphicsContext]. [luminance] ships with the trait but no implementor. You need to head over to crates.io and search for luminance crates to find a windowing backend first.

Such a backend should expose a type which implements [GraphicsContext]. You can only create one per thread. That limitation allows [luminance] to avoid a lot of runtime branching, minimizing the runtime overhead.

If you really want several contexts, you will need several OS threads.

[GraphicsContext] is the entry point of everything [luminance] provides. Feel free to dig into its documentation for further information on how to use [luminance]. Most objects you can create will need a mutable reference to such a context object. Even though [luminance] is stateless in terms of global state, it still requires an object representing the GPU somehow.
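To make that concrete, here is a hedged sketch of acquiring such a context with a windowing backend like [luminance-glfw]. The constructor name, window options and error handling are backend- and version-specific, so treat the names below as illustrative only.

```ignore
// illustrative only: constructor and option names depend on the windowing backend
let mut surface = GlfwSurface::new(
  WindowDim::Windowed(960, 540),
  "luminance app",
  WindowOpt::default(),
).expect("GLFW surface creation");

// `surface` implements GraphicsContext; pass `&mut surface` wherever luminance
// needs a context to create GPU resources or run pipelines
```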

Understanding the pipeline architecture

[luminance] has a very particular way of doing graphics. It represents a typical graphics pipeline via a typed [AST] that is embedded into your code. As you might already know, when you write code, you’re actually creating an [AST]: expressions, assignments, bindings, conditions, function calls, etc. They all form a typed tree that represents your program.

[luminance] uses that property to create a dependency between resources your GPU needs to have in order to perform a render. Typical engines, libraries and frameworks require you to explicitly bind something; instead, [luminance] requires you to go deeper in the [AST] by creating a new lower node to mark the dependency.

It might be weird at first but you’ll see how simple and easy it is. If you want to perform a simple draw call of a triangle, you need several resources:

There is a dependency graph to represent how the resources must behave regarding each other:

```text
(AST1)

Framebuffer ─> Shader ─> RenderState ─> Tess
```

The framebuffer must be active, bound, used — or whatever verb you want to picture it with — before the shader can start doing things. The shader must also be in use before we can actually render the tessellation.

That triple dependency relationship is already a small flat [AST]. Imagine we want to render a second triangle with the same render state and a third triangle with a different render state:

```text
(AST2)

Framebuffer ─> Shader ─> RenderState ─> Tess
                  │           │
                  │           └───────> Tess
                  │
                  └─────> RenderState ─> Tess
```

That [AST] looks more complex. Imagine now that we want to shade one other triangle with another shader!

```text
(AST3)

Framebuffer ─> Shader ─> RenderState ─> Tess
     │            │           │
     │            │           └───────> Tess
     │            │
     │            └─────> RenderState ─> Tess
     │
     └───────> Shader ─> RenderState ─> Tess
```

You can now clearly see the [AST]s and the relationships between objects. Those are encoded in [luminance] directly within your code: as lambdas / closures.

If you have followed thoroughly, you might have noticed that, with such [AST]s, you cannot shade a triangle with another shader while reusing the render state of another node. That was a decision that had to be made: how should we allow the [AST] to be shared? In terms of graphics pipeline, [luminance] tries to do the best thing to minimize the number of GPU context switches and CPU <=> GPU bandwidth congestion.

The lambda & closure design

A function is a perfect candidate to model a dependency. When you look at:

```ignore
fn tronfibulate(x: Foo) -> Bar;
```

tronfibulate here is covariant in Bar and contravariant in Foo. What this implies is that, if we have a function Zoo -> Foo, we can create a new version of tronfibulate that takes a Zoo as input. Contravariance maps backwards while covariance maps forwards (i.e. if you have a function Bar -> Quux, you can adapt tronfibulate to create a new function that outputs Quux values).

All this to say that a dependency (which is contravariant) is pretty interesting in our case, since we will be able to adapt and create new functions just by contra-mapping the input. In terms of compositional power, that is gold.
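To make the variance argument concrete, here is a small standalone Rust sketch. The types and helper functions are made up purely for illustration and are not part of [luminance]:

```rust
// made-up types, only there to illustrate variance
struct Zoo;
struct Foo;
struct Bar;
struct Quux;

fn tronfibulate(_x: Foo) -> Bar { Bar }

// contravariant adaptation: pre-compose with a Zoo -> Foo function so that the
// adapted function accepts a Zoo instead of a Foo
fn zoo_to_foo(_z: Zoo) -> Foo { Foo }
fn tronfibulate_from_zoo(z: Zoo) -> Bar { tronfibulate(zoo_to_foo(z)) }

// covariant adaptation: post-compose with a Bar -> Quux function so that the
// adapted function outputs a Quux instead of a Bar
fn bar_to_quux(_b: Bar) -> Quux { Quux }
fn tronfibulate_to_quux(x: Foo) -> Quux { bar_to_quux(tronfibulate(x)) }
```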

Now, let’s try to represent AST1 with contravariance and, hence, functions, using pseudo-code (this is not a real [luminance] excerpt).

```ignore
// AST1
use_framebuffer(framebuffer, || {
  // here, we are passing a closure that will get called whenever the framebuffer is ready to
  // receive renders
  use_shader(shader, || {
    // same thing but for shader
    use_render_state(render_state, || {
      // ditto for render state
      triangle.render(); // render the tessellation
    });
  });
});
```

See how simple it is to represent AST1 with just code and closures? Rust’s lifetimes and existential quantification allow us to ensure that no resource will leave the scope of each closure, hence enforcing memory and coherency safety.

Now let’s try to tackle AST2.

```ignore
// AST2
use_framebuffer(framebuffer, || {
  use_shader(shader, || {
    use_render_state(render_state, || {
      first_triangle.render();
      second_triangle.render(); // simple and straight-forward
    });

    // we can just branch a new render state here!
    use_render_state(other_render_state, || {
      third.render()
    });
  });
});
```

And AST3:

```ignore
// AST3
use_framebuffer(framebuffer, || {
  use_shader(shader, || {
    use_render_state(render_state, || {
      first_triangle.render();
      second_triangle.render(); // simple and straight-forward
    });

    // we can just branch a new render state here!
    use_render_state(other_render_state, || {
      third.render()
    });
  });

  use_shader(other_shader, || {
    use_render_state(yet_another_render_state, || {
      other_triangle.render();
    });
  });
});
```

The [luminance] equivalent is a bit more complex because it implies some objects that need to be introduced first.

[Pipeline]

A [Pipeline] represents a whole [AST] as seen just above. It is created by a [GraphicsContext] when you ask to create a pipeline and is destroyed as soon as the render has happened. A [Pipeline] is a special object you can use to bind some specific scarce resources, such as textures and buffers.

Creating a [Pipeline] requires at least one resource: a [Framebuffer] to render to.

When you create a pipeline, you’re also handed a [ShadingGate]. A [ShadingGate] is an object that allows you to create shader nodes in the [AST] you’re building. You have no other way to go deeper in the [AST]. The concept of gates is very important and you should try to familiarize yourself with it.
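To give a rough idea of the shape of that API, here is a hedged sketch of creating a pipeline. The builder method, the clear-color argument and the closure signature differ between [luminance] versions, so the names are indicative only.

```ignore
// indicative only: builder and method names vary between luminance versions
surface.pipeline_builder().pipeline(
  &framebuffer,      // the framebuffer to render to
  [0., 0., 0., 0.],  // clear color
  |pipeline, shd_gate| {
    // `pipeline` binds scarce resources (textures, buffers);
    // `shd_gate` is the ShadingGate used to create shader nodes deeper in the AST
  },
);
```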

[ShadingGate]

As said above, a [ShadingGate] allows you to create a shader node in the graphics pipeline. That node will typically borrow a shader [Program] and will move you one level lower in the graph ([AST]). At that level (i.e. in that closure), you are given two objects:

The [ProgramInterface] is the only way for you to access your uniform interface. More on this in the dedicated section. It also provides you with the [ProgramInterface::query] method, that allows you to perform dynamic uniform lookup.

[RenderGate]

A [RenderGate] is the second to last gate you will be handling. It allows you to create render state nodes in your [AST], creating a new level for you to render tessellations with an obvious, final gate: the [TessGate].

The kind of object that node manipulates is [RenderState].

[TessGate]

The [TessGate] is the final gate you use in an [AST]. It’s used to create tessellation nodes. Those are used to render actual [Tess]. You cannot go any deeper in the [AST] at that stage.

[TessGate]s don’t immediately use [Tess] as inputs. They use [TessSlice]. That type is a slice into a GPU tessellation ([Tess]). It can be obtained from a [Tess] via the [TessSliceIndex] trait.
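Putting the gates together, the pseudo-code [AST]s from earlier map onto nested closures along these lines. This is a hedged sketch: method names such as shade, render and slice, and the exact closure arguments, depend on the [luminance] version you use.

```ignore
// indicative only: method names and closure arguments vary between versions
surface.pipeline_builder().pipeline(&framebuffer, [0., 0., 0., 0.], |pipeline, shd_gate| {
  // shader node
  shd_gate.shade(&program, |program_interface, rdr_gate| {
    // render state node
    rdr_gate.render(RenderState::default(), |tess_gate| {
      // tessellation node: the TessGate consumes a TessSlice, obtained here via
      // the TessSliceIndex trait
      tess_gate.render(triangle.slice(..));
    });
  });
});
```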

Contributor-guide

You want to hack around [luminance] or provide a windowing crate? Everything you have to know is described in this section.

What it means to be a luminance windowing backend

[luminance] doesn’t know anything about the context it executes in. That means it doesn’t know whether it’s used with SDL, GLFW, glutin, Qt or specific embedded hardware such as the Nintendo Switch. That is actually powerful, because it allows [luminance] to be completely agnostic of the execution platform it’s running on: one less problem to worry about.

However, the connection between [luminance] and the execution context must be correctly done. Currently, several points must be enforced:

Those rules might change or be adapted depending on which feature flags are enabled. For instance, if you use a feature flag that allows targeting OpenGL 2.1, then you should create an OpenGL 2.1 context.

[GraphicsContext], [GraphicsState] and TLS

In order to implement [GraphicsContext], you need to know several points: