Frame Blending Modes
"Blend modes" are rules which describe how multiple images layered on top of each other are combined into one final picture (in Frogatto, any sprite being drawn onto the screen is of course being layered on top of any tiles or background images behind it). Setting a special blending mode is often used for special lighting/shadowing effects, or to make a graphic that represents a partly translucent material (like glass or liquid) appear to be that material by changing how the things seen through it appear. These are a fairly common concept in digital imaging, which are also seen in programs like Photoshop et al; in Anura, we're able to do this in realtime using hardware accelerated OpenGL. General principles behind this is are described in a wikipedia article here: http://en.wikipedia.org/wiki/Blend_modes
Any individual animation frame element can have blend modes assigned to it. Blend modes are typically specified in pairs, though there's also a one-parameter version available (which I believe is just a shorthand for a few common combinations of the pair-based blending modes).
To understand the pair version, one has to view the act of blitting with the right mental model. I think it's worth explaining because I, at least, struggled with this, and if you haven't actually done OpenGL programming yourself (which is probably true of most Anura users), it's very easy to misunderstand how this works.
My early and incorrect conception of OpenGL blitting came from working with various 3d modeling programs and game engines; from the outside, these all felt very parallel in operation. You created a scene in memory, with various polygons+textures in various positions, and then your software would look at all the contents of that scene and make several very clever decisions for you - it would carefully choose to omit certain things (which were offscreen or completely obscured behind other objects) from being drawn at all, saving large amounts of processing time (lazy evaluation as a feature, really). It would also determine the draw order for you, figuring out which things got drawn on top of others, and negotiating parts of certain things being in front of others for you.
My mistake was to think this was done by OpenGL itself (a thing which certain early tutorials and effusive writeups about OpenGL really misled me about, too). As it turns out, OpenGL itself is a very imperative, simplistic blitter (especially in our usage of it, sans the depth buffer) - this, in turn, as I'll explain in a moment, is necessary for the pair-wise blend modes to make any sense at all.
The two parameters are two blending rules: one for the source, and one for the destination. This made no sense to me because in my mental model, there was no "destination" image; all of the source images' blending logic was required to be context-independent - otherwise there was no way to blit things out of order. Each object's blending rules had to depend only on itself; transparency meant that if, say, your source graphic was 50% transparent in a certain pixel, you'd mix 50% of the source "with the rest of the world". You might apply an additional rule where the source's color altered its "effective transparency", but that was it. You could not have a destination image, because at the time of blending, nothing might yet exist 'behind' the object being blitted to be evaluated (and such a rule might contradict the source rules for other graphics).
Instead, what happens in OpenGL is that everything is done sequentially. Things are painted one by one onto a blank canvas, building up a drawing. Thus, the destination image refers to everything that's already been drawn in our target rectangle. It's not the final product, it's one of the two ingredients in the final product. So rather than thinking of the operation as drawing one image onto an existing canvas, it's helpful to instead think of it as "cutting a rectangular slice out of the final canvas", and then blitting a combined version of that slice and our source image into the blank hole in the canvas.
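Concretely, with the default "add" blend equation, the blend stage combines each pixel of that slice roughly like this (a simplified sketch; source_factor and destination_factor stand for whichever pair of rules you chose):
result = (source_color * source_factor) + (destination_color * destination_factor)
So ordinary alpha blending, blend: ["src_alpha", "one_minus_src_alpha"], works out per pixel to:
result = (source_color * source_alpha) + (destination_color * (1 - source_alpha))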
The trick with applying a blending mode to that is that we can adjust the meaning of traditionally fixed ideas like transparency or color. Traditionally color just means "are we red/blue/green". That's it - it determines that one thing about a pixel, and nothing else. What we can do with blending modes is make color actually determine whether we're opaque or not. The color of our source image can actually BE our (virtual) alpha channel - or anything else! We can do all sorts of crazy things where we patch one input into another.
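As a small illustration of that idea (a hedged sketch, using values from the list further down the page): a multiplicative blend makes the source's color act as a per-pixel mask over whatever has already been drawn - black areas of the source black out the destination, white areas leave it untouched - with no real alpha channel involved:
blend: ["dst_color", "zero"],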
"Well, okay, that's cute, I guess, but ... so what? Can't I just bake in an alpha channel that does the same thing?" For a small set of things, yeah. But not if your object needs to change colors dynamically (i.e. any scene lighting, any brightness changes, etc), or be combined from multiple images, or any number of other dynamic changes. There are tons of situations where it's impossible to prebake the final transparency/colors/etc - really for the same reason you can't prebake global lighting in a 3d game and then change the global lighting source.
There are also many natural phenomena which mirror common blending modes. The 'naive/easy' alpha blending of shadow textures in most software comes out additive; if you take two shadow textures (say, rgb(0,0,0) blitted at 50% transparency) and overlay them, you'll get something that's darker than either one. But if you're using these to simulate shadows cast by a single overhead light source (like the sun), this is visually incorrect - shadows like that do not add; they reach a certain threshold of darkness and become no darker. A blending mode can allow you to enforce a rule like this inside a game (see the sketch after the blend equation list below). Similar things apply to local lights, and so on.
Within an animation frame of a custom_object, you can have fields containing named blend rules such as the following (a fuller frame example appears after the list below):
blend: "add",
blend: "alpha_blend",
blend: ["src_alpha", "one_minus_src_alpha"],
The set of acceptable values for blend is:
ZERO,
ONE,
SRC_COLOR,
ONE_MINUS_SRC_COLOR,
DST_COLOR,
ONE_MINUS_DST_COLOR,
SRC_ALPHA,
ONE_MINUS_SRC_ALPHA,
DST_ALPHA,
ONE_MINUS_DST_ALPHA,
CONSTANT_COLOR,
ONE_MINUS_CONSTANT_COLOR,
CONSTANT_ALPHA,
ONE_MINUS_CONSTANT_ALPHA,
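Putting that together, a frame using a blend pair might look something like this (a minimal sketch - the file name, frame counts, and other values are placeholders, not taken from a real object):
animation: {
	id: "glow",
	image: "effects/glow.png",
	frames: 4,
	duration: 5,
	blend: ["src_alpha", "one"],
},
The ["src_alpha", "one"] pair gives an additive-style result that still respects the source's alpha channel, a common choice for glows and similar light effects.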
Or you can change the blend equation:
blend_equation: "add",
blend_equation: "subtract",
or, if you want different rgb and alpha equations:
blend_equation: ["add", "subtract"],
The set of valid blend equations is:
ADD,
SUBTRACT,
REVERSE_SUBTRACT,
MIN,
MAX,
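To tie this back to the shadow discussion above: to my understanding, the "min" and "max" equations ignore the blend factors and simply take the component-wise minimum or maximum of source and destination. So a shadow texture that is white where unshadowed and mid-grey where shadowed could be drawn with "min", and overlapping shadows would then clamp at that grey level instead of stacking darker. A hedged sketch (the file name is a placeholder):
image: "effects/soft_shadow.png",
blend_equation: "min",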
You can also choose the rules for texture address modes. A texture address mode specifies what to do when you access the texture outside the co-ordinate range of the texture. The options are:
clamp - use the value at the edge of the texture.
wrap - use the value modulo the texture size.
border - use the given border_color (defaults to white).
mirror - reflect the co-ordinate back across the appropriate axis.
In place of simply image: "some_file.png", you can have:
image: {image:"some_file.png", address_mode:"border", border_color:"red", mipmaps:4, filtering:["linear","linear","none"]}
The above "texture address mode" feature was used to solve some nasty artifacting seen during development of Argentum Age, a digital card game written in Anura. Cards had anti-aliased edges, but the anti-aliasing didn't look smooth until we applied border mode with a border color of 255,255,255,1 and bi-linear filtering to the darker textures used on the backs of the cards.
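In the syntax shown above, that fix would presumably look something like this (a sketch based on the description; the file name is a placeholder, "white" stands in for the 255,255,255,1 value, and the filtering list simply reuses the form from the earlier example):
image: {image: "card_back.png", address_mode: "border", border_color: "white", filtering: ["linear", "linear", "none"]},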
You can load multiple images with image: [{image: "xx1.png"}, {image: "xx1_depth_map.png"}],
This is commonly used to provide separate textures to separate "texture units" on the graphics card.
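For instance, pairing a sprite with its normal map for dynamic lighting (the case described in the IRC excerpt below) would look something like this, with the first entry presumably bound to texture unit 0 and the second to texture unit 1:
image: [{image: "some_name.png"}, {image: "some_name_normal_map.png"}],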
The following is an excerpt from IRC explaining what texture units are:
[17:10:04] <Jetrel_laptop> KristaS: what's the significance of loading multiple images in one "image" tag?
[17:12:39] <KristaS> Jetrel_laptop: When the texture is bound, it makes all the images available on different texture units. This is the thing that makes doing dynamic lighting easy. We just need to go image:[{image:"some_name.png"},{image:"some_name_normal_map.png"}] then by using a shader with lighting (and setting the lights) then we get everything nicely ready to go.
[17:13:06] <Jetrel_laptop> texture units?
[17:14:40] <KristaS> A GPU has these things called texture units. They are what you bind textures to then they're available to use in the shader.
[17:14:56] <KristaS> Most modern GPUs support up to 32 texture units.
[17:26:08] <Jetrel_laptop> ... uhm
[17:26:12] <Jetrel_laptop> gee
[17:26:27] <Jetrel_laptop> So are these basically like "pointers" to the texture?
[17:26:58] <KristaS> kind of
[17:27:00] <Jetrel_laptop> Or are these actually some sort of really large "local storage" to the gpu that caches a texture for immediate use?
[17:27:28] <KristaS> At this point the texture is already loaded into the GPU, that's done when you create a texture.
[17:27:41] <KristaS> That operation returns an internal handle to the texture.
[17:28:43] <KristaS> This operation is more akin to saying i want texture X bound to texture unit 0, i want texture Y bound to texture unit 1. Then in the shader code the textures are available to use.
[17:34:17] <Jetrel_laptop> Okay..
[17:34:41] <Jetrel_laptop> So really it's that the shader language needs to use a limited set of really tight references like that?
[17:36:06] <Jetrel_laptop> It either can't use, or is much slower to just use, texture X, Y [... and some arbitrary >32 number more] textures directly?
[17:37:37] <KristaS> It's inherent in the design of OpenGL unfortunately. Multiple textures go all the way back to GL 1.1. Hopefully Vulkan will alleviate a lot of this management overhead.
More help can be found via chat in Frogatto's Discord server, or by posting on the forums. This wiki is not a complete reference. Thank you for reading! You're the best. 🙂