How to achieve additive transparency/dithering?

Hey all, I've created an image that I want to have the player draw to. I draw to it using:


My thought was that by drawing at 0.5 opacity, drawing over the same spot again would add up to 1.0 opacity. However, it seems that returning to the same spot just draws the exact same image again, so it stays at 0.5 opacity.

Does anyone know how I'd go about achieving additive opacity?

Interesting question!

I think you would have to use a randomized dither, but that wouldn't be exactly right because some of the pixels will overlap, so 0.5 + 0.5 will almost never yield 1.0. Think of the spraycan tool that image editors sometimes have. The opacity will increase as you draw over the same spot, just not linearly. But maybe that's good enough for your purposes.

If you need it to work exactly like an alpha channel, the only approach I can think of is to use a grayscale bitmap internally, doing the alpha blending manually with say 8 bits per pixel, then dithering at the very end to display it. That idea isn't well-supported by our SDK, nor is it likely to have great performance, but feel free to give it a try. Wish I had a better solution for you!
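To make that idea concrete, here's a rough C sketch of what such an internal grayscale buffer could look like. Everything here is an assumption on my part (the 400x240 size, the 0..255 alpha range, the names), not SDK API -- the SDK doesn't provide this, so you'd roll it yourself:

```c
#include <stdint.h>

#define W 400
#define H 240

/* Hypothetical 8-bit grayscale canvas kept in memory; the Playdate
   screen itself is 1-bit, so this would be dithered only at display time. */
static uint8_t canvas[W * H];

/* Blend a foreground value into the canvas at (x, y), alpha in 0..255.
   This is the usual result = bg + (fg - bg) * alpha, in integer math. */
void blend_pixel(int x, int y, uint8_t fg, uint8_t alpha)
{
    uint8_t bg = canvas[y * W + x];
    canvas[y * W + x] = (uint8_t)(bg + ((fg - bg) * alpha) / 255);
}
```

Drawing white at 50% twice with this gives 128 then 191 out of 255 -- the non-linear buildup described below, not a jump straight to full opacity.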

Is there a way to do randomized dithering? This solution doesn’t need to be mathematically perfect, it just needs to have subsequent passes add to what’s already there.

What about doing a ramp of opacity from 0.5, or whatever, up to 1.0?

This is how cross fades can be done easily.

A 2x2 Bayer matrix has only a few dither levels; 4x4 has more, and 8x8 more still.

But I'm not sure how you can do that in your use case without remembering what has been drawn before at that location. Hmm.
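For reference, here's what an ordered-dither threshold test typically looks like in C. This is a generic sketch, not SDK code -- the matrix values are the standard 4x4 Bayer pattern, which gives 17 distinct coverage levels:

```c
#include <stdint.h>

/* Standard 4x4 Bayer matrix, thresholds 0..15. */
static const uint8_t bayer4[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5}
};

/* Returns 1 if the pixel at (x, y) should be drawn for the given
   opacity in 0..1. At opacity 0.5, exactly half the pixels pass. */
int dither_on(int x, int y, float opacity)
{
    return opacity * 16.0f > (float)bayer4[y & 3][x & 3];
}
```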

"use a grayscale bitmap internally". Since the SDK doesn't support that, does it mean a Lua table of 400x240x2 where you store literal 1s and 0s? That would be very slow to draw to the screen, right? Pixel by pixel. I might just be spelling out what Dave is getting at, but I wanted to check that I'm not missing something.

Yeah, you’d have to come up with an efficient way of storing the data. Probably not a Lua table of individual bits, but a table of integers (each representing 32 bits) might work OK.

C is probably a better language choice for this kind of code, though.
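A minimal C sketch of that "table of integers" idea, packing one bit per pixel into 32-bit words (the 400x240 dimensions and the names are just illustrative):

```c
#include <stdint.h>

#define W 400
#define H 240
#define WORDS_PER_ROW ((W + 31) / 32)  /* 13 words of 32 bits per row */

static uint32_t bits[H * WORDS_PER_ROW];

void set_bit(int x, int y, int on)
{
    uint32_t *word = &bits[y * WORDS_PER_ROW + (x >> 5)];
    uint32_t mask = 1u << (x & 31);
    if (on) *word |= mask; else *word &= ~mask;
}

int get_bit(int x, int y)
{
    return (bits[y * WORDS_PER_ROW + (x >> 5)] >> (x & 31)) & 1;
}
```

In Lua you'd do the same thing with a table of integers and bitwise operators, just with more per-access overhead.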

Does @dustin do this in Playmaker?

A related feature request for what it's worth:

...but in this case, not LOADING multi-bit images, but creating and processing them.

There's already librif, which does something similar.


"0.5 + 0.5 will almost never yield 1.0" -- nor should it! If you stack two objects at 50% alpha in RGB, they composite to 75% coverage, not 100%.

What about averaging the sampled pixel values across all pixels under the current 'brush' (assuming this is some kind of art tool), then setting the dither pattern to average between that value and the selected opacity? Or add them together, per the original post.


I actually do that already for a different reason. I'm using the sample color to determine your speed (the more "drawn on" an area is, the faster you go) so I suppose I could use that to understand the current color there for drawing purposes...

The only issue is I currently sample a fairly small area (3x3) to assign your speed, but I'd probably need to sample a larger area (say 16x16) to get the average for the entire brush area. Do you think that would be efficient to do every frame?

Either way I can give it a shot tonight, thanks!

Just thinking it through a bit more - my suggestion would return 0.5 if you were half over a solid black area… if you need to differentiate between this and an existing 0.5 dither you’ll need to do something a bit more sophisticated. Maybe look at the variability within averages in 4 directions?

For efficiency, I’d use as few samples as possible - check if you can get away with something smaller.

Thanks @matt for mentioning librif.

After reading this thread, I added a setAlpha method to blend images. You can now blend grayscale images and convert the result to a native LCDBitmap.

In my upcoming game I use librif to do that, there's a getPixel method in Lua.


What if you sampled every-other row and column in a 16x16 area, say? (As long as the dithers being sampled are random, not 2x2.) That would be sampling only 1/4 of the region's pixels—64 samples. Maybe that wouldn't be too much processing? (Famous last words.)

It could be even less if you ignored the corners and sampled a circular or octagonal region. The coordinates of the samples could come from a lookup table, so arbitrary sample patterns need not require computation.
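A quick C sketch of the every-other-row-and-column idea, with a stand-in `get_bit` for whatever pixel read the canvas actually provides (all names here are mine):

```c
#define W 400
#define H 240

static unsigned char canvas[W * H]; /* 0 or 1 per pixel */

/* Stand-in for the real canvas read. */
static int get_bit(int x, int y) { return canvas[y * W + x]; }

/* Average coverage of a 16x16 brush area, sampling every other row
   and column: 64 samples instead of 256. */
float region_average(int x0, int y0)
{
    int sum = 0, count = 0;
    for (int y = y0; y < y0 + 16; y += 2)
        for (int x = x0; x < x0 + 16; x += 2) {
            sum += get_bit(x, y);
            count++;
        }
    return (float)sum / (float)count;
}
```

Swapping the loop bounds for a coordinate lookup table would get you the circular/octagonal sample patterns mentioned above with no extra per-frame math.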


I've got my mind on this problem and may have some ideas soon. But I do want to address this particular notion, since it works a bit unintuitively.

In alpha compositing, the formulas used almost universally across image software (including web browsers) were defined by Porter and Duff in 1984. They describe operations where there are two pixels--a background pixel and a foreground pixel, where each has a color and an alpha--and define how to compute the appropriate resulting pixel. In cases where the canvas itself is understood to be fully opaque (as is the case in this thought experiment), the background pixel's alpha can be assumed to be fully opaque, so I'll be working on that assumption throughout this post.

The RGB color values as well as the alpha are to be in the range of 0 to 1, inclusive. The general simple-case blending formula looks like this:

result = foreground * alpha + background * (1 - alpha)

Or more optimally:

result = background + (foreground - background) * alpha

If you're drawing, say, white atop black with 50% opacity, and do it twice, you wind up with this:

0.5  = 0   + (1 - 0  ) * 0.5
0.75 = 0.5 + (1 - 0.5) * 0.5

In other words, drawing the color in the same position twice with 50% opacity is not the same as drawing the color once with 100% opacity. An implementation that doesn't take this into account will yield some wrong-looking results.
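A throwaway C version of that formula, just to sanity-check the arithmetic above (the function name is mine):

```c
/* Simple-case "over" blend against an opaque background:
   result = background + (foreground - background) * alpha
   with all values in 0..1. */
float blend(float background, float foreground, float alpha)
{
    return background + (foreground - background) * alpha;
}
```

Applying it twice with white at 50% over black gives 0.5, then 0.75 -- matching the worked numbers above.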

I feel like using some form of ordered dithering for alpha might work, where instead of using the thresholds to select the color you use them to select whether or not to overwrite the corresponding destination pixel with the source pixel...
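A sketch of that idea in C -- a hypothetical stamp routine that uses a Bayer threshold not to pick a color, but to decide which destination pixels get overwritten by the source (2x2 matrix here just to keep it small):

```c
#include <stdint.h>

/* Overwrite dst with src only where alpha beats the Bayer threshold.
   At alpha 0.5, half the pixels are stamped; repeated stamps with
   different patterns accumulate coverage. */
void stamp(uint8_t *dst, const uint8_t *src, int w, int h, float alpha)
{
    static const uint8_t bayer2[2][2] = {{0, 2}, {3, 1}};
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (alpha * 4.0f > (float)bayer2[y & 1][x & 1])
                dst[y * w + x] = src[y * w + x];
}
```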

So I tried it and got it working reasonably well. Not sure if it's 100% correct and I may try something like librif later but I think it works well enough for now.