Transformation Order

I've got a dumb matrix question. If I do

local translate = playdate.geometry.affineTransform.new()
translate:translate(20, 40)
local rotation = playdate.geometry.affineTransform.new()
rotation:rotate(45)
print(translate * rotation)


I get

[0.707  -0.707  -14.14]
[0.707   0.707   42.42]
[0.0     0.0      1.0 ]

Whereas if I put the matrices into desmos's matrix calculator,

[1 0 20]   [0.707 -0.707 0]   [0.707 -0.707 20]
[0 1 40] * [0.707  0.707 0] = [0.707  0.707 40]
[0 0 1 ]   [0      0     1]   [0      0      1]

So the Playdate's matrix multiplication order is backwards. Why is this? Or is Desmos's calculator wrong?


This looks to be a bug in the implementation.

Desmos is correct (of course). In the case of the top-right cell, the result is 1*0 + 0*0 + 20*1 = 20. Matrix multiplication is not commutative: in the general case, a * b ≠ b * a.

Two matrices of dimensions n×m and m×p multiply to produce a matrix of dimensions n×p. The number of columns in the left matrix must match the number of rows in the right matrix, or the two can't be multiplied at all. This matters in particular when transforming a point as a vector using one or more matrices: the matrix must go on the left. A 3×3 matrix multiplied by a 3×1 vector produces a 3×1 vector; the reverse order is invalid.
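The dimension rule above can be sketched in plain Python (not Playdate code), multiplying a 3×3 homogeneous translation matrix by a 3×1 column vector:

```python
# Plain-Python sketch of the dimension rule: an n×m matrix times an
# m×p matrix yields an n×p matrix (here, 3×3 times 3×1 gives 3×1).
def mat_mul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "columns of left must equal rows of right"
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

translate = [[1, 0, 20],
             [0, 1, 40],
             [0, 0,  1]]
point = [[5], [6], [1]]           # homogeneous column vector

print(mat_mul(translate, point))  # [[25], [46], [1]]
```

Swapping the operands (`mat_mul(point, translate)`) trips the assertion, because a 3×1 matrix can't be multiplied by a 3×3 one.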

Consider the following:

output_point = translate_matrix * rotate_matrix * input_point

Think of the matrices as filters. The point wants to get to the left side of the equals sign, and to get there it has to pass through the matrices: first the point is rotated, then the rotated point is translated. (Edit: strictly, the two matrices are multiplied together first because of evaluation order. The resulting matrix is an aggregate of the other two, and has the same effect as applying them in the sequence just described.) This is a common principle in graphics programming. As such, in the given example, translate * rotation needs to mean "apply this translation to this rotation", not the other way around.
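The "aggregate matrix" point can be checked numerically in plain Python (a sketch, not the SDK): composing translate * rotate and applying the product to a point gives the same result as rotating the point and then translating it, by associativity.

```python
# Sketch: (T * R) * p equals T * (R * p), so the composed matrix has the
# same effect as rotating first and translating second.
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

c, s = math.cos(math.radians(45)), math.sin(math.radians(45))
rotate = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
translate = [[1, 0, 20], [0, 1, 40], [0, 0, 1]]
point = [[1], [0], [1]]   # homogeneous column vector

composed = mat_mul(mat_mul(translate, rotate), point)   # (T * R) * p
stepwise = mat_mul(translate, mat_mul(rotate, point))   # T * (R * p)

assert all(math.isclose(x[0], y[0]) for x, y in zip(composed, stepwise))
print([round(row[0], 3) for row in composed])   # [20.707, 40.707, 1.0]
```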

This isn't a bug, but it's an area that's open to some degree of interpretation.

When you compose two affine transforms A and B, you're asking for a new transform C that has the same effect as applying A and B in sequence. The API achieves this by multiplying their matrices together, which works because matrix multiplication is associative, and the order of this multiplication matters because matrix multiplication isn't commutative. But either order can be valid depending on how you apply the resulting transform. For example, if you're going to post-multiply (transform matrix on the right), then

V * C = V * A * B (for any vector V).

But if you pre-multiply (transform matrix on the left) when applying the transform, you'll have to reverse the order:

C * V = B * A * V

(You get around the dimension mismatch by transposing — swapping rows for columns.) You might choose one or the other order depending on whether you're transforming a point/vector/image/sprite/path or the coordinate system that contains it. Sometimes graphics APIs dictate the order.
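The transpose relationship can be sketched in plain Python (the matrices below use the row-vector convention, with translation in the bottom row; this is an illustration, not SDK code): post-multiplying a row vector by A then B matches pre-multiplying a column vector by the transposed matrices in reversed order.

```python
# Sketch: V * A * B (row vector, post-multiply) equals B^T * A^T * V^T
# (column vector, pre-multiply), i.e. transpose everything and reverse.
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(col) for col in zip(*m)]

# Row-vector convention: translation lives in the bottom row.
A = [[1, 0, 0], [0, 1, 0], [20, 40, 1]]   # translate(20, 40)
B = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]]    # rotate 90 degrees

v_row = [[5, 6, 1]]
v_col = [[5], [6], [1]]

post = mat_mul(mat_mul(v_row, A), B)                       # V * A * B
pre = mat_mul(transpose(B), mat_mul(transpose(A), v_col))  # B^T * A^T * V^T

print(post)                      # [[-46, 25, 1]]
print([row[0] for row in pre])   # [-46, 25, 1]
```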

Our graphics "pipeline" (such as it is) pre-multiplies when applying transforms, which is why our APIs for composing transforms use this multiplication order.

The "pre-multiply" order is used by GLSL (source § 5.10 p. 110), HLSL (source) and SPIR-V (source § 3.42.13 re: OpMatrixTimesMatrix)--all big names in the graphics and games industries. These are the technologies game developers are going to be familiar with when approaching Playdate, and if Playdate uses the opposite order, then there will be no end of threads like this one.

If part of Playdate's mission is to be accommodating to game developers, then deviating from an industry-standard operation will only get in the way of that. I won't press the issue if you're determined to keep it the way it is, but I want to caution in no uncertain terms that doing so is a mistake.


To clarify, I'm referring to the meaning of a matrix-matrix multiply operation in GPU shader languages. If you say this:

a = b * c

... it unambiguously means this:

[ aj+dk+gl am+dn+go ap+dq+gr ]   [ a d g ]   [ j m p ]
[ bj+ek+hl bm+en+ho bp+eq+hr ] = [ b e h ] * [ k n q ]
[ cj+fk+il cm+fn+io cp+fq+ir ]   [ c f i ]   [ l o r ]
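A quick numeric spot-check of that convention in plain Python (the letters are filled column-major, exactly as displayed above):

```python
# Spot-check: with the left and right matrices filled column-major as shown,
# entries of the product match the displayed formulas.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

a, b, c, d, e, f, g, h, i = range(1, 10)
j, k, l, m, n, o, p, q, r = range(10, 19)

left = [[a, d, g], [b, e, h], [c, f, i]]
right = [[j, m, p], [k, n, q], [l, o, r]]
prod = mat_mul(left, right)

assert prod[0][0] == a*j + d*k + g*l   # top-left cell
assert prod[2][1] == c*m + f*n + i*o   # bottom-middle cell
```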

I think I understand the problem now.

When you type A * B and both variables are instances of playdate.geometry.affineTransform, we have a custom * operator that does the multiplication. There's also a custom operator for A * v where A is an affineTransform and v is a playdate.geometry.vector2d (or another geometric element). There is no corresponding operator for v * A — you'll get a runtime error if you try that.

We do use pre-multiplication when composing transforms via functions like affineTransform:translate() or translatedBy(). But our transform * transform operator appears to post-multiply: the expression a * b is computed as the mathematical product b * a. As a result, these two expressions are not equivalent:

v1 = transform1 * (transform2 * vector)
v2 = (transform1 * transform2) * vector
v1 == v2  -- false!

which violates associativity. This seems worth fixing, though I worry that existing games are relying on the current behavior, so it might not be feasible. We'll look into it. Thanks!
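The failure mode can be reproduced in plain Python (a sketch mimicking the reported behavior, not the SDK's actual implementation): a compose operator that flips its operands while transform * vector pre-multiplies breaks the associativity identity.

```python
# Hedged sketch of the reported bug: transform * transform composes in
# reverse (a * b builds the matrix product b * a), while transform * vector
# pre-multiplies, so (t1 * t2) * v no longer equals t1 * (t2 * v).
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

class Transform:
    def __init__(self, m):
        self.m = m
    def __mul__(self, other):
        if isinstance(other, Transform):
            # Reversed on purpose, mimicking the reported behavior.
            return Transform(mat_mul(other.m, self.m))
        return mat_mul(self.m, other)  # transform * vector pre-multiplies

t1 = Transform([[1, 0, 20], [0, 1, 40], [0, 0, 1]])   # translate(20, 40)
t2 = Transform([[0, -1, 0], [1, 0, 0], [0, 0, 1]])    # rotate 90 degrees
v = [[5], [6], [1]]

v1 = t1 * (t2 * v)
v2 = (t1 * t2) * v
print(v1)   # [[14], [45], [1]]
print(v2)   # [[-46], [25], [1]] -- not equal: associativity is broken
```

With a correctly ordered compose (returning mat_mul(self.m, other.m)), the two results would agree.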


This seems worth fixing, though I worry that existing games are relying on the current behavior, so it might not be feasible.

I was thinking about this earlier. The best idea I came up with is that the runtime could check pdxversion or somesuch and select an implementation based on SDK version. This would allow existing game builds to continue to function as before, though it would break newer builds if the author doesn't swap the operands.

Is there a way to check and see how prevalent use of this operator is?
