Multi-threading with C?

Here’s another newbie question: What’s the recommended option for running a lengthy operation in C code (finding the best chess move) while getting time to update the UI under Lua?

The engine already calls back into a C function the app can provide; from there I could call into Lua and coroutine.yield(), but that callback only fires about once per second. I'd need something closer to 30 fps, depending on how the various parts perform. Could I use pthreads? (How many cores does the Cortex M7 even have?)

Which direction should I look?

Thanks in advance,

pretty sure the Playdate is a single-core machine. I'd love to be proven wrong, but I'm fairly familiar with the Cortex-M line. I'm no expert by any means.

For lengthy operations I have a few suggestions:

  • study yield() and see what it can do for you, perhaps breaking the calculation up into phases

  • optimize in other ways, e.g. binary search rather than straight iteration where possible

  • maintain a hash table (or something like it) of evaluated moves so they can be looked up rather than recalculated each time
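To illustrate the first suggestion, here's a minimal sketch of a time-sliced search: instead of running to completion, the engine advances a bounded amount of work each call and returns, so the update callback can keep drawing every frame. The names (`SearchState`, `search_step`, the work counters) are illustrative, not part of any real engine or the Playdate API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative state for a search that can be paused between frames. */
typedef struct {
    uint64_t work_done;   /* units of work completed so far */
    uint64_t work_total;  /* total units needed to finish the search */
} SearchState;

/* Do at most `budget` units of work; return true once the search is done.
 * The caller invokes this once per update cycle and draws in between. */
static bool search_step(SearchState *s, uint64_t budget)
{
    uint64_t stop = s->work_done + budget;
    if (stop > s->work_total)
        stop = s->work_total;
    while (s->work_done < stop) {
        /* ...expand one node of the game tree here... */
        s->work_done++;
    }
    return s->work_done == s->work_total;
}
```

The per-frame `budget` is the tuning knob: small enough that a step plus drawing fits in one frame, large enough that the move arrives in reasonable time.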

me hoping and praying I can just #pragma omp my way out of good design


Thanks for taking the time to reply, @DJ_irl.

I wanted to avoid having to call from deep down in the standard C engine (shared with other projects on other platforms) up to the Lua layer, but it looks like there's no way around it. It can be done without affecting the other implementations much, of course; I just hoped there was a better way.


Just FYI: we currently throw an error if the Lua function called via pd->lua->callFunction() returns LUA_YIELD instead of LUA_OK. I'm not sure how we'd handle that situation, and in your case I don't think it would do what you're looking for anyway. If rewriting your worker function so that it can return regularly (letting your game update the UI, then continuing where it left off) isn't feasible, GitHub - nstbayless/playdate-coroutines: C/C++ Coroutines etc. on the Panic Playdate might do what you want.
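One common way to get "return regularly, then continue where it left off" without a coroutine library is to rewrite the worker as an explicit state machine: each call performs one phase (or one slice of a phase) and returns, with all progress kept in a struct. This is a hedged sketch with made-up phases and numbers, not an actual chess engine:

```c
#include <stdbool.h>

/* Illustrative phases of a move search; a real engine's steps differ. */
typedef enum { PHASE_GENERATE, PHASE_SEARCH, PHASE_PICK, PHASE_DONE } Phase;

typedef struct {
    Phase phase;
    int nodes_searched;
    int best_move;
} Worker;

/* One step of the worker; returns true once the computation is finished.
 * Call it once per update cycle and draw the UI between calls. */
static bool worker_step(Worker *w)
{
    switch (w->phase) {
    case PHASE_GENERATE:
        /* ...generate the legal moves... */
        w->phase = PHASE_SEARCH;
        return false;
    case PHASE_SEARCH:
        /* ...search a slice of the tree each call... */
        w->nodes_searched += 1000;
        if (w->nodes_searched >= 5000)
            w->phase = PHASE_PICK;
        return false;
    case PHASE_PICK:
        /* ...choose the best scored move... */
        w->best_move = 7;
        w->phase = PHASE_DONE;
        return false;
    case PHASE_DONE:
    default:
        return true;
    }
}
```

A library like playdate-coroutines gives each worker its own stack so the function body doesn't have to be restructured this way, but the state-machine form keeps the shared engine code portable.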


This is good info, thanks @dave!