Is there a limit on how long the update callback can run for? [EDIT: yes, 10 seconds]

I'm developing in C on Windows, building a pdx for the device, and uploading the pdx to the device with the Windows Simulator.

My app isn't a game, it runs some diagnostic tests and logs results with logToConsole. All of the execution happens the first time my update callback is invoked. It seems that if the update takes more than about 10 seconds then my program is forcibly halted and the device is reset. Is this a deliberate failsafe built into the operating system to terminate "stuck" programs? Is there a way to disable this behaviour?

Normally I'd never want my update function to run for 10 or more seconds, but in this case I'm doing some memory benchmarking in a small utility program, this isn't something that will ever go to an end user.

EDIT: I see mention in some other posts of "run loop stalled", is this what I'm hitting and can I do anything about it?

Thanks for any light you can shed on the situation!

For reference, here's what I see in the simulator's console window when this happens:

(output from my program, followed by ...)
22:05:40: ReadFile failed (995)
22:05:40: PlaydateSerialRead failed
22:05:40: SerialReadThread OnExit
22:05:40: Device Disconnected: COM4
22:05:40: PlaydateSerialClose (0x00000DC8)
22:05:41: Playdate Found: COM4
22:05:41: Device Connected: COM4
22:05:41: PlaydateSerialOpen (COM4)
22:05:41: Serial port opened (0x00000DC8)
22:05:41: SetCommState OK (0), BaudRate: 115200, fDtrControl: 2
22:05:41: SerialReadThread::Entry
echo off
cc=9.2.1 20191025 (release) [ARM/arm-9-branch revision 277599]
time and date set

I vaguely remember that the limit is 10 seconds indeed.

Note that the update runs in a 'coroutine'

You might be able to defeat the timeout by ensuring your code calls 'coroutine.yield' every 5 seconds or so. No other code modifications are necessary. Your code will almost immediately continue executing right where you called this function.

There might be weird side effects, like button state and other timers not being updated while your code is still inside the update callback.

Definitely seems like there's a 10 second limit. This also applies to eventHandler(), not just the update callback, so I was unable to run my tests on kEventInit. I don't see anything in the C API about coroutines, is there documentation elsewhere that I'm missing?

My tests involve a lot of repeated loops (I'm benchmarking memory accesses, and repeating them enough times to get accurate timing information, not to mention running a whole bunch of tests one after another). I'll revise my test framework so I can spread the work over multiple updates.

Coroutines are a Lua thing--they won't apply to C. The update callback is given 10 seconds as a concession, but it should never-ever-ever take that long: it's just there to make sure your program isn't prematurely terminated for an abnormally long task that will eventually finish.

If you need to perform work in excess of 10 seconds, you'll need to introduce some mechanism to perform this work in chunks of < 10s at a time.

Yes, you have to pass control back to the run loop within 10 seconds or the system assumes your code is stuck. That's maybe a clumsy way to do it, but in practical usage if you're blocking the run loop for even 1/2 a second your game is going to feel pretty janky. (though I understand this isn't a game you're talking about..)

I've never used it and can't vouch for it, but here's a coroutine library for the Playdate C API, if reworking your code so it runs in chunks and returns out of the update function regularly isn't feasible: [C/C++] Coroutines Library for Playdate


Thanks @dave! The coroutine library looks fancy, but instead I've refactored my test runner to run one test per update, and I'll just make sure that each test takes less than 10 seconds.

Should have some results to share soon. :slight_smile:


I have a similar issue to this: saving my game can take a long time, over the 10 s limit, because there can be a lot to save and the JSON can grow to more than 1 MB.

For explicit saves or auto-saves I can work around this. I display a "SAVING" modal and save a little bit of content each frame, until it is all streamed out.

But I also want to be able to auto-save on kEventTerminate, kEventLock, and kEventLowPower, and there I cannot use this trick; the save has to happen all in one go...

I cannot think of anything else I could do here, so it might be safest overall to not attempt to save on exit and have a message on the pause window to tell the player to save before quitting?

Happy to hear any suggestions of alternates!

Are you using datastore? It sounds like you might want to think about writing your own save data format into string data and using the playdate.file API instead of datastore's JSON.

Or, could you split up your save data into many files and only write out the stuff that's changed?

Thanks @AlexMay for the thoughts,

I'm using the Playdate JSON C API, and I have already split the game into different "worlds" to help manage the amount of data saved (also limited by the system RAM!). But an individual world can still get to the size where I can't save it in the kEventTerminate callback.

The game state is also correlated across the whole world, and the things that get saved are all changing on a frame-by-frame basis, so I don't think I can shrink things there.

Short of coming up with my own binary format instead of using the JSON C-API, I am expecting that I am going to be stuck here.

(And I really don't want to be supporting my own binary format here, it took long enough as it is to debug the JSON serialisation! :slight_smile: )


I'm in the process of doing pre-Beta testing, and I'm running up against this as well.

Saving a game state to a save slot takes 12-15 seconds on actual hardware (I do prompt the user that 'this will take up to 15 seconds, please wait'), but on hardware I get the infamous crash.

I suppose I could (try?) to break up the save into chunks (it's already multiple JSONs, so that's possible) and kludge with juggling semaphores etc. but it seems like a lot of trouble to go through when I know that the game 'isn't hung' but is just going to take >10 seconds.

As games get larger on Playdate (hint: once we get out of Beta :wink: ) IMO there should be some way to (temporarily?) flag that a long I/O is about to happen and that the Playdate runtime shouldn't freak out.

Hmm. Will still post this here, but will also put in a Feature Request for same.

Feature Request posted.

What kind of stuff are you saving? Usually saves are just 2kb or so...

Are you asking me, or timboe?

Anyhow, the original issue has been answered (tl;dr - in Lua, call 'coroutine.yield()' periodically during I/O operations, which sidesteps the watchdog timeout issue).

Usually saves are just 2kb or so...

Usually (at least on Playdate) games aren't this big :wink: .

Edit: went into some detail on this in the 'feature request' thread, would be redundant to revisit it here.

My saves are certainly more than 2 kB :slight_smile:

Here's one for scale (894 kb uncompressed)

As above, the most annoying thing is to not be able to automatically save when the game is exited.

I need to count on the player remembering to save before they exit to the launcher, with an every-5m auto-save to ensure that they don't lose too much progress if they do forget to manually save.

But there really isn't any good solution to this that I am aware of....

It looks like your game world is one big .json (?).

For UTT (Under The Tree) the 'categories' of things to get saved are level states for 10 levels (includes stuff like doors open/closed/locked, pickups, triggers, messages, fog, monster instances) and player state (inventory/stats/position-in-world/etc).

Level state for 'current level' is persistified continuously, with granularity being at 'significant events' (opened a door, looted a chest, found a pickup, encountered a monster and won/lost, leaving a level for some other location). Player state is also persistified in the same way at that same granularity. Exiting a level persistifies both that level and player state.

So for any given level there's like 8 small-ish .json for that level's state, and 4 medium-ish for player state. Exiting the game takes about the same bandwidth as exiting a level (not all that much, at most a second, if that).

What was taking time for us was the (optional) saving to and loading from game slots - each slot being a snapshot of the state for all levels currently visited and the player state. As the player progresses through the game more levels become 'dirty' and their aggregate 'unique' state data gets larger. Potential late-game slot load/store can approach 75 or so individual .json and a total of 250K of data (big, but still less than a third of your world size).

Once I figured out the 'coroutine.yield()' thing, I was able to keep the watchdog from timing out and made a nice 'progress bar' to go along with it:


But what made that possible was having things spread across up to 70+ .json, not one big honkin' one o.O .

But there really isn't any good solution to this that I am aware of....

All I can recommend is cutting your .json into multiple ones, so that you can call 'coroutine.yield()' in between each 'datastore.write' and give some UI indication of progress. Otherwise you're likely to run into the watchdog issue (aside from the lack of feedback for some number of seconds, which is a user-experience issue).

with an every-5m auto-save

Cutting things up would also allow you to have a progress indicator for the auto-save, as well.

Hi Tengu,

Right - I already do the regular and auto-saving of this large world file asynchronously. It takes around 15 seconds on device and I populate one top-level field of the JSON on each frame before returning.

My point was that it is impossible for me to also save when the player exits the game; there I get one kEventTerminate callback and that's it...

I wrote a long reply, but then I realized (I think?) that you're not actually committing to disk until game exit (and what you're committing to disk is the serialization of one great big table) - is that right? (edit on re-re-read: no, you're doing that big commit to disk also, but during gameplay it's async).

(edit: this still applies, I think): In which case I don't see an answer that doesn't involve breaking your monolithic json into smaller chunks (meaning, your world is defined by multiple smaller tables rather than one big one).