I promised this months ago, but never found the time to get it to a minimally usable state. Until now!
1bitvideo.app.zip (856.7 KB)
It’s not pretty, but it works, mostly. Open a video file or drag it to the preview window, hit the play button to preview, or Save to write out a “pdv” file to disk. That also extracts the audio and saves it separately, which I’ll explain later. It saves from the current position (set with the slider) to the end of the file, or until you hit Stop. I can’t decide if that makes more sense than starting from the beginning… You can change the different settings during preview to see how they affect the output.
Dither is which dithering algorithm to use. The first four are ordered, the rest are error-diffusing and pretty similar. I’m a Stucki guy, myself.
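If you’re curious what the two families actually do, here’s a rough Python sketch (illustrative only, not the tool’s code): an ordered dither compares each pixel against a tiled threshold matrix, while error diffusion pushes each pixel’s quantization error onto its not-yet-processed neighbors. I’ve used a 4×4 Bayer matrix and Floyd–Steinberg weights to keep it short; Stucki works the same way with a bigger kernel and different weights.

```python
# Illustrative sketch of the two dithering families; not the tool's actual code.

BAYER_4X4 = [  # ordered dithering: a fixed threshold matrix, values 0..15
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def ordered_dither(gray, w, h):
    """gray: row-major list of 0..255 values -> list of 0/1 pixels."""
    out = []
    for y in range(h):
        for x in range(w):
            threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) * 256 / 16
            out.append(1 if gray[y * w + x] >= threshold else 0)
    return out

def error_diffusion_dither(gray, w, h):
    """Floyd-Steinberg: spread each pixel's quantization error to neighbors."""
    buf = [float(v) for v in gray]
    out = [0] * (w * h)
    for y in range(h):
        for x in range(w):
            i = y * w + x
            new = 255.0 if buf[i] >= 128 else 0.0
            out[i] = 1 if new > 0 else 0
            err = buf[i] - new
            # weights 7/16, 3/16, 5/16, 1/16 to the right and below
            if x + 1 < w:
                buf[i + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[i + w - 1] += err * 3 / 16
                buf[i + w] += err * 5 / 16
                if x + 1 < w:
                    buf[i + w + 1] += err * 1 / 16
    return out
```

On a flat 50% gray image the ordered dither turns on exactly half the pixels in a fixed pattern, which is why ordered output looks like a regular texture while error diffusion looks grainy.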
Contrast, Brightness, Scale to fit are obvious, I hope.
Frame rate is stored in the pdv and your code uses the value to figure out which frame to show when (again, more on that later).
fwd bias is a weird one. That’s how much a pixel resists flipping, if it can avoid it. This reduces flicker and also improves compression, but too much of it causes ghosting and jumbled pixels. I think 30 is a reasonable value? It probably depends on the video.
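Here’s how I understand the bias, as a sketch of a single pixel’s decision (the names and the fixed threshold are mine, not necessarily how the tool implements it): a pixel keeps its previous frame’s value unless the new gray level clears the threshold by more than the bias in the other direction.

```python
def biased_bit(gray, prev_bit, bias=30, threshold=128):
    """One pixel's 1-bit decision with forward bias (illustrative sketch).

    gray: the new frame's 0..255 value; prev_bit: last frame's 0/1 output.
    The pixel resists flipping: it must overshoot the threshold by `bias`
    to change state, which reduces flicker between frames."""
    if prev_bit == 1:
        # a white pixel only goes black if it drops below threshold - bias
        return 1 if gray >= threshold - bias else 0
    # a black pixel only goes white if it rises above threshold + bias
    return 1 if gray >= threshold + bias else 0
```

With bias=30, a pixel hovering around mid-gray (say 120) stays whatever it was last frame instead of flickering, which is also what makes the delta frames compress better.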
iframe every x frames: Since the fwd bias carries error forward we need a clean frame (“iframe”, for some reason) every once in a while to reset things. And since it flips more pixels than a biased frame (delta frame, or “dframe”), the iframe can produce a noticeable flicker, especially on the device screen. Changing the rate at which iframes occur affects the flicker in unpredictable ways. Play around with it, see what works! Iframes are less compressible than dframes so more iframes equals bigger files.
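The cadence itself is simple; something like this, assuming frame 0 and every Nth frame after it is encoded clean (a guess at the scheduling, not the tool’s actual code):

```python
def frame_kinds(num_frames, iframe_interval):
    """Label each frame 'I' (clean iframe, resets accumulated bias error)
    or 'D' (biased delta frame). Illustrative sketch only."""
    return ['I' if i % iframe_interval == 0 else 'D'
            for i in range(num_frames)]
```

So with “iframe every 4 frames” over 8 frames you’d get `I D D D I D D D`, and raising the interval trades iframe flicker (and file size) for longer stretches of accumulated bias error.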
add reverse step to iframes: The video API lets you grab frames at random from the file, but it’s optimized for playing in forward order. If you’ll be playing backwards as well as forwards, this setting adds an extra dframe to generate the previous frame from an iframe.
So about that audio file… The pdv format does not (yet) contain audio data. To play a video file synced with audio, you play the audio file in the normal way, then use its current position to figure out which frame to show. The audio file the tool saves isn’t in a format the playdate compiler recognizes so you’ll have to convert it to aif or wav yourself. (Sorry! I’ll see what it takes to make it save aif instead…) Here’s a simple video player:
```lua
-- run updates as fast as possible; frame timing is driven by the audio instead
playdate.display.setRefreshRate(0)

video = playdate.graphics.video.new('station')
video:useScreenContext() -- render frames directly to the display

audio, loaderr = playdate.sound.fileplayer.new('station')
if audio ~= nil then
	audio:play()
else
	print(loaderr)
end

lastframe = -1

function playdate.update()
	-- use the audio position to pick which video frame to show, keeping them in sync
	local frame = math.floor(audio:getOffset() * video:getFrameRate())
	if frame ~= lastframe then
		video:renderFrame(frame)
		lastframe = frame
	end
end
```
station.pdx.zip (3.4 MB)
(In retrospect I shouldn’t have used a time lapse video, but that’s what I had at hand and it’s late and I want to post this and go to bed. The sped up time isn’t a glitch, it’s supposed to look like that.)
But now let’s talk about file size. The pdv in that sample there is only 15 seconds of 30 fps video and it’s 2.4 MB. (The audio is 44kHz mono, 1.3 MB uncompressed but would be 1/4 that if I’d ADPCMed it.) If you push the bias up and compress the audio, that’s around 10MB per minute of 30 fps video. That’s… not insignificant! I’m putting this tool out there as something you can play with if you’re interested, not to suggest you should be making giant FMV games.
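If you want to sanity-check those numbers (my arithmetic, using the figures above):

```python
# Back-of-envelope math from the sample above; nothing here is measured
# beyond the 2.4 MB / 15 s figures quoted in the post.
pdv_mb, seconds = 2.4, 15
video_mb_per_min = pdv_mb / seconds * 60   # ~9.6 MB per minute of 30 fps video

# 44.1 kHz mono, 16-bit PCM audio
audio_bytes = 44100 * 2 * seconds          # ~1.3 MB uncompressed
adpcm_bytes = audio_bytes / 4              # 4-bit ADPCM is roughly 1/4 the size
```

Which is where the “around 10 MB per minute” estimate comes from, before you even add the audio.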
Let me know if you have any questions or suggestions or comments or run into problems or etc.!