Here's a little video I made
I actually did find and test your viewer earlier, and that's how I knew the images didn't scale too well with error-diffusion-based dithering:
However, I returned to it tonight and found a simple fix.
Applying a small blur before scaling down makes a huge difference!
img:blurredImage(1, 1, gfx.image.kDitherTypeScreen, true)
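In context, the full thumbnail path is just blur, then scale. A minimal sketch of that pipeline, assuming `gfx = playdate.graphics` (`makeThumbnail` is just an illustrative name, not the actual app code):

```lua
-- Sketch: blur slightly before downscaling so the dithering survives the resize.
-- Assumes img is a full-size playdate.graphics.image.
local gfx = playdate.graphics

local function makeThumbnail(img, scale)
    -- A 1px blur with a screen (ordered) dither smooths out the
    -- error-diffusion noise that otherwise aliases badly when scaled down.
    local blurred = img:blurredImage(1, 1, gfx.image.kDitherTypeScreen, true)
    return blurred:scaledImage(scale) -- e.g. scale = 0.25 for a small thumbnail
end
```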
I'll be factoring your code into the camera app next! I guess instead of saving the pictures only as gif for easy export, I'll need to save 2 copies: one gif for export, and one pdi that I can read back to render this viewer. No biggie.
Very nice! Switching to an ordered dither like that does the trick.
Using your code as a starting point, I've updated my menu class to support images; it looks rather decent to me.
I wrote some logic to store a sort of rolling buffer of the last X thumbnails in memory rather than reading the filesystem all the time. I have lots to learn about PD memory management and the cost of reading/writing files.
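Roughly the shape of that buffer (a sketch, not the actual app code; `ThumbCache` and `maxSize` are illustrative names):

```lua
-- Rolling thumbnail cache: keep the most recent images in memory and
-- evict the oldest once maxSize is exceeded, so the filesystem is only
-- touched on a cache miss.
local gfx = playdate.graphics

local ThumbCache = { items = {}, order = {}, maxSize = 8 }

function ThumbCache:get(path)
    local img = self.items[path]
    if img == nil then
        img = gfx.image.new(path)          -- cache miss: read the .pdi from disk
        self.items[path] = img
        table.insert(self.order, path)
        if #self.order > self.maxSize then -- roll the buffer: drop the oldest entry
            local oldest = table.remove(self.order, 1)
            self.items[oldest] = nil       -- Lua GC reclaims the image
        end
    end
    return img
end
```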
I use a rolling buffer of sorts for zooming in Outside Parties. Why throw out data that might be seen again soon? It helped a lot.
(Outside Parties is a really cool concept with SCP Foundation vibes, the devlog is super interesting, and I'm looking forward to playing it!)
Thanks! Just looked up SCP—kind of cool!
Indeed, be careful: it's a rabbit hole.
In other news, I've updated the SDK to 2.0 and my clunky VS Code setup fell apart, so instead of coding fun things I'll get back to trying to get a Lua+C project to build on Windows + VS Code again. The official documentation has gaps.
I'm really loving the Mark 3 design! The option of using it as a case to protect the screen, the wire being tucked away during use, selfie mode: it all seems really well-considered! I'm really looking forward to seeing all the documentation on Github: I have way too many side projects already, but I'd love to make and use this!
I've had some trouble with my Teensy, and had to replace it. I'm also waiting on new power circuitry to test parts that would be easier to find than what I'm currently using.
On the software side, I spent quite some time polishing the app, making it more robust at detecting and connecting to the hardware, and integrating the camera roll feature to browse and delete pictures on device.
Next step is to record animated gifs!
So far I have crank-to-burst-shoot, so it saves frames as fast as possible. From there I'd like to process the images into a gif on device, then clean up the individual files.
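The crank-to-burst part is roughly this shape (a sketch; `captureCameraFrame` is a hypothetical stand-in for whatever grabs the current camera image, and the file naming is illustrative):

```lua
-- Sketch of crank-to-burst capture using the Playdate SDK crank API.
local burstIndex = 0

function playdate.update()
    -- Any crank motion triggers a capture, so frames pile up as fast
    -- as the update loop runs while the crank is turning.
    if math.abs(playdate.getCrankChange()) > 0 then
        burstIndex = burstIndex + 1
        local frame = captureCameraFrame() -- hypothetical: current camera image
        playdate.datastore.writeImage(frame, string.format("frames/frame_%03d", burstIndex))
    end
end
```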
I've found this little library that looks quite straightforward and am now trying to port it to the pd.
So far all it does is crash the pd with error e0. Anyone here willing to help me out with this? (all good now)
PS: here's what it could look like (YouTube on my laptop screen; I processed these frames with an external tool)
Video next?! Is there anything you can't do?!
All right, so the code works in the Simulator (on Windows). It takes 0.5 seconds to process 50 pdi files into one animated gif. This is another YouTube video I filmed with the camera.
I don't yet have it running on the real device unfortunately: it currently crashes before processing anything with
ReadFile failed (995) (any clue what that code means?). And once this is hopefully sorted out, I'll probably also need to rewrite the code to make it non-blocking (e.g. process one frame, return to Lua, update, repeat).
PS: since we create the gif manually here, we can use an arbitrary color palette... maybe that could be a fun little option
That would be cool to maybe put in two hex codes with the crank, and have the device generate a pre-colored GIF!
(And/or you could have a few pre-defined palettes, like pure B&W, Playdate LCD grays, and whatever crazy colors.)
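Mapping two hex codes to an encoder palette is simple enough. A sketch in plain Lua (the encoder-side plumbing is omitted, and the example colors are only rough Playdate-LCD-ish values, not official ones):

```lua
-- Parse a "#rrggbb" hex string into three 0-255 components.
local function hexToRGB(hex)
    hex = hex:gsub("#", "")
    return tonumber(hex:sub(1, 2), 16),
           tonumber(hex:sub(3, 4), 16),
           tonumber(hex:sub(5, 6), 16)
end

-- Build a 2-color palette for a 1-bit source:
-- index 0 = dark pixels, index 1 = light pixels.
local function makePalette(darkHex, lightHex)
    local dr, dg, db = hexToRGB(darkHex)
    local lr, lg, lb = hexToRGB(lightHex)
    return { { dr, dg, db }, { lr, lg, lb } }
end
```

For example, `makePalette("#322f28", "#b1aea2")` would give a roughly LCD-gray pair instead of pure black and white.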
Update: after some fiddling (sprintf and malloc were the reasons the console didn't like my code as much as the Simulator did), the gif encoder now works on the device and takes 23 seconds to create a 65-frame, 460 KB gif. I bet patterned dithering will produce smaller files.
With coroutine.yield and a bit of back and forth with the C runtime, we even get a nice progress bar.
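The yield-per-frame pattern looks roughly like this (`gifEncodeFrame` and `drawProgressBar` are hypothetical stand-ins for the C-side encoder call and the app's UI code):

```lua
-- Sketch: encode one frame per update, yielding a progress ratio each time
-- so the UI stays responsive during the (slow) on-device encode.
local frames = playdate.file.listFiles("frames/") -- captured .pdi frame paths

local job = coroutine.create(function()
    for i = 1, #frames do
        gifEncodeFrame(frames[i])    -- encode one frame on the C side...
        coroutine.yield(i / #frames) -- ...then hand control back with progress
    end
end)

function playdate.update()
    if coroutine.status(job) == "suspended" then
        local ok, progress = coroutine.resume(job)
        if ok and progress then
            drawProgressBar(progress) -- hypothetical UI helper
        end
    end
end
```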
So yeah, that feature is going to make it into the camera app. I just don't think there's going to be an animated preview in the camera roll (gifs can't be read back, so we would also need to generate a pdv or something, but I think I'll leave that as an exercise for a providential GitHub forker).
For some reason I wasted my time recreating a BBC Test Card to use as a placeholder when the camera signal isn't available.
Got pretty close to the original. You'll just never see it in the app if the camera module works as it should (I guess that's why I'm posting it here).
In other news, you'll very likely hear about this project in Tiny Yellow Machine's next Playdate Community Direct on the 6th, so stay... tuned!
PS: here's the original file, if you have a use for it (320 x 240, so you might want to make your own full-width version):
Just a quick update to remind you to keep an eye on the next TYM live show on the 6th.
aaaaand it's out! Thanks TYM for covering the project in your live show!
If you're looking for the repository, it's right here: GitHub - t0mg/pd-camera: Experimental camera addon for the Playdate handheld console
I spent a lot of time documenting everything; I hope you'll find it interesting.
Seeing this in the Community Direct was brilliant! I don't think I'm alone in wanting to spend money on one of my own.
Thanks! Your encouragement helped me stay focused! And thanks to everyone for contributing to this thread.
Thank you for publishing this very nice product!
I recently purchased a device called the M5Stack CoreS3, which has a camera, so I tried connecting it to pd-camera-app using the pd-camera source as a reference. It seems to be working reasonably well, although it's still unstable.
I would like to release pd-camera for M5Stack CoreS3 under the MIT license when it's completed; would that be a problem?