OUTSIDE PARTIES horror scavenger hunt across a 1.5-gigapixel HDR image

A strange yellow device lies half hidden in the weeds beside a road you never noticed before.

If you find it, don’t pick it up.

It is not for you.

The device identifies itself as a “K5 Panopticon Receiver” developed by the mysterious Cristina Roccati Institute. It presents you with some cryptic warnings, mysterious incoming "radio" signals, and a massive scrolling “psychic image” full of surreal and disturbing things.

Who sent the signals, where is the image from, and what is going on? What will you discover? Probably nothing good… Don’t play in the dark!

OUTSIDE PARTIES is a “hidden picture”/“hidden object”-style horror scavenger hunt, coming to Playdate in 2023. (Much work remains—the artwork is full-res but it’s just rough test art. All details are subject to change!)

Follow me on Itch or Mastodon for updates.

Imaging Engine

This screenshot is a portion of a 1442-megapixel image, cropped to the Playdate screen (a 400x220 viewport plus the 20-pixel UI bar). That’s right, nearly 1.5 gigapixels of imagery like this.


It’s not a 1-bit black-and-white image. It’s a grayscale HDR image dithered to 1-bit on the fly. You can adjust the dynamic range in realtime with the crank, choosing whether to bring out details in the shadows, the highlights, or anywhere in between. The image’s grayscale data extends well beyond the range visible at any one time. (Spooky "glitches" are intentional!)


The image is a 360° panorama (flat, no warping). A long strip that wraps around horizontally. The tiny red rectangle here is the visible portion of the blue pano. The image is 256 Playdate screens wide by 64 screens tall. That makes it about 15.1 x 2.1 meters, or 49.4 x 6.8 feet. Equivalent to about 16.5 standard doors side by side, all at Playdate resolution. Larger than the floor of a standard 40’x8’ shipping container.

Horizontally, you can pan forever. Vertically, you will hit the top or bottom… if you scroll far enough. Luckily, you can zoom in and out with (A) and (B) to get around faster.


The magnification numbers refer to “how many screens tall” the image is. So at “1x” the panorama matches the screen height (and you can see 1/4 of the width at once). At “.5x” minimum zoom—depicted below—the panorama is 1/2 the screen height, and you can see 1/2 of the width at once. So you can travel 180° to the far side of the panorama in moments. The red region is the maximum visible portion when zoomed out. The blue is the entire image.
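As a sanity check on those numbers, here's an illustrative Python sketch (the game itself is Lua; the constant and function names are mine) of how magnification maps to the visible fraction of the panorama:

```python
# The panorama is 256 screens wide by 64 screens tall, so it's 4x wider than tall
# in screen units. Magnification z means "the panorama is z screens tall."
ASPECT_SCREENS = 256 / 64

def visible_fraction(z):
    """Fraction of the panorama's width and height visible at magnification z."""
    width_frac = min(1.0, 1.0 / (ASPECT_SCREENS * z))
    height_frac = min(1.0, 1.0 / z)
    return width_frac, height_frac

assert visible_fraction(1.0) == (0.25, 1.0)  # 1x: full height, 1/4 of the width
assert visible_fraction(0.5) == (0.5, 1.0)   # minimum zoom: half the width at once
```

At 64x (max zoom) the same math gives a 1/256-wide, 1/64-tall sliver, which matches the one-screen-per-tile figure above.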

You can pan freely with the d-pad, even diagonally. Since the image is so wide, I made horizontal movement go in 20% larger chunks. (I limited the Simulator framerate to reflect what you get on actual hardware.)


The HUD shows your coordinates on the image, measured in degrees of azimuth and elevation. (0° is “North.” The vertical field of view is 50°, ± 25°.) And while turning the “phase screw” (a.k.a. crank) the “noumenon phase” (dynamic range setting) is displayed for reference.
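A hedged sketch of that HUD math in Python (the game is Lua; the function name and the exact sign conventions are my assumptions, derived only from the dimensions stated above):

```python
IMG_W = 256 * 400   # panorama width in pixels (256 screens x 400 px)
IMG_H = 64 * 220    # panorama height in pixels (64 screens x 220 px viewport)
V_FOV = 50.0        # vertical field of view in degrees (+/- 25)

def to_az_el(x, y):
    """Map a panorama pixel to (azimuth, elevation) in degrees.
    x wraps horizontally; (0, IMG_H/2) is dead level at due North."""
    az = (x % IMG_W) / IMG_W * 360.0
    el = (IMG_H / 2 - y) / IMG_H * V_FOV
    return az, el

assert to_az_el(0, IMG_H // 2) == (0.0, 0.0)      # "home": due North, dead level
assert to_az_el(IMG_W // 2, 0) == (180.0, 25.0)   # far side, top edge
```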

If you are zoomed in near the top or bottom, and then zoom out from there, it’s inherently not possible to remain centered on that portion near the edge (since the image won’t ever overscroll past the edge). But you won’t lose your place: zooming out remembers where you came from, and will zoom back in accordingly. You can also jump “home” to dead level at due North at any time from the System Menu.

At the bottom of the screen you can always see the live transcript of incoming signals, whether useless fragments, or—as you zero in on a signal—coherent messages. (But from whom?) Everything you do visually also has an audio component: you are tuning “psychic radio signals” using the image as reference. Tuning the audio and tuning the image are one and the same process.

The “Signal Log” screen (design is only a placeholder!) is a growing list of items where you select which one to hunt for, or re-visit items already found. Access the Signal Log by pressing (A) + (B) together, or from the System Menu. [EDIT: those aren't the final methods I landed on. Instead, you keep pressing B to zoom out "past the minimum," OR long-press B to quickly get to the Signal Log without losing your zoom. And I added "LOG" on the left of the zoom scale to make it discoverable.]


How Does It Perform on Hardware?

Quite well!

The game’s image pipeline processes around 100 images on the fly to get to what you see on the screen. Different parts of the screen are sometimes generated in different ways.

I’m working on a detailed post about that pipeline, but for now, here are a few details:

• The entire image is loaded at launch, not streamed from disk.

• Nothing is generated procedurally, or assembled from parts. The “hypnopanorama” really does come from a single massive Photoshop document. Hundreds of MB. Simply exporting the file takes several minutes. (Bad Adobe!)

• Everything is coded in 100% Lua! No C. I like the streamlined Lua workflow with Panic Nova, plus I didn’t feel like learning yet another new language. Nor did I want to fold in third-party libraries. (Some cool C libraries exist for grayscale images, but my HDR scheme called for a custom solution.)

• My original attempt at this concept was simple, elegant, and would have taken several MINUTES PER FRAME to run. So elegance and “conceptual purity” had to go out the window. I focused on what the player will actually see and experience, and started cutting every corner I could think of without compromising that. I hacked around things in shameful ways with ugly side effects, then coded mitigations to hide the side effects. I made things fast when I could, and when I couldn’t, I worked to make them FEEL fast.

• The image pipeline grew to around 100 steps… more conditionals, more corner cases, more weird visual artifacts to address… yet it kept getting faster and faster.

How Does It Work?

Doing more with less data = speed! I’m surprised by how good this photographic style of imagery is able to look, using far less data than I would have predicted.

In the end, that 1.5 gigapixel HDR image fits in a PDX of a little over 6 MB. (I haven’t done the audio yet, so that will grow some.) And it holds steady at about 60% usage of the Playdate’s RAM.

At max magnification (64 screens tall x 256 wide), every 64-pixel zone of multi-bit HDR grayscale imagery is stored internally as just THREE BINARY DIGITS of data.

[Animation: phase cycle / source bits]

That animation shows the full dynamic range of the grayscale panorama. That portion of the panorama is derived from those three little 1-bit data sets framed in blue. Every set of 3 source bits becomes 64 rendered pixels—including each of those rendered pixels having its own HDR range of dark-to-light values to crank through.

It’s still a lot of bits total, though, considering that this view is only 1/16,384 of the overall image!
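Conceptually, the three source bits per zone select one of 8 gray levels, and each trio fans out to an 8x8 rendered area. The real renderer uses dither "recipes" rather than a flat fill, and which bit is "most significant" here is my assumption, but this Python sketch shows the data ratio:

```python
def decode_zone(b1, b2, b3):
    """One source pixel from each of the three 1-bit planes selects one of 8
    gray levels and covers an 8x8 on-screen zone: 3 bits -> 64 rendered pixels.
    (In the game the zone is produced by a dither recipe, not a flat fill.)"""
    gray = (b1 << 2) | (b2 << 1) | b3   # assuming bit set #1 is most significant
    return [[gray] * 8 for _ in range(8)]

zone = decode_zone(1, 0, 1)   # gray level 5 out of 0-7
assert zone[0][0] == 5
assert sum(len(row) for row in zone) == 64
```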

Example “Recipe”

There are actually 6 sets of 1-bit source data, but only 3 are used at a time. The visible image is generated from those 3 sets of bits using numerous different "recipes," incorporating three different dither types that need to interact—including several pre-defined 128x128 blue noise patterns which are themselves tweaked to interact properly with each other. Here's the recipe used to generate the image above when the crank is at "PHASE VII.6":

Component A
1. Draw the inverse of bit set #2
2. On top of Step 1, composite bit set #1 with 33% blue noise

Component B
3. Draw bit set #3
4. On top of Step 3, composite bit set #2 with black transparent

Component C
5. On top of Component A, composite Component B with white transparent

Component D
6. Draw bit set #3 again
7. On top of Step 6, composite bit set #2 with white transparent
8. On top of Component C, composite Step 7 with black transparent

Component E
9. Draw pure black fill
10. On top of Step 9, composite bit set #2 with 67% blue noise
11. On top of Step 10, composite bit set #1 with 50% blue noise and white transparent
12. On top of Step 11, composite bit set #3 with black transparent

Component F
13. Take Component D and apply .5-pixel blur using Atkinson dither
14. Take Component E and apply .5-pixel blur using Atkinson dither
15. On top of Step 13, composite Step 14 with 27% Bayer 8x8 dither

Then periodic random “glitch” effects are added to the result.
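To make the recipe steps concrete, here's a toy Python model of the 1-bit compositing primitives (my own simplification: images are flat lists, and plain random numbers stand in for the carefully-tuned blue-noise masks):

```python
import random

def composite(dst, src, mode="opaque", coverage=1.0, rng=random.Random(0)):
    """Toy 1-bit compositing in the spirit of the recipe steps.
    'black_transparent': src's black (0) pixels don't draw;
    'white_transparent': src's white (1) pixels don't draw;
    coverage < 1.0 stands in for an N% blue-noise mask (plain RNG here)."""
    out = []
    for d, s in zip(dst, src):
        if rng.random() >= coverage:        # masked out by "noise": keep dst
            out.append(d)
        elif mode == "black_transparent":
            out.append(s if s == 1 else d)
        elif mode == "white_transparent":
            out.append(s if s == 0 else d)
        else:                               # opaque draw
            out.append(s)
    return out

# Component A from the recipe above, on tiny 4-pixel "images":
bits1, bits2 = [1, 0, 1, 0], [1, 1, 0, 0]
comp_a = [1 - b for b in bits2]                    # step 1: inverse of bit set #2
comp_a = composite(comp_a, bits1, coverage=0.33)   # step 2: #1 with 33% "noise"
```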

Many of these steps are done only for the portion of the image that changes. The rest is re-used and translated to new coordinates. For example, when panning diagonally, two strips are rendered: a horizontal strip and a vertical strip in an "L" shape. The dimensions of the strips vary as needed, to composite cleanly with the existing render and avoid seams and artifacts. Sometimes extra edge pixels must be rendered for this reason.
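A sketch of that "L"-shaped dirty-region logic in Python (my own reconstruction; per the note above, the actual strip sizing is more involved, sometimes rendering extra edge pixels to avoid seams):

```python
def dirty_strips(view_w, view_h, dx, dy):
    """Regions (x, y, w, h) newly exposed by panning (dx, dy), in view
    coordinates. A diagonal pan yields a horizontal strip plus a vertical
    strip in an 'L'; the vertical strip is shortened so the shared corner
    isn't rendered twice."""
    strips = []
    if dy:  # horizontal strip along the top or bottom edge
        y = 0 if dy < 0 else view_h - abs(dy)
        strips.append((0, y, view_w, abs(dy)))
    if dx:  # vertical strip along the left or right edge, minus the corner
        x = 0 if dx < 0 else view_w - abs(dx)
        y = abs(dy) if dy < 0 else 0
        strips.append((x, y, abs(dx), view_h - abs(dy)))
    return strips

# Panning right and up by (8, -4) on a 400x220 view re-renders only ~3,300
# pixels instead of the full 88,000:
assert dirty_strips(400, 220, 8, -4) == [(0, 0, 400, 4), (392, 4, 8, 216)]
```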

I optimized first for smooth cranking (it’s super fast), then for panning performance, and lastly for zooming (the least frequent action).

Game Play

Game play will consist of exploring and getting to know this strange, detailed image. Early signals will be easy to find. Then, as you get to know the image better, that knowledge will help you track down less obvious signals.

Four main things set OUTSIDE PARTIES apart from conventional “hidden picture” games:

1. Dynamic range: “tune” the image on the fly (by cranking) to brighten dark areas or dim bright ones, revealing things otherwise lost in shadows or glare.

2. Audio and text “signals” that are tuned at the same time you tune the image. (This is inspired by “numbers stations”—possibly including spoken words, but that’s not yet definite.) This tuning action provides a kind of “warmer/colder” guidance to zero in on an item once you get close. And tuning does not require the sound be turned up: the live transcript lets you visually see a signal increasing in clarity as you locate and tune it. Zeroing in like this is useful when other clues leave you stumped, but it doesn’t hold your hand: you must choose one signal (target item) at a time to track down. You can browse the list of clues and change your target at will, but you can’t “accidentally” get credit for stumbling across other items.

3. One MASSIVE panning/zooming scene with many items to find, instead of multiple small scenes to search. No scene loading delays or interruptions—a single experience. The list of target items starts small and simple, but the signals you track down unlock additional signals in turn, providing clues to further items. To “acquire” an item, you must zoom in to it, tune it to clarity, and then wait for the looping signal to play through once. (They’re not long.)

4. Varied “clue” types instead of a simple list of items. Signals can provide one or more of the following clues, so some targets will be easier to find than others:

• A clear name or description

• A vague description or hint

• A trail you must follow across the image

• Exact coordinates (azimuth and elevation)

• Approximate coordinates

• Just one coordinate, leaving you with a “band” to search

• The “phase” (dynamic range to crank to)

No need to take notes or track down old signals again: you can read (and hear?) past signals at any time from the Signal Log screen, which is also where you choose one as your current target.

The signals also reveal snippets of the sci-fi/horror mystery. No “cut scenes” to wait through—the story comes to you as part of the game play.

I expect most “target items” will be acquirable around 8x or 16x magnification. Zooming in for a closer look than that will most often be optional and “just for fun.”




...I kid! This looks great and your detailed breakdown is very interesting for both design and technical implementation!


How wonderful! This sort of totally wild idea really floats my boat.

I'll follow your progress with great anticipation.

Well done so far. I really appreciate the deep insight into your process.


Glad the idea seems to grab people!

The Playdate is so cute, whimsical and cheerful. I really want to make a game that saps all that joy out of it.

My original mobile (possibly ARG?) game concept—long before Playdate—had NO GRAPHICS. Just VFD-looking text (green with occasional red "jump scares") and audio inspired by shortwave "numbers stations." (Look them up and listen if you want to be creeped out.)

When the Playdate was announced, the crank immediately made me think of this never-finished horror radio-tuning concept.

But the PD screen was so cool! I imagined that the silvery LCD would give photos the look of creepy old daguerreotypes, and thought about marrying the signal-hunting to a big scrolling image. So here we are: the image component has grown to BE the main game, and the original all-text interface lives on in the bottom UI bar.

Long before the SDK was out, I made a JavaScript/Canvas-based proof-of-concept for the HDR shadow/highlight tuning, requiring a fast desktop processor. I simulated the crank by dragging the purple bar back and forth:

When I eventually got a Playdate, the concept smacked hard into the wall of what that little CPU could actually do. Not to mention, the lack of ability to even load a grayscale image internally.

But I didn't give up. I spent several days—in an actual cabin in the woods with no neighbors and no Internet—trying to optimize into something playable. Success! (Followed by months of on-and-off reworking to make it look and work better. The last 5% of the experience was probably 95% of the work. So much for 80/20!)

I haven't decided how to handle the audio yet—actual voices, I hope? I intend to compress them to super low distorted bitrates, because that in itself is creepy! And gentle on the RAM.

(And true to my Playdate roots... there is already a clock in the game. Just an Easter egg some players may discover.)


This is quite something! Can I ask you to elaborate on

I'm not sure I understand what you mean here.


I read it as three pieces? But yeah I had to think hard about that paragraph.


Ah, the "bits" are the three 1-bit base images, yes - that must be it! The double use of bit threw me a bit 🙂


Right, three 1-bit "source pixels" go into the recipe to become an 8x8 dithered zone on-screen.

The big Photoshop document has 256 gray levels, which then get indexed (using a custom CLUT and Diffusion dither turned up to 100) into an 8-color (3-bit) intermediate step. But even then it looks like more than 8 grays because of the dithering, along with all the texturing contained in the artwork.

Then I extract each bit as a separate PNG for the SDK to import. Re-assembling the bits uses the various "recipes." (I first sought a generalized solution of a single process, and failed, but that would have run slower anyway.)

The reason there are two sets of three 1-bit images is because the source data exists at two scales, for the sake of the clearest detail when zoomed out. The second set is used only for "1x" and ".5x" (maybe "2x" in future). I'm not yet certain the extra set is worth it, but it can be removed in later testing. I'm autogenerating it for now, knowing I can play with sharpness etc. later on.

Currently, the source pixels are used at their unscaled native size when viewing "1x" and "8x." But even when doubled you can't see obvious pixels because the grayscale dither detail makes them all but invisible:

[Image: ant with human eye]

That's all raw, unblurred, 2x2 source data pixels... but you can't see them!

As for the dynamic range, "full range" from black to white is considered to be a 4-gray sequence out of the 8. Of course, black-to-white is not how it looks in Photoshop: everything is compressed and looks washed out—as seen below. The only reason for something to look full-contrast in the Photoshop working file is if something is in deep shadow and has a very bright light or flame on it. Same reason we need HDR in photography!

(BTW three elements went into this view: first, my modification of a 1665 ant print by Robert Hooke, which I used in some fictional currency I made, and have re-used as a test image ever since. Second, the eye from the pyramid on the US $1 bill—yes, that's legal! And lastly, some crazy nasty fungus I photographed in the woods.)

Eight source gray levels facilitate five meaningfully different dynamic ranges (“noumenon phases” V through IX in the UI) of 4 grays each:


The lowest phase and highest phase (V and IX) are mutually exclusive: when one is fully "tuned," the other is invisible.

Each phase currently equates to one revolution of the crank—but I'm still playing with what feels good.

I also generate two additional "phases" at either end: "too dark" and "too light," so you never feel the crank is artificially limited. Beyond those, you can keep cranking into pure black or pure white forever. Beyond "phase I" and "phase XIII" (three full revolutions of nothingness) an error is shown to discourage you from cranking on... but you still can.

(When playing, you don't need to crank the image to perfect contrast—nor zoom/center on it perfectly. It's meant to feel vague like an analog process, and that makes it forgiving.)
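My reading of that windowing, sketched in Python (the offsets and clamping are assumptions, not the shipped math):

```python
def tune(gray, phase_offset):
    """Display brightness (0.0-1.0) for a stored gray level (0-7) at a given
    phase. Offsets 0..4 stand for phases V..IX; each exposes a 4-gray window,
    and grays outside the window clip to black or white."""
    t = (gray - phase_offset) / 3.0
    return max(0.0, min(1.0, t))

# Phases V and IX are mutually exclusive: at phase V (offset 0) grays 4-7 are
# all blown out to white, and at phase IX (offset 4) grays 0-3 crush to black.
assert tune(4, 0) == 1.0 and tune(7, 0) == 1.0
assert tune(0, 4) == 0.0 and tune(3, 4) == 0.0
```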


I'm still a little unclear.

Can you change the wording to not use the word "bit"? I think that would help immensely.

Sometimes I'm wondering, does he mean 1-bit pixel? Does he mean pieces? Something else?


Good idea—I changed it to "binary digits". (That's always what I mean by bit, not "pieces.")

I didn't want to call them "pixels" because you never see them—and each "pixel" you DO see in the game comes from multiple binary bits of data.

But you could call them invisible "source pixels." Six 1-bit images of them, three of which are used at a time.

Some of the "glitch effects" are random flashes of the least-significant source bit at native scale. You can make out flashes of half-seen images in the glitches, taken from anywhere in the panorama. (That bit is the topmost of the 3 source data images framed in blue above. You can vaguely make out the rune and the eyeball, but not too obviously.)

Here's a "glitch" frame showing the ant. My hope is that people will just think it's noise... except once in a while wonder, did they see a flash of... something?



Got it! Thanks again


Here's a look at the launch card, some rough menu progress, and on-device performance recorded with Mirror.

I'll make a more complete animated "boot sequence" later. I'm trying for a mostly-text, "scientific device" UI feel, without much imagery beyond the pano itself.

I've abandoned using the (A)+(B) combo to bring up the Signal Log. Not discoverable enough. So I added "LOG" to the left of ".5x" in the magnification bar, which you traverse with (B) and (A).

And you can always long-press (B) as a shortcut to get the Log without having to change your zoom.

In this video, I mash (A) and (B) as fast as I can to show zoom performance. And I hold the d-pad down to show auto-scroll speed. (The HDR fading is done via crank.)

As you can see, zooming displays a rough/quick preview immediately on buttonDown, followed just a moment later by the full-quality render. In practice, only the very highest level of zoom-in ("64x") feels slightly delayed (and you won't need to use that much anyway).

And that delay is only the FIRST TIME you zoom in—then it gets faster. Once you've seen a zoom level, it's cached: now you get instant response at all zoom levels (as the video shows), with no need for the rough preview frame. Until you pan, at which time the current zoom is the only one still cached.
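That caching behavior can be modeled like this (illustrative Python; the real code is Lua and doubtless more elaborate):

```python
class ZoomCache:
    """Model of the caching described: the first visit to a zoom level pays
    for a full render (preceded by a rough preview in-game); revisits are
    instant, until a pan invalidates every level except the current one."""
    def __init__(self, render_fn):
        self.render_fn = render_fn   # the expensive full-quality render
        self.cache = {}

    def view(self, zoom):
        if zoom not in self.cache:   # first visit: slow path
            self.cache[zoom] = self.render_fn(zoom)
        return self.cache[zoom]

    def pan(self, current_zoom):
        # keep only the current zoom level's render
        self.cache = {k: v for k, v in self.cache.items() if k == current_zoom}
```

Used this way, `render_fn` runs once per zoom level between pans, which is why only the very first zoom to a level feels delayed.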

Cranking is ALWAYS totally smooth (actually slowed a bit here by Mirror).

Near the end of the video, you can see the full panoramic strip wrapping around, zoomed all the way out.

In the Signal Log, the numbers in the boxes represent how many items are left to find in that mission. With 1 signal sometimes leading to more than 1 new mission, I'm intending the number of "things to find" available at once to start small, balloon in the middle of the game, and then contract back to 1 goal at the end. Some signals will have a bit of story/lore, but I expect some will just be another mission in the chain. And some signals will have to be ONLY story, or the number of available missions will never contract. Those signals will start out with a "0" as soon as you acquire them.

I'm trying to build all that in a flexible way so that late changes in the feel/progression of missions will remain possible after playtesting.


This is a more-final "Signal Log" UI (with early/placeholder content). Along with a demo of new "glitch" effects using the SDK's vcrPauseFilterImage. The existing glitches still happen as always, but this is a new type in the mix occasionally.

Every item listed in the Signal Log is the result of a signal you found. And gives you a new "mission" to find another item.

  • "?" for a newly-discovered "mission"—something new to look for.

  • A number for a mission asking you to find MULTIPLE things. (So "?" means "1" basically—that's most common.)

  • Crosshair icon for things you have successfully found (but you can still go back to review those signals at will, and return to the pano to view the associated imagery).

  • I'll probably use a little dot or grayed-out crosshair for items that don't lead to any new mission (signals with nothing new to find, just story/atmosphere).

[Video: OUTSIDE PARTIES glitch test]

The (A) indicator starts on the currently-active "mission." You can scroll it up and down to select a new one and exit the log. Or (B) reverts instantly back to the current mission and exits the log.

Each item you select as you scroll plays its looping audio, along with a live ticker of its text at the bottom. (That's the sound and text that played on the main pano screen when you first found that item.) The final words of the signal are used as the "mission name" in the list.


Progress update: I'm early in the writing phase now, so may not have much to show for a while! Have to resist diving into detailed art before planning. Release date? Who knows! (Halloween would obviously be nice, but it's done when it's done...)

Meanwhile, though, there might be a little bit of tangential OUTSIDE PARTIES lore in Ledbetter's Art&.. More on Catalog. The Roccati Institute appears to have made a mysterious donation from a collection long kept secret from the public...

Which also gave me the chance to make something in Pulp finally. Fun!


I’ll be posting some footage with audio soon. It’s coming along nicely, really feeling like the image, the sounds, and the text ticker are all ONE thing. I hope!

For now, I’ll share how I am generating procedural music...

Interval Signals

The game is partly inspired by numbers stations, so I felt I HAD to build my own actual numbers stations into it. Complete with voices speaking code... and—very occasionally—“interval signals”: vaguely spooky bits of music that play between transmissions.

You won’t hear them often, and you will NEVER hear the same tune twice.

Real interval signals are often bits of folk music of various nationalities. So I programmed a system to write widely-varied snippets of folk music.

Most of the randomizations are weighted: I always like my randomness to feel “uneven,” with some events being much more rare than others.
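Weighted choice like that can be as simple as this Python sketch (illustrative only; not the game's Lua code):

```python
import random

def weighted_pick(weights, rng=random):
    """Return index i with probability weights[i] / sum(weights): 'uneven'
    randomness where some events stay much rarer than others."""
    r = rng.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1   # guard against float round-off at the top end
```

So, for example, `weighted_pick([70, 25, 5])` keeps the third option (say, the dreaded Toy Piano) a rare treat.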

Here’s how it works:

• Random choice of 3|4 or 4|4 time.

• Random span of 1 (usually) or 2 octaves to draw notes from. Never super high or low.

• Random selection from four scales I put in a table: Dorian C, Mixolydian C, Aeolian C, and Aeolian Dominant C.

• Random tempo.

• Random overall volume.

• Random choice of 6 sample-based instruments: Organ, Flute, Chorus, Bells, Out-of-Tune Piano, and the creepiest of all instruments: the dreaded Toy Piano.

• Random choice of 5 ADSR schemes I made (“Soft,” “Staccato,” “Woodwind,” “Woodwind/long release,” “Full + fade-off”) that give those instruments different characters.

• Random bitcrusher amount with random undersampling, for a nice radio distortion that varies. At the extremes, the instruments can sound quite strange.

With those factors decided, it’s time for the Playdate to get to composing....

Procedural Music

Each snippet of music is 8 measures long. Usually it loops twice, since repetition adds structure and feels more “musical.” But when the random tempo is slow, it only plays once. I didn’t want the tunes running long.

• Measures are generated in pairs: a random rhythm is built from half-notes, quarter-notes, and rests, then used for two consecutive measures (of 3 or 4 beats each).

• The process has a little guidance against certain unwanted weird outcomes. For instance, the music never starts with a rest.

• The rhythm contains no pitches, but it has both durations and random velocities (note volumes). The velocity variation helps the music feel more “alive.”

• Random pitches from the chosen scale are assigned to all the resulting notes. Each note in the rhythm table has a 50% chance of being the same for both measures. That means the second measure of the pair feels like a variation on the first.

• The final measure is different: it copies nothing from the preceding measure, but is always a single note of random length, always at least a half note, and it can be longer than the other notes ever get—up to the full measure. This long note (or if short, followed by a long rest) is like a period on the end of the musical “sentence.”

• To time the sequence I use this formula: loops * musicSequence:getLength() / tempo. But if my piece ends with rests (which have no meaning in a sound.track, they’re just unused time) the timing will be wrong. So in the final measure, I use note(s) with velocity 0 to fill the time of any final rest.
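Putting those bullets together, here's a compact Python sketch of the generator (illustrative only: the game's version is Lua on the SDK's sequence/track API, and the scale contents, probabilities, and velocities here are my approximations of the description above):

```python
import random

# One of the four scales mentioned (Aeolian C), as semitone offsets.
SCALE = [0, 2, 3, 5, 7, 8, 10]

def make_rhythm(beats, rng):
    """A bar of half notes (2 beats), quarter notes (1) and quarter rests,
    never opening with a rest. Events are (duration, is_rest, velocity)."""
    out, left = [], beats
    while left > 0:
        dur = 2 if (left >= 2 and rng.random() < 0.4) else 1
        is_rest = bool(out) and rng.random() < 0.2
        out.append((dur, is_rest, round(rng.uniform(0.5, 1.0), 2)))
        left -= dur
    return out

def make_tune(rng, beats=4):
    """Eight measures built in rhythm pairs; the last is replaced by a single
    long closing note, padded with a velocity-0 'note' (not a rest) so the
    sequence length comes out exact for the timing formula."""
    measures = []
    for _ in range(4):
        rhythm = make_rhythm(beats, rng)
        first = [(d, None if rest else rng.choice(SCALE), v)
                 for d, rest, v in rhythm]
        second = []
        for d, p, v in first:          # ~50% of pitches carry over, so the
            if p is not None and rng.random() < 0.5:   # pair's 2nd measure
                p = rng.choice(SCALE)                  # feels like a variation
            second.append((d, p, v))
        measures += [first, second]
    dur = rng.randint(2, beats)        # closing "period": at least a half note
    final = [(dur, rng.choice(SCALE), round(rng.uniform(0.5, 1.0), 2))]
    if dur < beats:
        final.append((beats - dur, SCALE[0], 0.0))   # silent filler note
    measures[-1] = final
    return measures

rng = random.Random(7)
tune = make_tune(rng)
total_beats = sum(d for m in tune for d, _, _ in m)
tempo, loops = 2.0, 2                       # beats/sec; slow tempos loop once
duration_sec = loops * total_beats / tempo  # the getLength()/tempo idea above
```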

The result often sounds, to my untrained ear, like real music! And when it doesn’t quite, it’s still a fine “musical signal.”

I then use a low-frequency oscillator modified to ramp the volume gently up from zero and back down again in a sine curve, like a distant signal coming and going. (Most of the sounds in the game do that to some degree.) Occasionally the two-loop tunes will forgo that and play full-volume start-to-finish. But the one-time tunes (less common) never do: they already feel more like pieces taken from the middle of something, not as much like music that stands alone.
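That fade in and out can be modeled as a sine half-cycle (illustrative Python; in-game it's an LFO driving channel volume):

```python
import math

def fade_envelope(t, duration):
    """Volume rising from 0 to full and back along a sine half-cycle over
    `duration` seconds, like a distant signal drifting in and out."""
    if not 0.0 <= t <= duration:
        return 0.0
    return math.sin(math.pi * t / duration)

# Silence at the start, peak volume exactly at the midpoint:
assert fade_envelope(0, 12) == 0.0
assert fade_envelope(6, 12) == 1.0
```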

The “data stream” (text ticker) at the bottom of the screen always reflects whatever the audio is doing (which makes an audio-heavy game playable with the sound off). In this case, the text flickers a mix of “INTERVAL SIGNAL” and question marks. (As well as the ever-present occasional flashes of brief creepy messages from Out There.)

Stay tuned for some recordings!


Recording time! (Forgive the slightly slow performance: these were recorded from actual hardware using Mirror, which has a little overhead.)

This is the game's built-in "Void Monitor" mode, which simply plays the game's sounds while auto-scrolling slowly and changing the contrast randomly. It sounds the same as user-controlled gameplay… and of course, I put a clock in the corner, for maximum utility!

A soundtrack to study, relax, meditate, sleep or party to!

And another recording, this time with more music, procedurally generated as explained above:


Audio Layers

The Playdate (“K5 Panopticon Receiver” in the game) is not meant to be a literal shortwave radio, but something mysterious that’s akin to that.

There are 10 layers of sound, of which 2 to 4 are playing at once:

• UI clicks, different for different actions.

• Static sounds accompanying the three kinds of brief visual glitches. Randomized pitch, and louder for bigger glitches.

• Full stereo background loop, > 1 minute, while viewing the pano. (But any constant sound risks being tiring, so the often-visited Signal Log has no background audio.) This is a mix of a couple of modified shortwave radio sounds, varying across the image via a Perlin algorithm. It also pans randomly in stereo and changes pitch over time.

• Second stereo loop plus random pan, similar to but simpler than the first, running at a different rate so the interaction of the two provides a non-repeating background. This one rises and falls in pitch with the crank, making it really feel like the crank is “doing something”! And I’m using pitch-compensated volume (volume/rate), so it doesn’t get loud and shrill as it rises.

• Signals: the spoken story/mission snippets that come into clarity (diminishing bitcrusher, diminishing volume dropouts) as you zero in on each target object in the image.

• False signals: heavily-modified random snippets from the full library of spoken signals. Played backwards, randomly re-pitched, and bitcrushed. They fade in and out smoothly, and are only sub-segments of the longer signals, starting at any random point—even near the end, causing the false signal to loop and finish with the beginning. Each signal is really a PAIR of WAVs: a “story” snippet, usually followed by a “mission” snippet leading you to look for something new in the image. (This leaves me great flexibility to alter the game flow at any time, changing the order of the missions while leaving the story alone, and vice versa.) False signals are randomly based on just ONE of these two halves.

• Random white noise overlay that comes and goes occasionally.

• An ever-growing library of creepy samples that fade in and out, with various random distortions applied. Additional radio sounds, screams, machinery, footsteps, you name it! I like “uneven” randomness, and these are one example of that: some samples are more rare than others. I want the player to regularly hear/see something new. Or better yet… half-hear or half-see and wonder, What was that?

• Numbers stations. They partly inspired the game, so I couldn’t leave them out! A large range of spoken numeric codes—but only a few different voice processings, to make it feel like you keep hearing the same couple of stations return.

• Interval signal music. Randomized in a guided way to sound like actual music. (See post above, and listen to some real-world interval signal music examples here.)
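The "pitch-compensated volume" mentioned for the second background loop is just volume divided by playback rate; as a one-liner sketch (illustrative Python, not the game's Lua):

```python
def compensated_volume(base_volume, rate):
    """'Pitch-compensated volume' (volume/rate): as playback rate rises, the
    volume scales down so the loop doesn't get loud and shrill. A real mixer
    might also clamp the result to 1.0 for rates below normal speed."""
    return base_volume / rate

assert compensated_volume(0.8, 2.0) == 0.4   # doubled pitch, halved volume
assert compensated_volume(0.8, 1.0) == 0.8   # normal rate, unchanged
```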

Audio Variation Axes

For variety, the sounds differ along several axes, each customized to what sounds good for that particular sound:

• Pitch/rate

• Volume (and sometimes rising and falling volume changes)

• Stereo pan (and sometimes pan movement)

• Bitcrush effect (reducing sample rate, depth, or both)

All sounds vary in response to these 5 factors:

• Randomly on their own (but not so strongly that it swamps the user-controlled factors).

• With proximity to a “real” signal (something to find in the image).

• Spatially when panning the image along X and Y with the d-pad. This variance is done using a 2D Perlin algorithm.

• With “phase” (dynamic range) as you turn the crank.

• With magnification as you zoom with (A) and (B).

I wanted EACH of those factors to feel responsive, the changes not getting lost in the other 4. I think I’ve achieved that.

All of those factors are also visible in the pano image itself, AND are reflected in the text data stream at the bottom.

Everything you do feels like you are affecting the image AND the audio AND the data stream… because you are. All together as a single process. You’re doing a bunch of stuff, but I think I’ve made it FEEL like you are doing ONE thing: finding things and zeroing in on them.

(Soon I’ll post a live camera demo of the process of zeroing in on a signal, here or on Mastodon.)

EDIT: Oops! 10 layers, not 11... I decided against including the Morse code layer.


Here's an early peek at the likely style of the "stuff" you're searching for. It's all one big, complex scene—a single "place" seen in 360° panorama—and here's a little zoomed-in portion of it:


That incorporates some of my sculpture work:


This is incredibly cool. Love the spooky mysterious vibe. Can't wait to see more!

1 Like