OUTSIDE PARTIES horror scavenger hunt across a 1.5-gigapixel HDR image

Thanks! No major new specifics yet—it’s coming along but my own projects have to come second to client work :confused:

Outside Parties is tied for #1 priority among my own projects though! And although the creative work has just begun, the technical work is mostly complete.

And here’s that Community Direct trailer (also winner of Best Trailer in the 2023 Community Awards)—timestamped at 1:28:

2 Likes

A couple audio tricks to share:

First, for scrolling, I’ve turned the UI clicks into a Shepard scale. It works really well with short clicks: it sounds like each click gets lower in pitch no matter how far you keep scrolling down, and higher as you keep scrolling up... and yet the clicks never get very high or low. A useful illusion:
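For anyone curious how a Shepard scale of clicks can be built, here's a minimal sketch — not the game's actual code; the layer count and window shape are my own assumptions. The idea: play several octave-spaced copies of one click sample, with volumes that fade to silence just as a layer wraps back an octave.

```lua
local NUM_LAYERS = 4        -- octave-spaced copies of the click sample
local STEPS_PER_OCTAVE = 12 -- scroll clicks before a layer wraps an octave

-- Returns a list of {rate, volume} pairs for scroll position `step`.
local function shepardClick(step)
  local layers = {}
  for i = 0, NUM_LAYERS - 1 do
    -- This layer's position in the 0..NUM_LAYERS pitch cycle
    local pos = (i + step / STEPS_PER_OCTAVE) % NUM_LAYERS
    local rate = 2 ^ pos  -- playback-rate multiple of the base sample
    -- Raised-cosine window: loud mid-cycle, silent at the wrap point
    local volume = 0.5 - 0.5 * math.cos(2 * math.pi * pos / NUM_LAYERS)
    layers[#layers + 1] = { rate = rate, volume = volume }
  end
  return layers
end
```

Each scroll step nudges every layer slightly up (or down) in pitch; because a layer is silent by the time it wraps back an octave, the ear never catches the reset — so the clicks seem to rise or fall forever.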


Second, I finished the text ticker that provides subtitles for every spoken signal. For readability, the text doesn’t move or slide; it stays static and paginates.

Outside Parties Text Subtitles Test

Here’s how the text and spoken dialogue stay in sync:

• The dialogue script is all in a Lua table.

• The script snippets are converted on the fly into audio filenames to be loaded from disk as needed: every file is named for the LAST 4 words, minus ending punctuation. (The last words seemed to be more distinctive than the first.) I also use the last words as the mission “title” in the log list (like the one you can see at the top of the screen above). But for the title it's not just a word count: it measures the rendered text in the font and fits as many words as possible.

• Every piece of dialogue is in two halves—two separate audio files: typically one relaying a bit of story, and one relaying something new to go searching for.

• The user hears the two WAVs as one: they are played one after the other. But this modularity lets me quickly adjust the mission tree and story beats if I need to, EVEN after dialogue audio is all done. I can move a difficult quest later, for instance, without messing up the story by having to make the attached story piece also occur later. (There is no exact order—with multiple open quests at once, people will find things in different orders. But the mission tree creates a general chronological structure: the major mysteries won’t get resolved until late.)

• The text ticker code divides the dialogue script into 1-line “pages” that are about 2/3 the screen width and fit nicely below the panoramic image, with room left for UI at bottom-left.

• But I can’t just run those pages with equal time to match the duration of the spoken WAVs. That’s a poor match between text and voice, because some pages fit more words than others. (I never truncate or hyphenate the subtitles, so a long word may need to get bumped to the next page.) So every page has an independent duration calculated on the fly.

• I also don’t start a fresh page for the second WAV—they are supposed to be one single transmission after all. So the script of both halves is combined before pagination. That means that one of the middle pages typically transcribes the end of one WAV and the start of the other.

• Now we have a switch of audio file that happens in the middle of a text page, plus an arbitrary number of pages, each with its own duration. So the audio and text have to be on completely independent timers.

• But I pad the audio timers with an extra second of silence between the WAVs—where a pause would have naturally been had they been spoken together. And then even more padding, two seconds, before the signal loops.

• The end padding is easy to add to both text and audio timers. But the middle padding has to be added to just one specific page of the text, found by counting words.

• The text timing is not measured in seconds but in “time units.” Right now, every non-space character gets 1 unit of time. That way a longer word naturally gets more time, a period adds a little time, and … three periods adds even more. What those time units equate to in seconds depends on how fast or slow the particular dialogue is spoken. By timing the total text “time units” to match the total audio duration, the text syncs automatically with fast and slow speech alike. And if I want to get fancy later and assign more time units to certain letters or characters, I can do so.

• So how do I add the middle padding to the page that needs it? How do I estimate one second of padding in terms of time units? Easy! I just add the time units for “one one thousand.”
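As a concrete illustration of the filename scheme described above (the space join and the `.wav` extension are my guesses, not confirmed details):

```lua
-- Hypothetical sketch: name an audio file for the last four words of
-- a script snippet, with ending punctuation stripped from each word.
local function audioFilename(snippet)
  local words = {}
  for w in snippet:gmatch("%S+") do
    words[#words + 1] = w
  end
  local last = {}
  for i = math.max(1, #words - 3), #words do
    -- drop trailing punctuation so "landing." becomes "landing"
    last[#last + 1] = words[i]:gsub("%p+$", "")
  end
  return table.concat(last, " ") .. ".wav"
end

print(audioFilename("Meet me by the old ferry landing."))
-- "the old ferry landing.wav"
```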
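And here's a hedged sketch of the pagination-and-timing steps above, with a stand-in fixed-width measure where the real code would query the actual font:

```lua
local MAX_PAGE_WIDTH = 260                      -- ~2/3 of a 400px screen
local function textWidth(s) return #s * 7 end   -- stand-in for font metrics

-- Greedily packs whole words into one-line pages; words are never
-- truncated or hyphenated, so a word that won't fit starts a new page.
local function paginate(script)
  local pages, current = {}, ""
  for word in script:gmatch("%S+") do
    local candidate = (current == "") and word or (current .. " " .. word)
    if current == "" or textWidth(candidate) <= MAX_PAGE_WIDTH then
      current = candidate
    else
      pages[#pages + 1] = current
      current = word
    end
  end
  if current ~= "" then pages[#pages + 1] = current end
  return pages
end

-- One "time unit" per non-space character.
local function timeUnits(text)
  local stripped = text:gsub("%s", "")
  return #stripped
end

-- Split the total audio duration across pages in proportion to units.
local function pageDurations(pages, audioDuration)
  local total = 0
  for _, p in ipairs(pages) do total = total + timeUnits(p) end
  local durations = {}
  for i, p in ipairs(pages) do
    durations[i] = audioDuration * timeUnits(p) / total
  end
  return durations
end
```

The middle padding then drops out naturally: `timeUnits("one one thousand")` is 14 extra units, added to whichever page spans the seam between the two WAVs.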

The result seems to be working nicely—it follows along well with the voice.

(I’m currently using realistic synth voices, ones that sound good and sure beat silent text! But still lifeless compared to what actual actors could do. I’m considering hiring some.)


Also... the story is complete! I’m hoping it will add some nice extra interest to the picture searching. And maybe even be a little scary. The stakes are not low...

6 Likes

Very interesting stuff! Will try the scale thing.

I'm still hyped about this game. Keep on truckin'

2 Likes

Neither of these modes of transportation gives me any confidence at all.

horsies

ferryman

2 Likes

Some visual effects (accompanied by a chime) when you find a target object:

Acquired FX

Also note the text at the bottom: as you crank the brightness (“noumenon phase”) to tune the image to clarity, the audio signal for the next mission also comes in and gets audibly clearer. The text transcript of that voice at the bottom does the same: it transitions from a letter jumble to accurate words as you crank. A little extra help if you can’t hear well or have the sound off.
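One way that jumble-to-words transition can work (purely my illustration, not the actual implementation): replace a clarity-dependent share of the letters with deterministic noise, so the scramble holds still between frames instead of flickering.

```lua
local LETTERS = "abcdefghijklmnopqrstuvwxyz"

-- clarity: 0 (pure jumble) .. 1 (accurate words)
local function scramble(text, clarity)
  local out = {}
  for i = 1, #text do
    local c = text:sub(i, i)
    -- deterministic per-character hash so the noise doesn't flicker
    local h = (i * 2654435761) % 100
    if c == " " or h < clarity * 100 then
      out[#out + 1] = c  -- reveal this character
    else
      out[#out + 1] = LETTERS:sub(h % 26 + 1, h % 26 + 1)
    end
  end
  return table.concat(out)
end
```

Keeping the spaces intact preserves the word shapes, so the text visibly "resolves" in place as clarity climbs toward 1.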

1 Like

With the story done, I’m at the stage of building all the art and dialogue to execute it. But I also can’t help continuing to refine the imaging engine... As if it wasn’t complex enough already, I’ve added one more layer!

When the brightness is cranked too light or too dark for the region you’re viewing, large areas can go to fully black or white. That’s OK most of the time—but if you ALSO zoom in really far (32x or 64x) then those areas can be really large: you can sometimes get a fully empty black or white screen, or close to it.

At that point, you can’t even see when you pan/scroll with the d-pad. No big deal, obviously you need to crank the image brighter or darker! But I just didn’t like the look and feel of those blank open areas.
_

Solution: add a little grain to the image, only visible at the highest zooms.

It uses kDrawModeXOR so it works on any underlying graphics.

That way there’s a hint of texture you can see scroll. It also just looks nice I think! And it doesn’t harm the image: zooming in that far is mainly for fun anyway, and the grain is pretty slight.

But I didn’t want repeating grain. If grain is most of what you’re looking at in those moments, I didn’t want you to see the same cluster of dots keep going by at the same interval, all lined up in a row.

Making the scrolling grain image large is a start: twice the dimensions of the screen. But it would still repeat, just less obviously.

To test this, I temporarily added a bold test doodle to my grain image. (It’s actually a quarter of the full grain image but I repeated it 4x.)

grain without flips

That’s the repetition I did not want.
_

Solution: randomized grain.

Originally I was going to randomly flip each instance of the grain, and/or have a table of multiple different grain images (as if I’m not pushing RAM enough already). There would still be entire screens of repeating grain sometimes, but much less often.

That’s easy—just pick a random index 1 to 4 from this table:

flips = {gfx.kImageUnflipped, gfx.kImageFlippedX, gfx.kImageFlippedY, gfx.kImageFlippedXY}

But luckily, since my engine only redraws the newly-revealed portion of the screen as you pan, I have a way to do even better: randomize the flip of each new strip that is rendered.

grain with random flips

Now you’re talking! One source image, endless random grain.

Just one problem: as you pan, my engine pre-renders a whole range of different brightness levels. That’s what keeps cranking fast and smooth. But it means you now have random grain that changes as you tune the brightness. This gave the whole screen a distracting animated “sparkle” effect. It looked neat, but it looked very wrong.
_

Solution: deterministic pseudo-random grain.

I thought, what if I generated the flip number 1 to 4 from the pan coordinates with no random component?

Modulo (division remainder, the % operator) can easily turn any positive number into a number 0 to 3. Add 1 and I have my flip table index that changes with panning on either axis:

flips[(viewX + viewY) % 4 + 1]

No more sparkle! But how does it look?

It turned out the coordinates alone produced a repeating result horizontally (just the luck of how the numbers happened to divide). All I had to do to fix that was add a fixed integer (say, 3) to the coordinate sum before the modulo.

Then it looked random! However, the grain also did not change when zooming between 32x and 64x. Again, it felt wrong.

So in addition to a fixed integer, I also added the index of the current zoom level. Now there was a non-repeating result that looked random, didn’t sparkle, and was unique at different zooms.

flips[(viewX + viewY + 3 + zoomNum) % 4 + 1]

Success! Completely deterministic, fast, and with repetition being infrequent enough to satisfy me.
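The final pick boils down to a few lines; here’s a self-contained version of it, with the `gfx.kImage…` flip constants swapped for strings so it runs outside the Playdate SDK:

```lua
-- The four flip states from the flips table above, as plain strings here
local flips = { "unflipped", "flippedX", "flippedY", "flippedXY" }

-- Deterministic: the same view position and zoom always give the same flip
local function grainFlip(viewX, viewY, zoomNum)
  return flips[(viewX + viewY + 3 + zoomNum) % 4 + 1]
end
```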

grain with deterministic flips

Remove the test doodle, and here’s the grain alone:

grain alone

1 Like

Two more imaging engine enhancements:

  1. You can now scroll past the top/bottom of the panorama—making it more intuitive to zero in on targets near the edges.

  2. You can now zoom out farther, to .25×. (Those zoom numbers are simply the “number of screens tall” the panorama is, so this new zoom level fills 1/4 of the screen height.)

So now you can see the ENTIRE “psychopanorama” at once. (A) and (B) to zoom.

max to min zoom

1 Like

At min zoom, the panorama is 55x400 pixels, less than 1/4 of the total screen (since the 20px bottom UI doesn’t count).

At max zoom, the panorama has the area of 7.5 ping pong tables, or 20 standard interior (30-inch) doors.

1 Like