Not a lot of time today between work and bowling, but I managed to play around with the laser some more and make an electric blue version. I also started dabbling in some laser level design.
Today I worked on two very different things: my own replay system for recording a round, and giving the 'laser' some animation via a vertex shader.
One of the biggest benefits I expected from integrating Everyplay was that I'd easily be able to capture feedback on how players actually play the game: where they die most often, how they beat certain levels, whether they miss the 'hints' I'm trying to give them, etc. Unfortunately, I realized that Everyplay wasn't actually going to accomplish this, because people aren't going to 'share' replays of their lousy rounds. So if the first jump on level 1-1 is causing players to fail 30 times in the first second before making it across, I'd never know.
Another small caveat of Everyplay is that it's only available on iOS and Android. While I intend for those to be my primary platforms, I'm still holding onto the idea of a PC release, because there's no reason not to. To tell you the truth, the game actually feels better on a PC with a physical keyboard.
So I definitely wanted some way to know how players play the game without relying on Everyplay video replays. The most basic approach, and one I will still do, is just recording the position in the level at which the player died and sending it off for me to analyze. What would be better is a primitive replay system that captures the player's inputs and sends them off to me to replay and analyze. Assuming the simulation is deterministic, and I get the synchronization right, I should see exactly what the player saw.

To that end, I whipped up a very quick solution that recorded the inputs (jump, horizontal movement) whenever they weren't zero, and marked them with the frame number at which they occurred, relative to the level-start command. The replay system deserialized the list of inputs and created a mock input state each frame based on the recorded input. If the system was in replay mode (as opposed to normal player-driven mode), the Player object simply read the mock input state instead of the keyboard/mobile GUI. Theoretically, everything else (enemies, moving platforms, picking up coins) should work exactly as when the original player played the round, resulting in an accurate replay. Theoretically.
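For the curious, the recording/playback pair boiled down to something like this. This is just a sketch; the names (InputFrame, InputRecorder, InputPlayback) are illustrative, not the exact ones in my project:

```csharp
// Sketch of the input-recording approach: store only non-zero inputs,
// tagged with the frame number relative to the level-start command.
using System.Collections.Generic;
using UnityEngine;

[System.Serializable]
public struct InputFrame
{
    public int frame;        // frames since level start
    public float horizontal; // -1..1 movement axis
    public bool jump;
}

public class InputRecorder : MonoBehaviour
{
    public readonly List<InputFrame> Recording = new List<InputFrame>();
    private int frameCount;

    void FixedUpdate()
    {
        float h = Input.GetAxisRaw("Horizontal");
        bool jump = Input.GetButton("Jump");

        // Only record frames where something actually happened.
        if (h != 0f || jump)
            Recording.Add(new InputFrame { frame = frameCount, horizontal = h, jump = jump });

        frameCount++;
    }
}

public class InputPlayback : MonoBehaviour
{
    public List<InputFrame> recording; // the deserialized list of inputs
    private int frameCount;
    private int index;

    // In replay mode, the Player reads these instead of the keyboard/mobile GUI.
    public float Horizontal { get; private set; }
    public bool Jump { get; private set; }

    void FixedUpdate()
    {
        Horizontal = 0f;
        Jump = false;
        if (index < recording.Count && recording[index].frame == frameCount)
        {
            Horizontal = recording[index].horizontal;
            Jump = recording[index].jump;
            index++;
        }
        frameCount++;
    }
}
```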
In practice, even a slightly incorrect input (due to floating-point differences, etc.) can quickly desynchronize the replay. In some of my tests, it was obvious that the 'ghost' (my term for the replayed version of a player's actions) would acquire a jetpack one frame sooner than the player had. As a result, the ghost would ascend higher than the player did, because he'd been holding the jetpack that much longer, and might die on a spike that the player missed. Or the opposite. The point is that non-deterministic simulations aren't viable for input-only replay.
An alternative to recording the inputs that produce a given state is to record the resulting state itself. This is much less elegant and usually requires much larger recordings, both in RAM and in serialized file size. However, this approach is much more resilient to de-syncing. For my game, this would just mean (initially at least) recording the world position of my Player object every frame (actually, I can get away with every 3-5 frames and it still looks decent). The problem, though, is that artificially moving the player to a pre-recorded position every frame is NOT the same thing as having inputs that result in him getting there. This boils down to how I implemented movement and collision detection in my game, but it results in things like powerups not being picked up, triggers not triggering, and enemies not dying. When I replay a recording this way, the player moves precisely the way he moved live, but the rest of the 'world' desyncs quickly. For static things like floors and spikes, this doesn't matter, but for moving platforms and enemies, it can look a little strange.
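A sketch of that approach, again with made-up names, sampling every few frames and lerping between samples on playback:

```csharp
// Sketch of the state-recording approach: sample the Player's world position
// every few frames, then interpolate between samples when replaying the ghost.
using System.Collections.Generic;
using UnityEngine;

public class PositionRecorder : MonoBehaviour
{
    public int sampleInterval = 4; // every 3-5 frames still looks decent
    public readonly List<Vector2> Samples = new List<Vector2>();
    private int frameCount;

    void FixedUpdate()
    {
        if (frameCount % sampleInterval == 0)
            Samples.Add(transform.position);
        frameCount++;
    }
}

public class PositionPlayback : MonoBehaviour
{
    public List<Vector2> samples;
    public int sampleInterval = 4;
    private int frameCount;

    void FixedUpdate()
    {
        if (samples == null || samples.Count < 2)
            return;

        // Lerp between the two nearest samples so the ghost doesn't stutter.
        int i = Mathf.Min(frameCount / sampleInterval, samples.Count - 2);
        float t = (frameCount % sampleInterval) / (float)sampleInterval;
        transform.position = Vector2.Lerp(samples[i], samples[i + 1], t);
        frameCount++;
    }
}
```

Setting transform.position directly like this is also exactly why nothing else reacts: the Player is teleported past triggers instead of moving through them via the physics system.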
Comparing the limitations of the two systems boils down to this tradeoff: recording inputs only produces an accurate simulation (players pick up powerups, die when they should, don't clip through platforms, etc.) but isn't authentic to what actually happened live. Recording position only produces an inaccurate simulation but, at least for the Player object, is precisely what the player (the human) did live. The choice is then easy for me: I will stick with a warping, clipping, invincible replayed player that at least shows me, generally, how the human moved through the level.
After all that text, I really don't have anything to show for the replay system. Right now, it is hackishly integrated into the game: I need to remember to check and un-check certain checkboxes before starting a round to be recorded, and there is no indication that the current session is a playback session rather than a live one (other than your inputs not working). If you manage to start playing back a recording on a level other than the one it was recorded on, you get some hilarious results.
Okay, enough. Just as I'm sure you're sick of reading a boring wall of text about replay-system pros and cons, I was bored of implementing them. So instead of actually refactoring the replay system into a usable state, I took a detour to add some 'polish' to my laser. I could have simply animated it with traditional sprite-sheet animation, but I knew this was a good time to try to learn something about those mysterious graphical enigmas known as shaders! I won't go into any of the technical details for this one; in fact, I couldn't if I tried. I ripped off a shader somebody posted on Reddit a while back, modified it slightly to expose some properties instead of hard-coded constants, and slapped it on my lasers. The result:
Or by slightly tweaking some parameters:
Or what I am 'settled' on for now:
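(For what it's worth, the 'tweaking' is just writing to the properties the shader now exposes, either in the inspector or from code. Something like the sketch below, with placeholder property names, since I genuinely don't know what half of them do:)

```csharp
// Hypothetical: nudging the laser shader's exposed properties at runtime.
// "_Amplitude", "_Frequency", and "_Color" are placeholder names, not the
// actual properties from the shader I borrowed.
using UnityEngine;

public class LaserTweaker : MonoBehaviour
{
    void Start()
    {
        Material mat = GetComponent<Renderer>().material;
        mat.SetFloat("_Amplitude", 0.15f);               // how far the vertices wobble
        mat.SetFloat("_Frequency", 8f);                  // how fast they wobble
        mat.SetColor("_Color", new Color(0f, 0.6f, 1f)); // electric blue
    }
}
```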
That's all folks!
Added a very cliché laser enemy/obstacle today. It can be triggered by the player entering a trigger area, like the moving platforms discussed earlier, or fire on a regular timing interval, as seen here.
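A rough sketch of how the two firing modes are wired up (illustrative names; the real component is messier):

```csharp
// Hypothetical sketch: a laser that fires either on a fixed interval
// or when the player enters its trigger area.
using UnityEngine;

public class LaserEmitter : MonoBehaviour
{
    public bool useTimer = true;  // timer mode vs. trigger-area mode
    public float interval = 2f;   // seconds between shots in timer mode
    public float fireDuration = 0.5f;
    private float timer;

    void Update()
    {
        if (!useTimer) return;
        timer += Time.deltaTime;
        if (timer >= interval)
        {
            timer = 0f;
            Fire();
        }
    }

    void OnTriggerEnter2D(Collider2D other)
    {
        if (!useTimer && other.CompareTag("Player"))
            Fire();
    }

    void Fire()
    {
        // Enable the beam for fireDuration; details depend on the laser object.
        Debug.Log("Laser fired for " + fireDuration + "s");
    }
}
```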
Meta: I wonder if I can use a Star Wars quote for every title of these submissions going forward.....
Well, I spent most of the night trying to get Everyplay integrated. I really think that being able to share replays of your runs will be a cool feature. Unlike simply tweeting a high score or Facebook-sharing a screenshot of the Game Over screen, a video replay actually shows how you got that awesome achievement/score/unlock, etc.
Initially, Everyplay integrated very simply into my game. It already has a Unity SDK, and all I had to do was call StartRecording() and StopRecording() at the appropriate times in my game. I then automatically popped up a sharing dialog, and what do you know, it worked! Here is the first video ever recorded of my game with Everyplay!
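The whole hookup is roughly this. StartRecording()/StopRecording() are the real SDK calls; the sharing call is from memory, so double-check it against the Everyplay docs:

```csharp
// Sketch of the Everyplay hookup: record the round, then offer the share dialog.
using UnityEngine;

public class RoundRecorder : MonoBehaviour
{
    public void OnRoundStart()
    {
        Everyplay.StartRecording();
    }

    public void OnRoundEnd()
    {
        Everyplay.StopRecording();
        // Pops Everyplay's built-in sharing UI (from memory; verify in the SDK).
        Everyplay.ShowSharingModal();
    }
}
```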
The integration got a little confusing when I tried to use the Thumbnail feature of Everyplay to take a still image of the last frame of the game to put on the Game Over screen. Long story short, it works once, but I'm not re-initializing or cleaning something up properly, because every time after the first, it blows up with a NullReferenceException. Oh well. Win some, lose some.
(Since Everyplay is only available on Android/iOS, this is a screenshot of my game taking a screenshot of my game. Screenshotception!)
Up to this point, every 'level' in my game was really just a test bed for a new feature; usually they weren't bigger than one screen. Tonight, I tried to develop a 'full-sized' level to exercise my level-design skills, which I've never really trained.
The level is roughly composed of four sections, each broken up by a checkpoint. There is also a sorta-secret cut-out off the beaten path that can only be reached by falling onto the jetpack and engaging it just in time.
As part of trying to make a cohesive level, I had to add some new features to the moving-platform behavior. It already had the ability to follow an arbitrary number of points, but in order to sync up the moving platforms, I added two features: