Streak Club is a place for hosting and participating in creative streaks.
Today I worked on 2 very different things: my own replay system for recording a round, and giving the 'laser' some animation via a vertex shader.
One of the biggest benefits I intended from integrating Everyplay was that I'd easily be able to capture feedback of how players actually play the game: where they die most often, how they beat certain levels, whether they miss 'hints' I'm trying to give them, etc. Unfortunately, I realized that Everyplay wasn't going to actually accomplish this because people aren't going to 'share' their replays of lousy rounds. So if the first jump on level 1-1 is causing people to fail in the first second 30 times before making it across, I'd never know.
Another small caveat of Everyplay is that it's only available on iOS and Android. While I intend for those to be my primary platforms, I'm still holding onto the idea of a PC release, because there's no reason not to. To tell you the truth, the game actually feels better on a PC with a physical keyboard.
So I definitely wanted some way to know how players play the game without relying on Everyplay video replays. The most basic approach, and one I will still do, is just recording the position in the level at which the player died and sending it off for me to analyze. What would be better is a primitive replay system that captures the player's inputs and sends them off to me to replay and analyze. Assuming the simulation is deterministic, and I get the synchronization right, I should see exactly what the player saw. To that end, I whipped up a very quick solution that recorded the inputs (jump, horizontal movement), if they weren't zero, and marked them with the frame number when they occurred, relative to the level start command. The replay system deserialized the list of inputs and created a mock input state each frame based on the recorded input. If the system was in replay mode (as opposed to normal player-driven mode), the Player object simply read the mock input state instead of the keyboard/mobile GUI. Theoretically, everything else (enemies, moving platforms, picking up coins) should work exactly as when the original player played the round, resulting in an accurate replay. Theoretically.
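The game itself is in Unity/C#, but here's a rough Python sketch of the idea: store only the non-zero inputs keyed by frame number, then hand them back as a mock input state on playback. All the names here are made up for illustration.

```python
# Sketch of the input-recording replay idea (illustrative names,
# not the actual game code).

class InputRecorder:
    def __init__(self):
        self.events = {}  # frame number -> (horizontal, jump)

    def record(self, frame, horizontal, jump):
        # Only store frames where the player actually did something.
        if horizontal != 0 or jump:
            self.events[frame] = (horizontal, jump)


class InputPlayback:
    def __init__(self, events):
        self.events = events

    def input_for(self, frame):
        # Frames with no recorded event mean "no input that frame".
        return self.events.get(frame, (0, False))


rec = InputRecorder()
rec.record(0, 0, False)   # idle frame: not stored
rec.record(12, 1, True)   # jump while moving right
play = InputPlayback(rec.events)
assert play.input_for(12) == (1, True)
assert play.input_for(13) == (0, False)
```

In replay mode, the Player would read `input_for(frame)` each frame instead of polling the keyboard.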
In practice, even a slightly incorrect input (due to floating point differences, etc.) can quickly desynchronize the replay. In some of my tests, it was obvious that the 'ghost' (my term for the replayed version of a player's actions) would acquire a jetpack one frame sooner than the player. As a result, the ghost would ascend higher than the player did, because he'd been holding the jetpack that much longer, and might die on a spike that the player missed. Or the opposite. The point is that non-deterministic simulations aren't viable for input-only replay.
An alternative to recording the inputs that produce a given state is to record the resulting state itself. This is much less elegant, and usually requires much larger recordings in terms of RAM and serialized file size. However, this approach is much more resilient to de-syncing. For my game, this would just mean (initially at least) recording the world position of my Player object every frame (actually, I can get away with every 3-5 frames and it still looks decent). The problem, though, is that artificially moving the player to a pre-recorded position every frame is NOT the same thing as having inputs that result in him getting there. This boils down to how I implemented movement and collision detection in my game, but it results in things like powerups not being picked up, triggers not triggering, and enemies not dying. When I replay a recording this way, the player moves precisely the same way he moved live, but the rest of the 'world' desyncs quickly. For static things like floors and spikes, this is a no-op, but for moving platforms and enemies, it can look a little strange.
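The state-recording variant is even simpler to sketch: sample the player's world position every few frames and interpolate between samples on playback. Again this is hypothetical Python, not the Unity code; the sample rate is just the 3-5 frame range mentioned above.

```python
# Sketch of the position-recording alternative: sparse position
# samples plus linear interpolation on playback.

SAMPLE_EVERY = 4  # record every 4th frame (3-5 looks fine per the post)

def record_positions(frames):
    """frames: list of (x, y), one per frame -> sparse {frame: (x, y)}."""
    return {i: p for i, p in enumerate(frames) if i % SAMPLE_EVERY == 0}

def playback_position(samples, frame):
    """Linearly interpolate between the two nearest samples."""
    lo = (frame // SAMPLE_EVERY) * SAMPLE_EVERY
    hi = lo + SAMPLE_EVERY
    if hi not in samples:
        return samples[lo]
    t = (frame - lo) / SAMPLE_EVERY
    (x0, y0), (x1, y1) = samples[lo], samples[hi]
    return (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)

frames = [(float(i), 0.0) for i in range(9)]  # player moving right
samples = record_positions(frames)
assert playback_position(samples, 4) == (4.0, 0.0)  # exact sample
assert playback_position(samples, 6) == (6.0, 0.0)  # interpolated
```

The trade-off described above shows up immediately: the replayed position is faithful, but nothing in the world reacts to it.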
Comparing the limitations of the 2 systems results in the following tradeoff: recording only inputs produces an accurate simulation (players pick up powerups, die when they should, don't clip through platforms), but isn't authentic to what actually occurred live. Recording only positions produces an inaccurate simulation, but (at least for the Player object) is precisely what the player (human) did live. The choice then is easy for me. I will stick with a warping, clipping, invincible replayed player that at least shows me, generally, how the player (human) moved through the level.
After all that text, I really don't have anything to show for the replay system. Right now, it is hackishly integrated into the game. I need to remember to check and un-check certain check-boxes before starting a round to be recorded. There is no indication that the current session is a playback session or a live one (other than your inputs not working). If you manage to start playing back a recording on a level other than the one it was recorded on, you get some hilarious results.
Okay, enough. Just like I'm sure you're sick of reading a boring wall of text about replay system pros and cons, I was bored of implementing them. So instead of actually refactoring the replay system into a usable state, I took a detour to try to add some 'polish' to my laser. I could have simply animated it with traditional sprite-sheet animation, but I knew this was a good time to try to learn something about those mysterious graphical enigmas known as shaders! I won't go into any of the technical details for this one. In fact, I couldn't if I tried. I ripped off a shader somebody posted on reddit a while back, modified it slightly to expose some properties instead of hard-coded constants, and slapped it on my lasers. The result:
Or by slightly tweaking some parameters:
Or what I am 'settled' on for now:
That's all folks!
I changed my steering implementation to not always snap the sprite's orientation to the velocity vector. This was causing issues where slight bumps (asteroids, lasers) were making the agents freak out. Instead, the sprite is now on a separate game object that can turn independently from the 'engine', and always tries to face the player. It's a little unnatural to see a rigid vessel like a spaceship having decoupled movement and 'vision', but I rationalize it as the invaders having developed advanced thrust-vectoring engines that let them essentially hover and turn on a dime.
Oh, I also changed the orientation to landscape, which gives me quite a lot more screen real estate. That was sorely needed.
Added some more steering behaviors. The first one was "path follow", which actually isn't so much a steering behavior as a planner for the Seek behavior.
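"Planner for Seek" can be sketched like this: keep a list of waypoints, feed the current one to Seek, and advance when close enough. This is illustrative Python, not the game's C#; `arrive_radius` and the names are my own.

```python
# Sketch of "path follow as a planner for Seek".
import math

def seek(pos, target, max_speed):
    """Desired velocity: full speed straight at the target."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy)
    if d == 0:
        return (0.0, 0.0)
    return (dx / d * max_speed, dy / d * max_speed)

class PathFollow:
    def __init__(self, waypoints, arrive_radius=0.5):
        self.waypoints = list(waypoints)
        self.arrive_radius = arrive_radius
        self.index = 0

    def desired_velocity(self, pos, max_speed):
        if self.index >= len(self.waypoints):
            return (0.0, 0.0)  # path finished
        target = self.waypoints[self.index]
        if math.hypot(target[0] - pos[0], target[1] - pos[1]) < self.arrive_radius:
            self.index += 1  # reached this waypoint; plan the next one
            return self.desired_velocity(pos, max_speed)
        return seek(pos, target, max_speed)

path = PathFollow([(0.0, 0.0), (10.0, 0.0)])
# Already within arrive_radius of the first waypoint, so it advances
# and seeks the second.
assert path.desired_velocity((0.1, 0.0), 5.0) == (5.0, 0.0)
```

The planner only decides *which* point to seek; Seek itself stays a dumb, reusable behavior.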
Next was "Cohesion" which is the opposite of Separation: it induces entities to stay together. This is added to the 3 red dudes in the following video. Notice how they 'try' to stay together, while the other main groups spread out.
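Cohesion boils down to steering toward the centroid of your flockmates. A minimal Python sketch (the real implementation is in Unity/C# and surely has weights and radii this leaves out):

```python
# Sketch of Cohesion: pull toward the group's center of mass,
# the opposite of Separation's push.
def cohesion(pos, neighbors):
    if not neighbors:
        return (0.0, 0.0)
    cx = sum(n[0] for n in neighbors) / len(neighbors)
    cy = sum(n[1] for n in neighbors) / len(neighbors)
    # Desired direction is from me toward the centroid.
    return (cx - pos[0], cy - pos[1])

# Two flockmates to the right: the force points right, at their midpoint.
assert cohesion((0.0, 0.0), [(2.0, 0.0), (4.0, 0.0)]) == (3.0, 0.0)
```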
Started working on some basic steering behaviors for enemy AI:
I kinda already started this with the missile (seeking and obstacle avoidance), but those behaviors were totally coupled to the missile class. Tonight, I pulled them out into a generic architecture. In addition to updating Seek and Avoid, I added Separation, which keeps enemies separated.
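Separation is the standard "push away from close neighbors" force. Here's an illustrative Python sketch with a 1/d falloff (the radius and falloff are my choices, not necessarily the game's):

```python
# Sketch of Separation: each nearby neighbor pushes the agent away,
# closer neighbors pushing harder.
import math

def separation(pos, neighbors, radius=3.0):
    fx, fy = 0.0, 0.0
    for nx, ny in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        d = math.hypot(dx, dy)
        if 0 < d < radius:
            # Unit direction away from the neighbor, scaled by 1/d.
            fx += dx / d / d
            fy += dy / d / d
    return (fx, fy)

# A neighbor one unit to the left pushes the agent to the right.
fx, fy = separation((0.0, 0.0), [(-1.0, 0.0)])
assert fx > 0 and fy == 0.0
```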
Without separation:
With separation:
Unfortunately, the separation kinda overrides the avoidance, so they are quite prone to colliding with the asteroids. This is a known issue and limitation of steering behaviors. Usually it is solved with weights and/or priorities, but it always ends up requiring tedious balancing of parameters to get it to 'feel' right. I'll have to cross that bridge eventually.
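The weight-based blending mentioned above is conceptually trivial; the pain is entirely in tuning the numbers. A sketch, with completely made-up weights:

```python
# Sketch of weighted steering blending: each behavior returns a
# force, and hand-tuned weights decide who wins.
def blend(forces_and_weights):
    fx = sum(f[0] * w for f, w in forces_and_weights)
    fy = sum(f[1] * w for f, w in forces_and_weights)
    return (fx, fy)

# Avoidance weighted heavier than separation, so agents would rather
# bunch up than hit an asteroid (weights here are illustrative only).
avoid = (0.0, 1.0)
separate = (1.0, -0.5)
assert blend([(avoid, 2.0), (separate, 1.0)]) == (1.0, 1.5)
```

Priority schemes go one step further and drop lower-priority forces entirely when a high-priority one (like avoidance) is non-zero.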
First things first: ever since I added the post-processing effects, my GIFs are way too big (10-15 MB) for sharing. Even cutting the FPS down to 10 doesn't help much. So I'm going with Streamable videos. I'm curious what other people think of this, or what their preferred formats are.
Moving to the actual work I accomplished today:
Added a nifty shield-hit effect:
Added some missiles!:
Didn't post for the last couple days, but I actually have been busy! I added some post processing effects that make the game look ... better? I hope. The lens distortion is probably over-done, but I think it fits well with the circular nature of the game.
I also added the basics of a combat system: shields block normal blasters, but ion-type weapons defeat shields:
After spending virtually every waking moment from Feb through the end of April on my (last) class for my Masters degree, I'm finally back to game development.
Actually, there was a little thing called Ludum Dare 41 in there too. Check out my game: https://ldjam.com/events/ludum-dare/41/flatformer
But now, I'm finally back to my own games. I'm going to put Space Roguelike in Space on hold for a bit while I actually finish a game. Why? Because I realized that I had bitten off a bit more than I could chew. I still think that I can technically create all the systems and components I need, but I don't know if I have the skills or inclination to make a compelling and balanced roguelike.
So without further ado, I'm starting a new (but actually old) game!
Yea, okay, I worked on this exact concept about a year ago, but it was really hacked together, and I never really got out of (or even into) the prototype stage. I went back to the codebase and realized that (for a number of reasons) it's unusable, so I'm starting fresh. My goal is to release this game by 'the end of the summer', whatever that means. Wish me luck!
Spent a very small amount of time determining the actual 'leaf' nodes in the room graph. These will be useful for determining start and end positions on a map, as well as placing more difficult rooms with better loot because they're off the beaten path.
To implement, I actually had to make a whole new data structure. Before, I had a collection of Rooms and a separate collection of Edges. Now, I additionally have a collection of RoomNodes that have references to their neighbors (kind of replaces edges). A leaf node is any RoomNode that has exactly 1 neighbor.
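The structure is simple enough to sketch in a few lines of Python (illustrative names; the real RoomNode class is in C# and holds more than this):

```python
# Sketch of the RoomNode structure and leaf detection described above.
class RoomNode:
    def __init__(self, name):
        self.name = name
        self.neighbors = []  # replaces the separate Edge collection

    def connect(self, other):
        self.neighbors.append(other)
        other.neighbors.append(self)

    def is_leaf(self):
        # A leaf room has exactly one way in or out.
        return len(self.neighbors) == 1

a, b, c = RoomNode("a"), RoomNode("b"), RoomNode("c")
a.connect(b)
b.connect(c)
assert a.is_leaf() and c.is_leaf() and not b.is_leaf()
```

Dead-end rooms fall out of the graph for free; no pathfinding required.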
I am not doing anything with this knowledge yet, but here I added some square gizmos that identify leaf nodes in the scene view:
Not a very productive day.
I managed to clean up some stuff I broke while making the A* hallways. To make those work, I had to change the definition of what was Passable in the grid from Floors to empty space. Next, I changed the definition of Neighbors to only be the 4 cardinal directions, since I don't really want diagonal corridors (they look weird).
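Those two grid changes look roughly like this in sketch form (Python for illustration; tile symbols and function names are mine):

```python
# Sketch of the grid changes: passability = empty space, and
# neighbors limited to the 4 cardinal directions (no diagonals).
EMPTY, FLOOR, WALL = ".", "f", "#"

def passable(grid, x, y):
    # Hallways get carved through empty space, not across floors.
    return 0 <= y < len(grid) and 0 <= x < len(grid[0]) and grid[y][x] == EMPTY

def neighbors(grid, x, y):
    cardinal = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(nx, ny) for nx, ny in cardinal if passable(grid, nx, ny)]

grid = [
    "...",
    ".f.",
    "...",
]
assert neighbors(grid, 0, 0) == [(1, 0), (0, 1)]
assert (1, 1) not in neighbors(grid, 1, 0)  # floor tile is not passable
```

Anything that previously assumed "passable = floor" (like the Player's movement) breaks immediately, hence the refactoring.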
All of this epically broke the Player's ability to actually move, so I did some refactoring to make those parts configurable, and now everything is working.
Better than nothing!
I added the ability to connect rooms that aren't vertically or horizontally aligned.
My initial thought was to make crude 'L' shaped hallways, but I realized that they would almost always intersect some other room/hallway, and I didn't want that.
My second thought was to use pathfinding to actually generate the hallway, and I got it about 99% working. Unfortunately, I'm not sold on how the results actually look.
Here are some pics (note: new A* hallways are in brown for illustrative purposes)
This third one exhibits the issue that keeps this only a 99% solution. The 2 rooms with brown doorways, but no connected path, are supposed to be connected, but no path can be found because of the 'courtyard' formed south of the left room.
I'm already picking where to start the hallways by choosing a spot on the wall of each room that has an open-air tile next to it, but that isn't enough.
I think my 2 easiest options are:
Finally, and I almost forgot, I don't really like the aesthetic of the winding hallways anyway. They'd fit great in a dungeon setting, but not so much in a space station. Might have to go back to the drawing board...
Added some doors between rooms. Okay, not actual doors that can be opened and closed, but I carved open hallways between rooms.
I haven't covered the case where 2 rooms are neither vertically nor horizontally aligned. For now, those rooms are inaccessible, so that's next.
Added a minimum-spanning-tree implementation to determine how all the rooms should be connected.
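For reference, a minimum spanning tree over room centers can be done with Prim's algorithm in a dozen lines. This is a generic sketch in Python, not the game's actual implementation:

```python
# Sketch of Prim's MST over room centers: always add the cheapest
# edge from the tree to a room not yet in it.
import math

def mst(points):
    """points: list of (x, y) room centers -> list of (i, j) edges."""
    if not points:
        return []
    in_tree = {0}
    edges = []
    while len(in_tree) < len(points):
        best = None
        for i in in_tree:
            for j in range(len(points)):
                if j in in_tree:
                    continue
                d = math.dist(points[i], points[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

rooms = [(0, 0), (1, 0), (10, 0)]
# Connects 0-1 first (distance 1), then 1-2; never the long 0-2 edge.
assert mst(rooms) == [(0, 1), (1, 2)]
```

An MST guarantees every room is reachable with the fewest hallways; some generators then add back a few extra edges for loops.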
Been stumped by my overly-complicated combat system lately, so I decided to take a break and start some proc-gen.
This is just a really simple algorithm that randomly places some rectangles with rigidbodies, lets the physics engine separate them, and then rasterizes the result to the tile grid. It's a start!
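The physics-separation step needs an engine, but the rasterization step is easy to sketch on its own (Python for illustration; tile values are my own convention):

```python
# Sketch of the rasterization step: once physics has separated the
# rectangles, stamp each one onto the tile grid.
def rasterize(rects, width, height):
    """rects: list of (x, y, w, h) in tile units -> 2D grid of 0/1."""
    grid = [[0] * width for _ in range(height)]
    for x, y, w, h in rects:
        for ty in range(max(y, 0), min(y + h, height)):
            for tx in range(max(x, 0), min(x + w, width)):
                grid[ty][tx] = 1  # 1 = floor tile
    return grid

grid = rasterize([(0, 0, 2, 2), (3, 0, 1, 1)], 5, 3)
assert grid[0] == [1, 1, 0, 1, 0]
assert grid[2] == [0, 0, 0, 0, 0]
```

Snapping the separated rectangles to integer tile coordinates before stamping is what turns free-floating physics bodies into clean rooms.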
Was on a ski trip with my in-laws from Thursday till tonight. However, I managed to get some gamedev in on the plane ride back.
Nothing worth making a GIF of, but I made it so normal attacks will NOT trigger the defender's damage reaction if they miss. This was working for the projectile a couple updates ago, but I made it more generic for any attack.
I also added logic so the grappling hook will trigger the OnArrival behavior of the defender when it performs the final 'kick out'. This means, if you grapple an enemy right on top of a trap, it will actually take damage.
Not a lot of motivation today. Watched the Spartans hang on to beat a shitty Iowa team by 3. Not very inspiring.
However, I managed to add a little feature. Previously, it was possible to grappling-hook anything, including traps, which I want to limit. Now, there is a flag in the base Entity class, 'moveable', which is considered when trying to grapple. Of course, the Trap wasn't actually inheriting from Entity, so I had to change that.
No GIF because I can't show something not working. Also, I'm lazy. Goodnight
Updated the grappling hook to calculate the nearest non-blocked tile and 'kick' (that's what I picture in my head at least. No actual animation) the defender to that position.
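One simple way to find that nearest non-blocked tile is a breadth-first search outward from the attacker. A hypothetical Python sketch (the real game is in C# and its grid API is different):

```python
# Sketch: BFS outward from the attacker's tile to find the closest
# free tile for the grappled defender's landing spot.
from collections import deque

def nearest_open_tile(blocked, start):
    """blocked: set of (x, y) tiles; start: attacker's tile."""
    seen = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) not in blocked and (x, y) != start:
            return (x, y)  # first free tile found is also the closest
        for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append((nx, ny))
    return None  # fully walled in

blocked = {(1, 0), (0, 1), (0, -1)}
assert nearest_open_tile(blocked, (0, 0)) == (-1, 0)
```

BFS guarantees the first open tile popped is at minimum grid distance, which matches the 'kick out to the nearest spot' behavior.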
It's lacking the "OnArrival" behavior when the defender reaches its final destination. This would trigger the traps attack, for example.
Started some very basic work on the grappling hook today:
The obvious flaw right now is that it pulls the enemy onto the attackers tile. This technically is allowed in my combat system, but I don't want to encourage it. In fact, I may make it impossible for 2 entities to occupy the same tile, with the exception of traps.
To fix this part of the hook, I'll have to deduce the proper nearest point to the attacker to make the destination point for the victim. That shouldn't be too hard, but I'm honestly not sure what it should look like. If the victim isn't going to get pulled to center of the attacker, where should the grappling hook start? Should it have 3 points (attacker's position, destination tile, victim's position)? Or should I just fudge it somehow?
I was absolutely not feeling like doing gamedev after work and class tonight. I actually played video games (Rocket League) for 2 hours instead of doing gamedev, which is rare for me recently.
By 10pm I finally got the motivation to do some gamedev. I wanted to add a 'grappling hook' ability that will pull an entity right in front of the attacker. This will be a utility move for melee classes for closing the distance to ranged enemies. However, when I sat down to implement it, I realized it should be loosely based on my 'push' ability, which I haven't kept up to date since the AbilityVisualization and ability UI system reworks. I spent about 30m updating it to a barely functional level (didn't actually implement a visualization, but the ability still 'waits' for all the enemies to move before completing). And then I lost motivation, and am going to bed. 30m is better than 0m!
Added something I've been meaning to get to: missing! Now attacks actually have the concept of not always hitting! For now, it's a straight 50/50 RNG roll, but eventually it will be configurable per ability, and might be modified by things like Entity accuracy, blind state, or target's evasiveness.
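The roll itself is one line; the interesting part is leaving hooks for the future modifiers. A Python sketch where the accuracy/evasion knobs are hypothetical (the post only commits to the flat 50/50):

```python
# Sketch of the hit roll: flat 50% base, with hypothetical additive
# modifiers for accuracy and evasion (design not final per the post).
import random

def roll_hit(rng, base_chance=0.5, accuracy=0.0, evasion=0.0):
    # Additive modifiers, clamped to [0, 1], purely for illustration.
    chance = max(0.0, min(1.0, base_chance + accuracy - evasion))
    return rng.random() < chance

rng = random.Random(42)
hits = sum(roll_hit(rng) for _ in range(1000))
assert 400 < hits < 600  # roughly 50/50 over many rolls
```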
Implementing it required giving all AbilityVisualizations the concept of 'missing' as well. The only one I've gotten around to implementing is the ProjectileVisualization. Now, it actually goes past the intended target if it misses! It doesn't do anything fancy like detecting walls, so it looks kinda silly, but that can be refined later.
Once again, my group project has been sucking my mojo away from gamedev, but I managed to be decently productive for ~2hrs of time.
The Player and the Blob were duplicating a lot of code for things like: monitoring when an ability finished, updating passive traits every turn, updating their combat component every turn. I refactored that into a base class, and now they're only different in logical ways.
Generalized the StunTrait that gets applied by certain attacks to be any PassiveTrait via another ScriptableObject hierarchy. Technically, I could make an attack that gives the enemy the passive ActionRegen trait upon hitting, neat!
Here is a GIF of the Grenade ability imparting a 3 turn 'blind' state. Blind will eventually have some effect on combat (lower accuracy?) but for now it doesn't actually do anything:
Tonight, I added quite a bit of infrastructure to support "states". These will be various conditions that can affect an entity, such as stun, rooted, and blind. This is implemented as a bitmask. Next, I added a PassiveTrait for stun that makes it easy for Abilities to apply a stun that lasts 3 turns and then removes itself. While stunned, an entity basically forfeits its turn. Finally, I added some UI to the EntityCanvas to visualize the current states.
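A bitmask of states looks roughly like this (Python flags for illustration; the game's version is a C# enum, and the state names beyond stun/rooted/blind are from the post):

```python
# Sketch of the state bitmask: one bit per condition, so an entity
# can hold several states at once and checks are cheap.
from enum import IntFlag

class State(IntFlag):
    NONE = 0
    STUNNED = 1
    ROOTED = 2
    BLIND = 4

entity_states = State.NONE
entity_states |= State.STUNNED | State.BLIND   # e.g. grenade: stun + blind
assert entity_states & State.STUNNED           # stunned -> forfeit the turn
entity_states &= ~State.STUNNED                # stun wears off after 3 turns
assert not entity_states & State.STUNNED
assert entity_states & State.BLIND             # blind persists independently
```

The PassiveTrait then just sets its bit on apply and clears it when its turn counter runs out.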