Mybacklog
Continued experimenting with the tasks data change. Also tried to import my new copy of Grim Fandango from Steam, and noticed it was not being added to the list. I realized that through all of the recent data conversions I had never updated any of the import functions :( Spent some time cleaning those up. And then played Grim Fandango :P
Erik
Designed some "modules". Little actually written; more in my head, thinking about how I can more easily boil down the gameplay elements. I know there will be some kind of process: design a module; implement the module through Tiled, events, conversations, and possibly a new type of thing; and then tweak it from there. But I haven't narrowed the gap yet. What I'm calling a module is essentially a series of actions a specific npc tries to take when certain conditions are met. Currently, there is a kind of "entry" to the module (when the npc starts attempting this process), then a series of states where a success moves onto the next state. Certain actions by other characters (or the player) may cause a state to either fail or be delayed. And then, sometimes, certain conditions will change to a different state, or loop back.
Yup. Certainly looks like a finite state machine to me. This leads me to think it may be a good idea to implement it that way. However, I already have events, conversations, and behavior trees, which are each like their own mini scripting language. Does it really make sense to implement yet another type of thing? Another option would be to build the modules into the existing behavior trees, either by giving the ai separate high-level/low-level BTs (one determines which stage of the module we are in, while the low-level tree handles movement and reactivity), or by just having a part of the existing BT check/change module stages.
Will have to think about this more. I'm at the stage where it's less and less useful to just start coding, and it pays to do a bit more up front planning.
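As part of that up-front planning, here is a minimal sketch of what a module-as-FSM could look like. All names here (Module, ModuleState, npc.attempt) are hypothetical, not from the actual codebase; it just illustrates the entry condition, success chaining, failure fallback, and delay described above.

```python
# Hypothetical sketch: a "module" as a small finite state machine.
# Class and method names are illustrative assumptions.

class ModuleState:
    def __init__(self, name, on_success, on_fail=None):
        self.name = name
        self.on_success = on_success  # next state when the action succeeds
        self.on_fail = on_fail        # optional state to fall back to on failure

class Module:
    def __init__(self, npc, entry_condition, states, start):
        self.npc = npc
        self.entry_condition = entry_condition  # callable: should the npc start?
        self.states = states                    # dict of name -> ModuleState
        self.start = start
        self.current = None                     # None means "not yet entered"

    def update(self, world):
        # Entry: the npc only starts attempting the process when conditions are met.
        if self.current is None:
            if self.entry_condition(self.npc, world):
                self.current = self.start
            return
        state = self.states[self.current]
        # The npc attempts the state's action; assume it reports an outcome string.
        result = self.npc.attempt(state.name, world)  # "success", "fail", or "delayed"
        if result == "success":
            self.current = state.on_success
        elif result == "fail" and state.on_fail:
            self.current = state.on_fail
        # "delayed": stay in the current state and retry next tick
```

A looping module would just point a later state's on_success back at an earlier state name.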
Two small changes:
Previously the ai would only walk. I've modified the chase behavior to make them run while it is active. Since it uses the same code as the player's run, an ai chasing you will eventually run out of energy and slow down. I may need to add some smarts to the ai about when it's a good idea to run and when it's better to conserve energy. Maybe a genetic function?
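The run-out-of-energy effect boils down to something like the following sketch. The field and constant names (energy, RUN_SPEED, drain rate) are made up for illustration; the real behavior shares the player's run code.

```python
# Illustrative sketch of chase speed with an energy budget.
# All names and numbers here are assumptions, not the real tuning values.
RUN_SPEED = 2.0
WALK_SPEED = 1.0
DRAIN_PER_SECOND = 10.0

def chase_speed(npc, dt):
    """Run while chasing, but fall back to walking once energy is spent."""
    if npc.energy > 0:
        npc.energy = max(0.0, npc.energy - DRAIN_PER_SECOND * dt)  # running drains energy
        return RUN_SPEED
    return WALK_SPEED
```

A "smarter" ai would wrap this in a decision about whether the chase is worth the energy at all.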
I also actually implemented the burning logic in the campfire, so it can burn all the way out. For now it just deletes the campfire item, but it's suitable enough for testing. You can add logs to extend the time it sticks around.
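The burn-out logic is simple enough to sketch in a few lines. This is a hypothetical reconstruction (the class name, fuel numbers, and the burned_out flag standing in for item deletion are all assumptions):

```python
# Rough sketch of campfire burn-out; names and numbers are illustrative.
LOG_BURN_TIME = 60.0  # seconds of burn time added per log (made-up value)

class Campfire:
    def __init__(self, fuel=120.0):
        self.fuel = fuel
        self.burned_out = False

    def add_log(self):
        self.fuel += LOG_BURN_TIME  # each log extends how long the fire sticks around

    def update(self, dt):
        self.fuel -= dt
        if self.fuel <= 0 and not self.burned_out:
            self.burned_out = True  # stand-in for deleting the campfire item
```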
Optimally solving the opening scene will definitely involve getting enough logs to feed the campfire before night, which includes deciding who is the best member of the group to assign to collecting the logs. Perhaps I can set up some kind of dichotomy, where some people are more observant and can spot optimal firewood, but can't carry as much in one trip, while others are stronger and can carry it but are not as good at finding it. And then set up a vote where - unsurprisingly - the player's vote is the deciding one.
Next up: building an event system. In my little story outline, even within the single "scene" I designed for Act 1, there are several mini-scenes that the npcs are supposed to go through. At the beginning, Erik is separated from his party and has to go looking for them. When they see him, they should be glad to see him. And so on, for multiple other things going on. While I intend for the core minute-to-minute gameplay to be driven by the ai simulation, there are some simple things that I need to be able to do to control the narrative somewhat. So I built a high-level event system. Each map has its own queue of events that it listens for, and then various actions in the game can trigger those events. This will be used for controlling high-level narrative flow - and should be used sparingly. The opening sequence of the game is one of the few obvious places to use such a system.
This:
{
  "": [
    {"type": "init", "result": [
      ["console", {"text": "Game initialized"}]
    ]}
  ],
  "woods": [
    {"type": "init", "result": [
      ["cam_focus", {"npc": "trask"}]
    ]},
    {"type": "camera_near", "distance": 20, "npc": "trask", "result": [
      ["conversation", {"conversation": "intro1", "npc": "yelda"}]
    ]},
    {"type": "conversation_end", "conversation": "test", "result": [
      ["cam_focus", {"npc": "erik"}]
    ]},
    {"type": "trigger", "trigger": "exit_tutorial", "npc": "erik", "result": [
      ["cam_focus", {"npc": "trask"}],
      ["conversation", {"conversation": "test2", "npc": "erik"}]
    ]}
  ]
}
Turns into a quaint little cutscene that plays at the beginning, along with some "gameplay", where the npcs see Erik when he moves into view. It's fairly unoptimized, checking all events every time a part of the code signals a possible event. I don't know how long the list of events for a given map is going to grow, so it might not even be a problem. In fact it probably won't. But it could certainly be optimized by building some kind of search tree according to each event type.
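A dispatcher driving that kind of JSON could look roughly like this. This is a guess at the shape of the real code: the function name, the "signal as a dict" convention, and the handlers mapping are all assumptions, and it ignores subtleties like the camera_near distance threshold (which would need a comparison rather than an equality check).

```python
# Sketch of a naive event dispatcher for one map's event queue.
# Matching rule (assumed): an event fires when every key it specifies
# (other than "result") appears in the signal with the same value.

def fire_event(queue, signal, handlers):
    """Check every queued event against a signal; run and consume matches.

    queue    - one map's event list, shaped like the JSON above
    signal   - e.g. {"type": "trigger", "trigger": "exit_tutorial", "npc": "erik"}
    handlers - maps action names ("cam_focus", "conversation", ...) to callables
    """
    remaining = []
    for event in queue:
        keys = [k for k in event if k != "result"]
        if all(signal.get(k) == event[k] for k in keys):
            for action, params in event["result"]:
                handlers[action](params)
        else:
            remaining.append(event)
    queue[:] = remaining  # matched events fire once and are removed
```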
The worst event is the "camera_near" one, because the camera fires the event every frame (triggering a match attempt on ALL events in the current map) even when there are no events looking for it. An easy early optimization there would be to keep track of what event types are in the queue, and silently return if we know there are no camera_near events waiting.
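That early return could be sketched by keeping a count of queued event types, something like the following (the EventQueue class and its methods are hypothetical, invented for illustration):

```python
# Sketch: track which event types are queued so per-frame signals like
# camera_near can bail out immediately when nothing is waiting for them.
from collections import Counter

class EventQueue:
    def __init__(self, events):
        self.events = list(events)
        self.type_counts = Counter(e["type"] for e in self.events)

    def has_type(self, event_type):
        return self.type_counts[event_type] > 0

    def remove(self, event):
        self.events.remove(event)
        self.type_counts[event["type"]] -= 1

    def signal(self, signal, match_and_run):
        # Silent early return: no queued event cares about this type.
        if not self.has_type(signal["type"]):
            return
        # match_and_run checks one event against the signal, fires its
        # results on a match, and returns True so we can consume it.
        for event in list(self.events):
            if event["type"] == signal["type"] and match_and_run(event, signal):
                self.remove(event)
```

With this, a camera_near signal fired every frame costs one dict lookup until such an event is actually queued.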
I'm still not sure about this. It makes a lot of things really easy, but it's also going to be a temptation to hardcode narrative rather than let the ai simulation create one. Finding that balance between narrative and simulation is what this project is all about, though!