
Of Artificial Intelligence in Games

I spent the last three hours talking about a game AI engine. Purely theoretical, of course, but mathematician Rolf was bemoaning the sorry state of AI and writing in games. We covered a lot of ground, starting from story/quests and writing in general. That led to his complaints about the horrible AI in games and their scripted responses, which led to me laying out a typical (and completely theoretical) AI engine to solve the problem, which in turn led to a long discussion and argument about what such an engine would look like.

Essentially, I envision an engine/self-contained thing (for want of a better word) with possible input and output values. The engine itself is composed of a range of settings (I'll call them "gauges"). These gauges would have values like "Happiness", "Sociability", "Interactivity with Environment", "Energy", etc. (like The Sims, but with a LOT MORE, not just 6). We might get into "relationship" gauges, but let's save that for another discussion.

These gauges will come with built-in programming (perhaps every gauge has a "responsiveness" setting one can either set or randomize) that will affect other gauges. For example, if the Happiness gauge moves more than 5 points, then the "Generosity" or "Kindness" meter moves too. Likewise, if the "Anger" meter moves past a certain threshold, then for every 3 points it may affect the "Aggression" or "Activeness" meter, or both, or maybe this is a combination meter. There should also be time triggers. For example, if the Happiness meter has been at 100 for a long time (game standards; I'm not sure how one day would be represented, but I'm sure we could do something like "ten hours"), then a significant drop in happiness (say, 6 points or more) should affect a "Depression" meter.
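The gauge linkage above could be sketched roughly like this. Everything here is mine, for illustration only: the class name, the clamping to 0-100, and how "responsiveness" scales the knock-on effect are all assumptions, not part of the original idea.

```python
class Gauge:
    """A hypothetical gauge with built-in links to other gauges."""

    def __init__(self, name, value=50, responsiveness=1.0):
        self.name = name
        self.value = value
        self.responsiveness = responsiveness  # how strongly knock-ons land here
        self.links = []  # (threshold, target_gauge, delta_per_threshold_step)

    def link(self, threshold, target, delta):
        # "If this gauge moves more than `threshold` points in one change,
        # nudge `target` by `delta` for every `threshold` points moved."
        self.links.append((threshold, target, delta))

    def change(self, delta):
        self.value = max(0, min(100, self.value + delta))
        for threshold, target, knock_on in self.links:
            steps = abs(delta) // threshold
            if steps:
                # Knock-on effects can cascade through the target's own links.
                target.change(steps * knock_on * target.responsiveness)


happiness = Gauge("Happiness")
kindness = Gauge("Kindness")
anger = Gauge("Anger")
aggression = Gauge("Aggression")

# "If Happiness moves more than 5 points, Kindness moves too."
happiness.link(5, kindness, 1)
# "For every 3 points of Anger past the threshold, Aggression moves."
anger.link(3, aggression, 2)

happiness.change(+6)  # big enough to knock Kindness up by 1
anger.change(+9)      # three 3-point steps -> Aggression +6
```

A time trigger (the "happy for ten hours, then a 6-point drop" case) would sit on top of this, watching each gauge's history rather than single changes.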

[ETA: Forgot to mention a gauge I think is important--"Tolerance". The angrier/sadder one gets, the lower this gauge; the happier one gets, the higher. The "Tolerance" gauge affects how many times a so-called routine input can be repeated before it gets annoying. High-disruption inputs will use up Tolerance quickly; smaller inputs will chip away at it. Some inputs will pass by without registering at all, perhaps things like "light rain" or "low-level crowd noise" (well, up to a point--see "time triggers" above). "Tolerance" should repair itself over time, and the rate of repair can be affected by things such as "Anger", "Happiness", "Energy", etc.]
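A minimal sketch of that Tolerance idea, with every number a placeholder: disruptions below some floor don't register at all, larger ones drain the gauge, and the repair rate is nudged by mood. The class name, the 0-100 scale, and the mood formula are my inventions.

```python
class Tolerance:
    """Hypothetical Tolerance gauge: drained by disruptions, repaired over time."""

    def __init__(self, value=100.0, repair_rate=2.0):
        self.value = value
        self.repair_rate = repair_rate  # points recovered per game tick

    def register(self, disruption):
        # Tiny inputs ("light rain", "low-level crowd noise") pass unnoticed.
        if disruption < 1:
            return False
        self.value = max(0.0, self.value - disruption)
        # True once the NPC is out of patience -- the input is now "annoying".
        return self.value == 0.0

    def tick(self, happiness=50, anger=0):
        # Repair rate nudged by mood: happier repairs faster, angrier slower.
        rate = self.repair_rate * (1 + (happiness - anger) / 100)
        self.value = min(100.0, self.value + rate)


t = Tolerance()
annoyed = False
for _ in range(12):
    annoyed = t.register(10)  # the same routine input, repeated
```

After enough repetitions of the same 10-point disruption, `annoyed` flips to true; a few quiet ticks would then restore some patience.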

Those are the gauges. Then, we have the inputs.

Inputs are basically object interactions. Interactions can come through the five senses: touch, taste, smell, hearing, sight. For example, say chocolate is in the vicinity. If the AI becomes aware of the chocolate through any of the five senses, it may be a "Hunger +2" input. This may increase over time (a recurring input), or it may be a one-time thing such as receiving a gift ("Happiness +2" or more, depending on the gift's classification).

All inputs and objects should be classified. For example, a loud noise should be classified (or "tagged") "unknown", "noise". Anything "unknown" should prompt an investigation if the "Interactivity" or "Curiosity" gauge is above a certain threshold. (Most instinctual responses are either "run" or "look at/for source".) Perhaps once it is within line of sight and a set distance (say 100m, or any value you want to represent sensory ability), it can either be identified as a known object of sorts or as an "uninteresting" and "non-dangerous" unknown. (Anything tagged "unknown-uninteresting" should henceforth be ignored unless it changes suddenly--in which case it will lose the "unknown-uninteresting" tag, regain an "unknown" or other identifying tag, and be investigated again.)
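The tagging/investigation loop above might look something like this. The tag strings follow the post; the function names and the Curiosity threshold of 60 are made up for the sketch.

```python
def should_investigate(tags, curiosity, threshold=60):
    """Unknown things prompt investigation when Curiosity is high enough,
    unless they've already been filed as uninteresting."""
    if "unknown-uninteresting" in tags:
        return False  # ignored until it changes suddenly
    return "unknown" in tags and curiosity >= threshold


def identify(tags, interesting):
    """Once inspected within sensory range, the object is either identified
    or written off; a sudden change later would re-add the 'unknown' tag."""
    tags = set(tags) - {"unknown"}
    tags.add("identified" if interesting else "unknown-uninteresting")
    return tags


loud_noise = {"unknown", "noise"}
curious = should_investigate(loud_noise, curiosity=80)   # worth a look
filed = identify(loud_noise, interesting=False)          # written off
ignored = not should_investigate(filed, curiosity=100)   # now ignored
```

The "run" versus "look at/for source" choice would branch off the same check, probably keyed to a danger tag and a fear-type gauge.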

So far so good? Moving on.

Outputs would be gauge readings. For example, the "Energy" gauge output is now -3 (allowing negative values for superhuman efforts to stay awake in the presence of "danger" tags). The gauge reading should flag an instinctual reaction such as collapsing or falling asleep (depending on the reading of the "Security" gauge). Some instinctual reactions (those you would expect from a very young child of, say, 3 or 4) should be programmed into the AI engine.
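That exhaustion example, as a sketch. The reaction names and the Security cutoff of 30 are mine; the only parts from the post are that negative Energy forces a reaction, that danger overrides it, and that Security decides between the two outcomes.

```python
def instinctual_reaction(energy, security, danger_nearby=False):
    """Map an Energy reading to a built-in reaction, Sims-toddler style."""
    if danger_nearby:
        return "stay awake"  # superhuman effort in the presence of danger tags
    if energy <= 0:
        # Feeling safe -> just fall asleep; feeling unsafe -> body gives out.
        return "collapse" if security < 30 else "fall asleep"
    return None  # no instinct fires; normal behaviour continues
```

Each hard-wired instinct would be a small rule like this, with everything else left to the gauges.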


More complex behaviours, on the other hand, should not be programmed into the engine. Instead, the engine should be allowed to run itself and assign some random variables and behaviours at different times (or you can specify these variables, e.g. "grew up with no parents, therefore Happiness gauge maximum set at 95" or "Anger gauge more responsive" or whatever you want). You should also be able to evolve/run the AI until you get the required age, at which point it will produce a pretty good approximation of a generic Joe Schmoe who will react in a human enough manner to entertain AI-watchers, yet have enough variety that you don't feel like you're seeing the same computer model copied and pasted with a different gender or hair color. And it goes without saying, the AI can be scripted over: whenever a situation you have scripted for (specific interactions with X characters, or explosions, or whatever it is you want to control) occurs, your script takes over. The AI is there to fill in the gaps.
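The "script takes over, AI fills the gaps" rule is the simplest piece to sketch. The situation keys and action strings below are invented examples; the only real content is the priority order.

```python
def next_action(situation, scripts, ai_engine):
    """Scripted situations always win; everything else falls through to the AI."""
    if situation in scripts:
        return scripts[situation]
    return ai_engine(situation)


# Hypothetical script table and a stand-in for the gauge-driven engine.
scripts = {"meet_villain": "deliver_monologue"}
generic_ai = lambda situation: "improvise:" + situation
```

So `next_action("meet_villain", scripts, generic_ai)` obeys the writer, while an unscripted moment like light rain is handed back to the gauges.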

Er, bedtime. Feel free to poke with comments or suggestions. (All are welcome to link and discuss.)
Wow, this is very intriguing. :O I'm not familiar with AI or AI/games, but this looks really, really complicated. It would be interesting to play a game like this one day in the future, lol. :D
<3 Thanks! I'm not either, actually, besides spouting off. I'm more of a designer than a programmer where any sort of system is concerned. T_T
Re: Crawford
I have not, but that sounds like a pretty cool book. ^_^ I haven't read anything on game design, actually. ^_^;