public marks

PUBLIC MARKS from bcpbcp with tags gameai & games

March 2006

Make Mac Games » Blog Archive » Bullfrog: Improving AI

I’ve been steadily plugging away at my list of bugs and enhancements for the upcoming release and finally tackled one area that I had been avoiding: improving the AI of some of the bugs.

Gamasutra - PathEngine SDK 5.01 Officially Released

The creators of the PathEngine SDK, a toolkit for implementing intelligent agent movement based on a 'points of visibility' pathfinding solution implemented over arbitrary 3D ground meshes, have announced the release of V5.01. Key features are pathfinding through overlapping geometry, dynamic obstacle management, and content processing functionality.
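PathEngine's own API isn't shown here, but the underlying 'points of visibility' idea is straightforward to sketch: precompute a graph of waypoints that can see each other, then search that graph with A*. The Python below is a generic, hypothetical illustration of the technique, not PathEngine code.

```python
import heapq
import math

def dist(a, b):
    # Straight-line distance between two 2-D points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def visibility_astar(waypoints, edges, start, goal):
    """A* over a precomputed visibility graph.

    waypoints: dict node -> (x, y); edges: dict node -> list of visible nodes."""
    frontier = [(dist(waypoints[start], waypoints[goal]), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path                     # list of waypoints from start to goal
        for nxt in edges.get(node, []):
            new_g = g + dist(waypoints[node], waypoints[nxt])
            if new_g < best.get(nxt, float("inf")):
                best[nxt] = new_g
                f = new_g + dist(waypoints[nxt], waypoints[goal])  # admissible heuristic
                heapq.heappush(frontier, (f, new_g, nxt, path + [nxt]))
    return None                             # goal not reachable from start

# Tiny hypothetical graph: waypoint A sees B, and B sees C.
wp = {"A": (0, 0), "B": (5, 0), "C": (5, 5)}
vis = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(visibility_astar(wp, vis, "A", "C"))   # ['A', 'B', 'C']
```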

February 2006

Game/AI: AI Planning for games and characters CFP

Though AI planning has much to contribute to both these fields, particularly in producing more convincing non-player characters and autonomous intelligent characters, few AI planning researchers have been involved in this work, and the technology, where applied at all, has often been used in a somewhat ad hoc way. In addition, game companies' use of AI planning has so far been limited (A*-based motion planning being the main exception), with practitioners feeling that the technology is too computationally expensive or risky to integrate into computer games.

Gamasutra - Feature - "Anticipatory AI and Compelling Characters"

Much of the work in game AI has focused on the 'big' problems: path planning, squad planning, goal-directed behavior, etc. The result is characters that are capable of increasingly intelligent behavior. However, acting intelligently and acting aware and sentient are not the same thing. If we are to create the kind of compelling and emotional characters upon which the next generation of computer games will be based, we must solve the latter problem, namely how to build characters that seem aware and sentient.

Hikoza'n-CHI X - Games

A vertically scrolling shooting game in which there is only you and the boss ship. Each time you beat a boss, the next boss becomes stronger. You start with 180 seconds and the clock counts down; beating a boss adds time, and dying takes time away. Beat as many bosses as you can before time runs out.

January 2006

Fenixweb | AI for games at USP

1. Introduction to computer games. 2. Interaction in computer games: perception, action, and reaction. 3. Multi-agent systems. 4. Heuristics and metaheuristics. 5. Machine learning. 6. Knowledge representation and sharing. 7. Systems for game development.

Blobs in Games

Black and White 2 AI

I played Black and White 2 for many hours yesterday. The computer player and I were in a stalemate. The computer kept sending armies against me and I kept defeating them. I had built my town with walls around it, and then put archers on top of the walls. I was building up my strength while defending myself, in preparation for a big attack. I felt pretty safe.

After around 40 attacks, I realized that they weren't all the same. The computer wasn't using the same attackers each time. It tried the creature, archers, swordsmen, and catapults. It tried combinations of them. Sometimes it would come through my main entrance, and sometimes it would come around the back entrance to the city. The computer player also destroyed major sections of the city using the “earthquake” power, but I recovered from these too.

After a while the enemy creature figured out that he should kick my wall in. His archers and swordsmen stayed back, out of range, while the creature came up and destroyed my wall, including the archers on it. After it breached the wall, the army swarmed into my town and killed half my people. I rebuilt my wall and started to recover, but the computer's newly discovered strategy worked well. It tried several variants but kept going back to the same approach: kick down the wall, then swarm the town.

This forced me to try some new strategies. Although being on the wall has advantages, it leaves the archers vulnerable when the enemy creature attacks the wall. So I moved them behind the wall. I've also learned to open my gate, wait for the enemy army to get close, then close the gate and set their army on fire. I have no good strategy for the creature knocking down my wall though, and I'm constantly losing townspeople and then rebuilding. After a long stalemate, the computer AI learned how to attack more effectively, and now I'm having trouble keeping my city safe.

I'm very impressed by the AI. I'm not sure how it's programmed, but it tried out many different things and learned which ones work the best. From the game AI techniques I've learned (genetic algorithms, neural networks, fuzzy logic, state machines, etc.), the AI in Black and White 2 seems to match most closely with what I know about reinforcement learning. It's a technique that uses online learning (observing results as the game is played) instead of training (from examples constructed ahead of time), allows both exploration (trying new things in order to learn) and exploitation (taking advantage of what you've learned), and associates rewards (like whether the attack was successful) with actions (like kicking down the wall and keeping the army away from my archers). I recommend Sutton and Barto's book if you want to learn more. It's entirely possible though that the game uses something much simpler that just happens to look impressive, but my guess is that it's using reinforcement learning.

Amit, Monday, December 12, 2005
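The post only speculates about how Lionhead built this, so nothing below is the game's actual code. It is just a minimal Python sketch of the reinforcement-learning loop described above: epsilon-greedy choice over a handful of attack plans, with an online value update after each assault. All names, rewards, and parameters are hypothetical.

```python
import random

class AttackLearner:
    """Learns online which attack plan works best, balancing exploration and exploitation."""

    def __init__(self, plans, epsilon=0.2, learning_rate=0.3):
        self.values = {plan: 0.0 for plan in plans}   # estimated reward per plan
        self.epsilon = epsilon                        # probability of exploring
        self.learning_rate = learning_rate

    def choose_plan(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.values))    # explore: try something new
        return max(self.values, key=self.values.get)   # exploit: best plan so far

    def observe(self, plan, reward):
        # Online update: nudge the plan's value toward the reward just observed
        # (e.g. damage done to the town minus units lost in the assault).
        self.values[plan] += self.learning_rate * (reward - self.values[plan])

# Hypothetical usage: one learning step per assault.
learner = AttackLearner(["frontal_assault", "rear_gate", "kick_wall_then_swarm"])
plan = learner.choose_plan()
learner.observe(plan, reward=0.8)   # reward measured after the assault resolves
```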

November 2005

Game/AI: Great Expectation-Formations

"Forming expectations is a problem that is relatively easy to concretize: AIs have an internal knowledge model of the world. If the AI is able to provide an estimate for what that state is going to look like n ticks in the future, that’s an expectation – and naturally we’re going to want the AI to react not (or not just) to current world-state, but also to its expectation. React to its own expectation. I think that’s a neat idea, and architecturally it makes a lot of sense. Assuming we have a reactive behavior system of some sort already, we don’t, as a first pass, need to modify IT at all. All we need to do is provide future instead of current data. Great!"