Researchers from Rensselaer Polytechnic Institute are “engineering characters with the capacity to have beliefs and to reason about the beliefs of others. The characters will be able to predict and manipulate the behavior of even human players, with whom they will directly interact in the real, physical world, according to the team.”
The following are excerpts from the article:
At a conference on artificial intelligence, the researchers demonstrated the “embodiment” of their success to date: “Eddie,” a 4-year-old child in Second Life who can reason about his own beliefs to draw conclusions in a manner that matches human children his age.
“Current avatars in massively multiplayer online worlds — such as Second Life — are directly tethered to a user’s keystrokes and only give the illusion of mentality,” said Selmer Bringsjord, head of Rensselaer’s Cognitive Science Department and leader of the research project. “Truly convincing autonomous synthetic characters must possess memories; believe things, want things, remember things.”
To test “Eddie’s” reasoning, the group recreated a classic false-belief experiment in Second Life. In the real-life version of the task, a child watches Person A place an object in a certain location and then leave the room; while A is away, Person B moves the object to a new location. Asked where Person A will look for the object, children age 4 and under generally name the second location, because “they haven’t yet formed a theory of the mind of others.”
Via software that simulated keystrokes of a human user, “Eddie” demonstrated “an incorrect prediction of where Person A will look for the teddy bear — a response consistent with that of a 4-year old child. But, in an instant, Eddie’s mind can be improved, and if the test is run again, he makes the correct prediction.”
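The behavior described above can be illustrated with a minimal sketch in Python. This is a hypothetical toy model, not the RPI team's actual system: an observer either answers from the true state of the world (no theory of mind, the 4-year-old's error) or from what the watched agent actually witnessed (theory of mind enabled, the corrected prediction). The class and names are invented for illustration.

```python
class Observer:
    """Toy false-belief model: tracks the true world state and,
    separately, what a watched agent (Person A) has witnessed."""

    def __init__(self, theory_of_mind: bool):
        self.theory_of_mind = theory_of_mind
        self.world = {}       # where objects actually are
        self.believed = {}    # where Person A last saw them

    def event(self, obj: str, location: str, witnessed: bool) -> None:
        """Record an object being moved; update A's belief only if A saw it."""
        self.world[obj] = location
        if witnessed:
            self.believed[obj] = location

    def predict_search(self, obj: str) -> str:
        """Predict where Person A will look for the object."""
        # Without a theory of mind, the observer answers from the true
        # world state; with one, from Person A's (possibly false) belief.
        return self.believed[obj] if self.theory_of_mind else self.world[obj]


# Person A puts the teddy bear in the cabinet, then leaves the room.
eddie = Observer(theory_of_mind=False)
eddie.event("teddy", "cabinet", witnessed=True)
# Person B moves it to the chest while A is away.
eddie.event("teddy", "chest", witnessed=False)
print(eddie.predict_search("teddy"))   # "chest" -- the incorrect prediction

# "Improving Eddie's mind": enable belief ascription and re-run the test.
eddie.theory_of_mind = True
print(eddie.predict_search("teddy"))   # "cabinet" -- the correct prediction
```

The two `print` calls mirror the two runs of the demonstration: the same event history yields the childlike error or the correct answer depending solely on whether the observer reasons about Person A's beliefs rather than about the world itself.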
A video clip of the “False Belief in Second Life” demonstration accompanies the article.
“Our aim is not to construct a computational theory that explains and predicts actual human behavior, but rather to build artificial agents made more interesting and useful by their ability to ascribe mental states to other agents, reason about such states, and have — as avatars — states that are correlates to those experienced by humans,” Bringsjord said. “Applications include entertainment and gaming, but also education and homeland defense.”