In a new study, researchers are drawing on cognitive science and social cognition theory, a branch of cognitive psychology that examines how our brains work and how social interactions form, to understand how to build robots and artificial intelligence (AI) software.
The work is part of a broader effort by cognitive science researchers to build artificial intelligence systems that can understand the world and respond to humans with compassion and empathy.
Researchers at the University of Florida and the University at Albany in New York conducted the study to better understand the cognitive systems that underpin human perception, speech, and social interaction.
A computer program developed at the University of Florida, currently in the early stages of testing with human volunteers, uses deep neural networks, among the most sophisticated types of neural networks.
It learns to recognize the patterns that objects form in images and then uses that information to predict when and where an object will appear in a scene.
To do that, the program analyzes large numbers of images of objects.
Researchers said the program was able to identify a “pattern” in an image by examining the shape of an object and then use that shape to predict where the pattern will appear.
“That’s the basic building block of the neural network.
It learns that a particular object shape is a pattern and then it uses that pattern to predict how it will look in the future,” said researcher David Gaffney, the study’s first author.
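The core idea described here, learning a shape as a pattern and then predicting where that pattern sits in a scene, can be sketched in a few lines of Python. This is a hypothetical illustration using simple template matching, not the team's actual network; the scene, shape, and scoring function are all invented for the example:

```python
import numpy as np

def match_template(scene, template):
    """Slide a shape template over a scene and score each position by
    normalized cross-correlation; return the best-matching location."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(scene.shape[0] - th + 1):
        for c in range(scene.shape[1] - tw + 1):
            patch = scene[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum()) or 1.0
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy scene: an 8x8 image with a 3x3 "L" shape placed at row 2, column 4.
scene = np.zeros((8, 8))
shape = np.array([[1, 0, 0],
                  [1, 0, 0],
                  [1, 1, 1]], dtype=float)
scene[2:5, 4:7] = shape

pos, score = match_template(scene, shape)
print(pos)  # (2, 4): the predicted location of the pattern
```

A real deep network learns its pattern detectors from data rather than being handed a template, but the recognize-then-locate step it performs is the same in spirit.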
Gaffney is the John B. Gaffrey Distinguished Service Professor of Electrical Engineering in the University of Florida's school of engineering.
What’s more, the neural networks learned to distinguish objects in the real world from images on the computer screen.
Using a system called a deep convolutional neural network, or DNN, to analyze images, the computer program was trained to distinguish between images of a bird and a dog.
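The ingredients of such a network, convolution, a nonlinearity, pooling, and class scores, can be sketched in miniature. The following is an illustrative NumPy forward pass with random, untrained weights, not the researchers' model; with real training, the kernels and weights would be fit to labeled photos of birds and dogs:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Valid 2-D convolution of one image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((len(kernels), oh, ow))
    for k, kern in enumerate(kernels):
        for r in range(oh):
            for c in range(ow):
                out[k, r, c] = (image[r:r + kh, c:c + kw] * kern).sum()
    return out

def predict(image, kernels, weights):
    """conv -> ReLU -> global average pool -> linear class scores."""
    features = np.maximum(conv2d(image, kernels), 0)   # ReLU nonlinearity
    pooled = features.mean(axis=(1, 2))                # one number per filter
    scores = weights @ pooled                          # two class scores
    return int(scores.argmax())                        # 0 = "bird", 1 = "dog"

# Random parameters stand in for learned ones in this sketch.
kernels = rng.standard_normal((4, 3, 3))
weights = rng.standard_normal((2, 4))
image = rng.standard_normal((8, 8))
label = predict(image, kernels, weights)
print(["bird", "dog"][label])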
That’s important because DNNs like this one are the building blocks of larger systems that make sense of the visual world.
The system is trained to recognize images of dogs, cats, and people, and it’s then used to predict which images will appear on the screen and which images won’t.
And it’s also trained to predict how a person’s facial expression will change depending on an object’s shape.
The output of that prediction system, called the facial recognition model, lets a computer recognize people in photos, for example, and supports other tasks such as detecting the shapes of objects in an environment and predicting how those shapes will change.
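One common way such a recognition model is applied is by matching a new photo's feature vector against a gallery of known people. The sketch below is a hypothetical illustration of that matching step; the names, embeddings, and threshold are invented, and a trained network would produce the vectors from actual photos:

```python
import numpy as np

def recognize(embedding, gallery, threshold=0.8):
    """Compare a face embedding against a gallery of known people by
    cosine similarity; return (name, similarity), or (None, similarity)
    when no gallery entry clears the threshold."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    best_name, best_sim = None, -1.0
    for name, ref in gallery.items():
        sim = cos(embedding, ref)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return (best_name, best_sim) if best_sim >= threshold else (None, best_sim)

# Hypothetical embeddings for two enrolled people.
gallery = {"alice": np.array([0.9, 0.1, 0.2]),
           "bob":   np.array([0.1, 0.9, 0.3])}
probe = np.array([0.85, 0.15, 0.25])   # embedding of a new photo
name, sim = recognize(probe, gallery)
print(name)  # alice
```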
DNNs are especially useful in recognizing faces because they are relatively easy to work with, said researcher Alex Pazderski, who is also an associate professor in the Department of Electrical and Computer Engineering.
“It’s very easy to build a computer program to do that sort of thing, which means that if you want to use a DNN to train a machine to recognize faces in images, you can build that machine out of the same basic features.”
A new program, called Empathy and the Dannas, aims to take those same basic neural networks and train them to understand human emotions and other social interactions.
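Retraining an existing network's final layer on new labels is a standard way to repurpose it for a task like this. Below is a minimal sketch of that last step only: a logistic-regression head fit by gradient descent to toy "expression" features with invented labels, which stands in for whatever features and emotion categories the project actually uses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy expression features (e.g. mouth curvature, brow height) with
# hypothetical labels: 1 = positive emotion, 0 = negative emotion.
X = rng.standard_normal((200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Logistic-regression head trained by gradient descent, the same pattern
# used when a pretrained network's last layer is refit to new labels.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.5 * (p - y).mean()                # gradient step on bias

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((p > 0.5) == y.astype(bool)).mean()
print(round(accuracy, 2))
```

Because the toy labels are linearly separable, the head fits them almost perfectly; real emotion labels are noisier and would need richer features.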
Pazderski is a co-author of the paper, which was published in the journal Frontiers in Cognitive Science.
In addition to Gaffney and Pazderski, the other authors on the paper are University of Georgia professor Michael Schuster; University of Minnesota assistant professor of cognitive science Robert Strom; University at Buffalo assistant professor James J. Fassbender; University of Texas at Austin professor of applied neuroscience David R. Sperling; and University of South Florida professor of electrical engineering Michael H. Sibley.
“We think these are important directions for further development in this area,” said Gaffney, who noted that his own research has already shown that neural networks can be trained to correctly predict emotional states.
But he said the new study is a first step toward a more general type of neural network that can predict emotions, and more research will likely be needed to refine the algorithm.
“There are still some questions that we have to sort out, like what does this do to the neural system?” he said.
Schuster added that the project is an important step toward better understanding how we communicate with and understand our fellow humans, because humans rely on the brain’s own neural networks to build their representations of the social world.
“In a way, this is like the computer trying to understand us.
You don’t have a human to communicate with, but you have a computer that is trying to figure out how to do things,” he explained.
However, he added, it is not yet clear how far the field as a whole has progressed.
“This is a huge project, but it’s really early days,” Schuster said.