Developments in Artificial Intelligence (A.I.) are happening faster today than ever before. However, the nature of progress in A.I. is such that massive technological breakthroughs might go unnoticed while smaller improvements get a lot of media attention.
Take the case of face recognition technology. The ability of an A.I. to recognize faces might seem like a very big deal, but it isn't all that groundbreaking when you consider the nature of applied A.I.
On the other hand, suppose an A.I. is asked to choose between genres of music, such as R&B or rock. While it may seem like a simple choice, the mathematical problem that must be solved before the A.I. makes a decision could take hours or even days.
General A.I. vs. Advanced A.I.
Most people get their idea of A.I. from Hollywood movies and science fiction. They assume that A.I. robots would work and think in the same ways as human beings do. They tend to think of the Terminator or Data from Star Trek: The Next Generation (TNG).
These fictional characters are examples of A.I. that behaves in a very general manner. The A.I. that developers are actually working on is much more advanced in a narrower sense: it will be able to perform very complex calculations, but only in a very limited field, while still being unable to perform some of the basic functions that humans handle easily.
General A.I. Is Not Very Profitable
Take the example of a dishwasher that is programmable and cleans dishes very well. A dishwasher manufacturer may program it to respond to voice commands and play music as well. The dishwasher may learn the general times when you have dinner and improve its washing quality based on your preferences.
But would it make sense or even be economical to teach this dishwasher how to recognize facial expressions and the mood of the operator?
A generalized A.I. would be able to perform many tasks, but it would not be very good at any single one of them. It is more efficient, economically, to build machines that specialize in particular tasks.
People would love seeing generalized A.I. machines in a store and would find them entertaining, but no one would actually buy one for chores at home. This is why A.I. that is advanced at specific tasks is far more useful and gets far more attention from commercial developers.
A.I. and Machine Learning
The real task that lies ahead for developers is to build A.I. that is capable of learning, thinking, and feeling without input from a human. This kind of independent A.I. will be capable of making decisions on its own, and can be considered truly smart.
Before this A.I. is completed and ready for practical applications, it will have performed millions of simulations on its neural network, which would help it improve its actions in the real world.
The A.I. does this by making repeated computations and recording the result at each stage of its learning process. Once it finds the first correct solution, it moves on to the second stage, again making repeated calculations until it finds the best solution there. Once that solution is found, the A.I. begins testing solutions for the third stage, and so on.
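This stage-by-stage process can be sketched in a few lines of code. The sketch below is a hypothetical illustration of the idea, not DeepMind's actual algorithm: the stages, candidate actions, and scoring function are all invented for the example.

```python
import random

def solve_in_stages(stages, score, candidates_per_stage=100):
    """Search one stage at a time: try many candidate actions for the
    current stage, keep the best-scoring one, then lock it in and move
    on to the next stage."""
    solution = []
    for stage in stages:
        best_action, best_score = None, float("-inf")
        for _ in range(candidates_per_stage):
            action = random.choice(stage)       # try a candidate action
            s = score(solution + [action])      # evaluate the partial solution
            if s > best_score:
                best_action, best_score = action, s
        solution.append(best_action)            # record this stage's best result
    return solution

# Toy example: each stage offers two moves; the score rewards "right".
stages = [["left", "right"]] * 3
score = lambda sol: sol.count("right")
print(solve_in_stages(stages, score))  # -> ['right', 'right', 'right']
```

The key property is that each stage's result is recorded before the next stage begins, so later stages build on solutions already found.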
Using this approach on some old video games, developers were able to get amazing results. They tested the model on old classics such as Hungry Snake, where the A.I. learned to play the game by making the correct left or right movements, growing to a maximum length that would be very difficult for even the most expert human players to achieve.
The model was also tested on PC Snooker, where the A.I. was able to work out brilliant shots that gave it a near-perfect score.
DeepMind is one of the leading A.I. development firms in the world. Founded in 2010, it was acquired by Google in 2014, and it has been at the forefront of technological breakthroughs in the field. Google's access to very large databases has also allowed the firm's researchers to test a number of Artificial Intelligence concepts.
Testing Game Choices with A.I.
Video games are incredibly useful in testing and improving A.I. learning. Most video games are developed for humans, and have a learning curve. While humans are able to quickly learn and become good at games due to our intuition, A.I. usually starts from scratch and performs poorly in the beginning.
The research team at DeepMind used its DMLab-30 training set, which is built on id Software's Quake III game. Using the Arcade Learning Environment, which runs Atari games, the team developed a new training system for A.I. called the Importance Weighted Actor-Learner Architecture, or IMPALA for short.
IMPALA allows an A.I. to play a large number of video games very quickly. It sends training information from a series of actors to a series of learners. Instead of connecting the A.I. directly to the game engine, the developers gave the A.I. only the results of its controller inputs, just as a human player would experience the game.
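The actor-learner split can be illustrated with a toy, single-process sketch. This is not the real IMPALA implementation (which is distributed and uses deep networks with importance-weighted corrections); the `Policy` class, environment, and reward values below are all invented for the illustration.

```python
import random
from collections import deque

class Policy:
    """Toy policy: one preference weight per action, updated by the learner."""
    def __init__(self, actions):
        self.weights = {a: 0.0 for a in actions}

    def act(self, _observation):
        # Pick the currently best-weighted action, breaking ties randomly.
        best = max(self.weights.values())
        return random.choice([a for a, w in self.weights.items() if w == best])

def actor(policy, env_step, episode_len=5):
    """An actor plays an episode and records its trajectory of
    (observation, action, reward) tuples -- only controller inputs and
    their observed results, like a human player."""
    trajectory = []
    for t in range(episode_len):
        action = policy.act(t)
        reward = env_step(action)          # result of the controller input
        trajectory.append((t, action, reward))
    return trajectory

def learner(policy, trajectories, lr=0.1):
    """The learner consumes the actors' trajectories and updates the
    shared policy toward actions that earned reward."""
    for trajectory in trajectories:
        for _obs, action, reward in trajectory:
            policy.weights[action] += lr * reward

# Toy environment: pressing "right" yields reward 1, "left" yields 0.
env_step = lambda a: 1 if a == "right" else 0
policy = Policy(["left", "right"])
queue = deque(actor(policy, env_step) for _ in range(8))  # several actors
learner(policy, queue)
print(policy.act(0))  # after learning, the policy prefers "right"
```

The design point is the decoupling: many cheap actors generate experience in parallel, while the learner does the expensive policy updates.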
The developers found the results to be quite good based on a single A.I. player’s interaction with the game world. How well the A.I. performs against human players is still under testing.
Most A.I. in games is disadvantaged against human players. Developers try to offset this by allowing A.I. certain advantages against humans. In arcade games this is done by giving the A.I. special powers that the human player does not possess. In strategy games, this is done by giving the A.I. extra resources.
A.I. that performs well against human players without any hidden benefits would truly be considered an amazing advancement.
Self-Teaching Robot Hands
Developments in neural networks have allowed robots to run millions of simulations and perform complex calculations faster than humans can. Yet when it comes to manipulating physical objects, A.I. robots still struggle, because they face an effectively infinite number of possibilities to choose from.
To counter this problem, DeepMind created an innovative paradigm for A.I.-powered robots. Scheduled Auxiliary Control (SAC-X) gives a robot a simple task, such as "clean up this tray," and rewards it for completing the task.
The researchers don't provide instructions on how to complete the task; that is something the robotic hand must figure out on its own.
The developers believe that progress in performing physical and precise tasks will lead to the next generation of robots and A.I.
Understanding Thought Process
Researchers at DeepMind are also looking at ways to make A.I. understand how humans reason and make sense of the things around them.
Humans have the intuitive ability to evaluate the beliefs and intentions of others around them. This is a trait shared by very few creatures in the animal kingdom. For instance, if we see someone drinking a glass of water, we can infer that the person was thirsty and water can quench their thirst.
The ability to understand these abstract concepts is called "theory of mind," and it plays a crucial role in our social interactions. Developers at DeepMind designed a simple experiment to test it.
They first allowed an A.I., ToM-Net, to observe an 11-by-11 grid, which contained four colored objects and a number of internal walls. A second A.I. was given the task of walking to a specific colored square. It also had to pass by another color on the way.
While the second A.I. was trying to complete the task, the developers would move the initial target. They then asked ToM-Net what the second A.I. would do.
ToM-Net correctly predicted the actions of the second A.I., based on the information it was given.
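A toy version of this setup makes the prediction task concrete. The sketch below is not ToM-Net (which is a learned neural observer); it is a hand-coded observer that assumes the agent is goal-directed and takes shortest paths, which is enough to predict the agent's moves even after the target is moved. The grid layout and positions are invented for the example.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search for a shortest path on a grid; '#' cells are walls."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:            # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

def predict_next_move(grid, agent_pos, believed_goal):
    """The observer models the agent as goal-directed: it predicts the
    agent's next step as the first move of a shortest path to the goal
    the observer believes the agent is pursuing."""
    path = shortest_path(grid, agent_pos, believed_goal)
    return path[1] if path and len(path) > 1 else agent_pos

# 5x5 grid with an internal wall; the agent starts in the top-left corner.
grid = ["..#..",
        "..#..",
        ".....",
        "..#..",
        "..#.."]
# The target has been moved to (2, 4); the observer updates its prediction.
print(predict_next_move(grid, (0, 0), (2, 4)))  # -> (1, 0)
```

The observer never controls the agent; it only infers what a rational agent would do given a believed goal, which is the essence of the theory-of-mind test.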
About The Author
If you would like to read more from Ronald van Loon on the possibilities of Artificial Intelligence, Big Data, and the Internet of Things (IoT), please click “Follow” and connect on LinkedIn, Twitter and YouTube.