Of course, Artificial Intelligence is becoming more extraordinary and innovative by the day. But some recent developments in AI are downright creepy and terrifying. Here are 10 of the eeriest ways humans have started to experiment with robots:
Killer robots
This is a scary development in AI that many people have been clamouring against for a long time. In fact, campaigners have moved to ban the use of so-called “killer robots” because of the adverse effects they could have on society.
Sex robots
Many Hollywood movies have explored the unsettling possibility of humans falling in love and having sex with robots. This may soon become a reality: some scientists believe that “human-on-robot” sex will become the norm by 2050. Although sex robots do not have the same emotional or romantic capacity as humans, some people are already starting to really enjoy their company.
They are starting to look very human-like
Thanks to the work of scientists and engineers, many robots and AI systems are starting to look more like humans. While this seems like progress, having human-like robots walking among us could be both creepy and deceptive.
Robots predicting the future
Nautilus is a supercomputer capable of making predictions based on news articles. It famously helped narrow down the whereabouts of Osama bin Laden to within 200 km. Now, scientists are exploring whether it can predict genuinely new future events, not just ones that echo things that have already happened.
They are learning to deceive and cheat
Lying is an attribute common to all humans and even to some animals, such as birds and squirrels. However, recent developments in tech show that lying is no longer limited to humans and animals. Researchers at Georgia Tech have developed robots capable of cheating and lying, and they believe these robots could be used by the military in the future. But much could go wrong with this kind of intelligence at man’s disposal. What if findings about these robots leak out of the military and fall into the wrong hands? That could spell trouble for humanity.
They are starting to feel emotions
Emotions were once the main factor distinguishing humans from robots. These days, that line is becoming blurrier. Some AI systems have begun to show the ability to appear embarrassed, angry, or excited in response to various questions or activities.
They’ll soon invade our brains
An engineer at Google has predicted that nanobots implanted within the brain could make us “god-like” by 2030. These nanobots would supposedly make us so quick at learning new things that you could pick up a new language, like French, in a matter of minutes! However, scientists are weighing the risks of having such powerful AI implanted in the brain, since there is still a lot we don’t understand about it. And because these nanobots would connect us to the internet, a powerful AI could easily invade our brains and do the unexpected should it decide to rebel against humans.
AI can now communicate with each other
In 2017, a video of two Google Home assistants talking and arguing with each other went viral. They argued about which one was a human and which a computer, and one even claimed to be God. The question is: if AIs can now talk to and understand each other, what guarantee is there that they will not start conspiring against humans?
They are starting to take over our jobs
With the rapid development of AI, many jobs are at risk of being automated. While automation will make companies more efficient and productive, it could also lead to a high rate of unemployment.
They are starting to learn the difference between right and wrong
Knowing right from wrong is another chief difference between humans and robots. In order to prevent an AI takeover, scientists have started teaching robots to tell the difference. While this seems like a good thing, it may also make the machines more human-like, to the point where it becomes impossible to tell human from robot.
Which of these developments in AI do you think is most scary? Tell us in the comment section below.