
Amin Al-Habaibeh: In the classic 1987 film RoboCop, Alex Murphy, a Detroit police officer killed in the line of duty, is reborn as a cyborg. He has a robotic body and a full brain-computer interface that allows him to control his movements with his mind.
He can access online information, such as suspects' records, he uses artificial intelligence (AI) to help detect threats, and his human memories are integrated with those of the machine.
It is remarkable to think that the key mechanical technologies of film robots have now almost been achieved, from Boston Dynamics' running, jumping Atlas to Kawasaki's new four-legged Corleo. Similarly, we are seeing robotic exoskeletons that allow paralysed patients to do things like walking and climbing stairs by responding to their gestures.
Developers have lagged behind, however, when it comes to building an interface through which the brain's electrical impulses can communicate with an external device. That, too, is now changing.
In the latest breakthrough, a research team based at the University of California unveiled a brain implant that enabled a woman with paralysis to stream her thoughts, via artificial intelligence, into a synthetic voice with only a three-second delay.
The concept of interfacing neurons with machines goes back much further than RoboCop. In the 18th century, an Italian physician named Luigi Galvani discovered that passing electricity through certain nerves in a frog's leg made it twitch. This paved the way for the entire field of electrophysiology, which studies how electrical signals affect organisms.
The first modern research into brain-computer interfaces began in the late 1960s, when the American neuroscientist Eberhard Fetz connected monkeys' brains to electrodes and showed that they could move a needle. But while this demonstrated some electrifying potential, the human brain proved too complex for the field to advance quickly.
The brain is constantly thinking, learning, remembering, recognising patterns and decoding sensory signals, not to mention coordinating and moving our bodies. It runs on some 86 billion neurons with trillions of connections that continually process, adapt and evolve, in what is known as neuroplasticity. In other words, there is a lot to understand.
Much of the recent progress has been based on advances in our ability to map the brain, identifying its various regions and what they do.
A number of technologies can produce detailed images of the brain (including functional magnetic resonance imaging (fMRI) and positron emission tomography (PET)), while others monitor certain kinds of activity (including electroencephalography (EEG) and the more invasive electrocorticography (ECoG)).
These techniques have helped researchers build remarkable devices, including wheelchairs and prosthetics that can be controlled by the mind.
But while such devices are usually controlled via an external interface such as an EEG headset, chip implants are a much newer frontier. They have been enabled by advances in AI and microelectrodes, as well as by the deep-learning neural networks that power today's AI technology.
This allows for faster data analysis and pattern recognition, which, together with the more precise brain signals that can be obtained from implants, has made it possible to create applications that operate virtually in real time.
The new University of California implant, for example, relies on ECoG, a technique developed in the early 2000s that captures patterns directly from a thin sheet of electrodes placed on the cortical surface of a person's brain.
In this case, the intricate patterns picked up by a high-density implant are processed using deep learning to produce a data matrix from which any words the user is thinking can be decoded. This improves on previous models, which could only create synthetic speech after the user had finished a sentence.
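To give a feel for what "virtually real time" means here, the sketch below shows the general idea of streaming decoding: slicing a continuous multichannel signal into short overlapping windows and decoding each window as it arrives, rather than waiting for a whole sentence. This is an illustration only, not the California team's actual pipeline; the array shapes, window length, hop size and the stand-in decoder are all assumptions.

```python
import numpy as np

def stream_windows(signal, win_len=200, hop=80):
    """Yield overlapping (channels, win_len) frames from a (channels, time) array.

    Smaller hop sizes mean more frequent decoder output, i.e. lower latency.
    """
    n_channels, n_samples = signal.shape
    for start in range(0, n_samples - win_len + 1, hop):
        yield signal[:, start:start + win_len]

def decode_frame(frame):
    """Stand-in for a trained neural decoder: returns a per-channel feature."""
    return frame.mean(axis=1)  # placeholder feature, not a real speech decoder

# Synthetic ECoG-like data: 16 channels, 1000 time samples.
rng = np.random.default_rng(0)
ecog = rng.standard_normal((16, 1000))

# Decode frame by frame as the signal "streams" in.
features = [decode_frame(f) for f in stream_windows(ecog)]
print(len(features))  # one decoded feature vector per window
```

The key design point is that each window is processed independently and immediately, so output lags the input only by the window length, not by the length of the utterance.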


It is worth emphasising, however, that deep-learning neural networks are also enabling more sophisticated devices that rely on other forms of brain monitoring.
Our research team at Nottingham Trent University has developed an inexpensive brainwave reader using off-the-shelf parts, allowing patients with conditions such as completely locked-in syndrome (CLIS) or motor neurone disease (MND) to answer "yes" or "no" to questions. There is also the potential to control a computer mouse using the same technology.
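A toy version of such a yes/no decision can be sketched from first principles: many EEG paradigms compare power in a frequency band between two mental states. The example below thresholds alpha-band (8-12 Hz) power in a single channel, a standard textbook feature; the sampling rate, band edges, threshold and synthetic signals are all assumptions for illustration, not the Nottingham Trent device.

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed for this example)

def alpha_power(eeg, fs=FS):
    """Mean spectral power in the 8-12 Hz alpha band of a 1-D signal."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[band].mean()

def yes_no(eeg, threshold):
    """Toy decision rule: high alpha power means 'yes', low means 'no'."""
    return "yes" if alpha_power(eeg) > threshold else "no"

# Two seconds of synthetic signal: a strong vs a weak 10 Hz rhythm.
t = np.arange(FS * 2) / FS
strong_alpha = np.sin(2 * np.pi * 10 * t)        # pronounced alpha rhythm
weak_alpha = 0.1 * np.sin(2 * np.pi * 10 * t)    # faint alpha rhythm

# Midpoint threshold between the two example states.
threshold = (alpha_power(strong_alpha) + alpha_power(weak_alpha)) / 2
print(yes_no(strong_alpha, threshold))  # yes
print(yes_no(weak_alpha, threshold))    # no
```

In a real system the threshold would be calibrated per user, and the two states would come from deliberate mental strategies (for example, relaxing with eyes closed versus active concentration) rather than synthetic sine waves.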
The future
The progress in artificial intelligence, chip manufacture and biomedical technology that has enabled these developments is set to continue in the coming years, which should mean that brain-computer interfaces keep improving.
Over the next ten years, we can expect more technologies that give disabled people greater independence, helping them to move and communicate more easily.
This will rely on improved versions of technologies that are already emerging, including exoskeletons, mind-controlled prosthetics, and implants that progress from controlling cursors to fully controlling computers or other machines.


In the medium to long term, I expect to see many RoboCop-style options, including implanted memories and built-in skills delivered via internet communication. We might also expect rapid communication between people through a kind of "brain Bluetooth".
Similarly, it should become possible to create a Six Million Dollar Man, with enhanced vision, hearing and strength, by implanting the appropriate sensors and connecting the right components (actuators) to convert neural signals into action. No doubt further applications will emerge as our understanding of the brain's functionality grows, including uses that have not yet been imagined.
Clearly, long-deferred ethical questions will soon become pressing. Could our brains be hacked, and memories be implanted or deleted? Could our emotions be controlled? Will the day come when we need to update our brain's software and restart it?
With every step forward, questions like these become more urgent. The main technological obstacles have essentially been cleared from the road. It is time to start thinking about how far we want to integrate these technologies into society, and the sooner, the better.
Amin Al-Habaibeh, Professor of Intelligent Engineering Systems, Nottingham Trent University
This article is republished from The Conversation under a Creative Commons licence. Read the original article.
Image Source: Pixabay.com