A pioneering scientist explains ‘deep learning’

Buzzwords like "deep learning" and "neural networks" are everywhere, but much of the popular understanding is misguided, says Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies.

Sejnowski, a pioneer in the study of learning algorithms, is the author of The Deep Learning Revolution (out next week from MIT Press). He argues that the hype about killer AI and robots that make us obsolete ignores the exciting possibilities in computer science and neuroscience, and what can happen when artificial intelligence meets human intelligence.

The Verge spoke to Sejnowski about where "deep learning" suddenly came from, what it can and can't do, and the hype around it.

This interview has been lightly edited for clarity.

First, I want to ask about definitions. People use terms like "artificial intelligence," "neural networks," "deep learning," and "machine learning" almost interchangeably. How are they different?



Photo: Terrence Sejnowski

AI goes back to 1956, when engineers decided to write computer programs that tried to imitate intelligence. Within AI, a new field grew up called machine learning. Instead of writing a step-by-step program to do something, which is the traditional approach in AI, you collect lots of data about something you're trying to understand. For example, if you're trying to recognize objects, you collect lots of images of them. Then, with machine learning, an automated process dissects various features and figures out that one thing is a car and another is a stapler.

Machine learning is a very large field, and it goes way back. Originally people called it "pattern recognition," but the algorithms became much broader and much more sophisticated mathematically. Within machine learning are brain-inspired neural networks, and within those is deep learning. Deep learning algorithms have a particular architecture with many layers that data flows through. Basically, deep learning is one part of machine learning, and machine learning is one part of AI.
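To make the layered picture concrete, here is a minimal sketch, not from the interview, of data flowing through a stack of layers in NumPy. The weights are random, so it computes nothing useful, but it shows the "many layers" architecture Sejnowski is describing:

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied between layers.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Each layer is a (weights, bias) pair; the output of one
    # layer flows into the next -- the "deep" in deep learning.
    for W, b in layers:
        x = relu(x @ W + b)
    return x

rng = np.random.default_rng(0)
# Three layers mapping a 64-dim input down to 10 output scores.
sizes = [64, 32, 16, 10]
layers = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(1, 64))     # one input example
print(forward(x, layers).shape)  # (1, 10)
```

Training would adjust those weights from data rather than leaving them random; the architecture is the same either way.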

What can deep learning do that other programs can't?

Writing programs is very labor-intensive. In the old days, computers were so slow and memory was so expensive that people resorted to logic, which is how computers work: it's their basic machine language for manipulating bits of information. Computers were too slow and computation was too expensive.

But now computing is getting cheaper and labor is getting more expensive. And computing got so cheap that it became far more efficient to have a computer learn than to have a human write a program. At that point, deep learning started actually solving problems, in areas like computer vision and translation, that no one had ever been able to write programs for.

Learning is computationally very expensive, but you only have to write one program, and by giving it different data sets you can solve different problems. You don't have to be a domain expert. So for anything where there is a lot of data, there are thousands of applications.
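A hedged illustration of that "one program, many datasets" point, assuming scikit-learn (the interview names no library or task): the same small learning program, fed two unrelated datasets, solves two different problems.

```python
from sklearn.datasets import load_digits, load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def learn(dataset):
    # One generic learner; only the data changes between problems.
    X_train, X_test, y_train, y_test = train_test_split(
        dataset.data, dataset.target, random_state=0)
    model = MLPClassifier(max_iter=1000, random_state=0).fit(X_train, y_train)
    return model.score(X_test, y_test)

# Same program, two unrelated problems: handwritten digits and iris species.
print("digits accuracy:", learn(load_digits()))
print("iris accuracy:", learn(load_iris()))
```

No digit-recognition or botany expertise is written into the code; the data carries the domain knowledge.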



Image: MIT Press, 2018

"Deep learning" seems to be everywhere now. How was it so dominant?

I can tell you exactly when that happened: at the NIPS meeting in December 2012, the biggest AI conference. There, [computer scientist] Geoff Hinton and two of his graduate students showed that you could take a very large dataset called ImageNet, with 10,000 categories and 10 million images, and reduce the classification error by 20 percent using deep learning.

Traditionally, on that dataset, the error rate dropped by less than 1 percent per year. In one year, 20 years of research was bypassed. That really opened the floodgates.

Deep learning is inspired by the brain. So how do computer science and neuroscience work together?

The inspiration for deep learning really comes from neuroscience. Look at the most successful deep learning networks: the convolutional neural networks (CNNs), developed by Yann LeCun.

If you look at the architecture of a CNN, it's not just a lot of units; they're connected in a fundamental way that mirrors the brain. Basic work on the visual cortex, the best-studied part of the brain's visual system, showed that it contains simple and complex cells. In the CNN architecture there are equivalents of simple cells and equivalents of complex cells, and that comes directly from our understanding of the visual system.
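As a rough sketch of that correspondence, assuming PyTorch (no framework is named in the interview): convolutional filters play the role of simple cells, detecting local features, while pooling layers resemble complex cells, responding to a feature regardless of its exact position.

```python
import torch
import torch.nn as nn

# Minimal CNN in the spirit of LeCun's designs (illustrative, not his exact network).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5),   # "simple cells": local feature detectors
    nn.ReLU(),
    nn.MaxPool2d(2),                  # "complex cells": position-tolerant pooling
    nn.Conv2d(8, 16, kernel_size=5),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 10),        # classify into 10 categories
)

x = torch.randn(1, 1, 28, 28)  # one grayscale image, e.g. a handwritten digit
print(model(x).shape)          # torch.Size([1, 10])
```

The alternation of convolution and pooling, feature detection followed by position tolerance, is the architectural echo of the simple-cell/complex-cell hierarchy Sejnowski describes.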

Yann didn't slavishly try to replicate the cortex. He tried many different variations, but the ones he converged on were the ones nature converged on. That's an important observation. The convergence of nature and AI has a lot to teach us, and there is much further to go.

How much does our current AI rely on our understanding of the brain?

Well, most of our current AI is based on what we knew about the brain in the '60s. Much more knowledge is being integrated into the architectures now.

AlphaGo, the program that beat the Go champion, includes not just a model of the cortex, but also a model of a part of the brain called the basal ganglia, which is important for making a sequence of decisions to achieve a goal. There's an algorithm called temporal difference learning, developed by Richard Sutton in the '80s, which, combined with deep learning, is capable of very sophisticated play that no one has ever seen before.
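The temporal difference idea itself fits in a few lines. The following is a generic TD(0) value update in Python, a sketch of Sutton's rule rather than anything from AlphaGo's actual code: the value estimate for a state is nudged toward the observed reward plus the discounted value of the next state.

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """TD(0): move V[s] toward the observed reward plus the
    discounted estimate of the next state's value."""
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return V

# Toy example: a 3-state chain where moving from state 1 to 2 yields reward 1.
V = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(100):
    V = td_update(V, 0, 0.0, 1)
    V = td_update(V, 1, 1.0, 2)
print(V)  # V[1] approaches 1.0; V[0] approaches gamma * V[1]
```

The key property is that each state learns from its successor's estimate rather than waiting for the final outcome, which is what makes long sequences of decisions toward a goal tractable.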

As we learn more about the architecture of the brain and begin to understand how it can be integrated into artificial systems, it will provide more and more capabilities beyond what we have now.

Does artificial intelligence also affect neuroscience?

They are parallel efforts. There have been tremendous advances in innovative neurotechnologies that can record from thousands of neurons simultaneously, across large parts of the brain, and that is opening up a whole new world.

A convergence is happening between AI and human intelligence. As we learn more about how the brain works, that will be reflected back in AI. But at the same time, AI researchers are building a whole theory of learning that can be applied to understanding the brain, letting us analyze thousands of neurons and see how their activity comes about. So there's a feedback loop between neuroscience and artificial intelligence, which I think is even more exciting and important.

Your book discusses many different applications of deep learning, from self-driving cars to financial trading. Which do you find most interesting?

One application I'm completely blown away by is generative adversarial networks, or GANs. With a traditional neural network, you give an input and you get an output. A GAN can develop activity, generate output, without any input.

Right. I've heard of those, like the networks that create fake videos. They actually generate something new that seems realistic.

In a sense, they generate internal activity. It turns out that's how the brain works. You can look out at the world, then close your eyes and imagine things that aren't there. You have visual imagery; ideas come to you when things are quiet. That's because your brain is generative. And this new class of networks can generate new patterns that never existed. So, for example, you can give it hundreds of images of cars, and it creates an internal structure that can generate new images of cars that never existed, and they look completely like cars.
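For readers who want the mechanics, here is a compressed sketch of a GAN training loop, assuming PyTorch and a toy one-dimensional "dataset" (both assumptions, not from the book): a generator learns to turn pure noise into samples that a discriminator cannot distinguish from real data.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()

real_data = lambda n: torch.randn(n, 1) * 0.5 + 3.0  # toy "real" distribution

for step in range(2000):
    # Train the discriminator to separate real from generated samples.
    real, fake = real_data(64), G(torch.randn(64, 4)).detach()
    d_loss = loss(D(real), torch.ones(64, 1)) + loss(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 4))
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The trained generator produces new samples from nothing but noise.
print(G(torch.randn(5, 4)).detach().squeeze())  # should cluster near the real mean of 3.0
```

The generator never sees the real data directly; it improves only through the discriminator's feedback, which is what lets it produce novel samples with no input but noise.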

On the flip side, which ideas do you think are overhyped?

No one can predict or imagine how the introduction of this new technology will affect the way things are organized in the future. Of course there's hype. We haven't solved the really hard problems yet. We don't have general intelligence, yet people are saying that robots will replace us, even though robots lag far behind AI, because the body turns out to be more complicated to replicate than the brain.

Let's look at one technological advance: the laser. It was invented about 50 years ago, and back then it occupied an entire room. Going from that room to the laser pointer I use when I lecture took 50 years of technology commercialization. It had to be advanced to the point where it could be shrunk down and sold for five dollars. The same thing will happen with hyped technologies like self-driving cars. They aren't expected to be ubiquitous next year, or maybe even in the next 10 years. It could take 50, but along the way there will be incremental advances that make them more flexible, safer, and more compatible with the way our transportation grid is organized. The mistake in the hype is that people have the timescale wrong. They expect too much to happen too soon, but in due course it will happen.
