Frank Rosenblatt (1928 - 1971)

Biologists have long wondered how learning arises from the firing of neurons in the brain. According to Hebbian theory, learning takes place when the bond between neurons is strengthened: the more often two neurons fire together, the stronger the connection between them becomes. Taking this theory as his basis, psychologist Frank Rosenblatt developed the Perceptron, an artificial neural network, in 1958. The Perceptron, a gigantic machine trailing spaghetti-like cables, could sort images into simple categories such as triangles and squares.
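The learning rule behind Rosenblatt's machine is simple enough to sketch in a few lines. Below is an illustrative modern rendering in plain NumPy, not the original Mark I hardware: weights are nudged only when the prediction is wrong, echoing the idea of strengthening the bond between co-firing neurons. The toy data points stand in for the "triangles" and "squares" of the original demonstrations.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(X @ w + b) matches y (+1/-1)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else -1
            if pred != yi:              # update only on mistakes
                w += lr * yi * xi       # strengthen toward the correct class
                b += lr * yi
    return w, b

# Toy linearly separable data: two classes of 2-D points
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
preds = np.sign(X @ w + b)   # → matches y on this separable data
```

For linearly separable data like this, the perceptron convergence theorem guarantees the loop eventually stops making mistakes; for non-separable data (Minsky and Papert's famous XOR critique) it never settles.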

David Marr (1945 – 1980)

Marr, who died at 35, could have been the Einstein of neuroscience. By the age of 26 he had published three papers on computational models of the cerebellum, the cortex and the hippocampus, all of them revolutionary: the cerebellum learns to correct motor errors, the cortex takes on learning in general, and the hippocampus stores information. After these studies, Marr turned his attention to vision. His central claim: if we want to make sense of brain data, we must analyze it at three levels, the computational, the algorithmic and the implementational.

Russell A. Kirsch (1929)

The first digital photograph, the inspiration for satellite imagery, computed tomography, barcodes and digital photography, was scanned 62 years ago under the leadership of Russell Kirsch and his team. Thus a grainy 5 x 5 cm photo of a baby triggered the computer's ability to see.

Ernst Dieter Dickmanns (1936)

They call him the father of the autonomous car. The methodology he developed in the 1980s is still in use. In this system, called the 4-D approach, the images captured by the cameras were digitized by the computer as abstract lines bounding adjacent gray areas. Instead of comparing each image with the previous one, Dickmanns modeled the moving objects in 3-D space and added time as the fourth dimension.

Geoffrey Hinton (1947)

Geoffrey Hinton, a cognitive psychologist and computer scientist, drew attention with his work on artificial neural networks. Although he conducted deep learning research at leading American universities, his main work was done at the University of Toronto. In 2012, Hinton, together with Ilya Sutskever and Alex Krizhevsky, published AlexNet, which cut the error rate of the best competing algorithm in the ImageNet challenge by roughly 40 percent. Hinton divides his time between supervising doctoral students at the university and his work at Google Brain.

Yann LeCun (1960)

Yann LeCun, who joined AT&T Bell Labs at the age of 30, based his neural network model on the visual cortex of animals. Convolutional neural networks and computer vision programs such as DjVu are cornerstones of today's artificial intelligence technology. He is currently Chief AI Scientist at Facebook and continues to teach at NYU.
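The core operation of LeCun's networks, the convolution, can be sketched briefly. This is an illustrative NumPy version (real CNNs stack many learned filters with nonlinearities and pooling): a small filter slides over the image and responds strongly wherever a local pattern, here a vertical edge, appears, much like the localized receptive fields found in the animal visual cortex.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Filter response at position (i, j): elementwise product, summed
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Image with a vertical edge: dark left half, bright right half
image = np.zeros((4, 4))
image[:, 2:] = 1.0
edge_filter = np.array([[-1.0, 1.0]])   # responds to left-to-right brightening
response = conv2d(image, edge_filter)   # peaks along the edge column
```

Because the same small filter is reused at every position, the network needs far fewer parameters than a fully connected one and detects the pattern wherever it occurs, the key design choice behind CNNs.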

Fei-Fei Li (1976)

Google Cloud Chief Scientist, professor at Stanford, and director of the university's AI laboratory. Inspired by the WordNet project begun by Princeton psychologist George Miller in the 1980s, she created ImageNet. This database, built so that computers could learn to recognize images, has transformed not only artificial intelligence research but the digital world. Since 2010, ImageNet has been open to all developers and is challenged by new algorithms every year.

Andrew Ng (1976)

Father of the Google Brain project. His machine learning lecture at Stanford also inspired the creation of the online training platform Coursera. To date, the number of people who have taken this course on Coursera has reached 100,000. His large-scale deep learning studies at Google Brain gave the first answer to the question 'can we teach the machine?': although no category labeled 'cat' had ever been defined, after being shown 10 million YouTube images, the computer learned to recognize cats on its own.

Ian Goodfellow (1985)

If, in the future, there is an increase in suspicious images that cannot be distinguished from the real thing, we can presume Ian Goodfellow is responsible. GANs (Generative Adversarial Networks), which Goodfellow proposed in 2014, consist of two opposing neural networks and are grounded in the Nash equilibrium of game theory. The network called the 'generator' creates realistic-looking pictures from random numbers; the opposing network, the 'discriminator', acts like a detective trying to tell the fake from the real.
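The adversarial objective can be sketched in miniature. The following is an illustrative NumPy toy with made-up dimensions, not a trainable GAN (the real framework updates both networks by gradient descent over many iterations): a one-layer generator maps noise to fake samples, a logistic discriminator scores samples as real (near 1) or fake (near 0), and the two losses pull in opposite directions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, W_g):
    """Map noise vectors to fake samples with one linear layer (toy)."""
    return z @ W_g

def discriminator(x, w_d):
    """Logistic unit: estimated probability that a sample is real."""
    return 1.0 / (1.0 + np.exp(-(x @ w_d)))

W_g = rng.normal(size=(2, 3))              # noise dim 2 -> sample dim 3
w_d = rng.normal(size=3)

real = rng.normal(loc=2.0, size=(4, 3))    # stand-in "real" data
fake = generator(rng.normal(size=(4, 2)), W_g)

d_real = discriminator(real, w_d)
d_fake = discriminator(fake, w_d)

# Discriminator wants d_real -> 1 and d_fake -> 0; the generator wants
# d_fake -> 1. These opposing pulls are the minimax game whose fixed
# point is a Nash equilibrium.
d_loss = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))
g_loss = -np.mean(np.log(d_fake))
```

At equilibrium the discriminator can do no better than guessing (outputting 1/2 everywhere), which is exactly when the generator's fakes have become indistinguishable from the real data.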