What The Heck Is AI?

Everything You Should Know about Artificial Intelligence

Dr. Valerie MORIGNAT
Sep 10, 2019

Artificial Intelligence has ushered in a Renaissance era in every industry. In 2019, AI technology detects cancer faster and more accurately than doctors, predicts legal outcomes, writes content, trades stocks, tracks poachers, detects fake news, discovers exoplanets, translates ancient languages, and even gives life to the Mona Lisa. Every sector will be radically transformed by AI technologies, impacting the lives of billions of people in the very near future. The AI transformation will bring an unparalleled upside for early adopters, provided they separate the hype from real AI capabilities and manage the technology's legal and ethical risks.

Although AI is the hottest topic of conversation, its history, technologies, and applications are still frequently misunderstood. Unless you're an avid reader of expert publications or an AI expert yourself, you may benefit from a few insights that will help you discern the real from the hype.

AI is Both a Concept and a Technology

Artificial Intelligence is both a concept, that of emulating human-level intelligence in machines, and a constellation of technologies and methods that converge to realize it. Machine Learning, Deep Learning, Artificial Neural Networks, Computer Vision, and Natural Language Processing all fall under the "AI" umbrella term. These AI subsets enable computers to learn and make predictions about the real world in order to solve specific problems.

“What we want is a machine that can learn from experience” stated Alan Turing, one of the fathers of AI, in 1947. Well, that’s exactly what Machine Learning does. “Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world” (Source: Nvidia).
In the 70-plus years it has been around, Machine Learning has grown into a collection of methods, models, and algorithms, among which are Supervised Learning, Unsupervised Learning, Semi-Supervised Learning, Reinforcement Learning, and Transfer Learning. From 100,000 feet, the choice of one Machine Learning approach over another depends on many variables, such as the data, the nature of the problem to be solved, the prediction you're trying to make, the contextual constraints, and the business goal.
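To make the idea concrete, here is a minimal sketch (not from the original article) of supervised learning in Python with scikit-learn: the algorithm parses labeled examples, learns from them, and then makes a prediction about data it has never seen. The dataset and model are illustrative choices, nothing more.

```python
# Minimal supervised learning sketch: learn from labeled data, predict on unseen data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # small labeled dataset of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=2000)  # a simple supervised learner
model.fit(X_train, y_train)                # "learn from experience" (the training data)
print("Accuracy on unseen data:", model.score(X_test, y_test))
```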

Deep Learning is inspired by the human brain’s processing patterns.

Deep Neural Nets: A Bio-Inspired Approach

In the past decade, a subset of Machine Learning called Deep Learning gained prominence. Deep Learning is inspired by the human brain's processing patterns. As your brain receives inputs, it labels and categorizes them by comparing them with information it already knows. Deep Learning applies a similar process, automatically creating hierarchies of interrelated concepts. To do so, it relies on Artificial Neural Network (ANN) architectures to uncover features that can be used for classification. That's why Deep Learning models are often referred to as 'Deep Neural Nets'. Because Artificial Neural Networks are inspired by the biological brain's networks of neurons, they exhibit self-adaptive qualities: they adjust their own parameters during training and can model very complex, non-linear relationships. This gives them an edge in inferring underlying relationships in the data and even making predictions from unseen data, which adds genuine generalization capabilities. Such predictive power is particularly useful in domains such as image-based medical diagnosis and machinery maintenance.
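As an illustration only, here is a minimal Artificial Neural Network sketched in PyTorch; the layer sizes are arbitrary assumptions, not a reference architecture. The stacked layers and non-linear activations are what allow the network to build a hierarchy of features and model non-linear relationships.

```python
# Minimal ANN sketch (illustrative only): stacked layers learn a hierarchy of features.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 32),   # input layer: 64 raw features (e.g. an 8x8 image)
    nn.ReLU(),           # non-linearity lets the net capture non-linear relationships
    nn.Linear(32, 16),   # hidden layer: higher-level, more abstract features
    nn.ReLU(),
    nn.Linear(16, 10),   # output layer: scores for 10 classes
)

x = torch.randn(1, 64)   # one example with 64 features
print(model(x))          # untrained predictions; training adjusts the weights
```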

In many areas, Deep Learning models using neural nets already surpass human-level performance. In cancer diagnosis, for instance, they outperform medical professionals in early detection and accuracy. MIT's Computer Science and Artificial Intelligence Lab recently delivered the first Deep Learning-based model able to predict breast cancer up to five years in advance, with equal accuracy for black and white women. Deep Learning models also outperform humans beyond this planet. As astrophysicist Fatoumata Kebe and I were chatting about AI a few weeks ago, she shared some insights on the contributions of Deep Neural Nets to exoplanet discovery and space debris tracking. Several exoplanets that astronomers had missed for years were successfully spotted by the technology (learn more by visiting the Frontier Development Lab).

This overview of Deep Neural Nets is far from exhaustive, but it is worth digging a little deeper into two of their champions: CNNs and GANs.

Convolutional Neural Networks (CNNs, also called ConvNets) are extremely effective at image understanding, recognition, and classification. They "have the ability to map complex, high-dimensional image data into a much lower dimensional space of distinct categories, consisting of hundreds, or thousands, of object classes" (Source: John Murphy, Microway). That's what makes them equally good at unlocking an iPhone through facial recognition and at spotting cell anomalies in MRIs. The architecture of CNNs is inspired by the organization of the mammalian visual cortex. This enables them to learn high-level features from data incrementally, eliminating the need for the hand-engineered features that classical Machine Learning algorithms still require. This advantage is key to the superior performance of Deep Neural Nets in Computer Vision: the ability to automatically understand still images and video.

CNNs power most Computer Vision solutions.

In the 1990s, CNNs started delivering outstanding performance in handwritten digit classification and enabled the automated reading of handwritten documents such as bank checks and mail. Since the 2012 ImageNet competition propelled them to the front of the stage, CNNs have powered most Computer Vision solutions. In 2019, they excel at Facial Recognition, with accuracy rates above 95% for video and 97% for still images. If you are interested in Computer Vision and CNNs, I recommend checking out Clarifai, the market leader in Computer Vision; it uses CNNs for visual recognition, and you can test out its product directly on its site.
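For readers who want to see the idea in code, below is a minimal, illustrative CNN in PyTorch; the layer sizes are arbitrary, MNIST-sized assumptions and are not taken from any product mentioned here. Convolution layers learn visual features, pooling shrinks the feature maps, and a final linear layer maps them to a handful of categories.

```python
# Minimal CNN sketch (illustrative only): convolutions learn visual features,
# pooling reduces resolution, a linear layer maps features to categories.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),    # 8 low-level feature maps (edges, strokes)
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 28x28 -> 14x14
    nn.Conv2d(8, 16, kernel_size=3, padding=1),   # combine into higher-level features
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                    # map to 10 classes, e.g. handwritten digits
)

image = torch.randn(1, 1, 28, 28)   # one grayscale, MNIST-sized image
print(cnn(image).shape)             # torch.Size([1, 10]) -> one score per class
```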

CNNs are rockstars, but there are new cool kids on the block: GANs, or Generative Adversarial Networks. According to Yann LeCun, Chief AI Scientist at Facebook, GANs are "the coolest idea in machine learning in the last twenty years" (Source: Wired). Introduced in 2014, GANs are a new way to train generative models by pitting two neural networks against each other. GANs are the preferred technology for producing synthetic data, which is why they bring forth revolutionary applications in image synthesis. For instance, GANs can generate realistic images entirely from text, and vice versa. If you have ever wanted to see what your Labrador would look like with zebra fur, or dreamt of having your wedding pictures painted by Van Gogh, GANs can help you do just that: they can automatically learn the style of one image and transfer it onto another.
GANs have also greatly improved the resolution of astronomical images and the visual rendering of video games. Those magnificent landscapes that make you forget about dinner while playing your favorite game? That may well be their doing. More recently, GANs received massive media coverage for being the technology behind the ultra-realistic fake images and videos known as DeepFakes. Just this past week, a DeepFake video of Bill Hader morphing into Tom Cruise and Seth Rogen on SNL went viral. The puppet master behind the curtain was a GAN.
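To show the adversarial training idea itself, here is a heavily simplified, illustrative GAN on toy two-dimensional data (nothing like the image-scale models behind DeepFakes, and all sizes are assumptions): a generator turns random noise into fake samples, a discriminator learns to tell them from real ones, and each network trains against the other.

```python
# Minimal GAN sketch (illustrative only): two networks compete.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))                    # noise -> fake sample
discriminator = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())   # sample -> P(real)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, 2) * 0.5 + 2.0        # toy "real" data: a 2-D Gaussian blob
    fake = generator(torch.randn(32, 16))        # generator maps noise to candidate samples

    # Train the discriminator: push real samples toward 1, fakes toward 0
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to fool the discriminator into predicting 1 for fakes
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```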

By now you should have a better understanding of AI and its building blocks. That will allow us to tackle deeper questions about human-level intelligence in machines.

Artificial General Intelligence: The Holy Grail of AI

In 2019, AI is a bio-inspired prediction machine that encompasses a diversity of methods, models, and algorithms. By the end of the next decade, AI is predicted to be the nervous system of every organization, contributing $15.7 trillion to the global economy (PwC). The majority of processes, products, and services will be powered by AI, and the most successful organizations will be AI-First, driving unprecedented value creation. As AI becomes more sophisticated and pervasive, it will become everybody's indispensable partner. As it embeds deeper into the fabric of our lives, it will reshape our mental models and our cultures in unprecedented ways.

AI will become everybody’s indispensable partner.

However, AI is still decades away from human-level intelligence, a stage called Artificial General Intelligence (AGI). AGI is also sometimes referred to as "General Purpose AI" or "Strong AI".

In the excellent book Architects of Intelligence, Stanford Professor Fei-Fei Li defines AGI as “the kind of intelligence that is contextualized, situationally aware, nuanced, multifaceted and multidimensional” and equipped with human-level learning capability. Is AGI a foreseeable event? What would it take to emulate common sense reasoning in machines? Is it even possible? Will AGI be an individual system or a society of intelligent systems? While many of the conceptual building blocks towards AGI already exist, the prospect of realizing human-level intelligence in machines comes with many challenges.

Intelligence is Multidimensional

The very nature of intelligence has been debated for more than 2,000 years and remains a research domain with competing theories. Despite massive advances in our understanding of the human brain over the last decade, intelligence is still difficult to scope and therefore hard to emulate in machines. Competing cultural views of what intelligence is, how to evaluate it, and how to replicate it add to the complexity of the AGI challenge.
The human brain has some 80 billion neurons and tens of trillions of synapses; the most advanced AI technologies are very far from that level of complexity. Biological intelligence is also more multidimensional than mid-20th-century AI researchers initially assumed. As developmental psychologist Howard Gardner underscores in his best-seller Intelligence Reframed, "intelligences operate in rich environments, typically in conjunction with several other intelligences". This co-evolutionary evidence echoes cognitive scientist Francisco Varela's findings that embodied cognition plays a pivotal role in the development of intelligence. The human mind was initially understood as an all-purpose problem solver relying on sets of rules and principles. Today, we know that it functions more like a society of evolutionarily adapted, highly specialized, independent mechanisms. In this light, the project of emulating human-level intelligence in relatively secluded artificial environments seems counter-intuitive.

Closer collaboration between cutting-edge robotics and AI research could, however, open a pathway towards Embodied AI. This "beyond-the-brain" approach holds that intelligence emerges from having a body that learns through direct interaction with the world and with the other intelligences in it. That's why companies such as Vicarious focus on developing algorithms that can learn from sensorimotor experience, in order to give robots generalization capabilities.

AGI is a Framework and a Frame of Mind Issue

While every AI researcher I've spoken with is 100% certain that AGI will at some point be brought to life, none of them can predict when or how the breakthrough will occur. Some risk a jaw-dropping prediction of 2029, while for many others, humanity won't find its holy grail before 2200. One thing is easy to predict: lots of money will be poured into the quest for AGI. The AGI market is expected to reach $50.8 billion by 2023. AI giants (Google, Facebook, Microsoft, Amazon, Apple, Baidu, and Tencent) are investing massively in the achievement of human-level artificial intelligence. OpenAI, a San Francisco-based AGI-centric organization, recently raised $1 billion from Microsoft to create brain-like machines. Public-private partnerships towards the AGI moonshot are also on the rise. All understand that this is a long-term roadmap that will require global, multidisciplinary collaboration.

The realization of AGI is both a framework and a frame-of-mind issue. As AI becomes an interdisciplinary field and biological intelligence becomes better understood, I predict that a paradigm shift will occur. The concept of AI (and therefore of AGI) could move away from its initial anthropocentric project of replicating the human brain towards a hybrid intelligence inclusive of non-human animals, natural ecosystems, and hybrid natural-artificial systems.

One thing is sure: once AGI is realized, it will break through its own limitations and outperform everything human beings have ever done.

AI may be the new black, but it's not a new technology. To understand the origins of Artificial Intelligence, read Chapter Two of this article series: The Ancient Quest For AI.

Originally published at http://intelligentstory.com.

