Recent breakthroughs in AI


PaLM (Pathways Language Model)

Starting with the most recent…Google has done it again. The company's Pathways Language Model (PaLM) is now the biggest language model in the world, with 540 billion parameters. That's ten billion more parameters than Microsoft and NVIDIA's 530-billion-parameter Megatron-Turing NLG, previously the largest known language model.

PaLM was trained on two TPU v4 Pods, using 6,144 TPU chips in total. By comparison, Gopher—DeepMind's 280-billion-parameter model—was trained on 4,096 TPU v3 chips, and Megatron-Turing NLG on 2,240 NVIDIA A100 GPUs.

The efficiency of training massive models across large clusters of computers is improved by Google's Pathways framework, an internal system that orchestrates a single model's computation across multiple pods. The company achieved a training efficiency of 57.8% hardware FLOPs utilization, "the highest yet achieved for LLMs at this scale", according to Google's blog post announcing PaLM.
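As a rough illustration (not Google's published methodology), hardware FLOPs utilization can be estimated by comparing the training run's achieved arithmetic throughput against the cluster's theoretical peak. All numbers below other than the parameter and chip counts are invented for the example:

```python
def hardware_flops_utilization(params, tokens_per_second, num_chips, peak_flops_per_chip):
    """Estimate hardware FLOPs utilization for transformer training.

    Uses the common approximation that training costs roughly
    6 FLOPs per parameter per token (forward + backward pass).
    """
    achieved_flops = 6 * params * tokens_per_second
    peak_flops = num_chips * peak_flops_per_chip
    return achieved_flops / peak_flops

# 540B parameters and 6144 chips are from the article; the throughput
# and per-chip peak are assumed purely for illustration.
util = hardware_flops_utilization(
    params=540e9,
    tokens_per_second=190_000,
    num_chips=6144,
    peak_flops_per_chip=275e12,
)
print(f"{util:.1%}")
```

Plugging in real measured throughput and the accelerator's actual peak rating would give a figure comparable to the 57.8% reported for PaLM.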

And the results are mind-blowing. In one example, PaLM correctly infers that the answer to the question “Why did the chicken cross the road?” is “To get to the other side.” It then explains the answer: “The chicken crosses the road to reach the other side, which is implied in the question.”

Consider another example: Given a question like “Why do people use umbrellas when it rains?” and a plausible answer such as “To protect themselves from getting wet,” PaLM can generate an explanation: “People protect themselves from getting wet because it's raining.”

Google's work shows that some of the most significant gains in AI research come from scaling up existing techniques rather than inventing new ones. 

In recent years, the ability to scale models has been key to realizing many of the impressive gains in natural language processing (NLP) and computer vision.

Unfortunately, this scalability comes at a cost. Many state-of-the-art models require massive amounts of data, computing, and energy to train. To address these issues, Google presents an approach for scaling neural networks that prioritizes computational efficiency over raw capacity.

Pathways provides this efficiency by orchestrating a single model's computation across thousands of accelerator chips spread over multiple pods, rather than confining training to one pod.

PaLM itself was trained on a corpus of 780 billion tokens of text drawn from webpages, books, Wikipedia, news articles, source code, and social-media conversations.

Pushing the limits of model scale enables the breakthrough few-shot performance of PaLM across a variety of natural language, reasoning, and code tasks.
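Few-shot evaluation works by packing a handful of worked examples into the prompt itself, so the model picks up the task format without any weight updates. A minimal sketch of how such a prompt might be assembled (the labels and examples are illustrative, not PaLM's actual evaluation harness):

```python
def build_few_shot_prompt(examples, query, input_label="Q", output_label="A"):
    """Concatenate solved examples followed by the unsolved query."""
    lines = []
    for question, answer in examples:
        lines.append(f"{input_label}: {question}")
        lines.append(f"{output_label}: {answer}")
    # The final query is left unanswered; the model completes it.
    lines.append(f"{input_label}: {query}")
    lines.append(f"{output_label}:")
    return "\n".join(lines)

examples = [
    ("What is 3 + 4?", "7"),
    ("What is 10 + 2?", "12"),
]
prompt = build_few_shot_prompt(examples, "What is 6 + 5?")
print(prompt)
```

The model is then asked to continue the text after the final label, and its continuation is taken as the answer.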

Check out the research paper: PaLM: Scaling Language Modeling with Pathways (Google, PDF).

Beating COVID-19

The development of vaccines is a highly complex process, with strict procedures to ensure the safety and efficacy of the end product. Typically, this process takes many years. However, scientists created a candidate vaccine for COVID-19 in just three months. How were they able to do this? By using machine learning.

Machine learning is invaluable because it allows scientists to process vast amounts of data quickly. This was crucial for COVID-19 vaccine development: the virus was new, so researchers had very little information about its biological makeup. They had to rely on other viruses as a starting point and use machine learning models to determine which compounds might yield an effective treatment. These models drew on existing data from previous studies of immune responses and similar viruses to predict which candidates would work best against COVID-19 specifically, and those predictions became the starting points for creating vaccines in record time.

The world has a lot to be grateful for, and amid COVID-19, one of the tech community's most significant breakthroughs was its contribution to the mRNA vaccine. Baidu, a leading Chinese AI developer, played a notable role by making its LinearFold algorithm, which predicts RNA secondary structure in linear time, available to the entire scientific community. This helped scientists develop mRNA vaccines, which work by enabling cells to create proteins that provoke a response from the immune system.

Furthermore, the mRNA vaccine could also potentially be used in cases where traditional vaccines are ineffective or cannot be produced in large enough quantities, such as avian influenza and seasonal influenza.

In addition to helping with vaccine development, AI has also been instrumental in diagnosing COVID-19 and tracking its spread worldwide.

A team led by Hadi Fanaee-T and Gianluca Memoli at USC's Viterbi School of Engineering developed an algorithm that uses computational methods and artificial intelligence to identify suitable vaccine candidates in seconds. The team trained the algorithm on data from previous clinical trials for different vaccines and from prior research into SARS-CoV-2 viral sequencing. It 'learned' how different mutations affect the efficacy of each candidate, becoming an accurate predictor of vaccine performance.

The University of Southern California's algorithm can project how a candidate compound might react with these mutating regions, enabling scientists to test whether it will be suitable for a vaccine. It combines machine learning techniques with data collected from previous vaccine trials, and the results are promising.
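The USC team's actual model isn't detailed here, but the general idea of scoring a new candidate from similar past trial results can be sketched with a toy nearest-neighbour predictor. Every feature vector and efficacy value below is fabricated for illustration:

```python
import math

def predict_efficacy(candidate, training_data, k=2):
    """Score a candidate from the k most similar past candidates.

    candidate     -- feature vector describing the compound
    training_data -- list of (feature_vector, observed_efficacy) pairs
    """
    distances = sorted(
        (math.dist(candidate, features), efficacy)
        for features, efficacy in training_data
    )
    nearest = distances[:k]
    # Average the efficacy of the closest known candidates.
    return sum(eff for _, eff in nearest) / k

# Entirely made-up historical trial data: (features, efficacy).
past_trials = [
    ((0.1, 0.8), 0.90),
    ((0.2, 0.7), 0.85),
    ((0.9, 0.1), 0.30),
]
score = predict_efficacy((0.15, 0.75), past_trials)
print(round(score, 3))
```

A real system would use far richer features (sequence embeddings, trial metadata) and a learned model rather than raw distance, but the pipeline shape — featurize, compare to history, predict — is the same.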

The pandemic has demonstrated how important it is to have a productive relationship between humans and AI. As we continue improving the vaccines, medical experts rely on machine learning to help with contact tracing, diagnosing the infection from medical images, and even forecasting disease outbreaks.

GPT-3

One of the most impressive recent innovations in artificial intelligence has been OpenAI's Generative Pre-trained Transformer 3 (GPT-3) language model. This deep learning model uses vast amounts of training data to produce natural language of a quality never before seen from an AI system. It can write convincing articles, compose poetry and fiction, and even generate working computer code.

GPT-3 has generated a great deal of excitement and trepidation. Some speculated that it could be the first step towards artificial general intelligence (AGI) that could potentially overtake humans in terms of intelligence.

While GPT-3 is a significant advance in artificial intelligence, there are still limitations to what it can do. GPT-3 cannot generate natural language from scratch; instead, it needs prompts from a human to generate text. Also, while GPT-3 can complete tasks such as writing articles or answering math problems, it does not understand the underlying concepts behind these activities.

GPT-3 is the third generation of OpenAI's language model. With 175 billion parameters, it was trained on a massive amount of text data scraped from the internet and can generate natural-sounding, human-like text as well as machine code. It can even write a blog post like this one in just a few minutes, given only a title and a single sentence!
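Under the hood, models like GPT-3 generate text one token at a time by sampling from a probability distribution over the vocabulary, with a temperature parameter controlling how adventurous the sampling is. A simplified sketch with a toy four-word vocabulary (this is not OpenAI's implementation):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample a token index from softmax(logits / temperature)."""
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs)[0]

vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 1.0, 0.5, 0.1]  # invented scores for the next token

# Low temperature sharpens the distribution toward the top-scoring token.
random.seed(0)
token = vocab[sample_next_token(logits, temperature=0.1)]
print(token)
```

At high temperature the distribution flattens and lower-scoring tokens appear more often, which is why temperature is often described as a "creativity" knob.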

AlphaFold – protein folding

For more than 50 years, scientists have tried to work out how proteins fold using computer models, with little success until now. AlphaFold, an algorithm from Google subsidiary DeepMind, lets them predict protein structures more accurately than ever before, which could eventually help researchers develop treatments for diseases such as Type 1 diabetes.

Proteins are critical to life: cells use them to perform their most important functions, and they make up everything from enzymes to hormones and antibodies. But a protein can only do its job once its chain of amino acids has folded into the right three-dimensional shape. That's why protein folding is essential: it is the process by which a protein acquires its functional structure.

This process has been a huge mystery for decades, which is especially significant because proteins are often referred to as the fundamental building blocks of life.

Why is this such a significant breakthrough?

Protein folding is one of the most complex problems in biology because it involves so many different factors interacting at the molecular level. One Stanford University researcher says: "There are 20 different types of amino acids in nature, and each one can be placed in thousands of possible positions within a single protein." And yet, without understanding how proteins fold, we cannot understand how cells grow and multiply, how tissues form, or how our bodies respond to illness; failures to fold correctly are implicated in many diseases.
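The combinatorics behind that quote are staggering. Even a crude model in which each residue can adopt only a few local conformations gives an astronomically large search space (the observation behind Levinthal's paradox). A quick back-of-the-envelope calculation, with assumed numbers:

```python
def conformation_count(residues, states_per_residue=3):
    """Crude count of possible conformations for a protein chain,
    assuming each residue independently adopts one of a few states."""
    return states_per_residue ** residues

# A modest 100-residue protein with just 3 assumed states per residue:
count = conformation_count(100)
print(f"~10^{len(str(count)) - 1} possible conformations")
```

No brute-force search could ever enumerate that space, which is why a learned predictor like AlphaFold, rather than exhaustive simulation, was the breakthrough.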

The lab's AI system AlphaFold predicted protein shapes with a median accuracy score of 92.4 GDT (out of 100) at the CASP14 assessment, a level regarded as competitive with experimental methods. This achievement could help researchers develop new treatments for diseases like Alzheimer's and Parkinson's.

DeepMind's AlphaFold AI has successfully solved a half-century-long challenge of modelling the structure of proteins. Scientists have been trying to predict the 3D shapes that form when amino acids chain together to make a protein but have lacked the tools and knowledge needed to solve this problem. Identifying these shapes is crucial for scientists attempting to create new medicines or understand how diseases work. 

Autonomous driving hits the road, and robot taxis in China

Most experts predict that we'll be getting self-driving cars in the next few years. However, there are still some significant milestones to reach before then.

Tesla, the electric car manufacturer founded in 2003, reached a significant milestone on June 16, 2020: updates to its Autopilot feature allowed Tesla vehicles to drive themselves under certain conditions.

Level 2 autonomy is still far from perfect and, most of the time, requires a human driver to be ready to take over should something go wrong. But it is an exciting development nonetheless, and one that lets more people experience semi-autonomous driving. The update was rolled out slowly so the company could collect data and monitor performance in a controlled manner.
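Level 2 systems keep the human in the loop by design: if the driver stops responding, the car escalates warnings and ultimately hands back control. A toy sketch of that escalation logic (the thresholds and action names are invented for illustration, not Tesla's actual behaviour):

```python
def autopilot_response(seconds_since_driver_input):
    """Map time without detected driver input to a system action."""
    if seconds_since_driver_input < 10:
        return "normal"         # driver engaged: keep assisting
    if seconds_since_driver_input < 20:
        return "visual_alert"   # remind the driver to hold the wheel
    if seconds_since_driver_input < 30:
        return "audible_alert"  # escalate to warning chimes
    return "disengage"          # slow the car and return control

for t in (5, 15, 25, 40):
    print(t, autopilot_response(t))
```

Real systems fuse several signals (steering torque, camera-based gaze tracking) rather than a single timer, but the staged-escalation pattern is the same.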

Baidu is also moving fast in the race to fully autonomous vehicles. The Chinese tech giant launched a robotaxi service called Apollo Go in cities including Beijing, Changsha, and Cangzhou. Human safety drivers still ride along, for safety reasons and customer confidence. Baidu aims to have up to 3,000 vehicles operating across 30 Chinese cities in the coming years.

SoftBank's Pepper robot has taken on a new role: riding in a self-driving car. Honda is working with SoftBank to test Pepper on Japanese roads in a vehicle equipped with sensors that let it drive autonomously. The robot is designed to assist the people in the car, giving directions or controlling the air conditioning and music systems. The vehicle is not commercially available yet, but Pepper will be put into action for business purposes next year.

The first autonomous trans-Atlantic crossing

The Mayflower was first conceived in 2017 by ProMare, an independent marine innovation company. Its goal was to create an affordable, energy-efficient ship that could conduct a wide range of scientific experiments at sea while operating autonomously and remotely.

The autonomous Mayflower, named after the ship that carried the Pilgrims to America and built to commemorate the 400th anniversary of that voyage, is powered by AI and has no crew onboard. It gathers data from 30 sensors, including radar, GPS, cameras, and depth detectors, and passes it to an edge-computing application that can make decisions without communicating with a base station back on land. This could go down as a milestone for autonomous shipping.
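The interesting engineering constraint is that every navigation decision must be made onboard, with no guarantee of a link back to shore. A heavily simplified sketch of such an edge decision rule (the sensor format, thresholds, and steering policy are invented for illustration, not the Mayflower's actual software):

```python
def plan_heading(current_heading, obstacles, safe_distance_m=500):
    """Steer away from the nearest radar contact inside the safety radius.

    current_heading -- ship's heading in degrees (0-359)
    obstacles       -- list of (bearing_degrees, distance_m) contacts
    """
    threats = [o for o in obstacles if o[1] < safe_distance_m]
    if not threats:
        return current_heading  # clear water: hold course
    bearing, _ = min(threats, key=lambda o: o[1])
    # Toy policy: turn 30 degrees away from the closest threat's side.
    turn = -30 if bearing >= current_heading else 30
    return (current_heading + turn) % 360

# Radar reports a contact 300 m away at bearing 95 degrees.
print(plan_heading(90, [(95, 300), (200, 2000)]))
```

A real collision-avoidance stack would fuse camera, radar, and AIS data and respect maritime right-of-way rules; the point here is only that the whole loop runs locally, on the ship.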

What developments in AI have impressed you most since the start of the decade?

I think the most impressive developments in AI are the ones that help us make our lives easier. Learn more about how AI can make your life easier in our cases section.