Biggest Trends for AI in 2022 so far

Kévin Françoisse
Mar 19, 2024

Despite its ups and downs, this year has been a great year for AI. While adoption was already accelerating, AI is now a major source of transformation for every industry. This year it became increasingly pivotal to breakthroughs in every field, from drug discovery to critical infrastructure such as electricity grids. Yet I feel this is just the beginning, as great advances and new research keep arriving.

This article describes our view of the major AI trends to watch in 2022. It is strongly inspired by what we see at our customers and by discussions with our tech team.

Operationalizing AI initiatives (MLOps)

In 2021, a larger proportion of companies ventured into the world of AI. While AI has repeatedly proven that it can quickly generate strong business value, still very few companies make it beyond the POC (Proof of Concept) phase. As The Economist pointed out a year ago, AI techniques are powerful, but they can be troublesome to deploy.

The Implementation of MLOps

Building and deploying production systems still requires a lot of manual work: discovering and correcting data issues, spotting data drift and concept drift, managing training runs, managing model versions, carrying out error analysis, auditing performance, pushing models to production, managing computation, and much more.

For the majority of organizations, continuously delivering and integrating AI solutions in enterprise infrastructure and business workflow remains notoriously difficult.

But these tasks are becoming more systematic. MLOps, or machine learning operations, is a set of practices that promises to empower engineers to build, deploy, monitor, and maintain models reliably and repeatedly at scale. Just as Git made version control easier and TensorFlow and PyTorch made model development easier, MLOps tools will make machine learning far more productive.

The major cloud providers now offer MLOps services and have invested heavily in MLOps practices: AWS with SageMaker, Google with Vertex AI on GCP, Microsoft with Azure Machine Learning, and so on. Open-source projects have also gained a lot of traction recently, such as Kubeflow on Kubernetes and MLflow.
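
To make this concrete, here is a minimal sketch of the kind of experiment tracking these tools provide, using the open-source MLflow mentioned above; the dataset, model, and hyperparameters are placeholders rather than a production setup.

```python
# Minimal MLflow experiment-tracking sketch: the dataset and model are
# illustrative placeholders; the point is logging runs reproducibly.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline-logreg"):
    params = {"C": 1.0, "max_iter": 1000}
    model = LogisticRegression(**params).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Log hyperparameters, metrics, and the model artifact so every
    # training run stays comparable and reproducible in the MLflow UI.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, artifact_path="model")
```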

At Sagacify, over the past year, we have been improving the efficiency of machine learning project development and have embraced MLOps practices to simplify the lifecycle management of our AI solutions running in production.

If you would like to discuss what MLOps platform would best fit your enterprise needs, feel free to get in touch.

Low-code and No-code AI

AI is getting democratized. What we will see in the coming years with AI is similar to what we saw with the democratization of web development ten years ago. Building a quality website used to require good programming skills and a huge investment of time. Now, with solutions like Wix, WordPress, and Webflow, virtually anyone can build a website by simply dragging and dropping elements.

AI solutions are following the same trend. Specific tools are being built to let non-technical users build their own AI systems, allowing them to focus on the purpose of using AI rather than on the complex technical aspects. Here are two examples of such AI tools:

  • Lobe.ai (since acquired by Microsoft) makes it extremely easy to build an image classification model simply by feeding it a few pictures for each class. In addition, it is beautifully designed, making the user experience not just aesthetically pleasing but also easy to use.
  • Google Teachable Machine focuses on teaching you how machine learning models work but still lets you build an image classification model in minutes.
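
As a hedged illustration of how little code is left once such a tool has done the work, here is a sketch of loading a model produced by Teachable Machine's TensorFlow/Keras export; the file names and preprocessing follow the boilerplate the tool generates, but treat them as assumptions.

```python
# Sketch of running a Teachable Machine (TensorFlow/Keras) export locally.
# "keras_model.h5", "labels.txt", and "sample.jpg" are assumed file names.
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.keras.models.load_model("keras_model.h5", compile=False)
class_names = [line.strip() for line in open("labels.txt")]

# Resize to the 224x224 input the export expects and scale to [-1, 1].
image = Image.open("sample.jpg").convert("RGB").resize((224, 224))
x = (np.asarray(image, dtype=np.float32) / 127.5) - 1.0
x = np.expand_dims(x, axis=0)

probabilities = model.predict(x)[0]
print(class_names[int(np.argmax(probabilities))])
```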

Other tools include Google Document AI, which simplifies the development of information-extraction models; Microsoft offers a similar solution with Azure Form Recognizer, and AWS with Amazon Textract.

At Sagacify, we fully embrace this trend because we believe that AI will become a central part of every business strategy. As a result, companies should not have to rely only on external service providers to maintain their AI solutions over time. Instead, they should use tools that let them be in control of their own AI systems.

Sagacify has strong expertise in using AWS and GCP tools to enable this in your environment, and has also been developing low-code AI tools for situations where no existing solution was yet available on the market.

This is the case for our product Skwiz.ai, which is a low-code AI tool that lets users automate the processing of documents without the need to define rules, keywords, or templates.

Hyperautomation

The automation of repetitive and mundane tasks is probably the most obvious and valuable application of AI. It often relies on supervised learning, the branch of machine learning that learns from past examples of inputs linked to output actions or decisions.

Identifying tasks in your workflow that can be automated with AI is easier than trying to use AI to invent new business models. That makes it the go-to application when starting an AI transformation journey, and the recent growth of AI capabilities for handling raw text and images has opened up a vast number of new opportunities.

Some useful examples of tasks that can be automated with AI are:

  • Managing inbound communication (emails)
  • Processing documents
  • Detecting fraud
  • Deciding which specialist to contact to solve an issue
  • Visually inspecting manufactured parts to detect defects
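
To illustrate the supervised-learning pattern behind the first item, here is a toy sketch of an email router trained on past examples; the emails, labels, and model choice are invented purely for illustration.

```python
# Toy supervised-learning sketch: past emails mapped to the team that
# handled them, then used to route a new message. Data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Invoice 1043 seems to be missing a VAT number",
    "I cannot log in to my account since the last update",
    "Please send me a quote for 500 additional licenses",
    "The payment for order 778 was charged twice",
]
teams = ["finance", "support", "sales", "finance"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(emails, teams)

# Route a new, unseen email to the most likely team.
print(router.predict(["My card was debited twice for the same order"]))
```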

Gartner calls this hyperautomation: businesses racing to automate as many business processes as possible in order to stay competitive, scale, and enable remote operations.

This frees up valuable employees' time for what we humans do best: being creative, showing empathy, and taking better care of customers. In a sense, AI has the power to re-humanize society.

Better language modeling

Language models, the core of Natural Language Processing (NLP), are models capable of deeply understanding text and of communicating with us humans. This is very important, as it could tame the complexity of human conversation, one of the barriers between the physical world and the digital world. We have recently seen the release of GPT-3 (by OpenAI), the largest and most advanced language model created so far.

If you are interested, you can watch a TED Talk on the subject or read another article that goes into more detail.

Consisting of about 175 billion parameters (still far fewer connections than the human brain), GPT-3 is so good at answering questions and generating text that it can sometimes fool us into thinking we are talking to a real person. And that's not all: GPT-3 can also generate computer code (Microsoft Power Apps, GitHub Copilot), do maths, and manipulate visual concepts through text.

For next year, OpenAI is expected to be working on GPT-4, a neural network rumored to be about 500 times larger than GPT-3. Considering what GPT-3 is capable of today, there is no doubt that the next generations will profoundly change how we interact with the digital world.
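
GPT-3 itself is only reachable through OpenAI's API, so as a hedged illustration of the prompt-and-complete interaction described above, here is a sketch using the much smaller open GPT-2 model from Hugging Face.

```python
# Prompt-completion sketch with the small open GPT-2 model; GPT-3 follows
# the same interaction pattern but sits behind OpenAI's API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The biggest AI trend of 2022 will be"
completion = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(completion[0]["generated_text"])
```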

Data-Centric AI

The concept of Data-Centric AI was recently introduced by Andrew Ng, one of the thought leaders of the field. It describes the recent shift of focus from improving machine learning models to improving the underlying data used to train and evaluate them.

In his recent talk, Andrew Ng explains why he considers the data-centric approach more rewarding and calls for a shift towards data-centrism in the community. According to him, it is more important to spend time collecting well-labeled, high-quality data than to keep working on model improvements.

Indeed, most companies do not have the large datasets (big data) needed to train large deep learning models. In reality, most datasets are much smaller. In the manufacturing industry, for example, people will tell you they have at most 50 images of a given defect. On such small data, techniques built to work on hundreds of millions of examples struggle. On top of that, manufacturers typically have many different defects to detect, which forces us to build many machine learning models on small datasets. This is where MLOps comes back into play: machine learning engineers need tools that let them build models faster, so they can focus on getting quality data and on error analysis.
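
As a hedged illustration of one data-centric tactic for the "50 defect images" scenario, here is a sketch that augments a small labeled image folder so the effort can go into data quality rather than model tweaks; the folder layout and transform choices are assumptions.

```python
# Augmenting a tiny defect dataset with torchvision transforms.
# The "defects/" folder layout (one subfolder per class) is assumed.
import torch
from torchvision import datasets, transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects e.g. defects/scratch/*.jpg and defects/ok/*.jpg
dataset = datasets.ImageFolder("defects/", transform=augment)
loader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True)

images, labels = next(iter(loader))
print(images.shape, labels)  # each epoch sees slightly different variants
```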

→ To dig a bit deeper into the subject

Transformers 

This is a more technical trend: an architecture that has emerged as a general-purpose building block for machine learning, beating the state of the art in many domains, including NLP, computer vision, and even protein structure prediction.
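
For the curious, here is the core operation behind transformers, scaled dot-product attention, written in plain NumPy to show the idea rather than any particular framework.

```python
# Scaled dot-product attention: every token builds its output as a
# softmax-weighted mix of all value vectors, weighted by query-key similarity.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (sequence_length, d_model)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # weighted mix of values

# Three "tokens" with 4-dimensional embeddings, attending to themselves.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(x, x, x).shape)     # (3, 4)
```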

Generative/Creative AI

Generative AI refers to techniques that create new content, work that previously required humans, by learning from existing text, audio, or images. With generative AI, computers detect the underlying patterns in the input data and produce similar content.

The way it works is fascinating and quite intuitive. The best-known family of generative models is Generative Adversarial Networks (GANs). A GAN consists of two neural networks, a generator and a discriminator, pitted against each other until they reach an equilibrium (a compact sketch of this loop follows the list):

  • The generator network is responsible for generating new data or content resembling the source data.
  • The discriminator network is in charge of differentiating the generated data from the source data, judging which samples look closer to the original.
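
Here is that adversarial loop as a compact PyTorch sketch; the "real" data is a simple 1-D Gaussian so the example stays tiny, but image GANs follow the same generator-versus-discriminator pattern.

```python
# Minimal GAN training loop: the generator learns to imitate samples drawn
# from a 1-D Gaussian, the discriminator learns to tell real from fake.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0        # samples from the "source" data
    fake = generator(torch.randn(64, 8))         # generator tries to imitate them

    # Discriminator step: label real data 1, generated data 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator output 1 for fakes.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, 8)).detach().flatten())  # should cluster near 2.0
```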

Generative models have improved drastically over the past couple of years and can now generate entirely original pictures and videos that fool the human eye. One of the most striking examples is probably the thispersondoesnotexist website, where every face you see is randomly generated by a GAN. That was the state of research in 2019, and huge progress has been made since.

Now, it’s being used to generate article headlines, create logos, 3D models, and even art. In 2022, a larger community of machine learning engineers have started creating art using similar technologies, which has led to the apparition of virtual art galleries, selling AI-generated art using the NFT technology, and even art galleries in Metaverses.

Autonomous Vehicles

Tesla announced that its cars would demonstrate full self-driving capabilities by 2022. Other players such as Ford, Apple, GM, and Google's Waymo are expected to announce major leaps next year as well. Other autonomous vehicles, such as ships, are also progressing towards full autonomy: the Mayflower 400 is set to cross the Atlantic Ocean without manual direction, and the Boston Dynamics robot Spot is going on autonomous surveillance and rescue missions.

Conclusion

This barely scratches the surface of current AI trends, without even touching on the new algorithmic breakthroughs that aim to address an even wider range of applications. On the business side, with the wider acceptance, adoption, and integration of AI in organizations, I hope that companies will venture into more advanced applications. Doing so will also require bigger investments, which sometimes only large companies can afford. This is why I am thrilled to see MLOps tools emerging as a way to reduce those costs, and the Belgian governments investing more and more to help Belgian companies benefit from AI.