What is edge AI?


Edge artificial intelligence (AI), or AI at the edge, is the use of AI in combination with edge computing so that data can be collected and processed at or near its physical source. For example, an image recognition task can return results faster when it runs close to where the images are captured.

Edge AI allows responses to be delivered almost instantly. Because AI algorithms process data close to the device that generates it, results arrive within milliseconds, providing real-time feedback with or without an internet connection. This approach can also be more secure, because sensitive data never leaves the edge.

Edge AI differs from traditional AI in that, instead of running AI models in the backend of a cloud system, it runs them on connected devices operating at the network edge. This adds a layer of intelligence at the edge: the edge device not only collects metrics and analytics but can also act on them, because a machine learning (ML) model is integrated into the device itself.
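To make this concrete, here is a minimal, hypothetical sketch of that pattern: a device reads a sensor, runs a local stand-in for an embedded ML model, and acts on the result immediately, with no cloud round trip. All function names and the threshold value are illustrative assumptions, not part of any specific product.

```python
# Hypothetical edge-device loop: collect, infer, and act locally.

def read_sensor():
    # Placeholder for a real sensor driver (e.g., a temperature probe).
    return 72.5

def local_model(reading):
    # Stand-in for an on-device ML model; a real deployment would load
    # a trained model. Here, a simple threshold flags anomalies.
    return "anomaly" if reading > 90.0 else "normal"

def act(result):
    # The device acts on the inference result without contacting a server.
    if result == "anomaly":
        return "shut_down_motor"
    return "continue"

reading = read_sensor()
result = local_model(reading)
action = act(result)
print(action)  # prints "continue" for this sample reading
```

Because every step happens on the device, the decision latency is bounded by local compute, not by network conditions.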

The goal of artificial intelligence is the same: to have computers collect data, process it, and generate results comparable to human intelligence. Edge AI, however, does the work and decision making locally, in or near whatever device is being used.

The combination of edge computing and artificial intelligence brings significant benefits. With edge AI, high-performance computing capabilities are brought to the edge, where sensors and Internet of Things (IoT) devices are located. Users can process data on devices in real time, without requiring connectivity to and integration with other systems, and they can save time by acting on data locally rather than communicating with other physical locations.

The benefits of edge AI include: 

  • Less power use: Save energy costs by processing data locally; running AI at the edge requires less power than running it in cloud data centers
  • Reduced bandwidth: Decrease the amount of data that needs to be sent, and lower costs, by processing, analyzing, and storing more data locally instead of sending it to the cloud
  • Privacy: Lower the risk of sensitive data being exposed, because edge AI processes data on the edge device itself
  • Security: Prioritize important data transfers by processing and storing data in an edge network and filtering out redundant or unneeded data
  • Scalability: Easily scale systems with cloud-based platforms and native edge capability on original equipment manufacturer (OEM) equipment
  • Reduced latency: Eliminate the round trip to a cloud platform by analyzing data locally, freeing the system to handle other tasks
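The bandwidth and security benefits above both come down to filtering at the edge: only significant data leaves the device. A minimal, hypothetical sketch (the readings and the threshold are made up for illustration):

```python
# Hypothetical edge filtering: upload only readings that matter,
# and handle everything else locally to reduce bandwidth.

readings = [20.1, 20.3, 95.2, 20.0, 91.7, 20.2]

THRESHOLD = 90.0  # assumed cutoff separating routine from important data

# Keep routine readings on the device; queue only outliers for upload.
to_upload = [r for r in readings if r > THRESHOLD]

print(len(to_upload))  # prints 2: of 6 readings, only 2 leave the device
```

In this toy case, two-thirds of the traffic to the cloud disappears; in real deployments with high-frequency sensors, the reduction is typically far larger.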

Red Hat does a lot of work on container and Kubernetes technologies with the greater open source community. Red Hat® OpenShift® brings together tested and trusted services to reduce the friction of developing, modernizing, deploying, running, and managing applications. 

Red Hat OpenShift includes key capabilities to enable machine learning operations (MLOps) in a consistent way across datacenters, hybrid cloud, and edge. With AI/ML on Red Hat OpenShift, you can accelerate AI/ML workflows and the delivery of AI-powered intelligent applications.

Red Hat OpenShift AI is an AI-focused portfolio that provides tools across the full lifecycle of AI/ML experiments and models, and it includes Red Hat OpenShift. It's a consistent, scalable foundation based on open source technology for IT operations leaders, and it brings a specialized partner ecosystem to data scientists and developers to capture innovation in AI.


