
Red Hat Launches Enterprise Linux AI — and It’s Actually Useful



Red Hat has officially launched Red Hat Enterprise Linux (RHEL) AI into general availability. This is not just another product release; it is a genuinely practical approach to AI that RHEL administrators and developers will find extremely useful.

RHEL AI aims to simplify enterprise-wide adoption by providing fully optimized, bootable RHEL images for server deployments in hybrid cloud environments. These optimized, bootable model-runtime images work with Granite models and InstructLab tooling packages, and they include PyTorch runtime libraries and accelerator support for AMD Instinct MI300X, Intel and NVIDIA GPUs, and the NVIDIA NeMo framework.

Also: Bloomberg survey says businesses are doubling their efforts to deploy artificial intelligence

This is Red Hat's core AI platform, designed to streamline generative AI (gen AI) model development, testing, and deployment. The new platform combines IBM Research's open source-licensed Granite large language model (LLM) family, InstructLab alignment tooling based on the LAB methodology, and a collaborative approach to model development through the InstructLab project.

IBM Research pioneered the LAB methodology, which uses synthetic data generation and multi-phase tuning to align AI/ML models without costly manual effort. The LAB methodology, refined through the InstructLab community, allows developers to build and contribute to LLMs just as they would to any other open source project.
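To make that concrete, here is a rough Python sketch of the synthetic-data idea: a handful of human-written seed question-and-answer pairs are expanded by a "teacher" model into a larger training set, which is then filtered before tuning. To be clear, this is my own illustration, not the actual LAB implementation; the teacher() call is a placeholder you would wire to a real model.

```python
# Illustrative sketch of LAB-style synthetic data generation (not the real implementation).
# A "teacher" model expands a few human-written seed Q&A pairs into a larger training set.
from dataclasses import dataclass

@dataclass
class Example:
    question: str
    answer: str

def teacher(prompt: str) -> str:
    # Placeholder: in practice this would call a large "teacher" LLM.
    return f"(generated text for: {prompt[:40]}...)"

def generate_synthetic(seeds: list[Example], per_seed: int = 5) -> list[Example]:
    synthetic = []
    for seed in seeds:
        for _ in range(per_seed):
            # Ask the teacher for a new question in the style of the seed...
            q = teacher(f"Write a new question similar to: {seed.question}")
            # ...then have it answer that question.
            a = teacher(f"Answer concisely: {q}")
            synthetic.append(Example(q, a))
    # A real pipeline would score and filter low-quality pairs before multi-phase tuning.
    return [ex for ex in synthetic if ex.answer.strip()]

seeds = [Example("How do I list failed systemd units?", "Run 'systemctl --failed'.")]
print(len(generate_synthetic(seeds)), "synthetic examples generated")
```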

With the launch of InstructLab, IBM has also released several Granite English-language and code models under the Apache license, providing transparent training datasets and opening them up to community contributions. The Granite 7B English-language model is now integrated into InstructLab, where users can work together to enhance its capabilities.
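If you want to kick the tires on a Granite model outside of RHEL AI, it can be pulled with the standard Hugging Face transformers API. Here is a minimal sketch; note that the model ID is my assumption, so check IBM's Hugging Face organization for the exact repository names and license terms.

```python
# Minimal sketch: loading an Apache-licensed Granite model with Hugging Face transformers.
# The model ID below is an assumption; verify the exact repository name before using it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-7b-base"  # assumed ID; substitute the one you intend to use

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("RHEL AI combines Granite models with", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```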

Also: Can AI Even Be Open Source? It’s Complicated

RHEL AI is also integrated into OpenShift AI, Red Hat's machine learning operations (MLOps) platform. This enables large-scale model deployment across distributed Kubernetes clusters.

Let's face it: AI isn't cheap. Leading large language models (LLMs) cost tens of millions of dollars to train, and that's before you even start thinking about tuning them for specific use cases. RHEL AI is Red Hat's attempt to bring those enormous costs back down to earth.

Also: Research shows AI spending will reach $632 billion in the next 5 years

Red Hat tackles part of that with retrieval-augmented generation (RAG). RAG lets an LLM access approved external knowledge stored in databases, documents, and other data sources. This improves RHEL AI's ability to give the right answer rather than the answer that merely sounds right.
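Conceptually, RAG is a small loop around the model: retrieve the most relevant approved documents for a question, put them into the prompt, and have the LLM answer from that context. Here is a deliberately naive Python sketch of that loop; the keyword-overlap retriever and the llm() placeholder are mine, purely for illustration. A real deployment would use embeddings, a vector store, and the served model.

```python
# Illustrative RAG loop: retrieve approved documents, then ground the model's answer in them.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Naive keyword-overlap retriever; real systems use embeddings and a vector store.
    words = set(query.lower().split())
    return sorted(documents, key=lambda d: -len(words & set(d.lower().split())))[:k]

def llm(prompt: str) -> str:
    # Placeholder for a call to the deployed model.
    return f"[model answer grounded in {len(prompt)} characters of context]"

def answer(query: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)

docs = ["RHEL AI bundles Granite models with InstructLab tooling.",
        "OpenShift AI handles large-scale model deployment on Kubernetes."]
print(answer("What does RHEL AI bundle?", docs))
```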

This also means you can tune your RHEL AI instances with knowledge from your company's subject-matter experts without needing a PhD in machine learning. That makes RHEL AI much more useful than a generic AI model for doing the work you actually need done, instead of, say, writing Star Wars fan fiction.
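In practice, that contribution looks less like model surgery and more like writing down what you know. The sketch below shows how a subject-matter expert might capture seed question-and-answer pairs for a tuning pipeline to pick up; the field names and JSON layout are simplified illustrations of mine, not InstructLab's actual taxonomy format, which uses YAML files.

```python
# Simplified sketch of a subject-matter expert capturing seed Q&A pairs for model tuning.
# Field names and the JSON layout are illustrative; InstructLab's real contribution
# format is a YAML taxonomy entry.
import json

contribution = {
    "created_by": "jane.doe",        # hypothetical contributor
    "domain": "it-operations",       # hypothetical domain label
    "seed_examples": [
        {"question": "What is our standard change-freeze window?",
         "answer": "Production changes are frozen from December 15 to January 2."},
        {"question": "Who approves firewall exceptions?",
         "answer": "The network security team reviews and approves all firewall exceptions."},
    ],
}

with open("seed_contribution.json", "w") as f:
    json.dump(contribution, f, indent=2)
print("Wrote", len(contribution["seed_examples"]), "seed examples")
```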

Also: Anthropic’s New Claude Enterprise Plan Brings AI Superpowers to Businesses at Scale

“RHEL AI empowers domain experts, not just data scientists, to contribute to next-generation purpose-built AI models on hybrid cloud platforms, while also enabling IT organizations to scale these models into production through Red Hat OpenShift AI,” said Joe Fernandes, vice president of Red Hat’s OpenShift AI Platform, in a statement.

RHEL AI isn't tied to any single environment. It's designed to run wherever your data lives, whether on-premises, at the edge, or in the public cloud. This flexibility is critical for rolling out an AI strategy without completely overhauling your existing infrastructure.

RHEL AI is currently available on Amazon Web Services (AWS) and IBM Cloud as a "bring your own subscription" (BYOS) offering. Over the next few months, it will also be offered as a service on AWS, Google Cloud Platform (GCP), IBM Cloud, and Microsoft Azure.

Also: Stability AI’s text-to-image models are now available in the AWS ecosystem

Dell Technologies has announced a partnership to bring RHEL AI to Dell PowerEdge servers. The partnership aims to simplify AI deployments by providing validated hardware solutions, including NVIDIA accelerated computing, optimized for RHEL AI.

As someone who has covered open source software for decades and has played with AI since the days when Lisp was considered cutting-edge, I think RHEL AI represents a significant shift in the way businesses approach AI. By combining the power of open source with enterprise-grade support, Red Hat is positioning itself at the forefront of the AI revolution.

Of course, the real test will be in real-world adoption and application. But given Red Hat’s track record, RHEL AI could be the platform that takes AI beyond the tech giants and into the hands of businesses of all sizes.
