matt_asay
Contributor

We need a Red Hat for AI

opinion
Jun 10, 2024 | 4 mins

Artificial Intelligence | Emerging Technology | Generative AI

We’re still waiting for a trusted vendor to spare enterprises from the confusion and guesswork of artificial intelligence.


Everyone is doing AI, but no one knows why. That’s an overstatement, of course, but it feels like the market has hit peak hype without peak productivity. As Monte Carlo CEO Barr Moses highlights from a recent Wakefield survey, 91% of data leaders are building AI applications, but two-thirds of that same group said they don’t trust their data to large language models (LLMs). In other words, they’re building AI on sand.

To be successful, we need to move beyond the confusing hype and help enterprises make sense of AI. In other words, we need more trust (open models) and fewer moving parts (opinionated platforms that remove the guesswork of choosing and applying models).

We might need a Red Hat for AI. (Which raises the question: why isn’t Red Hat stepping up to be the Red Hat of AI?)

A model that needs complexity

Brian Stevens, who was CTO of Red Hat back in 2006, helped me understand a key dependency for Red Hat’s business model. As he noted then, “Red Hat’s model works because of the complexity of the technology we work with. An operating platform has a lot of moving parts, and customers are willing to pay to be insulated from that complexity.” Red Hat creates a distribution of Linux, choosing certain packages (networking stacks, print drivers, etc.) and then testing/hardening that distribution for customers.

Anyone can download raw Linux code and create their own distribution, and plenty do. But not large enterprises. Or even small enterprises. They’re happy to pay Red Hat (or another vendor such as AWS) to remove the complexity of compiling components and making it all work seamlessly together. Importantly, Red Hat also contributes to the variety of open source packages that comprise a Linux distribution. This gives large enterprises the confidence that, if they chose to (most don’t), they could move away from Red Hat Enterprise Linux in ways they never could move away from proprietary UNIX.

This process of demystifying Linux, combined with open source that bred trust in the code, turned Red Hat into a multibillion-dollar enterprise. The market needs something similar for AI.

A model that breeds complexity

OpenAI, however popular it may be today, is not the solution. It just keeps compounding the problem with proliferating models. OpenAI throws more and more of your data into its LLMs, making them better but not any easier for enterprises to use in production. Nor is it alone. Google, Anthropic, Mistral, etc., etc., all have LLMs they want you to use, and each seems to be bigger/better/faster than the last, but no clearer for the average enterprise.

We’re starting to see enterprises step away from the hype and do more pedestrian, useful work with retrieval-augmented generation (RAG). This is precisely the sort of work that a Red Hat-style company should be doing for enterprises. I may be missing something, but I’ve yet to see Red Hat or anyone else stepping in to make AI more accessible for enterprise use.

You’d expect the cloud vendors to fill this role, but they’ve kept to their preexisting playbooks for the most part. AWS, for example, has built a $100 billion run-rate business by saving customers from the “undifferentiated heavy lifting” of managing databases, operating systems, etc. Head to the AWS generative AI page and you’ll see they’re lining up to offer similar services for customers with AI. But LLMs aren’t operating systems or databases or some other known element in enterprise computing. They’re still pixie dust and magic.

The “undifferentiated heavy lifting” is only partially a matter of managing it as a cloud service. The more pressing need is understanding how and when to use all of these AI components effectively. AWS thinks it’s doing customers a favor by offering “Broad Model Choice and Generative AI Tools” on Amazon Bedrock, but most enterprises today don’t need “broad choice” so much as meaningful choice with guidance. The same holds true for Red Hat, which touts the “array of choices” its AI approach offers, without making those choices more accessible to enterprises.

Perhaps this expectation that infrastructure providers will move beyond their DNA to offer real solutions is quixotic. Fair enough. Perhaps, as in past technology cycles, we’ll have early winners in the lowest levels of the stack (such as Nvidia), followed by those a step or two higher up the stack, with the biggest winners being the application providers that remove all the complexity for customers. If that’s true, it may be time to hunker down and wait for the “choice creators” to give way to vendors capable of making AI meaningful for customers.


Matt Asay runs developer relations at MongoDB. Previously, Asay was a Principal at Amazon Web Services and Head of Developer Ecosystem for Adobe. Prior to Adobe, Asay held a range of roles at open source companies: VP of business development, marketing, and community at MongoDB; VP of business development at real-time analytics company Nodeable (acquired by Appcelerator); VP of business development and interim CEO at mobile HTML5 start-up Strobe (acquired by Facebook); COO at Canonical, the Ubuntu Linux company; and head of the Americas at Alfresco, a content management startup. Asay is an emeritus board member of the Open Source Initiative (OSI) and holds a J.D. from Stanford, where he focused on open source and other IP licensing issues.
