Every phase of the technology industry produces its own premature obituaries.
The web was supposed to kill newspapers. The cloud was supposed to kill system administrators. Low-code was supposed to kill developers. Today, in the middle of the generative AI wave, it is Open Source’s turn.
The thesis returned to the center of the debate after the Tailwind Labs case: the claim is that AI is feeding on the work of open communities, disintermediating the very projects that helped nourish it, and making free software economically unsustainable.
A recent article by Maurizio Farina, “Open Source in the Age of AI: the Tailwind CSS Case,” has the merit of framing the issue clearly. The concern is legitimate. The case is real. The question of the economic sustainability of open projects cannot be dismissed with a shrug.
But the conclusion, in my view, is too broad.
AI is not killing Open Source. It is putting pressure on certain business models built around Open Source and, at the same time, making urgent a problem that already existed: how to redistribute value toward those who maintain the software infrastructure we all depend on.

Index
- The Tailwind case deserves to be taken seriously
- The real problem: AI can extract value without redistributing it
- Where the diagnosis becomes too broad
- The shadcn/ui factor: when Open Source competes with Open Source
- Code becomes a commodity. Value does not.
- Closing the code would be the wrong response
- The more AI we use, the more we need open foundations
- I asked AIs whether they will kill Open Source
- The right crisis, the wrong conclusion
The Tailwind case deserves to be taken seriously
Let us start with the facts.
In January 2026, Adam Wathan, founder of Tailwind CSS, publicly described a very difficult situation: Tailwind Labs had laid off 75% of its engineering team — three people out of four, in a small bootstrapped company. In his view, the reason was the “brutal impact” of artificial intelligence on the business.
The paradox is obvious: Tailwind is more widely used than ever, yet Tailwind Labs’ business has weakened.
For years, the model worked roughly like this:
- the Tailwind CSS framework is Open Source and free;
- developers arrive at the official documentation to learn how to use it;
- along that path, they discover the company’s commercial products, especially premium components and templates;
- a portion of that traffic turns into revenue.
AI breaks exactly that link in the chain.
A developer who yesterday searched for “Tailwind responsive navbar” or “Tailwind pricing card” on Google can now ask ChatGPT, Claude, Gemini, Cursor, or Lovable to generate the code directly. The framework keeps being used. The official website, however, is visited less. And if that website was the main sales channel, the economic damage becomes immediate.
According to Wathan, traffic to the documentation fell by about 40% compared with the beginning of 2023, while revenue dropped by almost 80%.
This dynamic deserves attention. It is not a nostalgic complaint. It is the sign of a real transformation: in the age of AI, a project can become more relevant and monetize worse at the same time.
Up to this point, the concern is well founded.
The real problem: AI can extract value without redistributing it
There is a point that defenders of Open Source should not minimize.
Generative models learn from an enormous mass of public material: code, documentation, tutorials, issue trackers, forums, examples, technical discussions. A substantial portion of this heritage comes from the Open Source ecosystem.
Companies developing AI then transform that knowledge into commercial products sold as subscriptions, APIs, AI-enhanced development environments, and enterprise services.
The issue should not be reduced to the simplistic opposition between “theft” and “innovation.” The more serious point lies elsewhere:
if AI enormously increases the use value of Open Source, but that value does not adequately flow back toward those who produce and maintain it, then a real economic and political problem exists.
The Tailwind case is interesting precisely because it makes this asymmetry visible. Open work remains useful; indeed, it becomes even more useful because it is incorporated into AI-assisted workflows. But the relationship between value generated and value captured can break.
This is where the debate is legitimate. New sustainability tools are needed:
- more structured sponsorships;
- more responsible corporate procurement toward critical dependencies;
- direct funding for maintainers;
- commercial models less dependent on traffic;
- a serious discussion about the relationship between AI, licenses, and our shared digital heritage.
This is the right question.
The wrong question is: should we close the code, then?
Where the diagnosis becomes too broad
Saying that AI puts the Tailwind model under pressure is not the same as saying that AI puts Open Source itself under pressure.
Tailwind CSS is Open Source. But Tailwind Labs did not monetize by selling the framework. It monetized mainly by selling complementary products — premium components and templates — discovered through the documentation.
So the point is not:
“AI makes it impossible to build a business with Open Source.”
The point is:
“AI can destroy a commercial funnel based on the idea that users must pass through you to obtain something that an assistant can now synthesize on the fly.”
That is a huge difference.
Open Source does not coincide with a single business model. There are projects and companies that monetize in very different ways:
- managed hosting;
- enterprise support;
- cloud platforms;
- security and compliance services;
- consulting;
- training;
- collaboration tools;
- premium capabilities that cannot be reduced to “copyable code.”
Vercel does not thrive because it sells the source code of Next.js. Red Hat did not build its history by selling “the code of Linux.” Automattic does not monetize WordPress by keeping the CMS behind a paywall.
In these cases, value does not lie in hiding the code. It lies in reducing complexity, guaranteeing reliability, and providing operational continuity.
AI can generate a landing page. It cannot assume an SLA. It can propose a configuration. It cannot guarantee the multi-year maintenance of an ecosystem. It can write plausible code. It cannot, by itself, replace governance, support, accountability, and trust.
The shadcn/ui factor: when Open Source competes with Open Source
There is also another element that the narrative “AI hit Tailwind” risks obscuring: the competitive pressure did not come only from generative models.
In the same years, shadcn/ui exploded.
Its positioning is almost a perfect answer to the new context: high-quality components, code copied directly into the project, total customization, no black box, no opaque package to depend on. The promise is explicit: Open Source. Open Code.
And there is an even more interesting detail: shadcn/ui presents itself openly as AI-ready. The code is accessible not only to developers, but also to AI assistants that can read it, modify it, and integrate it more effectively.
This does not prove that shadcn/ui is “the cause” of Tailwind Labs’ revenue decline. That would be a simplification symmetrical to the one that attributes everything to AI.
But ignoring the competitive context would be just as wrong.
When a community offers modular, aesthetically refined, highly customizable components for free, and those components fit perfectly into new AI-assisted workflows, the perceived value of a premium component catalog inevitably tends to compress.
AI accelerated the change. But it did not invent it from scratch.
And this, paradoxically, does not demonstrate the weakness of Open Source. It demonstrates its vitality.
Code becomes a commodity. Value does not.
Generative artificial intelligence is doing to repetitive code what automation has done to many technical activities before it: it lowers the marginal cost.
Boilerplate, snippets, standard components, recurring markup, variations on already known patterns: all of this becomes simpler, faster, less scarce.
It is understandable that those who monetized these objects directly feel the ground shifting under their feet.
But that does not mean that “code is worth nothing anymore.”
Value moves.
What matters more is:
- knowing which code to use;
- integrating it well;
- maintaining it over time;
- verifying its security;
- scaling it;
- making it accessible;
- embedding it in reliable processes;
- governing it in a real context, with real people and real accountability.
AI can generate a button. The quality of an enterprise design system is something else.
AI can compose a page. The coherent user experience of a complex product is something else.
AI can suggest an architecture. An architecture that survives growth, incidents, technical debt, and business constraints is something else.
Selling “copy-paste” in 2026 is harder. Selling reliability is not.
Closing the code would be the wrong response
The most instinctive reaction to a sustainability crisis is also the most dangerous one: if AI feeds on openness, then let us close everything.
It is an understandable temptation. But it is a strategically weak response.
First: closing today does not erase the past. If a project has been public for years, models have already absorbed documentation, syntax, examples, patterns, discussions, and use cases.
Second: closing reduces the most powerful advantage of Open Source, namely the community.
An open repository is not just visible code. It is:
- public issue tracking;
- distributed bug fixing;
- technical debate;
- organic adoption;
- forks;
- trust;
- the ability to become a standard.
AI can produce code. It cannot retroactively generate a living, competent, motivated community.
Linux is not irreplaceable only because the kernel is readable. It is irreplaceable because around that kernel there has existed, for decades, an ecosystem of maintainers, companies, distributions, review processes, technical culture, and shared responsibility.
Closing the code to defend it from AI risks being like walling up the windows to prevent someone from looking inside, forgetting that those same windows were also letting in the light.
The more AI we use, the more we need open foundations
There is one final point I consider decisive.
If AI assistants are going to write a growing share of the software we use, the verifiability of the foundations will matter even more.
A modern application is not made only of code written “in-house.” It is composed of libraries, frameworks, package managers, container images, transitive dependencies, runtimes, and toolchains. If we add another layer of probabilistically generated code on top of that stack, the fundamental question becomes:
how much of what I am running can I actually verify?
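One way to make that question concrete is simply to count what you are implicitly trusting. The sketch below is a toy illustration in Python: it counts the packages listed in an npm-style lockfile, of the kind a Tailwind or Next.js project would carry. The lockfile contents here are invented for the example; a real `package-lock.json` routinely lists hundreds of transitive entries.

```python
import json

# A toy stand-in for a real package-lock.json. The package names and
# versions below are illustrative, not a real project's dependency tree.
lockfile_json = """
{
  "packages": {
    "node_modules/tailwindcss": {"version": "4.0.0"},
    "node_modules/postcss": {"version": "8.4.31"},
    "node_modules/nanoid": {"version": "3.3.7"}
  }
}
"""

lockfile = json.loads(lockfile_json)

# Every entry here is code you run but did not write, and probably
# never reviewed. This is the surface that openness makes auditable.
packages = sorted(lockfile["packages"])
print(f"Implicitly trusted packages: {len(packages)}")
for name in packages:
    print(" -", name.removeprefix("node_modules/"))
```

Even this crude count makes the argument tangible: each line of output is a dependency whose verifiability exists only because its source is open.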
Open Source is not automatically secure. History has already taught us that an open project can be fragile, under-maintained, or vulnerable.
But openness makes possible practices that closed software hinders by definition:
- independent audits;
- public code reviews;
- supply chain analysis;
- dependency assessment;
- defensive forks;
- community remediation;
- security scoring and posture-management tools.
In a world flooded with synthetic code, transparency does not lose value. It becomes an infrastructural necessity.
I asked AIs whether they will kill Open Source
To make the point even more interesting, I put the question directly to three generative models: ChatGPT, Gemini, and Claude.
The answers, although different in emphasis, converge in a striking way.
ChatGPT argues that AI will not kill Open Source, but it will hit monetization models based on scarcities that have now evaporated: traffic to documentation, trivial snippets, privileged access to patterns that an assistant can reconstruct in seconds.
Gemini is even sharper: AI does not destroy Open Source, it forces it to level up. Value shifts from the “rounded button” to the ability to orchestrate reliable systems. And if the reaction were to close everything, the risk would be a technological ecosystem increasingly concentrated inside a handful of walled gardens.
Claude points to the deepest wound: the real problem is not the death of Open Source, but value extraction. LLMs absorb unpaid work and transform it into commercial services. This asymmetry must be addressed. But moving from there to the claim that the only solution is a retreat behind proprietary walls is an unjustified leap.
I would take these answers for what they are: not oracles, but an interesting mirror of the debate.
If even the tools accused of threatening Open Source converge on a similar diagnosis, perhaps the question deserves to be framed better.
Not: “Will AI kill free software?”
But: “What economic models, what responsibilities, and what redistribution mechanisms are needed for Open Source to remain sustainable in the age of AI?”
The right crisis, the wrong conclusion
The Tailwind case matters because it shows something we will see more and more often: AI can increase the usefulness of a project and, at the same time, weaken the way that project captures value.
This tension is real.
But it does not tell the story of the end of Open Source.
It tells the story of:
- the crisis of some commercial funnels based on attention;
- the end of easy monetization around repetitive code;
- the need to fund maintainers better;
- the urgency of more robust business models;
- the growing need for verifiable software, especially in a world increasingly mediated by AI.
Open Source is not a romantic relic of the pre-generative Internet. It is one of the central infrastructures of contemporary digital civilization. Without it, software would be more expensive, slower to innovate, more concentrated, and less controllable.
AI does not change this truth. It makes it clearer.
Because if code becomes easier to produce, then trust, community, transparency, and the ability to maintain what everyone uses become even more valuable.
The point, then, is not to defend Open Source from AI.
It is to demand that the economics of AI learn to coexist responsibly with the Open Source ecosystem from which it draws such an important part of its intelligence.