February 11, 2026

Open Source Doesn't Have to Mean Open Season

Tags: open-source, software, community

I'll be honest. I don't fully understand the concept of open source.

That probably sounds wild coming from a software engineer with nearly 20 years of experience. I've used open source tools every day of my career. I've benefited from them. I respect the people who build them. But the philosophy around open source? The way some people treat it like a moral imperative? That part has never clicked for me.

I'm not anti-community. I'm not anti-sharing. I'm not even anti-open-source in all cases. What I am is anti-theft. Not theft of code. Theft of time. Theft of the years someone spent thinking through a problem, iterating on solutions, and building something that represents their unique perspective on how software should work.

So I want to have an honest conversation about this. One that doesn't devolve into ideology. One where both sides can actually hear each other.

The Case for Open Source

Open source has produced some of the most important software ever written, and the arguments in its favor are genuinely compelling. For foundational infrastructure where broad collaboration helps, it's hard to argue against the model. Linux winning the server market isn't ideology. It's a measurable outcome.

Linux, Git, PostgreSQL, Kubernetes. None of these would exist in their current form without open collaboration. When a broad community contributes to a project, you get diverse perspectives, faster iteration cycles, and battle-tested code scrutinized by thousands of developers across different environments.

There's also the longevity argument. If a proprietary vendor goes under or kills a product, you're stuck. With open source, someone can fork it and keep it alive. The code survives the company. For foundational infrastructure and developer tooling, open source makes a ton of sense. Nobody is going to build a better TCP/IP stack by keeping it proprietary. The network effects of collaboration genuinely matter at that layer.

I respect all of that. Truly.

The open source movement deserves credit for fundamentally changing how software gets built. Before open source became mainstream, developers at different companies routinely solved the same problems in isolation, each reinventing the same wheels behind closed doors. Open source broke that cycle for infrastructure. It created a shared foundation that the entire industry builds on, and that foundation is better for having thousands of contributors rather than one company's engineering team.

The cultural impact is real too. Open source normalized the idea that developers should be able to inspect the tools they depend on. It created career paths for people who contribute publicly. It democratized access to high-quality software in ways that matter enormously for developers in regions and economic situations where paid tooling is out of reach.

I want to be clear that I'm not dismissing any of this. The case for open source in the right context is strong, well-evidenced, and has delivered enormous value to the world. My argument isn't that open source is bad. It's that the model has a blind spot.

The Asymmetry Problem

Open source licensing treats all forks as equal, but a student learning from your code and a trillion-dollar corporation repackaging it as a competing product are fundamentally different acts. The license doesn't account for power dynamics, and that's a structural problem.

When Google forked VS Code to create Project Antigravity, they didn't break any rules. Microsoft open-sourced VS Code intentionally, and forking is the expected consequence. Logically, I have no grounds to be upset about it.

But the asymmetry bothers me. Microsoft invested years of design thinking, user research, and engineering. Google, a company with functionally infinite resources, took that investment and repackaged it in nanoseconds. The license permits it. The power dynamic makes it feel like a billionaire photocopying someone's homework.

And this keeps happening. MongoDB, Elastic, Redis. All three built open source products, watched cloud providers monetize their work without contributing back, and scrambled to change licenses after the fact. Can you blame them?

Here's the thing that nobody wants to say out loud: a license without enforceable accountability is just a suggestion. The GPL has teeth in theory, but most violations go unchallenged because enforcement requires money, lawyers, and time that individual developers don't have. Meanwhile, big companies have legal teams whose entire job is to navigate the edges of compliance.

So you end up with a system where the rules technically exist, but the power to invoke them is completely one-sided. Sound familiar? It should. It's the same dynamic we see in plenty of other "rights" that look great on paper but fall apart in practice when power is distributed unevenly.

The MongoDB story is particularly instructive. They created MongoDB as open source, built a massive community around it, and then watched AWS launch DocumentDB, a MongoDB-compatible service that captured the revenue stream MongoDB had created. MongoDB's response was the Server Side Public License, which essentially says: if you offer this software as a service, you have to open source your entire stack. AWS responded by building their own implementation instead. The license change worked in the narrow sense that it stopped direct exploitation, but it fractured the community and created years of confusion about what "open source" even means anymore.

This is the cycle that keeps repeating. Build openly, get exploited, change the rules, fragment the community. There has to be a better model.

What the Evidence Actually Says

The open source debate generates a lot of heat and not much light because both sides cherry-pick evidence. When you look at the research honestly, neither model is categorically better. The outcomes depend on context, community, funding, and use case.

On security, open source enables faster discovery of vulnerabilities but doesn't guarantee it. Abandoned projects can be just as dangerous as proprietary black boxes, sometimes more so because attackers can read the code. On quality, studies scanning both open and proprietary codebases found defect density is roughly comparable, but the variance within open source is enormous. The Linux kernel has excellent quality. A random npm package with 12 stars? You're rolling the dice.

On total cost of ownership, "free as in beer" is genuinely misleading. Open source has real costs in integration, maintenance, and the engineering time to vet dependencies. On sustainability, a huge amount of critical internet infrastructure depends on projects maintained by one or two burned-out volunteers. The xkcd "Nebraska problem" is not a joke.

Let me expand on each of these because the nuance matters.

Security. The common argument is Linus's Law: "given enough eyeballs, all bugs are shallow." Reports have found that the majority of codebases contain at least one known open source vulnerability. But proprietary software has plenty of vulnerabilities too. They're just less visible until someone exploits them. The honest answer is that open source shifts the security model from "security through obscurity" to "security through transparency," and neither approach is foolproof. What actually determines security outcomes is the health of the maintenance community, not whether the source is open.

Quality. The Coverity Scan project (now part of Synopsys) ran for years scanning both open and proprietary codebases. Their finding was that open source defect density was roughly comparable to proprietary, and in some high-profile projects, better. But this varied enormously. Well-funded projects with active communities produce excellent code. Abandoned projects accumulate technical debt just like proprietary code does, except nobody is contractually obligated to fix it.

Total cost of ownership. Companies like Red Hat built entire businesses on the reality that "free" software still needs paid support. For a startup pulling in well-maintained libraries, open source is an enormous advantage. For an enterprise with compliance requirements, the hidden costs of evaluating, vetting, securing, and maintaining open source dependencies can be substantial. Neither side of the debate is wrong about this. They're just talking about different contexts.

Sustainability. This is the elephant in the room that open source advocates are often reluctant to address directly. Core internet infrastructure depends on projects where the bus factor is one. One person gets burned out, one person changes careers, one person simply stops responding to issues, and software that millions of people depend on starts rotting. Well-funded proprietary software has dedicated teams and contractual maintenance obligations. Open source has hope and goodwill. Both are valid, but only one is reliable.

The bottom line is that neither model is categorically better, which is exactly why a one-size-fits-all approach to licensing doesn't work. Different software deserves different models.

Proof of Creative Labor

Ideas should flow freely. Implementations should be earned. The distinction is between building on concepts (which is how all progress works) and taking someone's specific creative output (which is where it starts to feel like theft of time and investment).

Think about it like life itself. We're all thrown into the world with different perspectives, experiences, and starting points. We have the power to make our lives our own, even when we follow in the footsteps of those who paved the way for us. But we'll never have their life, their brain, their perspective. We can be inspired by someone without becoming a copy of them.

I can take the concepts of Dynamic Consistency Boundaries and event modeling (public ideas that I didn't invent) and create SliceRM without forking AxonIQ or KurrentDB. But I have to do the work. I have to contribute something new. I have to actually think about the problem and arrive at my own solution. Not just extend, maintain, or repackage someone else's creative output.

That's what I'm calling proof of creative labor. Proof that you sat with a problem, wrestled with it, and produced something that reflects your unique perspective.

The closest analogy I can find outside of software is in music. Nobody owns the blues as a genre. The 12-bar blues progression is public domain. Thousands of musicians have built careers on it. But if you took someone's specific recording, their specific arrangement, their specific performance, and released it as your own, that would be theft. Not because you used the same chord progression, but because you took the specific creative labor that someone invested in turning that progression into their expression of it.

Software should work the same way. Design patterns, architectural concepts, algorithmic approaches? Those are the chord progressions. They belong to everyone. But the specific implementation that represents years of someone's creative investment in solving a problem their way? That's the recording. That's the performance. That deserves protection.

The open source movement conflates these two things. It treats the chord progression and the recording as the same artifact, governed by the same license. I think that's a mistake. You can share the patterns without sharing the implementation. You can enable learning, inspiration, and composition without enabling wholesale copying of someone's creative output.

This isn't about being possessive. It's about creating a system where more people are incentivized to do the creative work rather than just consuming the creative work of others. If the only way to get a good event store is to fork someone else's, we end up with one event store and a thousand forks. If everyone has to actually think through the problem, we end up with ten genuinely different approaches. That's better for everyone.

Open Surface, Sealed Core

Publish your interfaces, extension points, and documentation openly. Ship your implementation as compiled bytecode. The community builds adapters and plugins on the open surface. The creative core stays sealed. This isn't anti-community. It's a different kind of generosity.

The model works in three layers. The interface is open: types, traits, extension points, documentation, examples. All published. Anyone can see this, code against it, build adapters and integrations. Fork it, remix it, go wild.

The implementation is sealed. The actual engine, the specific way I solved the problem, ships as compiled bytecode. You can run it, extend it through the defined interfaces, but you can't meaningfully read the internals.

The community extends rather than forks. People build projections, adapters, plugins, integrations. They compose with the system without cloning it. And if someone doesn't trust the implementation or wants something different, they do what you did: think about the problem and build their own solution.
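To make the three layers concrete, here's a minimal Rust sketch. Every name in it is invented for illustration; the point is the shape, not the API: an open trait anyone can read, a sealed implementation behind it, and community code that only ever touches the trait.

```rust
// Layer 1: the open surface. This trait would be published as source,
// with docs and examples. Anyone can code against it or implement it.
pub trait EventStore {
    fn append(&mut self, stream: &str, event: String);
    fn read(&self, stream: &str) -> Vec<String>;
}

// Layer 2: the sealed core. In the real model this struct's source
// stays private and only a compiled artifact ships; here it's a stub.
pub struct SealedEngine {
    streams: std::collections::HashMap<String, Vec<String>>,
}

impl SealedEngine {
    pub fn new() -> Self {
        Self { streams: std::collections::HashMap::new() }
    }
}

impl EventStore for SealedEngine {
    fn append(&mut self, stream: &str, event: String) {
        self.streams.entry(stream.to_string()).or_default().push(event);
    }
    fn read(&self, stream: &str) -> Vec<String> {
        self.streams.get(stream).cloned().unwrap_or_default()
    }
}

// Layer 3: the community extends through the surface, never the core.
// This adapter sees only the trait, so it composes without cloning.
pub fn log_and_append<S: EventStore>(store: &mut S, stream: &str, event: String) {
    println!("appending to {stream}: {event}");
    store.append(stream, event);
}

fn main() {
    let mut engine = SealedEngine::new();
    log_and_append(&mut engine, "orders", "OrderPlaced".to_string());
    assert_eq!(engine.read("orders"), vec!["OrderPlaced".to_string()]);
}
```

The key design property: `log_and_append` compiles against `EventStore`, so swapping the sealed engine for a homegrown one is a one-line change. That's the generosity of the model in code form.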

I know the immediate objection: "What about bugs in the black box?" Two things.

First, bugs in the sealed core are my responsibility. That's actually more accountable than the typical open source model where bugs are everyone's and no one's problem. When a critical vulnerability shows up in an open source library, there's often an awkward gap between "everyone can see the problem" and "someone actually fixes it." With a sealed core, the accountability is clear: I built it, I maintain it, I fix it.

Second, the sealed core model doesn't prevent competition. It just requires competitors to actually do the creative work. You can't gut my engine and rebrand it, but you can absolutely build your own engine that solves the same class of problems differently. That's not a restriction on innovation. That's a requirement for innovation.

Technically, this works by writing core logic in Rust and distributing it as compiled binaries. WebAssembly modules for portable distribution across platforms. Native binaries through NAPI-RS for performance-critical Node.js integration. The public types and traits get published as source to crates.io and npm. The implementation compiles down to opaque artifacts, Wasm bytecode or native machine code, that run everywhere but reveal nothing meaningful about how it works internally.
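The split can be expressed right at the packaging level. The crate names below are illustrative, not real published packages; the sketch just shows what "types as source, engine as compiled artifact" looks like in two Cargo manifests:

```toml
# types/Cargo.toml -- the open surface, published as source on crates.io.
[package]
name = "myproject-types"      # illustrative name: traits, public types, docs
version = "0.1.0"
edition = "2021"

# engine/Cargo.toml -- the sealed core, shipped only as a compiled artifact.
[package]
name = "myproject-engine"     # illustrative name
version = "0.1.0"
edition = "2021"
publish = false               # the source never goes to a registry

[lib]
# cdylib produces a loadable native library (e.g. for NAPI-RS bindings);
# building with `--target wasm32-unknown-unknown` yields a Wasm module.
crate-type = ["cdylib"]

[dependencies]
myproject-types = "0.1.0"     # the engine implements the open traits
```

Nothing in the toolchain fights this setup. Cargo, npm, and the Wasm targets are all indifferent to whether the crate behind a published trait ships as source or as a binary.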

For enterprise customers who want to self-host (because that's always the ask), you ship the compiled binary. The source stays private. The binary runs on their infrastructure. This is what most successful databases already do. You can self-host PostgreSQL from compiled packages without ever seeing a line of C. The difference is that Postgres chose to be open source. I'm choosing not to. And I believe that's a valid choice.

I'm Already Building This Way

This isn't theoretical. I'm shipping products with this model right now. Phino, LumineDB, Bergcache. Open surfaces, sealed cores. You can build with them, extend them, and integrate them. But the guts are mine.

With Phino (my design system built on the golden ratio and biological processes), you can use all of its interfaces, see the docs, and build with the tokens and components. But you'll never be able to put it together the way I put it together, because the internal composition logic is mine.

With LumineDB (my multi-model database) and Bergcache (my in-memory graph cache), the same principle applies. You can build all sorts of adapters and extensions for them. Making a database isn't a new concept. Making a cache isn't a new concept. But the guts? The specific indexing strategies, the consistency mechanics, the architectural decisions? Those are mine. If you want something similar, you'll have to ideate on what those internals might look like. I'm not going to tell you.

Let me be specific about what "open surface" means for each of these projects, because the abstract principle only matters if the implementation is real.

For Phino, the open surface includes the design token specification, the component API contracts, the documentation for how genomes (Phino's term for design subsystems) express themselves, and reference implementations for common UI patterns. What's sealed is the core engine that computes token values from biological parameters, the genome expression logic, and the harmonic relationships between design subsystems. You can use Phino to build beautiful interfaces. You can extend it with new genomes. You can't replicate the engine that makes it all work together.

For LumineDB, the open surface includes the event schema format, the tag hierarchy specification, the DINO query language, the adapter traits for storage and transport, and the projection interface for building read models. What's sealed is the query engine, the indexing strategy, the consistency enforcement mechanics, and the event storage internals. You can build applications on LumineDB, create custom projections, and write adapters for any transport layer you want. You can't extract the database engine and rebrand it.
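As a rough sketch of what building against an open projection interface feels like, here's a toy version in Rust. The trait and type names are simplified stand-ins I'm inventing for this post, not LumineDB's actual published API:

```rust
use std::collections::HashMap;

/// An event as a simplified open surface might expose it: a tag plus a payload.
pub struct Event {
    pub tag: String,
    pub payload: String,
}

/// The open projection trait: the community implements this to build
/// read models without ever seeing how events are stored internally.
pub trait Projection {
    fn apply(&mut self, event: &Event);
}

/// A community-built read model: counts events per tag.
#[derive(Default)]
pub struct TagCounts {
    pub counts: HashMap<String, usize>,
}

impl Projection for TagCounts {
    fn apply(&mut self, event: &Event) {
        *self.counts.entry(event.tag.clone()).or_insert(0) += 1;
    }
}

fn main() {
    // The sealed engine would feed this stream; here we fake it inline.
    let events = vec![
        Event { tag: "order".into(), payload: "placed".into() },
        Event { tag: "order".into(), payload: "shipped".into() },
        Event { tag: "user".into(), payload: "signup".into() },
    ];
    let mut counts = TagCounts::default();
    for e in &events {
        counts.apply(e);
    }
    assert_eq!(counts.counts["order"], 2);
}
```

Everything the community needs lives on the trait side of the boundary. Nothing about indexing, consistency, or storage leaks through it.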

For Bergcache, the open surface includes the client libraries (TypeScript, and eventually others), the key hierarchy specification, the dependency edge API, and the cascade behavior documentation. What's sealed is the graph traversal algorithm, the TTL management internals, and the memory optimization strategies. You can use Bergcache as your cache layer and build integrations for it. You can't copy the cache engine.
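The cascade behavior is the easiest piece to illustrate. What follows is not Bergcache's engine, just a toy Rust version of the concept its open surface documents: keys carry dependency edges, and invalidating a key invalidates everything downstream of it.

```rust
use std::collections::{HashMap, HashSet};

/// Toy cache with dependency edges. Invalidating a key cascades to
/// every key that declared a dependency on it, transitively.
#[derive(Default)]
struct Cache {
    values: HashMap<String, String>,
    // edge: key -> set of keys that depend on it
    dependents: HashMap<String, HashSet<String>>,
}

impl Cache {
    fn set(&mut self, key: &str, value: &str, depends_on: &[&str]) {
        self.values.insert(key.to_string(), value.to_string());
        for dep in depends_on {
            self.dependents
                .entry(dep.to_string())
                .or_default()
                .insert(key.to_string());
        }
    }

    /// Remove a key and, transitively, every key that depends on it.
    fn invalidate(&mut self, key: &str) {
        self.values.remove(key);
        if let Some(deps) = self.dependents.remove(key) {
            for dep in deps {
                self.invalidate(&dep);
            }
        }
    }
}

fn main() {
    let mut cache = Cache::default();
    cache.set("user:1", "alice", &[]);
    cache.set("profile:1", "rendered profile", &["user:1"]);
    cache.set("feed:1", "rendered feed", &["profile:1"]);

    cache.invalidate("user:1"); // cascades through profile:1 to feed:1
    assert!(cache.values.is_empty());
}
```

The concept fits in forty lines. The sealed part is everything this toy ignores: how you traverse that graph efficiently at scale, how TTLs interact with the edges, and how the whole thing stays compact in memory.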

Each of these follows the same pattern: maximum capability for users and the community, maximum protection for the creative work that makes each project unique.

Bridging the Gap

I'm not arguing against open source. I'm arguing for intellectual sovereignty: the right to share your capability without surrendering your cognition. Everyone benefits from what your software does. Nobody shortcuts past the creative work of figuring out how it does it.

To the open source advocates: I hear you. The idea that knowledge should be shared freely, that we stand on the shoulders of giants, that transparency leads to trust. Those are real values, and I share many of them. What I'm pushing back on isn't the sharing of knowledge. It's the expectation that sharing must include handing over the specific creative output that represents years of someone's life.

You can learn from my API design without seeing my implementation. You can be inspired by my architecture without copying my code. You can build something better than what I've built, and I'd genuinely celebrate that, because you'd be proving the point: the ideas flow freely, but the creative labor is yours.

To the people who share my skepticism about open source: I'm not saying close everything down and go back to shrink-wrapped software. The answer isn't to reject the community model entirely. The answer is to build a better one. One that protects the people who do the creative work while still enabling collaboration, extension, and community innovation.

Can this scale beyond one person's conviction? Maybe. Maybe not right away. But I believe the most persuasive argument isn't a manifesto. It's a portfolio. If I can ship products that prove this model works, that communities can thrive around open surfaces without needing access to sealed cores, then the model speaks for itself.

Nobody asked me to solve this problem. But I see it. And I think the solution is worth pursuing.

PS> You might notice that this post uses progressive depth. To learn more about what that is, read this post on ByteQuilt's blog.
