Since security exploits can now be found by spending tokens, open source is MORE valuable: open source libraries can share that auditing budget, while closed-source software has to find all its exploits itself, in private.
> If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
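To put toy numbers on the shared-auditing point — a minimal sketch where every figure is invented: finding each latent bug costs a fixed number of tokens, and an open source library's audit spend can be pooled across its downstream users, while a closed-source vendor pays alone.

```python
# Toy model: cost per party to fund a full audit, with and without pooling.
# All numbers are made up for illustration.

def tokens_per_user(latent_bugs: int, cost_per_bug: int, users: int) -> float:
    """Tokens each funding party must contribute to find every latent bug."""
    total = latent_bugs * cost_per_bug
    return total / users  # the audit bill, split across funding parties

# Closed source: one vendor shoulders the whole audit alone.
closed = tokens_per_user(latent_bugs=50, cost_per_bug=1_000_000, users=1)

# Open source: the same audit, pooled across 500 downstream users.
shared = tokens_per_user(latent_bugs=50, cost_per_bug=1_000_000, users=500)

print(f"closed-source vendor pays {closed:,.0f} tokens")
print(f"each OSS user pays        {shared:,.0f} tokens")
```

Same total spend either way; pooling only changes who pays it, which is the whole argument.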
An app like Cal.com can be vibe coded in a few evenings with a Chrome MCP server pointed at their website to figure out all the nooks and crannies. The moat of Cal.com is not the code, it's the users who don't want to migrate.
The real answer is they are likely having a hard time converting people to paid plans.
"AI slop is rapidly destroying the WWW; most of the content is becoming lower and lower quality, and it's difficult to tell if it's true or hallucinated. Pre-AI web content is now more like the gold standard in terms of correctness; browsing the Internet Archive is much better. This will only push content behind paywalls. A lot of open-source projects will go closed source, not only because of the increased work maintainers have to do to review and audit patches for potential AI hallucinations, but also because their work is being used to train LLMs and re-licensed as proprietary."
This conclusion makes more sense to me, but maybe I'm too naive.
The media momentum of this threat really came with Mythos, which was like 2 or 3 weeks ago? That seems like a fairly short time to pivot your core principles like that. It sounds to me like they wanted to do this for other business-related reasons, and have now found an excuse they can sell to the public. (I might be very wrong here.)
> to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
That can't be right, can it? Given stable software, the relative attack surface keeps shrinking. Mythos does not produce exploits. Should be the defenders' advantage, token-wise, no?
So long as that OSS keeps accumulating features, there isn't quite the equilibrium you're imagining. If you can pin to a stable version, which continues to be audited, you're fine. But if the rest of the world moves on to newer versions of the software, you'll have to as well, unless you want to own the burden of hardening older versions.
> to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them
This is true up to a certain point, unless the requirement / contract itself has a loophole which the attacker can exploit without limit. But I don't think that's the case here.
Let's say someone found a loophole in sort() which can cause a denial of service. The cause would be the implementation itself, not the contract of sorting. People + AI will figure it out and fix it eventually.
It also means that you need to extract enough value to cover the cost of said tokens, or reduce the economic benefit of finding exploits.
Reducing economic benefit largely comes down to reducing distribution (breadth) and reducing system privilege (depth).
One way to reduce distribution is to raise the price.
Another is to make a worse product.
Naturally, less valuable software is not a desirable outcome. So either you reduce the cost of keeping open (by making closed), or increase the price to cover the cost of keeping open (which, again, also decreases distribution).
The economics of software are going to massively reconfigure in the coming years, open source most of all.
I suspect we'll see more 'open spec' software, with actual source generated on-demand (or near to it) by models. Then all the security and governance will happen at the model layer.
> I suspect we'll see more 'open spec' software, with actual source generated on-demand (or near to it) by models. Then all the security and governance will happen at the model layer.
So each time you roll the dice you gamble on getting a fresh set of 0-days? I don't get why anyone would want this.
You already do this with human-authored code, just slowly.
Project model capabilities out a few years. Even if you only assume linear improvement at some point your risk-adjusted outcome lines cross each other and this becomes the preferred way of authoring code - code nobody but you ever sees.
Most enterprises already HATE adopting open source. They only do it because the economic benefit of free reuse has traditionally outweighed the risks.
If you need a parallel: we already do this today for JIT compilers. Everything is just getting pushed down a layer.
This seems similar to the lesson learned for cryptographic libraries where open source libraries vetted by experts become the most trusted.
Your average open source library isn’t going to get that scrutiny, though. It seems like it will result in consolidation around a few popular libraries in each category?
An important difference between SaaS offerings and open source libraries is that the latter have no liability. They can much more easily afford to exhibit vulnerabilities until those are fixed.
This may be true long term but not short term. It also assumes that white hats will be as motivated as black hats – not true.
For projects with NO WARRANTY, the risk is minimal, so yes there are upsides.
For a commercial project like cal.com, where a breach means massive liability, they don’t have the resources to risk breaches in the short term for potentially better software in the long term.
I expect we're about to find that it's a lot easier to convince a company to spend money running an AI security scan of their dependencies and sharing the results with the maintainers than it is to have them give those maintainers money directly.
(I just hope they can learn to verify the exploits are valid before sharing them!)
> SAN FRANCISCO – March 17, 2026 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced $12.5 million in total grants from Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI to strengthen the security of the open source software ecosystem.
This seems kind of crazy. If LLMs are so stunningly good at finding vulnerabilities in code, then shouldn't the solution be to run an LLM against your code after you commit, and before you release it? Then you basically have pentesting harnesses all to yourself before going public. If an LLM can't find any flaws, then you are good to release that code.
A few years ago, I invoked Linus's Law in a classroom, and I was roundly debunked. Isn't it a shame that it's basically been fulfilled now with LLMs?
After a release, attackers have effectively infinite time to throw an LLM against every line of your code - an LLM that only gets smarter and cheaper to run as time passes. In order to feel secure you’d need to do all the work you’d imagine an attacker would ever do, for every single release you ship.
The first few times it's going to be expensive, but once everyone level-sets with intense scans of their codebases, "every single release" is actually not that big a deal, since you are not likely to be completely rebuilding your codebase every release.
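For what it's worth, the incremental version of that scan is straightforward to wire up: hash each source file and only re-audit the ones that changed since the last run. A sketch, assuming Python sources, with the actual audit call left as a placeholder:

```python
# Re-audit only files whose contents changed since the last scan.
import hashlib
import json
import pathlib

STATE = pathlib.Path(".audit_state.json")  # digest cache (name is arbitrary)

def file_digest(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(root: str) -> list[pathlib.Path]:
    """Return files under `root` that changed since the previous call."""
    seen = json.loads(STATE.read_text()) if STATE.exists() else {}
    changed = []
    for path in sorted(pathlib.Path(root).rglob("*.py")):
        digest = file_digest(path)
        if seen.get(str(path)) != digest:
            changed.append(path)  # new or modified: needs a fresh audit
            seen[str(path)] = digest
    STATE.write_text(json.dumps(seen))
    return changed

# for path in changed_files("src"):
#     run_llm_audit(path)  # placeholder for whatever scanner you use
```

The first run audits everything; every run after that only pays for the diff.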
LLMs really are stunningly good at finding vulnerabilities in code, which is why, with closed-source code, you can and probably will use them to make your code as secure as possible.
But you won't keep the doors open for others to use them against it.
So it is, unfortunately, understandable in a way...
I'm not a security expert, but can't closed source applications be vulnerable and exploited too? I feel like using closed source as a defense is just giving you a false sense of security.
What is being framed as obscurity is one of the valid approaches to security, as long as you are able to keep the code safe. Your passwords and security keys are just random combinations of strings; the fact that they are hidden from everyone is what provides the security.
Finding a vulnerability in a black box is drastically different from finding one in a white box. This isn’t about whether there is a vulnerability or not, but about the likelihood of it being found.
Every change would introduce the possibility of a vulnerability being added to the system, and one would need to run the LLM scan across the entire code base. It gets very costly in an environment where you are doing regular commits. Companies like GitHub already provide scanning tools for static analysis, and the cost is already high for them.
Attackers only need LLMs to be good at randomly finding one vulnerability, whereas service providers need them to be good at finding all such vulnerabilities.
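The asymmetry is easy to put rough numbers on. Assuming (my figures, not the thread's) a codebase ships with n independent bugs and a single scan finds any given bug with probability p:

```python
# Back-of-envelope model of the find-one vs find-all asymmetry.
n, p = 20, 0.7  # invented: 20 latent bugs, 70% find rate per bug per scan

defender_clean = p ** n             # defender's scan caught every bug
attacker_scores = 1 - (1 - p) ** n  # attacker's scan found at least one

print(f"defender catches everything: {defender_clean:.3%}")
print(f"attacker finds at least one: {attacker_scores:.3%}")
```

Even with a scan that finds each individual bug 70% of the time, catching all of them in one pass is a sub-0.1% event, while the attacker almost surely walks away with something.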
It's not a "project" though; the business Cal.com Inc raised that VC money. Their open source repo did not raise the money.
Did they ever promise to keep their codebase FOSS forever, in a way that differs from what they're already doing over at cal.diy? If not, I don't see why it would be reasonable to expect them to spend a huge amount of money re-scanning on every single commit/deploy in order to keep their non-"DIY" product open source.
I mean, you should definitely have _some_ level of audit by LLMs before you ship, as part of the general PR process.
But you might need thousands of sessions to uncover some vulnerabilities, and you don't want to stop shipping changes because the security checks are taking hours to run.
It's entirely possible to address all the LLM-found issues and get an "all green" response, and have an attacker still find issues that your LLM did not. Either they used a different model, a different prompt, or spent more money than you did.
It's not a symmetric game, either. On defense, you have to get lucky every time - the attacker only has to get lucky once.
Hey cal.com, as a potential customer, you have just lost me.
Open source is set to profit from improved transparency in the SSDLC. With closed source, you will have to trust the software vendor instead.
I'm not sure I agree with Drew Breunig, however. The number of bugs isn't infinite. Once we have models that are capable enough and scan the source code with them at regular intervals, the likelihood of remaining bugs that can be exploited goes way down.
I think people are finding ways to enable "pro" features, or at least find the right extension points to implement them easily with LLMs. Security is window dressing.
Yeah, I don't buy it. If they don't want these security reports, ignore them and continue on your path. Blaming AI is just an excuse to close the source. If you don't want AI to learn from your code, too late. Add genetic algorithms and fuzzing into AI and it can iterate and learn a billion times faster, no need to learn from humans.
The real downside to Google's solution is that you have to use Google Meet. Depending on your opinion of Meet, this is either no big deal or a total deal breaker.
Security through obscurity is only problematic if that is the only, or a primary, layer of defense. As an incremental layer of deterrence or delay, it is an absolutely valid tactic. (Note, not commenting on whether that is the rationale here.)
That, and plenty of closed-source software at least has a decent security track record by now. I haven't seen an obvious cause-and-effect of open-source making something more secure. Only the other direction, where insecure closed-source software is kept closed because they know it's Swiss cheese.
Going closed source is making the branch secret/private, not making it obscure. Obscurity would be zipping up the open source code (without a password) and leaving it online; obscurity just means it takes additional steps to recover the information. Your passwords are not obscure strings of characters, they are secrets.
If there is a self-hosted version at all, then the compiled form is out there to be analysed. While compilation and other forms of code transformation are not 1-to-1, trivially reversed operations, they are much closer to bad password security (symmetric encryption or worse) than to good (proper hashing with salting/peppering/etc). Heck, depending on the languages/frameworks/tooling used, the code may be hardly compiled or otherwise transformed at all in its distributed form. Tools to aid decompiling and such have existed for practically as long as their forward processes have, so I would say this is still obscurity rather than any higher form of protection.
Even if the back-end is never fully distributed, any front-end code obviously has to be. And even if that contains minimal logic, perhaps little more than navigation & validation to avoid excess UA/server round-trip latency, the inputs & outputs are still easily open to investigation (by humans, humans with tools, or more fully automated methods). So by closing source you've only protected yourself from a small subset of vulnerability-discovery techniques.
This is all especially true if your system was recently more completely open, unless a complete clean-room rewrite is happening in conjunction with this change.
Right, but those capabilities are available to you as well. Granted the remediation effort will take longer but...you're going to do that for any existing issues _anyway_ right?
I understand why this is a tempting thing to do in a "STOP THE PRESSES" manner where you take a breather and fix any existing issues that snuck through. I don't yet understand why when you reach steady-state, you wouldn't rely on the same tooling in a proactive manner to prevent issues from being shipped.
And if you say "yeah, that's obv the plan," well then I don't understand what going closed-source _now_ actually accomplishes with the horses already out of the barn.
Or internalize the cost if they all decide the hassle of maintaining an open source project is not worth it any more.
I'm not aiming this reply at you specifically; it's the general dynamic of this crisis. The real answer is for the foundational model providers to give this money. But instead, at least one seems to care more about acquiring critical open source companies.
We should openly talk about this - the existing open source model is being killed by LLMs, and there is no clear replacement.
I don't think this really helps that much. Your neighbor could ask an LLM to decompile your binaries, and then run security analysis on the results.
If the tool correctly says you've got security issues, trying to hide them won't work. You still have the security issues and someone is going to find them.
If I understand correctly, their primary product is SaaS, and their non-DIY self-host edition is an enterprise product. So your neighbor wouldn't have access to the binaries to begin with.
It only takes 20 minutes and $200 to hack a closed source one too though. LLMs are ludicrously good at using reverse engineering tools and having source available to inspect just makes it slightly more convenient.
Very true, but that is still a meaningfully higher cost at scale. If, as people are postulating post-Mythos, security comes down to which side spends more tokens, it is a valid strategy to impose asymmetric costs on the attacker.
Couldn't you just spend those $100 on claude code credits yourself and make sure you're not shipping insecure software? Security by obscurity is not the correct model (IMO)
> neighbor's son, 15 mins and $100 claude code credits
Is that true? Didn't the Mythos release say they spent $20k? I'm also skeptical of Anthropic here doing essentially what amounts to "vague posting" in an attempt to scare everyone and drive up their value before IPO.
What's preventing cal.com from running the AI researcher over their own codebase, finding their vulnerabilities before anyone else, and patching them all by tomorrow morning?
We did consider arguments in both directions (e.g. easier to recreate the code, agents can understand better how it works), but I honestly think the security argument goes for open source: the OSS projects will get more scrutiny faster, which means bugs won't linger around.
Time will tell, I am in the open source camp, though.
Proposition 1: The majority of the code in a modern app is from shared libraries
Proposition 2: The most popular shared libraries are going to be quickly torn apart by LLM security tools to find vulnerabilities
Proposition 3: After a brief period of mass vulnerability discovery, the overall quality of shared libraries will dramatically increase.
Conclusion: After the initial wave of vulnerabilities has passed, the main threat to open source code bases is in their own comparatively small amount of code.
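Proposition 1 is easy to sanity-check against your own repo. A rough sketch — the directory names are assumptions, so adjust them for your layout (vendor/, .venv, etc.):

```python
# Compare first-party lines of code against dependency lines of code.
import pathlib

def loc(root: str, suffixes=(".py", ".js", ".ts")) -> int:
    """Count lines of code under `root` for the given file suffixes."""
    return sum(
        len(p.read_text(errors="ignore").splitlines())
        for p in pathlib.Path(root).rglob("*")
        if p.is_file() and p.suffix in suffixes
    )

own = loc("src")            # first-party code (path is an assumption)
deps = loc("node_modules")  # vendored dependencies (likewise)
total = own + deps
if total:
    print(f"dependencies are {deps / total:.0%} of the code")
```

On most web apps the dependency share dwarfs the first-party share, which is what makes the conclusion above plausible.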
I have fond memories of this project. Contributing to it really helped me ramp up my dev skills and was effectively my introduction to monorepos in JavaScript. It was the kind of codebase I couldn't get my hands on while working in my part of the world. Good luck going closed source.
Security through obscurity can be a good security layer, but you need to maintain obscurity. That's a lot harder than Cal.com seems to realize.
For example using something like Next.js means a very large chunk of important obscurity is thrown out the window. The same for any publicly available server/client isomorphic framework.
This certainly makes me feel better about the project I started a few months ago to replace my Cal.com instance with a smaller, simpler self-hosted tool.
I know plenty of security researchers who exclusively use Claude Code and other tools for blackbox testing against sites they don’t have the source code for. It seems like shutting down the entire product is the only safe decision here!
The founder proclaimed "Open Source is Dead" in the original tweet.
I thought this was grandiose and projecting their own weakness onto others, an extremely unappealing marketing position that may get clicks in the short term but will undermine trust beyond that.
The real threat is not security but bad actors copying your code and calling it theirs.
IMHO, open source will continue to exist and it will be successful, but the existence of AI is a deterrent for most. Let's be honest: in recent times, the only reason startups went open source first was to build a community and an organic growth engine powered by early adopters. Now this is no longer viable, and in fact it is simply helping competitors. So why do it then?
The only open source that will remain will be the real open source projects that are true to the ethos.
I agree with you that AI's disruption of attribution is a much bigger problem, but it's also worth recognizing that not everyone has this same motivation. It mostly affects copyleft open source licenses.
Attribution isn't required for many permissive open source licenses. Dependencies with those licenses will oftentimes end up inside closed source software. Even if there isn't FOSS in the closed-source software, basically everyone's threat model includes (or should include) "OpenSSL CVE". On that basis, I doubt Cal is accomplishing as much as they hope to by going closed source.
Sounds like "security by obscurity" to me. If you think AI is so good at finding security issues, it will find them in compiled code as well. Why not use it in your favor and let it search for bugs you'd otherwise not find?
You can lock down the source and also use AI to look for bugs in it. It does take significantly more time and money for AI to find bugs in compiled code.
That said, I agree with another commenter that this seems like more of a business decision than a security one.
I am beyond convinced at this point that you either run an open source project as a small-revenue company (single-digit millions), or you run a closed-source software company doing at least 10M ARR. I know there are exceptions, but most open source software companies are providing code with heavy restrictions or teaser features, and gatekeep everything in their "ee/enterprise" version etc.
Today, it's easy to (publicly) evaluate the ability of LLMs to find bugs in open source codebases, because you don't need to ask permission. But this doesn't actually tell us the negative statement, which is that an LLM won't just as effectively find bugs in closed codebases, including through black-box testing, reverse engineering, etc.
If the null hypothesis is that LLMs are good at finding bugs, full stop, then it's unclear to me that going closed actually does much to stop your adversary (particularly as a service operator).
There are endless closed calendar options. Cal.com being FOSS and not making us feel locked in forever was the only reason we chose it over wasting limited cycles self hosting this at Distrust and Caution.
AI can clone something like cal.com with or without source code access, so in trying to pointlessly defend against AI they are just ruining the trust they built with their customers, which is the one thing AI can never create out of thin air.
We exclusively run our companies with FOSS software we can audit or change at any time because we work in security research so every tool we choose is -our- responsibility.
They ruined their one and only market differentiator.
We will now be swapping to self hosting ASAP and canceling our subscriptions.
Really disappointing.
Meanwhile at Distrust and Caution we will continue to open source every line of code we write, because our goal is building trust with our customers and users.
Juxtapose this with the fact that many HNers will decry strong copyleft FOSS licenses as not being truly "open source" - the reality is that closed source software is still full of open-source non-copyleft dependencies. Unless you're rolling your own encryption and TCP stack, being closed source will not be the easy solution that many imagine it to be.
In my advisory job founders always raise the question about open sourcing within the first hour of meeting me. They think that open sourcing product means transparency and developer trust which helps with early adoption. Every single founder I talked to brings up open source as a market penetration method to drive the initial adoption.
I always say to just stop with the virtue signaling led sales technique.
I despise the "we are like the market leader of our niche but open source" angle. Developers as buyers and as a community these days, in my opinion, do not care about open source anymore. There is no long-term value to it. The moment a product gets traction, the open source element is a constant mild headache, because an open source product means they have no intellectual property over the core aspect of the product, and it is hard to raise money or sell the company. And whenever a product gets traction, they will take any excuse to make it closed source again. With an open source product they are just coasting on brand. Regardless of what your personal opinion is, this has been largely true for most for-profit businesses.
Open source is largely nothing more than a branding concept for a company that is backed by investors.
Monumentally dumb given their codebase is already public and the type of security issues that exist in software are usually found in the oldest code. But also, and more importantly, cal.com launched coss.com last year, open source is (ostensibly) their DNA. How could they do a complete 180 on something so fundamental and think that wouldn’t worry customers, much more so than their codebase being public? I cannot even begin to understand this. Surely there must be more to the story?
Coss.com reads like a half assed pivot if you look at it with today's news. It's clear cal.com isn't making enough money and going closed source is yet another attempt to fix that.
Oh wow the coss.com thing makes this so much worse. Making such an aggressive and public commitment to open source to then turn around and do something like this is a pretty rough look.
- You know, Lindsay, as a software engineering consultant, I have advised a number of companies to explore closing their source, where the codebase remains largely unchanged but secure through obscurity.
- Well, did it work for those companies?
- No, it never does. I mean, these companies somehow delude themselves into thinking it might, but... but it might work for us.
TIL about yet another calendar application I don't need. Someone should set up their openclaw to just write a new todo/calendar app each week; they'll be billionaires by the end of the year.
This is some truly exceptionally clownish attention seeking nonsense. The rationale here is complete nonsense, they just wanted to put "because AI" after announcing their completely self-serving decision. If AI cyber offense is such a concern, recognize your role as a company handling truckloads of highly sensitive information and actually fix your security culture instead of just obscuring it.
I mean it's not complete nonsense, but yeah, doing it for security reasons sounds like BS. I actually thought this was going to be about how AI makes it super easy for someone to steal all their code and fold it into their own competing project. I've seen a few open source projects get sideswiped by this, AI is pretty good at copying code (and obfuscating the fact that it was copied). I suspect that's the real reason but it doesn't sound as good. So they went with this half-truth.
I hate how this sounds... but this reads to me as "we lack confidence in our code security, so we're closing the source code to conceal vulnerabilities which may exist."
Risk tolerance and emotional capacity differs from one individual to another, while I may disagree with the decision I am able to respect the decision.
That said, I think it’s important to try and recognize where things are from multiple angles rather than bucket things from your filter bubble alone, fear sells and we need to stop buying into it.
This is the future now that AI is here. Publishing is going to be dead, look at the tea leaves, how many engineers are claiming they don’t use package managers anymore and just generate dependencies? 5 years and no one will be making an argument for open source or blogging.
> if AI can be pointed and find vulnerabilities then do it yourself before publishing the code
At your cost.
Every time you push. (or if not that, at least every time there is a new version that you call a release)
Including every time a dependency updates, unless you pin specific versions.
I assume (caveat: I've not looked into the costs) many projects can't justify that.
Though I don't disagree with you that this looks like a commercial decision with “LLM based bug finders could find all our bad code” as an excuse. The lack of confidence in their own code while open does not instil confidence that it'll be secure enough to trust now closed.
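One cheap way to handle the "every time a dependency updates" case is to key the re-audit off the lockfile itself: hash it, and only trigger a new scan when the hash changes. A sketch with assumed filenames:

```python
# Trigger a dependency re-audit only when the lockfile actually changes.
import hashlib
import pathlib

def lockfile_changed(lockfile="requirements.txt", marker=".last_audited") -> bool:
    """True if the lockfile differs from the last recorded snapshot."""
    digest = hashlib.sha256(pathlib.Path(lockfile).read_bytes()).hexdigest()
    marker_path = pathlib.Path(marker)
    if marker_path.exists() and marker_path.read_text() == digest:
        return False  # same dependency set as last time, skip the scan
    # In real use you'd only record the digest after the audit passes.
    marker_path.write_text(digest)
    return True
```

With pinned versions this stays False for months at a time, which is exactly the cost argument for pinning.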
For-profit companies using open-source software should bear that cost - that's my position.
I believe that N companies using an open source project and contributing back would make this burden smaller than one company using the same closed-source project.
Open-source supporters don't have a sustainable answer to the fact that AI models can easily find N-day vulnerabilities extremely quickly and swamp maintainers with issues and bug-reports left hanging for days.
Unfortunately, this is where it is going, and open-source software supporters did not foresee the downsides of open source maintenance in the age of AI, especially for businesses with "open-core" products.
Might as well close-source them to slow the attackers (with LLMs) down. Even SQLite has closed-sourced their tests which is another good idea.
The tools are available to everyone. It's becoming easier for hackers to attack you at the same speed that it's becoming easier for you to harden your systems. When everyone gains the same advantage at the same time, nothing has really changed.
It makes me think of how great chess engines have affected competitive chess over the last few years. Sure, the ceiling for Elo ratings at the top levels has gone up, but it's still a fair game because everyone has access to the new tools. High-level players aren't necessarily spending more time on prep than they were before; they're just getting more value out of the hours they do spend.
I agree it's a shit tactic, but one thing I can say for those running software businesses is that it's not an equivalent linear increase on both sides. It's asymmetric, because the number of attackers and the amount of attack surface (exposed 3rd-party dependencies, for example) are near infinite, with no opportunity cost for failure by the bad actors (hackers). However, a single failure can bring down a company, particularly when it may be hosting sensitive user data that could ruin its customers' businesses or lives.
I think Cal are making the wrong call, and abandoning their principles. But it isn't fair to say the game is accelerating in a proportionate way.
Ultimately, he concludes that while in the short run the game defines the players' actions, an environment that makes cooperation too risky naturally forces participants to stop cooperating to protect themselves from being "exploited" (this bit is around 34:39 - 34:46)
Sure, I can see that to a degree. And there definitely is a bit of chaos during the transition period as everyone scrambles to figure out what the landscape looks like now. I could understand if they decided to temporarily do less-frequent code releases, or maybe release their code on a delay or something, while they wait for the dust to settle. But I don't think permanently ending open source development is the right move.
Agreed! There must be a way to maintain the principles and benefits of open-source; the alternative, which is that all software becomes a black box, is antithetical to the same security that that choice supposedly aims to achieve.
I think companies make decisions like this from a tactics level, not realizing that by doing so they are not only alienating their customers but misunderstanding the basic (often unconscious or unspoken) social contract upon which their very existence is predicated.
Calendly already existed. Cal came along and said, ok, but what if the code were out in the open -- auditable, self-hostable. Then you wouldn't have to worry about lock-in, security, privacy, etc, in the same way. Now they are removing that entire aspect of their value prop. It may be the only thing that caused a good portion of their customers to adopt in the first place.
> If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
* a la https://news.ycombinator.com/item?id=26998308
That's a very weak moat unless you have something else like the friction of network dependence similar to a social network.
That can't be right, can it? Given stable software, the relative attack surface keeps shrinking. Mythos does not produce exploits. It should be the defenders' advantage, token-wise, no?
Defenders have to find all the holes in all their systems, while attackers just need to find one hole in one system.
AI in general will, don't worry. "Move fast and break things" makes more exploits than "move steadily and fix things" does.
This is true up to a point, unless the requirement/contract itself has a loophole the attacker can exploit without limit. But I don't think that's the case here.
Say someone found a loophole in sort() that can cause a denial of service. The cause would be the implementation itself, not the contract of sorting. People plus AI will figure it out and fix it eventually.
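To make that concrete, here is a minimal sketch (not any real library's code; the pivot choice and input sizes are purely illustrative) of how a sort can honor its contract yet still invite an algorithmic-complexity DoS. A quicksort that always picks the first element as pivot sorts correctly, but an attacker who submits already-sorted input forces quadratic work:

```python
import random

def quicksort_comparisons(xs):
    """Count element comparisons made by a quicksort that always
    picks the first element of each partition as its pivot.
    Uses an explicit stack so worst-case input can't blow the
    Python recursion limit."""
    count = 0
    stack = [list(xs)]
    while stack:
        part = stack.pop()
        if len(part) <= 1:
            continue
        pivot, rest = part[0], part[1:]
        count += len(rest)  # each element is compared against the pivot once
        stack.append([x for x in rest if x < pivot])
        stack.append([x for x in rest if x >= pivot])
    return count

n = 1000
shuffled = random.sample(range(n), n)
attacker_input = list(range(n))  # already sorted: worst case for this pivot

print(quicksort_comparisons(shuffled))        # roughly n log n territory
print(quicksort_comparisons(attacker_input))  # n*(n-1)//2 = 499500, i.e. O(n^2)
```

The contract ("return the elements in order") is satisfied either way; the vulnerability lives entirely in the implementation detail of pivot selection, which is exactly why it's fixable without changing the API.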
It also means that you need to extract enough value to cover the cost of said tokens, or reduce the economic benefit of finding exploits.
Reducing economic benefit largely comes down to reducing distribution (breadth) and reducing system privilege (depth).
One way to reduce distribution is to raise the price.
Another is to make a worse product.
Naturally, less valuable software is not a desirable outcome. So either you reduce the cost of keeping open (by making closed), or increase the price to cover the cost of keeping open (which, again, also decreases distribution).
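As a back-of-envelope sketch of that trade-off (every number below is hypothetical, chosen only to make the inequalities concrete, not drawn from any real pricing):

```python
# Hypothetical figures, purely illustrative.
defender_token_spend = 30_000        # spent auditing/discovering exploits first
attacker_token_spend = 20_000        # what an attacker is willing to burn
exploit_value_to_attacker = 50_000   # payoff from one working exploit

# The attack only makes economic sense if the payoff beats the spend.
attack_is_rational = exploit_value_to_attacker > attacker_token_spend

# The "harden" condition quoted above: out-spend the attackers on discovery.
system_is_hardened = defender_token_spend > attacker_token_spend

# The two levers named above, reducing breadth (distribution) or depth
# (system privilege), both work by shrinking exploit_value_to_attacker
# until attack_is_rational flips to False.
print(attack_is_rational, system_is_hardened)
```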
The economics of software are going to massively reconfigure in the coming years, open source most of all.
I suspect we'll see more 'open spec' software, with actual source generated on-demand (or near to it) by models. Then all the security and governance will happen at the model layer.
So each time you roll the dice you gamble on getting a fresh set of 0-days? I don't get why anyone would want this.
Project model capabilities out a few years. Even if you only assume linear improvement, at some point your risk-adjusted outcome lines cross and this becomes the preferred way of authoring code: code nobody but you ever sees.
Most enterprises already HATE adopting open source. They only do it because the economic benefit of free reuse has traditionally outweighed the risks.
If you need a parallel: we already do this today for JIT compilers. Everything is just getting pushed down a layer.
Your average open source library isn’t going to get that scrutiny, though. It seems like it will result in consolidation around a few popular libraries in each category?
For projects with NO WARRANTY, the risk is minimal, so yes there are upsides.
For a commercial project like cal.com, where a breach means massive liability, they don’t have the resources to risk breaches in the short term for potentially better software in the long term.
(I just hope they can learn to verify the exploits are valid before sharing them!)
I might like to live there.
https://openssf.org/tag/google
"But that's Linux; how do small libraries get an audit budget..." Fortunately, LLMs have eliminated the need to have small libraries in your dependency chain.
I'd give them more credit if they used the AI-slop unmaintainability argument.
Our scheduling tool, Thunderbird Appointment, will always be open source.
Repo here: https://github.com/thunderbird/appointment
Come talk to us and build with us. We'll help you replace Cal.com
A few years ago, I invoked Linus's Law in a classroom, and I was roundly debunked. Isn't it a shame that it's basically been fulfilled now with LLMs?
https://en.wikipedia.org/wiki/Linus%27s_law
But you won't keep the doors open for others to use them against it.
So it is, unfortunately, understandable in a way...
LLMs, and tools built to use them, are violating a lot of assumptions these days.
Did they ever promise to keep their codebase FOSS forever, in a way that differs from what they're already doing over at cal.diy? If not, I don't see why it would be reasonable to expect them to spend a huge amount of money re-scanning on every single commit/deploy in order to keep their non-"DIY" product open source.
But you might need thousands of sessions to uncover some vulnerabilities, and you don't want to stop shipping changes because the security checks are taking hours to run.
It's not a symmetric game, either. On defense, you have to get lucky every time - the attacker only has to get lucky once.
This! I love OSS but this argument seems to get overlooked in most of the comments here.
I'm not sure I agree with Drew Breunig, however. The number of bugs isn't infinite. Once we have models that are capable enough and scan the source code with them at regular intervals, the likelihood of remaining bugs that can be exploited goes way down.
I feel like with AI, self-hosting software reliably is becoming easier so the incentives to pay for a hosted service of an OSS project are going down.
Wanna sack a load of staff? - AI
Wanna cut your consumer products division? - AI
Wanna take away the source? - AI
It has always been odd to me they didn’t have this functionality years ago. It’s been requested for a long long time
Even if the back-end is never fully distributed, any front-end code obviously has to be. And even if that contains minimal logic (perhaps little more than navigation and validation to avoid excess UA/server round-trip latency), the inputs and outputs are still easily open to investigation, by humans, humans with tools, or more fully automated methods. So by closing the source you've only protected yourself from a small subset of vulnerability-discovery techniques.
This is all especially true if your system was recently more completely open, unless a complete clean-room rewrite is happening in conjunction with this change.
I understand why this is a tempting thing to do in a "STOP THE PRESSES" manner where you take a breather and fix any existing issues that snuck through. I don't yet understand why when you reach steady-state, you wouldn't rely on the same tooling in a proactive manner to prevent issues from being shipped.
And if you say "yeah, that's obv the plan," well then I don't understand what going closed-source _now_ actually accomplishes with the horses already out of the barn.
Give him $100 to obtain that capability.
Give each open source project maintainer $100.
Or internalize the cost if they all decide the hassle of maintaining an open source project is not worth it any more.
I'm not aiming this reply at you specifically; it's the general dynamic of this crisis. The real answer is for the foundational model providers to give this money. But instead, at least one seems to care more about acquiring critical open source companies.
We should openly talk about this - the existing open source model is being killed by LLMs, and there is no clear replacement.
If the tool correctly says you've got security issues, trying to hide them won't work. You still have the security issues and someone is going to find them.
You can keep the untested branch closed if you want to go with “cathedral” model, even.
Is that true? Didn't the Mythos release say they spent $20k? I'm also skeptical of Anthropic here doing what essentially amounts to "vague posting" in an attempt to scare everyone and drive up their value before IPO.
To what end? You can just look at the code. It's right there. You don't need to "hack" anything.
If you want to "hack on it", you're welcome to do so.
Would you like to take a look at some of my open-source projects your neighbour's kid might like to hack on?
That's right. Nothing.
We did consider arguments in both directions (e.g. easier to recreate the code, agents can understand better how it works), but I honestly think the security argument goes for open source: the OSS projects will get more scrutiny faster, which means bugs won't linger around.
Time will tell, I am in the open source camp, though.
Proposition 2: The most popular shared libraries are going to be quickly torn apart by LLM security tools to find vulnerabilities
Proposition 3: After a brief period of mass vulnerability discovery, the overall quality of shared libraries will dramatically increase.
Conclusion: After the initial wave of vulnerabilities has passed, the main threat to open source code bases is in their own comparatively small amount of code.
For example using something like Next.js means a very large chunk of important obscurity is thrown out the window. The same for any publicly available server/client isomorphic framework.
https://git.sr.ht/~bsprague/schedyou
Open Source Isn't Dead - https://news.ycombinator.com/item?id=47780712
Cybersecurity looks like proof of work now - https://news.ycombinator.com/item?id=47769089
I thought this was grandiose and projecting their own weakness onto others, an extremely unappealing marketing position that may get clicks in the short term but will undermine trust beyond that.
IMHO, open source will continue to exist and it will be successful, but the existence of AI is a deterrent for most. Let's be honest: in recent times, the only reason startups went open source first was to build a community and an organic growth engine powered by early adopters. Now this is no longer viable, and in fact it simply helps competitors. So why do it?
The only open source that will remain will be the real open source projects that are true to the ethos.
Attribution isn't required for many permissive open source licenses. Dependencies with those licenses will oftentimes end up inside closed source software. Even if there isn't FOSS in the closed-source software, basically everyone's threat model includes (or should include) "OpenSSL CVE". On that basis, I doubt Cal is accomplishing as much as they hope to by going closed source.
How has this changed?
That said, I agree with another commenter that this seems like more of a business decision than a security one.
It seems like an easy decision, not a difficult one.
If the null hypothesis is that LLMs are good at finding bugs, full stop, then it's unclear to me that going closed actually does much to stop your adversary (particularly as a service operator).
AI can clone something like cal.com with or without source code access, so by pointlessly trying to defend against AI they are just ruining the trust they built with their customers, which is the one thing AI can never create out of thin air.
We exclusively run our companies with FOSS software we can audit or change at any time because we work in security research so every tool we choose is -our- responsibility.
They ruined their one and only market differentiator.
We will now be swapping to self hosting ASAP and canceling our subscriptions.
Really disappointing.
Meanwhile at Distrust and Caution we will continue to open source every line of code we write, because our goal is building trust with our customers and users.
https://news.ycombinator.com/item?id=47780712
I always say to just stop with the virtue signaling led sales technique.
I despise the "we are like the market leader of our niche, but open source" angle. Developers, as buyers and as a community, in my opinion no longer care about open source these days. There is no long-term value in it. The moment a product gets traction, the open source element becomes a constant mild headache: it means the company holds no intellectual property over the core of the product, which makes it hard to raise money or sell the company. And whenever a product gets traction, they will take any excuse to make it closed source again. With an open source product they are just coasting on brand. Regardless of your personal opinion, this has been largely true for most for-profit businesses.
Open source is largely nothing more than a branding concept for companies backed by investors.
This post's argument seems circular to me.
- Well, did it work for those companies?
- No, it never does. I mean, these companies somehow delude themselves into thinking it might, but... but it might work for us.
That said, I think it’s important to try and recognize where things are from multiple angles rather than bucket things from your filter bubble alone, fear sells and we need to stop buying into it.
At your cost.
Every time you push (or, if not that, at least every time there is a new version that you call a release).
Including every time a dependency updates, unless you pin specific versions.
I assume (caveat: I've not looked into the costs) many projects can't justify that.
Though I don't disagree with you that this looks like a commercial decision with “LLM based bug finders could find all our bad code” as an excuse. The lack of confidence in their own code while open does not instil confidence that it'll be secure enough to trust now closed.
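For illustration, the pinning mentioned above might look like this in a Python project (the package and version are just an example of the format, not a recommendation):

```
# requirements.txt, floating: any new upstream release lands on your
# next install, so every dependency update reopens the audit question
requests

# requirements.txt, pinned: the version you audited can't silently change
requests==2.31.0
```

The trade-off is that pinning freezes the audit target but also freezes you out of upstream security fixes until you deliberately bump and re-audit.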
I believe that N companies using an open source project and contributing back would make this burden smaller than for one company using the same closed-source project.
Great move.
Open-source supporters don't have a sustainable answer to the fact that AI models can easily find N-day vulnerabilities extremely quickly and swamp maintainers with issues and bug-reports left hanging for days.
Unfortunately, this is where it is going, and open-source software supporters did not foresee the downsides of open source maintenance in the age of AI, especially for businesses with "open-core" products.
Might as well close-source them to slow the attackers (with LLMs) down. Even SQLite has closed-sourced their tests which is another good idea.
It makes me think of how great chess engines have affected competitive chess over the last few years. Sure, the ceiling for Elo ratings at the top levels has gone up, but it's still a fair game because everyone has access to the new tools. High-level players aren't necessarily spending more time on prep than they were before; they're just getting more value out of the hours they do spend.
I think Cal are making the wrong call, and abandoning their principles. But it isn't fair to say the game is accelerating in a proportionate way.
See: https://www.youtube.com/watch?v=2CieKDg-JrA
Ultimately, he concludes that while in the short run the game defines the players' actions, an environment that makes cooperation too risky naturally forces participants to stop cooperating to protect themselves from being "exploited" (this bit is around 34:39 - 34:46)
I think companies make decisions like this from a tactics level, not realizing that by doing so they are not only alienating their customers but misunderstanding the basic (often unconscious or unspoken) social contract upon which their very existence is predicated.
> Calendly already existed. Cal came along and said, ok, but what if the code were out in the open -- auditable, self-hostable. Then you wouldn't have to worry about lock-in, security, privacy, etc, in the same way.
Then good, that overengineered, intentionally-crippled crap should go away.