Open source is (not) insecure
TL;DR: The idea that open source software is inherently less secure than its closed source or proprietary counterparts is untrue and stems from fear, uncertainty, and doubt (FUD).
Spend any time in the enterprise IT world, and you’ll quickly be led to believe that open source software is insecure, and that it’s only what you use when you can’t afford “real” software. Both claims are false. This is a form of FUD (fear, uncertainty, doubt), historically encouraged by those who sell open source’s proprietary alternatives. When talking about the security of open source, there are two parallel conversations: open source as a platform, and open source as a workflow.
Open source as a platform
The first threshold issue is whether building or using open source software, even if your own particular code isn’t published, is somehow inherently less secure. This argument comes in a few flavors:
This one (high profile) open source project that (once) had a vulnerability
There’s good software and there’s bad software. How widely the code’s shared has no direct impact on its quality (if anything, it helps, see below). An immature project is going to have bugs, whether proprietary or open source.
Oftentimes, the reason open source vulnerabilities make headlines, or seem to make headlines more often, is that the software is so widely used. Your custom-built software being hacked isn’t newsworthy, but the CMS that powers a quarter of the internet is. Not to mention, by being open source, vulnerabilities are more easily discovered and patched, meaning you may hear more often about open source software having unexploited vulnerabilities, while the only closed-source vulnerabilities you hear about are those that have already been exploited in the wild (and thus require immediate action).
It’s (not) made by a bunch of hobbyists
That’s true of some projects, for sure. There are also fly-by-night software companies that sell sub-par, closed-source software. Again, quality is quality. Chances are, the project made in the developer’s mom’s basement isn’t going up against the one made by the company that makes “enterprise-grade” solutions. Like anything else, look to who’s behind the software and who else uses it. Major projects like Linux, WordPress, and Drupal are led by teams of professionals, relied on by thousands of for-profit businesses, and have “suits” supporting them to ensure their success.
That said, there is something to be said for a distributed workflow. Lots of closed-source solutions, influenced by nine-to-five, Monday-to-Friday workflows, stringent release cycles, and press paralysis are forced to delay releases that patch critical vulnerabilities, often for years, while open source projects, guaranteed to have someone awake at any hour of the night, can turn around that same patch, literally within hours of the vulnerability being reported.
Open source as a workflow
The second form of open source FUD attacks open source as a workflow: the claim that publishing the source code to your software, by that workflow’s nature, somehow makes it less secure. Again, there are several flavors to this argument.
Anyone can (not) make changes
The idea that anyone can make changes is simply untrue — FUD at its best. Like with closed projects, only authorized maintainers can approve changes to open source projects. I suspect this misunderstanding stems from using Wikipedia as an analogy to describe open source to the uninitiated. Simply put, it comes from a place of ignorance. You’re not going to wake up one day and mysteriously, your code will be riddled with malware and backdoors.
Instead, open source often has better controls to ensure pedigree than its closed-source counterparts. By its nature, open source must use version control and public discussion tools (two things often absent from closed-source workflows). Every change, no matter how small, whether from the project creator or a first-time contributor, must be publicly proposed, discussed, and stringently reviewed, with the change history available for all to see.
Anyone can see where the vulnerabilities are (and fix them)
This argument is predicated almost exclusively on the (disproven to death) strategy of security through obscurity, a form of security theater. It’s the idea of hiding a key under the welcome mat: if the bad guy doesn’t know it’s there, it’s secure, right? Wrong. It’s the same reason building blueprints can be available for public inspection (and city review) without increasing the chance of a break-in (and we end up with stronger, more fire-resistant buildings as a result).
When everyone can see the inner workings (including those you want to keep out), you’re forced to build inherently secure systems, rather than systems that rely on smoke and mirrors. As a result, open source development tends to produce more secure software than its closed-source counterparts.
We (won’t) reveal the secret sauce
Open or closed, industry-standard best practices dictate you shouldn’t have secrets in your code. Passwords, tokens, server names, anything remotely secret sauce — heck, anything that might change based on the environment (development, staging, production) — should be placed in the database, an environment variable, or a VCS-ignored configuration file, not distributed with the software itself.
For one, outside contractors and others may have access to your code, and code access shouldn’t serve as an access control mechanism in itself. For another, best practices once again dictate that such configurations aren’t shared between environments and are rotated frequently, something significantly harder if it requires a code change and deploy. Open source software ensures application logic and system-specific configurations, secret or otherwise, remain distinct concerns.
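The practice above can be sketched in a few lines. This is a minimal, hypothetical example (the variable names `DB_PASSWORD` and `API_TOKEN` are illustrative, not from any particular project): secrets come from the environment at runtime, so the source code can be published without revealing them.

```python
import os

def load_config():
    """Read deployment-specific settings from the environment.

    The secret lives in the environment (or a VCS-ignored config
    file), never in version control, so publishing the source
    reveals nothing sensitive.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        # Fail fast rather than shipping a hard-coded fallback secret.
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return {
        "db_password": password,
        "api_token": os.environ.get("API_TOKEN"),
    }
```

Rotating the password then means updating the environment and restarting, with no code change or redeploy required.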
Can open source be insecure? Of course. Is there insecure open source software? Of course again. But the idea that a language, platform, or workflow is somehow inherently less secure based solely on its philosophy is absurd, and I’d argue that, done right, open source can often be more secure than its closed-source or proprietary alternatives. Either way, not all software must be closed source, and the decision of whether a particular piece of software should be open or closed should be driven by industry best practices, not by fear.
Ben Balter is Chief of Staff for Security at GitHub, the world’s largest software development platform. Previously, as a Staff Technical Program Manager for Enterprise and Compliance, Ben managed GitHub’s on-premises and SaaS enterprise offerings, and as the Senior Product Manager overseeing the platform’s Trust and Safety efforts, Ben shipped more than 500 features in support of community management, privacy, compliance, content moderation, product security, platform health, and open source workflows to ensure the GitHub community and platform remained safe, secure, and welcoming for all software developers. Before joining GitHub’s Product team, Ben served as GitHub’s Government Evangelist, leading the efforts to encourage more than 2,000 government organizations across 75 countries to adopt open source philosophies for code, data, and policy development.