The end of open source?

by Joseph K. Clark

This was quickly followed by the (in some senses equally disturbing) announcement that the university had been banned, at least temporarily, from contributing to kernel development. A public apology from the researchers followed. Though exploit development and disclosure are often messy, running technically complex “red team” programs against the world’s biggest and most important open-source project feels a little extra. It’s hard to imagine researchers and institutions so naive or derelict as not to understand such behavior’s potentially colossal blast radius.

Common sense suggests (and users demand) that kernel maintainers strive to produce releases that don’t contain exploits. Equally surely, maintainers and project governance are duty-bound to enforce policy and avoid having their time wasted. But killing the messenger seems to miss at least part of the point: this was research rather than pure malice, and it casts light on a software (and organizational) vulnerability that begs for technical and systemic mitigation.

I think the “hypocrite commits” contretemps is symptomatic, on every side, of related trends that threaten the entire extended open-source ecosystem and its users. That ecosystem has long wrestled with problems of scale, complexity, and the increasingly critical importance of free and open-source software (FOSS) to every kind of human undertaking. Let’s look at that complex of problems:

  • The most significant open-source projects now present big targets.
  • Their complexity and pace have grown beyond the scale where traditional “commons” approaches or even more evolved governance models can cope.
  • They are evolving to commodify each other. For example, it’s becoming increasingly hard to state categorically whether “Linux” or “Kubernetes” should be treated as the “operating system” for distributed applications. For-profit organizations have noted this and begun reorganizing around “full-stack” portfolios and narratives.
  • In so doing, some for-profit organizations have begun distorting traditional patterns of FOSS participation. Many experiments are underway. Meanwhile, funding, headcount commitments to FOSS, and other metrics are declining.
  • FOSS projects and ecosystems adapt in diverse ways, sometimes making it difficult for for-profit organizations to feel at home or to benefit from participation.

Meanwhile, the threat landscape keeps evolving:

  • Attackers are more numerous, smarter, faster, and more patient, leading to long games, supply-chain subversion, and so on.
  • Attacks are more financially, economically, and politically profitable than ever.
  • Users are more vulnerable and exposed to more vectors than ever before.
  • The increasing use of public clouds creates new technical and organizational monocultures that may both enable attacks and make them worth an attacker’s effort.
  • Complex commercial off-the-shelf (COTS) solutions assembled partly or wholly from open-source software create elaborate attack surfaces whose components (and interactions) are accessible to, and well understood by, bad actors.
  • Software componentization enables new kinds of supply-chain attacks.
  • All of this is happening as organizations seek to shed nonstrategic expertise, shift capital expenditures to operating expenses, and come to depend on cloud vendors and other entities to do the hard work of security.

The net result is that projects of the scale and utter criticality of the Linux kernel aren’t prepared to contend with game-changing, hyperscale threat models. In the specific case we’re examining here, the researchers were able to target candidate incursion sites with relatively low effort (using static analysis tools to assess units of code already identified as requiring contributor attention), propose “fixes” informally via email, and leverage many factors, including their own established reputation as reliable and frequent contributors, to bring exploit code to the verge of being committed.
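
To make those mechanics concrete, here is an illustrative sketch in plain user-space C. It is emphatically not one of the actual submitted patches, just the general shape of a “hypocrite commit”: a one-line “leak fix” that a busy reviewer might plausibly wave through, which quietly turns an error path into a double free.

```c
#include <stdlib.h>
#include <string.h>

struct conn {
    char *name;
};

/*
 * Cleanup contract: if conn_init() fails, the caller still calls
 * conn_teardown(), which releases anything conn_init() allocated.
 */
static void conn_teardown(struct conn *c)
{
    free(c->name);      /* frees the same pointer a second time if the
                         * injected "fix" below has already run */
    c->name = NULL;
}

static int conn_init(struct conn *c, const char *name)
{
    c->name = strdup(name);
    if (!c->name)
        return -1;

    if (strlen(c->name) == 0) {
        /* The injected one-liner, submitted as a plausible "leak fix".
         * It ignores the cleanup contract above, so the caller will
         * free c->name again: a classic double free. */
        free(c->name);
        return -1;
    }
    return 0;
}

int main(void)
{
    struct conn c = { 0 };

    if (conn_init(&c, "") != 0)
        conn_teardown(&c);  /* glibc typically aborts here with
                             * "free(): double free detected" */
    return 0;
}
```

Nothing in the injected line looks malicious in isolation; the defect only emerges from its interaction with the caller’s existing cleanup contract, which is exactly what makes such incursions hard for both reviewers and static analyzers to catch.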

This was a profound betrayal, effectively by “insiders,” of a trust system that has historically worked very well to produce robust and secure kernel releases. The abuse of trust changes the game, and the implied follow-on requirement looms large: mutual human trust must be bolstered with systematic mitigations.

But how do you contend with threats like this? Formal verification is effectively impossible in most cases. Static analysis may not reveal cleverly engineered incursions. Project paces must be maintained (there are known bugs to fix). And the threat is asymmetrical: As the classic line goes, the blue team needs to protect against everything, and the red team only needs to succeed once.

I see a few opportunities for remediation:

  • Limit the spread of monocultures. Stuff like AlmaLinux and AWS’ Open Distro for Elasticsearch is good, partly because they keep widely used FOSS solutions free and open source, but also because they inject technical diversity.
  • Reevaluate project governance, organization, and funding to mitigate complete reliance on the human factor and incentivize for-profit companies to contribute their expertise and other resources. Most for-profit companies would be happy to contribute to open source because of its openness and not despite it. Still, this may require a culture change for existing contributors in many communities.
  • Accelerate commodification by simplifying the stack and verifying its components; push appropriate responsibility for security up into the application layers. (A sketch of component verification follows this list.)
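
On “verifying the components,” here is a minimal sketch of the idea, assuming a hypothetical build step that pins a known-good SHA-256 digest for each vendored artifact. The file name and pinned digest below are placeholders, and real supply-chain tooling layers signatures and provenance on top, but the basic shape is the same.

```c
/* Minimal sketch of "verify the components": compare an artifact's
 * SHA-256 digest against a pinned, known-good value before using it.
 * Build with: cc verify.c -lcrypto */
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

/* Hex-encode a digest into out (out must hold 2 * len + 1 bytes). */
static void to_hex(const unsigned char *d, unsigned int len, char *out)
{
    for (unsigned int i = 0; i < len; i++)
        sprintf(out + 2 * i, "%02x", d[i]);
}

/* Returns 0 if the file's SHA-256 matches expected_hex, -1 otherwise. */
static int verify_component(const char *path, const char *expected_hex)
{
    unsigned char buf[4096], digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;
    char hex[2 * EVP_MAX_MD_SIZE + 1];

    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    if (!ctx) {
        fclose(f);
        return -1;
    }
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);
    fclose(f);

    EVP_DigestFinal_ex(ctx, digest, &digest_len);
    EVP_MD_CTX_free(ctx);

    to_hex(digest, digest_len, hex);
    return strcmp(hex, expected_hex) == 0 ? 0 : -1;
}

int main(void)
{
    /* Placeholder pinned digest (this one is the SHA-256 of an empty
     * file). In practice it would come from a signed manifest shipped
     * through a different channel than the artifact itself. */
    const char *pinned =
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

    if (verify_component("component.tar.gz", pinned) != 0) {
        fprintf(stderr, "digest mismatch: refusing to load component\n");
        return 1;
    }
    return 0;
}
```

The design point is that the trust anchor (the pinned digest or signature) travels separately from the artifact, so subverting the artifact’s distribution channel alone is not enough.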

I’m advocating here that orchestrators like Kubernetes should matter less and Linux should have less impact. Finally, we should proceed as fast as we can toward formalizing the use of things like unikernels. Regardless, we need to ensure that companies and individuals provide the resources open source needs to continue.
