Committed to Trust:
A Qualitative Study on Security & Trust in Open Source Software Projects
In 27 in-depth, semi-structured interviews with owners, maintainers, and contributors from a diverse set of open source projects, we explored their security measures and trust processes.

We find that our participants' projects are highly diverse both in deployed security measures and trust processes, as well as their underlying motivations. As projects grow in scope and contributors, so grow their needs for security and trust processes.

Publication #


First page of the publication
Committed to Trust: A Qualitative Study on Security & Trust in Open Source Software Projects
Dominik Wermke, Noah Wöhler, Jan H. Klemmer, Marcel Fourné, Yasemin Acar and Sascha Fahl.
43rd IEEE Symposium on Security and Privacy (S&P'22), May 22-26, 2022.

Abstract

Open Source Software plays an important role in many software ecosystems. Whether in operating systems, network stacks, or as low-level system drivers, software we encounter daily is permeated with code contributions from open source projects. Decentralized development and open collaboration in open source projects introduce unique challenges: code submissions from unknown entities, limited person-power for commit or dependency reviews, and bringing new contributors up-to-date in projects' best practices & processes.

In 27 in-depth, semi-structured interviews with owners, maintainers, and contributors from a diverse set of open source projects, we investigate their security and trust practices. For this, we explore projects' behind-the-scene processes, provided guidance & policies, as well as incident handling & encountered challenges. We find that our participants' projects are highly diverse both in deployed security measures and trust processes, as well as their underlying motivations. Based on our findings, we discuss implications for the open source software ecosystem and how the research community can better support open source projects in trust and security considerations.

Overall, we argue for supporting open source projects in ways that consider their individual strengths and limitations, especially in the case of smaller projects with low contributor numbers and limited access to resources.

Acknowledgements #

We want to thank all interviewees for their participation: it was a great experience to interview you for this study. We appreciate your knowledge, project information, and, most importantly, the valuable time you have generously given.

We hope that with this work and your contribution, both the research and open source community are one step closer to more secure and trustworthy software.

Artifacts #

In line with the effort to support replication of our work and help other researchers build upon it, we provide a replication package and an artifact repository.

Filename Description
recruitment-public-channel.txt Recruitment message for public channels (Discord, IRC, Forums, etc.)
recruitment-email.txt Recruitment email content
interview-guide.pdf Interview guide in English
interview-guide-ger.pdf Interview guide in German
code-book.pdf Codebook for interview coding
recruitment-criteria.pdf In-depth description of our recruitment approach

Overview #

In 27 in-depth, semi-structured interviews with owners, maintainers, and contributors from a diverse set of open source projects, we explored their security measures and trust processes.

Motivation #

Whether as low-level system drivers in operating systems, as tooling in our daily jobs, or simply as dependencies of our hobby projects, open source software is an important building block of our everyday lives.

“To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.” – Ken Thompson in “Reflections on Trusting Trust”.1

The decentralized development and open collaboration of open source projects also introduce some unique challenges, such as code submissions from unknown entities, limited person-power for reviewing the supply chain & dependencies, and bringing new contributors up to speed on projects' best practices & processes.

With those unique challenges in mind, we asked ourselves: How can we empower open source contributors to build more secure projects? To answer this, we conducted a series of interviews with open source maintainers and contributors to gain insights into currently employed security & trust processes.

Interviews #

We conducted 27 semi-structured interviews with contributors, maintainers, and owners of open source projects between July and November 2021.

We based the initial interview guide on our exploratory research questions, considered concepts investigated in previous related work, and adapted them to our more in-depth interview approach. To establish additional areas of research and to gather feedback, we consulted open source contributors from our professional network and piloted the interview guide with them.

For the participants’ convenience, we created both English and German versions of the interview guide, keeping both in sync during the study. During the study process, we continually iterated the interview guide based on the conducted interviews and the collected participant feedback.

Figure 1: Illustration of the flow of topics in the nine sections of the semi-structured interviews. In each section, participants were presented with general questions and corresponding follow-ups, but were generally free to diverge from this flow at will.

Ethics #

This interview study plan was approved by the human subjects review board (IRB equiv.) of our institution. Research plan, study procedure, and all involved parties adhered to the strict German data and privacy protection laws, as well as the EU General Data Protection Regulation (GDPR). In addition, we modeled our study to follow the ethical principles of the Menlo report for research involving information and communications technologies.

We encouraged potential participants to familiarize themselves with consent and data handling information on a study website before agreeing to any interview participation. Before, during, and after the interview, (potential) participants were able to contact us at listed contact addresses for any questions or additional information. We obtained informed consent from all participants for participation in the study and having their interview’s audio recorded. After acceptance of this work, we contacted participants with a preprint and gave them an opportunity to suggest changes or to veto quotes in the publication.

We consider the interview questions regarding certain security incidents to be of sensitive nature and explicitly highlighted to the participants that they could skip questions or terminate the interview at any time.

Our research approach is consistent with the Researcher Guidelines2 for the Linux developer community, which were introduced in late March 2022 (after the conclusion of our work) in response to the “hypocrite commits” incident3.

Results #

We report and discuss results for 27 semi-structured interviews with open source contributors, maintainers, and owners.

Developers & Projects #

We sorted our participants by their highest project role, in roughly ascending order of responsibility: contributors (4), maintainers (3), team leaders (7), and founders or owners (9).

Project contributors are often highly distributed, with five of 27 participants reporting that they know other contributors only virtually. This does not appear to impair collaboration, though:

“But to be honest, I don’t really mind. As long as one has the same interests, it’s still easy to collaborate if you have the same goal."

At the other extreme, four participants mentioned very close connections such as working at the same company or university.

Overall, we found our participants to be more experienced than we initially expected, often having been involved for multiple years and possessing high-level commit rights. We assume this high level of experience was due to our recruiting focusing on “expert channels” such as project-specific communication channels or dedicated contact addresses, as well as being referred further up in projects until reaching project founders and owners.

Structure #

The specific project setups appear to be as diverse as the projects themselves. As one might expect of open source projects, most development approaches are fairly open:

“It’s an open-source project, everything from [build] stages to CI is in the same repository, and everyone can contribute to it. However, no one has direct control over anything because everything executed is a series of scripts and tests in the main repository, meaning that anyone can send a pull request tomorrow and modify them."

Code submissions are at the heart of open source collaboration, making pull request handling and build pipeline setup part of the overall security and trust strategy.

Specifically for incoming pull requests, projects provide a number of controls, e.g., by protecting the main branch. Our participants opt for a number of different strategies for merging code contributions:

Rebasing on the main branch:

“We actually always require from the author to rebase their changes on top of the main, so that we don’t have the whole complex structure of merges […] which actually helps to pinpoint any kind of problems […]"

Majority vote before merging pull requests:

“So on each PR you can review it and then give a thumbs-up or thumbs-down. And that’s done by at least three of the main contributors, […] and that means that it’s a majority of them think that it’s a worthy contribution."

Resolving problems in follow-up pull requests:

"[Y]ou optimistically merge code as long as it passes some basic sanity checks. If someone thinks that the code which is merged isn’t actually perfect, there is some way to improve it, they need to send a follow-up pull request."

Projects' structure and code submission handling appear to be tailored to each project's and community's needs.
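
As a concrete illustration of such controls, the sketch below enables protection for the main branch via the GitHub REST API, requiring review approvals and passing status checks before code can be merged. This is a minimal sketch with hypothetical values (example-org/example-project, a token in the GITHUB_TOKEN environment variable) and is not taken from any participant's project; many projects configure the same rules through the repository settings instead.

import os

import requests

OWNER, REPO = "example-org", "example-project"  # hypothetical project
token = os.environ["GITHUB_TOKEN"]  # token with repository admin permissions

protection = {
    # Require approving reviews before merging (cf. the majority-vote strategy above).
    "required_pull_request_reviews": {"required_approving_review_count": 2},
    # Require CI checks to pass and branches to be up to date with main (cf. rebasing).
    "required_status_checks": {"strict": True, "contexts": ["ci/tests"]},
    # Apply the same rules to repository administrators.
    "enforce_admins": True,
    # No additional restrictions on who may push.
    "restrictions": None,
}

response = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/main/protection",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {token}",
    },
    json=protection,
    timeout=30,
)
response.raise_for_status()
print("Branch protection enabled for main")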

Build Pipeline #

In the interviews, 23 participants mentioned using CI/CD or other automatic build systems in their projects, with the majority relying on GitHub Actions (10). Aside from GitHub Actions, many different systems were mentioned, sometimes even within the same project:

“But basically we use everything, like Travis, Azure Pipelines, GitHub Actions, CircleCI, custom build machines and so on. It’s quite a hodgepodge."

A few participants (3) mentioned that they prefer manual builds and publishing for a number of reasons, e.g.,

“I don’t like the one click deploy, I like to actually see, you know, things fly by in the console."

Running tests as part of the build pipeline is a common practice, with some of our participants taking advantage of this:

“Every pull request automatically goes through our full test suite […] There are at least 1,000 files, each testing one area."

Thoroughly testing every commit might involve trade-offs when it comes to attracting contributors, as one participant pointed out:

“If the tests run in five seconds, then people will contribute, if the tests run in five hours, then people will contribute less."
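
Behind most of these pipelines sits an ordered list of checks with a single entry point. The following minimal sketch (with placeholder commands, not drawn from any participant's project) shows the common fail-fast pattern: cheap linting first, the expensive test suite second, so contributors get quick feedback on trivial mistakes.

#!/usr/bin/env python3
"""Minimal sketch of a CI gate script; real projects wire this into
GitHub Actions, Travis, CircleCI, or similar runners."""
import subprocess
import sys

# Ordered from cheapest to most expensive, so failures surface quickly.
CHECKS = [
    ["ruff", "check", "."],       # fast static lint
    ["pytest", "-x", "--quiet"],  # full test suite, stop at first failure
]

for command in CHECKS:
    print("-->", " ".join(command))
    result = subprocess.run(command)
    if result.returncode != 0:
        # A non-zero exit code marks the whole pipeline run as failed.
        sys.exit(result.returncode)

print("All checks passed")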

Dependencies #

Common criteria for selecting a dependency included activity and reputation metrics like GitHub stars:

“Our most important criteria, in general, is that we do not want to rely on inactive projects."

“If somebody was pulling in a package and I go to their GitHub and its got two stars and it’s only used in this project, I’m probably going to say: ‘Let’s avoid using that’."

Other participants had more involved criteria for including a dependency, with such elaborate selection processes sometimes benefiting all involved parties:

“What I usually do before including any dependency is I send them a pull request fixing something. And if they don’t react on this or don’t merge that one, then they don’t become my dependency because they are obviously not interested in improving the software."

“As it happened also with [dependency]: we reached out, we got a good response. We worked on a few issues together, even I personally fixed one of those issues […]"
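
The readily available metrics mentioned above are easy to query mechanically. The sketch below (with a hypothetical repository and illustrative thresholds) pulls stars and last-push time from the GitHub REST API, the kind of signals our participants reported checking by hand.

from datetime import datetime, timezone

import requests

def vet_dependency(owner: str, repo: str,
                   min_stars: int = 50, max_idle_days: int = 365) -> bool:
    """Return True if the repository passes simple activity/reputation checks."""
    response = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=30)
    response.raise_for_status()
    data = response.json()

    stars = data["stargazers_count"]
    # "pushed_at" is ISO 8601 with a trailing "Z"; normalize for fromisoformat().
    pushed_at = datetime.fromisoformat(data["pushed_at"].replace("Z", "+00:00"))
    idle_days = (datetime.now(timezone.utc) - pushed_at).days

    print(f"{owner}/{repo}: {stars} stars, last push {idle_days} days ago")
    return stars >= min_stars and idle_days <= max_idle_days

vet_dependency("example-org", "example-lib")  # hypothetical candidate dependency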

Key Findings: Developers & Projects.

  • The majority of our participants are highly experienced in the open source environment, often with multiple years of involvement and high-level commit rights.
  • Our participants appear to fully utilize modern build systems, including for testing and deployment. Only a few projects explicitly use signed commits, often due to incompatibilities with their workflow or threat model.
  • Selection criteria for dependencies range from readily available metrics, through security reviews, to elaborate collaborations or even rewrites.

Guidance & Policies #

We examined guidance and best practices provided by the projects, as well as the content and applicability of security and disclosure policies.

Guidance #

Most commonly, our participants mentioned guidance for contributing to the project (14) and programming language-specific guidance such as style guides (13), followed by general guidance for project setup and infrastructure (8).

As reasons for not providing specific guidance documents, participants mention time and money constraints:

“Somebody would have to write the guide, and I am the only one who can write it. I mean, there is nobody paid to write it and I am also not paid to write it."

More generally, our participants are somewhat divided in their opinions on the helpfulness of guidance for their projects.

Ranging from very positive:

“I personally think that documentation is one of the most important aspects of an open source project, both for users and also developers."

To less helpful:

“I’m also honestly not quite sure that’s really that helpful […] Of course, it’s quite nice to have overviews and stuff like that somewhere, but there aren’t too many people who then read something like that."

Some participants mentioned that they prefer to coach new contributors. Others described approaches outside of classical guidance documents:

“Most of the people who are interested show up in the communication channels. And then it depends on [project members] being communicative by helping the other person."

“We answer very detailed answers to questions of users, which then become the kind of searchable result of answers for guides, including security fixes."

This guidance split appears to run between projects with a more technical developer audience, which prefer coaching or static testing, and projects with less technical contributors such as scientists, which prefer extensive guidance; our interview coverage of these aspects was, however, insufficient to statistically confirm this.

Security Policies #

We were interested in the content and applicability of our participants’ security and disclosure policies. Of our 27 participants, eight mention that their projects do not have specific security policies. A participant offered a possible explanation for this:

“So in the same way as people don’t make a security policy on their repo unless something pushes them to do it or unless they have a security incident, people aren’t going to document security best practices unless they’ve had a problem. Part of that is because they may not know to do so. But part of that is also because is there a need?"
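
Whether a project publishes a security policy at all is easy to check mechanically: GitHub, for instance, surfaces a SECURITY.md file as the repository's security policy. The sketch below (the repository name is a placeholder) probes the common locations via the REST API.

import requests

def has_security_policy(owner: str, repo: str) -> bool:
    """Check the locations where GitHub looks for a repository's security policy."""
    for path in ("SECURITY.md", ".github/SECURITY.md", "docs/SECURITY.md"):
        response = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/contents/{path}",
            timeout=30,
        )
        if response.status_code == 200:
            return True
    return False

# Hypothetical project; prints whether a SECURITY.md was found.
print(has_security_policy("example-org", "example-project"))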

The most commonly mentioned security policy aspect (10) was providing a security-specific contact for the project and/or a dedicated security team. Less common security policies include air gapping and programming language-specific policies:

“The policy of [the project] is that any released software has to be built on a machine controlled by the release manager."

“Everything that is related to crypto or network code or parsing and so on is all written in Rust. That’s already a kind of policy."

Only four of our participants explicitly mentioned not having any form of disclosure policy or security contact. Disclosure approaches mentioned by the other participants included a policy or plan for coordinated disclosure (10), private channels for disclosure (5), and plans for full disclosure, e.g., as public issue (2). The often heated debate regarding coordinated disclosure in open source projects extends to our participants:

"[The projects] say: we’re just putting our users at too much risk. We’re not sitting on patches, the people out there have installations on the front line, and because somebody likes to coordinate something, we’re not waiting three months longer."

Closely related to policies, we also asked our participants about their security testing and review setup, with many mentioning automated tests and mandatory reviews.

Key Findings: Guidance and Policies.

  • Our participants appear to diverge in their opinions regarding the helpfulness of (written down) guidance.
  • For security policies, larger projects mentioned dedicated security teams, while smaller projects mentioned a security contact channel.
  • Most projects included some type of disclosure policy or at least contact for security issues.

Releases & Updates #

The release decisions of our participants broadly fit two approaches: periodic releases (9) or releases when specific features or patches are ready (10). Different communities seem to favor different release approaches, as our participants described both feature-driven and cycle-driven release schedules based on community input:

“Periodically, we’ll reach consensus in the community, and say, ‘Hey, we ought to do a release’, and so we’ll stop developing for a few days and just make sure there aren’t any major bugs."

“We try to aim for three times a year, mostly because the real reason for the three times a year rough cycle is that we polled the community and the kind of the averaging that three times a year seemed like what suited people the most."

Some participants utilize both approaches, depending on, e.g., project maturity:

“Mainline development continues just normally under main branch, and we have this temporary release branch where we merge in only bug fixes that come in during this time. This is for the most mature projects […] For projects that move faster and don’t have for example, back-holding strategy for bugs and stuff like this, we basically once a month tag a version and push them out."

Aside from set release windows, participants often mentioned a more flexible approach to vulnerability fixes, e.g.,

“If you have a vulnerability, Spectre, Meltdown or something like that, then it can also happen that updates are released completely unscheduled."

The majority of our participants does not seem to specifically advertise new releases, e.g.,

“Most people who interact with this project don’t actually even look at my GitHub. They don’t look at the release assets or anything like that, they just use [package registry] and it just works from there. They pull it down and use it automatically."

Of the ones that do advertise, preferred channels included social channels like Twitter, Slack, or IRC (3), mailing lists (3), and websites (2). Again, our participants seem to prefer a practical approach for deprecating insecure or out-of-date releases, e.g., by simply stopping support:

“We only guarantee that we will backport security fixes to the last two releases. So anything before that is not an LTS we will not fix, which could be seen as deprecated from this point of view."

“I don’t have any official policy of supporting old versions, so they’re effectively deprecated as soon as I release a new version."

For distributing releases, 12 participants specifically mention that they utilize external infrastructure such as registries, app stores, or package managers. As a reason for not distributing binary releases, a participant points to their community composition:

“We have no [binary] releases. We always build the project ourselves, there are no pre-built binaries for end users, because there are practically no end users."

“All of our releases are done on GitHub tag, because we release via source code, not via binaries, so it’s a software release in the form of a git tag."
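
A tag-based source release like the one quoted above is simple to script. The sketch below (the version string is a placeholder, not from any participant's project) cuts a signed, annotated git tag and pushes it, tying the release to a maintainer's configured GPG key.

import subprocess

version = "v1.2.3"  # hypothetical release version

# Create a signed, annotated tag (--sign uses the configured GPG signing key) ...
subprocess.run(["git", "tag", "--sign", version, "-m", f"Release {version}"], check=True)
# ... and publish it, which constitutes the release for source-only projects.
subprocess.run(["git", "push", "origin", version], check=True)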

Of our participants, 11 were aware of their projects’ releases being signed. Reasons for not signing releases, or not signing them correctly, included technical limitations:

“The Mac build is signed by my developer key, but the builds for Raspberry Pi, Linux, Windows, they’re unsigned. People just have to trust the integrity that I’m the only person who has access to those and I did it right. We’d love to have better solutions for that, but none are available right now."

as well as the general signing setup leading to key ownership problems:

"[…] because our release procedure checklist only states sign, meaning sign them in general. So people use their GPG signing keys, and there is no control where and how those keys are verified or belong to a particular key ring. So this is something we need to improve."

Generally, our participants seem to be aware of the security benefits of signing releases and publishing checksums, but some do not utilize them for (all) releases due to technical limitations and platform restrictions.
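
On the consumer side, published checksums are straightforward to verify. The following minimal sketch (file names are hypothetical) re-computes the SHA-256 digest of a downloaded artifact and compares it against a SHA256SUMS file in the common “<digest>  <filename>” format.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MiB chunks to keep memory use constant."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

artifact = Path("project-1.2.3.tar.gz")  # hypothetical downloaded release artifact

for line in Path("SHA256SUMS").read_text().splitlines():
    expected, _, filename = line.strip().partition("  ")
    if filename == artifact.name:
        if sha256_of(artifact) != expected:
            raise SystemExit(f"{artifact.name}: checksum mismatch!")
        print(f"{artifact.name}: checksum OK")
        break
else:
    raise SystemExit(f"{artifact.name} not listed in SHA256SUMS")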

Key Findings: Releases and Updates.

  • Our participants mostly publish their projects’ releases and updates based on direct community input and feedback, often mentioning exceptions from their usual schedule for vulnerability fixes.
  • Release distribution and deprecation appear to be oriented towards practicality, utilizing package registries and other distribution infrastructure, depending on the needs of their users.

Roles & Responsibilities #

Somewhat unsurprisingly, participants involved in projects with corporate stakeholders frequently mentioned sophisticated management structures that oversee the project’s development.

Most of our participants described the contributor hierarchy in their projects as having two levels: a core team that is tasked with reviewing code submissions and has permission to merge new code into the source tree, and everyone else, whose code is subject to the code review process. The core team was often called a group of maintainers or simply committers:

“There’s two classes of contributor. There are the maintainers and then there’s pretty much everyone else. The maintainers are me and maybe seven other people who contribute regularly to the project […] They can push directly to the main branch of the project."

Other projects make a distinction between the core team and the project’s owners, or they even have a dedicated role for developers who have the ability to manipulate the repository itself, e.g., by pushing to branches corresponding to pull requests:

"[…] then there are about [a few] people who have maintainer status, so they can merge requests. And then there is about a hundred people who have developer access, so they can push to a branch inside the merge request."

Some projects take centralization further and follow the so-called Benevolent Dictator for Life (BDFL) model, where the project’s founder steers the overall direction of the project and has the final say in disputes:

"[The project] is what [one of our contributors] has dubbed a do-ocracy, and that is basically whoever’s writing the code gets to decide how it’s done, but our benevolent dictator has the final say so. We essentially have this benevolent dictator, and everybody else under that."

Participants whose projects have not grown out of corporate contexts often mentioned a more relaxed contributor structure, with direct influences on the code review process:

“It is basically a much more peer-to-peer structure than a hierarchical structure. If you develop something, you don’t need to submit it to somebody to get it into the tree. You do need to get a review from people who are competent in this area, but that’s all."

Only five participants stated they were aware of roles within their projects that deal with security. One summarized the security team’s obligations as follows:

“You can communicate privately with the security team. They would classify your issues and decide if it matches the criteria for the issued security notice, how to proceed with patches, and how to publish them."

Three of the five participants mentioned roles that are only indirectly or not primarily involved with security, such as IT departments or sysadmins. Relying on a security response team within the project’s parent organization or foundation was less common, reported by only two participants.

Key Findings: Roles and Responsibilities.

  • Our participants’ projects have a variety of contributor hierarchies which are mostly relatively flat with two levels.
  • This practical approach seems to be prevalent in projects of any size, bar very small (single-person) projects or ones that grew out of a corporate context.
  • Most of the projects do not staff teams dedicated to project security, with some either relying on their organization’s resources or leveraging members of other teams such as their IT admins.

Security & Trust #

More than half (16) of our participants reported never having encountered a direct security incident in the past. The most commonly reported security challenges (that did not necessarily lead to an incident) included: suspicious or low quality commits (15) and vulnerabilities introduced by dependencies (8).

Overall, our participants seem to be mostly unconcerned about potentially malicious commits:

“I mean, there’s definitely been people that have intentionally tried to put malicious code in projects, but it’s always very easy to spot immediately. It’s like those spam emails where they have bad grammar and stuff."

The same holds true for vulnerabilities in dependencies, which apparently often turn out to be false positives or irrelevant for participants’ projects:

“Most of the time, the vulnerabilities I deal with are transitive dependencies, have a CVE, and 99.99 percent of the time, they are false positives for every other use case: it’s a real vulnerability in the dependency, but it’s not in the way almost anyone uses it."

The majority of our participants was aware of the “hypocrite commits” incident in early 2021 (23 of 27). For the remaining four, we provided a short, factual summary of the incident during the interview.

Of the 16 participants with a generally negative opinion of the incident, many considered the research approach outright malicious. This is likely a misconception, as the researchers stated that they did not intend to, and objectively did not, introduce any vulnerabilities into Linux.3

Of the remaining participants with a mixed (7) or no opinion (4), some considered the research approach similar to that of a “white hat hacker”, although with flawed execution.

“I do understand both sides of this […] It would be much better if this kind of research was done in cooperation with somebody at the Linux kernel, who knew that it’s happening and without disclosing that to a lot of people."

We could not identify a single participant with an outright positive opinion of the incident. We assume this skew was amplified by the generally negative, sometimes misinformed reporting by open-source-aligned news sources and communities.

Key Findings: Security Challenges.

  • Only a few projects have experienced an outright security incident, although many of our participants were familiar with suspicious or low-quality commits, as well as potential vulnerabilities introduced by dependencies.
  • The majority of our participants were generally aware of the “hypocrite commits” incident and had an overall negative opinion of the research approach.

Trust Models #

Establishing trust for new committers is an important step in the open source onboarding process. The majority of our participants described some form of meritocracy when asked how new contributors gain trust within the community, i.e., by making frequent, high-quality contributions to the project:

“So it’s purely on contributions to the project, so it’s meritocracy based. And this means that the person essentially starts usually either just helping out on filing issues like well documented issues, filing pull requests and again well documented, reviewing pull requests is also an important aspect of it."

A less common approach involves trusting unknown contributors by default and giving them access early in the hope of facilitating first-time contributions:

“Everyone is actually trusted first. So if I submit code, then the maintainer trusts me actually first that I want with the code nothing malicious. Even if I have not yet participated in the project."

“I really want to empower people to contribute. […] it’s very easy to get access to [the project]. It’s not like super easy, but you just submit patches and if you do some useful work, I default to just give you the commit access."

Contributor License Agreements (CLAs) appear to still be somewhat rare, with only four participants mentioning that their projects require one, e.g.,

“For licensing purposes, we require a [CLA], because the project is licensed under the BSD license. We have to have people assign their copyright, so when people want to contribute, they fill out a form, just sign it. It says, ‘Hey, I’m releasing my contributions under the Berkeley Style License’."

This low number agrees with the personal impressions of some of our participants, e.g.,

"[…] I think that was the only time I ever had to [sign a CLA], and I’ve submitted lots of pull requests to many different projects. It doesn’t seem to be very widely used."

In our interviews, projects affiliated with corporate stakeholders or other organizations appear to be more likely to require a CLA.

Trust Incidents #

The majority of participants (20) reported never having experienced a trust incident (by their definition) in their projects.

Described trust incidents included drive-by cryptocurrency miner commits, failed background checks, and a proactive block after a potential SSH key theft. Somewhat unsurprisingly, larger and likely older projects appear to have had more experience with trust incidents in the past. The fact that most projects have never experienced a trust incident is also reflected in their incident handling strategy, with multiple participants reporting not having previously thought about such cases.

Reported incident response strategies, especially by smaller projects, seem to be decided on a case-by-case basis, e.g.,

"[Incident response] is decided dynamically from case to case. The infrastructures are so small that you can do this relatively quickly. So it’s not like in the company that we have incident playbooks. There are too few people involved for that."

Again, larger and likely older projects appear to have more codified incident handling strategies in place. Two participants pointed to their project’s or organization’s code of conduct, which codifies the steps to take in case of a breach of trust:

“That is one place where then the code of conduct will start to kick in. We actually have an enforcement section for code of conduct with a step-by-step escalation, which basically ends up with everything from just asking someone not to do something through to banning them and removing access."

Key Findings: Trust Processes.

  • Most of our participants use some form of meritocracy for establishing trust with new contributors, with some even assuming trustworthiness by default to facilitate first-time contributions.
  • The majority of participants never experienced a trust incident in their projects and also did not establish specific trust incident strategies.
  • Larger, likely older projects seem to have more past experience with incidents, and often offer more specific strategies.

Opinions & Improvements #

Lastly, we asked participants about both the internal and external reputation of their project in the context of security, as well as how they would personally like to improve security and trust in their projects.

With one exception, all participants reported a high internal reputation of their projects, e.g.,

“Amongst the people on the project, everybody trusts it a lot."

“We follow very, very high standards there, mainly because we have a few people who are very, very keen on that."

The same generally holds true for the external reputation, although many participants are unsure about the actual awareness of the project outside of their community. Overall, our participants appear to take pride in their projects, but are quite humble about their importance and reach in the OSS ecosystem.

We also asked our participants how they would like to improve security and trust in their projects, assuming no limitations. For reporting, we roughly sorted the suggested improvements into mainly requiring more person-hours (15), requiring more money (9), or requiring a different infrastructure (9).

Improvements requiring more person-hours focus on addressing past software development decisions and technical debt, e.g.,

“If I could, I would write the entire stack myself."

"[…] I would rewrite a lot of the code. That’s just a historical thing, because it has already become big and complex […] It’s just like building a house; you’d have to build it three times before it becomes good."

Another focus was enhancing the review process, e.g.,

“So the first thing I do is that a group of people would review every pull request exclusively from the view of security."

Some of the improvements mainly requiring more money curiously also translated into more person-hours, simply by buying that time, e.g.,

“I could always use more participants in the review process and so if I could hire some people, if I had the disposable income to do that, I would probably hire people to get more eyes on pull requests than just myself […]"

“I think getting more tools and more CI-type tools to watch for that, because I think humans are vulnerable […] If I had unlimited budget and unlimited engineers, I’d really work on improving our testing systems."

Other money-based improvements included the introduction of security bounties:

"[Projects] mentioned they tried all the different kinds of things, and the only thing that worked well was [a] bounty process, and having bounties, and being able to reward security researchers to bring up the security issues."

For improvements requiring different infrastructure, participants mentioned better build and test pipelines, e.g.,

“with unlimited resources, I would like some more investment into automatic tools that are better in like finding vulnerabilities and problems with code."

“I would like to build [the binaries] on my own machine and then ship the site final result. For anything binary related, that would be way better than what we have right now."

Other participants mentioned transitioning their projects’ codebases to other languages, e.g. Rust:

“What I’d like to do is oxidize [the project] over time, to integrate Rust and Rust code into the codebase – which is quite an undertaking […] and an incredibly tedious task to do it well."

Overall, even improvements initially requiring more money or a different infrastructure were traceable to the crux of all open source projects: the need for more contributors.

Key Findings: Opinions and Improvements.

  • Our participants take pride in their projects, but are quite humble about their importance and reach in the OSS ecosystem.
  • Overall, even improvements initially requiring more money or a different infrastructure ended up targeting the project’s need for more contributors.

Summary #

In 27 in-depth, semi-structured interviews with owners, maintainers, and contributors from a diverse set of open source projects, we investigate their security measures and trust processes.

  • We find that our participants’ projects are highly diverse both in deployed security measures and trust processes, as well as their underlying motivations.
  • As projects grow in scope and contributors, so grow their needs for security and trust processes.
  • Smaller projects appear to handle security and trust incidents “as they happen”. Elaborate incident playbooks and committee structures are likely of little use to these projects due to frequently changing committers and structures.

Overall, we argue for supporting open source projects in ways that better consider their individual strengths and limitations, especially in the case of smaller projects with low contributor numbers and limited access to resources.

Cite this Work #

@inproceedings{conf-oakland-wermke22,
	title = {Committed to Trust: A Qualitative Study on Security \& Trust in Open Source Software Projects},
	author = {Dominik Wermke and
		Noah Wöhler and
		Jan H. Klemmer and
		Marcel Fourné and
		Yasemin Acar and
		Sascha Fahl},
	booktitle = {43rd IEEE Symposium on Security and Privacy},
	month = may,
	year = {2022},
	url = {https://www.ieee-security.org/TC/SP2022/index.html}
}
Wermke et al. "Committed to Trust: A Qualitative Study on Security & Trust in Open Source Software Projects." 43rd IEEE Symposium on Security and Privacy. 2022. 

  1. Ken Thompson: Reflections on Trusting Trust. Communications of the ACM 27(8), 1984.

  2. lwn.net: Guidelines for research on the kernel community. https://lwn.net/Articles/888891/

  3. Linux Foundation’s Technical Advisory Board: Report on University of Minnesota breach-of-trust incident. https://lwn.net/ml/linux-kernel/202105051005.49BFABCE@keescook/