Written by: Austin Shannon

Edited by: Henry Ertl and Kate Giffin

In a democracy, government money is public money and should ideally be accountable to the interests of the public as negotiated through the political process. However, if the stewards of that money (i.e., elected officials or citizens) know little about what they are paying for, then how do they know that public interests are being upheld?

Investment in the United States’ scientific enterprise after World War II (WWII) largely followed the logic of the “linear model”: the idea that well-funded basic research inevitably creates societal benefit, because unpredictable discoveries become technologies through market incentives. For example, Alexander Fleming’s discovery that mold could inhibit the growth of bacteria was largely happenstance, but it incentivized another team to extract and purify the compound involved. That compound was then used in hospitals to help people overcome bacterial infections. This was the development of penicillin, the world’s first pharmaceutical antibiotic. In retrospect, penicillin appears to have made a seamless transition from discovery to new technology, but in truth its development took nearly two decades and was driven by external circumstances, like the need for mass production of the drug during WWII (1).

While the linear model persisted, scientists were entrusted to chart the course of scientific inquiry, holding each other accountable through the self-correcting norms of modern science. This is the “social contract” view of science and society, in which the government funds basic research and grants scientists substantial autonomy in exchange for the promise of societal benefit and economic growth. According to scholars like political scientist David Guston, author of Between Politics and Science: Assuring the Integrity and Productivity of Research, the social contract ended in the 1980s following decades of intermittent controversies involving scientists. Congressional hearings in the ‘80s regarding fraud in the scientific community and worries about slowing innovation in the U.S. precipitated the creation of the Office of Research Integrity (ORI) and the Office of Technology Transfer (OTT), respectively (2). The social contract for science ultimately failed to grapple with the human fallibility of scientists and scientific institutions, relying too heavily on norms to maintain accountability. It also failed to conceptualize a productive and accountable relationship between government and scientists that prioritized the public interest.

Illustration by Jacquelyn Roberts

The relationship between science and society today involves far more checks and balances, trending each decade towards more openness and oversight into the process of knowledge creation. However, the current paradigm still largely excludes the primary benefactors of basic research: average citizens. Informed largely by the work of Guston, this article describes some ways the social contract fell apart and how laws and scientific institutions changed in response to those breakdowns. I then discuss how the COVID-19 pandemic complicated the relationship between science and society, and how this relationship may develop in the future.

Not Living Up To Expectations

The social contract for science was built on the belief that scientific institutions, guided by the norms and incentives of modern science, were self-correcting in cases of scientific misconduct. In the decades following WWII, controversies involving scientists occasionally bubbled to the surface of the public consciousness. These cases instigated congressional hearings that sometimes resulted in the creation of new institutions, offices, or laws to hold scientists more accountable, signaling a shift in the government’s attitude towards scientists and their work.

From the early 1930s until the early 1970s, U.S. Public Health Service researchers, working through the Tuskegee Institute, conducted a highly unethical study monitoring the course of untreated syphilis in black men and how it affected their bodies over time. Despite penicillin becoming available and prescribed for syphilis by the 1940s, the nearly 400 study participants with syphilis were left untreated so their condition could be documented (3). In 1974, shortly after the horrors of Tuskegee went public, Congress passed the National Research Act (4), which formally created what are known today as Institutional Review Boards (IRBs) (5). IRBs are committees of at least five individuals with different backgrounds, including at least one researcher and one member whose primary concerns are not scientific. They enforce ethical principles of biomedical research at all institutions conducting research with human subjects (6). Recognition that scientists may shirk ethical principles in the pursuit of data, along with more widely accepted and inclusive concepts of human dignity and its inviolability, led directly to the creation of these important institutional mechanisms for accountability in research.

In addition to ethical violations, a number of public scandals called into question the view of scientists as impartial and objective. A high-profile fraud scandal in the late 1980s reinforced the idea that scientific institutions were not exceptional, and that they, like any other institution, required formal, rather than merely normative, checks and balances. The case involved accusations of data fabrication in a paper published by Thereza Imanishi-Kari and co-authored by Nobel laureate David Baltimore. The controversy sparked nearly a decade of investigations by universities and governmental institutions such as the National Institutes of Health (NIH) and the U.S. Secret Service (7). Accusations of research misconduct like this drew congressional attention in the 1980s, as Congress increasingly perceived oversight by universities, the NIH, and other research institutions as inadequate. During this time, two different offices were created to deal with research misconduct (the Office of Scientific Integrity at the NIH and the Office of Scientific Integrity Review within the Office of the Assistant Secretary for Health); the two were merged into the ORI in 1992 (8). The perception of automatic accountability as the norm in scientific institutions eroded, leading to new instruments of government oversight like the ORI.

IRBs and the ORI are examples of what Guston calls “collaborative assurance” (9): a more formal process of oversight and incentivization involving boundary organizations that operate at the interface between science and politics. Boundary organizations, such as the Intergovernmental Panel on Climate Change and the U.S. National Bioethics Advisory Commission, are interdisciplinary bodies that bring scientists and non-scientists together to negotiate the societal, political, ethical, and legal dimensions of an issue while preserving the autonomy of scientists (9,10). This increased oversight of the research process parallels the government’s attempts in the 1970s and ‘80s to increase the output and application of research, culminating in the creation of the OTT and the passage of the Bayh-Dole Act (9). The former assists scientists in translating their research into new technologies, and the latter incentivizes commercialization by giving researchers and their institutions the ability to patent findings made with public money. Unfortunately, while the ethos of collaborative assurance stabilizes the relationship between scientists and the government, public distrust of institutions is on the rise.

Modern Distrust of Scientific Institutions

Public trust in institutions has degraded over recent decades (11). Trusting that scientific institutions know what they are doing and that they are not ideologically motivated is foundational to their continued existence. In 2020 and 2021, the COVID-19 pandemic put the messiness of science-in-the-making on full display through the globalized information soup of the internet, causing many to infer a corrupting allegiance of scientists to political ideology or commercial interests. Some of what could be considered normal deliberation between experts who disagree was intensified by the political need to act decisively during an emergency, and these disagreements were amplified by media of all forms. Rather than communicating largely settled science or exciting breakthroughs that may only become relevant after years of validation and refinement, experts were asked to comment on research as it was being generated so that people and their governments could act on it immediately. While you would be hard-pressed to find more than a handful of scientific “facts” that enjoy 100% consensus among experts, this emergency left very little time for even a rough consensus on many issues before the public needed answers. The scientific community did incredible, important work during this time that saved countless lives, but not everyone came away with more trust in scientific institutions.

The most reputable scientific journals in the world publish papers only after extensive review by other researchers in the field, a process called peer review. During the pandemic, this process was overwhelmed with papers about COVID-19 and the virus that causes it, SARS-CoV-2. Doctors and public health officials had little time to wait for new research, and news organizations reported on unreviewed preprints of papers while scrambling for experts to comment. Faced with a firehose of arcane experimental results and differing expert interpretations of data on masks, social distancing, vaccines, and treatments, individuals and news organizations decided who to trust and who to distrust. This was the public’s first large-scale exposure to the iterative and complicated work of science-in-the-making, but with few institutional guardrails and extremely high stakes.

Importantly, the decision of how to act on scientific findings ultimately depends on one’s values, so people understandably internalized only the research results and interpretations that confirmed their priors. Many claimed that objective science was on their side well before the issues were settled, using it to legitimize their position, silence debate, and in some cases, enforce their values. While some saw the power and promise of science on display in the swift development and implementation of effective vaccines and treatments, others saw government overreach, corporate greed, and scientific arrogance. Some of this, of course, is attributable to the current political climate rather than the actions of scientists themselves, but our institutions bear the costs either way, and this moment has likely changed the relationship between science and society moving forward. Perhaps it could help to lean further into the ethos of collaborative assurance that marks the modern U.S. scientific enterprise.

A New Chapter for Science and Society

In a world where scientific advancements look riskier and riskier (think AI and human gene editing with CRISPR), the public desire to fund basic research will dwindle unless the benefits of science become more visible, funding priorities shift towards mitigating the risks of these technologies rather than advancing them, or the public is given a more important role in decision-making. So what might the next chapter of this necessary but sometimes fraught relationship between science and society look like?

First, turning the incisive eye of the scientific method inward is a promising way to understand and improve scientific institutions. The field of metascience – the science of science – is a mixture of sociology and data analytics focused on understanding the scientific enterprise. Data on funding and publishing can help us understand how researchers choose their research questions and draw trendlines between what research is funded and which industries are innovating. Information about the role of institutions, mentorship, personal background, and various metrics of success in a scientist’s career (e.g., publication history, awards, grants, and tenure) can help us grow and diversify the scientific workforce while identifying important barriers and perverse incentives in science. Furthermore, metascience helps us understand why the linear model of basic research leading directly to new technologies is an oversimplification, and how incentives may be better aligned to improve faltering U.S. innovation (12).

Operation Warp Speed was a government investment in the rapid development of COVID-19 vaccines that included pre-ordering millions of doses and expediting clinical trials by allowing phases 2 and 3 to overlap (13). Warp Speed demonstrated that government-defined goals, incentives, and regulatory frameworks can accomplish specific aims far faster than the market would on its own. Given the success of this initiative, it seems reasonable to expect the U.S. government to focus more on industrial policy in the future. Industrial policy is an attempt to shape the economy to achieve a specified goal. This usually means combating market failures like, in the case of Warp Speed, a company’s hesitancy to invest in the expensive process of vaccine development without a guarantee that the vaccine will succeed in the market. It could also mean bolstering certain technologies through research investment, tax incentives, or subsidies. Success is not guaranteed, however. We need only look to the embarrassing 2009 propping up of solar energy start-up Solyndra (bankrupt by 2011) to see how improper vetting of government loan or grant recipients can go wrong (14). Regardless, other important projects could benefit from an industrial policy approach (e.g., renewable energy, lab-grown meat), and pressure from big issues like climate change will further encourage these interventions.

As industrial policy and metascience bolster the relationship between government, science, and industry, we may also see an attempt to improve public trust and support through the democratization of science. For instance, recognition of broader concepts of expertise (e.g., indigenous knowledge, lived experience), along with increased public engagement with the scientific process itself or with technology policy assessment, will likely become an essential legitimizing force for scientific institutions. Boundary organizations that can build trust between scientists, government officials, and the public through meaningful engagement and mutual respect may help us overcome some of the epistemological dangers of emergency situations like the recent pandemic. Citizen panels, consensus conferences, and advisory committees will become increasingly important as advances in biotechnology, geoengineering, and AI lurch from questions of “can we?” to “should we?”

Public Support Is Essential to a Thriving Scientific Enterprise

We will likely see the role of boundary organizations expand to meet the societal and governmental demands on science, creating new institutions that prioritize societal values and explicit public buy-in for basic research oriented towards specific societal goals. This may be realized through new federal funding efforts in metascience and industrial policy, along with increasingly substantive public engagement from scientific institutions. It is not enough to simply trust that the norms of science will prioritize public interests, continually improve and inform our lives, and safeguard our future, all while remaining shrewdly under budget. Scientific institutions have so much to offer society, but our discoveries will either fall on deaf ears or simply remain undiscovered without the trust and support of non-scientists.


Austin is a grad student in microbiology and immunology, where he studies how the bacterium that causes cholera releases proteins into its environment. He recently completed the Science, Technology, and Public Policy certificate through the Gerald R. Ford School of Public Policy, where he found inspiration for this article.
