The interplay of trust and political ideology in shaping emerging AI attitudes

  Nicole KRAUSE, University of Wisconsin - Madison, United States
  Shiyu YANG, University of Wisconsin - Madison, United States
  Luye BAO, University of Wisconsin - Madison, United States
  Todd NEWMAN, University of Wisconsin - Madison, United States
  Dominique BROSSARD, University of Wisconsin - Madison, United States
  Mikhaila CALICE, University of Wisconsin - Madison, United States

AI has changed the way scientists make genetic edits; it has infiltrated our daily lives through the Internet of Things; and it is being used by law enforcement agencies to fight crime. Many of the societal questions raised in its wake cannot be answered by science. Who or what will govern this technology? How do we mitigate biases in how the technology is developed and applied? And how do we avoid unintended consequences, such as predictive policing exacerbating racial profiling?
When navigating emerging and complex issues like AI, non-expert audiences rely on different sets of trusted actors to guide their views. Using a nationally representative survey conducted in the United States in February and March of 2020 (n = 2,700; completion rate: 41.3%), we analyze the relationships between trust in different institutions and individuals’ AI-related attitudes, beliefs, and perceptions.
Our results show that trust in scientists and in tech regulators operates in the expected direction: both are positively associated with support for AI. However, extending existing work on the interplay between trust and ideology in shaping science-related attitudes and beliefs, we also test several hypotheses related to the nascent politicization of AI in ongoing US media narratives. In particular, we expect that public trust in certain institutions will mediate individuals’ support for AI, and that this mediating effect of trust will, in turn, be moderated by political ideology.
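To make the hypothesized structure concrete, the sketch below gives one common way to write a moderated mediation model of this kind, in the spirit of conditional process analysis (e.g., Hayes). The antecedent X (for instance, media exposure), the placement of the moderator on the trust-to-support path, and the variable coding are illustrative assumptions, not the study’s exact measurement model:

  Trust_i   = a_0 + a_1 X_i + e_{1,i}                          (mediator model)
  Support_i = b_0 + c' X_i + b_1 Trust_i + b_2 Ideology_i
              + b_3 (Trust_i × Ideology_i) + e_{2,i}           (outcome model)

Under this illustrative specification, the indirect effect of X on support for AI through trust is a_1 (b_1 + b_3 × Ideology); that is, the “mediating effect of trust” strengthens or weakens as political ideology varies, which is the moderated mediation pattern the hypotheses describe.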
These expectations are driven by recent developments. First, as reports of discrimination linked to AI-driven policing and hiring practices have surfaced, members of the US Congress have proposed legislation to counteract AI risks. Notably, these proposals were introduced by Democrats (the left-leaning party) and, as of this writing in February 2021, do not have a single Republican (the right-leaning party) co-sponsor. Second, Republicans have long advanced media narratives that cast “Big Tech” as politically liberal and biased against conservatives, potentially undermining trust in technology companies and in AI-based products.
Nascent politicization of this kind is worrisome, given that partisan divides over other issues at the intersection of science and risk—most notably climate change—have inhibited policymaking and responsive action, and have exacerbated polarization in the US electorate. Our research begins to address the possibility that a similar phenomenon is occurring with AI.
Our results suggest that public opinion is indeed mapping, at least to some extent, onto these politicized narratives, revealing an interplay between political ideology and trust in shaping support for AI. For example, among people with high trust in technology companies, political ideology has little effect on support for AI. Among individuals with moderate or low trust in tech companies, however, political ideology does matter: conservatives express less support for AI. We discuss these and other findings and elaborate on their implications for the future of AI.
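In the notation of the illustrative specification sketched earlier, this pattern corresponds to a conditional effect of ideology on support that depends on the level of trust:

  ∂Support / ∂Ideology = b_2 + b_3 × Trust

Assuming, purely for illustration, that ideology is coded so that higher values indicate greater conservatism and that trust is scaled non-negatively, the reported pattern implies b_2 < 0 and b_3 > 0: the negative association between conservatism and support for AI attenuates toward zero at high levels of trust in technology companies.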