What does the public think about AI?

US, 2023


Authors: Jonathan Dupont, Seb Wride, Vinous Ali & Jude Wilson


Introduction and Summary


It has long seemed likely that Artificial Intelligence (AI) would be the next General Purpose Technology, radically transforming our economy and society. Forecasting the timing of these changes is always difficult, however, with AI having passed through multiple slowdowns and ‘AI winters’ in the past.

In the last year, however, it has become clear that the decade-long increase in fundamental AI capabilities has crossed a threshold where they now have practical, economically significant applications. AI systems are no longer a research curiosity, but the pressing focus of the world’s largest companies. From ChatGPT to Copilot, Stable Diffusion to LaMDA, meaningful advances in AI seem to be happening on a near-weekly basis. AI systems can now pass medical or legal exams, increase white-collar productivity in many occupations by over 50%, generate new photorealistic images, and help power more dangerous, autonomous weapons.

In order to learn more about how the public is responding to these changes - and what might come next - Public First ran an extensive new nationally representative poll of 2,052 US adults, asking their opinions on a range of AI-related issues.

AI Today

The public has noticed the recent increase in AI capacities

  • Just 7% of our survey respondents said they were not familiar with AI, while half of adults under 35 (51%) said they had already used ChatGPT. Only 6% said that recent AI progress had been slower than they expected, while 42% said it had been faster.
  • 45% of Americans described America as a leader in AI research, climbing to 55% amongst men (compared with 36% of women). China also scored highly, with 38% of Americans describing it as an AI leader.
  • Overall, we saw mixed emotions around the rise of AI. 39% of Americans feel positive about AI while 20% feel negative. The most commonly expressed emotion was curiosity, followed by interest and worry.
  • The public expects recent progress in AI to continue. Almost a third (29%) told us they think AI will be able to produce a new scientific discovery in the next ten years. 27% of those we spoke to thought AI would be able to diagnose a medical issue and just shy of a quarter (24%) of Americans thought it would be able to identify and kill enemy soldiers in the next ten years.
  • When we asked about different use cases, it was clear that the public remains cautious about using AI in the place of human decision makers. While a significant percentage of the public were comfortable using AI to detect welfare fraud (37%) or to mark a student’s homework (28%), they were far less comfortable giving AI the reins when choosing how to spend Government money (14%) or deciding whether to hire someone for a job (13%).

AI transforming current jobs is no longer seen as a far-off possibility.

  • 45% of current workers told us that they believed an AI could do their job better than them in the next decade, while 53% said that they expected AI to somewhat or significantly increase unemployment.
  • In response to this, a majority (55%) said that they believed that Governments should try to prevent AI from taking human jobs.
  • Just 29% believed that their job would disappear completely if an AI could do it better than them, but equally only 28% expected that automation would allow them to work fewer hours. A third would expect to either oversee the AI or take on other responsibilities (32%).

Across all demographics, there is strong support for greater regulation of AI.

  • When we asked who should decide what AI tools are allowed to be used for, by far the most common responses were national governments, regulators and the legal system. Just 26% believed that this decision should lie in the hands of the developers of the AI system, and 23% the user.
  • 50% of respondents supported the creation of a new government regulatory agency, similar to the Food and Drug Administration (FDA), to regulate the use of new AI models. This was supported by nearly every demographic, no matter their age, political allegiance, economic liberalism or general level of tech optimism.

AI and Policy

In education, the majority of respondents believed AI tools should be banned for homework and exams, but allowed as a wider learning aid.

  • In general, 67% of respondents believed that it was more important for children at school to learn how to do things without the help of AI, rather than to learn how to use AI tools. 63% believed they should be banned during exams, and 41% during homework.
  • Despite this, most recognised that preventing the use of AI tools outside exams would be very difficult or impossible. 61% agreed that it would be impossible to stop children using AI tools for their homework, and 47% that banning them was as pointless as banning the calculator.
  • Equally, 48% agreed that children needed to learn AI tools for the rest of their career, and in general our respondents were in favour of the use of AI tools as a personal tutor, or to help take and organise notes.

In health, the majority of respondents were in favour of the greater use of AI based diagnosis, as long as this remained a choice.

  • 47% of our sample said that they would support an AI being used to diagnose people based on the conditions they describe, compared to 22% who were opposed. Of those who were opposed, the greatest concern was incorrect diagnoses (67%).
  • In general, our respondents wanted AI to achieve at least 90% accuracy, although notably only a minority (32%) thought human doctors reached this threshold.
  • By far the biggest objections to the use of AI diagnosis came when patients were forced to use it, with 63% opposed, whereas conversely 68% were in favour when all diagnoses were double checked by a human doctor.

The public have mixed views on whether AI companies should need explicit permission from the creators of the text or images they train on.

  • Just 21% told us that they believed AI companies should be allowed to train on any publicly available text or images, with around the same proportion (19%) saying they could train on any work where the creator has not opted out. By contrast, 32% believed AI companies should always need explicit permission.

The public were conflicted about the greater use of AI for surveillance and law enforcement purposes.

  • In one scenario, we asked about the use of AI on a train which could automatically recognise when a train passenger was posing a threat, and alert staff or the police. We saw overwhelming support for this, with 56% in favour compared to just 12% opposed. Those who opposed were predominantly worried that the AI would incorrectly identify people (80%) while 69% said they felt it was a ‘scary level’ of surveillance in our society.
  • In another scenario, we asked about the use of AI for automatic age verification at a supermarket. 35% supported this, but 34% opposed, with the greatest support coming from the youngest age groups. Only 29% believed an AI would be more accurate than a human at identifying people’s age from their face while 23% felt it would be less accurate than a human.

The Future of AI

A human level AI within our lifetimes is no longer seen as science fiction.

  • On average, the respondents to our survey expected a human level artificial general intelligence (AGI) to arrive between 2023 and 2029. This is an earlier time frame than current predictions in leading prediction markets. 28% believe this has already happened.
  • When asked to compare the intelligence of the most advanced AI today to other animals, the closest analogue was seen to be a human adult (29%), with just 10% thinking their intelligence was closest to a dog, 2% to a pig or 1% a sheep.

Even if an AGI was as smart as a human, or could experience emotions in the same way, most people think it should not have human level rights.

  • In order for them to consider it conscious, 58% of our respondents said that an AI would have to be able to experience emotions, and 49% said it would need the ability to feel pain. At the moment, however, only 8% believed that the most advanced AI are currently able to feel pain, compared to 66% for a goldfish.
  • Even if an AI could feel pain on the same level as a human, just 29% believed it should be treated equally. Similarly, only 24% believed that a human level AGI should have the right to refuse a task it does not want to do.

A significant proportion of the public are starting to take seriously the potential catastrophic risks from advanced AI.

  • Overall, 32% thought advanced AI would make us safer, compared to 23% who thought it would make us less safe. When asked about specific risks from advanced AI, the most important were perceived to be increasing unemployment (44%) and more dangerous military robots (34%).
  • However, a significant minority were also worried about existential risks from AI. 30% said that an important risk was an advanced AI trying to take over or destroy human civilisation, and 28% thought it was a real risk that it could cause a breakdown in human civilisation in the next fifty years.
  • When asked their judgement on the overall probability that an advanced AI could cause humanity to go extinct in the next hundred years, around half thought that this was highly unlikely: below 1 in 100. By contrast, 19% thought there was a significant probability of this, of at least 10%.

AI Today

How familiar are people with AI?

In the last few years, AI has been rising rapidly in public prominence. In our polling, just 7% said that they were not familiar with Artificial Intelligence (AI), while a slim majority (51%) said that they would be able to explain what it was. A sizable chunk of the population (27%) had only heard of it in the last year.

Looking at Google Trends data, we can see more specifically that interest in AI has significantly accelerated in the last year or so.
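
For readers who want to see how such data can be pulled, the snippet below is a minimal, hypothetical sketch using the unofficial pytrends library to retrieve relative search interest for AI-related terms. It is not the authors' analysis; the keywords, timeframe and geography are illustrative assumptions.

```python
# Sketch: pull relative Google search interest via the unofficial pytrends library.
# Keywords, timeframe and geography below are illustrative, not the report's own query.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(
    kw_list=["artificial intelligence", "ChatGPT"],  # terms to compare
    timeframe="2021-01-01 2023-03-31",               # window ending around the fieldwork dates
    geo="US",                                        # restrict to US searches
)

# Weekly relative search interest, scaled 0-100 by Google Trends
interest = pytrends.interest_over_time()
print(interest.drop(columns=["isPartial"]).tail())
```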

While AI’s prominence has been rising rapidly, as of the time of our polling (March 2023), personal familiarity with leading generative AI tools was much lower, with around a third of adults (33%) saying they had used ChatGPT. This, however, rises to half (51%) of adults under 35, before rapidly falling off for older generations. These figures were far higher for household AI tools such as Amazon’s Alexa or Google Assistant.

People might not quite know what to make of AI yet, but they clearly seem to be intrigued. When we asked people how they felt about AI, the most popular responses were curious (42%) and interested (39%). By contrast, just 19% said that they felt scared, 5% underwhelmed and 3% bored.

When we asked people whether AI was developing faster, slower or around the same speed that they’d expected, only a small minority (6%) reported progress being slower, and similar proportions said faster (42%) or at the speed they’d expected (35%). Notably, this pattern did not seem to change significantly with age, with the younger generations only slightly more likely to believe that progress was happening faster than they’d foreseen.

In order to better understand awareness of the current abilities of AI, we asked our polling sample what abilities they believed AI was already currently capable of doing at a human level.

One of the most famous public AI breakthroughs was IBM’s Deep Blue defeating world chess champion Garry Kasparov in 1997, but even today only 54% of the public believed that AI is capable of playing chess at a human level. A similar proportion seemed to be aware that AI is now capable of transcribing voices, driving cars, recognising handwritten text (40%), or creating computer programs (38%), paintings (36%) or photos (38%).

By contrast, there seemed much less awareness of AI abilities in areas that one might ordinarily think of as needing a ‘human touch’ e.g. creating a new funny joke (27%), or providing therapy or counselling (15%).

As another way of looking at this question, we also gave our respondents a series of scenarios and asked how surprised they would be if a credible source reported each had happened in the next five years.

When we asked our respondents, over half (57%) told us that they would not be surprised to find that a photorealistic scene from a TV show or movie was entirely AI generated, or similarly, that a car manufacturer could start running a factory entirely with robots (50%).

People were less aware of the defence or security implications of AI, with less than half (46%) saying that they would not be surprised if robotic soldiers were used in active warfare in five years’ time, and only 45% saying they would not be surprised if a swarm of autonomous drones was used to assassinate someone in the US.

Ironically, the most surprising scenario according to our respondents would be if an AI chatbot claimed to be conscious and asked to be freed from its programmer - which has reportedly already happened with GPT-4 models.

What should AI be used for?

As well as asking people what AI could be used for, we also wanted to know what they thought it should be used for.

To start, we asked about a range of personal day-to-day use cases. The most popular across all age ranges was using AI to research a topic for the user (45% interested), while conversely one of the least popular use cases was one of the most common AI demos today: drafting emails at work (19%).

We also asked about wider societal uses of AI, including using AI to replace human decision makers. Health diagnosis again scored well here, although the most popular use case was actually detecting welfare fraud. (This did, however, have a strong age gradient, with those over 65 almost twice as likely to select it as those under 25.)

By contrast, our respondents were much less comfortable with AI being used to make decisions about guilt, whether in a criminal or a military context.

Whether or not they would be comfortable with it, we also wanted to know whether our panel thought a human or an AI would make the better decision in a range of scenarios. Here we found that our six scenarios fell into three groups:

  • for highly routine sounding tasks (finding trends in data, transcribing words) our panel thought the AI was likely to do a better job.
  • for assessment tasks (medical diagnosis or hiring), humans were still preferred, but many thought AI could do as good a job.
  • for highly sensitive tasks (therapy, deciding whether to launch a nuclear weapon), only a very small minority wanted to trust the AI.

Automation

For the majority of our panel - especially those middle-aged or younger - AI is likely to have a significant impact on the labour market before they retire. In order to look into this further, we asked a series of questions on what effect our respondents thought AI would have on their jobs and the labour market, and what if anything should be done about it.

To start, we asked what impact our respondents expected AI to have on unemployment. 53% said they expected it to either significantly or somewhat increase it, with just 21% thinking it would have no overall effect. In response to this, a majority (55%) believed that governments should try to prevent human jobs being taken over by AI, compared to 29% who said that they should not try to stop this.

Somewhat surprisingly, 45% of workers were already prepared to say that they believed an AI could do their job better than them within the next decade. These proportions only changed moderately when we extended the time horizon to the distant future, suggesting that many of the sceptics fundamentally believe their job cannot be automated.

Digging into this more granularly, we saw that younger generations (under 35) were more likely to believe that an AI could do their job better in the next decade, while older generations remained more sceptical.

On what the outcome of automation would be, our respondents seemed less certain. Just 29% believed that their job would disappear completely - but equally only 28% expected that automation would allow them to work fewer hours.

Who should be in control of AI?

How should governments respond to the changes in AI?

In our polling, we saw that almost nobody thought that everyone should have unfettered access to AI for potentially harmful use cases. This applied not just to obviously dangerous examples - 74% agreed that you should not be allowed to use an AI tool to help you build a bomb - but all the way through to what might be thought of as more classical free speech and debate. Just 27% thought you should be allowed to ask AI for arguments in support of communism, and 27% for arguments against democracy.

When we asked who should decide what AI tools are allowed to be used for, the most common responses were national governments, regulators and the legal system. 26% believed that this decision should lie in the hands of the developers of the AI system, and 23% the user. Similar views were held across the political spectrum, with even those who had elsewhere identified with more libertarian policy views (economically right, socially left) not seeming to disagree.

Later on, we suggested a range of potential policy ideas to either slow or better control AI development. 24% believed that it was a good idea to ban new research into AI, while a fifth thought an increase in government funding of AI research would be a very bad idea.

By far, the most popular policy idea was the creation of a new government regulatory agency, similar to the Food and Drug Administration (FDA), to regulate the use of new AI models.

Across the whole sample, this was strongly supported, and we saw majority support across every demographic we tested, including: age, gender, party affiliation, highest education level, ethnicity, political ideology, overall tech optimism, whether they’d used ChatGPT or not, and views on the overall level of extinction risk from AI.


AI and Policy

Education

One of the most striking existing use cases of AI models is to help with learning. Current Large Language Models (LLMs) can already act as a personal tutor, help explain a difficult concept, or give advice on how to improve an essay. More concerning to many, they can also just write the essay for you.

In our poll, we wanted to find out more on whether the public thought AI tools were more likely to be an opportunity or threat for learning, and particularly for school aged children.

To start, we asked our sample whether it was more important for children to learn how to use AI tools, or to learn how to do things without the help of AI. Two thirds of our sample said that they believed it was more important to learn how to complete tasks without the help of an AI tool.

As a control however, we also asked the same question for using a calculator and searching the Internet. Interestingly, we saw a divide here: for the calculator we saw a very similar pattern, whereas for searching the Internet parents believed it was more important to learn how to use it.

When we asked about more specific use cases, we saw that in general our sample was relatively relaxed about use cases where AI acted in more of an assistant role - for practice exercises, acting as a personal tutor or helping with spelling and grammar - while being far more cautious about the use of AI during an exam or to complete the first draft of a homework assignment. Interestingly, being a parent made no real difference to views on this.

Finally, we asked a range of questions over whether our sample thought it was right to try and ban AI tools - or whether their use by children was inevitable. In general, our sample agreed with both:

  • 55% agreed that we should ban children from using AI tools for exams, and while less popular, more still agreed than disagreed that we should ban children from using AI tools for homework (36%) or in the classroom (36%).
  • By contrast, 55% believed that it will be impossible to stop children using AI tools for their homework, and 47% that banning AI tools is as pointless as banning the calculator.

Health

As we saw earlier, one of the most popular potential use cases of AI is to help with medical diagnosis. 47% of our sample said that they would support an AI being used in doctor’s offices or hospitals that could diagnose people based on the conditions they describe, compared to 22% who were opposed.

When we looked further into why people opposed the use of a medical AI, the most common objection was a worry that it would give incorrect diagnoses (67%). Another key concern was that it would not deal with the patient in a sympathetic, caring way (50%). By contrast, only a minority seemed concerned about the privacy implications of an AI having information about people’s symptoms (32%).

In order to look further into concerns about accuracy, we asked our respondents how accurate they thought AI would have to be before it was rolled out in doctors’ offices and hospitals - and as a reference, how reliable they thought current human diagnosis was. In general, it seemed that our sample wanted an AI to achieve at least 90% accuracy, although notably only a minority (32%) thought human doctors reached this threshold.

Finally, we also asked about a range of different scenarios, trying to further assess when our sample was comfortable with the use of AI for diagnosis, and when they were not. By far the biggest objections came when patients were forced to use AI, with 63% opposed to this, whereas conversely 68% were in favour when all diagnoses were double checked by a human doctor too.

Intellectual Property

Today’s AI models require massive amounts of training data - generally either text or images - in order to learn the patterns that eventually add up to their current level of intelligence. Much of this training data is currently taken from the largest corpus of information we have: the Internet.

A key current policy debate is the extent to which it is okay for an AI model to learn from the intellectual property of another, generally human, creator. On the one hand, this is the way humans have always learned: taking inspiration from, and blending together, thousands of sources around them. On the other hand, without proper controls, modern AI image models could easily replicate a copyrighted character such as Spider-Man, a real celebrity, or just the distinctive style of an independent artist, potentially putting them out of work.

In order to test our panel’s views on this, we asked them whether they believed AI companies should be allowed to train on any images or texts that are publicly available, only when they have explicit permission to do so, or as an in between option, when the creator has not explicitly opted out of their work being used this way.

Just 21% told us that they believed AI companies should be allowed to train on any publicly available text or images, with around the same proportion (19%) saying they believed AI companies could train on any work where the creator has not opted out. By contrast, 32% believed AI companies should need explicit permission.

As a reference, we also asked our panel three questions in the same form, but this time asking about human learning, rather than AI. Here, we saw a very different pattern: 45% believed a human should be allowed to learn from any text or images, compared to 21% who thought that they should have explicit permission.

When we asked the panel to explain or justify why they had given different answers for humans and AI, it seemed that the majority did not have clearly worked out answers. The most popular answer, by far, was just that ‘humans and computers are different’ (74%).

Surveillance

One of the more controversial use cases of AI is as an aid for law enforcement and surveillance, with many commentators expressing a concern about the use of AI by authoritarian states such as China or Russia.

In our poll, we found support for these concerns, with limited enthusiasm for greater use of AI by the US government. As we saw earlier, just 39% of people were in favour of AI replacing human decision makers when it came to tackling welfare fraud.

However, in another scenario we asked about the use of AI on a train to automatically recognise when a train passenger was posing a threat, and to alert staff or the police. We saw majority support for this, with 56% in favour compared to just 12% opposed. This high level of support was mirrored across demographics, with only those aged 55-64 being marginally less enthusiastic.

When we asked whether this system would have an impact, the public seemed positive, with 47% believing it would make train travel safer compared to 10% saying it would reduce safety.

In another scenario, we asked about the use of AI for automatic age verification at a supermarket. While less enthusiastic, here again we saw 35% support and 34% opposition. In this case, the youngest age groups were actually more likely to be in favour - perhaps because they are the group most often inconvenienced by a human check.

Overall, 53% told us that they believed an AI would be as or more accurate than a human at identifying a person’s age from their face, and on average they believe an AI would be accurate most of the time (defined as 60-89% accuracy).


The Future of AI

Artificial General Intelligence (AGI)

AI algorithms have already met or exceeded human abilities in many areas, including chess, handwriting recognition and brute-force computation. How long will it be before they meet or exceed human abilities across all types of cognitive task?

In a recent survey of AI experts, the median forecast time to a human level artificial general intelligence (AGI) was 37 years, implying its arrival in 2059. By contrast, at the time of writing, one of the leading prediction markets implied that AGI was likely to arrive considerably earlier.


In our survey, the median expected time frame for the arrival of AGI matched this earlier prediction: 2030-2039. Indeed, a not insignificant proportion (21%) believed that an AGI had already been developed.

When we asked what an AI would have to do to be as smart as a human in their view, the most popular view was similar to the classic Turing test: being capable of holding a conversation with a human (47%). Nearly as common a response was being capable of creativity (43%) or discovering a new scientific idea (40%). Just 12% took the hard line position that an AI could never be as smart as a human.

When we asked which animal most closely matched the intelligence of today’s most advanced AI, the most common answer (after Don’t Know) was already a human adult (29%), with just 10% thinking it was closest to a dog, 2% to a pig or 1% a sheep.

Emotions, Sentience and Rights

We next asked a very similar question, but this time asking what an AI would have to do to be considered conscious, rather than focusing on its intelligence. Again, our survey respondents largely focused on qualia as the dividing line, with 58% pointing to the ability to feel emotions and 49% the ability to feel pain.

At the moment, few (8%) believed that the most advanced AI are currently able to feel pain. By contrast, our panel was very willing to acknowledge that other animals could experience pain, with 93% saying dogs and 94% saying cats felt pain, 66% saying a goldfish felt pain and even 60% saying ants could.

Suppose an AI is created that is either as smart as a human or can experience emotions in the same way. What does that imply for how we should treat it?

In our survey, just 20% thought that human level intelligence was enough to deserve to be treated equally, compared to 63% who thought it should not be treated equally. When we changed the comparison to an ability to be able to feel pain, this increased to 29% believing it should be treated equally, although 50% still disagreed.

Looking in more depth, our panel was generally opposed to a human level AGI having civil rights: roughly two-thirds said it should not be allowed to vote in an election (66%) or marry a human (62%), and only 24% believed that an AI should have the right to refuse to do a task it does not want to.

Risks from Advanced AI

Many experts believe that the development of human or more advanced AGI could be one of the most significant future risks to human civilisation. At the moment, we do not understand how to design AI to ensure that they only fulfil the goals we intend for them, while AI could also be an extremely powerful weapon in the hands of hostile actors.

As part of our polling, we asked our respondents which of a series of risks they were most worried about in the next fifty years. The biggest reported concern was a major international war (79%), followed by terrorism (78%), nuclear war (77%) and climate change (66%).

By contrast, AI was clearly in a different tier, with 56% saying that they were worried compared to 36% who were not. That said, it was clear that our panel wasn't completely ignoring the risks - and concern was definitely greater than for another science-fiction-sounding concept, such as an asteroid strike.

In a follow up question, we asked which of the same list of dangers respondents believed there was a real risk that it could cause a breakdown in human civilisation in the next fifty years. Here, AI scored lowest, albeit with 28% still believing this was a real risk.

When we just asked about the overall impact of advanced AI:

  • 28% thought it would make us richer, compared to 25% who thought it would make us poorer.
  • 32% thought it would make us safer, compared to 22% who thought it would make us less safe.

The most important perceived risk from advanced AI was still increasing unemployment (44%), but many were also still worried about the danger from military robots (34%).

30% said that they worried an advanced AI might try to take over or destroy human civilisation - the same proportion as said they were concerned an advanced AI might increase the amount of misinformation and deception on the Internet.

Finally, we asked respondents to give their overall probability that an advanced AI could cause humanity to go extinct in the next hundred years.

Of those who expressed an opinion, around half thought that this was highly unlikely: below 1 in 100. By contrast, just over a quarter (26%) thought there was a significant probability, of at least 10%.

When we asked those who thought an existential risk from AGI was highly unlikely, the answers were not necessarily reassuring. Just 9% of these said that they did not believe advanced AI will be developed in the next hundred years, while conversely 53% believed that human civilisation is more likely to be destroyed by other factors.


Notes

About the Poll

Fieldwork: 8th Mar - 19th Mar 2023

Interview method: Online Survey

Population represented: US Adults

Sample size: 2,052

Methodology: All results are weighted using Iterative Proportional Fitting, or ‘raking’. The results are weighted by interlocking age & gender, region and social grade to nationally representative proportions.
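
For readers unfamiliar with raking, the sketch below is a minimal illustration of how Iterative Proportional Fitting re-weights a sample so that its weighted marginals match population targets. It is not Public First's actual weighting code, and the column names, categories and target proportions are hypothetical.

```python
# Minimal sketch of Iterative Proportional Fitting ('raking').
# Column names and target proportions are hypothetical, for illustration only.
import pandas as pd

def rake(df, targets, weight_col="weight", max_iter=100, tol=1e-6):
    """Adjust respondent weights so that, for each weighting dimension,
    the weighted share of every category matches its target proportion."""
    w = df[weight_col].astype(float).copy()
    total = w.sum()  # keep the overall weighted total fixed

    for _ in range(max_iter):
        max_shift = 0.0
        for col, proportions in targets.items():
            for category, share in proportions.items():
                mask = df[col] == category
                current = w[mask].sum()
                if current > 0:
                    factor = (share * total) / current  # scale this category to its target
                    w.loc[mask] *= factor
                    max_shift = max(max_shift, abs(factor - 1.0))
        if max_shift < tol:  # every margin already matches its target closely
            break

    out = df.copy()
    out[weight_col] = w
    return out

# Hypothetical example: 2,000 rows, two weighting dimensions with made-up targets.
sample = pd.DataFrame({
    "age_gender": ["18-34 M", "18-34 F", "35+ M", "35+ F"] * 500,
    "region": ["Northeast", "South", "Midwest", "West", "South"] * 400,
})
sample["weight"] = 1.0

targets = {
    "age_gender": {"18-34 M": 0.15, "18-34 F": 0.15, "35+ M": 0.34, "35+ F": 0.36},
    "region": {"Northeast": 0.17, "Midwest": 0.21, "South": 0.38, "West": 0.24},
}

weighted = rake(sample, targets)
# Check: weighted regional shares now match the targets
print(weighted.groupby("region")["weight"].sum() / weighted["weight"].sum())
```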

Public First is a member of the BPC and abides by its rules. For more information please contact the Public First Polling Team.

Artwork

Artwork produced by AI, with the following prompts:

Header: An impressionist art painting of a neural net (DALL-E)

Today: Black and white picture of students using tablet computers in a classroom to learn (DALL-E)

Policy: Human neural network full body scan, nerves and tissue (DALL-E)

Future: Humans using AIs as prisoners, graphic novel style, black and white (DALL-E)

Notes: AI creating the future, digital art (DALL-E)