What does the public think about AI?

UK, 2023


Authors: Jonathan Dupont, Seb Wride & Vinous Ali


Introduction and Summary

“It's exciting, but also scary.” (Survey Respondent)

It has long seemed likely that Artificial Intelligence (AI) would be the next General Purpose Technology, radically transforming our economy and society. Forecasting the timing of these changes is always difficult, however, with AI having passed through multiple slowdowns and ‘AI winters’ in the past.

In the past year, however, it has become clear that the decade-long increase in fundamental AI capabilities has crossed a threshold where they now have practical, economically significant applications. AI systems are no longer a research curiosity, but the pressing focus of the world’s largest companies. From ChatGPT to Copilot, Stable Diffusion to LaMDA, meaningful advances in AI seem to be happening on a near weekly basis. AI systems can now pass medical or legal exams, increase white-collar productivity in many occupations by over 50%, generate new photorealistic images, and help power more dangerous, autonomous weapons.

In order to learn more about how the public is responding to these changes - and what might come next - Public First ran an extensive new, nationally representative poll of 2,000 adults in the UK, asking their opinions on a range of AI-related issues.

AI Today

The public has noticed the recent increase in AI capacities

  • Just 5% of our survey respondents said they were not familiar with AI, while nearly half of adults under 35 (46%) said they had already used ChatGPT. Only 8% said that recent AI progress had been slower than they expected, while 38% said it had been faster.
  • Overall, we saw mixed emotions around the rise of AI. The most commonly expressed emotion was curiosity, but otherwise we saw nearly equal excitement and worry.
  • The public expects recent progress in AI to continue. Over two-thirds (67%) told us that they would not be surprised to learn that, in the next five years, a photorealistic scene from a TV or movie show was entirely AI generated, or that a car manufacturer could start running a factory entirely with robots (67%). 53% said that they would not be surprised if robotic soldiers were used in active warfare.
  • When we asked about different use cases, the public were highly supportive of using AI to give earlier warning of new medical conditions or help detect welfare fraud. By contrast, they were much less comfortable with AI being used to decide or to advise when it came to detecting guilt, either in a criminal or military context.

AI transforming current jobs is no longer seen as a far off possibility

  • 40% of current workers told us that they believed an AI could do their job better than them in the next decade, while 64% said that they expected AI to somewhat or significantly increase unemployment.
  • In response to this, a majority (57%) said that they believed that Governments should try to prevent AI taking human jobs.
  • Just 30% believed that their job would disappear completely if an AI could do it better than them, but equally only 27% expected that automation would allow them to work fewer hours.

Across all demographics, there is strong support for greater regulation of AI

  • When we asked who should decide what AI tools are allowed to be used for, by far the most common responses were national governments, regulators and the legal system. Just 21% believed that this decision should lie in the hands of the developers of the AI system, and 16% the user.
  • 62% of respondents supported the creation of a new government regulatory agency, similar to the Medicines and Healthcare Products Regulatory Agency (MHRA), to regulate the use of new AI models. This was supported by nearly every demographic, no matter their age, political allegiance, economic liberalism or general level of tech optimism.

AI and Policy

In education, the majority of respondents believed AI tools should be banned for homework and exams, but allowed as a wider learning aid.

  • In general, 67% of respondents believed that it was more important for children at school to learn how to do things without the help of AI, rather than to learn how to use AI tools. 63% believed they should be banned during exams, and 41% during homework.
  • Despite this, most recognised that preventing the use of AI tools outside exams would be very difficult or impossible. 61% agreed that it would be impossible to stop children using AI tools for their homework, and 47% that banning them was as pointless as banning the calculator.
  • Equally, 48% agreed that children needed to learn AI tools for the rest of their career, and in general our respondents were in favour of the use of AI tools as a personal tutor, or to help take and organise notes.

The public have mixed views on whether AI companies should need explicit permission from the creators of the text or images they train on.

  • Just 20% told us that they believed AI companies should be allowed to train on any publicly available text or images, with around the same proportion (21%) saying they should be allowed to train on any work where the creator has not opted out. By contrast, 37% believed AI companies should always need explicit permission.

The public seem broadly in favour of the greater use of AI based surveillance and law enforcement

  • In one scenario, we asked about the use of AI on a train to automatically recognise when a train passenger was posing a threat, and to alert staff or the police. We saw overwhelming support for this, with 67% in favour compared to just 9% opposed. 59% said that they believed it would make train travel safer compared to 8% saying it would reduce safety.
  • In another scenario, we asked about the use of AI for automatic age verification at a supermarket. 44% supported this, compared to 27% opposed, with the greatest support coming from the youngest age groups. 63% of the public told us that they believed an AI would be as or more accurate than a human at identifying a person’s age from their face.

The Future of AI

A human level AI within our lifetimes is no longer seen as science fiction

  • On average, the respondents to our survey expected a human level artificial general intelligence (AGI) to arrive between 2030-2039. This is around the same time frame as current predictions in leading prediction markets, albeit faster than the consensus of AI experts.
  • When asked to compare the intelligence of the most advanced AIs today to other animals, the closest analogue was seen to be a human adult (27%), with just 10% thinking their intelligence was closest to a dog, 2% to a pig or 1% a sheep.

Even if an AGI was as smart as a human, or could experience emotions in the same way, most people think it should not have human level rights

  • In order to consider an AI conscious, 62% of our respondents said that it would have to be able to experience emotions, and 52% that it would have to be able to feel pain. At the moment, however, only 5% believed that the most advanced AIs are currently able to feel pain, compared to 70% for a goldfish.
  • Even if an AI could feel on the same level as a human, just 30% believed it should be treated equally. Similarly, only 25% believed that a human level AGI should have the right to refuse a task it does not want to do.

A significant proportion of the public are starting to take seriously the potential catastrophic risks from advanced AI

  • Overall, 32% thought advanced AI would make us safer, compared to 18% who thought it would make us less safe. When asked about specific risks from advanced AI, the most important were perceived to be increasing unemployment (49%) and more dangerous military robots (39%).
  • However, a significant minority were also worried about existential risks from AI. 29% said that an important risk was an advanced AI trying to take over or destroy human civilisation, and 20% thought it was a real risk that it could cause a breakdown in human civilisation in the next fifty years.
  • When asked their judgement on the overall probability that an advanced AI could cause humanity to go extinct in the next hundred years, around half thought that this was highly unlikely: below 1 in 100. By contrast, just over a fifth (21%) thought there was a significant probability of this, of at least 10%.

AI Today

How familiar are people with AI?

In the last few years, AI has been rising rapidly in public prominence. In our polling, just 5% said that they were not familiar with Artificial Intelligence (AI), although only 55% said that they would be able to explain what it was. When we followed up, on average, people said that they had first heard about AI 3-5 years ago.

Looking at Google Trends data, we can see more specifically that interest in AI has significantly accelerated in the last year or so.
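
The report does not specify how its Google Trends figures were pulled, but as a rough sketch of how a similar check could be reproduced, the snippet below assumes the unofficial pytrends package; the keyword, region and timeframe are illustrative rather than the exact query behind the data used here.

```python
# A rough sketch of reproducing a similar trend check, assuming the unofficial
# pytrends package; the keyword, region and timeframe are illustrative only.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-GB", tz=0)
pytrends.build_payload(["artificial intelligence"], timeframe="today 5-y", geo="GB")

interest = pytrends.interest_over_time()  # weekly search interest, indexed 0-100
print(interest["artificial intelligence"].tail(12))  # the most recent weeks
```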

While AI’s prominence has been rising rapidly, at the time of our polling (March 2023), personal familiarity with leading generative AI tools was much lower, with around a third of adults (32%) saying they had used ChatGPT. This, however, rises to nearly half (46%) of adults under 35, before rapidly falling off for older generations.

People might not quite know what to make of AI yet, but they clearly seem to be intrigued. When we asked people how they felt about AI, the most popular responses were around curiosity (46%) or being interested (42%). By contrast, just 17% said that they felt scared, 5% underwhelmed and 2% bored.

When we asked people in their own words how they felt about the development of AI technologies, however, the responses were often more concerned, with examples including:

"It's exciting, but also scary."



"Worried that the consequences of AI are ruining human behaviour."



"Excited but genuinely slightly worried."



"Whilst I can see sone benefits , thru can also be a bit daunting as I feel we're being encouraged to be less proactive reusing our own brains. I am concerned about the negative effect this will have on our brains' capacity!"



"That they will disadvantage everyone except straight white middle class men."



"Progress in the right hands is OK In the wrong hands very dangerous."



"It is inevitable."



"Be very wary, it can spiral out of control."

       Survey Respondents

When we asked people whether AI was developing faster, slower or around the same speed that they’d expected, only a small minority (8%) reported progress being slower, and roughly equal proportions said faster (38%) or at the speed they’d expected (41%). Notably, this pattern did not seem to change significantly with age, with older generations only slightly more likely to say that progress was happening faster than they’d foreseen.

In order to better understand awareness of the current abilities of AI, we asked our polling sample which abilities they believed AI was currently capable of at a human level - and, for those abilities they thought it could not currently achieve at a human level, whether it would be able to within ten years.1

One of the most famous public AI breakthroughs was IBM’s Deep Blue defeating world chess champion Garry Kasparov in 1997, and even today in our polling the majority (66%) believed that AI is capable of playing chess. Similarly, a significant proportion seemed aware that AI is now capable of transcribing voices, driving cars, recognising handwritten text, or creating computer programs, paintings or photos.

By contrast, there seemed to be much less awareness of abilities that could be thought of as more emotional or human: creating a new funny joke (27%), or providing therapy or counselling (15%).

As another way of looking at this question, we also gave our respondents a series of scenarios and asked them how surprised they would be if a credible source reported each one as having happened within the next five years.

Over two-thirds of respondents (67%) told us that they would not be surprised to find that a photorealistic scene from a TV or movie show was entirely AI generated, or similarly, that a car manufacturer could start running a factory entirely with robots (67%).

People seemed less certain about the defence or security implications of AI, with only a small majority (53%) saying that they would not be surprised if robotic soldiers were used in active warfare, and 51% saying they would be surprised if a swarm of autonomous drones was used to assassinate someone in the UK.

Ironically, the most surprising scenario according to our respondents would be if an AI chatbot claimed to be conscious and asked to be freed from its programmer - which has reportedly already happened with GPT-4 models.

What should AI be used for?

As well as asking people what AI could be used for, we also wanted to know what they thought it should be used for.

To start, we asked about a range of personal day-to-day use cases. The most popular across all age ranges was using AI to give early warnings of new medical conditions (52% interested), while conversely one of the least popular use cases was one of the most common functions of AI today: drafting emails at work (21%).

We also asked about wider societal uses of AI, and using AI to either replace or advise human decision makers. Health diagnosis again scored well here, although the most popular use case was actually detecting welfare fraud. (This did, however, have a strong age gradient, with those over 65 almost twice as likely to select it as those under 25.)

By contrast, our respondents were much less comfortable with AI being used either to decide or just to advise when it came to detecting guilt, either criminally or in a military context.

Whether they would be comfortable or not, we also wanted to know more objectively whether our panel thought a human or an AI would make the better decision in a range of scenarios. Here we found our six scenarios fell into three groups:

  • for highly routine sounding tasks (finding trends in data, transcribing words) our panel thought the AI was likely to do a better job
  • for diagnosis tasks (either medical or hiring), humans were still preferred, but many thought AI could do as good a job
  • for highly sensitive tasks (therapy, deciding whether to launch a nuclear weapon), only a very small minority wanted to trust the AI

Automation

For the majority of our panel - especially those middle aged or younger - AI is likely to have a significant impact on the labour market before they retire. In order to look into this further, we asked a series of questions on what effect our respondents thought AI would have on their jobs and the labour market and what, if anything, should be done about it.

To start, we asked what impact our respondents expected AI to have on unemployment. 64% said they expected it to either significantly or somewhat increase it, with just 20% thinking it would have no overall effect. In response to this, a majority (57%) believed that Governments should try to prevent human jobs from being taken over by AI or robots, compared to 29% who said that they should not try to stop this.

Somewhat surprisingly, 40% of workers were already prepared to say that they believed an AI could do their job better than them in the next decade. These proportions only changed moderately when we extended the time horizon to the distant future, suggesting that many of the sceptics just fundamentally think their job is not able to be automated.

Digging into this at a more granular level, we saw that younger generations (under 35) were more likely to believe that AI could do their job better in the next decade, while older generations remained more sceptical.

On what the outcome of automation would actually be, our respondents seemed less clear. Just 30% believed that their job would disappear completely - but equally only 27% expected that automation would allow them to work fewer hours.

Who should be in control of AI?

How should Governments respond to the changes in AI?

In our polling, we saw that almost nobody thought that everyone should have unfettered access to AI for potentially harmful use cases. This applied not just to obviously dangerous examples - 81% agreed that you should not be allowed to use an AI tool to help you build a bomb - but all the way through to what might be thought of as more classical free speech and debate. Just 25% thought you should be allowed to ask AI for arguments in support of communism, and 26% for arguments against democracy.

When we asked who should decide what AI tools are allowed to be used for, by far the most common responses were national governments, regulators and the legal system. Just 21% believed that this decision should lie in the hands of the developers of the AI system, and 16% the user. Similar views were held across the political spectrum, with even those who had elsewhere identified with more libertarian policy views (economically right, socially left) not seeming to disagree.

Later on, we suggested a range of potential policy ideas to either slow or better control AI development. Just 20% believed that it was a good idea to ban new research into AI, and conversely 42% believed there should actually be increased government funding of AI research.

By far, the most popular policy idea was the creation of a new government regulatory agency, similar to the Medicines and Healthcare Products Regulatory Agency (MHRA), to regulate the use of new AI models.

Across the whole sample, this was supported by 62% of respondents - and more broadly we saw majority support across every demographic we tested, including: age, gender, social class, Brexit vote, voting intention, highest education level, ethnicity, familiarity with science fiction, political ideology, overall tech optimism, whether they’d used ChatGPT or not, and views on the overall extinction risk from AI.


AI and Policy

Education

One of the most striking existing use cases of AI models is to help with learning. Current LLMs can already act as a personal tutor, help explain a difficult concept, or give advice on how to improve an essay. More concerning to many, they can also just write the essay for you.

In our poll, we wanted to find out more on whether the public thought AI tools were more likely to be an opportunity or threat for learning, and particularly for school aged children.

To start, we asked our sample whether it was more important for children to learn how to use AI tools, or to learn how to do things without the help of AI. Overwhelmingly, by a ratio of nearly three to one, our sample said that they believed it was more important to learn how to complete tasks without the help of an AI tool.

As a control, however, we also asked the same question about a calculator and about just searching the Internet. Interestingly, we saw a divide here: for the calculator we saw a very similar pattern, whereas for searching the Internet parents believed it was more important to learn how to use it.

When we asked about more specific use cases, we saw that in general our sample was relatively relaxed about AI acting in more of an assistant role - organising notes, acting as a personal tutor or helping with spelling and grammar - while being far more cautious around the use of AI during an exam or to complete the first draft of a homework assignment. Interestingly, being a parent seemed to make no real difference to views on this.

Finally, we asked a range of questions over whether our sample thought it was right to try and ban AI tools - or whether their use by children was inevitable. In general, our sample agreed with both:

  • 63% agreed that we should ban children from using AI tools for exams, and, while less popular, more still agreed than disagreed that we should ban children from using AI tools for homework (41%) or in the classroom (35%)
  • By contrast, 61% believed that it will be impossible to stop children using AI tools for their homework, and 47% that banning AI tools is as pointless as banning the calculator

Health

As we saw earlier, one of the most popular potential use cases of AI is to help with medical diagnosis. 45% of our sample said that they would support an AI being used in GP surgeries or hospitals that could diagnose people based on the conditions they describe, compared to 24% who were opposed.

When we dug deeper into why people opposed the use of a medical AI, the most common objection was a worry that it would give incorrect diagnoses (67%). Another key concern was that it would not deal with the patient in a sympathetic, caring way (56%). By contrast, only a minority seemed concerned about the privacy implications of an AI having information about a person’s symptoms (27%).

In order to look further into concerns about accuracy, we asked our respondents how accurate they thought AI would have to be before it was rolled out in doctors’ offices and hospitals - and, as a reference, how reliable they thought current human diagnosis was. In general, it seemed that our sample wanted an AI to achieve at least 90% accuracy, although notably only a minority (34%) thought human doctors reached this threshold.

Finally, we also asked about a range of different scenarios, trying to further assess when our sample was comfortable with the use of AI for diagnosis and when they were not. By far the biggest objections came when patients were forced to use the AI algorithm, with 70% opposed to this, whereas 76% were in favour when all diagnoses were double checked by a human doctor too.

Intellectual Property

Today’s AI models require massive amounts of training data - generally either text or images - in order to learn the patterns that eventually add up to their current level of intelligence. Much of this training data is currently taken from the largest corpus of information we have: the Internet.

A key current policy debate is the extent to which it is okay for an AI model to learn from the intellectual property of another, generally human, creator. On the one hand, this is the way humans have always learned: taking inspiration from, and blending together, thousands of sources around them. On the other, without proper controls, modern AI image models could easily replicate a copyrighted character such as Spider-Man, a real celebrity, or just the distinctive style of an independent artist, potentially putting them out of work.

In order to test our panel’s views on this, we asked them whether they believed AI companies should be allowed to train on any images or texts that are publicly available, only when they have explicit permission to do so, or as an in between option, when the creator has not explicitly opted out of their work being used this way.

Just 20% told us that they believed AI companies should be allowed to train on any publicly available text or images, with around the same proportion (21%) saying they believed AI companies could train on any work where the creator has not opted out. By contrast, 37% believed AI companies should need explicit permission.

As a reference, we also asked our panel three questions in the same form, but this time asking about human learning, rather than AI. Here, we saw a very different pattern: 57% believed a human should be allowed to learn from any text or images, compared to 16% who thought that they should have explicit permission.

When we asked the panel to explain or justify why they had given different answers for humans and AI, it seemed that the majority did not really have clearly considered answers. The most popular answer, by far, was just that ‘humans and computers are different’ (78%).

Surveillance

One of the more controversial use cases of AI is as an aid for law enforcement and surveillance, with many commentators expressing a concern about the use of AI by authoritarian states such as China or Russia.

In our poll, however, we saw that the public were in general much more relaxed, and often actively in favour of the greater use of AI by the UK government. As we saw earlier, the most popular societal use case for replacing human decision making with AI came in tackling welfare fraud.

In another scenario, we asked about the use of AI on a train to automatically recognise when a train passenger was posing a threat, and to alert staff or the police. We saw overwhelming support for this, with 67% in favour compared to just 9% opposed. This high level of support was broadly mirrored across demographics, with only the very youngest (18-24 year olds) being marginally less enthusiastic.

When we asked whether this system would have an impact, the public seemed largely positive, with 59% believing it would make train travel safer compared to 8% saying it would reduce safety.

In another scenario, we asked about the use of AI for automatic age verification at a supermarket. While less enthusiastic, here again we saw more overall support (44%) than opposition (27%). In this case, the youngest age groups were actually more likely to be in favour - perhaps because they are the ones who most often face the extra inconvenience of a human check.

Overall, 63% told us that they believed an AI would be as or more accurate than a human at identifying a person’s age from their face, and on average they believed an AI would be accurate most of the time (defined as 60-89% accuracy).


The Future of AI

Artificial General Intelligence (AGI)

AI algorithms have already met or exceeded human abilities in many areas, including playing chess, handwriting recognition or brute force computation. How long will it be before they meet or exceed human abilities across all types of cognitive task?

In a recent survey of AI experts, the median forecast time to a human level artificial general intelligence (AGI) was 37 years, implying its arrival in 2059. By contrast, at the time of writing, one of the leading prediction markets implied that AGI was likely to arrive significantly sooner than this.


In our survey, the median time frame for the arrival of AGI matched the prediction markets: 2030-2039. Indeed, a not insignificant proportion (21%) believed that such an AI had already been developed.

When we asked what an AI would have to be able to do to be considered as smart as a human, one of the most popular answers was similar to the classic Turing test: being capable of holding a conversation with a human (46%). Nearly as common a response was being capable of creativity (44%) or discovering a new scientific idea (37%). Just 13% took the hard line position that an AI could never be as smart as a human.

Interestingly, however, the most common response of all was that the AI would have to be able to feel emotions (49%).

When we asked which animal most closely matched the intelligence of advanced AIs, the most common answer (after Don’t Know) was already a human adult (27%), with just 10% thinking it was closest to a dog, 2% to a pig or 1% a sheep.

Emotions, Sentience and Rights

We next asked a very similar question, but this time asking what an AI would have to do to be considered conscious, rather than focusing on its intelligence. Again, our survey respondents largely focused on qualia as the dividing line: with 62% pointing to the ability to feel emotions, and 52% the ability to feel pain.

At the moment, almost nobody (5%) believed that the most advanced AI is currently able to feel pain. By contrast, our panel was very willing to acknowledge that other animals could experience pain, with 97% saying dogs or cats felt pain, 70% saying a goldfish felt pain and even 64% saying ants could.

Suppose an AI is created that is either as smart as a human, or can experience emotions in the same way. What does that imply for how we should treat it?

In our survey, just 16% thought that human level intelligence was enough to deserve to be treated equally, compared to 68% who thought it should not be treated equally. When we changed the comparison to an ability to be able to feel pain, this increased to 30% believing it should be treated equally, although 49% still disagreed.

Looking in more depth, our panel were generally opposed to a human level AGI having civil rights: 68% said it should not be allowed to vote in an election or marry a human, and only 25% believed that an AI should have the right to refuse to do a task it does not want to do.

Risks from Advanced AI

Many experts believe that the development of human or more advanced AGI could be one of the most significant future risks to human civilisation. At the moment, we do not understand how to align AIs to ensure that they only fulfil the goals we intend for them, while AI could also be an extremely powerful weapon in the hands of hostile actors.

As part of our polling, we asked our respondents which of a series of risks they were most worried about for the next fifty years. The biggest reported concern was a major international war (83%), followed by nuclear war (79%), terrorism (78%) and climate change (77%).

By contrast, AI was clearly in a different tier, with just 49% saying that they were worried, compared to 46% who said they were not. That said, it was clear that our panel weren’t completely ignoring the risk - and concern was definitely greater than for another science-fiction-sounding concept, such as an asteroid strike.

In a follow up question, we asked which of the same list of dangers respondents believed posed a real risk of causing a breakdown in human civilisation in the next fifty years. Here, AI scored lowest, albeit with 20% still believing this was a real risk.

When we just asked about the overall impact of advanced AI:

  • 31% thought it would make us richer, compared to 22% who thought it would make us poorer
  • 32% thought it would make us safer, compared to 18% who thought it would make us less safe

The most important perceived risk from advanced AI was still increasing unemployment (49%), but many were also still worried about the danger from military robots (39%).

29% said that they worried an advanced AI might try to take over or destroy human civilisation - the same proportion as said they were concerned that an advanced AI might increase the amount of misinformation and deception on the Internet.

Finally, we asked respondents to give their overall probability that an advanced AI could cause humanity to go extinct in the next hundred years.

Of those who expressed an opinion, around half thought that this was highly unlikely: below 1 in 100. By contrast, just over a quarter (27%) thought there was a significant probability of this, of at least 10%.

When we asked those who thought an existential risk from AGI was highly unlikely, the answers were not necessarily reassuring. Just 12% of these said that they did not believe advanced AIs will be developed in the next hundred years, while conversely 61% believed that human civilisation is more likely to be destroyed by other factors.


Notes

About the Poll

Fieldwork: 8th Mar - 14th Mar 2023
Interview method: Online survey
Population represented: UK Adults

Sample size: 2003

Methodology: All results are weighted using Iterative Proportional Fitting, or ‘raking’. The results are weighted by interlocking age & gender, region and social grade to nationally representative proportions.
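
For readers unfamiliar with raking, the sketch below illustrates the basic idea in Python: respondent weights are repeatedly rescaled so that each weighting variable's weighted distribution matches its target proportions. The column names and target figures are hypothetical placeholders for illustration, not the actual targets used in this poll.

```python
# A minimal sketch of Iterative Proportional Fitting ("raking"); column names
# and targets below are hypothetical, not the poll's actual weighting scheme.
import pandas as pd

def rake(df, targets, weight_col="weight", max_iter=100, tol=1e-6):
    """Rescale weights until each variable's weighted category shares match
    the target proportions given in `targets`."""
    df = df.copy()
    df[weight_col] = 1.0
    for _ in range(max_iter):
        max_shift = 0.0
        for var, target in targets.items():
            # Current weighted share of each category for this variable
            shares = df.groupby(var)[weight_col].sum() / df[weight_col].sum()
            factors = pd.Series(target) / shares          # per-category adjustment
            df[weight_col] *= df[var].map(factors)
            max_shift = max(max_shift, (factors - 1).abs().max())
        if max_shift < tol:                               # converged
            break
    df[weight_col] *= len(df) / df[weight_col].sum()      # normalise to mean 1
    return df

# Hypothetical usage, with made-up target proportions:
# targets = {
#     "age_gender": {"18-34 Male": 0.14, "18-34 Female": 0.14,
#                    "35+ Male": 0.35, "35+ Female": 0.37},
#     "region": {"London": 0.13, "Rest of UK": 0.87},
#     "social_grade": {"ABC1": 0.55, "C2DE": 0.45},
# }
# weighted = rake(respondents, targets)
```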

Public First is a member of the BPC and abides by its rules. For more information please contact the Public First Polling Team.

Artwork

Artwork produced by AI, with the following prompts:

Header: An impressionist art painting of a neural net (DALL-E)

Today: Black and white picture of students using tablet computers in a classroom to learn (DALL-E)

Policy: Human neural network full body scan, nerves and tissue (DALL-E)

Future: Humans using AIs as prisoners, graphic novel style, black and white (DALL-E)

Notes: AI creating the future, digital art (DALL-E)

  1. It is worth noting that this question took the form of a ‘select all that apply’ list, which we know tends to be biased downwards, with respondents not always selecting everything they agree with.