Apr 8, 2025
The research, conducted as part of an exhibition at Science Gallery London, surveyed over 2,000 visitors on their views about artificial intelligence (AI) in healthcare. The survey found that 80% of respondents said AI should be used in medicine, while just over half (56%) felt it would be safe. When it came to trusting AI with major decisions, however, participants were more wary: more than 70% rejected the idea that AI could take over doctors' roles. Even if AI made fewer mistakes than a human, respondents were not comfortable letting it act alone.
Most people would not be happy for AI to make decisions without considering their feelings, the researchers noted. The study aimed to fill a gap in the existing research landscape, as most previous studies have focused on physician or patient perspectives rather than those of the general public. Notably, older respondents (aged 50+) were more likely than younger participants to consider AI safe (62% versus 55%). Gender differences were also evident: 88% of men supported AI's implementation in healthcare systems, compared to 77% of women.
The results align with existing research on the attitudes of healthcare professionals, particularly radiologists, who generally view AI as a complementary tool rather than a replacement. The findings also underscore two critical factors for AI adoption in medicine: public consent and transparent communication. Trust in AI, the study suggests, will depend on clear explanations of how these technologies operate and on assurance that human oversight remains central to clinical decision-making.