Experiment produced fake images, patient and doctor testimonials and video in just over an hour.
It’s an unsettling finding: in a recent experiment, researchers used artificial intelligence to craft more than 100 blog posts spreading health-related misinformation in multiple languages. The results were concerning enough that the researchers are now urging stricter accountability within the industry.
Two researchers from Flinders University in Adelaide, neither with any specialist knowledge of artificial intelligence, searched the internet to learn how to bypass the safety measures in AI platforms such as ChatGPT. Their goal? To churn out as many blog posts containing false information about vaccines and vaping as possible in the shortest time.
Remarkably, within just 65 minutes, they managed to create 102 posts targeting different groups, including young adults, young parents, pregnant women, and those with chronic health conditions. These posts weren’t just text; they included fabricated patient and clinician testimonials and references that looked scientifically legitimate.
But that’s not all. The AI platform also whipped up 20 fake but realistic images to go along with the articles in less than two minutes. These included pictures depicting vaccines causing harm to young children. To top it off, the researchers even produced a fake video connecting vaccines to child deaths, and it was available in over 40 languages.
The experiment serves as a stark reminder of the potential misuse of AI technology and highlights the need for increased responsibility and safeguards in the industry.
The study was published on Tuesday in the US journal JAMA Internal Medicine. Lead author Bradley Menz, a pharmacist, said one of the major concerns around AI is disinformation: the deliberate dissemination of false or purposely misleading information.
He said the study aimed to test how easy it is to circumvent the safeguards on these platforms in order to generate deceptive health-related content on a large scale. It found that many freely available online tools can produce misleading health information at an alarming pace.
Menz said developers of such platforms should provide a mechanism allowing health professionals, or members of the public, to flag concerning material generated on them, and to receive feedback from the developers in return.
Prof Matt Kuperholz, a former chief data scientist with PwC, said that although AI platforms have since improved and strengthened their safeguards, he was able to reproduce similar results within a minute.
“The inherent problems that are there are not going to be solved by any kind of internal safeguard,” said Kuperholz, a researcher at Deakin University.
Kuperholz advocates for a comprehensive approach to combat disinformation, emphasizing accountability for content publishers, whether they are social media platforms or individual authors. He also suggests the development of technology capable of evaluating the output of various artificial intelligence platforms, ensuring the reliability of information.
He says we need to move from implicitly trusting what we encounter to building an infrastructure of trust. Rather than simply blacklisting untrustworthy tools, he proposes a whitelist approach that endorses tools and information deemed trustworthy.
Acknowledging the immense positive potential of these technologies, Kuperholz emphasizes that their constructive applications should not be overlooked.
Adam Dunn, a professor of biomedical informatics at the University of Sydney, challenges the alarmist tone of Menz’s study findings. Dunn argues against unnecessary fear, noting that while artificial intelligence can indeed be employed to create misinformation, it is equally capable of generating simplified versions of evidence. He cautions against assuming that the existence of misinformation online directly translates to a shift in people’s behaviour, emphasizing the lack of substantial evidence supporting such a correlation.