According to a report from the UK government, artificial intelligence might heighten the risk of cyber-attacks and undermine trust in online content by 2025.
The report says the technology could also be used by terrorists to plan biological or chemical attacks, though some experts doubt it will develop as projected.
Prime Minister Rishi Sunak is expected to discuss the opportunities and challenges presented by this technology on Thursday.
The government’s report specifically examines generative AI, the kind of system that currently powers popular chatbots and image generation software. It draws in part from declassified information provided by intelligence agencies.
The report warns that by 2025, generative AI could be used to “accumulate information related to physical attacks carried out by non-state violent actors, including those involving chemical, biological, and radiological weapons.”
The report notes that while companies are working to prevent such misuse, the effectiveness of these safeguards varies.
Obtaining the knowledge, raw materials, and equipment needed for such attacks remains difficult, the report finds, but those barriers are falling, potentially accelerated by AI.
It also predicts that by 2025, AI will likely contribute to cyber-attacks that are faster, more effective, and larger in scale.
Joseph Jarnecki, a researcher specializing in cyber threats at the Royal United Services Institute, noted that AI could aid hackers, particularly in overcoming their challenges in replicating official language. He explained, “There’s a particular tone used in bureaucratic language that cybercriminals have found challenging to emulate.”
The report precedes a speech by Mr. Sunak scheduled for Thursday, where he is expected to outline the UK government’s plans to ensure the safety of AI and position the UK as a global leader in AI safety.
Mr. Sunak is expected to say that AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems once thought insurmountable, while also acknowledging the new risks and fears it raises.
He will commit to addressing these concerns directly, ensuring that both current and future generations have the opportunity to benefit from AI in creating a better future.
This speech paves the way for a government summit set for the next week, focusing on the potential threat posed by highly advanced AI systems, often referred to as “Frontier AI.” These systems are believed to have the capacity to perform a wide range of tasks, surpassing the capabilities of today’s most advanced models.
Debates about whether such systems could pose a threat to humanity are ongoing. According to a recently published report by the Government Office for Science, many experts consider this a risk with low likelihood and few plausible pathways to realization. The report indicates that, to pose a risk to human existence, an AI would need control over crucial systems like weapons or financial systems, the ability to enhance its own programming, the capacity to avoid human oversight, and a sense of autonomy. However, it underscores the absence of a consensus on timelines and the plausibility of specific future capabilities.
Major AI companies have generally acknowledged the need for regulation, and their representatives are expected to attend the summit. Nevertheless, Rachel Coldicutt, an expert on the societal impact of technology, questioned the summit’s emphasis on future risks, arguing that technology companies, wary of immediate regulatory consequences, prefer to focus on long-term dangers. She added that the government’s own reports temper some of the enthusiasm about these futuristic threats, exposing a gap between the political stance and the technical reality.