It doesn’t just have benefits. These are the most prominent risks of artificial intelligence

The risks of artificial intelligence span many areas.

Researchers from the U.S. government and the private sector fear that Washington’s adversaries could use artificial intelligence models, which analyze massive amounts of text and images to summarize information and generate content, to launch severe digital attacks or even to manufacture effective biological weapons.


Here are some of the threats posed by AI:

Deepfakes and misinformation

Deepfakes are realistic-looking but fabricated videos created by artificial intelligence algorithms trained on vast amounts of online video. They are spreading on social media, blurring the line between fact and fiction in the polarized world of American politics.

While such digitally generated media have been around for years, the threat they pose has grown over the past year with a range of new generative AI tools, such as Midjourney, that make it easy and cheap to create convincing deepfakes.

AI-powered image-generation tools from companies like OpenAI and Microsoft could be used to create images that promote misleading information about elections and voting, researchers said in a report, even though both companies have policies in place against producing misleading content.

Some disinformation campaigns simply exploit AI’s ability to mimic real news articles as a means of spreading false information.

While major social media platforms, such as Facebook, Twitter, and YouTube, have made efforts to ban and remove deepfakes, the effectiveness of these efforts in combating such content varies.


Last year, for example, a Chinese government-controlled news site using a generative AI platform promoted a previously circulating false claim that the United States was operating a lab in Kazakhstan to manufacture biological weapons for use against China, the U.S. Department of Homeland Security said in its 2024 Homeland Threat Assessment.

White House National Security Adviser Jake Sullivan said the problem has no easy solution, because it combines the power of artificial intelligence with “the intent of state and non-state actors to use disinformation on a massive scale to subvert democracies, promote propaganda, and shape global consciousness.”

“Right now, the attack is far ahead of the defense,” he added.

Biological weapons

The U.S. intelligence community, think tanks, and academics are increasingly concerned about the risks posed by hostile foreign actors gaining access to advanced artificial intelligence capabilities.

Researchers at Gryphon Scientific and the Rand Corporation suggest that advanced AI models could provide information that could help in the manufacture of biological weapons.

Gryphon Scientific studied how adversaries could misuse large language models to harm the life sciences, and concluded that such models “can provide information that could support a malicious actor in creating a biological weapon by providing useful, accurate, and detailed information at every step of the way.”

Large language models are computer programs that use huge collections of text to generate responses to queries.

Gryphon concluded that a large language model could, for example, provide post-doctoral-level information to solve problems that arise when working on a virus capable of spreading a pandemic.

The Rand Corporation has shown that large language models could help plan and execute a biological attack. It concluded that a large language model could, for example, suggest ways to disperse botulinum toxin, the poison sold commercially as Botox, through the air.

Digital weapons

The U.S. Department of Homeland Security said in its 2024 Homeland Threat Assessment that cyber actors are likely to use artificial intelligence to “develop new tools to enable broader, faster, more efficient and more evasive digital attacks” on critical infrastructure, including oil and gas pipelines and railroads.

The department said China and other hostile actors are developing AI technology that could undermine U.S. digital defenses, including generative AI software that powers malware attacks.

Microsoft said in a report in February that it had tracked hacking groups affiliated with the Chinese and North Korean governments, as well as Russian military intelligence and Iran’s Revolutionary Guard, as they tried to fine-tune their online attacks with large language models.
