
AI poses 'extinction-level' threat and US government must be given new 'emergency powers' to control technology, warns State Department report


A new US State Department-funded study calls for a temporary ban on the creation of advanced AI past a certain threshold of computational power.

The tech, its authors claim, poses an 'extinction-level threat to the human species.' 

The study, commissioned as part of a $250,000 federal contract, also calls for 'defining emergency powers' for the American government's executive branch 'to respond to dangerous and fast-moving AI-related incidents' — like 'swarm robotics.'

Treating high-end computer chips as international contraband, and even monitoring how hardware is used, are just some of the drastic measures the new study calls for. 

The report joins a chorus of industry, governmental and academic voices calling for aggressive regulatory attention to the hotly pursued and game-changing, but socially disruptive, potential of artificial intelligence.

Last July, the United Nations' agency for science and culture (UNESCO), for example, paired its AI concerns with equally futuristic worries over brain-chip tech, a la Elon Musk's Neuralink, warning of 'neurosurveillance' violating 'mental privacy.'

A new US State Department-funded study by Gladstone AI (above), commissioned as part of a $250,000 federal contract, calls for 'defining emergency powers' for the US government's executive branch 'to respond to dangerous and fast-moving AI-related incidents'

Gladstone AI's report floats a dystopian scenario that the machines may decide for themselves that humanity is an enemy to be eradicated, a la the Terminator films: 'if they are developed using current techniques, [AI] could behave adversarially to human beings by default'

While the new report notes upfront, on its first page, that its recommendations 'do not reflect the views of the United States Department of State or the United States Government,' its authors have been briefing the government on AI since 2021.

The study's authors, a four-person AI consultancy firm called Gladstone AI run by brothers Jérémie and Edouard Harris, told TIME that their earlier presentations on AI risks were frequently heard by government officials with no authority to act.

That's changed with the US State Department, they told the magazine, because its Bureau of International Security and Nonproliferation is specifically tasked with curbing the spread of cataclysmic new weapons.

And the Gladstone AI report devotes considerable attention to 'weaponization risk.'

In recent years, Gladstone AI's CEO Jérémie Harris (inset) has also presented before the Standing Committee on Industry and Technology of Canada's House of Commons (pictured)

There is a great AI divide in Silicon Valley. Brilliant minds are split about the progress of the systems - some say it will improve humanity, and others fear the technology will destroy it


An offensive, advanced AI, they write, 'could potentially be used to design and even execute catastrophic biological, chemical, or cyber attacks, or enable unprecedented weaponized applications in swarm robotics.'

But the report floats a second, even more dystopian scenario, which they describe as a 'loss of control' AI risk. 

There is, they write, 'reason to believe that they [weaponized AI] may be uncontrollable if they are developed using current techniques, and could behave adversarially to human beings by default.' 

In other words, the machines may decide for themselves that humanity (or some subset of humanity) is simply an enemy to be eradicated for good. 

Gladstone AI's CEO, Jérémie Harris, also presented similarly grave scenarios before hearings held by the Standing Committee on Industry and Technology within Canada's House of Commons on December 5, 2023.

'It's no exaggeration to say the water-cooler conversations in the frontier AI safety community frame near-future AI as a weapon of mass destruction,' Harris told Canadian legislators.

'Publicly and privately, frontier AI labs are telling us to expect AI systems to be capable of carrying out catastrophic malware attacks and supporting bioweapon design, among many other alarming capabilities, in the next few years,' according to IT World Canada's coverage of his remarks.

'Our own research,' he said, 'suggests this is a reasonable assessment.' 

Harris and his co-authors noted in their new State Dept. report that the private sector's heavily venture-funded AI firms face an enormous 'incentive to scale' to beat their competition, one that outweighs any balancing 'incentive to invest in safety or security.'

The only viable means of pumping the brakes in their scenario, they advise, lies outside of cyberspace: the strict regulation of the high-end computer chips used to train AI systems in the real world.

Gladstone AI's report calls nonproliferation work on this hardware the 'most important requirement to safeguard long-term global safety and security from AI.'

And it was not a suggestion they made lightly, given the inevitable likelihood of industry outcries: 'It's an extremely challenging recommendation to make, and we spent a lot of time looking for ways around suggesting measures like this,' they said.

One of the Harris brothers' co-authors on the new report, ex-Defense Department official Mark Beall, served as the Chief of Strategy and Policy for the Pentagon's Joint Artificial Intelligence Center during his years of government service.

Beall appears to be acting urgently on the threats identified by the new report: the ex-DoD AI strategy chief has since left Gladstone to launch a super PAC devoted to the risks of AI. 

The PAC, dubbed Americans for AI Safety, launched this Monday with the stated hope of 'passing AI safety legislation by the end of 2024.'
