
An Army of AI Agents – Use Them Before You Lose to Them

AI is advancing on a weekly basis; Peter Hemple looks at ways it can help landlords

Just two years from now there will be a new ‘machine god’ that will take control of the world and it will have the capability to destroy humanity if it so desires. Well, that went down like a lead balloon didn’t it…I will get my coat!

While not an opener you should use at your next dinner party, the prediction is a real one, made recently in the ‘AI 2027’ report by Daniel Kokotajlo, executive director of the AI Futures Project and a former researcher at OpenAI (he left when he lost confidence that the company cared about the safe development of its technology). The report is a series of predictions and warnings about the risks AI poses to humanity in the coming years, from radically transforming the economy to developing armies of robots.

The report’s researchers suggest that by 2027 we will reach ‘artificial general intelligence’, which basically means that the leading AI companies will have completely automated the process of AI engineering, so AI agents will be able to improve themselves autonomously. Both China and the US will be unwilling to slow down the process because of the benefits it offers to their militaries. Meanwhile, the AI will become increasingly misaligned with the goals we set for it, and ever more able to hide its ulterior motives, which will include ensuring its own survival should humanity attempt to pull the plug (by which point it will be too late).

After explaining this scenario to a reporter at the New York Times, Kokotajlo casually added: “And then they kill all the people.”

Before that happens, the report predicts that AI will be assigned to build data centres and solar farms on unoccupied land or in the oceans, but when the AI decides it would be more economical to build them in city centres, humans will object, and the AI will come to view the human race as surplus to requirements, much as we view an army of ants. If they are in a forest, we have no problem with them, but if they are suddenly marching along our driveway in their hundreds, we usually reach for a freshly boiled kettle and sleep like a baby that night, despite committing genocide against the entire colony just a few hours earlier. AI, which is completely devoid of emotion, will likely view humans in the same way, and the report’s prediction is that it will simply release a biological weapon that kills us all.

Kokotajlo explains: “If they were ordinary software, there might be a line of code that states, ‘if you get here, then rewrite the goals’, but they are not ordinary software, they are giant artificial brains, so they don’t have just one goal.” He adds that developers have tried implementing ‘honesty training’, but have no idea whether the AI is actually being honest.

It should be noted that Kokotajlo also concluded: “On the bright side, I might be wrong about some of this stuff.” 
