"The government doesn’t need to build all nuclear reactors to ensure nuclear security. It sets standards, monitors compliance, and provides security assistance while letting private companies operate the reactors. These targeted interventions could achieve 80% of the security benefits with 20% of the risks. They avoid triggering international arms races or concentrating power dangerously. Most importantly, they can be implemented incrementally and adjusted as we learn more about AI risks."
A counterpoint I'd offer is that the government does not let defense contractors assemble nukes in-house. The state maintains total strategic control throughout the entire process, and for obvious reasons: if the defense contractor had a nuke, they might threaten to use it.
It's great to limit concentration of power in the short term, and to do basic sensible things like promote infosecurity. But none of that gets at the heart of the problem, which is that the government needs to retain enforcement power to prevent both proliferation and misuse/concentration of power in the frontier labs themselves. If future AIs are going to be so dangerous that government regulation is needed, the government needs to maintain its monopoly on violence to uphold those regulations effectively. And to do that, it needs final control over the actual deployment of models with strategically relevant capabilities (read: the most powerful ones).
Not so much for me, since I plan on not being around when the worst of it comes to pass, but for my kids and my students. By “worst of it,” I mean the lack of low-skill or even medium-skill career paths.
I often think about what jobs could be performed by a machine, if only we had slightly more intelligent machines. Driving will be the next big shoe to drop, once automated driving is perfected. Food and beverage prep will also be largely automated in my lifetime.
Currently, two of the best career paths for non-college graduates are delivery driver and barista. Those doors are closing rapidly, and none are opening to take their place.
Here is a painful truth that I’ve seen firsthand as a teacher: not every student is college material. In fact, I’d argue that the majority of students aren’t college material. The majority of students aren’t trade school material, either. There is a nontrivial percentage of people in the country whose only job skill is the ability to (usually) show up to work on time and follow some basic directions. And machines are very good at taking those types of jobs.
Think about this: Starbucks has almost 350,000 employees. Around here, they start at just under $16 an hour. It’s a great way for reliable, friendly people to make some money. The average Starbucks has around 85 employees.
Then this comes along:
A fully-automated Starbucks kiosk. Imagine if Starbucks could get that 85-employees-per-site average down to just five or so, enough to make sure the machine was running properly. Starbucks' owners (I am one… I have a dozen shares) would be thrilled! So would a lot of customers, who'd rather not talk to a human anyway.
But that would be one less career path for the reliable, motivated, friendly, but otherwise unskilled crowd.
Improvements in A.I. are just going to make inequalities worse. The bell curve is turning into an “M” curve, and the people on the left hump of the “M” won’t sit back and accept their fate easily. Nor should they. There needs to be viable paths into the middle and upper classes for people of all ability levels. You should be able to work your way out of poverty, but A.I. makes that more difficult with every passing advancement.
Good post, but:

"The American system is built on Jeffersonian checks and balances, pitting ambition against ambition to prevent any one person or group from accumulating concentrated power"
I think you aren't engaging with the fundamental issue: AGI is more powerful than checks and balances. The Jeffersonian system depends on assumptions that won't hold; we'll be in a world with smarter-than-human digital minds copying themselves and building robots. I don't think this means a Manhattan Project is the solution, but we need to engage with the gravity of the issue if we're to stand a chance of navigating it. Corporate dynamism and humans working in groups to solve problems with paper and computers was a fun frame (roughly post-industrial-revolution), but AGI will challenge it.
Great post! Thank you for your thoughtful takes. I'm glad to see Bill's systematic reference-class analysis getting more publicity :)
Got many good updates from this article - great work!