Transparency could undermine a present bulwark against war: uncertainty about who gets the DSA (decisive strategic advantage). My guess is that current competitors disagree about who is favored, downstream of different views on timelines to AGI. Increasing transparency could make it clear to a disfavored power that war is preferable to competition.
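To make that mechanism concrete, here is a toy decision rule with made-up numbers (all values are illustrative assumptions, not estimates): a power prefers war when its expected payoff from fighting exceeds its expected payoff from continuing the race.

```python
# Toy model (illustrative numbers only): a power chooses war over
# continued competition when its expected payoff from war is higher.
# p_dsa: its subjective probability of winning the race to a DSA;
# p_war: its probability of winning a war; war_cost: cost of fighting.

def prefers_war(p_dsa, p_war, war_cost):
    # Payoff of competing = p_dsa (winner-take-all race);
    # payoff of war = p_war minus the cost of fighting.
    return p_war - war_cost > p_dsa

# Under private information, both rivals can believe they are favored,
# so each prefers to keep competing:
assert not prefers_war(p_dsa=0.6, p_war=0.5, war_cost=0.2)

# Transparency reveals one side is actually disfavored in the race,
# and war now looks better to it than competing:
assert prefers_war(p_dsa=0.2, p_war=0.5, war_cost=0.2)
```

The point is just that transparency moves `p_dsa` toward its true value, which for the disfavored side can flip the inequality.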
Well researched. I somewhat know what he is talking about because as a DARPA SETA assisting its Strategic Technology Office, I was one of the techie types they brought into discussions with people above my pay grade. That's why I'm keeping my 1972 Ford Ranger as it is more resistant to EMP than modern vehicles. Also, horses.
Another major problem regarding private information is that the AI powers have very little insight into what their *future* strategic capabilities will look like and when they will have access to them.
Before RSI (recursive self-improvement) kicks off, it will be hard to estimate how far away superintelligence is, what superweapons/defenses those ASIs can design, and how long it will take to integrate those innovations into industry and military uses.
For instance, scaling physical compute currently contributes around 2/3rds of overall effective compute growth, with the remaining third driven by algorithmic efficiency improvements (per Epoch AI). As a result, our projections for the future lean heavily on compute-based restrictions, such as data center monitoring, training run FLOP limits, and export controls.
But who’s to say that there isn’t some algorithmic tweak we’ve missed that would make powerful AIs vastly cheaper to train? Now the company/country that’s discovered it has a huge and surprising strategic advantage, in a way that was hard for even the original developers to anticipate.
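To put rough numbers on why a missed algorithmic tweak is so destabilizing, here is a sketch with an illustrative headline growth rate (the 10x/year figure is a made-up assumption; only the 2/3 vs 1/3 split comes from the comment above). Because the shares apply in log space, they multiply rather than add:

```python
import math

# Illustrative decomposition of effective-compute growth. The headline
# 10x/year rate is a made-up assumption; the 2/3 compute share is the
# split cited in the comment above.
effective_growth = 10.0   # hypothetical: 10x effective compute per year
compute_share = 2 / 3     # fraction of log-growth from physical compute

compute_factor = effective_growth ** compute_share     # ~4.64x from hardware/spend
algo_factor = effective_growth ** (1 - compute_share)  # ~2.15x from algorithms

# The two factors multiply back to the headline growth rate:
assert math.isclose(compute_factor * algo_factor, effective_growth)

# A surprise 10x training-efficiency gain would be equivalent to about
# 1.5 years of hardware scaling at 4.64x/year, appearing all at once:
years_equivalent = math.log(10) / math.log(compute_factor)
print(round(compute_factor, 2), round(algo_factor, 2), round(years_equivalent, 1))
# → 4.64 2.15 1.5
```

This is why a cheap algorithmic discovery is hard to police with compute-based restrictions: it shows up as a multi-year compute jump that no FLOP limit or export control would have flagged.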
And thank you, Peter, for sharing your platform with Oscar Delaney.