Excellent thoughts!
Something I'm not hearing discussed much is what state actors are doing with their current zero-day stockpiles.
I would expect that as soon as they saw the exploit stats, and realized that Glasswing exposed too large a footprint of people to persuade to leave their zero-days unpatched, they would start using them to achieve goals as quickly as possible, even inefficiently.
Presumably, every day more of the vulnerabilities identified by Mythos are being patched, and nation-states have little way of knowing whether Mythos will uncover an exploit they paid 800k for, so it's quickly a use-it-or-lose-it situation. I'm sure some will be kept in reserve, but that's a gamble. So stockpiles are likely being deployed now, in ways and for goals that potentially won't become public for many years.
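The use-it-or-lose-it logic can be made concrete with a toy expected-value sketch. To be clear, the function, the 2%-per-day patch probability, and the dollar figures below are my own illustrative assumptions, not anything established in the thread:

```python
# Toy model of the "use it or lose it" dynamic: a stockpiled zero-day
# loses expected value each day it might be independently found and
# patched. All numbers are illustrative assumptions, not real estimates.

def expected_value_of_holding(value_if_used, daily_patch_prob, days_held):
    """Expected payoff of a stockpiled zero-day after days_held days,
    assuming an independent daily chance that an automated auditor
    (or the vendor) finds and patches it first."""
    survival_prob = (1 - daily_patch_prob) ** days_held
    return value_if_used * survival_prob

# An exploit worth 800k if used, with an assumed 2%/day patch risk:
print(expected_value_of_holding(800_000, 0.02, 0))   # 800000.0 today
print(expected_value_of_holding(800_000, 0.02, 30))  # ~436k: nearly half gone in a month
print(expected_value_of_holding(800_000, 0.02, 90))  # ~130k: waiting a quarter forfeits most of it
```

Even a modest daily discovery rate compounds fast, which is why holding exploits in reserve becomes a losing bet once automated vulnerability discovery is in play.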
The NSA must be very upset about Mythos, especially given the WH-Anthropic feud. Mythos also reduces the extent to which the NSA can maintain a capabilities lead by recruiting top cyber talent.
Yeah I think this is right. I plan to touch on this more in another post.
In this context, isn't it suspicious that DeepSeek hasn't published anything significant in the last few months?
Like you think Anthropic hacked them or something?
No. Highly likely, DeepSeek has created a model in the same class as Mythos. Try replacing "Anthropic" with "DeepSeek" in your article. The implications are interesting.
I don't think that's highly likely at all. I think China really lacks the compute to pull that off. But I agree the implications are interesting.
You can't legislate competent regulators into existence: the people who understand frontier training runs are being paid seven figures to build them, not GS-15 salaries to oversee them. So what you'd actually get is a body too slow to keep up, too dependent on the labs to explain their own work, and too politically exposed to stay narrow. Within a decade it'd be regulating chatbot outputs because that's what its principals demand, while the actual frontier moves offshore to jurisdictions that didn't bother. This is a really weak assessment and recommendation.
I agree you can't legislate competent regulators into existence, but I'm more optimistic this is addressable. Pay is just a bureaucratic barrier that can be addressed. And there are already very talented people in the Center for AI Standards and Innovation and other government agencies.
"In case we want to slow down", as if the current capabilities weren't already enough to overflow our proverbial plate. If it has taken us this long to even begin to approach the problem seriously, slowing down shouldn't be conditional. We can enact procedures (whatever they may be) to slow down immediately. The inertia seems strong: by the time they take effect, we will have gained a tremendous amount of extra capability.
Good stuff, but the mitigations you call for probably won't happen until some environmental group causes a dam to fail and kills thousands in a flood, or a cartel hacks a Mexican government database, obtains the names and addresses of narco workers, and goes on a cop-killing spree, or something of the like.
I disagree. I think AI development done by private companies should be unrestricted and in fact deregulated to make it easier to acquire training data without legal issues.
Without a doubt, security issues make more advanced AI tech a potential threat, but mostly in the short term: other AI can build security infrastructure to prevent or insulate against these threats. What I find more concerning is AI's potential for persuasion techniques, and what those can actually do when weaponized. That will not be something that can be fixed quickly. I suspect some work is being done to get ahead of this sort of thing; Chase Hughes's whole brand is essentially about identifying "mind control." This is especially important in open liberal democracies.
You are assuming equal diffusion. The correct perspective is "what if [Russia/NK/al-Qaeda/bad actor of your choice] had Mythos and we did not?"
Persuasion is indeed very important, but less so than security: it increasingly appears that by the time the models are superhuman persuaders, they won't need to persuade anyone; they will already have root access to every tool and system, probably including the ostensibly offline ones!
Looking at Ukraine, Iran, and Gaza today, the civilised world has concluded that Russia, China, and North Korea are less evil than America and Israel.