Here's one partial answer to your question. In Moral Uncertainty (p. 209), MacAskill et al. suggest that you can sometimes calibrate your confidence in a moral view using "induction from past experience." The more often that you (or other reasoners in your reference class) have changed your mind in the course of investigating a moral issue, the less confidence you should have in your current best-guess answer.
For example, perhaps you've spent a long time thinking about the ethics of letting a child drown in a shallow pond, and all along, you've never doubted that it's wrong. And perhaps you've also been thinking about whether it's categorically wrong to lie. Some days you're fully convinced by Kantian arguments for this view; other days you hear really convincing counterarguments, and you change your mind. Right now, you feel persuaded that lying is categorically wrong, but it nevertheless seems inappropriate for your credence in the wrongness of lying to match, let alone vastly exceed, your credence in the wrongness of letting a child drown.
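To make the heuristic a bit more concrete, here's a toy model. This is my own gloss, not anything from MacAskill et al.: treat each past episode of reflection as a trial that either flipped your view or didn't, and use Laplace's rule of succession to estimate how likely your current verdict is to survive one more round of reflection. The function name and the numbers below are made up for illustration.

```python
# A toy model of "induction from past experience" (illustrative only).
# Each past episode of reflection is a Bernoulli trial: either it
# flipped your view or it didn't. Laplace's rule of succession then
# gives a rough estimate of P(your next reflection leaves your
# current verdict intact).

def stability_estimate(reflections: int, flips: int) -> float:
    """P(next reflection does NOT flip your view), via rule of succession."""
    return (reflections - flips + 1) / (reflections + 2)

# Shallow pond: many reflections, zero flips -> high stability.
print(stability_estimate(reflections=50, flips=0))   # ~0.98

# Categorical wrongness of lying: frequent flips -> low stability,
# so your credence in today's verdict should be correspondingly modest.
print(stability_estimate(reflections=50, flips=20))  # ~0.60
```

On this model, a long unbroken track record (the pond case) licenses high confidence, while a history of flip-flopping (the lying case) caps how much weight you should put on whichever side you happen to be on today.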
Thanks for your comments, Michael.
The section of SB 53 that covers external auditing was added to the bill by the Assembly Committee on Privacy and Consumer Protection. They wrote that the purpose of the four-year grace period before auditing becomes mandatory is "to give time for the nascent industry of AI auditors to grow." Now, I don't think the auditing industry needs that much time. Deloitte has already helped Anthropic audit Claude 4, and I suspect the other Big Four firms will get involved soon. They can pull in AI experts from RAND, METR, or AISI if they need to.
It's worth noting that even if the relevant parts of SB 53 pass unamended, some other state or the federal government could still pass an external auditing requirement that kicks in before 2030. I don't see an obvious reason why passing SB 53 would make it less likely that such a law passes in a jurisdiction other than CA.
The solution to the problem of AI developers choosing lax auditors is § 22757.16(b). The bill says that if an auditor is negligent or "knowingly include[s] a material misrepresentation or omit[s] a material fact" in their report to the AG, they're civilly liable for up to $10k in fines. Now, I think that penalty figure is probably too low, but if you raise it enough, it will solve the incentive problem: auditors won't go easy on AI developers, because they know they can be fined if they do.
CalCompute's effect might indeed be somewhat accelerationist. FWIW, all that SB 53 does is appoint a board to explore setting up CalCompute. The bill does not appropriate funds for a new cluster. Given how many hurdles CalCompute would still have to clear even if SB 53 passed, I don't think it should drive our net assessment of whether SB 53 is good or bad.