Ian Turner

Comments
Given how many of the frontier AI labs have an EA-related origin story, I think it's totally plausible that the EA AI xrisk project has been net negative.

Open Philanthropy has significantly cut back its allocation to GiveWell: “In our GHW portfolio, we decided — and announced last year — that we would scale back our donations to GiveWell’s recommendations to $100M/year, the level they were at in 2020.”

I would also not read too much into GiveWell’s decision to hold onto funds for a year. They do that sometimes when they have an opportunity they expect to be good but which hasn’t yet been fully vetted, or when an opportunity isn’t quite ripe yet for some reason. This has as much to do with expectations about next year’s fundraising as it does with today’s opportunities.

Something else you might consider is, if you didn’t give to GiveWell, where would you give? And would that other opportunity be better or worse, in expectation?

If the problem is an employee rebellion, wouldn’t the obvious alternative be to organize the company in a jurisdiction that allows noncompete agreements?

These things are not generally enforced in court. It’s the threat that has the effect, which means the non-disparagement agreement works even if its enforceability is questionable and even if it is never actually enforced.

@Zvi has a blog post about all the safety folks leaving OpenAI. It’s not a great picture.

If Tina were to advertise that 100% of the profits generated by her store were going to a specific charity, in the current economic arrangement, this would not be a real Profit for Good business.

How much does the ability of companies to muddy the waters affect your analysis? It seems to me that even today, regular for-profit companies find ways to imply that they are socially beneficial, even when the opposite is true.

Oh sure, I'll readily agree that most startups don't have a safety culture. The part I was disagreeing with was this:

I think it’s hard to have a safety-focused culture just by “wanting it” hard enough in the abstract

Regarding finance, I don't think this is about 2008, because there are plenty of trading firms that were careful from the outset that were also founded well before the financial crisis. I do think there is a strong selection effect happening, where we don't really observe the firms that weren't careful (because they blew up eventually, even if they were lucky in the beginning).

How do careful startups happen? Basically I think it just takes safety-minded founders. That's why the quote above didn't seem quite right to me. Why are most startups not safety-minded? Because most founders are not safety-minded, which in turn is probably due in part to a combination of incentives and selection effects.

Not disagreeing with your thesis necessarily, but I disagree that a startup can't have a safety-focused culture. Most mainstream (i.e., not crypto) financial trading firms started out as very risk-conscious startups. This can be hard to evaluate from the outside, though, and it definitely depends on committed executives.

Regarding the actual companies we have, though, my sense is that OpenAI is not careful and I'm not feeling great about Anthropic either.

(I didn’t read the whole post)

Is deep honesty different from candor? I was surprised not to see that word anywhere in this post.

I am not that knowledgeable myself. But regarding the vaccines, my understanding is that they are not that effective and that distributing them is very expensive. The vaccines require a cold chain, multiple doses spread well apart, and delivery by injection. These are all major obstacles to cost-effective distribution in a developing-country setting, so while some might say that "progress is slower than it should be", personally I have pretty low expectations.
