Ozzie Gooen

10959 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences
1

Ambitious Altruistic Software Efforts

Comments
1021

Topic contributions
4

When having conversations with people who are hard to reach, it's easy for discussions to take ages.

One thing I've tried is having a brief back-and-forth with Claude, asking it to lay out all the key arguments against my position. Then I make the conversation public, send the other person a link to the chat, and ask them to look it over. I find this can get through a lot of the opening points on complex topics with minimal human involvement.

I often second-guess my EA Forum comments with Claude, especially when someone mentions a disagreement that doesn't make sense to me.

When doing this I try to ask it to be honest and not sycophantic, but that only helps so much, so I'm curious to hear better prompts for preventing sycophancy.

I imagine at some point all my content could go through an [can I convince an LLM that this is reasonable and not inflammatory] filter. But a lower bar is just doing this for specific comments that are particularly contentious or argumentative. 

This is pretty basic, but seems effective.
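
As a rough illustration of the filter idea above, here's a minimal sketch against the Anthropic Python SDK. To be clear, this is a hypothetical sketch rather than something I actually run; the model name, prompt wording, and OK/FLAG convention are all arbitrary choices.

```python
# Minimal sketch of a "is this reasonable and not inflammatory?" filter.
# Assumes the Anthropic Python SDK is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

REVIEW_SYSTEM_PROMPT = (
    "You review draft forum comments. Be blunt and non-sycophantic. "
    "Start your reply with 'OK' or 'FLAG', then give one short paragraph on "
    "whether the comment is reasonable and whether any part reads as inflammatory."
)

def review_comment(draft: str) -> str:
    """Ask Claude whether a draft comment seems reasonable and non-inflammatory."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder; any recent model should work
        max_tokens=300,
        system=REVIEW_SYSTEM_PROMPT,
        messages=[{"role": "user", "content": draft}],
    )
    return response.content[0].text

if __name__ == "__main__":
    draft = "Honestly, anyone who still disagrees with this just hasn't thought about it."
    print(review_comment(draft))
```

Keeping the verdict format rigid (OK/FLAG) would make it easier to run over a batch of comments, though the sycophancy caveat above applies here just as much.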

In the Claude settings you can provide a system prompt. Here's a slightly-edited version of the one I use. While short, I've found that it generally seems to improve conversations for me. Specifically, I like that Claude seems very eager to try estimating things numerically. One weird but minor downside is that it will sometimes randomly bring up items from it in conversation, like, "I suggest writing that down, using your Glove80 keyboard."
 

I'm a 34-year-old male, into effective altruism, rationality, transhumanism, uncertainty quantification, Monte Carlo analysis, TTRPGs, and cost-benefit analysis. I blog a lot on Facebook and the EA Forum.

Ozzie Gooen, executive director of the Quantified Uncertainty Research Institute.

163lb, 5'10, generally healthy, have RSI issues

Work remotely, often at cafes and the FAR Labs office space.

I very much appreciate it when you can answer questions by providing cost-benefit analyses and other numeric estimates. Use probability ranges where appropriate.

Equipment includes: MacBook, iPhone 14, AirPods Pro (2nd gen), Apple Studio Display, an extra small monitor, some light gym equipment, Quest 3, Theragun, AirTags, Glove80 keyboard using Colemak DH, ergo mouse, Magic Trackpad, Connect EX-5 bike, inexpensive rowing machine.

Heavy user of VS Code, Firefox, Zoom, Discord, Slack, YouTube, YouTube Music, Bear (note-taking), Cursor, Athlytic, Bevel.

I think you bring up a bunch of good points. I'd hope that any concrete steps on this would take these sorts of considerations into account.

> The concerns implied by that statement aren't really fixable by the community funding discrete programs, or even by shelving discrete programs altogether. Not being the flagship EA organization's predominant donor may not be sufficient for getting reputational distance from that sort of thing, but it's probably a necessary condition.

I wasn't claiming that this funding change would fix all of OP/GV's concerns. I assume that would take a great deal of work across many different projects/initiatives.

One thing I care about is that someone is paid to start thinking about this critically and extensively, and I imagine they'd be more effective if not under the OP umbrella. So one of the early steps to take is just trying to find a system that could help figure out future steps.

> I speculate that other concerns may be about the way certain core programs are run -- e.g., I would not be too surprised to hear that OP/GV would rather not have particular controversial content allowed on the Forum, or have advocates for certain political positions admitted to EAGs, or whatever.

I think this raises an important and somewhat awkward point: more separation between EA and OP/GV would make it harder for OP/GV to have as much control over these areas, and there would be times when they weren't as happy with the results.

Of course:
1. If that's the case, it implies that the EA community does want some concretely different things, so from the community's standpoint, this would make funding the separation more appealing.
2. In the big picture, it seems like OP/GV doesn't want to be held responsible for the EA community. There's a conflict here: on one hand, they don't want to be seen as responsible for the EA community; on the other hand, they might prefer situations where they have a very large amount of control over it. I hope it can be understood that these two desires don't easily go together. Perhaps they won't be willing to compromise on the latter while still complaining about the former. That might well happen, but I'd hope a better arrangement could be made.

>  OP/GV is usually a pretty responsible funder, so the odds of them suddenly defunding CEA without providing some sort of notice and transitional funding seems low.

I largely agree. That said, if I were CEA, I'd still feel fairly uncomfortable. When the vast majority of your funding comes from any one donor, you'll need to place a whole lot of trust in them.

I'd imagine that if I were working within CEA, I'd be extremely cautious not to upset OP or GV. I'd also expect this to mess with my epistemics, communication, and actions.

Also, of course, I'd flag that the world can change quickly. Maybe Trump will go on a push against EA one day, and put OP in an awkward spot, for example. 

> The original idea of EA, as I see it, was that it was supposed to make the kind of research work done at philanthropic foundations open and usable for well-to-do-but-not-Bill-Gates-rich Westerners

This part doesn't resonate with me. I worked at 80k early on (~2014) and have been in the community for a long time. Back then, I think the main thing was excitement over "doing good the most effectively". The assumption was that most philanthropic foundations weren't doing a good job - not that we wanted regular people to participate, specifically. I think at the time most community members were pretty excited about the idea of the key EA ideas growing as quickly as possible, and billionaires would have helped with that.

GiveWell specifically was started with a focus on smaller donors, but there was always a separation between them and EA.

(I am of course more sympathetic to a general skepticism around any billionaire or other overwhelming donor. Though I'm personally also skeptical of most other donation options to varying degrees - I want some pragmatic options that can understand the various strengths and weaknesses of different donors and respond accordingly.)

 

> Basically, I didn't think that there was much meaningful difference between a CEA that was (e.g.) 90% OP/GV funded vs. 70% OP/GV funded.


Personally, I'm optimistic that this could be done in specific ways that would be better than one might initially presume. One wouldn't have to fund "CEA" as a whole - one could instead fund specific programs within CEA, for instance. I imagine that people at CEA might have some good ideas of specific things donors could fund that OP isn't a good fit for.

One complication is that arguably we'd want to do this in a way that's "fair" to OP. Like, it doesn't seem "fair" for OP to pay for all the stuff that both OP and EA agree on, while EA only funds the stuff that EA alone likes. But this really depends on what OP is comfortable with.

Lastly, I'd flag that CEA being 90% OP/GV funded can still be quite different from 70% in some important ways. For example, if OP/GV were to leave, CEA might be able to continue at 30% of its size - a big loss, but much better than 10% of its size.

Someone wrote to me in a PM that they think one good reason for EA donors not to have funded EA community projects is that OP was already funding them, and arguably there are other, more neglected projects.

I do think this is a big reason, and I was aware of this before. It's a complex area.

At the same time, I think the current situation is really not the best, and I can easily imagine healthier environments where motivated funders and the community would have found good arrangements here.

I also take responsibility for not doing a better job around this (and more). 

29% agree

I have mixed feelings here. But one major practical worry I have about "increasing the value of futures" is that a lot of that looks fairly zero-sum to me. And I'm scared of encouraging other communities to think this way.

If we can capture 5% more of the universe for utilitarian aims, for example, that's 5% less from others. 

I think it makes sense for a lot of this to be studied in private, but am less sure about highly public work.

I think this is overall an important area and am happy to see it getting more research. 

This might be something of a semantic question, but I'm curious what you think of the line/distinction between "moral errors" and, say, "epistemic errors".

It seems to me like a lot of the "moral errors" you bring up involve a lot of epistemic mistakes.

There are interesting empirical questions about what causes what here. Wrong epistemic beliefs clearly lead to worse morality, and also, worse morality can get one to believe convenient but false things.

As I think about it, I realize that we probably agree about the main bucket of lock-in scenarios. But I think the name "moral errors" makes some specific assumptions that I find highly suspect. Even if it seems now like differences in morality are the overriding factor, rather than differences in epistemics, I would place little confidence in this - it's a tough topic.

Personally I'm a bit paranoid that people in our community have academic foundations in morality more than in epistemics, and correspondingly emphasize morality more because of that. Put another way, it seems a bit convenient when specialists in morality come out arguing that "moral lock-in" is a major risk.

Unfortunately, by choosing one name to discuss this (i.e. "Moral Errors"), we might be locking in some key assumptions. Which would be ironic, given that the primary worry itself is about lock-in of these errors. 

(I've written a bit more on "Epistemic Lock-In" here)

I want to see more discussion on how EA can better diversify its funding and maintain strategically chosen distance from OP/GV.

One reason is that it seems like multiple people at OP/GV have basically said that they want this (or at least, many of the key aspects of this). 

A big challenge is that it seems very awkward for someone to talk about and work on this issue if they're employed under the OP/GV umbrella. This is a pretty clear conflict of interest. CEA is currently the main organization for "EA", but I believe CEA is majority funded by OP, with several other clear strong links (board members and employees often move between these orgs).

In addition, it clearly seems like OP/GV wants some of this separation from their side. The close link means that problems with EA often spill over to the reputation of OP/GV.

I'd love to see some other EA donors and community members step up here. I think it's kind of damning how little EA money comes from community members or sources other than OP right now. Long-term this seems pretty unhealthy. 

One proposal is to have some "mini-CEA" that's non-large-donor funded. This group's main job would be to understand and act on EA interests that organizations funded by large donors would have trouble with. 

I know Oliver Habryka has said that he thinks it would be good for the EA Forum to also be pulled away from large donors. This seems good to me, though likely expensive (I believe this team is sizable).

Another task here is to have more non-large-donor funding for CEA. 

For large donors, one way of dealing with potential conflicts of interest would be doing funding in large blocks, like a 4-year contribution. But I realize that OP might sensibly be reluctant to do this at this point. 

Also, related - I'd really hope that the EA Infrastructure Fund could help here, but I don't know if this is possible for them. I'm dramatically more excited about large long-term projects on making EA more community-driven, independent, and/or well-managed than I am about the kinds of small projects they seem to fund. I don't think they've ever funded CEA, even though CEA might now account for the majority of direct funding on the EA community. I'd encourage people from this fund to think through this issue and be clear about which potential projects around it they might be excited to fund.

Backing up a bit - it seems to me like EA is remarkably powerless for what it is, outside of the OP/GV funding stream right now. This seems quite wrong to me, like large mistakes were made. Part of me thinks that positive change here is somewhat hopeless at this point (I've been thinking about this space for a few years now but haven't taken much action because of uncertainty on this), but part of me also thinks that with the right cleverness or talent, there could be some major changes.

Another quick thought: This seems like a good topic for a "Debate Week", in case anyone from that team is seeing this.

(To add clarity - I'm not suggesting that OP drop its funding of EA! It's more that I think non-OP donors should step up more, and that key EA services should be fairly independent.)
