Ozzie Gooen


Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences

Ambitious Altruistic Software Efforts

Comments


I generally believe that EA is good at being pragmatic, and in that spirit, I think it's important for the key organizations that both give and receive funding in this area to coordinate, especially on topics like funding diversification. I agree that this is not the ideal world, but that goes back to the main topic.

For reference, I agree it's important for these people to be meeting with each other. I wasn't disagreeing with that.

However, I would hope that, over time, more people outside the immediate OP umbrella would be brought into key discussions about the future of EA. At least have something like 10% of the audience be strongly or mostly independent.

The o1-preview and Claude 3.5-powered template bots did pretty well relative to the rest of the bots.

As I think about it, this surprises me a bit. Did participants have access to these early on? 

If so, it seems like many participants underperformed the examples/defaults? That seems kind of underwhelming. I guess it's easy to make a lot of changes that seem good at the time but wind up hurting performance when tested. Of course, this raises the concern that there wasn't any faster/cheaper way of testing these bots first. Something seems a bit off here.
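To illustrate the kind of cheap testing I have in mind, here's a minimal sketch (my own illustration, not what the tournament actually did). The `bot` callable and the resolved-question data are hypothetical; the idea is to score candidate bots against already-resolved binary questions, e.g. with Brier scores, before entering them live.

```python
# Hypothetical sketch: backtest a forecasting bot on already-resolved
# binary questions before entering it in a live tournament.
# `bot` is any callable mapping a question string to P(yes) in [0, 1].

def brier_score(p: float, outcome: int) -> float:
    """Squared error between forecast p and the 0/1 outcome (lower is better)."""
    return (p - outcome) ** 2

def backtest(bot, resolved_questions) -> float:
    """Average Brier score of `bot` over (question, outcome) pairs."""
    scores = [brier_score(bot(q), outcome) for q, outcome in resolved_questions]
    return sum(scores) / len(scores)

# Toy usage with a constant-probability baseline:
resolved = [("Will X happen by 2025?", 1), ("Will Y happen by 2025?", 0)]
baseline = lambda q: 0.5
print(backtest(baseline, resolved))  # 0.25, the score any tweak should beat
```

Even something this simple would give a rough signal on whether a change helps, before paying tournament-level costs.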

I think you raise some good points on why diversification as I discuss it is difficult and why it hasn't been done more. 

Quickly:
> I agree with the approach's direction, but this premise doesn't seem very helpful in shaping the debate.

Sorry, I don't understand this. What is "the debate" that you are referring to? 

> At the last MCF, funding diversification and the EA brand were the two main topics

This is good to know. While we're on the topic of MCF, I'd bring up that it seems bad to me that MCF sits very much within the OP umbrella, as I understand it. I believe it was funded by OP or CEA, and the people who set it up were employed by CEA, which was primarily funded by OP. Most of the attendees seem to be people at OP or CEA, or else heavily funded by OP.

I have a lot of respect for many of these people and am not claiming anything nefarious. But I do think this is a good example of the sort of thing that matters a lot for the EA community and that OP has an incredibly large amount of control over. It seems like an obvious potential conflict of interest.

Agreed that this would be good. But it can be annoying to do without additional tooling. 

I'd like to see tools that ask a question from a few different angles / perspectives / motivations and compare the results, but this would take some work.
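As a rough sketch of what I mean (using the Anthropic Python SDK; the model name and framings here are just illustrative assumptions, not a tool I've built):

```python
# Hypothetical sketch: pose the same question under several framings
# and collect the answers for side-by-side comparison.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

FRAMINGS = [
    "Answer as a skeptic looking for flaws in the idea.",
    "Answer as an advocate making the strongest case for the idea.",
    "Answer as a neutral analyst, giving rough numeric estimates where possible.",
]

def ask_from_angles(question: str) -> dict[str, str]:
    """Return {framing: answer} for each framing of the same question."""
    answers = {}
    for framing in FRAMINGS:
        response = client.messages.create(
            model="claude-3-5-sonnet-latest",  # illustrative model choice
            max_tokens=500,
            system=framing,
            messages=[{"role": "user", "content": question}],
        )
        answers[framing] = response.content[0].text
    return answers

for framing, answer in ask_from_angles(
    "Is funding diversification worth the overhead?"
).items():
    print(f"--- {framing}\n{answer}\n")
```

The interesting part would be the comparison step at the end; diffing or summarizing the disagreements between answers is where the real tooling effort would go.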

Quickly:
1. Some of this gets into semantics. There are some things that were more "key inspirations for what was formally called EA" and other things that "were formally called EA, or called themselves EA." GiveWell was highly influential around EA, but I think it was created before the term "EA" was coined, and I don't think it publicly associated as "EA" for some time (if ever).
2. I think we're straying from the main topic at this point. One issue is that while I think we disagree on some of the details/semantics of early EA, I also don't think that matters much for the greater issue at hand. "The specific reason why the EA community technically started" is pretty different from "what people in this scene currently care about."

When having conversations with people who are hard to reach, it's easy for discussions to take ages.

One thing I've tried is having a brief back-and-forth with Claude, asking it to provide all the key arguments against my position. Then I make the conversation public, send the other person a link to the chat, and ask them to read it. I find that this can get through a lot of the opening points on complex topics with minimal human involvement.

I often second-guess my EA Forum comments with Claude, especially when someone mentions a disagreement that doesn't make sense to me.

When doing this, I try to ask it to be honest / not sycophantic, but this only helps so much, so I'm curious about better prompts for preventing sycophancy.

I imagine at some point all my content could go through a "can I convince an LLM that this is reasonable and not inflammatory" filter. But a lower bar is just doing this for specific comments that are particularly contentious or argumentative.

This is pretty basic, but seems effective.
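Here's a minimal sketch of what that filter could look like, again with the Anthropic Python SDK. The model name, system prompt, and OK/REVISE output convention are all just my illustrative choices, not a tested setup:

```python
# Hypothetical sketch of a pre-posting filter: ask Claude whether a draft
# comment is reasonable and non-inflammatory, with instructions intended
# to discourage sycophantic agreement.
import anthropic

client = anthropic.Anthropic()

SYSTEM = (
    "You are a blunt editor. Do not flatter the author or soften criticism. "
    "Judge only whether the draft is reasonable and non-inflammatory."
)

def review_comment(draft: str) -> str:
    """Return Claude's verdict on a draft comment, starting with OK or REVISE."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model choice
        max_tokens=400,
        system=SYSTEM,
        messages=[{
            "role": "user",
            "content": (
                "Review this draft forum comment. Start your reply with OK or "
                "REVISE, then list the weakest claims and any phrasing likely "
                "to read as inflammatory:\n\n" + draft
            ),
        }],
    )
    return response.content[0].text

print(review_comment(
    "Everyone who disagrees with me on this is obviously not thinking clearly."
))
```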

In the Claude settings, you can provide a system prompt. Here's a slightly edited version of the one I use. While short, I've found that it generally improves conversations for me. Specifically, I like that Claude becomes very eager to estimate things numerically. One weird but minor downside is that it will sometimes randomly bring up items from the prompt in conversation, like, "I suggest writing that down, using your Glove80 keyboard."

> I'm a 34yr old male, into effective altruism, rationality, transhumanism, uncertainty quantification, Monte Carlo analysis, TTRPGs, and cost-benefit analysis. I blog a lot on Facebook and the EA Forum.
>
> Ozzie Gooen, executive director of the Quantified Uncertainty Research Institute.
>
> 163lb, 5'10", generally healthy, have RSI issues.
>
> I work remotely, often at cafes and the FAR Labs office space.
>
> I very much appreciate it when you can answer questions by providing cost-benefit analyses and other numeric estimates. Use probability ranges where appropriate.
>
> Equipment includes: MacBook, iPhone 14, AirPods Pro 2nd gen, Apple Studio Display, an extra small monitor, some light gym equipment, Quest 3, Theragun, AirTags, Glove80 keyboard using Colemak DH, ergo mouse, Magic Trackpad, Connect EX-5 bike, inexpensive rowing machine.
>
> Heavy user of VS Code, Firefox, Zoom, Discord, Slack, YouTube, YouTube Music, Bear (note-taking), Cursor, Athlytic, Bevel.

I think you bring up a bunch of good points. I'd hope that any concrete steps on this would take these sorts of considerations into account.

> The concerns implied by that statement aren't really fixable by the community funding discrete programs, or even by shelving discrete programs altogether. Not being the flagship EA organization's predominant donor may not be sufficient for getting reputational distance from that sort of thing, but it's probably a necessary condition.

I wasn't claiming that this funding change would fix all of OP/GV's concerns. I assume that would take a great deal of work, across many different projects/initiatives.

One thing I care about is that someone is paid to start thinking about this critically and extensively, and I imagine they'd be more effective if they weren't under the OP umbrella. So one of the early steps is just trying to find a setup that could help figure out future steps.

> I speculate that other concerns may be about the way certain core programs are run -- e.g., I would not be too surprised to hear that OP/GV would rather not have particular controversial content allowed on the Forum, or have advocates for certain political positions admitted to EAGs, or whatever.

I think this raises an important and somewhat awkward point: more separation between EA and OP/GV would make it harder for OP/GV to have as much control over these areas, and there would be times when they wouldn't be as happy with the results.

Of course:
1. If this is the case, it implies that the EA community does want some concretely different things, which, from the community's standpoint, would make this kind of funding more appealing.
2. In the big picture, OP/GV doesn't seem to want to be held responsible for the EA community. Ultimately there's a conflict here: on one hand, they don't want to be seen as responsible for the EA community; on the other, they might prefer situations where they have a very large amount of control over it. I hope it can be understood that these two desires can't easily go together. Perhaps they won't be willing to compromise on the latter while still complaining about the former. That might well happen, but I'd hope a better arrangement could be made.

>  OP/GV is usually a pretty responsible funder, so the odds of them suddenly defunding CEA without providing some sort of notice and transitional funding seems low.

I largely agree. That said, if I were CEA, I'd still feel fairly uncomfortable. When the vast majority of your funding comes from any one donor, you'll need to place a whole lot of trust in them.

I'd imagine that if I were working within CEA, I'd be extremely cautious not to upset OP or GV. I'd also expect this to mess with my epistemics/communication/actions.

Also, of course, I'd flag that the world can change quickly. Maybe Trump will go on a push against EA one day, and put OP in an awkward spot, for example. 
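As a toy illustration of why donor concentration worries me (all numbers here are made up): compare the chance of losing more than half your budget in a year with one dominant donor versus five equal donors, each independently continuing with probability 0.9.

```python
# Hypothetical toy model: funding risk under donor concentration.
# Assumes each donor independently continues next year with probability 0.9;
# "trouble" means losing more than half the budget. Numbers are illustrative.
import random

def trouble_probability(donor_shares, p_continue=0.9, trials=100_000):
    """Monte Carlo estimate of P(losing more than half the budget in one year)."""
    bad = 0
    for _ in range(trials):
        lost = sum(share for share in donor_shares if random.random() > p_continue)
        if lost > 0.5:
            bad += 1
    return bad / trials

print(trouble_probability([1.0]))      # one donor covering 100%: ~0.10
print(trouble_probability([0.2] * 5))  # five donors at 20% each: ~0.009
```

Even with identical per-donor reliability, diversification cuts the catastrophic-loss probability by roughly an order of magnitude in this toy setup.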
