
hbesceli


I liked this comment and thought it raised a bunch of interesting points, thanks for writing it. 

> Putting this author aside, it seems like many of the folk who talk about this stuff are merely engaging in self-absorbed obscurantism.

I had a bit of a negative reaction to this comment - it seems a bit uncharitable to me.

Thanks for writing this! I'm curious how important you think the selection effects are here - eg. people who have had worse experiences with EA being less likely to fill out the survey.

My initial guess would be that EA is roughly net neutral for people in terms of mental health, and because I expect the selection effects in who takes the survey to be fairly strong, I don't update much on seeing the above data.

Thanks for your work on this! 

I think I mildly prefer the older landing page (sorry!). 

The newer one feels more shiny, in a way that appeals to me a bit less. Trying to spell out what it is that appeals to me less:
- In terms of the vibe it feels more professionalismy, status signallingy, corporate, respectable or something. (I don't think it's entirely fair to describe the new website as these things, but it does at least feel like the new website is more in this direction relative to the old). 
- I'm remembering Sarah Constantin's article on Ra as I write this, which I think gestures towards what I like less. 
- I feel a bit of an ick about the 'featured in' section (which has logos of eg. the BBC, NYT etc.). I'm not entirely sure why. Maybe because it feels like there's a subtle implication of 'if you respect these institutions/ brands, then you'll like this effective altruism thing'. And I'm like 'huh, I'm not sure how much I do respect these institutions/ brands'... idk. (Also, 'featured in' feels like an odd way to put it, as a bunch of these places have written very uncharitable things about EA.)
- From the new website, I get a bit more of a sense that someone is trying to sell something to me. Like, it seems like the kind of website I'd expect from a corporation that wants me to buy their product, and less like the kind of website I'd expect from something that wants to provide me information.
- It may just be that I am into boring websites. For example, my idea of a good time/ a good website is eg. Wikipedia, Stanford Encyclopedia of Philosophy, and Astral Codex Ten. I guess I'm just really into large walls of text. But doubling down on this, I think there is something good about the statement that large walls of text make. It's like 'hey, what we're about is careful thinking and dispassionate reasoning, and so we're going to communicate with you via lots of words and explicit arguments and claims, and not with shiny images and vague associations with established institutions, because we think that in an ideal world people should be persuaded to get involved with a community/ philosophy on the basis of explicit arguments and claims, and not on the basis of shiny images etc.'
- I think the above comments might make it seem like I'm more anti the new website than I actually am. I think it's like if there was a content-to-shiny scale for websites, with Wikipedia at 1 and idk the Adidas website at 10, then I'd put the new website at like a 5.5, and ideally I'd want it to be a 3.5. Maybe even a 3. But also if I were king then I'd want all websites to move a couple of points down on this scale. 
- I like the tagline change and other wording changes! I also like that an essay on what EA is is linked early on, and the highlighting of actions that people have taken.

On this topic, for working out how to react/ respond appropriately to emergencies, I like Anna Salamon's post What should you change in response to an "emergency"? And AI risk.

> Could you expand on why that's the case? Is the idea that you believe those projects are net negative, or that you would rather marginal donations go to animal welfare and the long term future instead of EA infrastructure?

In some cases there are projects that I or other fund managers think are net negative, but this is rare. Often I think the things we decide against funding are net positive, but that they aren't competitive with funding things outside of the EA Infrastructure space (either the other EA Funds or more broadly).

> I think it's a bit weird for donors who want to donate to EA infrastructure projects to see that initiatives like EA Poland are funding constrained while the EA Infrastructure fund isn't

I think it makes sense that there are projects which EAIF decides not to fund, and that other people will still be excited about funding (and in these cases I think it makes sense for people to consider donating to those projects directly). Could you elaborate a bit on what you find weird? 

> and extra donations to the EAIF will likely counterfactually go to other cause areas

I don't think this is the case. Extra donations to EAIF will help us build up more reserves for granting out at a future date. But it's not the case that, if EAIF has more money than we think we can spend well at the moment, we'll start donating it to other cause areas. I might have misunderstood you here?

> A lot of what I have seen regarding "EA Community teams" seems to be about managing conflicts between different individuals.

Not sure I understand this part - curious if you could say more. 

> It would be interesting to see an organization or individual that was explicitly an expert in knowing different individuals and organizations and the projects that they are working on and could potentially connect people who might be able to add value to each other's projects.

I like this idea. A related idea/ framing that comes to mind:

  • There's often a lot of value in people having a strong professional network - eg. for finding collaborators, getting feedback or input, etc.
  • People's skills/ inclination for network building will vary a lot. And I suspect there's a significant fraction of people working on EA projects that have lower network building inclination/ skills, and would benefit from support in building their network.
  • eg. If I could sign up for a service that substantially increased my professional network/ helped me build more valuable professional relationships, I would, and I'd be willing to pay for such a service.

Thanks for the suggestion - I read the proposal a while ago, and hadn't thought about it recently, so it's good to be reminded of it again. 

> The fact that that has not already been funded, and that talk around it has died down, makes me wonder if you have already ruled out funding such a project.

We haven't decided against funding projects like this. (EAIF's grantmaking historically has been very passive - eg. the projects that we end up considering for funding have been determined by the applications we receive. And we haven't received any strong applications in the 'FHI of the West' ballpark, at least as far as I'm aware.)

> as you get better models of the world/ability to get better models of the world, you start noticing things that are inconvenient for others. Some of those inconvenient truths can break coordination games people are playing, and leave them with worse alternatives.

I haven't thought about this particular framing before, and it's interesting to think about - I don't quite have an opinion on it at the moment. Here are some of the things on my mind which feel related to this.

> Perhaps relatedly or perhaps as a non-sequitur, I'm also curious about what changed since your post a year ago talking about how EA doesn't bring out the best in you.


This seems related to me, and I don't have a full answer here, but some things that come to mind:

  • For me personally, I feel a lot happier engaging with EA than I did previously. I don't quite know why this is - I think some combination of: being more selective in terms of what I engage with and how, having a more realistic view of what EA is and what to expect from it, being part of other social environments which I get value from and which make me feel less 'attached' to EA, and my mental health improving. And also perhaps having a stronger view of what I want to be different about EA, and feeling more willing to stand behind that.
  • I still feel pretty wary of the things which I feel EA 'brings out in me' (envy, dismissiveness, self-centredness etc.) which I don't like, and it can still feel like a struggle to avoid the pull of those things.

> Have you messaged people on the EA and epistemics slack?

... there is an EA and epistemics slack?? (cool!) If it's free for anyone to join, would you be able to send me an access link or somesuch?
