GideonF


How I can help others

Reach out to me if you have questions about SRM/Solar geoengineering


Also not Holly, but another response might be the following:

  • Pausing in the very near future is very unlikely without a rise in political salience, and the pause movement gaining large influence is similarly unlikely without such a rise.
  • If a future rise in political salience does occur, it is likely an approximation of a 'pivotal point' (and if it's not, policymakers are unlikely to agree to a pause at a pivotal point anyway).
  • Thus, what advocacy now is predominantly doing is laying the groundwork for a movement/idea that can be influential when the time comes.

I think this approach runs real risks, which I'd be happy to discuss, but it also strikes me as an important response to the Shulman take.

70% agree

There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention

My guess is that pesticides' impact on insect welfare probably falls into this category.

70% disagree

Some Positions In EA Leadership Should Be Elected

We should also think about why we want democracy. Intra-communal democracy is not an inherent good; indeed, the EA community is not here for the sake of the EA community, but to have positive impact. However, we might think that 'democratising' (or whatever we want to call it) can play important ethical or epistemic roles when we think a) diversifying viewpoints is important and b) justification and accountability are important. I think neither of these is best served by elections.

For diversifying viewpoints: our epistemic situation might suggest that a more 'diverse' decision-making body is necessary (perhaps only along certain axes, e.g. expertise, assumptions, political viewpoint/party). I certainly think this is true in a fair few areas EA functions in. However, it isn't clear that elections, which often reward popularity or consensus, actually achieve that. Maybe we'd be better off with some sort of deliberately diverse expert elicitation panel, or simply caring more about (relevant forms of) diversity in our hiring. For example, perhaps grantmakers should make an effort to hire people with experience in conservative policy circles. Or maybe we do this through community-building efforts aimed at a more pluralistic 'community'. Notably, I think EA (or certain parts of it, for example AI) has got much, much better at this in the last few years, such that it's not actually obvious how much concerted effort is needed.

 

Accountability may be another reason. EAs tie a lot of our identity, and much of our professional reputation, to this community. As such, we might want to be able to hold representatives accountable. However, it isn't obvious that we can't trust well-constructed boards to do this, for example. Alternatively, I could imagine a scenario where a certain designated body (say, all people who have attended two EAGs or been employed at a certain list of organisations, etc.) can petition to remove someone from an important leadership role, and if a supermajority votes to remove them, they are removed. But that doesn't really seem like an election.

 

More generally, it just isn't clear to me what sorts of roles we would want elected. The two main levers of power in EA are a) money and b) prestige. A lot of prestige is generated by who speaks at EAGs, appears on the 80,000 Hours podcast, etc., and it's really unlikely that having an elected person make these decisions would actually change very much, or encourage the sorts of outcomes wanted. Maybe there are better ways to harness the collective wisdom of the community in these decisions, but I think they are unlikely to look like elections. And for grantmaking, there also seems minimal reason to hold elections. The main issue with grantmaking in this vicinity is how few grantmakers there are (although this is maybe better than it was), which creates centralisation, and thus likely a sub-optimal tailoring of the landscape to the preferences of existing grantmakers, and a tying of reputations to those grantmakers. This problem is not at all solved by elections, and might get worse rather than better; it is solved by bringing more money from different sources into EA.

I think the best argument for elections is that they would reduce the 'who you know' component of EA. But a) I think this is just a lot better now than it was — as the community has grown, much of this has been adjusted — and b) it's not obvious to me that elections wouldn't optimise for something similar.

I think the argument that insect suffering is of overwhelming importance doesn't actually require pure utilitarianism. It probably works for any form of aggregationist ethics, and maybe even partially aggregationist ethics. Indeed, it's not clear the problem isn't worse under certain formulations of deontological views, where discounting the life of an insect relative to a human would be unacceptable.

Are the annoying happy lightbulbs when you upvote something here to stay, or are they just an April Fool's thing that hasn't been removed yet?

I think you should delete the post and resend it out another day (maybe on the 3rd?)

In fairness to Richard, I think it comes across a lot more strongly in text than, in my view, it came across listening on YouTube.

I really like this piece, and I think I share in a lot of these views. Just on some fairly minor points:

  1. Deep Incommensurability. It seems like incommensurability helps with avoiding MPL, but not actually that much. For example, there seem to be many moral theories (i.e. something somewhat like Person-Affecting Views) that are incommensurable (or indifferent) between worlds of different sizes, but not of different qualities. So they may really care whether it is a world of humans, or insects, or hedonium.

I can imagine views (they do run into non-identity, but maybe there are ways of formulating them that don't) for which this would be a real problem. For example, imagine a view that holds that simulated human existence is the best form of life, but is indifferent between that and non-existence. Such a view won't care whether we leave the universe insentient, but faced with a pairwise choice between hedonium and simulated humans, it will take the simulated humans every time. So it doesn't care much if we go extinct, but does care if the hedonistic utilitarians win. Indeed, these views may be even less willing to take trades than many views that care about quantity. I imagine many religions, particularly universalist religions like Christianity and Islam, may actually fall into this category.

  2. I think some more discussion of the 'kinetics' vs 'equilibrium' point you sort of allude to would be pretty interesting. You could reasonably hold the view that rational beings (or sensing beings, or whatever other sort) converge to moral correctness over infinite time. But we are likely not waiting infinite time before locking in decisions that cannot be reversed. Because irreversible moral decisions could occur at a faster rate than correct moral convergence (i.e. the kinetics of the process matter more than where it would be at equilibrium), we shouldn't expect the equilibrium process to dominate. I think you gesture towards this, but exploring the ordering further would be very interesting.

  3. I also wonder whether views that are pluralist rather than monist about value make the MPL problem worse or better. I could see arguments either way, depending on exactly how those views are formulated, but it would be interesting to explore.

A very interesting piece anyway — thanks a lot; it really resonates with a lot I've been thinking about.

I'm sure I'll have a few more comments at some point as I revisit the essay.

Yeah, I might be wrong, but something like Larry Temkin's model might work best here (it's been a while since I read it, so I may be getting it wrong).

I think averageists may actually also care about the long-term future a lot, and it may still have an MPL for them if they don't hold (rapidly) diminishing returns to utility within lives (i.e. it is possible for the average life to be a lot worse or a lot better than today). Indeed, given (potentially) plausible views on interspecies welfare comparisons, and how bad the lives of lots of non-humans seem today, this just does seem to be true. Now, it's not clear they shouldn't be at least a little more sympathetic to us converging on the 'right' world (since it seems easier), but they don't seem to escape much of the argument either.
