Rían O'M

@ Arb Research
182 karma · Pursuing an undergraduate degree · Working (0-5 years)

Assuming there will continue to be three EAG-like conferences each year, these should all be replaced by conferences framed around specific cause areas/subtopics rather than about EA in general (e.g., by having two conferences on x-risk or AI-risk and a third one on GHW/FAW)

 

Is this not already the case? I.e. don't the major EAGs already focus on specific cause areas?

Vote power should scale with karma

 

On groupthink: I think this worry can mostly be ignored if the elite-karma accounts have sufficiently diverse views. That being the case would mean that (a) diverse views aren't obviously being punished and (b) the voters with the most individual leverage are less likely to all vote in the same direction. If the top-karma accounts were all aligned in how they vote, or were even colluding to suppress comments/posts, then the downsides of groupthink would be more pronounced.

I guess it feels like this should be a testable claim: do the most upvoted posts/comments conform to the views of the highest-karma users? Given how diverse the viewpoints of the 5000+ karma users are (even just the top twenty), I'm not even sure there is a single coherent view among the karma-elite.

Among the karma-elite are a few OpenPhil / CEA accounts, whilst the second-highest-karma account is Habryka (arguably OpenPhil's antichrist[1] and one of CEA's biggest critics).

I haven't really thought about all the angles though, and can imagine something like "the EA Forum's voting overly favours the bucket of people who have ~1000 karma and use the forum every day."

Probably the risk of groupthink is more dependent on who the forum users are than on the specific mechanics of the forum.
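For what it's worth, here is a minimal sketch of how the "testable claim" above could be checked, assuming one somehow had per-user vote data (which isn't public on the forum); the voter records, usernames, and the 5000-karma cutoff are all hypothetical:

```python
from collections import defaultdict

# Each record: (voter_id, voter_karma, post_id, vote), where vote is +1 or -1.
# These rows are invented; real per-user vote data is not publicly available.
votes = [
    ("user_a", 12000, "post_1", +1),
    ("user_b", 8000, "post_1", +1),
    ("user_c", 150, "post_1", -1),
    ("user_a", 12000, "post_2", -1),
    ("user_c", 150, "post_2", +1),
    ("user_d", 90, "post_2", +1),
]

KARMA_ELITE_THRESHOLD = 5000  # the "5000+ karma" group mentioned above

# Net vote per post (ignoring karma-weighted vote power for simplicity).
net_score = defaultdict(int)
for _, _, post, vote in votes:
    net_score[post] += vote

# How often does an elite account's vote point the same way as the post's final score?
agreements, total = 0, 0
for _, karma, post, vote in votes:
    if karma >= KARMA_ELITE_THRESHOLD and net_score[post] != 0:
        total += 1
        agreements += int(vote * net_score[post] > 0)

print(f"Elite-vote / outcome agreement rate: {agreements}/{total}")
```

A high agreement rate on its own wouldn't show groupthink, of course, since agreement could just reflect shared judgement about quality; it would only tell you whether outcomes track the karma-elite at all.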

  1. ^

    I'm using this phrase as a tongue-in-cheek reference to Habryka hailing a kind of end time for OpenPhil: "I think OpenPhil is kind of, like, dead in the sense of the historical role it has played within the AI safety and EA ecosystem."

And if he is still giving money to official EA causes, it should be loudly and swiftly returned.

 

In cases where a person has donated money they secured through crime, it seems right to reject it, but rejecting someone's money because one doesn't like their politics seems like a bad idea.

Suppose a hypothetical medicine-distribution charity that had been funded by Musk announced they would stop accepting his donations and would distribute fewer pills as a result. What exactly would this achieve? Maybe they would succeed in pleasing people who share their politics, but their beneficiaries (the very people they are supposed to help) would suffer.

I personally think it would be good if people whose politics I dislike donated more to EA causes.

People frequently do things like taking Rethink's moral weights project (which kinda skips over a lot of hard philosophical problems about measurement and what we can learn from animal behavior, and goes all-in on a simple perspective of total hedonic utilitarianism which I think is useful but not ultimately correct), and just treat the numbers as if they are unvarnished truth

Can you point to specific cases of that happening? I haven't seen this happen before. My sense is that most people who quote Rethink's moral weights project are familiar with its limitations.

The animal welfare side of things feels less truthseeking, more activist, than other parts of EA

Can you say more on this? 

This makes me think that countries that don't yet have an entrenched factory-farming lobby/industry would benefit from advocacy groups similar to the Shrimp Welfare Project (which works in the relevant countries with stakeholders to improve the wellbeing of farmed animals).

I began wondering if any org was approaching this similarly to SWP. There seem to be two EA groups working on this:

It would have made sense for there to be a bit more discussion about ethical side-constraints, but including transparency in the list of core principles would honestly be just weird because transparency isn't distinctly EA. Beyond that, the importance of transparency is significantly complicated by the concept of infohazards in areas like biohazards or AI safety. I really don't see it as CEA's role to take a side in these debates. I think it makes sense for CEA to embrace transparency as a key organisational value, but it's not a core principle of EA in general and we should accept that different orgs will occupy different positions on the spectrum.

I agree that absolute transparency is not ideal. That said, there is a version of transparency (i.e. 'reasoning transparency') that is a somewhat distinct EA value.

is filled with bizarre factual errors, one of which was so egregious that it merited a connection.

Small nitpick: either this is a typo, or 'connection' means something I'm not familiar with in this context.

I don't think the global optimal solution is an EA forum that's a cuddly little safe space for me.

I agree with this, but also think the forum "not being cuddly for Sean" and "not driving contributors away" aren't mutually exclusive. Maybe I am not seeing all the tradeoffs though. 

I am going to engage less with EA forum/LW as a result of this and a few similar interactions, and I am especially going to be more hesitant to be critical of EA/LW sacred cows.

This makes me sad, as I enjoy reading your comments and find them insightful. That said, I understand and support your reasoning. I feel as though some of the "mistake mindset" has disappeared in the two years I've been reading the forum.
