The recent FTX scandal has, I think, caused a major dent in the confidence many in the EA community have in our leadership. It seems increasingly less obvious to me that the control of much of EA by a narrow group of funders and thought leaders is the best way for this community, full of smart and passionate people, to do good in the world. My assumption had been that we defer a lot of power (intellectual, social and financial) to a small group of broadly unaccountable, non-transparent people because they are uniquely good at making decisions, noticing risks to the EA enterprise and combatting them, and that this unique competence is what justifies the power structures we have in EA. A series of failures by the community this year, including the Carrick Flynn campaign and now the FTX scandal, has shattered my confidence in this group. I really think EA is amazing, and I am proud to be on the committee of EA Oxford (this post represents my own views), to have been a summer research fellow at CERI and to have spoken at EAGx Rotterdam; my confidence in the EA leadership, however, is exceptionally low, and I think having answers to some of these questions would be very useful.
An aside: maybe I’m wrong about power structures in EA being unaccountable, centralised and non-transparent. If so, the fact it feels like that is also a sign something is going wrong.
Thus, I have a number of questions for the "leadership group" about how decisions are made in EA and the rationale behind them. This list is neither exhaustive nor meant as an attack; there may well be innocuous answers to many of these questions. Moreover, not all of them are linked to SBF and that scandal, and many probably have perfectly rational explanations.
Nonetheless, I think now is the appropriate time to ask difficult questions of the EA leadership, so this is simply my list of such questions. I apologise if people take offence at any of these (I know it is a difficult time for everyone), as I am sure we really are all trying our best; nonetheless, I think we can only have as positive an impact as possible if we are genuinely willing to examine ourselves and see what we have done wrong.
- Who is invited to the coordination forum and who attends? What sort of decisions are made? How does the coordination forum impact the direction the community moves in? Who decides who goes to the coordination forum? How? What's the rationale for keeping the attendees of the coordination forum secret (or is it not purposeful)?
- Which senior decision makers in EA played a part in the decision to make the Carrick Flynn campaign happen? Did any express the desire for it not to? [The following question has been answered] Who signed off on the decision to make the campaign manager someone with no political experience? (Edit: I have now received information that the campaign did its own hiring of a campaign manager and had experienced consultants assist throughout the campaign. So, whether I agree with this or not, it seems the campaign manager question is quite different from the issues I raise elsewhere in this post.)
- Why did Will MacAskill introduce Sam Bankman-Fried to Elon Musk with the intention of getting SBF to help Elon buy Twitter? What was the rationale for thinking this would be a cost-effective use of $8-15 billion? Who else was consulted on this?
- Why did Will MacAskill choose not to take on board any of the suggestions of Zoe Cremer that she set out when she met with him?
- Will MacAskill has expressed public discomfort with the degree of hero-worship towards him. What steps has he taken to reduce this? What plans have decision makers tried to enact to reduce the amount of hero worship in EA?
- The EA community prides itself on being an open forum for discussion without fear of reprisal for disagreement. A very large number of people in the community, however, do not feel it is, and feel pressure to conform and not to express their disagreement with the community, with senior leaders, or even with lower-level community builders. Have there been discussions within the community health team about how to deal with this? What approaches are they taking community-wide, rather than just dealing with ad hoc incidents?
- A number of people have expressed suspicion or worry that they have been rejected from grants because of publicly expressing disagreements with EA. Has this ever been part of the rationale for rejecting someone from a grant?
- FTX Future Fund decided to fund me on a project working on SRM and GCR, but refused to publicise it on their website. How many other projects were funded but not publicly disclosed? Why did they decide to not disclose such funding?
- What sort of coordination, if any, goes on around which EAs talk to the media, write highly publicised books, go in curricula etc? What is the decision making procedure like?
- The image, both internally and externally, of SBF was that he lived a frugal lifestyle, which it turns out was completely untrue (and not majorly secret). Was this known when Rob Wiblin interviewed SBF on the 80000 Hours podcast and held up SBF for his frugality?
I don't think I am a great representative of EA leadership, given my somewhat bumpy relationship with and feelings about a lot of EA stuff, but I nevertheless think I have a bunch of the answers you are looking for:
The Coordination Forum is a very loosely structured retreat that's been happening around once a year. At least the last two that I attended were structured completely as an unconference with no official agenda, and the attendees just figured out themselves who to talk to, and organically wrote memos and put sessions on a shared schedule.
At least as far as I can tell, basically no decisions get made at Coordination Forum; its primary purpose is building trust and digging into gnarly disagreements between different people who are active in EA community building, and who seem to get along well with the others attending (with some bal...
The $15 billion figure comes from Will's text messages themselves (pages 6-7). Will sends Elon a text about how SBF could be interested in going in on Twitter; then Elon Musk asks, "Does he have huge amounts of money?" and Will replies, "Depends on how you define 'huge.' He's worth $24B, and his early employees (with shared values) bump that up to $30B. I asked how much he could in principle contribute and he said: '~1-3 billion would be easy, 3-8 billion I could do, ~8-15b is maybe possible but would require financing.'"
It seems weird to me that EAs would think going in with Musk on a Twitter deal would be worth $3-10 billion, let alone up to 15 (especially of money that at the ti...
Makes sense, I think I briefly saw that, and interpreted the last section as basically saying "ok, more than 8b will be difficult", but the literal text does seem like it was trying to make $8b+ more plausible.
If you think investing in Twitter is close to neutral from an investment perspective (maybe reasonable at the time, definitely not by the time Musk was forced to close) then the opportunity cost isn't really billions of dollars. Possibly this would have been an example of marginal charity.
I can see where you're coming from with this, and I think purely financially you're right, it doesn't make sense to think of it as billions of dollars 'down the drain.'
However, if I were to do a full analysis of this (in the framing of this being a decision based on an EA perspective), I would want to ask some non-financial questions too, such as:
My reading (and of course I could be completely wrong) is that SBF wanted to invest in Twitter (he seems to have subsequently pitched the same deal through Michael Grimes), and Will was helping him out. I don't imagine Will felt it was any of his business to advise SBF as to whether or not this was a good move. And I imagine SBF expected the deal to make money, and therefore not to have any cost for his intended giving.
Part of the issue here is that people have been accounting the bulk of SBF's net worth as "EA money". If you phrase the question as "Should EA invest in Twitter?" the answer is no. EA should probably also not invest in Robinhood or SRM. If SBF's assets truly were EA assets, we ought to have liquidated them long ago and either spent them or invested them reasonably. But they weren't.
It's hard to read the proposal as only being motivated by a good business investment, because Will says in his opening DM:
[sorry for multiple comments, seems better to split out separate points]
Investing in assets expected to appreciate can be a form of earning to give (not that Twitter would be a good investment, IMO). That's how Warren Buffett makes money, and probably nobody in EA has criticised him for doing that. Investing in a for-profit venture is very different from donating and is guided by different principles, because you expect to (at least) get your money back and can invest it again or donate it later (this difference is one of the reasons microloans became so hugely popular for a while).
On the downside, concentrating assets (in any company, not just Twitter) is a bad financial strategy, but on the upside, having some influence at Twitter could be useful to promote things like moderation rules that improve the experience of users and increase the prevalence of genuine debate and other good things on the platform.
Hi Oli — I was very saddened to hear that you thought the most likely explanation for the discussion of frugality in my interview with Sam was that I was deliberately seeking to mislead the audience.
I had no intention to mislead people into thinking Sam was more frugal than he was. I simply believed the reporting I had read about him and he didn’t contradict me.
It’s only in recent weeks that I learned that some folks such as you thought the impression about his lifestyle was misleading, notwithstanding Sam's reference to 'nice apartments' in the interview:
Unfortunately as far as I can remember nobody else reached out to me after the podcast to correct the record either.
In recent years, in pursuit of better work-life balance, I’ve been spending less time socialising with people involved in the EA community, and when I do, I discuss work with them much less than in the past. I also last visited the SF Bay Area way back in 2019 and am certainly not part of the 'crypto' social scene. That may help to explain why this issue never came up in casual conversation.
Inasmuch as the interview gave listeners a false impression about Sam I am sorry about that, because we of course aim for the podcast to be as informative and accurate as possible.
Separately from the FTX issue, I'd be curious about you dissecting what of Zoe's ideas you think are worth implementing and what would be worse and why.
My takes:
- Set up whistleblower protection schemes for members of EA organisations => seems pretty good if there is a public commitment from an EA funder to something like "if you whistleblow, we'll cover your salary if you are fired while you search for another job" or something like that
- Transparent listing of funding sources on each website of each institution => Seems good to keep track of who receives money from who
- Detailed and comprehensive conflict of interest reporting in grant giving => My sense is that this is already handled sensibly enough, though I don't have great insight into grant-giving institutions
- Within the next 5 years, each EA institution should reduce their reliance on EA funding sources by 50% => this seems bad for incentives and complicated to put into action
- Within 5 years: EA funding decisions are made collectively => seems like it would increase friction and likely decrease the quality of the decisions, though I am willing to be proven wrong
- No fireside chats at EAG with leaders. I
I think I am across the board a bit more negative than this, but yeah, this assessment seems approximately correct to me.
On the whistleblower protections: I think real whistleblower protection would be great, but setting this up is actually really hard, and it's very common in the real world that institutions like this end up as traps that are net-negative and get captured by bad actors in ways that strengthen the problems they are trying to fix.
As examples: many university health departments are basically traps where, if you go to them, they expel you from the university because you have outed yourself as not mentally stable. Many PR departments are traps that will report your complaints to management and identify you as a dissenter. Many regulatory bodies are weapons that bad actors use to build moats around their products (indeed, it looks like crypto regulatory bodies in the U.S. ended up being played by SBF, and were one of the main tools he used against his competitors). Many community dispute committees end up being misled and siding with perpetrators instead of victims (a lesson the rationality community learned from the Brent situation).
I think it's possible to set up good institutions like this, but rushing towards it is quite dangerous and in-expectation bad, and the details of how you do it really matter (and IMO it's better to do nothing here than to attempt it without trying exceptionally hard to make it go well).
It seems worth noting that UK employment law has provisions to protect whistleblowers and for this reason (if not others) all UK employers should have whistleblowing policies. I tend to assume that EA orgs based in the UK are compliant with their obligations as employers and therefore do have such policies. Some caution would be needed in setting up additional protections, e.g. since nobody should ever be fired for whistleblowing, why would you have a policy to support people who were?
In practice, I notice two problems. Firstly, management (particularly in small organisations) frequently circumvent policies they experience as bureaucratic restrictions on their ability to manage. Secondly, disgruntled employees seek ways to express what are really personal grievances as blowing the whistle.
I would add that SBF and people around him decided to invest a lot of resources into this. As far as I can tell, he didn't seem interested in people's thoughts on whether this is a good idea. Most EAs thought it wasn't wise to spend so much on the campaign.
I also just made an edit after reflecting a bit more on it and talking to some other people:
Strong upvote here. I really like how you calmly assessed each of these in a way that feels very honest and has an all-cards-on-the-table feel to it. Some may still have reservations about your comments, given that you seem to at least somewhat fit this picture of EA leadership, but that feels largely indicative of a general anger at the circumstances turned inwards towards EA, in a way that seems rather unhealthy. I certainly appreciate the OP, as this does seem like a moment ripe for asking important questions that need answers; but don't forget that those in leadership are humans who make mistakes too, and are generally people who seem really committed to trying to do what everyone in EA is: make the world a better place.
I think it's right that those in leadership are humans who make mistakes, and I am sure they are generally committed to EA; in fact, many have been real inspirations to me. Nonetheless, as a movement we were founded on the idea that good intentions are not enough, and somewhere this seems to be getting lost. I have no pretensions that I would do a better job in leadership than these people; rather, I think the way EA concentrates power (formally, and even more so informally) in a relatively small and opaque leadership group is problematic. To justify this, we would need these decision makers to be superhuman, like Plato's philosopher king. But they are not; they are just human.
Why?
A few things (I will reply in more detail in the morning once I have worked out how to link to specific parts of your text in my comment). These comments may appear a bit blunt, and I do apologise; they are blunt for clarity's sake rather than to imply aggressiveness or rudeness.
- With regards to the Coordination Forum: even if no "official decisions" get made, how much influence do you think it has over the overall direction of the movement? And why are the attendees not public? If the point is to build trust between community builders and to understand the core gnarly disagreements, why are the attendee list and proceedings kept so secret?
- Your Carrick Flynn answer didn't really tell me which senior EA leaders, if any, encouraged Carrick to run or knew before he announced, which is something I think is important to know. It also doesn't explain the decision around the choice of campaign manager.
- With regards to buying twitter: whilst it is Will's right to do whatever he wants, it really does call into question whether it is correct for him to be the "leader of EA" (or EA to have a defacto leader in such a way). If he has that role, surely he has certain
Definitely not 2 orders of magnitude too much.
The book was, in Will's words, "a decade of work", with a large number of people helping to write it and a moderately large team promoting it (who did an awesome job!). There were certainly a lot of adverts around London for the book, and Will flew around the world to promote it. I would be hugely surprised if the budget was under $1 million (I know of projects run by undergraduates with budgets over a million!), and to be honest $10 million seems to me in the right ballpark. Things just cost a lot of money, and you don't promote a book for free!
Why would a retraction be misleading? A valid reason for retracting a statement is failure to verify it. There is no indication in these cases that the statement is false.
If someone can't provide any evidence for a claim that very likely traces back to Emile Torres, and they can't be bothered to send a one-line email to Will's team asking for confirmation, then it seems natural to ask this person to take back the claim. But I'm also okay with an edit to the original comment along the lines you suggest.
My claims evoke cringe from some readers on this forum, I believe, so I can supply some examples:
(This is an annoyed post. Having re-read it, I think it's mostly not mean, but please downvote it if you think it is mean and I'll delete it.)
I have a pretty negative reaction to this post, and a number of similar others in this vein. Maybe I should write a longer post on this, but my general observation is that many people have suddenly started looking for the "adults in the room", mostly so that they can say "why didn't the adults prevent this bad thing from happening?", and that they have decided that "EA Leadership" are the adults.
But I'm not sure "EA Leadership" is really a thing, since EA is a movement of all kinds of people doing all kinds of things, and so "EA Leadership" fails to identify specific people who actually have any responsibility towards you. The result is that these kinds of questions end up either being vague or suggesting some kind of mysterious shadowy council of "EA Leaders" who are secretly doing naughty things.
It gets worse! When people do look for an identifiable figure to blame, the only person who looks vaguely like a leader is Will, so they pick on him. But Will is not the CEO of EA! He's a philosopher who writes books about EA and has received ...
If I want EA to become less decentralized and have some sort of internal political system, what can I do?
I have zero power or status or ability to influence people beyond persuasive argumentation. On the other hand, MacAskill and co. have a huge ability to do so.
The idea that we can't blame the high-status people in this community because they aren't de jure leaders when it's incredibly likely they are the only people who could facilitate a system in which there are de jure leaders seems misguided. I'm not especially interested in assigning blame but when you ask the question who could make significant change to the culture or structure of EA I do think the answer falls on the thought leaders, even if they don't have official positions.
I don't think it's mean, and I don't think you should delete it (and clearly many others think it's a good comment). However, I strongly disagree with the claim that EA leadership isn't really a thing. I'll also aim to explain why I think why asking questions directed at "EA leadership" is reasonable to me, even if they may not be to you.
The Coordination Forum literally used to be called the "Leaders Forum". The description of the first Coordination Forum was literally "leaders and experienced staff from established EA organizations". The Centre for Effective Altruism organizes events called "Effective Altruism Global" and has the ability to prevent people from attending community events, or to very strongly recommend that organizers not allow them in.
If you have spent millions of dollars on a PR campaign for your book and are seen as the public face of EA, people who self-identify as EA a...
I think that in a relevant sense, there is an EA Leadership, even if EA isn't an organisation. E.g. CEA/EV has been set up to have a central place in the community, and runs many coordinating functions, including the EA Forum, EA Global, the community health team, etc. Plus it publishes much of the key content. I think this comment overstates how decentralised the EA community is (for better or worse).
CEA/EV can prevent people from coming to the most important in-person meetups (EAG) and from participating in the most important EA online space (the EA Forum). In that sense, they're not just offering services, but have a lot of power. (That power also manifests itself in many other ways, including ways that are more directly relevant to the subject of the post.) And with that power comes responsibility.
I think in some important cases there really are leaders, or at least people in positions of extreme responsibility, who could've done more. In terms of letting SBF stay in the EA community after the Alameda incident in 2018, that seems like it might've been a failure of information sharing (e.g.), if not an outright failure of e.g the Community Health team at CEA. If it was largely just a failure of information sharing, then that in turn could be a failure of EA culture (too much deference, worrying about prestige and PR, and Ra), for which thought leaders could be in part responsible. (To be clear, I'm not saying I would've done any better if I was in such a position of responsibility, or a thought leader. And maybe no one could reasonably have been expected to have done better, given all the tradeoffs involved.)
People in charge of granting $100Ms-$Bs of EA money. See my link to: Why didn't the FTX Foundation secure its bag?
Disowned him (publicly). Not laud him as a paragon of virtue in earning-to-give. Not invite him to speak at EA conferences. (As I say, I get that there might've been a failure of communication amongst people in the know, but it looks pretty bad that it was known to at least some influential people that Sam was not someone to be trusted.)
The first group of people are not the people who took the latter group of actions.
I'm being picky here, but my point is that people are being very woolly about this idea of "EA Leadership". The FTX Foundation team and the 80k team are different people, not arms of the amorphous "EA Leadership". So maybe the FTX Foundation team shouldn't have lauded SBF - but they didn't, that was someone else.
This is again where being specific matters. "The FTX Foundation team should have done more due diligence before agreeing to work with SBF" is at least a reasonable, specific, criticism that relates to the specific responsibilities those people might have. "Why did EA Leadership not Do Something?" is not.
Thanks for the question Gideon, I'll just respond to this question directed at me personally.
When preparing for the interview I read about his frugal lifestyle in multiple media profiles of Sam and sadly simply accepted it at face value. One that has stuck in my mind up until now was this video that features Sam and the Toyota Corolla that he (supposedly) drove.
I can't recall anyone telling me that that was not the case, even after the interview went out, so I still would have assumed it was true two weeks ago.
Is this actually true right now? People donating to EA Funds seem like an example of deferring financial decisions, but I don't have data how EAs donate to the Funds vs. decide themselves where to donate. Or do you mean decisions like relying on GiveWell recommendations as an example of 'deferring financial power'?
I am also not sure how the EA Community compares to other movements. Is your claim that EA is worse at this than comparable movements or that we should hold ourselves to a higher standard?
I have mixed feelings about your post overall. If people defer decision-making power to "the leadership" then it's good to ask these questions. But mostly I see individuals making decisions for themselves. If others think the decisions are bad, they don't have to admire "the leadership" for it.
The vast bulk of funds in EA (OpenPhil and, until last week, FTX Future Fund) are controlled by very few people (financial). As is admission to EA Global (social). Intellectual direction is more open with e.g. the EA Forum, but things like big book projects and their promotion (The Precipice, WWOTF) are pretty centralised, as is media engagement in general.
The FTX Future Fund had a large regranter program. They didn't fully let regranters do whatever they wanted with funds, but I think it's incorrect to say that it's controlled by very few people.
Ultimately the Future Fund had veto power over regranters (even those with their own pots), [edit:] so I think it's inaccurate to say that the regranters had control of the funds (influence, sure; but not control).
I'm somewhat perturbed by the ratio of karma on these comments (especially agreement karma, although the sample size is low: mine has only 1 vote on agreement and 5 votes on karma; see pic below for the state at the time of writing this comment)[1]. We've just found out that we've in general been way too trusting as a community, and could do with more oversight (although I guess it's open to discussion how much decentralisation of decision making is ideal; see below). The fact that regranters could influence the Future Fund's grantmaking was great, but we shouldn't confuse that with actual control. What ultimately matters is what is true from a mechanistic legal perspective: where the buck actually stops, and who is actually in charge of authorising grants. For the Future Fund, that was 5 people (who presumably could in turn have been vetoed by the 4 on the board).
The next step for a regranting program in terms of actually distributing control would be to actually give the regranters the money, to do whatever they saw fit with it. I can imagine many people screaming in horror at the thought, especially those in central positions who think that they are the best experts on avoiding the un...
I was a regranter. I did not have my own pot, but could make recommendations for grants. 52% of my regrants (11/21) were approved (32% by $ value). I understand that those with their own pots allocated to them had a lower bar for acceptance so probably had a better success rate for approvals.
I've been trying to find people willing and able to write quality books and have had a hard time finding anyone. "Doing Doing Good Better Better" seems one of the highest-EV projects, and EA Funds (during my tenure) received basically no book proposals, as far as I can remember. I'd love to help throw a lot of resources after an upcoming book project by someone competent who isn't established in the community yet.
Even the forum is organised so as to promote posts from people with large networks of high-upvoted people, which de facto means that core network of people pretty much get auto-highlighted for posting their shopping list.
I'm really surprised anyone is even super confident that the Carrick Flynn campaign made major mistakes (or was a major mistake to attempt), much less that anyone thinks of the campaign as "a confidence-shattering failure" about EA as a whole. I feel like I must be missing something very basic that's in other people's models. Or maybe a lot of people were just very emotionally invested in that primary race?
There are probably things that could have been done better in the campaign, especially with the benefit of hindsight and experience. But getting a member of a weird new niche academic philosophy movement elected to the US House of Representatives isn't the sort of thing I expect to have a >50% success rate, even if we try our hardest. And Flynn did pretty well in the polls, and would have won the primary if he'd peeled off ~5500 votes (9% of all votes cast) from Salinas.
That's a good enough showing that I expect there are a lot of nearby worlds where Flynn wins, and I'd happily give it another attempt if I could travel back in time, even ...
On Flynn Campaign: I don't know if it's "a catastrophe" but I think it is maybe an example of overconfidence and naivete. As someone who has worked on campaigns and follows politics, I thought the campaign had a pretty low chance of success because of the fundamentals (and asked about it at the time) and that other races would have been better to donate to (either state house races to build the bench or congressional candidates with better odds like Maxwell Frost, a local activist who ran for the open seat previously held by Val Demings, listed pandemic prevention as a priority, and won. Then again, Maxwell raised a ton of money, more than all the other candidates combined, so maybe he didn't need those funds as much as other candidates). Salinas was a popular, progressive, woman of color with local party support who already represented much of the district at the state level and helped draw the new one. So, it seemed pretty unlikely to me that she would lose to someone who had not lived in the state for years, did not have strong local connections, and had never run a campaign before, even with a massive money advantage. And from what I understand, the people in the district were ...
Not saying I disagree with this, but it may be worth noting that "democracy" as an alternative didn't exactly do great either -- Stuart Buck wrote this comment, and it got downvoted enough that he deleted it.
Indeed. I actually am inclined to agree that more democracy in distributing funds and making community decisions is safer overall and prevents bad tail risks, and I think Zoe Cremer's suggestions should be taken seriously, but let's remember that democracy in recent years has given us Modi, Bolsonaro, Trump, Duterte and Berlusconi as leaders of countries with millions of citizens, on the basis of millions of votes, and that Hitler did pretty well in early 1930s German elections. Democracy is not just "not infallible" but has led to plausibly bad decisions about who should lead countries (as one example) on many occasions. (That might be a bit politicized for some people, but I feel personally confident all those leaders were knowably bad.)
I think we EAs need to increasingly prioritize speaking up about concerns like the ones Habryka mentioned.
Even when positive in-group feelings, the fear of ostracism, and uncertainty/risk aversion internally influence one not to bring up these concerns, we should fight back against this urge, because the concerns, if true, will likely grow larger and larger until they blow up.
There is very high EV in course correction before the catastrophic failure point.
I’ll speak to question 6, since I am on the community health team, and in particular was hired in large part to work on community epistemics, but am only speaking to the work I’ve done rather than the whole team since I’m newish to the team. (Haven’t done tons of work on this yet, and my initial experiments and forays have been pretty varied, since the epistemics space is really large)
Tl;dr: I think this matters; in and of itself it hasn’t been the top thing on my list, but adjacent/related things have been high priority.
(Other CEA teams, including the online team (via the forum) and the groups and events teams, have thought about this as well.)
Whether people feel “able” to disagree might itself take some disambiguation - I tried to think a bunch about (1) the intellectual challenge of having an inside view in a world with tons of information, and how to make that easier, and (2) the emotional difficulty of believing in your own ideas, not falling prey to epistemic learned helplessness, noticing your own intuitions, etc.
When I thought about working on the latter at scale, I thought about:
- Modelling thinking out loud, what it looks like when people try to figure things out and show all the messiness, that people ...

I have slightly edited the post, just to add some clarifications I ought to have made originally.
Not every question I pose is related to SBF etc.; some are just questions I think the EA leadership at large should answer. I am sure there are rational responses to many of these questions, and insofar as they are interpreted as an "attack" I do apologise. Moreover, the "attack-lines" are also plausibly inconsistent, as some likely point towards less centralisation and some towards more.
Oh, you know, you could help me by giving me a little feedback on what you think the community would either find most interesting or most beneficial.
Here is a list of resource links that I am considering for the post:
Oh, well thank you for suggesting that my cringy ideas are worth conversation within the community! That's very kind of you. Those ideas of mine were already discussed here, at least by me, and with some exceptions, have been met with indifference or a disagreement checkmark. That's OK with me.
I was led here by a couple of Peter Singer's books and then by Galef's "Scout Mindset", by the way.
I have revised her model of Scout vs Soldier, in my own mind, to encompass a broader category and additional partitions outside her model. In particular, when exploring ...
In what sense does EA have something like a leadership?
There is no official overarching EA organisation. Strictly speaking, EA is just a collection of people who all individually do whatever they want. Some of these people have chosen to set up various orgs that do various things.
But in a less formal but still very real way, EA is very hierarchical. There is a lot of concentration of power.
- Some of this is based on status and trust. Some people and orgs have built up a reputation which grants them a lot of soft power within the EA network.
Thanks for this post - I think a lot of people have these questions and it's good to have common knowledge of that. I work on the community health team, and one of my areas is community epistemics, so I have a lot of thoughts about question 6 and plan to come back to this when things are a little less frenetic.
Did you receive the grant directly or as part of their regranting program?
EDIT: You know what, acylhalide, I got a little impatient in this reply. Sorry. Let me get to work, and do my best given your previous response. Thanks. :)
Hm, well, there's a range of temperature rises mentioned in IPCC reports, but you're discussing it as if there's just one. There was one goal: a rise of less than 1.5°C GAST this century, but that's no longer plausible.
So I guess explaining why that is so would be useful to you. When you say a different understanding of civilizational collapse, different from whose? Some scientists who helped create the IPCC ...
No, I was not being sarcastic, acylhalide. Thanks.
You're interested in climate change resources from me? OK, when I have the opportunity, providing an outline of such resources to the community could be a productive thing to do. Thanks again!