This is a special post for quick takes by Ben_West🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Adult film star Abella Danger apparently took a class on EA at the University of Miami, became convinced, and posted about EA to raise $10k for One for the World. She was Pornhub's most popular female performer in 2023 and has ~10M followers on Instagram. Her post has ~15k likes, and the comments seem mostly positive.

I think this might be the class that @Richard Y Chappell🔸 teaches?

Thanks Abella and kudos to whoever introduced her to EA!

It looks like she did a giving season fundraiser for Helen Keller International, which she credits to the EA class she took. Maybe we will see her at a future EAG!

3
akash 🔸
(Tangential but related) There is probably a strong case to be made for recruiting the help of EA sympathetic celebrities to promote effective giving, and maybe even raise funds. I am a bit hesitant about "cause promotion" by celebrities, but maybe some version of that idea is also defensible. Turns out, someone wrote about it on the Forum a few years ago, but I don't know how much subsequent discussion there has been on this topic since then.
2
Karthik Tadepalli
I follow a lot of YouTubers and streamers who run large-scale charitable events (example, example, example) and I've always thought about how great it would be to convince them to give the money to an effective charity.

EA in a World Where People Actually Listen to Us

I had considered calling the third wave of EA "EA in a World Where People Actually Listen to Us". 

Leopold's situational awareness memo has become a salient example of this for me. I used to sometimes think that arguments about whether we should avoid discussing the power of AI in order to avoid triggering an arms race were a bit silly and self important because obviously defense leaders aren't going to be listening to some random internet charity nerds and changing policy as a result.

Well, they are and they are. Let's hope it's for the better.

Seems plausible the impact of that single individual act is so negative that the aggregate impact of EA is negative.

I think people should reflect seriously upon this possibility and not fall prey to wishful thinking (let's hope speeding up the AI race and making it superpower-powered is the best intervention! it's better if everyone warning about this was wrong and Leopold is right!).

The broader story here is that EA prioritization methodology is really good for finding highly leveraged spots in the world, but there isn't a good methodology for figuring out what to do in such places, and there also isn't a robust pipeline for promoting virtues and virtuous actors to such places.

9
sapphire
I spent all day in tears when I read the congressional report. This is a nightmare. I was literally hoping to wake up from a bad dream. I really hope people don't suffer for our sins. How could we have done something so terrible. Starting an arms race and making literal war more likely.
8
Charlie_Guthmann
and there also isn't a robust pipeline for promoting virtues and virtuous actors to such places.

this ^

I'm not sure to what extent the Situational Awareness Memo or Leopold himself are representatives of 'EA'

On the pro side:

  • Leopold thinks AGI is coming soon, will be a big deal, and that solving the alignment problem is one of the world's most important priorities
  • He used to work at GPI & FTX, and formerly identified with EA
  • He almost certainly personally knows lots of EA people in the Bay

On the con side:

  • EA isn't just AI Safety (yet), so having short timelines/high importance on AI shouldn't be sufficient to make someone an EA?[1]
  • EA also shouldn't just refer to a specific subset of Bay culture (please), or at least we need some more labels to distinguish different parts of it in that case
  • Many EAs have disagreed with various parts of the memo, e.g. Gideon's well received post here
  • Since his time at those EA institutions, he has moved to OpenAI (mixed)[2] and now runs an AGI investment firm.
  • By self-identification, I'm not sure I've seen Leopold identify as an EA at all recently.

This again comes down to the nebulousness of what 'being an EA' means.[3] I have no doubts at all that, given what Leopold thinks is the way to have the most impact he'll be very effective at ach... (read more)

I think he is pretty clearly an EA given he used to help run the Future Fund, or at most an only very recently ex-EA. Having said that, it's not clear to me this means that "EAs" are at fault for everything he does. 

5
JWS 🔸
Yeah again I just think this depends on one's definition of EA, which is the point I was trying to make above. Many people have turned away from EA (its beliefs, institutions, and community) in the aftermath of the FTX collapse. Even Ben Todd seems to not be an EA by some definitions any more, be that via association or identification. Who is to say Leopold is any different, or has not gone further? What then is the use of calling him EA, or using his views to represent the 'Third Wave' of EA? I guess from my PoV what I'm saying is that I'm not sure there's much 'connective tissue' between Leopold and myself, so when people use phrases like "listen to us" or "How could we have done" I end up thinking "who the heck is we/us?"

In my post, I suggested that one possible future is that we stay at the "forefront of weirdness." Calculating moral weights, to use your example.

I could imagine though that the fact that our opinions might be read by someone with access to the nuclear codes changes how we do things.

I wish there were more debate about which of these futures is more desirable.

(This is what I was trying to get at with my original post. I'm not trying to make any strong claims about whether any individual person counts as "EA".)

9
MichaelDickens
I don't want to claim all EAs believe the same things, but if the congressional commission had listened to what you might call the "central" EA position, it would not be recommending an arms race because it would be much more concerned about misalignment risk. The overwhelming majority of EAs involved in AI safety seem to agree that arms races are bad and misalignment risk is the biggest concern (within AI safety). So if anything this is a problem of the commission not listening to EAs, or at least selectively listening to only the parts they want to hear.

In most cases this is a rumor-based thing, but I have heard that a substantial chunk of the OP-adjacent EA-policy space has been quite hawkish for many years, and at least from what I have heard, a bunch of key leaders "basically agreed with the China part of situational awareness".

Again, people should really take this with a double-dose of salt, I am personally at like 50/50 of this being true, and I would love people like lukeprog or Holden or Jason Matheny or others high up at RAND to clarify their positions here. I am not attached to what I believe, but I have heard these rumors from sources that didn't seem crazy (but also various things could have been lost in a game of telephone, and being very concerned about China doesn't result in endorsing a "Manhattan project to AGI", though the rumors that I have heard did sound like they would endorse that)

Less rumor-based, I also know that Dario has historically been very hawkish, and "needing to beat China" was one of the top justifications historically given for why Anthropic does capability research. I have heard this from many people, so feel more comfortable saying it with fewer disclaimers, but am still only like... (read more)

9
MichaelDickens
I looked thru the congressional commission report's list of testimonies for plausibly EA-adjacent people. The only EA-adjacent org I saw was CSET, which had two testimonies (1, 2). From a brief skim, neither one looked clearly pro- or anti-arms race. They seemed vaguely pro-arms race on vibes but I didn't see any claims that look like they were clearly encouraging an arms race—but like I said, I only briefly skimmed them, so I could have missed a lot.
6
Dicentra
This is inconsistent with my impressions and recollections. Most clearly, my sense is that CSET was (maybe still is, not sure) known for being very anti-escalatory towards China, and did substantial early research debunking hawkish views about AI progress in China, demonstrating it was less far along than was widely believed in DC (and that EAs were involved in this, because they thought it was true and important, because they thought current false fears in the greater natsec community were enhancing arms race risks) (and this was when Jason was leading CSET, and OP was supporting its founding). Some of the same people were also supportive of export controls, which are more ambiguous-sign here.
7
Habryka
The export controls seemed like a pretty central example of hawkishness towards China and a reasonable precursor to this report. The central motivation in all that I have written related to them was about beating China in AI capabilities development. Of course no one likes a symmetric arms race, but the question is did people favor the "quickly establish overwhelming dominance towards China by investing heavily in AI" or the "try to negotiate with China and not set an example of racing towards AGI" strategy. My sense is many people favored the former (though definitely not all, and I am not saying that there is anything like consensus, my sense is it's a quite divisive topic). To support your point, I have seen much writing from Helen Toner on trying to dispel hawkishness towards China, and have been grateful for that. Against your point, at the recent "AI Security Forum" in Vegas, many x-risk concerned people expressed very hawkish opinions.
3
Dicentra
Yeah re the export controls, I was trying to say "I think CSET was generally anti-escalatory, but in contrast, the effect of their export controls work was less so" (though I used the word "ambiguous" because my impression was that some relevant people saw a pro of that work that it also mostly didn't directly advance AI progress in the US, i.e. it set China back without necessarily bringing the US forward towards AGI). To use your terminology, my impression is some of those people were "trying to establish overwhelming dominance over China" but not by "investing heavily in AI". 
6
MichaelDickens
It looks to me like the online EA community, and the EAs I know IRL, have a fairly strong consensus that arms races are bad. Perhaps there's a divide in opinions with most self-identified EAs on one side, and policy people / company leaders on the other side—which in my view is unfortunate since the people holding the most power are also the most wrong. (Is there some systematic reason why this would be true? At least one part of it makes sense: people who start AGI companies must believe that building AGI is the right move. It could also be that power corrupts, or something.) So maybe I should say the congressional commission should've spent less time listening to EA policy people and more time reading the EA Forum. Which obviously was never going to happen but it would've been nice.

Slightly independent to the point Habryka is making, which may well also be true, my anecdotal impression is that the online EA community / EAs I know IRL were much bigger on 'we need to beat China' arguments 2-4 years ago. If so, simple lag can also be part of the story here. In particular I think it was the mainstream position just before ChatGPT was released, and partly as a result I doubt an 'overwhelming majority of EAs involved in AI safety' disagree with it even now.

Example from August 2022:

https://www.astralcodexten.com/p/why-not-slow-ai-progress

So maybe (the argument goes) we should take a cue from the environmental activists, and be hostile towards AI companies...

This is the most common question I get on AI safety posts: why isn’t the rationalist / EA / AI safety movement doing this more? It’s a great question, and it’s one that the movement asks itself a lot...

Still, most people aren’t doing this. Why not?

Later, talking about why attempting a regulatory approach to avoiding a race is futile:

The biggest problem is China. US regulations don’t affect China. China says that AI leadership is a cornerstone of their national security - both as a massive boon to their surveillan

... (read more)
5
Ben_West🔸
Huh, fwiw this is not my anecdotal experience. I would suggest that this is because I spend more time around doomers than you and doomers are very influenced by Yudkowsky's "don't fight over which monkey gets to eat the poison banana first" framing, but that seems contradicted by your example being ACX, who is also quite doomer-adjacent.
9
AGB 🔸
That sounds plausible. I do think of ACX as much more 'accelerationist' than the doomer circles, for lack of a better term. Here's a more recent post from October 2023 informing that impression; it probably does a better job than I can of adding nuance to Scott's position: https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate
4
MichaelDickens
Scott's last sentence seems to be claiming that avoiding an arms race is easier than solving alignment (and it would seem to follow from that that we shouldn't race). But I can see how a politician reading this article wouldn't see that implication.

Yep, my impression is that this is an opinion that people mostly adopted after spending a bunch of time in DC and engaging with governance stuff, and so is not something represented in the broader EA population.

My best explanation is that when working in governance, being pro-China is just very costly, and especially the combination of believing that AI will be very powerful and that there is no urgency to beat China to it seems very anti-memetic in DC, and so people working in the space started adopting those stances.

But I am not sure. There are also non-terrible arguments for beating China being really important (though they are mostly premised on alignment being relatively easy, which seems very wrong to me).

2
MichaelDickens
Not just alignment being easy, but alignment being easy with overwhelmingly high probability. It seems to me that pushing for an arms race is bad even if there's only a 5% chance that alignment is hard.
2
Habryka
I think most of those people believe that "having an AI aligned to 'China's values'" would be comparably bad to a catastrophic misalignment failure, and if you believe that, 5% is not sufficient, if you think there is a greater than 5% chance of China ending up with "aligned AI" instead.
3
MichaelDickens
I think that's not a reasonable position to hold but I don't know how to constructively argue against it in a short comment so I'll just register my disagreement. Like, presumably China's values include humans existing and having mostly good experiences.
3
Habryka
Yep, I agree with this, but it appears nevertheless a relatively prevalent opinion among many EAs working in AI policy.
4
Habryka
A somewhat relevant article that I discovered while researching this: "Longtermists Are Pushing a New Cold War With China" (Jacobin). The article seems quite biased to me, but I do think some of the basics here make sense and match with things I have heard (but also, some of it seems wrong).

Maybe instead of "where people actually listen to us" it's more like "EA in a world where people filter the most memetically fit of our ideas through their preconceived notions into something that only vaguely resembles what the median EA cares about but is importantly different from the world in which EA didn't exist."

4
MichaelDickens
On that framing, I agree that that's something that happens and that we should be able to anticipate will happen.
5
Charlie_Guthmann
Call me a hater, and believe me, I am, but maybe someone who went to university at 16 and clearly spent most of their time immersed in books is not the most socially developed. Maybe after they are implicated in a huge scandal that destroyed our movement's reputation we should gently nudge them to not go on popular podcasts and talk fantastically and almost giddily about how World War 3 is just around the corner. Especially when they are working in a financial capacity in which they would benefit from said war.

Many of the people we have let be in charge of our movement and speak on behalf of it don't know the first thing about optics or leadership or politics. I don't think Eliezer Yudkowsky could win a middle school class president race with a million dollars.

I know your point was specifically tailored toward optics and thinking carefully about what we say when we have a large platform, but I think looking back and forward, bad optics and a lack of realpolitik messaging are pretty obvious failure modes of a movement filled with chronically online young males who worship intelligence and research output above all else.

I'm not trying to sh*t on Leopold and I don't claim I was out here beating a drum about the risks of these specific papers, but yeah, I do think this is one symptom of a larger problem. I can barely think of anyone high up (publicly) in this movement who has risen via organizing.

(I think the issue with Leopold is somewhat precisely that he seems to be quite politically savvy in a way that seems likely to make him a deca-multi-millionaire and politically influential, possibly at the cost of all of humanity. I agree Eliezer is not the best presenter, but his error modes are clearly enormously different)

5
Charlie_Guthmann
I don't think I was claiming they have the exact same failure modes - do you want to point out where I did that? Rather, they both have failure modes that I would expect to happen as a result of selecting them to be talking heads on the basis of wits and research output. Also, I feel like you are implying Leopold is evil or something like that, and I don't agree, but maybe I'm misinterpreting. He seems like a smooth operator in some ways and certainly is quite different from Eliezer.

That being said, I showed my dad (who has become an oddly good litmus test for a lot of this stuff for me, as someone who is somewhat sympathetic to our movement but also a pretty normal 60-year-old man in a completely different headspace) the Dwarkesh episode, and he thought Leopold was very, very, very weird (and not because of his ideas). He kind of reminds me of Peter Thiel.

I'll completely admit I wasn't especially clear in my points, and that mostly reflects my own lack of clarity on the exact point I was trying to get across. I think I take back like 20% of what I said (basically to the extent I was making a very direct stab at what exactly that failure mode is) but mostly still stand by the original comment, which again I see as being approximately ~ "Selecting people to be the public figureheads of our movement on the basis of wits and research output is likely to be bad for us".
8
David Mathers🔸
The thing about Yudkowsky is that, yes, on the one hand, every time I read him, I think he surely must be coming across as super-weird and dodgy to "normal" people. But on the other hand, actually, it seems like he HAS done really well in getting people to take his ideas seriously? Sam Altman was trolling Yudkowsky on twitter a while back about how many of the people running/founding AGI labs had been inspired to do so by his work. He got invited to write on AI governance for TIME despite having no formal qualifications or significant scientific achievements whatsoever. I think if we actually look at his track record, he has done pretty well at convincing influential people to adopt what were once extremely fringe views, whilst also succeeding in being seen by the wider world as one of the most important proponents of those views, despite an almost complete lack of mainstream, legible credentials. 
1
Charlie_Guthmann
Hmm, I hear what you are saying, but that could easily be attributed to some mix of (1) he has really good/convincing ideas, (2) he seems to be a public representative for the EA/LW community to a journalist on the outside. And I'm responding to someone saying that we are in "phase 3" - that is to say, people in the public are listening to us - so I guess I'm not extremely concerned about him not being able to draw attention or convince people. I'm more just generally worried that people like him are not who we should be promoting to positions of power, even if those are de jure positions.
3
David Mathers🔸
Yeah, I'm not a Yudkowsky fan. But I think the fact that he mostly hasn't been a PR disaster is striking, surprising and not much remarked upon, including by people who are big fans.
1
Charlie_Guthmann
I guess in thinking about this I realize it's so hard to even know if someone is a "PR disaster" that I probably have just been confirming my biases. What makes you say that he hasn't been?
3
David Mathers🔸
Just  the stuff I already said about the success he seems to have had. It is also true that many people hate him and think he's ridiculous, but I think that makes him polarizing rather than disastrous. I suppose you could phrase it as "he was a disaster in some ways but a success in others" if you want to. 
3
David Mathers🔸
How do you know Leopold or anyone else actually influenced the commission's report? Not that that seems particularly unlikely to me, but is there any hard evidence? EDIT: I text-searched the report and he is not mentioned by name, although obviously that doesn't prove much on its own.
2
yanni kyriacos
Hi Ben! You might be interested to know I literally had a meeting with the Assistant Defence Minister in Australia about 10 months ago off the back of one email. I wrote about it here. AI Safety advocacy is IMO still extremely low-hanging fruit. My best theory is EAs don't want to do it because EAs are drawn to spreadsheets etc (it isn't their comparative advantage).
-1
Matrice Jacobine
It seems plausible that, much like Environmental Political Orthodoxy (reverence for simple rural living as expressed through localism, anti-nuclear sentiment, etc.) ultimately led the environmental movement to be harmful for its own professed goals, EA Political Orthodoxy (technocratic liberalism, "mistake theory", general disdain for social science) could (and maybe already has, with the creation of OpenAI) ultimately lead EA efforts on AI to be a net negative by its own standards.

Animal Justice Appreciation Note

Animal Justice et al. v. A.G. of Ontario (2024) was recently decided and struck down large portions of Ontario's ag-gag law. A blog post is here. The suit was partially funded by ACE, which presumably means that many of the people reading this deserve partial credit for donating to support it.

Thanks to Animal Justice (Andrea Gonsalves, Fredrick Schumann, Kaitlyn Mitchell, Scott Tinney), co-applicants Jessica Scott-Reid and Louise Jorgensen, and everyone who supported this work!

Marcus Daniell appreciation note

@Marcus Daniell, cofounder of High Impact Athletes, came back from knee surgery and is donating half of his prize money this year. He projects raising $100,000. Through a partnership with Momentum, people can pledge to donate for each point he gets; he has raised $28,000 through this so far. It's cool to see this, and I'm wishing him luck for his final year of professional play!

8
GraceAdams🔸
I was lucky enough to see Marcus play this year at the Australian Open, and have pledged alongside him! Marcus is so hardworking - in tennis alongside his work at High Impact Athletes! Go Marcus!!!
2
NickLaing
New Zealand let's go!

First in-ovo sexing in the US

Egg Innovations announced that they are "on track to adopt the technology in early 2025." Approximately 300 million male chicks are ground up alive in the US each year (since only female chicks are valuable) and in-ovo sexing would prevent this. 

UEP originally promised to eliminate male chick culling by 2020; needless to say, they didn't keep that commitment. But better late than never! 

Congrats to everyone working on this, including @Robert - Innovate Animal Ag, who founded an organization devoted to pushing this technology.[1]

  1. ^

    Egg Innovations says they can't disclose details about who they are working with for NDA reasons; if anyone has more information about who deserves credit for this, please comment!

6
Julia_Wise🔸
For others who were curious about what time difference this makes: looks like sex identification is possible at 9 days after the egg is laid, vs 21 days for the egg to hatch (plus an additional ~2 days between fertilization and the laying of the egg.)  Chicken embryonic development is really fast, with some stages measured in hours rather than days.
6
[anonymous]
I asked Google when chicken embryos start to feel pain and this was the first result (i.e. I didn't look hard and I didn't anchor on a figure):
3
Gina_Stuessy
How many chicks per year will Egg Innovations' change save? (The announcement link is blocked for me.)
4
[anonymous]
  This interview with the CEO suggests that Egg Innovations are just in the laying (not broiler) business and that each hen produces ~400 eggs over her lifetime. So this will save ~750,000 chicks a year?
2
Ben_West🔸
I don't think they say, unfortunately.
1
Nathan Young
Wow this is wonderful news.

Sam Bankman-Fried's trial is scheduled to start October 3, 2023, and Michael Lewis's book about FTX comes out the same day. My hope and expectation is that neither will be focused on EA,[1] but several people have recently asked me whether they should prepare anything, so I wanted to quickly record my thoughts.

The Forum feels like it’s in a better place to me than when FTX declared bankruptcy: the moderation team at the time was Lizka, Lorenzo, and myself, but it is now six people, and they’ve put in a number of processes to make it easier to deal with a sudden growth in the number of heated discussions. We have also made a number of design changes, notably to the community section

CEA has also improved our communications and legal processes so we can be more responsive to news, if we need to (though some of the constraints mentioned here are still applicable).

Nonetheless, I think there’s a decent chance that viewing the Forum, Twitter, or news media could become stressful for some people, and you may want to preemptively create a plan for engaging with that in a healthy way. 

  1. ^

    This market is thinly traded but is currently predicting that Le

... (read more)

My hope and expectation is that neither will be focused on EA

I'd be surprised [p<0.1] if EA was not a significant focus of the Michael Lewis book – but agree that it's unlikely to be the major topic. Many leaders at FTX and Alameda Research are closely linked to EA. SBF often, and publicly, said that effective altruism was a big reason for his actions. His connection to EA is interesting both for understanding his motivation and as a story-telling element. There are Manifold prediction markets on whether the book would mention 80,000 Hours (74%), Open Philanthropy (74%), and GiveWell (80%), but these markets aren't traded a lot and are not very informative.[1]

This video titled The Fake Genius: A $30 BILLION Fraud (2.8 million views, posted 3 weeks ago) might give a glimpse of how EA could be handled. The video touches on EA but isn't centred on it. It discusses the role EAs played in motivating SBF to do earning to give, and in starting Alameda Research and FTX. It also points out that, after the fallout at Alameda Research, 'higher-ups' at CEA were warned about SBF but supposedly ignored the warnings. Overall, the video is mainly interested in the mechanisms of how the suspected ... (read more)

Yeah, unfortunately I suspect that "he claimed to be an altruist doing good! As part of this weird framework/community!" is going to be substantial part of what makes this an interesting story for writers/media, and what makes it more interesting than "he was doing criminal things in crypto" (which I suspect is just not that interesting on its own at this point, even at such a large scale).

The Panorama episode briefly mentioned EA. Peter Singer spoke for a couple of minutes and EA was mainly viewed as a charity that would be missing out on money. There seemed to be a lot more interest in the internal discussions within FTX, crypto drama, the politicians, celebrities etc.

Maybe Panorama is an outlier but potentially EA is not that interesting to most people or seemingly too complicated to explain if you only have an hour.

Yeah I was interviewed for a podcast by a Canadian station on this topic (cos a Canadian hedge fund was very involved). IIRC they had 6 episodes but dropped the EA angle because it was too complex.

2
Sean_o_h
Good to know, thank you.

Agree with this and also with the point below that the EA angle is kind of too complicated to be super compelling for a broad audience. I thought this New Yorker piece's discussion (which involved EA a decent amount in a way I thought was quite fair -- https://www.newyorker.com/magazine/2023/10/02/inside-sam-bankman-frieds-family-bubble) might give a sense of magnitude (though the NYer audience is going to be more interested in these sorts of nuances than most).

The other factors I think are: 1. to what extent there are vivid new tidbits or revelations in Lewis's book that relate to EA and 2. the drama around Caroline Ellison and other witnesses at trial and the extent to which that is connected to EA; my guess is the drama around the cooperating witnesses will seem very interesting on a human level, though I don't necessarily think that will point towards the effective altruism community specifically.

5
quinn
Michael Lewis wouldn't do it as a gotcha/sneer, but this is a reason I'll be upset if Adam McKay ends up with the movie. 

Update: the court ruled SBF can't make reference to his philanthropy

5
Ben_West🔸
Yeah, "touches on EA but isn't centred on it" is my modal prediction for how major stories will go. I expect that more minor stories (e.g. the daily "here's what happened on day n of the trial" story) will usually not mention EA. But obviously it's hard to predict these things with much confidence.

The Forum feels like it’s in a better place to me than when FTX declared bankruptcy: the moderation team at the time was Lizka, Lorenzo, and myself, but it is now six people, and they’ve put in a number of processes to make it easier to deal with a sudden growth in the number of heated discussions. We have also made a number of design changes, notably to the community section

This is a huge relief to hear. I noticed some big positive differences, but I couldn't tell where from. Thank you.

If I understand this correctly, maybe not in the trial itself:

Accordingly, the defendant is precluded from referring to any alleged prior good acts by the defendant, including any charity or philanthropy, as indicative of his character or his guilt or innocence.

I guess technically the prosecution could still bring it up.

5
Ben_West🔸
I hadn't realized that, thanks for sharing
3
quinn
(I forgot to tell JP and Lizka at EAG in NY a few weeks ago, but now's as good a time as any): Can my profile karma total be two numbers, one for community and one for other stuff? I don't want a reader to think my actual work is valuable to people in proportion to my EA Forum karma; as far as I can tell, 3-5x of my karma is community-sourced compared to my object-level posts. People should look at my profile as "this guy procrastinates through PVP on social media like everyone else, he should work harder on things that matter".
7
Ben_West🔸
Yeah, I kind of agree that we should do something here; maybe the two-dimensional thing you mentioned, or maybe community karma should count less/not at all. Could you add a comment here?
7
Tristan Williams
Could see a number of potentially good solutions here, but think the "not at all" is possibly not the greatest idea. Creating a separate community karma could lead to a sort of system of social clout that may not be desirable, but I also think having no way to signal who has and has not in the past been a major contributor to the community as a topic would be more of a failure mode because I often use it to get a deeper sense of how to think about the claims in a given comment/post.
3
quinn
There would be some UX ways to make community clout feel lower status than the other clout, I agree with you that having community clout means more investment / should be preferred over a new account which for all you know is a driveby dunk/sneer after wandering in on twitter.  I'll cc this to my feature request in the proper thread. 
3
Michelle_Hutchinson
Thank you for the prompt.

Thoughts on the OpenAI Board Decisions

A couple months ago I remarked that Sam Bankman-Fried's trial was scheduled to start in October, and people should prepare for EA to be in the headlines. It turned out that his trial did not actually generate much press for EA, but a month later EA is again making news as a result of recent OpenAI board decisions.

A couple quick points:

  1. It is often the case that people's behavior is much more reasonable than what is presented in the media. It is also sometimes the case that the reality is even stupider than what is presented. We currently don't know what actually happened, and should hold multiple hypotheses simultaneously.[1]
  2. It's very hard to predict the outcome of media stories. Here are a few takes I've heard; we should consider that any of these could become the dominant narrative.
    1. Vinod Khosla (The Information): “OpenAI’s board members’ religion of ‘effective altruism’ and its misapplication could have set back the world’s path to the tremendous benefits of artificial intelligence”
    2. John Thornhill (Financial Times): One entrepreneur who is close to OpenAI says the board was “incredibly principled and brave” to confront Altman, even if it
... (read more)

I've commented before that FTX's collapse had little effect on the average person’s perception of EA

Just for the record, I think the evidence you cited there was shoddy, and I think we are seeing continued references to FTX in basically all coverage of the OpenAI situation, showing that it did clearly have a lasting effect on the perception of EA. 

Reputation is lazily-evaluated. Yes, if you ask a random person on the street what they think of you, they won't know, but when your decisions start influencing them, they will start getting informed, and we are seeing really very clear evidence that when people start getting informed, FTX is heavily influencing their opinion.

6
Ben_West🔸
Thanks! Could you share said evidence? The data sources I cited certainly have limitations, having access to more surveys etc. would be valuable.

The Wikipedia page on effective altruism mentions Bankman-Fried 11 times, and after/during the OpenAI story, it was edited to include a lot of criticism, ~half of which was written after FTX (e.g. it quotes this tweet https://twitter.com/sama/status/1593046526284410880 )

It's the first place I would go to if I wanted an independent take on "what's effective altruism?" I expect many others to do the same.

There are a lot of recent edits on that article by a single editor, apparently a former NY Times reporter (the edit log is public). From the edit summaries, those edits look rather unfriendly, and the article as a whole feels negatively slanted to me. So I'm not sure how much weight I'd give that article specifically.

Sure, here are the top hits for "Effective Altruism OpenAI" (I did no cherry-picking, this was the first search term that I came up with, and I am just going top to bottom). Each one mentions FTX in a way that pretty clearly matters for the overall article: 

... (read more)
4
Ben_West🔸
Ah yeah sorry, the claim of the post you criticized was not that FTX isn't mentioned in the press, but rather that those mentions don't seem to actually have impacted sentiment very much. I thought when you said "FTX is heavily influencing their opinion" you were referring to changes in sentiment, but possibly I misunderstood you – if you just mean "journalists mention it a lot" then I agree.
2
Habryka
You are also welcome to check Twitter mentions or do other analysis of people talking publicly about EA. I don't think this is a "journalist only" thing. I will take bets you will see a similar pattern.

I actually did that earlier, then realized I should clarify what you were trying to claim. I will copy the results in below, but even though they support the view that FTX was not a huge deal I want to disclaim that this methodology doesn't seem like it actually gets at the important thing.

But anyway, my original comment text:

As a convenience sample I searched twitter for "effective altruism". The first reference to FTX doesn't come until tweet 36, which is a link to this. Honestly it seems mostly like a standard anti-utilitarianism complaint; it feels like FTX isn't actually the crux. 

In contrast, I see 3 e/acc-type criticisms before that, two "I like EA but this AI stuff is too weird" things (including one retweeted by Yann LeCun??), two "EA is tech-bro/not diverse" complaints and one thing about Wytham Abbey.

And this (survey discussed/criticized here):

4
Habryka
I just tried to reproduce the Twitter datapoint. Here is the first tweet when I sort by most recent:  Most tweets are negative, mostly referring to the OpenAI thing. Among the top 10 I see three references to FTX. This continues to be quite remarkable, especially given that it's been more than a year, and these tweets are quite short. I don't know what search you did to find a different pattern. Maybe it was just random chance that I got many more than you did. 
2
Ben_West🔸
I used the default sort ("Top"). (No opinion on which is more useful; I don't use Twitter much.)
4
Habryka
Top was mostly showing me tweets from people that I follow, so my sense is it was filtered in a personalized way. I am not fully sure how it works, but it didn't seem the right type of filter.
4
Ben_West🔸
Yeah, makes sense. Although I just tried doing the "latest" sort and went through the top 40 tweets without seeing a reference to FTX/SBF. My guess is that this filter just (unsurprisingly) shows you whatever random thing people are talking about on twitter at the moment, and it seems like the random EA-related thing of today is this, which doesn't mention FTX. Probably you need some longitudinal data to have this be useful.
2
Nathan Young
I would guess too that these two events have made it much easier to reference EA in passing. eg I think this article wouldn't have been written 18 months ago. https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362 So I think there is a real jump of notoriety once the journalistic class knows who you are. And they now know who we are. "EA, the social movement involved in the FTX and OpenAI crises" is not a good epithet.
3
trevor1
Upvoted, I'm grateful for the sober analysis. I think this is an oversimplification. This effect is largely caused by competing messages; the modern internet optimizes information for memetic fitness e.g. by maximizing emotional intensity or persuasive effect, and people have so much routine exposure to stuff that leads their minds around in various directions that they get wary (or see having strong reactions to anything at all as immature, since a large portion of outcries on the internet are disproportionately from teenagers). This is the main reason why people take things with a grain of salt.

However, Overton windows can still undergo big and lasting shifts (this process could also be engineered deliberately long before generative AI emerged, e.g. via clown attacks which exploit social status instincts to consistently hijack any person's impressions of any targeted concept). The 80,000 Hours podcast with Cass Sunstein covered how Overton windows are dominated by vague impressions of what ideas are acceptable or unacceptable to talk about (note: this podcast was from 2019). This dynamic could plausibly strangle EA's access to fresh talent, and AI safety's access to mission-critical policy influence, for several years (which would be far too long).

On the flip side, johnswentworth actually had a pretty good take on this: that the human brain is instinctively predisposed to over-focus on the risk of their in-group becoming unpopular among everyone else:
4
Ben_West🔸
Thanks for the helpful comment – I had not seen John's dialogue and I think he is making a valid point. Fair point that the lack of impact might not be due to attention span but instead things like having competing messages.  In case you missed it: Angelina Li compiled some growth metrics about EA here; they seem to indicate that FTX's collapse did not "strangle" EA (though it probably wasn't good).
Ben_West🔸
Moderator Comment54
0
0

Possible Vote Brigading

We have received an influx of people creating accounts to cast votes and comments over the past week, and we are aware that people who feel strongly about human biodiversity sometimes vote brigade on sites where the topic is being discussed. Please be aware that voting and discussion about some topics may not be representative of the normal EA Forum user base.

Huh, seems like you should just revert those votes, or turn off voting for new accounts. Seems better than just having people be confused about vote totals.

And maybe add a visible "new account" flag -- I understand not wanting to cut off existing users creating throwaways, but some people are using screenshots of forum comments as evidence of what EAs in general think.

5
Larks
Arguably also beneficial if you think that we should typically make an extra effort to be tolerant of 'obvious' questions from new users.
2
Ben_West🔸
Thanks! Yeah, this is something we've considered, usually in the context of trying to make the Forum more welcoming to newcomers, but this is another reason to prioritize that feature.
1
Peter Wildeford
I agree.
9
Ben_West🔸
Yeah, I think we should probably go through and remove people who are obviously brigading (eg tons of votes in one hour and no other activity), but I'm hesitant to do too much more retroactively. I think it's possible that next time we have a discussion that has a passionate audience outside of EA we should restrict signups more, but that obviously has costs.
6
Habryka
When you purge user accounts you automatically revoke their votes. I wouldn't be very hesitant to do that. 
6
Ben_West🔸
How do you differentiate someone who is sincerely engaging and happens to have just created an account now from someone who just wants their viewpoint to seem more popular and isn't interested in truth seeking? Or are you saying we should just purge accounts that are clearly in the latter category, and accept that there will be some which are actually in the latter category but we can't distinguish from the former?
5
Habryka
I think being like "sorry, we've reverted votes from recently signed-up accounts because we can't distinguish them" seems fine. Also, in my experience abusive voting patterns are usually very obvious, where people show up and only vote on one specific comment or post, or on content of one specific user, or vote so fast that it seems impossible for them to have read the content they are voting on.
3
Bob Jacobs 🔸
How about: getting a lot of downvotes from new accounts doesn't decrease your voting-power and doesn't mean your comments won't show up on the frontpage? Half a dozen of my latest comments  have responded to HBDers. Since they get a notification it doesn't surprise me that those comments get immediate downvotes which hides them from the frontpage and subsequently means that they can easily decrease my voting-power on this forum (it went from 5 karma for a strong upvote to now 4 karma for a strong upvote). Giving brigaders the power to hide things from the frontpage and decide which people have more voting-power on this forum seems undesirable.

Note: I went through Bob's comments and think it likely they were brigaded to some extent. I didn't think they were in general excellent, but they certainly were not negative-karma comments. I strong-upvoted the ones that were below zero, which was about three or four.

I think it is valid to use the strong upvote as a means of countering brigades, at least where a moderator has confirmed there is reason to believe brigading is active on a topic. My position is limited to comments below zero, because the harmful effects of brigades suppressing good-faith comments from visibility and affirmatively penalizing good-faith users are particularly acute. Although there are mod-level solutions, Ben's comments suggest they may have some downsides and require time, so I feel a community corrective that doesn't require moderators to pull away from more important tasks has value.

I also think it is important for me to be transparent about what I did and accept the community's judgment. If the community feels that is an improper reason to strong upvote, I will revert my votes.

Edit: is to are

5
Peter Wildeford
I agree.
6
Larks
Could you set a minimum karma threshold (or account age or something) for your votes to count? I would expect even a low threshold like 10 would solve much of the problem.

Yeah, interesting. I think we have a lot of lurkers who never get any karma and I don't want to entirely exclude them, but maybe some combo like "10 karma or your account has to be at least one week old" would be good.
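A minimal sketch of how that combined rule might look in code (purely illustrative; the interface, field names, and thresholds below are assumptions, not the Forum's actual schema):

```typescript
// Hypothetical vote-eligibility check: a vote counts if the account has at
// least 10 karma OR is at least one week old.
interface ForumUser {
  karma: number;
  createdAt: Date;
}

const ONE_WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function voteCounts(user: ForumUser, now: Date = new Date()): boolean {
  const accountAgeMs = now.getTime() - user.createdAt.getTime();
  return user.karma >= 10 || accountAgeMs >= ONE_WEEK_MS;
}
```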

4
Peter Wildeford
Yeah I think that would be a really smart way to implement it.
4
pseudonym
Do the moderators think the effect of vote brigading reflect support from people who are pro-HBD or anti-HBD?
Ben_West🔸
Moderator Comment55
0
0

The Forum moderation team has been made aware that Kerry Vaughn published a tweet thread that, among other things, accuses a Forum user of doing things that violate our norms. Most importantly:

Where he crossed the line was his decision to dox people who worked at Leverage or affiliated organizations by researching the people who worked there and posting their names to the EA forum

The user in question said this information came from searching LinkedIn for people who had listed themselves as having worked at Leverage and related organizations. 

This is not "doxing" and it’s unclear to us why Kerry would use this term: for example, there was no attempt to connect anonymous and real names, which seems to be a key part of the definition of “doxing”. In any case, we do not consider this to be a violation of our norms.

At one point Forum moderators got a report that some of the information about these people was inaccurate. We tried to get in touch with the then-anonymous user, and when we were unable to, we redacted the names from the comment. Later, the user noticed the change and replaced the names. One of CEA’s staff asked the user to encode the names to allow those people mor... (read more)

-4
Cathleen
How I wish the EA Forum had responded

I've found that communicating feedback/corrections often works best when I write something that approximates what I would've wished the other person had originally written. Because of the need to sync more explicitly on a number of background facts and assumptions (and due to not having time for edits/revisions), my draft is longer than I think a moderator's comment would need to be, were the moderation team to be roughly on the same page about the situation. While I am the Cathleen being referenced, I have had minimal contact with Leverage 2.0 and the EA Forum moderation team, so I expect this draft to be imperfect in various ways, while still pointing at useful and important parts of reality.

Here I've made an attempt to rewrite what I wish Ben West had posted in response to Kerry's tweet thread:

The Forum moderation team has been made aware that Kerry Vaughn published a tweet thread that, among other things, accuses a Forum user of doing things that violate our norms. Most importantly:

We care a lot about ensuring that the EA Forum is a welcoming place where people are free to discuss important issues related to world improvement. While disagreement and criticism are an important part of that, we want to be careful not to allow for abuse to take place on our platform, and so we take such reports seriously. After reviewing the situation, we have compiled the following response (our full review is still in process but we wanted to share what we have so far while the issue is live):

While Leverage was not a topic that we had flagged as "sensitive" back in Sept 2019 when the then-anonymous user originally made his post, the subsequent discussion around the individuals and organizations who were part of the Leverage/Paradigm ecosystem prior to its dissolution in June 2019 has led it to be classified as a sensitive topic to which we apply more scrutiny and are more diligent about enforcing our norms. In reviewing

To share a brief thought, the above comment gives me a bad juju because it puts a contested perspective into a forceful and authoritative voice, while being long enough that one might implicitly forget that this is a hypothetical authority talking[1]. So it doesn't feel to me like a friendly conversational technique. I would have preferred it to be in the first person.

  1. ^

    Garcia Márquez has a similar but longer  thing going on in The Handsomest Drowned Man In The World, where everything after "if that magnificent man had lived in the village" is a hypothetical. 

8
Ben_West🔸
(fwiw I didn't mind the format and felt like this was Cathleen engaging in good faith.)
4
LarissaHeskethRowe
I would have so much respect for CEA if they had responded like this. 
-10
[anonymous]

Startups aren't good for learning

I fairly frequently have conversations with people who are excited about starting their own project and, within a few minutes, convince them that they would learn less starting a project than they would working for someone else. I think this is basically the only opinion I have where I can regularly convince EAs to change their mind in a few minutes of discussion and, since there is now renewed interest in starting EA projects, it seems worth trying to write down.

It's generally accepted that optimal learning environments have a few properties:

  • You are doing something that is just slightly too hard for you.
    • In startups, you do whatever needs to get done. This will often be things that are way too easy (answering a huge number of support requests) or way too hard (pitching a large company CEO on your product when you've never even sold bubblegum before).
    • Established companies, by contrast, put substantial effort into slotting people into roles that are approximately at their skill level (though you still usually need to put in proactive effort to learn things at an established company). 
  • Repeatedly practicing a skill in "chunks"
    • Similar to the last poin
... (read more)
7
Clifford
I think I agree with this. Two things that might make starting a startup a better learning opportunity than your alternative, in spite of it being a worse learning environment:

  1. You are undervalued by the job market (so you can get more opportunities to do cool things by starting your own thing)
  2. You work harder in your startup because you care about it more (so you get more productive hours of learning)
1
Dave Cortright 🔸
It depends on what you want to learn. At a startup, people will often get a lot more breadth of scope than they would otherwise in an established company. Yes, you might not have in-house mentors or seasoned pros to learn from, but these days motivated people can fill in the holes outside the org.
1
Yonatan Cale
It depends what you want to learn

As you said:

  • Founding a startup is a great way to learn how to found a startup.
  • Working as a backend engineer in some company is a great way to learn how to be a backend engineer in some company.

(I don't see why to break it up more than that)

Plant-based burgers now taste better than beef

The food sector has witnessed a surge in the production of plant-based meat alternatives that aim to mimic various attributes of traditional animal products; however, overall sensory appreciation remains low. This study employed open-ended questions, preference ranking, and an identification question to analyze sensory drivers and barriers to liking four burger patties, i.e., two plant-based (one referred to as pea protein burger and one referred to as animal-like protein burger), one hybrid meat-mushroom (75% meat and 25% mushrooms), and one 100% beef burger. Untrained participants (n=175) were randomly assigned to blind or informed conditions in a between-subject study. The main objective was to evaluate the impact of providing information about the animal/plant-based protein source/type, and to obtain product descriptors and liking/disliking levels from consumers. Results from the ranking tests for blind and informed treatments showed that the animal-like protein [Impossible] was the most preferred product, followed by the 100% beef burger. Moreover, in the blind condition, there was no significant difference in preferences between t

... (read more)

Interesting! Some thoughts:

  1. I wonder if the preparation was "fair", and I'd like to see replications with different beef burgers. Maybe they picked a bad beef burger?
  2. Who were the participants? E.g. students at a university, and so more liberal-leaning and already accepting of plant-based substitutes?
  3. Could participants reliably distinguish the beef burger and the animal-like plant-based burger in the blind condition?

(I couldn't get access to the paper.)

This Twitter thread points out that the beef burger was less heavily salted.

5
Linch
Thanks for the comment and the followup comments by you and Michael, Ben.

First, it's really cool that Impossible was preferred to beef burgers in a blind test! Even if the test is not completely fair! Impossible has been around for a while, and obviously they would've been pretty excited to do a blind taste test earlier if they thought they could win, which is evidence that the product has improved somewhat over the years.

I want to quickly add an interesting tidbit I learned from food science practitioners[1] a while back: blind taste tests are not necessarily representative of "real" consumer food preferences. By that, I mean I think most laymen who think about blind taste tests believe that there's a Platonic taste attribute that's captured well by blind taste tests (or captured except for some variance). So if Alice prefers A to B in a blind taste test, this means that Alice in some sense should like A more than B. And if she buys (at the same price) B instead of A at the supermarket, that means either she was tricked by good marketing, or she has idiosyncratic non-taste preferences that make her prefer B to A (eg positive associations with eating B with family or something).

I think this is false. Blind taste tests are just pretty artificial, and they do not necessarily reflect real world conditions where people eat food. This difference is large enough to sometimes systematically bias results (hence the worry about differentially salted Impossible burgers and beef burgers). People who regularly design taste tests usually know that there are easy ways that they can manipulate taste tests so people will prefer more X in a taste test, in ways that do not reflect more people wanting to buy more X in the real world. For example, I believe adding sugar regularly makes products more "tasty" in the sense of being more highly rated in a taste test. However, it is not in fact the case that adding high amounts of sugar automatically makes a product more commonly

New Netflix show ~doubles search traffic for plant-based eating

Image

h/t Lewis Bollard.

Reversing startup advice
In the spirit of reversing advice you read, some places where I would give the opposite advice of this thread:

Be less ambitious
I don't have a huge sample size here, but the founders I've spoken to since the "EA has a lot of money so you should be ambitious" era started often seem to be ambitious in unhelpful ways. Specifically: I think they often interpret this advice to mean something like "think about how you could hire as many people as possible" and then they waste a bunch of resources on some grandiose vision without having validated that a small-scale version of their solution works.

Founders who instead work by themselves or with one or two people to try to really deeply understand some need and make a product that solves that need seem way more successful to me.[1]

Think about failure
The "infinite beta" mentality seems quite important for founders to have. "I have a hypothesis, I will test it, if that fails I will pivot in this way" seems like a good frame, and I think it's endorsed by standard start up advice (e.g. lean startup).

  1. ^

    Of course, it's perfectly coherent to be ambitious about finding a really good value proposition. It's just that I worry t

... (read more)
4
Ben_West🔸
Two days after posting, SBF, who the thread lists as the prototypical example of someone who would never make a plan B, seems to have executed quite the plan B.

Longform's missing mood

If your content is viewed by 100,000 people, making it more concise by one second saves an aggregate of one day across your audience. Respecting your audience means working hard to make your content shorter.
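
A quick back-of-the-envelope check of that arithmetic (a minimal sketch, using only the figures in the claim above):

```python
# Aggregate audience time saved by trimming one second of content.
viewers = 100_000
seconds_saved_per_viewer = 1

total_seconds = viewers * seconds_saved_per_viewer
print(total_seconds / 3_600)    # ~27.8 hours
print(total_seconds / 86_400)   # ~1.16 days, i.e. roughly a day of aggregate audience time
```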

When the 80k podcast describes itself as "unusually in depth," I feel like there's a missing mood: maybe there's no way to communicate the ideas more concisely, but this is something we should be sad about, not a point of pride.[1]


  1. I'm unfairly picking on 80k; I'm not aware of any long-form content which has the mood that I claim is missing ↩︎

7
Charles He
This is a thoughtful post and a really good sentiment IMO! As you touched on, I'm not sure 80k is a good negative example; to me it seems like a positive example of how to handle this.

In addition to a tight intro, 80k has a great highlights section that, to me, looks like someone smart tried to solve this exact problem, balancing many considerations. This highlights section has good takeaways and is well organized with headers. I guess this is useful for the 90% of people who only browse the content for a minute.
2
80000_Hours
We also offer audio versions of those highlights for all episodes on the '80k After Hours' feed: https://80000hours.org/after-hours-podcast/
4
Ben_West🔸
Thanks for the pushback! I agree that 80k cares more about the use of their listeners' time than most podcasters, although this is a low bar. 80k is operating under a lot of constraints, and I'm honestly not sure if they are actually doing anything incorrectly here. Notably, the fancy people who they get on the podcast probably aren't willing to devote many hours to rephrasing things in the most concise way possible, which really constrains their options. I do still feel like there is a missing mood, though.
1
Yarrow
To me, economy of words is what’s important, rather than overall length. Long can be wonderful, as long as the writer uses all those words well. Short can be wonderful, if the writer uses enough words to convey their complete thoughts.
Ben_West🔸
Moderator Comment

Closing comments on posts
If you are the author of a post tagged "personal blog" (which notably includes all new Bostrom-related posts) and you would like to prevent new comments on your post, please email forum@centerforeffectivealtruism.org and we can disable them.

We know that some posters find the prospect of dealing with commenters so aversive that they choose not to post at all; this seems worse to us than posting with comments turned off.

Democratizing risk post update
Earlier this week, a post was published criticizing democratizing risk. This post was deleted by the (anonymous) author. The forum moderation team did not ask them to delete it, nor are we aware of their reasons for doing so.
We are investigating some likely Forum policy violations, however, and will clarify the situation as soon as possible.

2
Lizka
See the updates here. 

EA Three Comma Club

I'm interested in EA organizations that can plausibly be said to have improved the lives of over a billion individuals. Ones I'm currently aware of:

  1. Shrimp Welfare Project – they provide this Guesstimate, which has a mean estimate of 1.2B shrimps per year affected by welfare improvements that they have pushed
  2. Aquatic Life Institute – they provide this spreadsheet, though I agree with Bella that it's not clear where some of the numbers are coming from.

Are there any others?

9
Habryka
This is a nitpick, but somehow someone "being an individual" reads to me as implying a level of consciousness that seems a stretch for shrimps. But IDK, seems like a non-crazy choice under some worldviews.
1
Ben_West🔸
That's fair. I personally like that this forces people to come to terms with the fact that interventions targeted at small animals are way more scalable than those targeted at larger ones. People might decide on some moral weights which cancel out the scale of small animal work, but that's a nontrivial philosophical assumption, and I like prompting people to think about whether it's actually reasonable.  
5
Habryka
I think "animals that have more neurons or are more complex are morally more important" is not a "nontrivial philosophical assumption".  It indeed strikes me as a quite trivial philosophical assumption the denial of which would I think seem absurd to almost anyone considering it. Maybe one can argue the effect is offset by the sheer number, but I think you will find almost no one on the planet who would argue that these things do not matter. 
3
Ben_West🔸
On the contrary, approximately everyone denies this! Approximately 0% of Americans think that humans with more neurons than other humans have more moral value, for example.[1]

  1. ^ Citation needed, but I would be pretty surprised if this were false. Would love to hear contrary evidence though!
7
Habryka
Come on, you know you are using a hilariously unrepresentative datapoint here. Within humans, variance in neuron count explains only a small fraction of the variance in experience, and we also have strong societal norms that push people's maps toward pretending differences like this don't matter.
-3
Ben_West🔸
Unrepresentative of what? At least in my university ethics courses we spent way more time arguing about the rights of anencephalic children or human fetuses than insects. (And I would guess that neuron count explains a large fraction of the variance in experience between adult and fetal humans, for example.) In any case: I think most people's moral intuitions are terrible and you shouldn't learn a ton from the fact that people disagree with you. But as a purely descriptive matter, there are plenty of people who disagree with you – so much so that reading their arguments is a standard part of bioethics 101 in the US.
4
Habryka
It's unrepresentative of the degree to which people believe that correlates like neuron count, brain size, and behavioral complexity are indicators of moral relevance across species (which is the question at hand here).
2
calebp
If the funders get a nontrivial portion of the impact for early-stage projects, then I think the AWF (including its donors) very plausibly qualifies.
4
Ben_West🔸
Yeah, I am not sure how to treat meta. In addition to funders, Charity Entrepreneurship probably gets substantial credit for SWP, etc.
Ben_West🔸
Moderator Comment

We are banning stone and their alternate account for one month for messaging users and accusing others of being sock puppets, even after the moderation team asked them to stop. If you believe that someone has violated Forum norms such as creating sockpuppet accounts, please contact the moderators.

@lukeprog's investigation into cryonics and molecular nanotechnology (MNT) seems like it may have relevant lessons for the nascent attempts to build a mass movement around AI safety:

First, early advocates of cryonics and MNT focused on writings and media aimed at a broad popular audience, before they did much technical, scientific work. These advocates successfully garnered substantial media attention, and this seems to have irritated the most relevant established scientific communities (cryobiology and chemistry, respectively), both because many

... (read more)

Working for/with people who are good at those skills seems like a pretty good bet to me.

E.g. "knowing how to attract people to work with you" – if person A has a manager who was really good at attracting people to work with them, and their manager is interested in mentoring, and person B is just learning how to attract people to work with them from scratch at their own startup, I would give very good odds that person A will learn faster.

2
Charles He
Can you give some advice about attracting good people to work with you, or share any writeups you like?

An EA Limerick

(Lacey told me this was not good enough to actually submit to the writing contest, so publishing it as a short form.)

An AI sat boxed in a room
Said Eliezer: "This surely spells doom!
With self-improvement recursive,
And methods subversive
It will just simply go 'foom'."

1
E Vasquez
Nice!

I have recently been wondering what my expected earnings would be if I started another company. I looked back at the old 80k blog post arguing that there is some skill component to entrepreneurship, and noticed that, while serial entrepreneurs do have a significantly higher chance of a successful exit on their second venture, they raise their first rounds at substantially lower valuations. (Table 4 here.)

It feels so obvious to me that someone who's started a successful company in the past will be more likely to start one in the future, and I continue to b... (read more)

7
Lorenzo Buonanno🔸
Wild guesses as someone who knows very little about this: I wonder if it's because people have sublinear returns on wealth, so their second company would be more mission-driven and less optimized for making money. Also, there might be some selection bias in who needs to raise money vs. being self-funded. But if I had to bet, I would say that it's mostly noise, and there's not enough data to have a strong prior.

Person-affecting longtermism

This post points out that brain preservation (cryonics) is potentially quite cheap on a $/QALY basis, because people who are reanimated could live for a very long time with very high quality of life.
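
As a minimal sketch of that $/QALY logic (all of the numbers below are my own illustrative assumptions, not figures from the linked post):

```python
# Illustrative back-of-the-envelope $/QALY estimate for brain preservation.
# Every input here is a placeholder assumption, not a figure from the linked post.
cost_usd = 80_000        # assumed total preservation cost
p_reanimation = 0.01     # assumed probability that reanimation ever works
years_of_life = 1_000    # assumed additional lifespan if reanimated
quality = 1.0            # assumed quality-of-life weight per year

expected_qalys = p_reanimation * years_of_life * quality   # 10 QALYs
cost_per_qaly = cost_usd / expected_qalys                  # $8,000 per QALY
print(f"{expected_qalys:.0f} expected QALYs, ${cost_per_qaly:,.0f} per QALY")
```

The bottom line is obviously dominated by the assumed reanimation probability and lifespan, so this is only meant to show the shape of the argument.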

It seems reasonable to assume that reanimated people would funge against future persons, so I'm not sure if this is persuasive for those who don't adopt person affecting views, but for those who do, it's plausibly very cost-effective.

This is interesting because I don't hear much about person affecting longtermist causes.

Yeah definitely. I don't want to claim that learning is impossible at a startup – clearly it's possible – just that, all else equal, learning usually happens faster at existing companies.

Thanks! I'm not sure I fully understand your comment – are you implying that the skills you mention are easier to learn in a startup?

Unsurprisingly, I disagree with that view :)
