titotal

Computational Physicist
6766 karma

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments
555

So I suspect there are plenty of physicists who would politely disagree that it’s not possible to really understand quantum mechanics. Sure, it might take them a few decades of dedicated work in theoretical physics and a certain amount of philosophical sophistication, but there surely are physicists out there who (justifiably) feel like they grok quantum mechanics both technically and philosophically, and who feel deeply satisfied with the frameworks they’ve adopted. Carlo Rovelli (proponent of the relational interpretation) and Sean Carroll (proponent of the many-worlds interpretation) might be two such people.

 

Sorry to derail, but I'm a physicist in a related field who's been reading up on this, and I'm not sure I agree with this characterization. 

The issue with quantum physics is that it's not that hard to "grok" the recipe for actually making quantum predictions within the realms we can reasonably test. It's a simple two-step formula of evolving the wavefunction and then "collapsing" it, and you could probably do it in an afternoon for a simple 1D system. All the practical difficulty comes from mathematically working with more complex systems and solving the equations efficiently.
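(If you're curious what that afternoon exercise looks like, here is a minimal sketch in Python/NumPy: split-step Fourier evolution of a Gaussian wavepacket, followed by a Born-rule position measurement. The harmonic potential, grid, and packet parameters are arbitrary choices of mine, nothing canonical.)

```python
import numpy as np

hbar, m = 1.0, 1.0                       # natural units
N, L = 1024, 40.0                        # grid points, box size
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)  # angular wavenumbers for the FFT grid
dt, steps = 0.01, 500

V = 0.5 * x**2                                       # harmonic potential (arbitrary choice)
psi = np.exp(-(x - 3.0)**2) * np.exp(1j * 2.0 * x)   # displaced Gaussian wavepacket
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalise

# Step 1: unitary evolution (split-step Fourier): half a potential "kick" in
# position space, a full kinetic step in momentum space, then another half-kick.
half_kick = np.exp(-0.5j * V * dt / hbar)
kinetic = np.exp(-0.5j * hbar * k**2 * dt / m)
for _ in range(steps):
    psi = half_kick * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = half_kick * psi

# Step 2: "collapse": a position measurement returns x with probability |psi|^2 dx.
prob = np.abs(psi)**2 * dx
prob /= prob.sum()  # guard against floating-point drift
print(f"measured position: {np.random.choice(x, p=prob):.3f}")
```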

The interpretations controversy comes from asking why the recipe works, a question almost all quantum physicists avoid, because there is as yet no way to distinguish different interpretations experimentally (and also the whole thing is incompatible with general relativity anyway). Basically every interpretation requires biting some philosophical bullet that other people think is completely insane.

I very much doubt that Carroll is "deeply satisfied" with MWI, although he does think it's probably true. MWI creates a ton of philosophical problems about identical clones, identity, and probability; Carroll has made attempts to address these, but IMO the solutions are rather weak.

I haven't read up much on the consciousness debate, but it seems like it could end up in a similar place: everybody agreeing on the experimentally observable results, but unable to agree on what they mean. 

The public already had a negative attitude towards the tech sector before the AI buzz: in 2021, 45% of Americans had a somewhat or very negative view of tech companies.

I doubt the prevalence of AI is making people more positive towards the sector, given all the negative publicity over plagiarism, job loss, and so on. So I would guess the public already dislikes AI companies (even if they use their products), and this will probably increase.


I want to make my prediction about the short-term future of AI, partially sparked by this entertaining video about the nonsensical AI claims made by the Zoom CEO. I am not an expert on any of the following, of course; I'm mostly writing for fun and for future vindication.

The AI space seems to be drowning in unjustified hype, with very few LLM projects having a path to consistent profitability, and applications that are severely limited by the problem of hallucinations and the general fact that LLMs are poor at general reasoning (compared to humans). It seems like LLM progress is slowing down as they run out of public data and resource demands become too high. I predict GPT-5, if it is released, will be impressive to people in the AI space, but it will still hallucinate, will still be limited in generalisation ability, and will not be AGI, and the average Joe will not much notice the difference. Generative AI will be big business and play a role in society and people's lives, but in the next decade it will be much less transformative than the introduction of the internet or social media.

I expect that sometime in the next decade it will be widely agreed that AI progress has stalled, that most of the current wave of AI bandwagon-jumpers will be quietly ignored or shelved, and that the current wave of LLM hype might look like a financial bubble that burst (à la the dotcom bubble, but not as big).

Both AI doomers and accelerationists will come out looking silly, but both will still argue that we are only an algorithmic improvement away from godlike AGI. Both movements will still be obscure Silicon Valley things that the average Joe only vaguely knows about.

I think posts like this exhibit the same thought-terminating cancel-culture behaviour that you are supposedly complaining about, in a way that is often inaccurate or uncharitable.

For example, take the mention of Scott Alexander:

It reports, for example, that Scott Alexander attended the conference, and links to the dishonest New York Times smear piece criticizing Scott, as well as a similar hitpiece calling Robin Hanson creepy. 

Now, compare this to the actual text of the article:

Prediction markets are a long-held enthusiasm in the EA and rationalism subcultures, and billed guests included personalities like Scott Siskind, AKA Scott Alexander, founder of Slate Star Codex; misogynistic George Mason University economist Robin Hanson; and Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute (Miri).

Billed speakers from the broader tech world included the Substack co-founder Chris Best and Ben Mann, co-founder of AI startup Anthropic.

Now, I get the complaint about the treatment of Robin Hanson here, and I feel that "accused of misogyny" would be more appropriate (outside of an op-ed). But with regards to Scott Alexander, there was literally no judgement call included.

When it comes to the NYT article, very few people outside this sphere know who he is. Linking to an article about him in one of the most well-known newspapers in the world does not seem like a major crime! People linking to articles you don't like is not cancel culture. Or if it is, then I guess I'm pro cancel culture, because the word has lost all meaning.

It feels like you want to retreat into a tiny, insular bubble where people can freely be horribly unpleasant to each other without receiving any criticism at all from the outside world. And I'm happy for those bubbles to exist, but I have no obligation to host your bubble or hide out there with you. 

Imagine I go to a conference, and a guy poops himself deliberately on stage as performance art. It smells a lot and is very unpleasant and I have a sensitive nose.

I announce, publicly, that "I don't like it when people deliberately poop themselves on stage. If other places have deliberate pants-pooping, I won't go to them".

I am (1) publicly stopping going, (2) because of who they associate with (pants-poopers), and (3) implying I'll do the same to other people who associate with the same group (pants-poopers).

Ergo, according to your logic, I am boycotting, encouraging others to boycott, and "trying to control who people can hang out with", even if, y'know, I just don't want to go to conferences where I have to smell poop.

I have freedom of association, as does everyone else. I don't like pants-shitters, and I don't like scientific racists (who are on about the same level of odiousness), and I'm free to not host them or hang around them if I want to.

I recognise that a lot of criticism is bad, and I have written a long post on why I think that is. But this is going too far in the other direction.

Spend enough time listening to the criticisms of effective altruism and it becomes clear that, aside from those arguing for small tweaks at the margins, they all stem from either a) people being very dogmatic and having a worldview that’s strangely incompatible with doing good things (if, for instance, they don’t help the communist revolution); b) people wanting an excuse to do nothing in the face of extreme suffering; or c) people disliking effective altruists and so coming up with some half-hearted excuse for why EA is really something-something colonialism.

All of them? You think literally every person who is not on board with the effective altruism movement objects for one of these three reasons?

EA, as a movement, is minuscule and highly homogeneous. Like any group, it will be wrong about a lot of things. I think sentiments like this, dismissing every person who is not on board with the EA movement as some kind of crazy SJW, are epistemological suicide.

Look, I'm a fan of malaria nets and animal-welfare EA. I have donated plenty to malaria nets myself. But that is not the entire movement. You can't just isolate one part of it and ignore the whole "billion-dollar fraud" thing, the abuses of power, the mini-cults, the sexism/racism controversies. Or its part in building up OpenAI and starting the AI arms race, with all the harms they have brought.

EA is seeking power and influence, and wants to have a large effect on the future of humanity. People are allowed to be concerned about that. 

Trying to cancel folks because they spoke at an event but another speaker said a bad thing 15 years ago—that's an absurd level of guilt by association.

This is a very uncharitable, bordering on dishonest, interpretation of the critics of this event.

Like, even if you're talking about the Guardian article, which definitely has an anti-EA stance, I would describe their main "cancellation" (not a fan of how this word is used) targets as Lightcone and Manifest. The charge is that Lightcone hosted a conference filled with racist speakers at the Lighthaven campus, and that Manifest invited said speakers to the conference.

I don't see them cancelling, say, Nate Silver, who fits your description of "spoke at the event but another speaker said a bad thing 15 years ago".

Also, "said a bad thing 15 years ago" is an absurd twisting of the accusations. Hanania said some really, really racist things under a pseudonym up until 2012 (12 years ago, not 15), which he has apologised for, but even the OP admits that he still says "distasteful" things today on Twitter, and I personally think he's still pretty racist. And most of the other controversial speakers have never apologised for anything, and plenty of the things they said were recent, like the comments of Brian Chau.


You say you had 57 speakers (or I guess more that weren't featured?). An attendee estimates that 8 speakers at LessOnline and Manifest had scientific racism controversies (with 2 more debatably adjacent). Obviously this isn't an exact estimate, but it looks like something on the order of 5-10% of the speakers had scientific racism ties.

What percentage of speakers were African American (or African anything else)? I did not see any among the 30 with pictures on the site, so I'd guess something on the order of 0-3%.

Do you see a problem with a conference that has something like two or three times as many scientific-racist speakers as it does black speakers?

These speakers are not a representative slice of society. Scientific racists are much, much rarer, and black people are much, much more common. If your goal is a free exchange of ideas, the ideas you are receiving here are vastly skewed in one direction.

The actual effect of this type of speaker list is to push out anti-racists, and encourage more people sympathetic to scientific racism to join your community. I think this is bad!

and the highly controversial rationalist Michael Vassar

Was Vassar a speaker or just an attendee?

In addition to the cult stuff you mentioned, when the Time article on sexual harassment in rationalist communities came out, many responses to the article claimed Vassar had been accused of multiple instances of sexual harassment or assault and had been banned from multiple communities. I got the impression he was no longer around, and am disturbed that he would be allowed into such a conference.

Edit: see the edit in the OP; Vassar did not actually attend, but apparently he could have if he wanted to. I would advise everyone not to let this guy attend your conferences.

You've caught me stuck in bed, and I'm probably the most EA-critical person that regularly posts here, so I'll take a stab at responding point by point to your list:

  1. It’s good and virtuous to be beneficent and want to help others, for example by taking the Giving What We Can 10% pledge.
1. Agree.
  2. It’s good and virtuous to want to help others effectively: to help more rather than less with one’s efforts.

2. Agree.

  3. We have the potential to do a lot of good in the face of severe global problems (including global poverty, factory-farmed animal welfare, and protecting against global catastrophic risks such as future pandemics).

3. Agree on global poverty and animal welfare, but I think it might be difficult to do "a lot of good" in some catastrophic risk areas. 

  4. In all these areas, it is worth making deliberate, informed efforts to act effectively. Better targeting our efforts may make even more of a difference than the initial decision to help at all.

4. Agreed, although I should note that better targeting can have diminishing returns, especially when a problem is speculative and not well understood.

  5. In all these areas, we can find interventions that we can reasonably be confident are very positive in expectation. (One can never be so confident of actual outcomes in any given instance, but being robustly positive in prospect is what’s decision-relevant.)

5. Agreed for global poverty and animal welfare, but I'm mixed on this for speculative causes like AI risk, where there's a decent chance that efforts could backfire and make things worse, and there's no real way to tell until after the fact.

  6. Beneficent efforts can be expected to prove (much) more effective if guided by careful, in-depth empirical research. Quantitative tools and evidence, used wisely, can help us to do more good.

6. Agreed. Unfortunately, EA often fails to live up to this idea.

  7. So it’s good and virtuous to use quantitative tools and evidence wisely.

7. Agreed, but see above.

  8. GiveWell does incredibly careful, in-depth empirical research evaluating promising-seeming global charities, using quantitative tools and evidence wisely.

8. Agreed, I like GiveWell in general.

  9. So it’s good and virtuous to be guided by GiveWell (or comparably high-quality evaluators) rather than less-effective alternatives like choosing charities based on locality, personal passion, or gut feelings.

9. Agreed, with regards to the areas GiveWell specialises in.

  10. There’s no good reason to think that GiveWell’s top charities are net harmful.[1]

10. I think the chance that GiveWell's top charities are net good is very high, but not 100%. See mosquito-net fishing for a possible pitfall.

  11. But even if you’re the world’s most extreme aid skeptic, it’s clearly good and virtuous to voluntarily redistribute your own wealth to some of the world’s poorest people via GiveDirectly. (And again: more good and virtuous than typical alternatives.)

11. Agreed.

  12. Many are repelled by how “hands-off” effective philanthropy is compared to (e.g.) local volunteering. But it’s good and virtuous to care more about saving and improving lives than about being hands on. To prioritize the latter over the former would be morally self-indulgent.

12. Agreed, but sometimes being hands-on can help with improving lives. For example, being hands-on can allow one to more easily receive feedback, understand overlooked problems with an intervention, and ensure it goes to the right place. I don't think voluntourism is good at this, but I would like to see support for more grassroots projects by people actually from impoverished communities.

  13. Hits-based giving is a good idea. A portfolio of long shots can collectively be likely to do more good than putting all your resources into lower-expected-value “sure things”. In such cases, this is worth doing.

13. I agree in principle, but disagree in practice, given that EA's "hits-based giving" can be pretty bad. The effectiveness of hits-based giving very much depends on how much each miss costs and the likely effectiveness of a hit. I don't think the $100,000 grant for a failed video game was a good idea, nor the $28,000 to print out Harry Potter fanfiction that was free online anyway.

  14. Even in one-off cases, it is often better and more virtuous to accept some risk of inefficacy in exchange for a reasonable shot at proportionately greater positive impact. (But reasonable people can disagree about which trade-offs of this sort are worth it.)

14. This is so broad as to be trivially true, but in practice I often disagree with the judgements here. 

  15. The above point encompasses much relating to politics and “systemic change”, in addition to longtermist long-shots. It’s very possible for well-targeted efforts in these areas to be even better in expectation than traditional philanthropy—just note that this potential impact comes at the cost of both (i) far greater uncertainty, contestability, and potential for bias; and often (ii) potential for immense harm if you get it wrong.

15. Generally agree. 

  16. Anti-capitalist critics of effective altruism are absurdly overconfident about the value of their preferred political interventions. Many objections to speculative longtermism apply at least as strongly to speculative politics.

16. "Anti-capitalist" is a pretty broad tent. I agree that some people who adopt that label are dumb and naive, but others have pretty good ideas. I think it would be really dumb if capitalism were still the dominant system 1000 years from now, and there are political interventions that can be predicted to reliably help people. I think "overthrow the government for communism" gets the side-eye; "universal healthcare" does not.

  17. In general, I don’t think that doing good through one’s advocacy should be treated as a substitute for “putting one’s money where one’s mouth is”. It strikes me as overly convenient, and potentially morally corrupt, when I hear people (whether political advocates or longtermists) excusing not making any personal financial sacrifices to improve the world, when we know we can do so much. But I’m completely open to judging political donations (when epistemically justified) as constituting “effective philanthropy”—I don’t think we should put narrow constraints on the latter concept, or limit it to traditional charities.

Some people are poor and cannot contribute much without kneecapping themselves. I don't think those people are useless, and I think for a lot of people political action is a rational choice for how to effectively help. Similarly, some people are very good at political action, but not so good at making large amounts of money, and they should do the former, not the latter. 

  18. Decision theory provides useful tools (in particular, the concept of expected value) for thinking about these trade-offs between certainty and potential impact.

I agree it provides useful tools. But if you take tools like expected value too seriously, you end up doing insane things (see SBF). In general, EA is way too willing to swallow the math even when it gives bad results.
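To illustrate with a toy example of my own (hypothetical numbers, nothing to do with any real trade): a bet can have positive expected value per round, so that naive EV maximisation says to go all-in every time, while being nearly guaranteed to bankrupt anyone who actually follows that policy.

```python
import random

# Hypothetical bet: triple your stake with probability 0.4, lose it otherwise.
# EV per round is 0.4 * 3 = 1.2x, so a naive EV-maximiser goes all-in every time.
def all_in_bettor(rounds: int, p_win: float = 0.4, payout: float = 3.0) -> float:
    wealth = 1.0
    for _ in range(rounds):
        if random.random() < p_win:
            wealth *= payout
        else:
            return 0.0  # one loss wipes out an all-in bettor
    return wealth

trials = 100_000
results = [all_in_bettor(rounds=10) for _ in range(trials)]
mean = sum(results) / trials
ruined = sum(r == 0.0 for r in results) / trials
print(f"average wealth: {mean:.1f}x")    # near the theoretical 1.2**10 = 6.2x,
                                         # carried entirely by ~0.01% of trials
print(f"fraction ruined: {ruined:.2%}")  # ~99.99% lose everything
```

The average looks great on paper, but almost everyone ends up at zero; that is roughly the failure mode I'm gesturing at.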

  19. It would be very bad for humanity to go extinct. We should take reasonable precautions to try to reduce the risk of this.

Agreed, depending on what you mean by "reasonable". 

  20. Ethical cosmopolitanism is correct: It’s better and more virtuous for one’s sympathy to extend to a broader moral circle (including distant strangers) than to be narrowly limited. Entering your field of sight does not make someone matter more.

Agreed, with the caveat that we are talking about beings that currently exist or have a high probability of existing in the future.

  21. Insofar as one’s natural sympathy falls short, it’s better and more virtuous to at least be “continent” (as Aristotle would say) and allow one’s reason to set one on the path that the fully virtuous agent would follow from apt feelings.

The term "fully virtuous agent" raises my eyebrows. I don't think that's a thing that can actually exist. 

  22. Since we can do so much good via effective donations, we have—in principle—excellent moral reason to want to make more money (via permissible means) in order to give it away to these good causes.

Agreed, with emphasis on the "permissible means". 

  23. Many individuals have in fact successfully pursued this path. (So Rousseauian predictions of inevitable corruption seem misguided.)

It looks like it, although of course this could be negated if they got their fortunes from more harmful than average means. I don't see evidence that this is the case for these examples. 

  24. Someone who shares all my above beliefs is likely to do more good as a result. (For example, they are likely to donate more to effective charities, which is indeed a good thing to do.)

Agreed.

  25. When the stakes are high, there are no “safe” options. For example, discouraging someone from earning to give, when they would have otherwise given $50k per year to GiveWell’s top charities, would make you causally responsible for approximately ten deaths every year. That’s really bad! You should only cause this clear harm if you have good grounds for believing that the alternative would be even worse. (If you do have good grounds for thinking this, then of course EA principles support your criticism.)

Agreed, although I'll note that from my perspective, persuading an EAer to donate to AI x-risk instead of GiveWell will have a similar effect, and should be subjected to the same level of scrutiny.
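(For reference, I assume the "approximately ten deaths" figure comes from GiveWell's oft-cited rough ballpark of about $5,000 per life saved: $50,000 / $5,000 ≈ 10 lives per year.)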

  26. Most public critics of effective altruism display reckless disregard for these predictable costs of discouraging acts of effective altruism. (They don’t, for example, provide evidence to think that alternative acts would do more good for the world.) They are either deliberately or negligently making the world worse.

Agreed for some critiques of GiveWell/AMF in particular, like the recent Time article. However, I don't think this applies to critiques of AI x-risk, because I don't think AI x-risk charities are effective. If criticism turns people away and they donate to Oxfam or something instead, that is a net good.

  27. Deliberately or negligently making the world worse is vicious, bad, and wrong.

Agreed.

  28. Most (all?) of us are not as effectively beneficent as would be morally ideal.

Agreed

  29. Our moral motivations are very shaped by social norms and expectations—by community and culture.

Agreed

  30. This means it is good and virtuous to be public about one’s efforts to do good effectively.

Generally agreed.

  31. If there’s a risk that others will perceive you negatively (e.g. as boastful), accepting this reputational cost for the sake of better promoting norms of beneficence is even more virtuous. Staying quiet for fear of seeming arrogant or boastful would be selfish in comparison.

Agreed

  32. In principle, we should expect it to be good for the world to have a community of do-gooders who are explicitly aiming to be more effectively beneficent, together.

Agreed, but "in principle" is doing a lot of work here. I think the initial Bolshevik party broadly fit this description, for an example of how this could go wrong. 

  33. For most individuals: it would be good (and improve their moral character) to be part of a community whose culture, social norms, and expectations promoted greater effective beneficence.

Depends on which community we are talking about. See again: the Bolsheviks. 

  34. That’s what the “Effective Altruism” community constitutively aims to do.

Agreed.

  35. It clearly failed in the case of SBF: he seems to have been influenced by EA ideas, but his fraud was not remotely effectively beneficent or good for the world (even in prospect).

Agreed on all statements.

  36. Community leaders (e.g. the Centre for Effective Altruism) should carefully investigate / reflect on how they can reduce the risk of the EA community generating more bad actors in future.

Agreed.

  37. Such reflection has indeed happened. (I don’t know exactly how much.) For example, EA messaging now includes much greater attention to downside risks, and the value of moral constraints. This seems like a good development. (It’s not entirely new, of course: SBF’s fraud flagrantly violated extant EA norms;[2] everyone I know was genuinely shocked by it. But greater emphasis on the practical wisdom of commonsense moral constraints seems like a good idea. As does changing the culture to be more “professional” in various ways.)

There has definitely been some reflection and change, much of which I approve of. But it has not been smooth sailing, and I think the response to other scandals leaves a lot to be desired. It remains to be seen whether ongoing efforts are enough.

  38. No community is foolproof against bad actors. It would not be fair or reasonable to tar others with “guilt by association”, merely for sharing a community with someone who turned out to be very bad. The existence of SBF (n=1) is extremely weak evidence that EA is generally a force for ill in the world.

I agree that individuals should not be tarred by SBF, but I don't think this same protection applies to the movement as a whole. We care about outcomes. If a fringe minority does bad things, those things still occur. SBF conducted one of the largest frauds in history: you don't see Oxfam having this kind of effect. It's n=1 for billion-dollar frauds, but the n is a lot higher if we consider abuse of power, sexual harassment, and other smaller harms.

The more power and influence EA amasses, the more appropriate it is to be concerned about bad things within the community.

  39. The actually-existing EA community has (very) positive expected value for the world. We should expect that having more people exposed to EA ideas would result in more acts of (successful) effective beneficence, and hence we should view the prospect favorably.

I think EA has totally flubbed on AI x-risk. Therefore, if I have the choice between recommending EA in general or just GiveWell's top charities, doing the latter will be better.

  40. The truth of the above claims does not much depend upon how likeable or annoying EAs in general turn out to be.

Agreed, but certain types of dickish behaviour are a flaw of the community that has a detrimental effect on its health, and makes its decision-making and effectiveness worse.

  41. If you find the EA community annoying, it’s fine to say so (and reject the “EA” label), but it would still be good and virtuous to practice, and publicly promote, the underlying principles of effective beneficence. It would be very vicious to let children die of malaria because you find EAs annoying or don’t want to be associated with them.

Agreed. I generally steer people to GiveWell or its charities, rather than to EA as a whole.

  42. None of the above assumes utilitarianism. (Rossian pluralists and cosmopolitan virtue ethicists could plausibly agree with all the relevant normative claims.)

I think some of the claims are less valuable outside of utilitarianism, but whatever.

With that all answered, let me add my own take on why I don't recommend EA to people anymore:

I think that the non-speculative side of EA (global poverty and animal welfare) is nice and good, and is on net making the world a better place. I think the speculative side of EA, and in particular AI risk, contains some reasonable people, but also enough people who are ridiculously wrong, overconfident, and power-seeking to drag the whole operation into net-negative territory.

Most of this bad thinking originates from the Rationalist community, which is generally a punchline in wider intellectual circles. I think the Rationalist community is on the whole epistemically atrocious and overconfident about things for baffling reasons, and tends towards hero worship and spreading a lot of factually dubious ideas with very poor justification. I find some of the heroes they adore to be unpleasant people who spread harmful norms, ideas, and behaviour.

Putting it all together, I think that overall EA is a net positive, but that recommending EA is not the most positive thing you can do. Attacking the bad parts of EA while acknowledging that malaria nets are still good seems like a completely rational and good thing to do, either to put pressure on EA to improve, or to provide impetus for the good parts of EA to split off. 
