huw

567 karma · Joined · Working (0-5 years) · Sydney NSW, Australia
huw.cool

Bio

I live for a high disagree-to-upvote ratio

Comments (99)

I was under the impression that most people in AI safety felt this way—that transformers (or diffusion models) weren't going to be the major underpinning of AGI. As has been noted a lot, they're really good at achieving human-level performance on most tasks, particularly with more data & training, but they can't generalise well and are hence unlikely to be the 'G' in AGI. Rather:

  1. Existing models will be economically devastating for large sections of the economy anyway
  2. The rate of progress across multiple domains of AI is concerning, and increased funding to AI more generally will flow into new development domains
  3. Even if neither of these things is true, we still want to advocate for increased controls around the development of future architectures

But please forgive me if I had the wrong impression here.

I'm a bit confused. I was just calling Aschenbrenner unimaginative, because I think trying to avoid stable totalitarianism while bringing about the very conditions he identifies for stable totalitarianism lacks imagination. I think the onus is on him, if he is taking what he himself identifies as extremely significant risks, to be imaginative in reducing those risks. It is intellectually lazy to claim that your very risky project is inevitable (in many cases by literally extrapolating straight lines on charts and saying 'this will happen') and then work to bring it about as quickly and as urgently as possible.

To make this clear: by corollary, I would support an unimaginative solution that doesn't involve taking these risks, such as not building AGI. I think the burden for imagination is higher if you are taking more risks, because you could use that imagination to come up with a win-win solution.

You are right—thank you for clarifying. This is also what Torres says in their TESCREAL FAQ. I've retracted the comment to reflect that misunderstanding, although I'd still love Ozy's take on the eugenics criticism.

If you are concerned about extinction and stable totalitarianism, 'we should continue to develop AI, but the good guys will have it' sounds like a very unimaginative and naïve solution.

This sounds great! Have you given any thought to how we could make suzetrigine available to people in LMICs, or people who can’t afford it/don’t have health insurance?

Very curious what the actual play is here. I suspect, at worst, xAI just gets to be a holding company for GPUs and can flip them at a profit. At best, maybe Elon thinks generative Twitter will restore its original value for a sale? Regardless, his ability to fundraise for mid ideas is remarkable.

Equally, the best talent from non-Western countries usually migrates to Western countries where wages are orders of magnitude higher. So this ends up being self-reinforcing.

Hey there! For what it's worth, did you look at the Global Burden of Disease study? They define 'cause' and 'risk factor' separately, so they count direct drug overdoses under causes, but also calculate the death & DALY burdens attributable to drug addiction, tobacco use, and high alcohol use (you can play around with the models here). Note that all estimates below have wide credible intervals in their models, but I've omitted them for readability. I also don't know how they perform their risk factor attribution, but since a lot of experts contribute to this, I can't imagine it's worse than your analysis or missing something crucial.

In their data, tobacco contributes 195M DALYs/year (6.76% of the total DALY burden suffered by all humanity), high alcohol use contributes 72M or 2.51%, and drug use 28M or 0.96%.
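
As a rough sanity check on those figures (my own back-of-envelope sketch, not anything from the GBD tooling): dividing each risk factor's DALYs by its quoted share should imply roughly the same total global DALY burden, and it does: about 2.9B DALYs/year in all three cases.

```python
# Back-of-envelope check that the quoted DALY figures and percentage
# shares are mutually consistent. The (dalys, share) pairs are the
# numbers quoted above; the ~2.9B total is implied by them, not a
# figure I've taken from GBD directly.
risk_factors = {
    "tobacco": (195e6, 0.0676),
    "high alcohol use": (72e6, 0.0251),
    "drug use": (28e6, 0.0096),
}

for name, (dalys, share) in risk_factors.items():
    print(f"{name}: implies ~{dalys / share / 1e9:.2f}B total DALYs/year")

combined = sum(dalys for dalys, _ in risk_factors.values())
print(f"combined: {combined / 1e6:.0f}M DALYs/year, "
      f"~{combined / 2.9e9:.1%} of a ~2.9B total")
```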

In the U.S., these risk factors contribute 22M DALYs/year. Combined, this is more than any single level-2 direct cause of death in the GBD (cardiovascular disease, the #1 cause, accounts for 18M DALYs/year). Equivalently, it would be the 5th-largest level-2 direct cause globally, behind cardiovascular disease, respiratory disease, neoplasms, and maternal disorders. But I'd warn against making these sorts of comparisons, because they obviously depend on how you slice up the data (for the same reason, the chart you made from WHO data doesn't hold much water with me on its face).

I think the best next step for you would be to build a strong case that addiction is neglected relative to top EA cause areas such as malaria, childhood vaccinations, and maternal health. You could try to find good estimates of the global or per-country funding going to each issue relative to its contribution to the global burden of disease. I am not sure how that analysis would play out, but I'd love to see it on the forum!

Yeah, I think Ozy's article is a great retort to Torres specifically, but it probably doesn't generalise to everyone who has used the TESCREAL label to explain this phenomenon, many of whom probably have stronger arguments.

I've noticed Torres likes to bring up a particular critique: that longtermism is eugenicist. I haven't been great at parsing it, because it's never very well explained, but my best guess is that it goes:

  • Longtermists prioritise the long-term future very strongly
  • In their thought experiments, they are regularly happy to make existential trade-offs for one group of people in order to improve the lives of a different group
    • In some cases, arguments like these have been made to materially redirect funding from saving the lives of the global poor to 'increasing capacity' for rich people who could work on longtermist causes
    • These 'capacity increases' sometimes amount to just improving quality of life for those people (e.g. Wytham Abbey, in the most egregious case)
  • Sometimes these groups get selected in a way that makes them look suspiciously genetic
    • For example, the people who get privileged in these scenarios are overwhelmingly (but not exclusively) white, and the people who get traded off are overwhelmingly non-white
  • Therefore, longtermism isn't necessarily intentionally eugenicist, but without significant guardrails it could very well end up improving the lives of some genetic groups at the expense of others

This is the best steelperson I could come up with. I am sympathetic to the above formulation, but I imagine Torres' version is a bit more extreme in practice. Fundamentally, I wonder if longtermists should more strongly reject arguments that involve directing funding toward privileged people for 'capacity building'.

But regardless, I'd love to know your thoughts on that particular line of reasoning (and not necessarily on Torres' specific formulation of it, which, as you've demonstrated, is likely too extreme to be coherent).

(I wonder if, now that we've thoroughly discredited this person, we can move on to more interesting and stronger critiques of longtermism)

[This comment is no longer endorsed by its author]