Hi there! I'm an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)
I think one could argue that creating an index across causes makes sense, because it allows for exposure to things that are hard to compare (e.g., we are not perfect utilitarians). In practical terms, it would give EAs a reference point and an easy way to donate across causes. For example, one could create a fund indexed to the elicited preferences of the EA community, or of some group of experts. The closest thing I am aware of is what Giving What We Can does.
I don’t know about other folks, but I think this is my first criticism of them for as long as I can remember, both online and offline. In general I think they have been fairly responsible with AI safety, or as responsible as I would expect a company to be. But even if I did criticise them a lot, I think it would still be a valid criticism. After all, as a non-American I feel quite uneasy about this, even if they are arguably not the main actor. In any case, I think liberal democracies should oppose mass surveillance in general.
Worth noting that the mass surveillance friction point is only about domestic mass surveillance. So, does Anthropic believe mass surveillance of non-Americans is just fine?
I’m pretty confident the EA community is under-discussing how to prevent a global AGI-powered autocracy, especially if US democracy implodes under AGI pressure. There are two key questions here: (i) how to make the US more resilient, and (ii) how to make the world less dependent on the resilience of US democracy.
Thanks for the post!
As Bruce Friedrich mentions in his book Meat, I think this is unlikely to be the case. While I expect opposition from farmers, I think the large companies are more likely to be supportive, because (i) it is plausible that cultivated meat could become much cheaper than animal-produced meat, since its cost floor is lower, (ii) they could create larger barriers to entry, using e.g. IP, (iii) they do not have large sunk costs in conventional animal farming facilities, and (iv) it likely allows them faster market reactions to demand and more stability (no avian flu, say). Bruce sometimes feels a bit too optimistic in his book, but I tentatively agree with those points.
I am not well calibrated on this, but I would argue the likelihood of the full EU making cultivated meat illegal is low. I think many countries in the EU have been able to ban GMOs or nuclear power because there was little push from the pro-GMO or pro-nuclear side, and there were easy environmental arguments to be made from the anti side, even if misguided. I don't think that is likely to be the case for cultivated meat. It is more likely to resemble what happened with coal phase-outs.
I think the most likely scenario is:
It is also worth noting that if one cultivated meat product is approved for sale in the EU, one could, with time and patience, probably bring a single-market case to strike down laws forbidding its sale in other EU countries. I agree, though, with the statement that "current trajectories in the US and EU point toward more restrictions before the likely arrival of AGI."
Throughout the draft there seems to be a question of whether cultivated meat would achieve the same taste. In practice, I think consumers won't wonder too much if it looks, tastes and is otherwise exactly the same as what they typically buy. This is, in fact, perhaps the biggest difference between cultivated and plant-based food: plant-based can taste just as good, but an individual product may not offer the original culinary flexibility. For example, just today, a blind tasting of Aleph Farms' cultivated meat confirmed this (source here).
I think it is likely that, from a purely scientific point of view, someone will pay for that to happen, be it Jeff Bezos (who has research centres working on this), Bill Gates (who is quite worried about climate change) or Dario Amodei (who thinks biotechnology is the best application of AI).
Again, how rooted is this in actual data? It would seem to me that if you go to the supermarket and find two exactly equivalent products (packaging included) with a $0.25 price difference, people will typically buy the cheaper one. Especially if they try it and like it, which they should, since it is the same. In fact, what confuses me about comments like this is how a product can cause neophobia if it looks and is perceived as exactly equivalent to the old one. You'd have to flag something about "labs" for it to even be perceived as a different product in the first place.
I think this may not be the right way to look at this, not just because they may be correlated, but also because it is not a one-shot event. There will likely be a back-and-forth of bans and reversals, say, until some stable equilibrium is reached. I think a more useful question to ask is how to stack the probabilities in favour of a given equilibrium.