
ClimateDoc

576 karma

Bio

Mid-career climate science researcher in academia

Previously used display name "Pagw"

Comments (85)

I think there are at least two relevant aspects here - the impact of ceasing insect farming and the question of which policies should be supported.

On the impact of ceasing insect farming, a consideration that it's not clear to me has been taken into account is what the land would be used for if not for growing insect feed. It wouldn't necessarily become wild; rather, it could be used to grow other crops, and so have no large effect on wild animal welfare. Rates of deforestation suggest there is plenty of demand for arable land. Biofuels also seem to be held back by land availability and worries over competition with food crops, again potentially acting as a strong source of demand for land. So the effect of removing one source of demand is complex, and may just result in substitution by another type of farming. The marginal effect may be on deforestation rates - but how strongly those respond to changes in crop demand is unclear to me.

Re the question of whether this gives support for insect farming: even if it had an overall positive effect, it's not clear it should be advocated if there would be better uses for that land, e.g. growing biofuels. So it doesn't clearly make a "case" for defending insect farming.

More generally, if an action A involves doing P and Q, where P is good and Q is bad, but there are ways of doing P that don't involve the harm of Q, then the implication would seem to be to advocate one of those other ways of doing P and not to defend A - in this case P = farming crops and Q = farming insects.

It sounds like the benefit under this argument comes from reducing wild land. You could do that without causing lots of other insects (or other farmed animals) to suffer, e.g. grow crops and burn them for energy instead, or manage the land to keep insect numbers down. So I don't find this argument very persuasive as a case for counting intensive farming of insects or other animals as a positive, even supposing that insects (or other animals) have overall negative lives in the wild. Perhaps this isn't the right place to discuss this in depth, though.

Relevant news article from today, on a report saying people are unlikely to be willing to eat insects - just thought I'd share: https://www.theguardian.com/environment/2025/jun/25/eating-insects-meat-planet

One of the main points of the article is that insect farming is bad for insect welfare, so Vasco's comment seems on-topic enough for me. Maybe the link to that part of the argument could have been stated more clearly.

Maybe it seems repetitive if you see such comments a lot, but then it suggests that main posts are repeatedly neglecting the argument. Perhaps it would be better for main posts just to point out that this argument exists in their caveats and link to a discussion somewhere. If it might change the whole sign of whether something is good or bad, it seems like it should be at least mentioned.

For people like me who only come to read the occasional post, it does feel useful to be reminded of these other perspectives.

I had a look; it seems to presume the AI-owners will control all the resources, but that doesn't seem like a given (though it may pan out that way).

I realise you said you didn't want to debate these assumptions, but just wanted to point out that the picture painted doesn't seem inevitable.

I don't really follow why one set of entities getting AGI and not sharing it should necessarily lead to widespread destitution.

Suppose A, B and C are currently working and trading with each other. A develops AGI and leaves B and C to themselves. Would B and C now just starve? Why would that necessarily happen? If they are still able to work as before, they can do that and trade with each other. They would become a bit poorer from needing to replace the goods that A had a comparative advantage in producing, I guess.

For B and C to be made destitute directly, it would seem to require that they are prevented from working at anything like their previous productivity, e.g. if A were providing something essential and irreplaceable for B and C (maybe software products if A is techy?), or if A's AGI went and pushed B and C off a large fraction of natural resources. It doesn't seem very likely to me that B and C couldn't mostly replace what A provided (e.g. with current open-source software). For A to push B and C off a large enough share of resources, when the AGI has presumably already made A very rich, would require A to be more selfish and cruel than I hope is likely - but it's unfortunately not unthinkable.
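To make that concrete, here's a toy sketch (the goods and all productivity numbers are entirely invented) showing that B and C still gain from trading with each other even after A withdraws - they lose whatever A supplied, but they aren't left with nothing:

```python
# Toy model (all numbers invented) of B and C trading after A withdraws.
# Output per unit of labour for two goods, "food" and "tools":
output = {
    "B": {"food": 4.0, "tools": 1.0},  # B is relatively better at food
    "C": {"food": 1.0, "tools": 3.0},  # C is relatively better at tools
}

# If each works alone (no trade), splitting labour 50/50 between goods:
for name, o in output.items():
    print(f"{name} alone: {0.5 * o['food']} food, {0.5 * o['tools']} tools")

# If B and C specialise in their comparative advantage and trade,
# joint output exceeds the sum of what they produce alone:
print(f"Trading: {output['B']['food']} food, {output['C']['tools']} tools")
```

Trading gives 4 food and 3 tools between them, versus 2.5 and 2.0 combined when working alone - poorer than a three-way economy that includes A, but nowhere near starvation.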

Of course there would probably still be hugely more inequality - but that doesn't imply B and C are destitute.

I could imagine there being indirect large harms on B and C if their drop in productivity were large enough to create a depression, with financial system feedbacks amplifying the effects.

In any case, the picture you paint seems to require an additional reason that B and C cannot produce the things they need for themselves.

Are there roles in your current organisation that you think would be more enjoyable and that you could move into, say more at the level of making direct contributions?

Also, have you very thoroughly thought through the risks of retiring on $700k? I've seen in various discussions that it's common for people to think a 4% withdrawal rate is sustainable with low risk in early retirement, but there are various reasons why that's probably optimistic, so I just thought I'd flag it in case that's what this is based on. Maybe it's not...
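For illustration, here's a minimal simulation of a fixed real withdrawal from a $700k pot (the return figures are invented, and a constant-return model is itself generous, since it ignores sequence-of-returns risk):

```python
# Rough sketch of why a fixed 4% withdrawal is sensitive to assumptions.
# All return figures are invented for illustration, not financial advice.
def years_until_depleted(pot, withdrawal_rate, real_return, max_years=60):
    withdrawal = pot * withdrawal_rate  # spending fixed in real terms at the start
    for year in range(1, max_years + 1):
        pot = pot * (1 + real_return) - withdrawal
        if pot <= 0:
            return year
    return max_years  # pot survived the whole horizon

for real_return in (0.05, 0.03, 0.01):
    years = years_until_depleted(700_000, 0.04, real_return)
    print(f"{real_return:.0%} real return: pot lasts ~{years} years")
```

Under these made-up figures the pot survives indefinitely at 5% real returns but runs out after roughly 47 years at 3% and roughly 29 years at 1%. With volatile returns, a bad early decade can deplete the pot even when the long-run average looks fine, which is one reason the 4% figure is often considered optimistic for retirements much longer than the 30-year horizons it was originally studied over.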

My understanding of these "reasoning" approaches is that they seem to work very well on problems where there is a well-defined correct answer, and where that can be automatically verified. And it seems reasonable to expect much progress in that area.

What is the thinking on how much of human reasoning work involves problems like these?

As a counter-example, in my own work on climate prediction, we do not get rapid feedback about what works well, and it is contested what methods and frameworks we should even use. That is, it's not presently possible to say "getting a good answer just requires solving [list of well-defined problems]" (except making computers so fast that we can do pretty much exact simulations of physics). So it doesn't seem clear to me that these reasoning models will get a lot better at that kind of thing. But this is perhaps towards the far end of the spectrum of complex problems.

I can see these reasoning models becoming very good at things like writing code where requirements to be met can be precisely specified and automatically verified, and improving performance of devices (such as computer chips) according to well-specified benchmarks. How much difference would it make to make fast progress on problems similar to these?
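As a concrete (made-up) illustration of what "precisely specified and automatically verified" means here - a candidate program can be scored just by running it against test cases, giving the fast, unambiguous feedback signal these training methods rely on:

```python
# Minimal sketch of automatic verification (task and tests made up):
# a candidate solution is scored by running it against known test cases.
def candidate_sort(xs):
    """Stand-in for a model-generated solution to a well-specified task."""
    return sorted(xs)

test_cases = [
    ([3, 1, 2], [1, 2, 3]),
    ([], []),
    ([5, 5, 1], [1, 5, 5]),
]

def verify(solution, cases):
    passed = sum(solution(list(inp)) == expected for inp, expected in cases)
    return passed / len(cases)  # fraction passed, usable as a reward signal

print(f"score: {verify(candidate_sort, test_cases):.0%}")
```

Climate prediction has no analogue of that verify step that runs in seconds - the "test cases" only arrive decades later.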

I don't see a reason to think that systems trained this way will yield impressive performance at solving messier problems without clear right answers, like predicting complex systems (that can't be observed experimentally or simulated very well), selecting amongst decision options with different strengths on multiple criteria, dealing with organisational politics, etc. Does that seem fair?

These are genuine questions - I don't feel I have a good grasp of what kinds of work most of our economy is engaged in...

It's not clear to me why the aim ought to be to sample randomly amongst all people - it seems like a different population could reasonably be chosen!

Sounds interesting. I had a go at the tool, but was a bit perplexed that the "lottery story" it showed me was for a Romanian earning $2,500/month - that doesn't seem like the kind of life that most needs attention drawn to it, nor the kind of person effective development charities would help (it even says this person is at the 86th percentile of global income). Below that it talked about ending hunger, eradicating disease etc., which didn't relate to the story. I'd focus the tool on stories about the kinds of people that effective charities would actually help. I tried to get it to generate another story to see what else came up, but it wouldn't.
