This is a special post for quick takes by Haris Shekeris. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I'm a bit of a newbie in EA (two to three weeks of reading and discovering stuff), so this may prove to be quite irrelevant, but here it goes anyway. I'm wondering whether EAs should be worried about stories like the following (if needed, I think I can find the scientific literature behind it):

https://www.sciencetimes.com/articles/40818/20221104/terrifying-model-provides-glimpse-humans-look-year-3000-overusing-technology.htm

My worry is that the standard EA literature, which assumes there will be thousands of generations to come if humans are left alone, may overlook mundane effects or scenarios such as those suggested by studies like this one.

One example, based on the article above, is that humans could become unrecognizable compared with today for the mundane reason of prolonged use of well-established technologies that already exist (laptops and smartphones). An unlikely extension of this is that in, say, 1,000 years Homo sapiens goes extinct because evolution for optimized device use has interfered with the reproductive system (or simply because people rationally decided they no longer wanted to have sex, whether for pleasure or for child-raising).

Another example could be the long-term effects of substances accumulating in the body, which again could change the average human body. These are unknown unknowns at the moment, but there is the precedent of fish turning hermaphroditic after exposure to antidepressants in a lake in the 1990s, which alerted people to the effects of small yet steady concentrations of medicines. A scenario of this kind would be if we discovered soon, say in 2025, that a gut concentration of microplastics above, say, 2 μg starts to seriously degrade sperm or egg quality, rendering reproduction impossible.

Of course, we can always assume that major scientific bodies would issue advice to reverse such adverse effects, but what if that advice proves only as effective as anti-smoking campaigns? Imagine a campaign today urging people in advanced Western countries to cut internet time to 30 minutes per day because a cutting-edge scientific report has linked it to a rise in deadly brain tumours: how would that play out? My prediction would involve denialism, conspiracy theories, and rioting if a technological fix didn't arrive quickly, to say the least. Remember that even with COVID, a global pandemic, the desired response (for example, everybody or most people in the world getting vaccinated so as to protect themselves and, by not spreading the virus, to act altruistically towards others, at least while the virus's deadliness was uncertain) largely failed because humanity brought out its worst self (countries jockeying to secure more vaccines for their own citizens, and pharmaceutical companies maximizing their profits, to cite just two examples).

Once again, apologies if this is a bit off-topic or totally misses the point of EA.

I disagree a little bit with the credibility of some of the examples, and want to double-click others. But regardless, I think this is a very productive train of thought and thank you for writing it up. Interesting!

And btw, if you feel like a topic of investigation "might not fit into the EA genre", and yet you feel like it could be important based on first-principles reasoning, my guess is that that's a very important lead to pursue. Reluctance to step outside the genre, and thinking that the goal is to "do EA-like things", is exactly the kind of dynamic that's likely to lead the whole community to overlook something important.

Dear Emrik, 

Many thanks for the feedback and the encouragement! The examples were a bit speculative, though the fish one is quite well known; I think it was in the 90s. I also know that long-term studies of the effects of small yet steady concentrations of macro-molecules have only recently begun to be conducted, not least because ten years ago we didn't have the technology for such studies.
If anybody is interested in 'researching together', I can imagine pursuing this further (this is an invitation); at the moment, though, it's just an idle thought.

So please, if anybody with more knowledge and means is interested, I'd welcome the chance to conduct a literature review on anything mentioned above, and we can take it from there!

https://www.theguardian.com/commentisfree/2022/nov/30/science-hear-nature-digital-bioacoustics

What happens if in the future we discover that all life on Earth (especially plants) is sentient, but at the same time (a) there are many more humans on the planet waiting to be fed and (b) synthetic food/proteins are deemed dangerous to human health?

Do we go back to eating plants and animals? Do we farm them? Do we keep pursuing food technologies despite past failures?

Flagging a potential problem for longtermism and the prospect of expanding human civilisation to other planets: what will the people there eat? Can we just assume that technoscience will provide the answer, or is that too quick and too optimistic? Can one imagine a situation where humanity goes extinct because the Earth finally becomes uninhabitable and, on the first new planet we settle, the technology either fails or the settlers miss the window of opportunity to grow their own food? I'm sure there must be examples of this among settlers of new worlds in human history; I don't know whether anybody is working on it in the context of longtermism, though.

Just some food for thought, hopefully.

https://www.theguardian.com/environment/2023/jan/07/holy-grail-wheat-gene-discovery-could-feed-our-overheated-world
