Longtermist shower thought: what if we had a campaign to install Far-UVC in poultry farms? Seems like it could:
1. Reduce a bunch of diseases in the birds, which is good for:
   a. the birds' welfare;
   b. the workers' welfare;
   c. therefore maybe the farmers' bottom line?;
   d. preventing/suppressing human pandemics (e.g. avian flu)
2. Would hopefully drive down the cost curve of Far-UVC
3. May also generate safety data in chickens, which could be helpful for derisking it for humans
Insofar as one of the main obstacles is concern about health effects in humans, deployment on poultry farms would at least raise those concerns only for a small group of workers.
Sales professionals might be able to meaningfully contribute to reducing bio x-risk. They could do so by working for germicidal UV companies in promoting their product and increasing sales. This is not my own idea, but I do not think I have seen this career track before and thought it might be useful to some - people with sales backgrounds might not easily find impactful roles (perhaps apart from fundraising and donor relations). If you need more details please just comment here and I will give as much detail as I have on this opportunity.
An interesting quote relevant to bio attention hazards from an old CNAS report on Aum Shinrikyo:
Footnote source in the report: "Interview with Fumihiro Joyu (21 April 2008)."
There is a natural alliance that I haven't seen happen, though both groups are in my network: pandemic preparedness and COVID-caution. Both want clean indoor air.
The latter group of citizens is a very mixed one, including both very reasonable people and unreasonable 'doomers'. Some people have good reason to remain cautious around COVID: immunocompromised people and their households, or people with a chronic illness, especially my network of people with Long Covid, who frequently (~20%) worsen after a new COVID infection.
But these concerned citizens want clean air, and are willing to take action to make that happen. Given that the riskiest pathogens tend to also be airborne, like SARS-CoV-2, this would be a big win for pandemic preparedness.
Specifically, I believe both communities are aware of the policy objectives below and are already motivated to achieve them:
1) Air quality standards (CO2, PM2.5) in public spaces.
Schools are especially promising from both perspectives, given that parents are motivated to protect their children & children are the biggest spreaders of airborne diseases. Belgium has already adopted regulations (although very weak, it's a good start), showing that this is a tractable policy goal.
Ideally, air quality standards also incentivize Far UVC deployment, which would create the regulatory certainty for companies to invest in this technology.
Including standards for airborne pathogen concentrations would be great, but I think that currently faces many technical limitations.
2) Public R&D investments to bring down cost & establish safety of Far UVC
Most of these concerned citizens are actually aware of Far UVC and would support this measure. It appears safe in terms of direct radiation damage, but may create unhealthy compounds (e.g. ozone) by chemically reacting with indoor air particles.
I also believe that governments have good reasons to adopt these policies, given that they would reduce pressure on healthcare systems and could reduce the disease burden.
We should prepare for a hypothetical generalized EA-bashing.
As time goes by, we should expect EA to be the target of more and more criticism. More than that, we should probably also plan for periods during which EA will be, by default, considered an evil thing. This scenario does not seem far-fetched to me, as it already seems to be materializing in France.
We need a plan: building one is not costly, and I think it is plausible enough that EA's reputation will keep degrading over the next three years for time spent on this in local groups to have net-positive expected value.
I think that the best thing we can do is to never, never abandon the principles of charitability, respect and rationality that inhabit the EA space. Some people will try to push us to anger, to make us say things that are unwarranted. But we should never commit this crime. Yann LeCun is a good example of how someone can end up being exploited (voluntarily or not) through their anger: on Twitter he is borderline violent, while in real life he retracts and discusses calmly. Interlocutors who met only his calm in-person version could come away resenting the near-violence he displayed online. This would be disastrous.
On all sides and with all interlocutors, even the most abhorrent ones, we should strive to be calm and respectful. I think that Eliezer Yudkowsky's exchanges with Yann LeCun are, sadly, an example of the opposite happening. Maybe Eliezer sounds like a calm person to you, but I can very easily empathize with LeCun on why his replies sound arrogant and dismissive. You cannot say the same about someone like, e.g., Anthony Magnabosco, who is a better model to strive towards in this setting (I'm not talking about the method but the general tone and gentleness).
2. Do not lose sight of the purpose.
Something worth noting is that, as EA is going to be the center of many critiques, so
Spread the word! https://web.archive.org/web/20240105055337/http://www.nytimes.com/2024/01/03/health/covid-masks-vaccinations.html?smid=nytcore-ios-share&referringSource=articleShare (I edited the link so that it doesn't require a New York Times account to read the article.) TL;DR: there might (emphasis on "might") be another COVID-19 wave on the rise in the US; a new COVID-19 variant (JN.1), for which there is a vaccine, is spreading everywhere; not enough people in the US are wearing masks; and not enough people in the US are getting the latest vaccines against COVID-19 (including the vaccine(s) covering JN.1), the flu, and RSV.
With a number of charity evaluators' recommendations coming out over the last few days/weeks, has there been any further development on AI safety/GCR evaluator(s)? This need was raised in the post below. (I don't know whether best EA Forum practice is to resurrect an old thread or not, so I apologize if it would be better to just comment there.)