I am a researcher in statistical anomaly detection in live data streams at Lancaster University. My research is funded by the Detection of Anomalous Structure in Streaming Settings group, which is supported by a combination of industrial funding and the Engineering and Physical Sciences Research Council (ultimately the UK Government).
There's a critical research problem that's surprisingly open: if you are monitoring a noisy system for a change of state, how do you guarantee that you detect any change as soon as possible, while keeping your monitoring costs as low as possible?
By "low", I really do mean low - I am interested in methods that take far less power than (for example) modern AI tools. If the computational cost of monitoring is high, the monitoring just won't get done, and then something will go wrong and cause a lot of problems before we realise and try to fix things.
This has applications across many areas and matters to many people. I work with a large number of industrial, scientific and government partners.
Improving the underlying mathematical tooling behind figuring out when complex systems start to show problems reduces existential risk. If for some reason we all die, it'll be because something somewhere started going very wrong and we didn't do anything about it in time. If my research has anything to say about it, "the monitoring system cost us too much power so we turned it off" won't be on the list of reasons why that happened.
I also donate to effective global health and development interventions and support growth of the effective giving movement. I believe that a better world is eminently possible, free from things like lead pollution and neglected tropical diseases, and that everyone should be doing at least something to try to genuinely build a better world.
Here's my problem with neglectedness:
Have you looked into Power for Democracies? https://powerfordemocracies.org - EA does in fact evaluate anti-fascist interventions.
Not "good practice" as in policy but "a good practice", as I was replying to a comment saying that it was a bad thing that there had been no official CEA response in 24 hours on a platform that CEA owns.
I do not think that quick responses will help this situation. The time for a quick response meaningfully fixing things is long past. And I would think that any attempt to respond too quickly would be CEA attempting to control the narrative developing here in a way that is unfair to Frances and to the EA community. The purpose of this forum is to allow the EA community to meet and discuss things of importance to EA (which this is), and CEA hosts this forum to serve the EA community - not to control its brand image.
I also explained my disagree vote in order to make clear that I was not disagreeing with the rest of the post. I do agree that allowing a misogynistic culture to develop to the degree that an incident like this could happen is indicative of a failure of leadership capacity in EA's leadership organisation. And this raises questions about whether some of the leaders involved here really are the kind of people best placed to lead the EA movement.
https://80000hours.org/2015/08/what-are-the-10-most-harmful-jobs/
Number 6: Weapons research
Weapons researchers develop new ways of waging war. While in some cases new weapons will be more targeted and less harmful than the old ones, in general we expect this work to be very bad for the world. Over the medium term, new weapons technologies become widely disseminated, so both sides end up being able to use them. Usually this makes any remaining wars more destructive.
Furthermore, weapons researchers are among the most likely to accidentally design new technologies that could be used to destroy humanity, such as nuclear or biological weapons in the past. We just don’t know what new opportunities for mass destruction are available, and we are probably better off not knowing.
My thoughts here:
*half in the sense that it should be one of at least two life philosophies you genuinely have.
That is a great deontological point for why you might wish to avoid paying your local EA community-builder out of pledge money. (I could counter that I think it is deontologically quite inappropriate that EA's growth strategy currently expects its local community-builders to do a bunch of free work).
And I understand your hesitancy about counterfactuality. I'm not sure I believe the very high multipliers claimed by organisations that brand themselves as fundraising from EAs, because of counterfactuality issues (I think EAs are precisely the kind of people who might otherwise have given effectively anyway).
A professional fundraising organisation such as One for the World will likely never meet you, and is a slightly safer bet on the deontology part. And because they fundraise from outside of the EA community, there's not so much of a counterfactuality problem.
Yep, Giving What We Can is a great place to donate money, and you can do it really easily from GWWC's website!
I have written this post to reach someone who wants to know, with great certainty, that if they put $1k somewhere, at least $1k extra goes to GiveWell's Top Charities (very specifically those). The point is to let people who are inclined to donate to GiveWell's Top Charities (and there are lots of those people around) know that such organisations exist and have plenty of funding and scaling gaps.
I haven't attempted a comparative analysis of multipliers, other than convincing myself that there are a lot of things currently not funded that are definitely above 3x, even after applying various downward adjustments. I'd be interested in seeing a comparative analysis, though I imagine it might be tricky to equalise the methodologies.
Surely you could phrase things the other way round?
"We're pretty sure this will be made illegal in 10 years time, as the law catches up to our technology advances. However, it's not illegal now, so feel free to buy it from us and use it!"
I'd be really uncomfortable with a billionaire tech CEO openly saying that.