I work as a researcher in statistical anomaly detection in live data streams. I work at Lancaster University and my research is funded by the Detection of Anomalous Structure in Streaming Settings group, which is funded by a combination of industrial funding and the Engineering and Physical Sciences Research Council (ultimately the UK Government).
There's a critical research problem that's surprisingly open: if you are monitoring a noisy system for a change of state, how do you ensure that you find any change as soon as possible, while keeping your monitoring costs as low as possible?
By "low", I really do mean low - I am interested in methods that take far less power than (for example) modern AI tools. If the computational cost of monitoring is high, the monitoring just won't get done, and then something will go wrong and cause a lot of problems before we realise and try to fix things.
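To give a flavour of what "cheap" monitoring looks like, here is a minimal sketch of a classic online changepoint detector, the one-sided CUSUM statistic, which needs only constant time and memory per observation. This is a generic textbook method, not my own research, and the parameter values (`mean0`, `drift`, `threshold`) are illustrative assumptions rather than recommended settings.

```python
def cusum_detector(stream, mean0=0.0, drift=0.5, threshold=5.0):
    """One-sided CUSUM: flag an upward shift in the mean of a noisy stream.

    O(1) work and memory per observation, so the monitoring cost stays low.
    mean0: assumed pre-change mean.
    drift: slack term (roughly half the smallest shift worth detecting).
    threshold: alarm level, trading detection delay against false alarms.
    All three values here are illustrative, not tuned recommendations.
    """
    s = 0.0
    for t, x in enumerate(stream):
        # Accumulate evidence of an upward mean shift, clipped at zero.
        s = max(0.0, s + (x - mean0 - drift))
        if s > threshold:
            return t  # index at which the alarm is raised
    return None  # no change detected


# Usage: a flat stream followed by an upward shift at index 10.
alarm = cusum_detector([0.0] * 10 + [2.0] * 20)  # returns 13
```

The appeal is exactly the trade-off described above: a single running sum per monitored quantity, so the detector can run continuously on very modest hardware, with the threshold controlling how long a real change takes to trigger an alarm versus how often noise does.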
This has applications in a lot of areas and is valued by a lot of people. I work with a large number of industrial, scientific and government partners.
Improving the underlying mathematical tooling behind figuring out when complex systems start to show problems reduces existential risk. If for some reason we all die, it'll be because something somewhere started going very wrong and we didn't do anything about it in time. If my research has anything to say about it, "the monitoring system cost us too much power so we turned it off" won't be on the list of reasons why that happened.
I also donate to effective global health and development interventions and support growth of the effective giving movement. I believe that a better world is eminently possible, free from things like lead pollution and neglected tropical diseases, and that everyone should be doing at least something to try to genuinely build a better world.
I would argue that EA jobs don't pay well at all for the level of work they expect, and that they all carry a substantial sacrifice premium compared to other jobs. EA job hunting is also awful, the work is horrendously insecure, and I definitely wouldn't recommend EA as a way to get ahead in life. I consider the overmarketing of this path to ambitious young idealists to be one of EA's worst failures.
I would agree that I view EA as a great opportunity, and that through such sacrifice we achieve a somewhat spiritual process of transformative self-actualisation. But I don't think someone else should have to, particularly not someone who is struggling in a more marginalised position. And it's my experience that they generally don't.
Legitimately, if you have any "you should join up with EA" argument that works on a marginalised person that isn't just lying by pretending there's more in it for them than there actually is, please let me know because I'd like to use it.
Hello! Great to have you on-board.
While I'm very supportive of improving the intracommunity experience of members of marginalised groups, I'm skeptical of outreach to them (above and beyond general outreach). What being an EA is, is a substantial personal sacrifice in favour of helping other people you'll probably never meet. I'm not sure it's effective, or appropriate, to disproportionately ask for that sacrifice from members of marginalised groups.
For example, I don't think we should be asking black Americans (more than white Americans) to fundraise for malaria prevention for black African children, just because they're both black. That, to me, seems incredibly crass. Being an EA as a whole is basically that, a couple steps removed - make substantial sacrifices to effectively help some of the most marginalised people in the world.
People may feel strongly that the work to increase the cost-effectiveness and transparency of the funding in the effective animal advocacy movement is sufficiently high leverage that it needs a more professional tone than that likely to come from work performed by an emotionally motivated volunteer.
I would suggest that if this is the case, they fund someone to do this, rather than disparage EA volunteers or near-volunteers for not being up to the standards of paid full-time professionals. I note that GiveWell researcher salaries are substantial.
The issues raised are, after all, legitimate - Vetted Causes has led to several transparency improvements around the direction of millions of pounds of animal advocacy funding.
Thank you for your work improving the evaluations of Animal Charity Evaluators, and for the evaluations you have done.
I think that highly engaged animal welfare focused EAs doing evaluations on a volunteer or near-volunteer basis, and publishing them for others to learn things from, is very important in a field so starved for funding.
I have mostly decided that I don't care what anyone else thinks about the diamond next to my name, because anyone who could be thinking about it is probably not someone in danger of dying from a neglected tropical disease. There are probably a few people in my life who would be actively upset if they were aware I was giving away thousands of pounds a year but not to them. I don't go out of my way to wave what I'm doing in their faces - that would be crass. But I am really rather over the idea that we all need to pretend we have no money spare for giving while simultaneously splashing out piles of cash on personal spending.
My hottest take is that EA needs to figure out better engagement strategies for effective givers and stop constantly dunking on "earning to give" as a controversial concept. It should also make its large events cheaper (there's been great progress on this). I reckon EA needs to prepare for the very real possibility of our per-person infrastructure funding being restricted.
Hi Zoe! Great to see you on the Forum.
I know that EAs and FIRE people often hang out in similar social spaces - as they're both interested in things like higher-paying jobs, lower-cost lifestyles, and well-chosen investment strategies. I also know of a few FIRE people who plan on donating their wealth to effective charities should they not end up spending it all in retirement. I believe the rough opinion of EAs who donate now is that there are a number of pressing near-term issues that will (hopefully!) not exist in 40 years. One might hope that we'd have eradicated malaria by then, for example. So people who are really interested in fighting malaria (or other neglected global health challenges) donate now.
I think the choice for an effective giver to pledge or not to pledge, to advertise or not to advertise one's giving, to donate now or later etc. is really an individual's choice, and I'd support them setting up a structure that works for them and their life. It seems like you have a structure that works for you, and I'm glad of that.
On a separate note: I'm hoping that EA is getting away from being "AI safety long words club" as a result of a) AI safety group organising spinning out of EA into its own thing, and b) funded AI safety work becoming more about public and government engagement rather than fairly niche research.
You may be interested in Christians for Impact or in Bless Big.
Experientially, I agree: "doing good better" just isn't as inspiring a message as "the best ways to help others".
With that said:
Wasn't the reason for "doing good better" partly about tamping down maximising messages within the EA community, because committed EAs sometimes overmaximise at the cost of their health or their other ethics, and ultimately burn out and quit?
Is it not supposed to provide a more sustainable framework, rather than a more initially personally inspiring one?
So I wouldn't strike it from the record just yet.
I agree with you that improving intracommunity experience is important.