
Tom Gardiner

Warfare officer @ Royal Navy
442 karma · Working (0-5 years)

Bio


Tom is a junior officer in the UK's Royal Navy. He has been interested in EA and Rationality since 2017, was on the committee for the EA society at the University of St Andrews, and can intermittently be found in Trajan House, Oxford.

Note: Evidence suggests there is another Tom Gardiner in the EA community which may lead to reputational confusion.

Comments (26)

That was the point I had meant to convey, Aaron. Thanks for clarifying that. 

This seems like an important critique, Tobias, and I thank you for it. It was a useful readjustment to realise that doing this wouldn't make me exceptionally wealthy by the standards of either society at large or the EA community. My sense is still that going into this at even the 92nd percentile of the UK would be really valuable. Not world-changing valuable, but life-changing for many. It's plausible that everything gets solved by technology and richer people, given how hard it is to predict how the future will pan out. I see this strategy mainly as a backstop to mitigate the awfulness of the most S-risk-intensive ways things could go.

Thanks for the input, Theodore!

I agree that my chances of getting a trader role are higher than average, and that whoever would get the job instead is almost certainly not going to donate appreciable sums. Naturally, I would devote a very large amount of time and energy to deciding how to give away this money.

I'm very sceptical about my ability to become an "expert" on these questions surrounding AI. This is largely based on my belief that my most crippling flaw is a lack of curiosity, but I also doubt that anyone could come up with robust predictions on these questions through casual research inside a year.

My intuition points strongly in the other direction regarding donating to AMF now (with the caveat that I have been donating to GiveWell's top charity portfolio for years). I don't have a strong view on how the cost of a DALY will change in the future, but I am confident it won't rise by a greater percentage than well-placed investments would. It is a tragedy that anyone dies before medicine advances to the point of saving them, but we must triage our giving opportunities.

I'd never been convinced that earning to give in the conventional sense would be a more impactful career for me than operations management work. My social network (which could be biased) consistently suggests the EA community has a shortage of management talent, and a large amount of money is already being thrown at solving this problem, particularly in the Bay Area and London.

I'm sympathetic to this point, and stress that my argument above only applies if one is relatively optimistic about solving alignment and relatively pessimistic about these governance/policy problems. I don't think I'm informed enough to be optimistic on alignment, but I do feel very pessimistic about preventing immense wealth inequality. The amount of coordination required between so many actors for this not to be the default seems unachievable to me.

This may be available elsewhere, and I accept that I might not have looked hard enough, but are there impactful, funding-constrained donation opportunities for solving these problems?

Further to this, if the primary goal is to learn how the general public thinks about charitable giving, you could probably achieve the same result for far less than 100k. The remainder could be held in reserve and given to that cause if you really do think it's the best use of the money, or to your current best guess if you do not. It seems like there's an insight you wish to gain, and you've set a needlessly big price tag on obtaining it.

I must offer my strongest possible recommendation for Speedy BOSH! - it has genuinely changed my relationship with food. None of the recipes I have tried are bad; some are fairly average, but many are truly glorious. Obviously, as an EA I have been keeping notes on each dish I try from it in a Google Doc, and I'd be happy to suggest my favourites to anyone who buys/has the book.

Lots of good points here. One slight critique and one suggestion to build on the above. If I seem at all confrontational in tone, please know that this is not my aim - I think you made a solid comment.

Critique: I feel great caution around the belief that "smart, young EAs", given grants to think about stuff, are the best solution to anything, no matter how well they understand the community. To my mind, one of the most powerful messages of the OP is the one regarding a preference for orthodox yet inexperienced people over those with demonstrable experience but little value alignment. Youth breaking from tradition doesn't seem a promising hope when a very large portion of this community is, and always has been, young. Indeed, EA was built from the ground up by almost exactly the people in your proposed teams. I'm sure smart, young EAs are readily available in our labour force to accept these grants, far more so than people who deeply understand the community but do not consider themselves EAs (whose takes should be the most challenging) or who have substantial experience in setting good norms and cultural traits (whose insights will surely be wiser than ours). I worry the availability and/or orthodoxy of the former is making them seem more ideal than the latter.

Suggestion: I absolutely share your concerns about how the EA electorate would be decided. As a starting point, I would suggest that voting power be given to people who take the Giving What We Can pledge and uphold it for a stated minimum time. It serves the costly-signalling function without expecting people to simply buy "membership". My suggestion has very significant problems that many will see at first glance, but I share it in case others can find a way to make it work. Edit: It seems others have thought about this a lot more than I have, and it seems intractable.

The first point here seems very likely true. As for the second, I suspect you're mostly right, but there's a little more to it. The first of the people I quote in my comment was eventually persuaded to respect my views on altruism, after discussing the philosophy surrounding it almost every night for about three months. I don't think a shorter timespan could have succeeded in this regard. He has not joined the EA community in any way, but he kind of gets what it's about and thinks it's basically a good thing. If his first contact with the community had been hearing someone express that they donate 10% of their income or try to do as much good as possible, his response in NATO phonetics could be abbreviated to Foxtrot-Oscar.

In the slow, personal, deliberate induction format, my friend ended up with a respectful stance. Through any less personal or nuanced medium, I'm confident he would have thought of the community only with resentment. Of course, there was never a counterfactual in which he donated or did EA-aligned work, so nothing has been lost there. The harm I see from this is a general souring of how Joe and Jane Public respond to someone identifying as an EA. Thus far, most people's experience will be that their friends and family haven't heard of it, don't have a strong opinion and, if they're not interested, live and let live. I caveat the next sentence as a System 1 intuition, but I fear there's only so much of the general public who can hear about EA and react negatively before admitting to being in the community becomes an outright uncool thing that many would be reluctant to voice. Putting aside the number-crunching on how that would affect total impact, it would be a horrible experience for all of us. I don't think you need a population that's proactively anti-EA for this to happen; a mere passive dislike is likely sufficient.

Thank you for writing this. I'm not sure whether I agree or disagree, but it seems like a case well made. 

While I do not mean to patronise - many others will have noticed this too - the one contribution I feel I can make is to emphasise how very differently people in the wider public may react to ideas and arguments that seem entirely reasonable to the typical EA. Close friends of mine, bright and educated people, have passionately defended the following positions to me in the past:
-They would rather millions die from preventable diseases than have Jeff Bezos donate his entire wealth to curing those diseases, if such a donation were driven by obnoxious virtue-signalling. The difference made to real people didn't register in their judgements at all, only motivations: charitable donation can only be good if done privately, without telling anyone.

-It is more important that money be spent on the people who are most costly and difficult to help than on those whose problems can be cured cheaply, because otherwise the people with expensive problems will never be helped.

-Charity should be something that everyone can agree on, and thus any charity dedicated to farmed animal welfare is not a valid donation opportunity.

-The Future of Humanity Institute shouldn't exist, and the people there don't have real jobs. I didn't even get to explain what FHI is trying to do or what its research covers; from the name alone, they concluded that discussing how humanity's future might go should be an intellectual interest for some people, but not a career. They would not be swayed.

Primarily, I think the "so what?" of this is that trying to communicate EA ideas, nuanced or not, to the wider public is almost certainly going to be met with backlash. The first two anecdotes I list imply that even "it is better to help more people than fewer" is contentious. Sadly, I don't think most of what this community supports fits into the "selfless person deserving praise" category many people hold, and calling ourselves Effective Altruists sounds like we've ascribed to ourselves virtues without any justification that a person on the street would acknowledge.

Accepting that some people will react negatively and that this is beyond our control, my humble recommendation would be that any more direct attempt to communicate ideas to the public get substantial feedback beforehand from people in walks of life very different from the EA norm. People are really surprising.

Agreed - Scott Alexander does this very well, as does Yudkowsky in Rationality: A-Z. Both also benefit from writing on blogs of their own creation, where they can dictate a lot of the norms, so I expect they have a fair bit more slack in how high the ceiling is.

As a teenager, I came up with a set of four rules that I resolved ought to be guiding and unbreakable in going through life. They were, somewhat dizzyingly in hindsight, the product of a deeply sad personal event, an interest in Norse mythology, and Captain America: Civil War. Many years later, I can't remember what Rules 3 and 4 were; the Rules were officially removed from my ethical code at age 21, and by that point I'd stopped being so ragingly deontological anyway. The first two I recall clearly.

Rule 1 - Do not give in to suffering. Rule 2 - Ease the suffering of others where possible. 

The first Rule was readily applicable to daily life. As for the second, it seemed noble and mightily important, but there was rarely occasion to enact it. In middle-class rural England, with no family drama and generally contented friends, there wasn't much suffering around me. When I moved out to university, one of my flatmates was close friends with the man who had set up the EA group there, and on learning more about it I was struck by the opportunity that GiveWell and 80k represented for fulfilling my Rules.

This story does not account for my day-to-day motivation to uphold a Giving What We Can pledge or fumble through longtermist career planning. I've been persuaded by the flavour of consequentialism used here, I think that improving the experience of sentient life is wonderful and, quite frankly, I have no other strong career ambitions to offer competition. Generally buying into the values and aims of this community is my day-to-day motivation. Nevertheless, on taking a step back and thinking about my life and what I wish to do with it, I still feel about the abstract concept of suffering the way Bucky Barnes feels about Iron Man at the end of that film. The Rules don't matter to me anymore, but their origin grants my EA values the emotional authority to set out a mission statement for what I should be doing.
