I’ve been wondering whether EA could find some strategic benefits in a) a peer-to-peer trust economy, or b) rational coordination towards various goals. These seem like simple ideas, but I haven’t seen them publicly discussed.
I’ll start from the related and oversimplifying assumptions that
a) there’s a wholly fungible pool of EA money (for want of a better name, let’s call it Gringotts) shared among EAs and EA organisations, and
b) all EAs trust all other EAs as much as they trust themselves such that we form a megamind (the Hive), and
c) all EAs consider all EA goals to be worthwhile and high value, even if they see some as substantially less so than others, such that we all have basically the same goal (collecting all the Pokemon).
In some cases these assumptions are so flawed as to be potentially fatal, but I think they’re an interesting starting point for some thought experiments - and we can focus on relevant problems with them as we go. But the EA movement is getting large enough that even if these assumptions were only to hold for microcosms of it, we might still be able to get some big wins. So here are some ideas for exploiting our Hivery, in two broad categories:
Building an EA social safety net
1) Intra-Hive insurance
Normal insurance is both inherently wasteful (insurance companies have to spend ages assessing risk to ensure that they make a profit on their rates) and negative expected value for the insuree (who pays for the waste, plus the insurer’s profits). In a well-functioning Hive seeking all the Pokemon, with a sufficiently sizable Gringotts, each EA could just register things of irreplaceable value to them personally, and if they ever broke/lost/accidentally swallowed the item, job, existential status etc, they would get some commensurate amount of money with few questions asked. That would save Gringotts from the negative expected value (EV) of almost all insurance, give everyone peace of mind, and avoid a lot of time and angst spent dealing with potentially unscrupulous or opaque insurers.
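To make the expected-value point concrete, here’s a toy calculation (every number - the loss probability, the payout and the insurer’s overhead loading - is invented purely for illustration):

```python
# Toy illustration of why conventional insurance is negative-EV for the insuree.
# Every number here is invented for the example.

p_loss = 0.02        # assumed annual probability of losing the insured item
payout = 1_000       # what the insurer would pay out on a loss

expected_payout = p_loss * payout          # £20/year of 'real' risk
loading = 0.40                             # assumed overhead + profit margin
premium = expected_payout * (1 + loading)  # £28/year charged to the insuree

ev_for_insuree = expected_payout - premium
print(f"Premium £{premium:.2f}/year, EV for insuree £{ev_for_insuree:.2f}/year")

# A large shared pool that self-insures keeps that £8/policy/year,
# in exchange for bearing the (small, well-diversified) variance itself.
```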
In practice this is only as good an idea as our simplifying assumptions combined, and creates some dubious incentives, so might be a pipe dream. Still, if the EA community really is or could be a lot closer to the assumed ideal than society at large, it seems like there could be room for some small-scale operations like this - for example EA organisations offering such pseudo-insurance to their staff, and large-scale donors offering it to the EA organisations.
One way to potentially strengthen the trust requirement would be to develop an opt-in EA reputation system on an EA app or website, much like the ratings for Uber drivers. If it felt uncomfortable, obviously you wouldn’t have to get involved, but it could allow a fairly straightforward tier-based system of what you were eligible for based on your rating (probably weighted by other factors, like how many people had voted). You could also add some weighting for people currently working at EA organisations, though it might be too limiting to make that a strong prerequisite (Earn-to-Givers might want to insure themselves so they could safely give a higher proportion, for example). As with normal insurance it would create moral hazard problems, but hopefully with some intelligent but low-cost reputation management this could still be a big net positive for Gringotts.
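As a very rough sketch of how the tiers might work, one option would be to shrink a member’s average rating towards a neutral prior when they have few votes, then map the adjusted score to an eligibility tier. The prior, the small bump for EA-org staff and the thresholds below are arbitrary illustrative choices, not a proposal for the actual weighting:

```python
# Minimal sketch of a vote-weighted reputation score feeding a tier system.
# All constants (prior, weights, thresholds) are arbitrary illustrative choices.

def reputation_score(mean_rating: float, n_votes: int,
                     prior: float = 3.0, prior_weight: int = 5) -> float:
    """Shrink a 1-5 mean rating towards a neutral prior when votes are few."""
    return (mean_rating * n_votes + prior * prior_weight) / (n_votes + prior_weight)

def coverage_tier(score: float, works_at_ea_org: bool = False) -> str:
    """Map a score to a pseudo-insurance eligibility tier."""
    if works_at_ea_org:            # optional small weighting for EA-org staff
        score += 0.25
    if score >= 4.5:
        return "full pseudo-insurance"
    if score >= 3.5:
        return "capped pseudo-insurance"
    return "not yet eligible"

print(coverage_tier(reputation_score(5.0, n_votes=2)))    # few votes -> capped
print(coverage_tier(reputation_score(4.8, n_votes=200)))  # well-established -> full
```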
Personally, I think the reputation app is a really cool idea even if it never got used for anything substantial, but I’m prepared to be alone in that.
1.1) Guaranteed income pool for entrepreneurial EAs
This is much like insurance, with similar limitations and much the same potential for mitigating them. Except here it’s based on the idea that entrepreneurialism is one of the highest EV-earning pathways, but because of the ultra-high risks it’s out of reach to anyone who can’t get insta-VCed or support themselves for several months. As with insurance, Gringotts is big enough that it doesn’t really suffer from risks that affect a single person. In this case though, a further factor is that the EA would need to demonstrate some degree of competence to ensure that they were actually doing something positive EV, and to the extent that they could do so they might be able to get funding from regular pathways.
Something similar might also be useful for people interested in starting EA charities, before the stage where they might be eligible for a substantial grant from eg GiveWell or Open Phil. Again, I’m not sure such a window exists, but it seems worth looking at for people from poorer backgrounds.
2) Low interest loans
Loans have all the waste and negative EV of insurance, except that you get the money straight away - and there’s no question about whether you get it. This maybe makes them a stronger candidate for Gringotts-coverage, since it removes one risk factor. Relatedly, they also avoid the incentive-distorting effects of insurance, removing another.
In the real world, loans also require a credit rating check, which can be based on some quite arbitrary criteria, such as being unable to guarantee repayments because you’re poor, whether you use a credit card or a debit card, or even whether you’re registered to vote. And given the relatively low number of factors the credit rating relies on, there would probably be a lot of random noise in it even if they were all sensible.
Lastly, with a normal loan, something has necessarily gone wrong for the creditor if a repayment is missed. Gringotts, on the other hand, might sometimes be content for a debtor to miss repayments if the money nonetheless went towards gathering a lot of Pokemon, or even if it had been wisely spent on an ultimately doomed venture.
3) A Hive existential space network
Couchsurfing may already be A Thing, but there might be some opportunities for making it a smoother experience given a robust trust network. Also, since sleeping isn’t the only mode of existence, living spaces aren’t the only kind of existential space worth sharing; given how many EAs work remotely, there’s probably also a lot of demand for working spaces. EAs with more modest accommodation could also offer solo- or duo- (etc) working spaces - in the solo case, if they would normally work elsewhere themselves. It might even be helpful to have co-working spaces in a fairly close area with explicitly differing cultures (eg one being mostly silent, the other having music or ambient sound, or freer conversation, or with people working on similar project types, people with similar - or deliberately disparate - skills, people bringing children etc).
Given the psychological benefits for some of us of having a separate space for living and working combined with the emotional benefits of having a short commute, EAs who live near each other might even benefit from just swapping homes for the working day.
4) EA for-profits offering discounts on VAT-applied goods and services
At the moment there are few EA for-profits, and many of those mainly offer services to disadvantaged subgroups rather than to other EAs. Nonetheless in future we might see a proliferation of EA startups, even if the only sense in which they’re EA is a strong effective giving culture among their founders. In such a case, if the goods/services they offer are VAT (or similar) taxable, it would provide an incentive for them to offer heavy discounts to other EA organisations and/or EAs - since the lower the cost, the less Gringotts would leak in VAT.
Gringotts could incentivise this with one of the strategies above, though there might be legal implications. Nonetheless, the Hive would surely benefit from finding out exactly what the legal limits are and exploring the possibilities of going right up to them.
Maximising the value of EA employees
5) EA organisations partially substituting salaries with benefits
Every time an EA working at an EA org buys something the org could have bought, Gringotts loses the income tax on whatever they’ve bought. In the UK at least, there’s a tax-free threshold of £11,500, but in an ideal world everything EA employees would want to spend money on above that threshold would be bought for them by EA organisations. More realistically, to keep things relatively egalitarian and maintain sensible incentives, the ideal might be to pay for any things that EA employees would need to maintain a healthy lifestyle. An initial laundry list of candidates I put together:
- accommodation (not necessarily just for employees - we might ultimately be able to build peer-to-org or org-to-org existential space networks, per point 3 above),
- bills,
- travel to and from work,
- a gym membership (or some equivalent physical activity for people who find the gym too sterile),
- out-of-work education,
- electronic equipment for unrestricted (within reason) personal use,
- clothes,
- pension contributions,
- toiletries,
- food,
- medical supplies
I know some of these are already offered by some EA organisations (and many for-profits, come to that), and there will surely be legal restrictions on how much money you can spend on employees like this without it getting taxed. But the potential savings are so big that again the Hive should surely explore and share knowledge of the exact legal boundaries.
6) Employees of EA organisations not donating
Since every such donation is made out of an EA’s taxed income, the same considerations as in 5) apply: every time an EA donates, Gringotts loses the tax paid on the donated amount. The simplest way to avoid this would be for EAs to just ask for a salary 10% lower (or whatever proportion they imagine they would otherwise have donated) than they would have asked for in a comparable job elsewhere.
This would potentially redistribute money among causes, since EAs working at one org might not think it’s actually the best one. But unless the proportion redistributed would be more than the average income tax on an EA salary (somewhere in the vicinity of 20% seems like a plausible estimate), this would be an iterated prisoner’s dilemma. Any individual could move more money to their cause of choice by requesting a higher income, but the fewer of us did so, the more money would end up with all the causes. And it feels like a Hive of cooperating altruists should be able to deal with one little wafer-thin prisoner’s dilemma…
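To put rough numbers on that threshold, here’s a back-of-envelope comparison assuming a flat 20% marginal income tax rate and ignoring Gift Aid, National Insurance and employer costs (the salary figure is invented):

```python
# Comparing 'take extra salary and donate it' vs 'just ask for less salary'.
# Assumes a flat 20% marginal tax rate; ignores Gift Aid, NI, employer costs.

marginal_tax = 0.20
extra_gross = 2_000   # extra salary an EA might request in order to donate it

# Option A: take the extra salary, donate whatever is left after tax.
reaches_other_cause = extra_gross * (1 - marginal_tax)   # £1,600

# Option B: forgo the extra salary; it all stays with your own org, untaxed.
stays_with_own_org = extra_gross                          # £2,000

breakeven = stays_with_own_org / reaches_other_cause
print(f"The other cause must be >{breakeven:.2f}x as effective as your own org")
# -> 1.25x under these assumptions, in the ballpark of the ~20% figure above.
```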
In cases where individuals are working for an EA org but feel that other organisations are substantially more than 20% more effective than their own, it feels like they should often prefer just earning to give. There are numerous possible exceptions - for example if you feel like the multiplier on the other org is higher than 20% but you wouldn’t earn enough in another to multiply to a net plus, or you’re planning to move to another EA organisation but are working in your current job to gain skills and reputation. It seems like such motivations would have intra-EA signalling costs, though, since they imply both that you’re defecting in the prisoner’s dilemma and that you don’t value the work of the people around you that highly. Ironically, it might actually look bad for an EA employee to admit to charitable donations.
Even so, the extra-EA signalling costs of not giving could conceivably outweigh both the intra-EA signals and the tax savings from doing so. If we believe this, an alternative approach would be to have EA orgs explicitly run donation-directing schemes. Each org could contribute to a pool of money it planned to redirect, with the pool’s size depending on its number of staff and their salaries. Then each employee could direct some proportion of it to the cause of their choice; the weight of their direction could either be proportional to the difference between their salary and the max salary they could have asked for or, more diplomatically, just equal for each employee. That way the money would still be distributed in much the same proportion as it currently is, but without being taxed - and EAs could still be said to be donating in some sense at least (and would still have an incentive to keep abreast of what’s going on elsewhere in the EA world).
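Here’s a minimal sketch of the ‘salary forgone’ weighting, with made-up names, salaries, causes and pool size:

```python
# Sketch of a donation-directing pool weighted by salary forgone.
# Names, salaries, causes and pool size are invented for illustration.

pool = 30_000  # money the org sets aside for staff to direct

staff = [
    # (name, salary taken, max salary they could have asked for, chosen cause)
    ("A", 28_000, 35_000, "AMF"),
    ("B", 40_000, 42_000, "MIRI"),
    ("C", 30_000, 30_000, "GiveDirectly"),  # forwent nothing -> no weight
]

forgone = [(name, max_s - salary, cause) for name, salary, max_s, cause in staff]
total_forgone = sum(amount for _, amount, _ in forgone) or 1  # avoid div-by-zero

for name, amount, cause in forgone:
    directed = pool * amount / total_forgone
    print(f"{name} directs £{directed:,.0f} to {cause}")

# The 'more diplomatic' variant simply gives every employee an equal weight.
```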
7) Basing salary on expected career trajectory
Similarly to the previous idea, if I’m working at an EA organisation but expect that in the near future I’ll end up working in the private sector - either because I’m earning to give, because I’m trying to build career capital, or any number of other possible reasons - it doesn’t make sense for me to get a substantial amount more than I need to live on at the EA org and then give a lot of money away after I transition. Better to earn less now and give slightly less later.
Again, this follows from taxation - whether I later pay back the tax on the extra money I earned at the EA org or not, Gringotts will be that much poorer (because it partly comprises me). It also compounds to the extent that you agree with the haste consideration - the money saved now could be worth substantially more than the money you give later.
Moving in the other direction, the same strategy would probably make sense in reverse: if you’re in commerce and transitioning into an EA organisation, you would keep more now and ask for a commensurately lower salary from the EA org later - though the effect would be less clear and less pronounced because of the haste consideration. The haste consideration also suggests that if you’re never expecting to work at an EA organisation, it might be better to donate a declining proportion of your income (or rather, to donate in such a way as to increase the amount you keep for yourself over time, holding the net lifetime amount you expect to donate constant). Since this front-loads your donations, it also has the side-benefit of making future burnout less costly to Gringotts, and perhaps also less tempting for future-you.
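As a toy illustration of the front-loading point, here’s a comparison of a flat and a declining donation schedule with the same nominal lifetime total; the 5% annual discount rate is just an arbitrary stand-in for the haste consideration:

```python
# Flat vs front-loaded donation schedules over 10 years, same nominal total.
# The 5% 'haste' discount rate is an arbitrary illustrative assumption.

years = 10
total = 100_000
discount = 0.05

flat = [total / years] * years                    # £10k every year
declining = [years - t for t in range(years)]     # weights 10, 9, ..., 1
declining = [w * total / sum(range(1, years + 1)) for w in declining]

def present_value(schedule, r=discount):
    return sum(x / (1 + r) ** t for t, x in enumerate(schedule))

print(f"Flat schedule:      £{present_value(flat):,.0f} in present value")
print(f"Declining schedule: £{present_value(declining):,.0f} in present value")
# Both give £100k nominally, but the front-loaded schedule is worth more to
# causes now, and leaves less riding on future-you not burning out.
```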
This strategy is fairly high risk for an individual, in that if you suddenly need to pay for urgent medical assistance or some other emergency expenditure in your younger life, you might find yourself unable to afford it - but that’s just the sort of issue that could be mitigated or even resolved by Hive insurance.
It would also ‘lose’ the interest you’d have earned on the money you’d kept earlier, but you can account for that when calculating future donations. The effect will be dominated by the tax savings, and in any case, the money will still have been earning a (greater) return on investment through its EA use elsewhere in the Hive.
One complicating factor is that sometimes commercial employers will offer a salary based on the size of your current one, so taking a low salary from an EA org might harm future earning prospects. A possible remedy for this, if it wasn’t perceived as dishonest, and assuming the EA is leaving their organisation openly and on good terms with it, would be for them to briefly take a higher salary just as they started hunting for their next job. Personally I think this would be a poetic antidote to this obnoxious practice in the first place, but wider public opinion might disagree with me.
8) Offering clear financial security to all EA employees
Seemingly contrariwise to the above, but bear with me…
EA employees will be more productive if they aren’t dealing with financial insecurity, since such insecurity has high costs in both time and mental health.
According to 80K’s talent gap survey, even a junior hire is worth about $83,000 per year to their EA organisation (that’s the median; the mean is much higher). If we take this literally, then a) EA organisations could comfortably test the effect of doubling (or more) the offered salaries on the number and quality of applications, and perhaps more realistically b) they could afford to offer sufficiently high rates to even their most junior employees that money isn’t a substantial limiting factor in their lives.
What ‘isn’t a substantial limiting factor’ means is obviously fairly vague, but it seems like if any EA is, eg, spending a lot of time commuting, waiting for dated hardware to run, eating a lot of cheap unhealthy food, skipping healthy hobbies, or otherwise losing time or health to save money, then it will impede their productivity. Again, taking the above survey at an admittedly naive face value, it would be worth the average EA org spending up to $830 more per year to increase a junior employee’s productivity by just 1% (perhaps more if the productivity increase would compound over their career).
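The arithmetic, taking the survey figure at face value (which, as noted below, we probably shouldn’t do literally):

```python
# Breakeven spend on a productivity boost, taking the ~$83,000/year
# junior-hire value from the 80K survey at face value (a strong assumption).

value_per_year = 83_000
productivity_gain = 0.01   # a 1% improvement

breakeven_per_year = value_per_year * productivity_gain
print(f"Worth up to ${breakeven_per_year:,.0f}/year for a 1% gain")  # $830

# If the 1% gain persists over, say, a 10-year career, the justified
# spend is correspondingly larger (before any discounting).
```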
We should probably be sceptical of such striking survey results - nonetheless, there’s room to be more conservative and still see the potential for gain here. In an ideal world, the financial security offered could mostly come from the benefits and insurance discussed above - ie at a ~20% discount.
Lastly, this reasoning argues only for the option of higher salaries/benefits - many EAs on very low salaries seem perfectly able and willing to get by on them - and only for people who would otherwise be below whatever financial threshold would allow them to stop feeling constrained or anxious in daily life.
I’m aware that some EA organisations are already implementing some form of these strategies, but they’re far from universally adopted. Perhaps this is because they’re bad ideas - this was quite an off-the-cuff post - but I haven’t really heard substantial discussion of any of them, so let’s have it now. And if there’s any mileage in the core assumptions, I’d hope such discussion will reveal several more ways we can use our almighty collective will.
Full disclosure - I work for an EA organisation (Founders Pledge), so some of these strategies would potentially benefit me. But hopefully they’d benefit FP still more.
Thanks to Kirsten Horton and John Halstead for some great feedback on this post.
For this group to make an effective social safety net for EAs having a bad time, more is needed than just money. When a real problem actually does arise, people tend to spam that person with uninformed suggestions which won't work. They're trying to help, but due to the "what you see is all there is" bias and others, they can't see that they are uninformed and spamming. The result is that the problem doesn't seem real to anyone.
So, the person who has a problem, who may not have any time or emotional energy or even intellectual capacity left over, must explain why dozens of spitball suggestions won't work.
How spitballing can totally sabotage people in need of help:
Imagine that to obtain help, you have to patiently and rigorously evaluate dozens of ill-conceived suggestions, support your points, meet standards of evidence, seem to have a positive attitude about each suggestion, and try not to be too frustrated with the process and your life.
The task of convincing people your problem is real while a bunch of friends are accidentally spamming you with clever but uninformed suggestions might be the persuasive challenge of a lifetime. If any of the ill-conceived options still seem potentially workable to your friends, you will not be helped. To succeed at this challenge, you have to make sure that every spitball you receive from friends is thoroughly addressed to their satisfaction.
A person with a real problem will be facing this challenge when they’re stressed out, time-poor and emotionally drained. They are at their worst.
A person at their worst shouldn't need to take on the largest persuasive challenge of their lives at that time. To assume that they can do this is about as helpful as "Let them eat cake."
There's an additional risk that people will sour on helping you if they see that lots of solution ideas are being rejected. This is despite the fact that the same friends will tell you "most ideas will fail" in other circumstances. They know that ideas are often useless, but instead of realizing that the specific set of ideas in question are uninformed or not helpful, some people will jump to the conclusion that the problem is your attitude.
Just the act of evaluating a bunch of uninformed spitball suggestions can get you rejected!
Distinguishing between a problem that is genuinely too hard for the person to solve and a person who has a bad attitude about solving their problem is a challenge. It's hard for both sides to communicate well enough to figure this out; often a huge amount of information has to be exchanged.
The default assumption seems to be that a person with a problem should talk to a bunch of friends about it to see if anyone has ideas. If you count up the number of hours it actually takes to discuss dozens of suggestions in detail, multiplied by dozens of people, it's not pretty. For many people who are already burdened by a serious problem, that sort of time investment just is not viable. In some cases the entire problem is insufficient time, so it can be unfair to demand that they do this.
In the event that potential helpers are not convinced the problem is real, or aren't convinced to take the actions that would actually work, the person in need of help could easily waste 100 hours or more with nothing to show for it. This will cause them to pass up other opportunities and possibly make their situation far worse due to things like opportunity costs and burnout.
Solution: well-informed advocates.
For this reason, people who are experiencing a problem need an advocate. The advocate can take on the burden of evaluating solution ideas and advocating in favor of a particular solution.
Given that it often requires a huge amount of information to predict which solution ideas will work and which will fail, an advocate probably needs to be well-informed about the type of problem involved, or at least to know from past experience what it's like to go through some sort of difficult time.
Another framing of that solution: EA needs a full time counselor who works with EAs gratis. I expect that paying the salary of such a person would be +ROI.