I've heard a plan to use impact certificates (https://medium.com/@paulfchristiano/certificates-of-impact-34fa4621481e) in the following way.

Suppose I work at org A, but I actually value work at org B more. The plan is that I get impact certificates for my work for A, and find a buyer who is willing to give me B-certificates in exchange for my A-certificates. Now I have some amount of B-certificates, which is like doing work for B in the first place.
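
As a minimal sketch of the mechanics (all quantities and the exchange rate below are made up, purely for illustration):

```python
# Toy model of the swap described above; every number here is hypothetical.
a_certs_from_my_work = 100     # certificates org A issues me for my work there
b_per_a_market_rate = 0.5      # assumed rate a buyer offers: B-certs per A-cert

# I sell my A-certificates to a buyer who values org A's work,
# and receive B-certificates in exchange.
b_certs_i_end_up_with = a_certs_from_my_work * b_per_a_market_rate

print(f"My A-certificates: 0, my B-certificates: {b_certs_i_end_up_with}")
# The claim is that holding those B-certificates is "like" having done
# that much work for org B directly.
```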

I'm not convinced by this. My question: if I don't think work at A is valuable, why should I trust the market to know better than I do? I'm okay with the stock market setting good prices for shares of Microsoft, but that market is huge; the impact certificate market is likely to be small for at least a while.

An analogy: when should I trust PredictIt's market (https://www.predictit.org/markets/detail/3633/Who-will-win-the-2020-Democratic-presidential-nomination) on who will win the Democratic nomination over Nate Silver's analysis (https://projects.fivethirtyeight.com/2020-primary-forecast)? Right now the two disagree significantly on Bloomberg's chances (PredictIt gives him 25%; Nate Silver gives him 4%).

Another angle on the concern: ordinarily, when you believe B > A, you "vote with your feet" by doing B instead of A. In this situation, you are instead effectively "voting" to lower the price of A-certificates on the market, so you're trading one kind of vote off against the other. But it seems likely that many people will take your working at A as an endorsement of A. That convention is really strong; I certainly follow it. Unless you put "I ACTUALLY TRADE ALL THESE IMPACT CERTIFICATES FOR B-CERTIFICATES" on your LinkedIn and mention it to people you meet at parties, I think people will continue to read your employment that way.

I'm concerned that splitting the "vote" between these two methods will do harm to the community's ability to decide what types of work are good.

What are people's thoughts on this? Any written resources? (I can't find much on impact certs beyond Paul's original post.)

This use-case for impact certificates isn't predicated on trusting the market more than yourself (although that might be a nice upside). It's more like a facilitated form of moral trade: people with different preferences about which altruistic work happens all end up happier, because each person can work on the things they can make the most progress on rather than only the things they personally want to bet on. (There are some reasons to be sceptical about how often this will actually be a good trade, because there can be significant comparative advantage to working on a project you believe in, both for motivation and for having a clear sense of the goals; however, I expect there would be good trades at least some of the time.)

On your second concern, I think working in this way should basically be seen as a special case of earning to give. You're working for an employer whose goals you don't directly believe in because they will pay you a lot (in this case, in impact certificates), which you can use to further things you do believe in. Sure, there's a small degree to which people might interpret your place of work as an endorsement, but I don't think this is one of the principal factors feeding into our collective epistemic processes (particularly since you can explicitly disavow it, and in a world where such trades happen often, others may be aware of the possibility even before any disavowal), and I wouldn't give it too much weight in the decision.

Hmm, your first paragraph is indeed a different perspective from the one I had. Thanks! I remain unconvinced, though.

Casting it as moral trade gives me the impression that impact certificates are for people who disagree about ends, not for people who agree about ends but disagree about means. In the case where my buyer and I both have the same goals (e.g. chicken deaths prevented), why would I trust their assessment of chicken-welfare org A more than I trust my own? (Especially since I presumably work there and have access to more information about it than they do.)

Some reasons I can imagine:

- I might think that the buyer is wiser than me and want to defer to them on this point. In this case I'd want to be clear that I'm deferring.

- I might think that no individual buyer is wiser than me, but the market aggregates information in a way that makes it wiser than me. In this case I'd want a robust market, probably better than PredictIt.

I'm not trying to take any view over whether there's moral disagreement (I think in practice moral and empirical disagreements are not always cleanly distinguishable, but that's a side point).

If you agree on goals, then maybe you will Aumann update towards agreement on actions and no trade will be needed. If there's a persistent disagreement (even after you express that organisation A does not seem to you to be a good use of resources) then maybe it's not a trade between different ultimate moral perspectives, but a trade between different empirical worldviews, such that the expectation of having made the trade is better for both worldviews than before making the trade. From your perspective as a certificate-seller, you don't need to know whether the buyer agrees with your moral views or not.
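
As a toy illustration of why such a trade can be positive-sum in expectation for both sides (the valuations and quantities below are made up):

```python
# Hypothetical valuations, in arbitrary "units of impact as each party sees it".
# Seller (me): I think a B-certificate is worth 3 units, an A-certificate only 1.
# Buyer: they think an A-certificate is worth 3 units, a B-certificate only 1.
seller_values = {"A": 1, "B": 3}
buyer_values = {"A": 3, "B": 1}

# I hold 10 A-certificates from my work at A; the buyer holds 10 B-certificates.
trade_size = 10

seller_before = trade_size * seller_values["A"]   # 10 units, by my lights
buyer_before = trade_size * buyer_values["B"]     # 10 units, by their lights

# We swap: I end up with 10 B-certificates, they end up with 10 A-certificates.
seller_after = trade_size * seller_values["B"]    # 30 units, by my lights
buyer_after = trade_size * buyer_values["A"]      # 30 units, by their lights

assert seller_after > seller_before and buyer_after > buyer_before
print("Both parties think they are better off after the trade.")
```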

I agree with this. I wasn't trying to make a hard distinction between empirical and moral worldviews. (Not sure if there are better words than 'means' and 'ends' here.)

I think you've clarified it for me. It seems to me that impact certificate trades have little downside when there is persistent, intractable disagreement. But in other cases, deciding to trade rather than to attempt to update each other may leave updates on the table. That's the situation I'm concerned about.

For context, I was imagining a trade with an anonymous partner, in a situation where you have reason to believe you have more information about org A than they do (because you work there).

In the case where the other party is anonymous, how could you hope to update each other? (i.e. you seem to be arguing against anonymity, not against selling impact certificates)

Sure, I agree that if they're anonymous forever you can't do much. But that was just the generating context; I'm not arguing only against anonymity.

I'm arguing against impact certificate trading as a *wholesale replacement* for attempting to update each other. If you are trading certificates with someone, you are deferring to their views on what to do, which is fine, but it's important to know you're doing that and to have a decent understanding of why you differ.

"If you are trading certificates with someone, you are deferring to their views on what to do"

I think this is meaningfully wrong; at least, the sense in which you are deferring is no stronger than the sense in which employees defer to their employer's views on what to do (i.e. it's not an epistemic deferral but a deferral to authority).

"The sense in which employees are deferring to their employer's views on what to do" sounds fine to me, that's all I meant to say.

[for context, I've talked to Eli about this in person]

I'm interpreting you as having two concerns here.

Firstly, you're asking why this is different than you deferring to people about the impact of the two orgs.

From my perspective, the nice thing about the impact certificate setup is that if you get paid in org B impact certificates, you're making the people at orgs A and B put their money where their mouths are. Analogously, suppose Google is trying to hire me, but I'm actually unsure about Google's long-term profitability, and I'd rather be paid in Facebook stock than Google stock. If Google pays me in Facebook stock, I'm not deferring to them about the relative values of these stocks; I'm just getting paid in Facebook stock, such that if Google is overvalued it's no longer my problem, it's the problem of whoever traded their Facebook stock for Google stock.

The reason why I think that the policy of maximizing impact certificates is better for the world in this case is that I think that people are more likely to give careful answers to the question "how relatively valuable is the work orgs A and B are doing" if they're thinking about it in terms of trying to make trades than if some random EA is asking for their quick advice.

---

Secondly, you're worrying that people might end up seeming like they're endorsing an org that they don't endorse, and that this might harm community epistemics. This is an interesting objection that I haven't thought much about. A few possible responses:

  • It's already an issue that people have different amounts of optimism about their workplaces, and people don't often publicly state how much they agree or disagree with their employer (though I personally try to be clear about this). It's unlikely that impact equity trades will exacerbate this problem much.
  • Also, people often work at places for reasons other than "I think this is literally the best org", e.g.:
    • comparative advantage
    • thinking that the job is fun
    • the job paying you a high salary (this is exactly analogous to being paid in impact equity of a different org)
    • thinking that the job will give you useful experience
    • random fluke of who happened to offer you a job at a particular point
    • thinking the org is particularly flawed, so you can do unusual amounts of good by pushing it in a better direction
  • Also, if there were liquid markets in the impact equity of different orgs, then we'd have access to much higher-quality information about the community's guess about the relative promisingness of different orgs. So pushing in this direction would probably be overall helpful.

What is meant by "not my problem"? My understanding is that it means "what I care about is no better off if I worry about this thing than if I don't." Hence the analogy to salary: if all I care about is $$, then getting paid in Facebook stock means my utility is the same whether or not I worry about the value of Google stock.

It sounds like you're saying that, if I'm working at org A but getting paid in impact certificates from org B, the actual value of org A impact certificates is "not my problem" in this sense. Here obviously I care about things other than $$.

This doesn't seem right at all to me, given the current state of the world. Worrying about whether my org is impactful is my problem in that it might indeed affect things I care about, for example because I might go work somewhere else.

Thinking about this more, I recalled the strength of the assumption that, in this world, everyone agrees to maximize impact certificates *instead of* counterfactual impact. This seems like it just obliterates all of my objections, which are arguments based on counterfactual impact. They become arguments at the wrong level. If the market is not robust, that means more certificates for me, *which is definitionally good*.

So this is an argument that if everyone collectively agrees to change their incentives, we'd get more counterfactual impact in the long run. I think my main objection is not about this as an end state — not that I'm sure I agree with that, I just haven't thought about it much in isolation — but about the feasibility of taking that kind of collective action, and about issues that may arise if some people do it unilaterally.

"I'm concerned that splitting the 'vote' between these two methods will do harm to the community's ability to decide what types of work are good."

Could you go into detail about why you think this would be bad? Typically, when you are uncertain about something, it is good to have multiple (semi-)independent indicators, since you can get a more accurate overall impression by combining them.
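
For instance, if we treat the two "votes" as noisy, roughly independent estimates of the same underlying quality, averaging them gives a lower-error read than either alone. A quick simulation with made-up numbers:

```python
import random

random.seed(0)

true_value = 1.0       # the (unknown) true value of the work, in arbitrary units
noise_sd = 0.5         # assumed noise level of each indicator
n_trials = 100_000

def squared_error(estimate: float) -> float:
    return (estimate - true_value) ** 2

err_single, err_combined = 0.0, 0.0
for _ in range(n_trials):
    # Two semi-independent noisy reads on the same quantity,
    # e.g. "where people choose to work" and "market prices of certificates".
    indicator_1 = true_value + random.gauss(0, noise_sd)
    indicator_2 = true_value + random.gauss(0, noise_sd)
    err_single += squared_error(indicator_1)
    err_combined += squared_error((indicator_1 + indicator_2) / 2)

print(f"Mean squared error, one indicator:  {err_single / n_trials:.3f}")
print(f"Mean squared error, average of two: {err_combined / n_trials:.3f}")
# With independent noise, the average's error is roughly half the single indicator's.
```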

I'm deciding whether organization A is effective. I see some respectable people working there, so I assume they must think work at A is effective, and I update in favor of A being effective. But unbeknownst to me, those people don't actually think work at A is effective; they trade their impact certificates to other folks who do. I don't know these other folks.

Based on the theory that it's important to know who you're trusting, this is bad.
