On January 15 we will have the drawing for the donor lottery discussed here. The opportunity to participate has passed; this post just lays out the details and final allocation of lottery numbers. If you regret missing out, I expect there will be another round, and it would be useful to know that you are interested.

There were 18 participants who contributed a total of $45,650. We will take the first 10 random hexadecimal digits from the NIST randomness beacon at 12pm PST on January 15 and interpret them as a random integer between 0 and 16^10-1. The interval [0, 16^10-1] has been allocated amongst the 18 participants in proportion to their contribution, as indicated in the table below. The random number will fall into the [Low #, High #] range of exactly one participant, who is the winner.

I will set aside $45,650 from my DAF, to be granted at the winner's discretion at any time. They can also choose how that money should be invested in the meantime.

We originally stated that the prize pool would be $100,000, but have decided to adjust it to $45,650, guaranteeing that there will be a winner and reducing my personal risk to zero. The winner is welcome to take a double-or-nothing bet in order to get up to $100,000 if they prefer the larger scale (and can probably find a way to gamble to even larger amounts if they want to); for example, a fair bet that pays $100,000 with probability 45,650/100,000 ≈ 46%, and nothing otherwise, keeps the expected value at $45,650.

Because I no longer bear any risk, I am not going to charge a 1% fee (which was my original plan). Organizing and thinking about the lottery still took 3-4 hours of my time, but I think that I can offer lotteries with minimal labor in the future, and I am happy to put a little volunteer time into making the first one happen. (Some other donor may be a more natural provider over the long run though.) 

| Contributor | Amount ($) | Low # (in decimal) | High # (in decimal) | Probability |
|---|---:|---:|---:|---:|
| Timothy Telleen-Lawton | 5050 | 0 | 121632721144.5 | 11% |
| Gregory Lewis | 5000 | 121632721144.5 | 242061157921.5 | 11% |
| Ajeya Cotra | 2200 | 242061157921.5 | 295049670103.5 | 5% |
| Rohin Shah | 2800 | 295049670103.5 | 362489594699.5 | 6% |
| Helen Toner | 2500 | 362489594699.5 | 422703813087.5 | 5% |
| Nicole Ross | 500 | 422703813087.5 | 434746656765.5 | 1% |
| Howie Lempel | 5000 | 434746656765.5 | 555175093542.5 | 11% |
| Rebecca Raible | 2000 | 555175093542.5 | 603346468253.5 | 4% |
| Pablo Stafforini | 2000 | 603346468253.5 | 651517842964.5 | 4% |
| Aaron Gertler | 500 | 651517842964.5 | 663560686641.5 | 1% |
| Brayden McLean | 5000 | 663560686641.5 | 783989123418.5 | 11% |
| Benjamin Hoffman | 100 | 783989123418.5 | 786397692154.5 | 0.2% |
| Catherine Olsson | 500 | 786397692154.5 | 798440535832.5 | 1% |
| Eric Herboso | 500 | 798440535832.5 | 810483379509.5 | 1% |
| Ian David Moss | 2500 | 810483379509.5 | 870697597898.5 | 5% |
| Glenn Willen | 500 | 870697597898.5 | 882740441576.5 | 1% |
| Jacob Steinhardt | 4000 | 882740441576.5 | 979083190997.5 | 9% |
| Brandon Reinhart | 5000 | 979083190997.5 | 1099511627775 | 11% |
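
To make the draw concrete, here is a minimal sketch in Python, written purely for illustration (it is not anyone's actual tooling): read the beacon's first ten hex digits as an integer, then find whose [Low #, High #] interval contains it. The intervals below are copied from a few rows of the table above; the full table works the same way, and the beacon value used in the example is the one reported in the comments.

```python
def draw_value(beacon_hex: str) -> int:
    """Interpret the first 10 hexadecimal digits of the beacon output
    as an integer between 0 and 16**10 - 1."""
    return int(beacon_hex[:10], 16)

def find_winner(intervals, draw: int):
    """intervals: (name, low, high) rows copied from the allocation table."""
    for name, low, high in intervals:
        if low <= draw <= high:
            return name
    return None

# A few (name, low, high) rows from the table above (remaining rows omitted here).
intervals = [
    ("Timothy Telleen-Lawton", 0, 121632721144.5),
    ("Gregory Lewis", 121632721144.5, 242061157921.5),
    ("Brandon Reinhart", 979083190997.5, 1099511627775),
]

# The beacon value reported in the comments below began 0CF7565C0F.
draw = draw_value("0CF7565C0F")
print(draw)                          # 55689239567
print(find_winner(intervals, draw))  # Timothy Telleen-Lawton
```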

Lessons

Donating to popular charities is a lot easier than contributing to a DAF; future lotteries should probably be implemented as donation swaps. For example, if I wanted to make a $100k contribution to MIRI, then participants could donate $X to MIRI and tell me to reduce my donation by $X. This makes participating in the lottery roughly as easy as donating to MIRI, which has good payment infrastructure. I think donation swaps are also useful when employers offer donation matching, though donation matching didn't come up this year. (I think matching lottery entries is compatible with the spirit of employer donation matching.)

We got more participation than I initially expected. Some of that participation was driven by the novelty of the idea, but I nevertheless expect there will be a larger lottery next year. That should also be helped by a smoother user experience: no $5k minimum, participation via donation swaps (which makes entering very easy), and an upfront explanation of how to participate.

Now that the drawing is going to happen, I do expect the lottery winner to make a materially better decision (in expectation) than they would have made otherwise. Moreover, I think the existence of the lottery was bottlenecked on the kind of work that Carl did in advocating for the idea and contacting possible providers (rather than on the existence of customers). So I've increased my estimate for the value of entrepreneurial spirit in the EA community.

Sanity check

3c0ade0f0490dff240b5b4a97c522c14cfd1490a2d40b4dddc535e0bd238c6fb

This is the SHA-256 hash of the first section of the post (original text here; I've edited it since then but not changed the substance of the agreement), which I will post on Twitter, and other people are free to store it for their records. Hopefully this is a cheap measure that makes it difficult to manipulate the terms of the lottery after the random number is revealed.
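
Anyone holding a copy can check the commitment with any standard SHA-256 tool; here is a minimal Python sketch. The variable `section_text` is a stand-in for the exact original text that was hashed (not reproduced here), and the byte encoding used below is an assumption.

```python
import hashlib

# The hash published above (and tweeted), copied for comparison.
PUBLISHED_HASH = "3c0ade0f0490dff240b5b4a97c522c14cfd1490a2d40b4dddc535e0bd238c6fb"

def commitment(section_text: str) -> str:
    """SHA-256 hex digest of the committed text (here assumed to be UTF-8 encoded)."""
    return hashlib.sha256(section_text.encode("utf-8")).hexdigest()

# section_text would be the exact original first section of the post;
# anyone with a saved copy can verify it like this:
# assert commitment(section_text) == PUBLISHED_HASH
```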

I've emailed these hopefully-final terms to participants and no one has objected so far. If there are any last-minute revisions, then we will hopefully have time to get things in order prior to January 15. I will tweet the updated SHA-256 hash at that time.

Comments



Looks like Tim Telleen-Lawton won, as the first ten digits of the beacon at noon PST were 0CF7565C0F=55689239567. Congratulations to Tim, and to all of the early adopters.

I owe Michael Nielsen $60k to donate as he pleases if [beacon.nist.gov](beacon.nist.gov/home) is between 0000000000... and 028F5C28F5... at noon PST on 2017/4/2.

Are further results out yet? (e.g., where Tim Telleen-Lawton donated, or whether Michael Nielsen got the $60K)

I’d like to give a quick update on my plans for the 2016 Donation Lottery winnings.

Of the $45,650, I’ve decided to give $21,000 to the Czech Association for Effective Altruism so they can hire one full-time staff member (or equivalent) for one year to manage the organization. I have not yet transferred that money, nor decided how to allocate the other $24,650.

I decided to support the Czech Association for Effective Altruism because I am impressed with their ability to execute difficult projects, I believe their projects have the potential to make a large positive impact (including via the impact on the chapter members executing them), I believe they will be able to execute substantially more and higher-quality projects with employed leadership than without one, and I believe funding is the limiting factor for the chapter hiring leadership staff.

I became aware of the Czech Association for Effective Altruism (The Chapter) when they hosted 2 CFAR workshops near Prague in October 2017; CFAR hired me to be one of a handful of instructors for those workshops. Some observations and beliefs from spending time with a few of the leaders from the chapter:

  • The Chapter successfully caused there to be CFAR workshops in Europe in 2017 that wouldn’t have happened otherwise. The demand for the workshops was high enough to justify two workshops in rapid succession. Hosting these workshops was one of a few major priorities for The Chapter in 2017.
  • The Chapter handled virtually all of the operations for the two workshops (~10 staff and ~30 participants each workshop), including finding a venue with relatively narrow specifications and providing lodging, food, local transportation, supplies, and instructor support. While there were some hiccups in the operations, it generally went very well, and better than I (and most CFAR staff with whom I discussed it) had expected from a first-time crew. At least one CFAR instructor believed that the operations at the Prague workshops were even better than they are for the typical CFAR workshop in the Bay Area, where they are generally managed by a CFAR employee with support from volunteers.
  • The leaders of The Chapter seem to be observant, thoughtful, self-critical, and dedicated. These attributes make me much more confident that they will be successful, particularly for their ability to observe problems and make adjustments accordingly over time.
  • The Chapter seems less well connected to the global EA movement and to possible funders than other equivalently talented EAs with whom I’m familiar. I also expect that the global movement would benefit from The Chapter being more influential within it.

Some expectations related to the donation:

  • Much of the success of The Chapter in 2017 seems to be attributable to having a Director who was working approximately full-time on the chapter (despite very little compensation). The past Director recently left to acquire a paid full-time job, and I expect The Chapter’s effectiveness to drop substantially if they are not able to hire replacement leadership.
  • The Chapter believes that the staff they hire with this donation will be able to lead fundraising efforts to support their own salary and the rest of The Chapter budget for future years.
  • I intend to make this donation only if I can do so legally. The donation process may involve donating the money to another non-profit (with 501(c)(3) tax-advantaged status) that would in turn consider supporting The Chapter. If not all of the money is passed on to The Chapter, that will reduce the efficiency of the donation. I hope for The Chapter to receive about $20k, since that is what they estimate they need to hire leadership for one year (and they believe other donations can cover their other budgetary needs). I expect I will need to allocate about $21k in order for The Chapter to likely receive $20k.

I’m planning to post audio of my last interview with The Chapter, as well as budgetary and strategic information that The Chapter has shared with me.

Edits: inserted the organization's official name, "Czech Association for Effective Altruism", and corrected bullet formatting.

I'd be interested in learning your general thought process, though probably you should only answer these after you've allocated the entire lottery amount, and only if you feel that it makes sense to answer publicly.

  1. How much time would you say that you invested in determining where to give?
  2. How many advisors did you turn to in order to help think through these decisions? In retrospect, do you think that you took advice from too many different people, not enough, or just the right amount?
  3. Was The Chapter among the first potential causes you thought of?
  4. How many different organizations did you seriously consider? Of these, how many reached the stage where you interviewed them?

The Chapter sounds like an excellent giving opportunity for a gift of this size, since it directly pays for a position they need in order to maintain their current level of effectiveness. I'm glad to know that my portion of the donor lottery funds is being used in such a positive manner.

> I'm glad to know that my portion of the donor lottery funds is being used in such a positive manner.

I would add, though, that participation doesn't affect the expected payout to any player's recommendations (and in the CEA lottery setup, it doesn't affect the pot size or draw probability). I.e. if other donor lottery players planned to donate their funds to something completely useless, that doesn't make any difference for you (unless hearing that they had made that donation outside the lottery context would have changed your own charity pick).

Update

  • I've posted the audio from my last interview with The Chapter here.
  • I received this update from The Chapter on January 10th:
    • we are in the process of signing an agreement with [an organization that has agreed to transfer the funds], which should be finished in about a week; then they can accept the transfer (attached)
    • they have a somewhat nonsensical fee structure: they want 8% of transfers under 25k USD, and 5% over 25k. I asked for a discount, but they decided against it, so it would actually be cheaper if you send us 25 thousand USD, pay the 1250 fee, and we send you 5120 back (or we can re-send it anywhere you want)
    • we were able to secure additional funding for the $3000 gap in the "organization" budget, so the organization budget is covered now
    • our [draft-for-comments] plan for 2018 is here: https://docs.google.com/document/d/1NqRum6-kAyl7bUUxjkX-IzVFpZnujy05MbjrUuuVIwg/edit?usp=sharing
    • at least temporarily we split the work into two positions - something like "strategy director" (strategy, public communication, fundraising) and "community director" (coordination of volunteers, meetups, member onboarding, etc.), with a time allocation of roughly 15h and 30h/week, respectively; I'm doing the strategy work, while Kristina Nemcova is doing the community part. The timeframe for finding a permanent director is 3 months, at which point Kristina is leaving to go abroad
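
To see why the larger transfer is cheaper under the fee structure quoted above (8% on transfers under $25k, 5% otherwise), here is a rough illustrative sketch; it is simple arithmetic on the quoted rates and may not match The Chapter's own accounting exactly.

```python
def amount_delivered(transfer_usd: float) -> float:
    """Net amount after the intermediary's fee: 8% under $25k, 5% at $25k or above
    (the treatment of exactly $25k follows the quoted example: a $1,250 fee)."""
    fee_rate = 0.08 if transfer_usd < 25_000 else 0.05
    return transfer_usd * (1 - fee_rate)

print(amount_delivered(21_000))  # ~19,320 delivered at the 8% rate
print(amount_delivered(25_000))  # ~23,750 delivered at the 5% rate
# To net ~$20k at the 8% rate you would need to transfer about 20000 / 0.92 ≈ 21,739,
# whereas sending $25k costs a flat $1,250 fee and the surplus can be sent back.
```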

Thanks for the update!

> The past Director recently left to acquire a paid full-time job, and I expect The Chapter’s effectiveness to drop substantially if they are not able to hire replacement leadership.

Do you know if the chapter is planning to hire back the outgoing director, or hire a different replacement director?

They are not planning to hire the outgoing director as that person has already started a new job in a different city.
