
Three levels of detail: 

  1. Snapshot: 1 minute read (below)
  2. Outline: 10 minute read (below)
  3. Detailed Write-up: 1 hour read (separate page, for reference)

I. SNAPSHOT

  • A projects platform called EA Impact CoLabs has just been released as a 1.0 beta version. We are looking for EAs with projects, and for EA volunteers, to add to or join the growing database: visit our website to join now!
  • A projects platform such as EA Impact CoLabs is a website where people with free time to devote to non-career-related projects can find interesting projects to volunteer for, or can express a general interest in volunteering and be connected by our coordinators with projects that match their skills and interests. A projects platform also allows people with an existing project or a project idea to post it on the platform to attract potential collaborators and volunteers.
  • We believe a successful EA-aligned (but maybe not EA-branded) projects platform could lead to significantly more good being done, via both direct and indirect means. Many current project leaders cannot easily find volunteers or collaborators, and even if only a small fraction of projects are highly impactful or evolve into nonprofits, those few could justify the effort required to build and maintain a platform.
  • An improved 2.0 website that builds on an existing platform (HelpWithCovid) will be our next phase of development. We are looking for feedback on the concept in general and on our plans in particular, as well as for volunteers to join us and help build our current and future solutions: fill out this form or email us at info@impactcolabs.com to join our group!

II. OUTLINE

THE IMPORTANT PART

A group of volunteers has been working on developing a projects platform for EA, currently titled EA Impact CoLabs, and we would like to announce the release of our 1.0 beta version and request that the EA community begin to join as volunteers and/or post projects in need of assistance. Go to www.impactcolabs.com and join today! 

We would love feedback on this version, including general thoughts and reports of any bugs or issues, either in the comments below or at our email: info@impactcolabs.com

We also have plans for a 2.0 version, which we intend to build off an existing open-source solution (HelpWithCovid, which apparently originated from a discussion “between Dustin Moskovitz & Sam Altman”), and we need more volunteers ourselves. We anticipate this new version will have significantly more functionality (see “Strategies for Achieving Objectives” below), an improved user experience and, ultimately, a higher chance of long-term success. Although we have some particular needs (e.g. Ruby developers), we welcome anyone who is willing to help. If you are interested, please fill out this form or email us at info@impactcolabs.com, ideally (but not necessarily) with a brief summary of your skillset or other background info.

Finally, if you have any questions not addressed in this post, or if you disagree with the concept of a projects platform in principle, please see the Detailed Write-up. It contains a more nuanced, longform exploration of how a projects platform would fit within and benefit the EA community as a whole, an analysis of the non-EA solutions currently trying to solve this problem in the market, and an in-depth review of each failure mode or challenge along with potential mitigating strategies.

Below you will find an outline that provides a summarized justification of the platform, an explanation of how we plan to achieve our objectives (including identified risks and challenges) and a final reiteration of our suggested next steps. As mentioned, if you have questions or concerns we suggest reviewing the more detailed write-up, but overall we certainly welcome any and all feedback in the comments below.

JUSTIFICATION OF EA IMPACT COLABS (“Impact CoLabs”)

Significant Problems to be Addressed by Impact CoLabs

  1. There are currently significant inefficiencies within the EA community in the following areas:
    1. How project leaders and existing organizations find and attract potential volunteers (see IV.2. and IV.10)
    2. How volunteers find volunteering opportunities (see IV.2. and IV.10)
    3. How project leaders find and collaborate with others working on the same problem, or find how similar projects have failed (the “coordination problem”) (see IV.2. and IV.10.)
    4. How project leaders provide updates on their project to the wider community (see IV.9.)
  2. There are also the following challenges:
    1. The EA community cannot easily oversee or stay up-to-speed on which projects are being founded and run, and by whom (see IV.9.)
    2. EA leadership cannot screen or vet projects that have significant potential downsides (see IV.9.)
    3. Potential project founders cannot easily find a centralized list of tractable problems that need solving or a list of ideas for potential projects (see IV.9.)
    4. Project leaders are not being supported with guidance and resources to maximize their chance of success (see IV.11)
  3. There is existing demand for a projects platform, which has been expressed multiple times on the EA Forum (see IV.10.)
  4. The current solutions within and outside the EA community are fantastic initiatives but they each have some drawbacks when considering the community as a whole:
    1. EA Work Club is not very active and is focused on job opportunities
    2. The Effective Altruism Volunteer Facebook group is not widely publicized/used, and past opportunities are not easily visible
    3. EA Forum’s community projects tag is not filterable and is inconsistent
    4. Charity Entrepreneurship efforts are focused on their nonprofits
    5. Animal Advocacy Careers' skilled volunteering section is focused on one cause area and on skilled volunteers.
    6. The 22+ non-EA project/volunteer matchmaking websites are not optimized for doing good (see Section V: Non-EA Solutions)

Potential Positive Impacts of Impact CoLabs

  1. Impact CoLabs could increase the number of projects being founded (see Figure 1 below) and increase the success rate of those projects, which would in turn lead to:
    1. More successful projects, which do more direct good in the world (see IV.1.)
    2. More projects that become incorporated nonprofit startups that do direct good in the world (see IV.1. and IV.4.) (see Figure 3 below)
    3. More skill-building by those involved, which can be applied to other projects or jobs that do direct good in the world. (see IV.5)
    4. More risk-taking and entrepreneurship in the EA community as a whole that can have an indirect impact on good being done in the world (see IV.6.)
    5. More EAs collaborating more, which increases engagement and retention within EA (see IV.6.)(see Figure 2 below)
    6. More successful projects, which demonstrate the value of the EA movement/philosophy to enact good in the world, thus increasing its reputation and potential impact (see IV.1.) (see Figure 2 below)
  2. Impact CoLabs could allow more volunteers to find projects, which would:
    1. Increase the success rate of projects, which do more direct good in the world (see IV.1.)
    2. Increase engagement in an EA-aligned activity by people who are primarily looking to do good by donating their non-work time, increasing top-of-the-funnel EA growth (see IV.7.)
  3. Impact CoLabs could decrease the likelihood of potentially hazardous projects being founded, or good projects being run by the wrong leaders, through vetting, which would in turn lead to:
    1. Fewer direct negative effects from EA Projects (see IV.9.)
    2. Reduced likelihood of reputational harm to EA due to harmful EA Projects (see IV.9.)
  4. Impact CoLabs could eventually increase the quality of any non-EA projects posted on the platform due to better guidance and required sections or constraints within the platform (see IV.1.)
  5. Impact CoLabs could eventually enable projects and funders to find each other and communicate more efficiently (see IV.8.)

HOW TO ACHIEVE THE OBJECTIVES OF A PROJECTS PLATFORM

Primary, Quantitative, High Level Objectives

  1. Increase the total number of successful matches between project leaders and volunteers
  2. Increase the number of (EA) projects founded (see Figure 1 below)
  3. Increase the quality of (EA and non-EA) projects founded
  4. Increase the success rate of (EA) projects
  5. Decrease the number of harmful (EA and non-EA) projects
  6. Decrease the wasted effort of different people unknowingly working on the same problem/project
  7. Increase the success rate and decrease the effort of project leaders or organizations finding volunteers.
  8. Increase the success rate for and decrease the effort of volunteers finding projects or organizations that need volunteers.
  9. Decrease the effort of EA leadership in tracking and vetoing harmful or reputationally-harmful projects
  10. Increase the success rate of project volunteers getting hired by and contributing to EA organizations compared to comparable candidates/employees that did not work on projects found through Impact CoLabs.

Strategies for Achieving Objectives (in order of probable priority)

  1. Provide project founders with:
    1. A project page as a place to explain/justify their project, specify what help is needed, recruit volunteers and provide updates
      1. Include questions that help analyze the project's expected impact, such as an impact analysis, importance/tractability/neglectedness, counterfactual analysis, etc.
    2. A page listing specific, tractable problems that need solving or a list of ideas by others who do not have the bandwidth, inclination or skill to execute on those ideas
    3. A resources page for project founders to learn how to lead projects, attract volunteers, find the right expertise/counseling and perhaps find funding or established partners willing to collaborate
    4. A searchable, filterable database of other ongoing and failed projects to ensure they are not duplicating or wasting effort and to learn from the failed projects.
  2. Provide volunteers with:
    1. An easily searchable, filterable database to find projects that fit their cause area interests, skillset and desired time investment, and/or that seem like a good skill-building opportunity (see the illustrative sketch after this list)
    2. A profile that can be always active allowing project leaders to find and reach out to them if they have the in-demand skillset
  3. Screen/Vet potentially harmful projects and highlight/promote potentially high-impact projects (see tentative Project Vetting Guidelines)
  4. Either avoid branding or associating Impact CoLabs with EA, to mitigate reputational risks to EA and to potentially help attract non-EA volunteers in the future, OR brand it with EA so that EAs trust and invest their time in the platform and so that it helps promote EA and its principles (we are currently assuming the latter, but this might change when we launch our 2.0 version)
  5. Build a community by:
    1. Hosting “matching events” or speed-dating type meetups at EAGs or online.
    2. Helping organize volunteer rings around certain skill sets, cause areas, geographies or time zones.
    3. Collecting, synthesizing and sharing lessons learned, project ideas, past failed projects, and tractable problems that need solving
    4. Hosting a slack/discord server for EA project leaders and volunteers
    5. Hosting regular hack-a-thons, competitions and/or challenges around specific problem areas
  6. Partner with external organizations to help with or provide:
    1. Vetting and screening of projects (e.g. EA organizations)
    2. Cost-benefit analyses on projects (e.g. charity evaluators)
    3. Guidance or training on how to successfully run projects (e.g. experienced nonprofit entrepreneurs or project leaders, foundries or incubators)
    4. Free services or resources for projects (e.g. legal firms, marketing firms, AWS, Google Ads)
    5. Funding sources or opportunities (foundations or funds)
    6. A steady stream of volunteers or project ideas (e.g. coding bootcamps, top universities, nonprofit foundries or incubators)
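To make the searchable database and matching described above more concrete, here is a minimal illustrative sketch in Python (purely for explanation; the planned 2.0 stack is Ruby on Rails, and the field names below are assumptions, not the actual Impact CoLabs data model) of how projects could be filtered and ranked against a volunteer's cause areas, skills and available time:

```python
from dataclasses import dataclass

@dataclass
class Project:
    title: str
    cause_area: str
    skills_needed: set[str]
    hours_per_week: int  # rough weekly time commitment asked of a volunteer (assumed field)

@dataclass
class Volunteer:
    name: str
    cause_areas: set[str]
    skills: set[str]
    max_hours_per_week: int

def matching_projects(volunteer: Volunteer, projects: list[Project]) -> list[Project]:
    """Return projects that fit the volunteer's cause areas, skills and availability,
    ranked by how many of the needed skills the volunteer covers."""
    def fits(p: Project) -> bool:
        return (p.cause_area in volunteer.cause_areas
                and p.hours_per_week <= volunteer.max_hours_per_week
                and bool(p.skills_needed & volunteer.skills))

    return sorted((p for p in projects if fits(p)),
                  key=lambda p: len(p.skills_needed & volunteer.skills),
                  reverse=True)

# Example usage with made-up data
projects = [
    Project("Forum wiki cleanup", "meta", {"writing", "editing"}, 3),
    Project("Welfare data pipeline", "animal welfare", {"python", "sql"}, 6),
]
volunteer = Volunteer("Sam", {"meta", "animal welfare"}, {"python", "writing"}, 5)
for p in matching_projects(volunteer, projects):
    print(p.title)  # only "Forum wiki cleanup" fits Sam's 5 hours/week
```

In practice this kind of filtering would live in the platform's database queries rather than in application code, and coordinators would still review matches manually; the sketch only shows the shape of the data and the matching logic involved.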

Risks and Challenges 

  1. More projects might not lead to more good in the world (see VI.1.)
    1. Projects and volunteer effort result in a net-negative impact, either because most projects have a negative impact, or due to the creation of extremely harmful projects that sink an otherwise net-positive impact (see VI.1)
    2. EA projects and volunteer effort result in a net positive impact, but the increase and/or distraction of non-EA projects and volunteers drags the overall impact into the negative (see VI.1. and VI.12.)
    3. Even though the effort expended might have resulted in a net positive outcome, the time spent might have been better spent on other activities (i.e. opportunity cost is too high) (see VI.1. and VI.7.)
    4. Projects might not lead to incorporated nonprofit startups, the scenario with the highest upside potential for Impact CoLabs (see VI.3. and VI.4.)
  2. There won’t be enough projects or volunteers to sustain a platform on a continual basis (see VI.2.)
    1. Developing good project ideas is hard and becomes the bottleneck (see VI.3)
    2. Volunteering opportunities do not scale to meet the supply of volunteers (see VI.2. and VI.6.)
  3. Increasing the ease with which an individual can post and gain traction on a harmful idea or project outweighs any positive impacts (i.e. the unilateralist’s curse) (see VI.1. and VI.8.)
    1. Might increase the risk of information hazards (see VI.9.)
  4. Failure might cause reputational harm
    1. Failed projects (or a failed projects platform) might cause reputational harm to the project leaders, the volunteers, to Impact CoLabs, to the concept of a projects platform and/or to EA. (see VI.10.)
    2. Failed projects might put off volunteers, project leaders and EAs from working on projects or from participating in EA initiatives (see VI.5.)
  5. It might be impossible or too resource intensive to screen/vet projects (see VI.11.)
  6. Impact CoLabs might cause value drift in the community, either by promoting risk-taking with small, low-stakes projects, or by attracting new EAs who bring different values (see VI.13. and VI.14.)
  7. Any of the above negative effects could become permanent or long-term, causing them to become locked-in negative effects. (see VI.15.)

NEXT STEPS (+ the ‘ask’)

  1. Currently our Beta version is live for testing and we are asking people to join so we can begin collecting and matching projects and volunteers, and to test the current functionality
    1. Please visit EA IMPACT COLABS to volunteer or post a project! Send bugs/issues to info@impactcolabs.com.
  2. We are asking the EA community to provide feedback and/or suggestions on the concept of a projects platform in principle (see detailed write-up) and our current approach in particular.
    1. Please comment below or email info@impactcolabs.com
  3. We are planning a new, more capable 2.0 platform building off the open-source solution HelpWithCovid. Please visit our detailed Project Plan for more information.  If you are interested in volunteering to help build this platform:
    1. Please fill out this form or email info@impactcolabs.com! We are looking for the following:
      1. Anyone who is interested in helping!
      2. Graphic designer(s)
      3. UX/UI designer(s)
      4. Front-end and back-end developers (particularly Ruby)
      5. Marketing/outreach volunteers
      6. Community engagement/management volunteers
      7. Project vetting volunteers

 

Figure 1.

Figure 2. 

Figure 3. 

 

ACKNOWLEDGEMENTS

EA Impact CoLabs consists of a dedicated team who have developed the project to date (I mainly just wrote it up) and were instrumental in the development of this post, including (alphabetically) Edo Arad, Yonatan Cale, Tomer Eldor, Charles Escalante, Gidon Kadosh, Jatin Kansal, Naomi Nederlof, and Sam Nolan.  In addition, many others have provided critical input on the project and/or on this post including Catherine Low, Aaron Gertler, Vaidehi Agarwalla, Victor Yunenko, Aman Patel, Tinnei Pang, Radu Spineanu, and Harry K Ng.


 

COMMENTS

It's good to see a new enthusiastic team working on this! My impression, based on working on the problem ~2 years ago, is that this has a good chance of providing value in global health and poverty, animal suffering, or parts of the meta cause areas; in the case of x-risk focused projects, something like a 'projects platform' seems almost purely bottlenecked by vetting. In the current proposal this seems to mostly depend on the "Evaluation Commission"; as a result, the most important part for x-risk projects seems to be the judgement of the members of this commission and/or its ability to seek external vetting.

Thanks Jan! Yes, we even reference your post in our detailed write-up and agree that vetting will be critical and a bottleneck to maximum positive impact, particularly related to x-risk. Currently we have implemented a plan that we believe is manageable exclusively by a small group of volunteers, and have included a step in the process that involves CEA's Community Health team. Having said that, we don't think that is an ideal stopping point; we hope to expand into other forms of vetting depending on general interest in the project, vetting volunteer interest, and the building of other functionality or the establishment of partnerships with outside orgs. You can read more in sections IV.9 and VI.11 of the write-up about our thinking on these topics. Lastly, given your fantastic analysis in the past, if you would like to help out we would welcome any new team members who are interested in or familiar with this metaproject -- you can email info@impactcolabs.com anytime!

Just a quick update on this project (8 Oct 2023):

We decided to close the project in 2022 for two main reasons:

  1. We looked at engagement metrics and calculated how much throughput of volunteers and projects we would need to achieve a take-off trajectory on this metaproject, and concluded it would take a lot of active effort, partnerships and acceptance from the wider EA community. Given that this was a volunteer project, we thought the odds of this happening were low.
  2. CEA also communicated to us that they were considering including something similar in the Forum. They ended up deciding against it at the time (focusing more on individual profiles so founders could find each other), but in the meantime our volunteers became demotivated.

We also found many difficulties around project vetting, volunteer management and upkeep that made it a hard initiative to continue. Many of us still strongly believe more entrepreneurship within the EA community would be beneficial and that something like this metaproject would be very helpful. I am eager to discuss new metaprojects like this with anyone who wants to try to launch one, and to share further lessons learned.

This failure mode seemed similar in nature to this listed mistake on the CEA website. Specifically:

We think we should have taken on fewer new projects, set clearer expectations for them, and ended unsuccessful projects earlier.

Running this wide array of projects has sometimes resulted in a lack of organizational focus, poor execution, and a lack of follow-through. It also meant that we were staking a claim on projects that might otherwise have been taken on by other individuals or groups that could have done a better job than we were doing (for example, by funding good projects that we were slow to fund).

OTOH, it may not have caused harm in this case if 1) or other reasons were sufficient on their own to close the project without 2), or if this wasn't a project that others could have done better than CEA.

I'd like to discuss a similar "metaproject" I have in the works. Currently my goal for a "minimum viable product" is just the list, with volunteer matching added later if it works, but also including smaller "quick win" projects and immediate contributions that could be made. Would you be willing to share further and discuss lessons learned on this one? 

Feedback on this post: I did not understand what CoLabs was after reading the 1-minute snapshot; I had to read a ways into the outline and click through to the site to understand what the product is. I think you assume your target audience knows what a “projects platform” is without needing it defined, but it took me a while to understand what that meant.

Thank you for the feedback! You found a blind spot that most of us at Impact CoLabs, and those we asked to review this post, had: namely, that we all already had a concept in our minds of what a projects platform was. I have adjusted the snapshot to hopefully help explain the concept in general, but please let me know if this still doesn't address your issue.

This sounds like a really valuable project!

I’ve been thinking about helping to set up some sort of EA incubator ecosystem. My contribution could be to collect, organize, prioritize, and roadmap all the project ideas that are floating around. I’d apply some sort of process along the lines of that of Charity Entrepreneurship but with a much more longtermist focus. I’ve been envisioning this in the form of a wiki with a lot of stub articles for project ideas that didn’t pass the shallow review phase and a few comprehensive articles that compile (1) detailed thinking on robustness, importance, tractability, etc.; (2) notes from interviews with domain experts; (3) a roadmap for how the project might be realized; (4) descriptions of the sorts of skills and resources it will require; (5) talent, funding, and other buy-in that is maybe already interested; (6) a comment section for discussions. (Jan’s process could be part of this too.) Since this would take the format of a wiki, I could easily add other editors to contribute to it too. I wouldn’t make it fully publicly editable though. Ideally, there’d also be a forum post for each top project that is automatically updated when the wiki changes and whose comments are displayed on the wiki page too.

My main worry is that the final product will just collect dust until it is hopelessly outdated.

So I’ve been wondering whether there are maybe synergies here, e.g., along the lines where I do the above, and your platform can in the end reduce the risk that nothing ever comes of the top project ideas?

I’ve only spot-checked a few of your current projects, but it seems to me that they typically have project owners whereas my projects would typically start out with no one doing them and at max. vague buy-in of the sort “People X and Y are tentatively interested in funding such a project, and person Z has considered starting this project but is now working on something else because they couldn’t find a cofounder.” Do you think that would be a critical problem?

Hi Denis, thank you for your message and your offer to contribute; it is welcome. Since we are just starting out, we still haven't built all the capabilities we have envisioned. For example, and as mentioned above, we were planning a list of tractable problems and project ideas to guide potential project leaders, as well as a list of past/failed projects or lessons learned from projects to ensure the community as a whole is not just spinning its wheels (e.g. this metaproject has had similar iterations in the past...). But your idea for a wiki that not only provides problem areas and project ideas but also provides thought-through analyses, roadmaps, required skills lists, available resources and community input is a huge improvement over our current plan. So I don't think the issue of not having project leaders identified upfront would be a critical problem, as long as you're OK with your wiki being separate from the project database. Ideally, entrepreneurial EAs will find your project write-ups on Impact CoLabs and then create a project from them (or people who are screened out from the platform due to low-impact ideas can be directed to those pre-vetted ideas).

We definitely want the ultimate version of Impact CoLabs to be the central node for project creation, and we want the resources we provide to reflect that. The goal is to be a more high-volume, low-touch, top-of-the-funnel solution than incubators/accelerators like Charity Entrepreneurship or other upcoming startup factories. But even though we are not going to shepherd projects personally and diligently, that doesn't mean we can't try to provide as detailed and well-researched guidance as possible.

My only slight hesitation for your approach is the effort involved in development and upkeep: we would rather offer a lower-value solution (just a list of ideas) that we can guarantee can be maintained than a higher-value solution (a detailed wiki with required fields for each project idea) that has a large chance of being abandoned after a while. So it all depends on volunteer interest in contributing and/or how we set it up. Would love to chat about this more. If you want to take this offline, we would recommend filling out our new team member form so we can get you more background info on the project, or alternatively you can just email info@impactcolabs.com.

Hi Mats! That sounds splendid!

Meanwhile I’ve set up my wiki, started thinking about the structure of the template I’d like to use for the project pages, and have started reading up on your Google Docs. It’s impressive how thoroughly you’ve already evaluated your project concept!

My “idea foundry” project itself will have its own page in the wiki with more information on my future plans. That’ll make it easier to reflect on whether the whole thing is sustainable. I haven’t thought about it sufficiently myself. I’ll only publish individual pages once I have proofread them for possible info hazards and have gotten feedback from some trusted friends too.

… as well as a list of past/failed projects or lessons learned from projects

Yeah, and there are also a lot of ostensibly brilliant project ideas in various lists that I think are subtly deleterious. No one has attempted to realize them yet (at least the ones I vaguely recall and to my knowledge) but a project database with just a bit more detailed thinking may help to keep it that way. (Or else may inspire someone to come up with a way to realize the project in a way that avoids the subtly deleterious bits.)

… as long as you're OK with your wiki being separate from the project database

Totally. It feels like so far I’ve been wholly unconvinced by some 95+% of project ideas I’ve read about, so those should not end up on your platform. It would just be valuable – or essential – to be able to promote the top of the shortlist to potential founders.

My only slight hesitation for your approach is the effort involved in development and upkeep: we would rather offer a lower-value solution (just a list of ideas) that we can guarantee can be maintained than a higher-value solution (a detailed wiki with required fields for each project idea) that has a large chance of being abandoned after a while.

I’m worried about that too. I’d be willing to risk it, pending further thinking. An alleviating factor is that the detailed reviews would be reserved for a small shortlist of projects. Most of them would just get a quick stub summary and the reason why I didn’t prioritize them.

I’ve read that you’re perfectly open to (for-profit) social enterprises and of course early-stage projects in need of cofounders. But I see the term “volunteer” a lot in the materials. It has these particular associations with low commitment, low responsibility, no salary, nonprofits, etc. Is it the best synonym for the job? None of the alternatives I can think of is quite broad enough either – cofounder, collaborator, partner, talent, … – but I imagine that such word choices can influence what the platform will end up being used for. A platform for “cofounder matching” may end up being used for more high-value work than one for “volunteer matching,” maybe some sort of “Task Y” notwithstanding. But I’ve also heard that someone had the impression that cofounder matching is not a current bottleneck, which I found surprising.

I’ll get in touch through one of the channels you recommended.

Thank you for the kind words and the great feedback! You make a great point about 'volunteering'; we will discuss that internally. I'm generally in agreement with your comments but would love to explore some of the nuance! I look forward to hearing from you; if you reach out and don't hear back, please message me here to make sure we are being responsive.

Thank you Mats for posting this, this is exciting!

Hey everyone, just to add: we (Impact CoLabs) would be helped by volunteers in:
 

1. Matching people & projects, and managing user communication in general

2. Evaluating projects (vetting if they should be accepted or not)

3. Developers, especially in Ruby on Rails, to develop the better and more automated version 2.0.

If you're interested please let us know :) info@impactcolabs.com
