Since many of us around the world are in various stages of quarantine and all events are online, it might make sense to open up some of our planned local events to the wider EA public.

So here's an open thread for open events!


Weekly EA Icebreaker Socials (speed friending sessions)

This is an opportunity for EAs from different countries to connect with each other. These involve you getting paired up with different people to play games, answer interesting questions, or just get to know each other. You'll get to meet several new people each week.

Last weekend's international icebreaker event had great reviews.

We will host 3 sessions per week for different timezones:

EUROPE/AFRICA (every Thursday, 8-9:30 pm GMT): https://www.facebook.com/events/163517728167150/

AUSTRALASIA/ASIA (every Wednesday, 7-8:30 pm SGT): https://www.facebook.com/events/571492756800073/

AMERICAS (every Friday, 8-9:30 pm EDT): https://www.facebook.com/events/2973423259381560/

*All events are open to everyone regardless of timezone

Intro to Forecasting (EA San Francisco + Stanford EA discussion event)

Cross-posted from Facebook. (I will check both this thread and the FB event for comments.)

Date: 2020/04/29

Time: 19:00-21:00 (PDT)

Location: Zoom videoconferencing (we recommend the web portal)

You might have heard of forecasting by now. Many of the cool kids* are doing it, using fancy terms like "Brier score," "Metaculus," "log-odds," "calibration," and "modeling." You might have heard of superforecasters: savvy amateurs who make robustly better forecasts on geopolitical events than trained analysts at the CIA. What you might not have learned is that these skills are eminently trainable: in the original Good Judgment Project, researchers found that a short one-hour training course robustly improved accuracy over the course of a year!
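If some of those terms are unfamiliar, here is a minimal Python sketch of two of them (my own illustration, not part of the event materials; the forecasts and outcomes are invented): the Brier score, a standard accuracy score for probabilistic forecasts, and log-odds, an alternative scale for expressing probabilities.

```python
import math

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

def log_odds(p):
    """Probability -> log-odds; equal steps correspond to equal strength of evidence."""
    return math.log(p / (1 - p))

# A forecaster said 0.9, 0.7, 0.2 on three events; the first two happened.
print(brier_score([0.9, 0.7, 0.2], [1, 1, 0]))  # ~0.047 (saying 50% on everything scores 0.25)
print(log_odds(0.9))  # ~2.2, i.e. 9:1 odds on the log scale
```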

Next Wednesday, EA SF is collaborating with Stanford Effective Altruism to host an introductory event on forecasting. Together, we will practice (super)forecasting techniques: the skills and ways of thinking that allowed savvy amateurs to make better forecasts on geopolitical events than trained analysts at the CIA.

Background
https://en.wikipedia.org/wiki/The_Good_Judgment_Project
https://en.wikipedia.org/wiki/Superforecasting
http://www.academia.edu/download/37711009/2015_-_superforecasters.pdf
https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/

Here are some of the topics we'll try to cover and practice in small groups, time permitting (a short worked example follows the list):

Base Rates: Outside View vs. Inside View
Credence References: Thinking in Credible Intervals
Controlling for Scope: Consider the probability distribution across outcomes other than the one posed by the question, such as longer or shorter timeframes
Analytics: Fermi-izing; assessing signal vs. noise; controlling for biases and fallacies
Comment: Making your rationale explicit to prevent hindsight bias and to share information
Compare: Explain your reasoning, benefit from viewpoint diversity, and accelerate learning
Update: Revise your forecast as new information comes in or you change your view
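To give a flavor of the first and last items, here is a minimal sketch of combining an outside-view base rate with inside-view evidence via a Bayesian update (my own illustration with invented numbers, not taken from the workshop's discussion sheets):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given a prior and the two likelihoods."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Outside view: suppose ~20% of comparable projects have historically finished on time.
prior = 0.20
# Inside view: this team hit its first milestone early. Suppose (invented numbers)
# that happens in 60% of on-time projects but only 25% of late ones.
posterior = bayes_update(prior, p_evidence_if_true=0.60, p_evidence_if_false=0.25)
print(round(posterior, 3))  # 0.375 -- the evidence moves us, but the base rate still anchors us
```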

Structure: We'll meet together briefly to go over the details and then split into smaller groups of 3-6 members each, including a group leader. Each group will be given a discussion sheet that they can copy, and group leaders will be given an answer key.

We'll be using Zoom as our platform, as it allows the most seamless split into smaller groups. For security reasons, we recommend using the Zoom web portal over the Mac app.

We expect many of the attendees to be new to forecasting, but also several who are very experienced forecasters and/or otherwise quite plugged into the current state of the art in forecasting. Depending on who shows up, it might also make sense to have a Q&A in addition to the small group discussions.

As this is a virtual event, all are welcome. However, we only have limited small group leader capacity, so in the (very fortunate!) world where many more people show up than we expect, groups may be asked to nominate their own group leaders instead of having an appointed one with prior experience managing EA discussions.

Hope to see you there!

*and some of the uncool kids

COVID-19 Mistakes and How We Can Do Better (EA SF event)

FB event here

NOTE: Because David is based in Israel, we'll start this event at 11 AM on Sunday rather than at our usual Wednesday meeting time.

What did we get predictably wrong in the pandemic, and how? More importantly, what lessons can we learn that generalize to future biosecurity and pandemic prevention/relief efforts?

David Manheim, an EA biosecurity and policy researcher, superforecaster, and Future of Humanity Institute contract researcher, will discuss mistakes he and others have made in COVID-19 forecasting.

This will be EA: San Francisco's fourth speaker event.

Due to security issues with the Zoom app, we *strongly* recommend the Zoom web client.

As this will be online, I see little reason to restrict this to people living within the physical Bay Area. Feel free to invite friends from all over the world (in a compatible time zone) if they wish to attend.

Tentative schedule (PDT; UTC -7:00):
Talk: 11:00-11:30 AM (YouTube Live)
Q&A: 11:30-12:10 (Zoom)
Mingling: 12:10-

(Details of exact schedule TBD)

There is a calendar of events here.

Does anyone know how I can add my event "Option Value in Effective Altruism: Worldwide Online Meetup" (April 13, 16:00 UTC) to this calendar?


Copying Catherine's message from the Group Organizers Slack:

Option Value in Effective Altruism: Worldwide Online Meetup

April 13, at 16:00 UTC

https://www.facebook.com/events/574583239932514

What's Up With the Windfall Clause? (online EA SF event)

Cross-posted from Facebook. (I will check both this thread and the FB event for comments.)

Date: 2020/04/08

Time: 19:00-21:00 (PDT)

How can we ensure that the gains from Transformative AI collectively benefit humanity, rather than just the lucky few? How can we incentivize innovation in AI in a way that's broadly positive for humanity? Other than alignment, what are distributive issues with the status quo in AI profits? What are practical issues with different distributive mechanisms?

In this talk, Cullen O'Keefe, Research Scientist at OpenAI and Research Affiliate at FHI, will argue for the "windfall clause": in short, that companies should donate excess windfall profits from Transformative AI for the common good.

You may be interested in reading his paper summarizing the core ideas [1], or his AMA on the EA Forum [2].

This will be EA: San Francisco's inaugural online event (and only our second general event).

We're still looking into different technological options for the best way to host this talk, but please have the Zoom app downloaded and create a Zoom account.

As this will be online, I see little reason to restrict this to people living within the physical Bay Area. Feel free to invite friends from all over the world (in a compatible time zone) if they wish to attend.

Tentative schedule:
Talk: 7:00-7:25
Q&A: 7:25-8:10
Structured Mingling: 8:10-9:00.
(Details of exact schedule TBD)

For the sake of everyone's mental health, we are banning all discussions of The Disease Which Must Not Be Named.

[1] https://arxiv.org/pdf/1912.11595.pdf
[2] https://forum.effectivealtruism.org/posts/9cx8TrLEooaw49cAr/i-m-cullen-o-keefe-a-policy-researcher-at-openai-ama

The talk is finished! You can view the video here.

I was impressed by how high the turnout was: 34 concurrent viewers on the livestream, 100+ total views, and 29 people at the follow-up Q&A Zoom call.

This is happening on Facebook in an hour! Please check the FB event for more details.

We'd like to try using this forum to gather possible questions to ask Cullen. Please use this comment chain to ask and rank questions about the Windfall Clause!

(We will attempt, but cannot guarantee, to ask the highest-upvoted questions in this thread that weren't covered in the talk, along with some live questions!)

Cross-posted from Facebook, which I will likely be checking much more regularly.

Forecasting 102 (EA SF discussion event)

Our previous forecasting workshop in April was a smashing success! Many people* have said it was helpful, insightful, fun, etc. Can we repeat our success by having another great forecasting workshop? Only time (and a sufficiently large sample size) can tell!

Next Wednesday, EA SF is collaborating with Stanford Effective Altruism to host another event on forecasting. Together, we will practice forecasting techniques: the skills and ways of thinking that allow us to quantify our uncertainties and be slightly less confused about the future.

Some Background

https://en.wikipedia.org/wiki/The_Good_Judgment_Project

https://en.wikipedia.org/wiki/Superforecasting

http://www.academia.edu/download/37711009/2015_-_superforecasters.pdf

https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/

Here are some of the tentative topics we'll try to cover and practice in small groups, time permitting:

Question selection: How do we know what questions are the right ones to ask?

Question operationalization: How do we ask the right questions in the right way?

Intuition pumps & elicitation: How do we understand our intuitions in a way that’s accessible to our conscious thinking?

Information sources & internet research: How do we efficiently gather information in a time-constrained manner?

Distributional forecasting: How do we assign probabilities across a whole range of outcomes, rather than giving a single point estimate? (see the sketch after this list)

Technical tools: What tools are useful in aiding our forecasting?

As well as some general practice on calibration and making quantified predictions of the future!
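As a taste of the distributional-forecasting item above, here is a minimal sketch (my own illustration with invented numbers, not from the workshop materials; assumes SciPy is installed) that turns two elicited percentiles into a full lognormal distribution rather than a point estimate:

```python
import math
from scipy.stats import lognorm, norm

# Suppose you elicit: "I'm 80% sure the cost lands between $2M and $20M" --
# i.e. a 10th percentile of 2 and a 90th percentile of 20 (invented numbers).
q10, q90 = 2.0, 20.0

# For a lognormal, log(X) is normal; solve for that normal's mean and sd.
z10, z90 = norm.ppf(0.10), norm.ppf(0.90)
sigma = (math.log(q90) - math.log(q10)) / (z90 - z10)
mu = math.log(q10) - sigma * z10

dist = lognorm(s=sigma, scale=math.exp(mu))
print(dist.median())    # ~6.3: the geometric midpoint of the elicited range
print(dist.cdf(10.0))   # ~0.70: implied probability the cost comes in under $10M
```

A lognormal is a common default for nonnegative quantities that plausibly vary over orders of magnitude; other elicited quantiles would pin down other distribution families.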

Structure: We'll meet together briefly to go over the details and then split into smaller groups of 3-6 members each, including a group leader. Each group will be given a discussion sheet that they can copy, and group leaders will be given an answer key.

We'll be using Zoom as our platform, as it allows the most seamless split into smaller groups. For people with security concerns, we recommend using the Zoom web portal over the Mac/Windows app (I am uncertain of the quality of the app on Linux).

The assumed background is some passing familiarity with forecasting (e.g., having attended a prior forecasting workshop by EA San Francisco or others, having made some predictions on Metaculus, or having read Superforecasting), with some members having significantly more experience. However, everybody's welcome! If you have no prior exposure to forecasting, I recommend reading the AI Impacts summary of Superforecasting [1] and doing some Open Phil calibration exercises [2]. Depending on who shows up, it might also make sense to have a Q&A in addition to the small group discussions.
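For a sense of what those calibration exercises measure, here is a minimal sketch (invented practice data, not the Open Phil app's actual scoring code) that groups forecasts by stated confidence and checks how often each group came true:

```python
from collections import defaultdict

# (stated probability, whether it happened) -- invented practice data
forecasts = [(0.6, True), (0.6, False), (0.6, True),
             (0.8, True), (0.8, True), (0.8, False),
             (0.9, True), (0.9, True), (0.9, True)]

by_confidence = defaultdict(list)
for p, happened in forecasts:
    by_confidence[p].append(happened)

# Well-calibrated forecasters' stated confidence matches their hit rate.
for p in sorted(by_confidence):
    hits = by_confidence[p]
    print(f"said {p:.0%}: {sum(hits)}/{len(hits)} came true ({sum(hits) / len(hits):.0%})")
```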

As this is a virtual event, all are welcome. However, we only have limited small group leader capacity, so in the (very fortunate!) world where many more people show up than we expect, groups may be asked to nominate their own group leaders instead of having an appointed one with prior experience managing EA discussions.

Hope to see you there!

*n>=1, source unverified

[1] https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/

[2] https://www.openphilanthropy.org/blog/new-web-app-calibration-training
