Once (or twice) a year, Scott Alexander does a big push to organize ACX meetups all around the world. In my interviews with existing EA, Rationality, and ACX groups, this is often when they got off the ground, since for many groups it's the first time they reach critical mass. All you have to do is set a time and place, then show up to meet other people who also like reading Scott's blog.

You can volunteer to host a meetup in your city by filling out this form. The deadline is the end of the day on Tuesday (two days from now).

If you want more details, check out the ACX post announcing the meetups, or ask me questions here.

Comments



I am +1-ing this post because [anecdote, n=1] ACX Everywhere Meetups helped me find and recruit EA-focused individuals when I organized meetups in Houston, Texas, whom I then connected with an existing EA organizer there.

I'm hoping to increment that n=1 to n=2 here in Norfolk, Virginia, and support EA-focused individuals here too :) Come join us on 18 September 2022.
