
The Progress Open Thread is a place to share good news, big or small.

See this post for an explanation of why we have these threads.

What goes in a progress thread comment? 

Think of this as an org update thread for individuals. You might talk about...

  • Securing a new job, internship, grant, or scholarship
  • Starting or making progress on a personal project
  • Helping someone else get involved in EA
  • Making a donation you feel really excited about
  • Taking the Giving What We Can pledge or signing up for Try Giving
  • Writing something you liked outside the Forum (whether it's a paper you've submitted to a journal or just an insightful Facebook comment)
  • Any of the above happening to someone else, if you think they'd be happy for you to share the news
  • Other EA-related progress in the world (disease eradication, cage-free laws, cool new research papers, etc.)

Comments (14)



I won the Stevenson prize (a prize given out at my faculty) for my performance in the MPhil in Economics. I gather Amartya Sen won the same prize some 64 years ago, which I think is pretty cool.

Amartya Sen won the same prize 

No pressure.

 

Just kidding, congratulations!

Damn congrats!!! 

I'm currently #1 on the leaderboard of CSET's Foretell. Predicting that China would not add a U.S. company to its newly created Unreliable Entities List before the US elections, and that no private messages obtained in the July Twitter hack would be leaked to the public, just brought me from 2nd to 1st.

CSET's Foretell is "a crowd forecasting pilot project launched by Georgetown’s Center for Security and Emerging Technology that focuses on questions relevant to technology-security policy." CSET was launched with a large grant from OpenPhil.

Congrats, that's awesome!

Do you know the base rate of questions on Foretell that resolve "yes"?

Good question. Of the binary questions which I've predicted on and which have resolved, 0/10 (!). But 7 were about US-China relations, so you'd expect them to be somewhat correlated. Some of them also seemed like they could have happened, like Microsoft's acquisition of TikTok, or European countries restricting Huawei. So, overall, my gut tells me I'd expect about 10-20% to resolve positively in the future. Note that binary questions (as opposed to questions which ask about different ranges) are relatively scarce; of the 29 questions open right now, only 4 are binary.
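(A minimal, purely illustrative sketch, not anything the commenter actually used: one way to turn a 0-out-of-10 record into a rough base-rate estimate is Laplace's rule of succession, i.e. the posterior mean under a uniform Beta(1, 1) prior. The prior choice and the independence assumption are both simplifications, since several of the questions were correlated.)

```python
# Illustrative only: Laplace's rule of succession applied to 0 "yes"
# resolutions out of 10 resolved binary questions. Assumes independent
# questions and a uniform Beta(1, 1) prior, both simplifications here.

yes, resolved = 0, 10

# Posterior mean of the "yes" rate: (yes + 1) / (resolved + 2)
laplace_estimate = (yes + 1) / (resolved + 2)
print(f"Laplace estimate: {laplace_estimate:.1%}")  # ~8.3%, near the 10-20% gut estimate above
```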

Togo just became the first African country to have officially (according to the World Health Organization) eliminated sleeping sickness!

Myanmar has eliminated trachoma

In 2005, trachoma was responsible for 4% of all cases of blindness in Myanmar. By 2018, the prevalence of trachoma was down to a mere 0.008%, and it is no longer a public health problem there.

Meanwhile, Maldives and Sri Lanka have both eliminated rubella. This appears in the same article as the Myanmar news, because... 

...I guess eliminating diseases nationwide is so commonplace that we don't even need separate updates for each country/disease? That seems like a good thing.

I finished my degree! (BA in economics and philosophy). It ended up being quite a challenging final semester, mostly because of COVID and things going on in my personal life, so it's great to have it done.

I also won an award from my college for my performance (highest GPA), which was pretty cool.

Congratulations!

The world's first lab-grown meat restaurant opened in Israel: https://www.livekindly.co/first-lab-grown-meat-restaurant/
