EffectiveAdvocate🔸

205 karma · Joined

Bio

I created this account because I wanted a much lower bar for participating in the Forum, and I'm afraid of looking dumb if I don't do so pseudonymously.

 

I also feel like my job places some constraints on the things I can say in public.

Comments (44)

I also don’t think it’s a good use of time, which is why I’m asking the question.

However, I believe attending is worth significantly more than three hours. That’s why I’ve invested a lot of time in this previously, though I’d still prefer to allocate that time elsewhere if possible.

Edit: It’s very helpful to know that the acceptance rate is much higher than I had thought. That alone makes me feel like I can spend less time on this task this year.

Hi, I hope this is a good time to ask a question regarding the application process. Is it correct that it is possible to apply a second time after an initial application has been rejected?  

I understand that the bar for acceptance might be higher on a second attempt. However, I feel this would allow me to save considerable time on the application process. Since I was accepted last year and a few times before, I might be able to reuse an old application with minimal editing. This could help me—and potentially many others—avoid spending three or more hours crafting an entirely new application from scratch.  

Looking forward to your response! 😊 

Does anyone have thoughts on whether it’s still worthwhile to attend EAGxVirtual in this case?

I have been considering applying for EAGxVirtual, and I wanted to quickly share two reasons why I haven't:

  • I would only be able to attend on Sunday afternoon CET, and it seems like it might be a waste to apply if I'm only available for that time slot, as this is something I would never do for an in-person conference.
  • I can't find the schedule anywhere. You probably only have access to it if you are on Swapcard, but this makes it difficult to decide ahead of time whether it is worth attending, especially if I can only attend a small portion of the conference.

Hi Lauren!

Thank you for another excellent post! I’m becoming a big fan of the Substack and have been recommending it.

A quick question that you may have come across in the literature, though I didn’t see it addressed in your article: not all peacekeeping missions are UN missions; there are also missions from ECOWAS, the AU, the EU, and NATO.

Is the data you presented exclusively true for UN missions, or does it apply to other peacekeeping operations as well?

I’d be curious to know, since those institutions seem more flexible and less entangled in geopolitical conflicts than the UN. However, I can imagine they may not be perceived as being as neutral as the UN and may therefore be less effective.

Could you say a bit more about your uncertainty regarding this?  
After reading this, it sounds to me like shifting some government spending toward peacekeeping would be money much better spent than on many other priorities.

Or do you mean it more from an outsider/activist perspective—that the work of running an organization focused on convincing policymakers to do this would be very costly and might make it much less effective than other interventions? 

Thank you for the response! I should have been a bit clearer: that is what inspired me to write this, but it still takes 3-5 sentences to explain to a policymaker what they are looking at when shown this kind of calibration graph. I am looking for something even shorter than that.
 

Simple Forecasting Metrics?
I've been thinking about how simple some forecasting concepts are to explain compared with others. Take calibration, for instance: it's easy to explain. If someone says something is 80% likely, it should happen about 80% of the time. But other metrics, like the Brier score, are harder to convey: what exactly does it measure? How well does it reflect a forecaster's accuracy? How do you interpret it? All of this requires a lot of explanation for anyone not interested in the science of forecasting.
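To make the contrast concrete, here is a rough Python sketch with made-up numbers (nothing from a real platform): the Brier score is just the mean squared gap between the forecast probabilities and the 0/1 outcomes, while a calibration check simply asks how often the "80% likely" events actually happened.

```python
# Made-up example data: forecast probabilities and 0/1 outcomes.
forecasts = [0.8, 0.8, 0.8, 0.8, 0.8, 0.2, 0.2, 0.2, 0.6, 0.6]
outcomes  = [1,   1,   1,   0,   1,   0,   0,   1,   1,   0]

# Brier score: mean squared difference between forecast and outcome (lower is better).
brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration check for the 80% bucket: how often did "80% likely" events happen?
eighty = [o for f, o in zip(forecasts, outcomes) if f == 0.8]
print(f"'80%' forecasts resolved yes {sum(eighty) / len(eighty):.0%} of the time")
```

The second number is immediately meaningful to a newcomer; the first one isn't without further explanation.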

What if we had an easily interpretable metric that could tell you, at a glance, whether a forecaster is accurate? A metric so simple that it could fit within a tweet or catch the attention of someone skimming a report—someone who might be interested in platforms like Metaculus. Imagine if we could say, "When Metaculus predicts something with 80% certainty, it happens between X and Y% of the time," or "On average, Metaculus forecasts are off by X%". This kind of clarity could make comparing forecasting sources and platforms far easier. 
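As a sketch of how such a summary could be computed (again with hypothetical numbers, not real Metaculus data), you could bucket forecasts by their stated probability, report the observed frequency per bucket, and boil it down to a single average gap:

```python
from collections import defaultdict

# Hypothetical (forecast probability, outcome) pairs -- not real Metaculus data.
data = [(0.8, 1), (0.8, 1), (0.8, 0), (0.9, 1), (0.9, 1), (0.6, 0), (0.6, 1)]

buckets = defaultdict(list)
for prob, outcome in data:
    buckets[prob].append(outcome)

# Per-bucket summary: "When the forecast is 80%, it happens N% of the time."
for prob in sorted(buckets):
    freq = sum(buckets[prob]) / len(buckets[prob])
    print(f"When the forecast is {prob:.0%}, it happens {freq:.0%} of the time")

# One-number summary: average gap between stated probability and observed frequency.
gaps = [abs(prob - sum(outs) / len(outs)) for prob, outs in buckets.items()]
print(f"On average, forecasts are off by {sum(gaps) / len(gaps):.0%}")
```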

I'm curious whether anyone has explored creating such a concise metric—one that simplifies these ideas for newcomers while still being informative. It could be a valuable way to persuade others to trust and use forecasting platforms or prediction markets as reliable sources. I'm interested in hearing any thoughts or seeing any work that has been done in this direction.

Hi there!

I really enjoy the curated EA Forum podcast and appreciate all the effort that goes into it. However, I wanted to flag a small issue: my podcast app cannot handle emojis in filenames, and with the increasing use of the "🔸" in Forum usernames, this has been causing some problems.

Would it be possible to remove emojis from the filenames?
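In case it helps, I imagine something as simple as the following rough Python sketch (with a made-up filename) would cover it, since the 🔸 and most other emoji sit outside the Basic Multilingual Plane:

```python
import re

def strip_emoji(filename: str) -> str:
    """Drop characters outside the Basic Multilingual Plane, which covers most emoji."""
    return re.sub(r"[\U00010000-\U0010FFFF]", "", filename)

# Made-up example filename, not an actual episode title.
print(strip_emoji("EffectiveAdvocate🔸 - Some Post Title.mp3"))
# -> "EffectiveAdvocate - Some Post Title.mp3"
```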

Thanks for considering this!

This is a very non-EA opinion, but personally I quite like this on, for lack of a better word, aesthetic grounds: charities should be accountable to someone, in the same way that companies are to shareholders and politicians are to electorates. Membership models are a good way of achieving that. I am a little sad that my local EA group is not organized in the same way.
