If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)

If you're new to the EA Forum, you can use this thread to introduce yourself! You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)

If you're new to effective altruism, consider checking out the Motivation Series (a collection of classic articles on EA). You can also learn more about how the Forum works on this page.


Hello everyone,

I am Srishti Goyal from New Delhi, India. I have been working as a researcher in the social development space since completing my postgraduate degree in Economics. In the coming years, I intend to undertake a Ph.D. in behavioral economics, and I would like to connect with people who are interested in or working in this space. Beyond that, my interests are varied and include international affairs, political affairs, and climate change, in addition to social development (education, health, child protection, among others) and behavioral development.

I would like to thank Silvana Hultsch for introducing me to effective altruism and this forum. It turns out that my ideology is in sync with EA; I was just not aware of the term for it. 

I look forward to learning from you! :)

Regards,

Srishti

Anyone else find it weird that we can strongly upvote our own comments and posts? It doesn’t seem to do anything except promote the content of certain people who are happy to upvote themselves, at the expense of those who aren’t.

EDIT: I strongly upvoted this comment

Yeah, this has been discussed before. I think that it should not be possible to strongly upvote one's own comments.

Relatedly, should we have a strong dispreference for upvoting (especially strong-upvoting) people who work in the same org as us, or whom we otherwise may have a non-academic interest in promoting*? Deliberately soliciting upvotes on the Forum is clearly verboten, yet in practice I know that I'm much more likely to read work by somebody else if I have a prior relationship with them**, and since I only upvote posts I've read, this means that I'm disproportionately likely to upvote posts by people I work with, which seems bad.

On the flip side, I guess you can argue that any realistic pattern of non-random upvoting is a mild conflict of interest. For example, I'm more likely to read forecasting posts on the Forum, and I'm much more likely to upvote (and I rarely downvote) posts about forecasting. This in turn has a very small effect of raising awareness/attention/prestige of forecasting within EA, which has a very small but nonzero probability of having material consequences for me later.

So broadly, there's a spectrum of actions, ranging from "upvoting things you find interesting may lead to the movement being more interested in things you find interesting, which in turn may have a positive effect on your future material circumstances" all the way up to "full astroturfing."

A possible solution to this is for people to reflect on how they came across the article and chose to read it. If the honest answer is "I'm unlikely to have read this article if not for a prior connection with the author," then opt against upvoting it***.

It's also possible I'm overthinking this, and other people don't think this is a problem in practice.

*(e.g. funders/fundees, mentors/mentees, members of the same cohort, current project partners, romantic relationships, etc.)

**I haven't surveyed others so I don't know if this reading pattern is unusual. I will be slightly surprised if it is though.

***or flip a coin, biased towards your counterfactual probability of reading the article without that prior connection.
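(To make the coin-flip idea concrete, here's a minimal sketch of such a decision rule; the function name, probability value, and the upvote step are hypothetical, purely for illustration.)

```typescript
// Hypothetical helper (illustrative only): upvote with probability equal to
// your estimated counterfactual chance of having read the post anyway.
function shouldUpvote(counterfactualReadProbability: number): boolean {
  return Math.random() < counterfactualReadProbability;
}

// Example: you guess a 20% chance you'd have read it without the prior connection.
if (shouldUpvote(0.2)) {
  console.log("Cast the upvote");
} else {
  console.log("Skip voting this time");
}
```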

I strong-upvote when I feel like my comment is underappreciated, and don't think of it as too different from strong-upvoting someone else's comment. The existence of the strong-upvote already allows someone to strong-upvote whatever they want, which doesn't seem to be a problem.

I think of this as different from voting for another person's content. When I read a comment with e.g. 3 upvotes and 10 karma, I assume "the author supports this, and I guess at least one other person really strongly agrees." If the "other person" who strongly agrees is actually the author, I get a skewed sense of how much support their view has. 

Given the tiny sample sizes that voting represents, this isn't a major problem, but it still seems to make the karma system work a bit less well. As a moderator/admin, I'd discourage strong-upvoting yourself, though the Forum doesn't have an official ban on it.

Is it difficult to remove the possibility of strongly upvoting yourself?

Not particularly hard. My guess is half an hour of work or so, maybe another half hour to really make sure that there are no UI bugs.
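(For illustration, a change like this might amount to a small check in the vote-handling code. The sketch below uses hypothetical types and names, not the Forum's actual codebase.)

```typescript
// Hypothetical sketch: downgrade a strong self-upvote to a normal upvote.
type VoteType = "smallUpvote" | "bigUpvote" | "smallDownvote" | "bigDownvote";

interface VotableDocument {
  _id: string;
  userId: string; // author of the post or comment
}

function normalizeVote(voterId: string, doc: VotableDocument, vote: VoteType): VoteType {
  // Strong upvotes on your own content fall back to normal strength.
  if (vote === "bigUpvote" && voterId === doc.userId) {
    return "smallUpvote";
  }
  return vote;
}
```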

Ah OK it may be worth doing then

This hasn't been implemented yet; was it forgotten about, or just not worth it?

Oh, I think the functionality is currently net-positive. I was just commenting on the technical difficulty of implementing it if the EA Forum thought it was worth the change.

On a related question: I just posted a question to the forum, and once the page refreshed on the question I had just asked, it already had one vote. Is this an auto-setting where my questions get automatically upvoted by me, or did someone really upvote it in the few (milli)seconds between submitting it and the page reloading?

All of your posts start with a strong upvote from "you" automatically. Your comments start with a normal-strength upvote from "you" (as they do on Reddit). You can undo these votes the same way you'd undo any of your other votes.

I have recently been toying with a metaphor for vetting EA-relevant projects: that of a mountain climbing expedition. I'm curious if people find it interesting to hear more about it, because then I might turn it into a post.

The goal is to find the highest mountains and climb them, and a project proposal consists of a plan + an expedition team. To evaluate a plan, we evaluate

  • the map (Do we think the team perceives the territory accurately? Do we agree that the territory looks promising for finding large mountains?), and
  • the route (Does the strategy look feasible?)

To evaluate a team, we evaluate

  • their navigational ability (Can they find & recognise mountains? Can they find & recognise crevasses, i.e. disvalue?)
  • their executive ability (Can they execute their plan well & adapt to surprising events? Can they go the distance?)

Curious to hear what people think. It's got a bit of overlap with Cotton-Barratt's Prospecting for Gold, but I think it might be sufficiently original.

IIRC, Charity Navigator had some plans to look into cost-effectiveness/impact for a while, so maybe this was an easy way to expand their work into this? Interesting to see that this was supported by the Gates Foundation.

More discussion in this EA Forum post.
