If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)


If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)

 


Open threads are also a place to share good news, big or small. See this post for ideas.


Hello! I’ve been around EA since 2019! I was trying to choose a thesis topic and stumbled across Effective Thesis, which led to 80,000 Hours, which led to hanging out a little with the Beijing chapter… you get the picture. Things have escalated since then (in a good way, ha!) and now I’m making a formal account here. 

I just started as the Research Coordinator at EA for Christians, where I support community building and research around the intersection of Christian theology and EA. I’ll be starting my Master of Divinity in NYC this September. (MDivs are typical degrees for priests/pastors, seminary professors, etc.) If you are interested in EA and theology/religion/spirituality, I'm happy to talk!

But I do other things besides religion. I was a Yenching Scholar at Peking University, where I focused on Law & Society and wrote my master’s thesis on international data law. My main interests are institutional decision-making, great power conflict, US-China relations, theology, and research. That said, I’m eclectic, so I'm interested in almost everything. 

Looking forward to engaging in discussions!

Welcome Caleb!

Welcome, Caleb! I'm always excited to see people with unusual specialties on the Forum; every bit of expertise matters.

Hi!

My main personal project for the summer is trying to figure out what I think about AI-risk, so I thought I should engage with the forum more to ask questions/solicit feedback. I'm currently a mathematics undergrad, about to start my 4th year, so part of this is trying to figure out whether or not I should pivot toward working in something closer to AI-risk. 

About me -- I first got interested in EA after reading Reasons and Persons in the summer of 2020. My main secondary academic interest in undergrad has been in political theory, so I'm very interested in questions such as whether naïve utilitarianism endorses political extremism, how that might be mitigated by a proper social epistemology, and what that might entail for consequentialists interested in voting/political process reform. I'm also very interested in the economics of cities and innovation, as well as understanding how we learn mathematics. I'm less sure how those topics fit in an EA framework, but I'm always interested in seeing what insights others might be able to bring to them from an EA standpoint. 

Here's hoping to learn a lot from y'all!

-- Edgar

Two articles that you might find helpful:

AGI Safety from First Principles by richard_ngo
My Personal Cruxes for Working on AGI Safety by Buck
 

The former is an argument for why AGI safety is potentially a really big problem (maybe the biggest problem of our lifetimes), and the latter steps into the internal thought process of an individual trying to decide whether to work on AGI safety over other important longtermist causes.

Great to meet you! You might be interested in some posts in the AI forecasting and Estimation of existential risk categories, such as:

I've also written a lot about AI risk on my shortform.

Hi,

I am about 2-3 months into knowing about EA. I was going through the bio of a professor who impressed me in a virtual lecture, and her bio stated that she had pledged a part of her income to EA. That's where I first stumbled upon the name 'Effective Altruism', and it caught my attention immediately. The name says a lot. One thing led to another as I continued browsing and reading about it, and here I am today.

Not knowing what I would do after my undergraduate studies, I knew one thing: I wanted to be able to help others as part of my profession. This led me to get my postgraduate degree in social work. I went on to work in a variety of areas: human trafficking, children with intellectual disabilities, community development, counseling, capacity building of counselors, school social work, designing and carrying out research in different areas, and teaching research methodology to postgraduate students.

Thereafter I took a long break in my career and, long story short, here I am trying to find my way back. For the past year, I have been educating myself through various online courses in computational social science, research methods, data, and development policy. Childhood poverty is one of the areas where I am keenly interested in working. Reading about EA brought my focus to concerns about farmed animal welfare, which had been at the back of my mind but, thanks to EA's work, have now come to the fore. I also got to know about longtermist issues that I didn't know much about earlier.

I am looking forward to interacting with members here and learning a lot. I am open to discussions, volunteering, or assisting/liaising with anyone on interesting EA-related projects.

 

Thanks,

Naghma

Welcome, Naghma! It is great to have you here and to learn about your background and interests.

A belated welcome, Naghma! 

A couple of recommendations for learning more:

  • Join the EA Newsletter to get regular updates on different causes, events, etc.
  • Browse through the EA Intro Program, a collection of articles on different topics that were selected for being among the best we have. It's a lot of material, but I'd recommend skipping around to whatever looks interesting.

And if you're ever looking for something to read on a specific topic, open threads are a great place to ask about that.

Hi, 

Having been passionate about the bigger picture for many years, I discovered EA maybe five years ago. I attended a handful of events in Manchester, and I was curious why something like Positive Psychology etc. was not a core part of EA. After all, many of humanity's problems are caused by humanity and can only be solved by humanity.

Six months ago I started work creating what I hope will be a global platform (there is a brief intro at potentialisation.com) to help people understand themselves and others better, learn and grow using that understanding, and connect with other people more effectively - whether that's people round the corner starting a craft group because they are lonely, or would-be global solution architects connecting with supporters from around the globe that they have synergy with :-)

Hopefully the system will help a few people be better in ways that give humanity a bit more chance of navigating the next few decades successfully, or at the least be a bit less miserable as we head toward self-destruction :-)

thanks,

jon

Best of luck with the project. It looks like there's a lot of different material in the works; I hope that whatever first tool you launch has clear benefits for the people who use it, and you can build out from an initial success.

I struggled for a long time to fit forum content into my workflow, but have found something that works well for me:

  • I use Feedbin as a space for long-form content.
  • I subscribe to newsletters and the forum's digest using the Feedbin email address.
  • Reading forum articles then fits as an activity kind of like scrolling through Twitter (see the sketch below for one way to pull posts into a feed reader programmatically).
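
A minimal sketch of that kind of feed-reader workflow, assuming the feedparser package and a hypothetical RSS feed URL for the Forum (check the Forum itself for the real feed address):

```python
# Minimal sketch: pull recent Forum posts and skim titles, roughly like
# scrolling a timeline. Assumes the `feedparser` package is installed;
# the feed URL below is a placeholder guess, not a confirmed endpoint.
import feedparser

FEED_URL = "https://forum.effectivealtruism.org/feed.xml"  # hypothetical URL

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:10]:
    # Print the title and link for each recent post
    print(entry.title)
    print("  " + entry.link)
    print()
```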

Is it still possible to create an event page on the forum?

Not right now. That feature popped up for a time but wasn't meant to be usable yet — this was just an inadvertent consequence of the code we share with LessWrong. 

However, getting the feature imported in a usable way is on our near-term roadmap! We don't have a specific launch date yet, but event pages are under active development. I wouldn't be surprised if they were in our next feature update post.

Thanks for letting me know! I'm interested in organizing an event soon, so this feature would be useful to me.
