If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)


If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)

Open threads are also a place to share good news, big or small. See this post for ideas.

Comments (15)

Hello! I’ve been around EA since 2019! I was trying to choose a thesis topic and stumbled across Effective Thesis, which led to 80,000 Hours, which led to hanging out a little with the Beijing chapter… you get the picture. Things have escalated since then (in a good way, ha!) and now I’m making a formal account here. 

I just started as the Research Coordinator at EA for Christians, where I support community building and research around the intersection of Christian theology and EA. I’ll be starting my Master of Divinity in NYC this September. (MDivs are the typical degree for priests/pastors, seminary professors, etc.) If you are interested in EA and theology/religion/spirituality, I'm happy to talk!

But I do other stuff besides religion. I was a Yenching Scholar at Peking University, where I focused on Law & Society and wrote my master’s thesis on international data law. My main interests are institutional decision-making, great power conflict, US-China relations, theology, and research. Still, I’m eclectic, so I'm interested in almost everything. 

Looking forward to engaging in discussions!

Welcome Caleb!

Welcome, Caleb! I'm always excited to see people with unusual specialties on the Forum; every bit of expertise matters.

Hi!

My main personal project for the summer is trying to figure out what I think about AI risk, so I thought I should engage with the forum more to ask questions/solicit feedback. I'm currently a mathematics undergrad, about to start my 4th year, so part of this is trying to figure out whether I should pivot toward working on something closer to AI risk. 

About me -- I first got interested in EA after reading Reasons and Persons in the summer of 2020. My main secondary academic interest in undergrad has been in political theory, so I'm very interested in questions such as whether naïve utilitarianism endorses political extremism, how that might be mitigated by a proper social epistemology, and what that might entail for consequentialists interested in voting/political process reform. I'm also very interested in the economics of cities and innovation, as well as understanding how we learn mathematics. I'm less sure how those topics fit in an EA framework, but I'm always interested in seeing what insights others might be able to bring to them from an EA standpoint. 

Here's hoping to learn a lot from y'all!

-- Edgar

Two articles that you might find helpful:

  • AGI Safety from First Principles by richard_ngo
  • My Personal Cruxes for Working on AGI Safety by Buck

The former is an argument for why AGI safety is potentially a really big problem (maybe the biggest problem of our lifetimes), and the latter steps into the internal thought process of an individual trying to decide whether to work on AGI safety over other important longtermist causes.

Great to meet you! You might be interested in some posts in the AI forecasting and Estimation of existential risk categories, such as:

I've also written a lot about AI risk on my shortform.

Hi,

I discovered EA about 2-3 months ago. I was reading the bio of a professor who had impressed me in a virtual lecture, and it mentioned that she had pledged a part of her income to EA. That's where I first stumbled upon the name 'Effective Altruism', and it caught my attention immediately. The name says a lot. One thing led to another as I continued browsing and reading about it, and here I am today.

Not knowing what I would do after my undergraduate studies, I knew one thing: I wanted to be able to help others as part of my profession. This led me to get my postgraduate degree in social work. I went on to work in a variety of areas: human trafficking, children with intellectual disabilities, community development, counseling, capacity building for counselors, school social work, designing and carrying out research in different areas, and teaching research methodology to postgraduate students.

Thereafter I took a long break in my career and, long story short, here I am trying to find my way back. For the past year, I have been educating myself through various online courses in computational social science, research methods, data, and development policy. Childhood poverty is one of the areas where I am keenly interested in working. Reading about EA brought my focus to farmed animal welfare, concerns that had been at the back of my mind but, thanks to EA's work, have now come to the fore. I also learned about longtermist issues that I didn't know much about before.

I am looking forward to interacting with members here and learning a lot. I am open to discussions, volunteering, or assisting/liaising with anyone on interesting EA-related projects.

 

Thanks,

Naghma

Welcome Naghma! It is great to have you here and learn about your background and interests.

A belated welcome, Naghma! 

A couple of recommendations for learning more:

  • Join the EA Newsletter to get regular updates on different causes, events, etc.
  • Browse through the EA Intro Program, a collection of articles on different topics that were selected for being among the best we have. It's a lot of material, but I'd recommend skipping around to whatever looks interesting.

And if you're ever looking for something to read on a specific topic, open threads are a great place to ask about that.

Hi, 

Having been passionate about the bigger picture for many years, I discovered EA maybe five years ago. I attended a handful of events in Manchester and was curious why something like positive psychology was not a core part of EA. After all, many of humanity's problems are caused by humanity and can only be solved by humanity.  

Six months ago I started work on what I hope will be a global platform (there is a brief intro at potentialisation.com) to help people understand themselves and others better, learn and grow using that understanding, and connect with other people more effectively: whether that's people around the corner starting a craft group because they are lonely, or would-be global solution architects and supporters from around the globe that they have synergy with :-)

Hopefully the system will help a few people be better in ways that give humanity a bit more of a chance of navigating the next few decades successfully, or at least be a bit less miserable as we head toward self-destruction :-)  

thanks,

jon

Best of luck with the project. It looks like there's a lot of different material in the works; I hope that whatever first tool you launch has clear benefits for the people who use it, and you can build out from an initial success.

I struggled for a long time to fit forum content into my workflow, but have found something that works well for me:

  • I use Feedbin as a space for long-form content.
  • I subscribe to newsletters and the Forum's digest using the Feedbin email address.
  • Reading Forum articles then fits as an activity kind of like scrolling through Twitter.
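For anyone who would rather script this kind of reading queue instead of (or alongside) a reader like Feedbin, here is a minimal sketch that pulls recent posts from the Forum's RSS feed using the feedparser library. The feed URL and the item limit are assumptions for illustration, not part of the workflow described above.

```python
# Minimal sketch: list recent EA Forum posts from the site's RSS feed.
# Assumptions: the feed lives at the URL below and `feedparser` is installed
# (pip install feedparser). Swap in your preferred reader or feed as needed.
import feedparser

FEED_URL = "https://forum.effectivealtruism.org/feed.xml"  # assumed feed location


def recent_posts(limit=10):
    """Return (title, link) pairs for the most recent posts in the feed."""
    feed = feedparser.parse(FEED_URL)
    return [(entry.title, entry.link) for entry in feed.entries[:limit]]


if __name__ == "__main__":
    for title, link in recent_posts():
        print(f"{title}\n  {link}\n")
```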

Is it still possible to create an event page on the forum?

Not right now. That feature popped up for a time but wasn't meant to be usable yet — this was just an inadvertent consequence of the code we share with LessWrong. 

However, getting the feature imported in a usable way is on our near-term roadmap! We don't have a specific launch date yet, but event pages are under active development. I wouldn't be surprised if they were in our next feature update post.

Thanks for letting me know! I'm interested in organizing an event soon, so this feature would be useful to me.
