After following ACX for a few years, getting more immersed in EA activities, and seeing the Mastermind post, I think an activity I’ve been engaged in for over a decade may be worth a shot among EA groups and meetups: Socrates Café. I was a bit surprised that it hadn’t been mentioned on the EA Forum before now, but better late than never.

The modern incarnation of Socrates Café has its origins with author Christopher Phillips (see his 2021 TED talk, https://www.youtube.com/watch?v=sWNOa-Q0S6c, and https://socratescafe.com/?page_id=56). Long story short, it’s a group where individuals use the Socratic method to delve into philosophical topics. From the meaning of life and the existence of morality to all manner of “is” and “should” you can conjure up, the group picks a topic, and the discussion begins. Extensive knowledge of particular philosophers and philosophies isn’t a prerequisite unless the group wants to create that kind of focus. We discuss topics and question one another from our different perspectives. It’s not a formal organization, merely an idea in the public domain. After moving to Colorado, I participated in a local Socrates Café group for many years. There was a core group of people from all walks of life, all generally cursed with thinking too much.

When COVID hit, I considered branching out online to start my own Socrates Café within an organization I currently lead. After some thought, and considering my own irritability with people who can’t figure out how to use a mute button, I decided to hold off until we could all meet in person. When the bulk of the pandemic was in the rearview mirror, I fired things up using a community room at my local city hall. My particular experience notwithstanding, there’s no reason this couldn’t be effectively executed in a virtual format.

The rules are basic. It starts with a facilitator, a person familiar with the general process of running a Socrates Café meeting. For the first meeting, the facilitator usually picks the question for the session. Recent examples from my group include “What obligations do the living have to future generations?” “Was Michelangelo always in the block of marble?” and “What should the US do for Ukraine, if anything, amidst its war with Russia?” In principle, it can be a question about anything; it just must be a question.

Generally, the first speaker is someone who lays out some context for the question at hand, and it doesn’t have to be the facilitator. From there, participants raise their hands to speak. (Speaking is encouraged for all attendees, but not required.) Raising your hand creates a spot for you in the queue tracked by the facilitator.

Now here’s where it gets Socratic: if you have a question for the person currently speaking, you may ask it directly without raising your hand and waiting, provided you do so in a timely and courteous manner. Questions asked of the speaker help to clarify, elicit elaboration, and/or poke holes in the reasoning of what the speaker has put forward. These are the key exchanges: using questions to probe and counter assertions made by others.

At the end of the session (usually two hours), the group nominates topics and votes to select one for the next planned meeting. In my experience, topics about religion or politics are frowned upon unless kept on a strictly philosophical level.

While those are the rules laid out by one Socrates Café group in Colorado, the benefits of the methodology are why I write about it. For any topic, broad or narrowly tailored, approaching EA subjects among the EA-minded in this manner could be a great addition to the EA quiver. Need to get the juices flowing on prioritizing one subject over another? Seeking to draw brighter lines to better refine how to measure the “good” done by a specific action? Wondering if you’re grasping your Bentham as well as you should? Socrates Café may do the trick. It isn’t designed to generate revenue; it’s conceived as a forum for those drawn solely by intellectual curiosity. Its few formalities are there to give discussions some coherence, but the directions a Socrates Café group takes are driven solely by its members.

I gave some consideration to writing a proposal for the Future Fund’s project #23, “A constitution for the future,” but time and other commitments prevented me from doing so (see the spinoff Constitution Café). Regardless, I am confident that this approach to discussion and thinking would benefit a whole host of EA-related endeavors. As long as discussions are conducted in good faith, there’s no reason any topic would need to be off-limits. For those in the Anglophone world, there’s a good chance a Socrates Café group already exists in your area. If you’re curious about more particulars, ask away!


Comments

This seems cool.

I think people should ideally do a lot of experimentation, running a bunch of EA events in different formats and reporting back on how they seemed to go. I like something of the spirit of the Socrates Café, and hope it gets tried a few times!

Thanks, Owen. I agree that this approach lends itself to a lot of experimentation. While the “usual” approach often doesn’t lend itself to a final consensus at the end of a session, I think doing this with a more defined purpose for EA participants would be relatively straightforward. I have some thoughts on how to best execute it, perhaps including a survey element for participants before and after a session or sessions. If you’re interested in more particulars and nuances, I would be happy to share thoughts and ideas on a call or other correspondence.

I participated in an activity of this sort some years ago. I really enjoyed the structured conversation and working towards consensus in a group. The experience was far more intense than any other presentation or debate format I have been part of. I don't know whether EA groups should use the technique, but I wanted to share from my own experience :)

Hi Ryan, this is Christopher Phillips, founder of Socrates Café. Thanks so much for your kind words about it. I can always be reached at SocratesCafe@gmail.com. Thanks again!

A further addition to the EA quiver would be reading groups to discuss the best books related to EA. As with Socrates Café, discussions could be structured around answering a central question.

Also ripe for a survey element 
