
TLDR: We're opening a Constellation/Trajan/Lightcone style office in Boston in 2023 and we're interested in user feedback.

We – Kaleem Ahmid, EV’s new Project Manager for the Boston Office, supervised by Jonathan Michel, EV’s Head of Property – are managing the creation of an office space in Harvard Square in Cambridge, Massachusetts. It is currently projected to open in the first half of 2023. The (currently unnamed) office will consist of a space dedicated to EA outreach, and a productivity-optimized, professional, interdisciplinary co-working space.

Our current plan includes: 

  • Student outreach: One section of the space will be outreach-focused, with a coworking/event space and various meeting spaces for students at Harvard and other Boston-area schools and offices for full-time outreach professionals.
  • Professional coworking space: One section will be dedicated to full-time professionals, in the style of Trajan House or Constellation, with private offices, meeting and coworking spaces, and various amenities. We imagine this will accommodate ~40 people at any given time.

Note that there will be a reasonably clear separation between these two spaces, e.g. being on different floors.

We won’t be settling on the final group of individuals and orgs who will work in the office space for a while, and reserve the right to make changes to the mix as the space develops. However, if you are interested, we encourage you to read on.

 If you might want to work in either of these spaces, please fill out our interest form ASAP for two reasons: 

  1. We are in the early design stages, and many of the decisions we will make benefit from potential-user input.
  2. As we approach the opening of the space, we will reach out to people who fill out this interest form as we build our initial community, including both full-time office occupants and participants in a tentatively planned semester-long fellowship to work out of the office and talk to a bunch of students. So if you express interest early, your application will be considered while there are still plenty of spots to fill. Don’t wait too long!

We’re also interested in hearing about other people who you think we should consider reaching out to, or who should fill out this form. Feel free to tell us at the end of filling it out yourself, or send me a DM. Some examples of people we’d be interested in hearing from:

  • You are a dedicated and knowledgeable EA and would like to work from our office to engage with other members of the community.
  • You are working for, or starting, an EA org in the area and you are looking for an office.
  • You’re an experienced community builder and are interested in building the EA community at Harvard/MIT, and want to use the dedicated outreach space at the office to do so.
  • You’re working on an EA project which would benefit from being housed in a space full of other EAs or in proximity with university students.

Why a Boston/Cambridge, MA hub for outreach and EA more broadly?

The office will be located in the center of Harvard’s campus, in the only two-mile radius on Earth that is home to two of the world’s top five universities. It is less than five minutes’ walk from the Kennedy School, the Law School, the Science Center, and most undergraduate residences; within half an hour of other well-regarded universities like Tufts, Boston University, Brandeis, Northeastern, Boston College, Suffolk University, Emerson, and UMass Boston; and right next to a subway station on the MBTA Red Line (15 minutes to MIT).

EA community building is a high-impact endeavor. Increasing the chances that Harvard and MIT students pursue high-impact careers seems like a very valuable thing to do. A world-class office is one way to attract high-performing members of the EA community to come and work in close proximity with talented students with whom they can share exciting ideas and core values.

Boston is also already home to (at least some employees at) various EA organizations, including the Legal Priorities Project, Alvea, Kevin Esvelt’s Sculpting Evolution lab, CEA, and various university-affiliated and independent researchers and community builders/meta-EA professionals. We also expect more initiatives and organizations to be started in Boston which this office could house as well. Facilitating more information transfer and a sense of shared purpose between these groups and individuals seems important. Boston is also a ~17-minute drive from an airport with regular nonstop service to SF, London, DC, NYC, and Nassau, and is geographically central amongst that group of cities, making it a good point of convergence when needed.

Gratitude

We’d like to thank the team of people who have identified this space and have worked voluntarily on this project in their spare time up to this point – Trevor Levin, Christoph Winter, Lucius Caviola, and Nikola Jurkovic.

Comments



What probability of wanting to work in the office/in Boston should you be at for filling out that form?

You can indicate uncertainty in the form, so feel free to fill it out and state your probability :)

I was already strongly considering moving to Boston, so this makes me feel lucky :)

Any updates on this? I am interested.

Checking in again :)

Any updates here?

didn't pan out, unfortunately
