
TLDR: We're opening a Constellation/Trajan/Lightcone-style office in Boston in 2023, and we're interested in user feedback.

We – Kaleem Ahmid, EV’s new Project Manager for the Boston Office, supervised by Jonathan Michel, EV’s Head of Property – are managing the creation of an office space in Harvard Square in Cambridge, Massachusetts. It is currently projected to open in the first half of 2023. The (currently unnamed) office will consist of a space dedicated to EA outreach, and a productivity-optimized, professional, interdisciplinary co-working space.

Our current plan includes: 

  • Student outreach: One section of the space will be outreach-focused, with a coworking/event space and various meeting spaces for students at Harvard and other Boston-area schools and offices for full-time outreach professionals.
  • Professional coworking space: One section will be dedicated to full-time professionals, in the style of Trajan House or Constellation, with private offices, meeting and coworking spaces, and various amenities. We imagine this will accommodate ~40 people at any given time.

Note that there will be a reasonably clear separation between these two spaces, e.g. being on different floors.

We won’t be settling on the final group of individuals and orgs who will work in the office space for a while, and reserve the right to make changes to the mix as the space develops. However, if you are interested, we encourage you to read on.

 If you might want to work in either of these spaces, please fill out our interest form ASAP for two reasons: 

  1. We are in the early design stages, and many of the decisions we will make benefit from potential-user input.
  2. As we approach the opening of the space, we will reach out to people who fill out this interest form as we build our initial community, including both full-time office occupants and tentative plans for a semester-long fellowship to work out of the office and talk to a bunch of students. So if you express interest early, your application will be considered when there are still plenty of spots to fill. Don’t wait too long!

We’re also interested in hearing about other people who you think we should consider reaching out to, or who should fill out this form. Feel free to tell us at the end of filling it out yourself, or send me a DM. Some examples of people we’d be interested in hearing from:

  • You are a dedicated and knowledgeable EA and would like to work from our office to engage with other members of the community.
  • You work for, or are starting, an EA org in the area and are looking for an office.
  • You’re an experienced community builder and are interested in building the EA community at Harvard/MIT, and want to use the dedicated outreach space at the office to do so.
  • You’re working on an EA project which would benefit from being housed in a space full of other EAs or in proximity with university students.

Why a Boston/Cambridge, MA hub for outreach and EA more broadly?

The office will be located in the center of Harvard’s campus, within the only two-mile radius on Earth that is home to two of the world’s top five universities. It is less than five minutes’ walk from the Kennedy School, the Law School, the Science Center, and most undergraduate residences; within half an hour of other well-regarded universities including Tufts, Boston University, Brandeis, Northeastern, Boston College, Suffolk University, Emerson, and UMass Boston; and right next to a subway station on the MBTA Red Line (15 minutes to MIT).

EA community building is a high-impact endeavor. Increasing the chances that Harvard and MIT students pursue high-impact careers seems like a very valuable thing to do. A world-class office is one way to attract high-performing members of the EA community to come and work in close proximity with talented students with whom they can share exciting ideas and core values.

Boston is also already home to (at least some employees at) various EA organizations, including the Legal Priorities Project, Alvea, Kevin Esvelt’s Sculpting Evolution lab, CEA, and various university-affiliated and independent researchers and community builders/meta-EA professionals. We also expect more initiatives and organizations to be started in Boston which this office could house as well. Facilitating more information transfer and a sense of shared purpose between these groups and individuals seems important. Boston is also a ~17-minute drive from an airport with regular nonstop service to SF, London, DC, NYC, and Nassau, and is geographically central amongst that group of cities, making it a good point of convergence when needed.

Gratitude

We’d like to thank the team of people who have identified this space and have worked voluntarily on this project in their spare time up to this point – Trevor Levin, Christoph Winter, Lucius Caviola, and Nikola Jurkovic.

Comments

What probability of wanting to work in the office/in Boston should you be at for filling out that form?

You can indicate uncertainty in the form, so feel free to fill it out and state your probability :)

I was already strongly considering moving to Boston, so this makes me feel lucky :)

Any updates on this? I am interested.

Checking in again :)

Any updates here?

didn't pan out, unfortunately
