James Brobin

Software Engineer @ Laboratory For Atmospheric and Space Physics
300 karma · Joined · Working (0-5 years) · Seeking work · Boulder, CO, USA
jamesbrobin.substack.com/

Bio


Hi! I'm a twenty-three-year-old ex-software engineer. I like to write about EA and do independent research related to it. I'm particularly interested in longtermism and how we navigate the transition to a post-AGI society.

My blog is James Brobin's Substack, and it includes stuff that I haven't posted on this forum.

How others can help me

I'm looking for opportunities that make use of writing or research skills, such as communicating about effective altruism or doing research related to longtermism. If you have a project I find interesting, I may be able to help out for free.

I'm also looking for someone who can exchange feedback on writing with me.

How I can help others

If you want to discuss anything I've written on the forum, don't be afraid to reach out.

Also, I've done a lot of research on EA's presence on YouTube. If you are making YouTube videos or are hoping to do so, feel free to reach out for feedback or suggestions.

Comments (48)

Hi Fin,

I have a lot of questions, so I figured I'd just share all of them and you can respond to whichever ones you want.

  1. I think Forethought is a super cool institution. What advice would you have for someone who wants to work there as a researcher? Do you think it's important to have a strong understanding of how LLMs work?
  2. I made this post where I categorized flourishing cause areas based on "How To Make The Future Better." I thought I'd share. I'm curious if this categorization generally aligns with how you think about the problem.
    1. Locking-in one’s values
    2. Ensuring the future is aligned with the correct values
      1. Working towards viatopia
      2. Promoting futures with more moral reflection
      3. Improving the ability for people with different views to get their desired futures
    3. Ensuring future people are able to create a good future
      1. Keeping humanity’s options open
      2. Improving global stability
      3. Improving future humans' decision making
      4. Empowering responsible actors
    4. Speeding up progress
  3. I made this post which is an overview of longtermism's ideas, writings, individuals, institutions, and history. I thought I'd share since you made the longtermism website.
  4. The Better Futures series assumes that the future will be net-positive by default. To me, the ideas presented in the series (strong self-modification, modification of descendants, selection of beliefs by evolutionary pressures) indicate that we should expect future humans to be very different from us, and that, as a result, we should expect the future to be neutral in expectation. Do you agree with this reasoning, or do you think the future will be net-positive by default? And why?
  5. Currently, there is a wide range of ideas about how a post-AGI future will go and what features it will contain. To me, this strongly indicates that the post-AGI future could go in a very broad range of ways and that we should prepare for many of them. At the same time, I get the sense that Forethought has a very specific vision of how a post-AGI future will go (there will be an intelligence explosion, tools for epistemics will be beneficial, we might begin acquiring resources in other solar systems, small sets of actors could use AGI in malicious ways). I'm wondering: how do you decide which ideas you think are likely, and do you have any measures in place to ensure you're receiving criticism of your ideas so you don't create an epistemic bubble?
  6. I understand that you have done some work related to space governance. A criticism I have of working in this field is that (1) it seems very intractable, given the lack of space treaties; (2) if any great power has a decisive advantage, global treaties won't matter; (3) even if you are able to get a law or treaty passed, corporate or state interests could easily override it later on; and (4) there's probably a low chance of even getting into a position where you could influence this stuff. So, if you think it's valuable for additional people to work in the field, why do you think that?
  7. It seems like longtermism is an unhelpful idea since it requires people to believe that our actions could persist for millions of years. I personally am pretty skeptical of this, although I do think it is possible. It also seems like the idea has been somewhat harmful to EA as a movement since people can always point out that some of the founders of the movement are focused on helping people millions of years from now, which sounds pretty crazy. I'm wondering if you agree with this assessment.
  8. In "How To Make The Future Better," MacAskill argues that we should make AIs encourage humans to be good people and use them as a source of moral reflection. This seems like it could be deeply problematic if moral sense theory is true but AIs lack a moral sense. Do you agree with this?

Hey Toby,

That's a great idea! Thanks for suggesting it!

Yeah, that's totally fair. I think the dynamics around public events probably vary a lot across the US.

And, yeah, I think I pretty much entirely agree with your second paragraph. Creating free in-person events from online platforms can only do so much.

Yeah, I see what you mean.

I'm reasonably idealistic in thinking that we could basically just do a bunch of interventions that make it easier to socialize and that would resolve most of the problem, but I'm definitely pessimistic that we could get much culture change, since it doesn't feel like there's much momentum toward that. It seems hard to imagine an American culture that encourages people to actively socialize each week in community settings. And the problem of "increasingly entertaining other options" is probably intractable.

I do think you're wrong about the platform thing, though. As someone in their early 20s, I know pretty much no one who uses platforms other than Meetup and online forums/word of mouth/fliers to find events. As such, it does feel to me like Meetup has a serious monopoly. Additionally, a lot of people in my town commonly say that they wish there were more public events to go to, so, at least where I live, it seems like the supply of events is the real issue, not the events being too low quality.

I also think you're overemphasizing the need for group culture and leaders to be designed well, since I think this stuff just naturally arises in environments where typical people with shared interests come together.

Hey Charlie,

I'm super glad you made an attempt with that event-building app!

Yeah, I agree that it's definitely a very multi-causal problem, which makes it really difficult to approach. Last year, I read Dr. Vivek Murthy's (the former Surgeon General of the US) book on the loneliness epidemic, and I was pretty disappointed that his ideas were mostly just along the lines of "get people together more."

For what it's worth, I think the culture on Meetup really depends on the city and the exact kind of event. When I lived in a large city, it seemed like a significant portion of attendees were people who were really struggling to make friends, which made for a very awkward and kind of tense environment. But when I moved to a smaller city, the crowds that showed up seemed pretty representative of the actual demographics of the town. Additionally, events along the lines of "Let's meet people" can vary from excellent to very awkward, whereas hikes and particular-interest groups seem to attract pretty good crowds, in my experience. So, that said, I wouldn't discount Meetup entirely. The reason I said "Developing online platforms that allow individuals to host in-person community events for free" was that Meetup currently costs $175 a year and is the main platform in my city, which means they're basically monopolizing the online public-events space while also reducing the number of events that occur by making it cost-prohibitive to host them.

I also agree that tractability seems unclear, but, without having looked into the issue very much, it seems reasonably neglected. For instance, at the university I attended, my RAs never hosted events, cafeterias weren't designed as places to interact with people, and the events the university did host were often not designed for socializing. It seems like a researcher could pretty easily trial a bunch of different initiatives at universities, and the universities would have profit incentives to implement them, since students with better mental health are probably less likely to drop out. A researcher could also try something similar at companies, small towns, suburbs, or large cities.

I wrote up a post that responds to this essay series. I thought I'd share that here for convenience.
