Howdy!
I am Patrick Hoang, a student at Texas A&M University.
Others can probably help me with community building at Texas A&M University.
I am planning to start an Effective Altruism group at Texas A&M. This is my plan:
Summer 2024: Non-Trivial Fellowship
Early Fall 2024 Q1: Find existing organizations at Texas A&M and understand Texas A&M culture/values
Late Fall 2024 Q2: Finding people who might be interested in EA; networking
Early Spring 2025 Q3: Get some people to do the EA Introductory Fellowship
Late Spring 2025 Q4: Start an MVP, such as a 6-8 week reading group.
Summer 2025: Do background preparation for advertising the group
Fall 2025: Launch!
Some ideas (I am not a parent, but I come from a family with a lot of children. I think my grandmother had 13 kids):
Some people in EA should have kids to make EA more friendly to parents, especially older professionals who could transition into EA. Julia Wise's writing (here and here) shows that one can still be an EA while raising kids.
You do have to be careful not to let parenting cost you a lot of impact. For example, if parenting would prevent you from launching new organizations, and launching new orgs would be ridiculously impactful for you, then think twice. However, there are many ways to lower the burden of having kids, such as spending less time micromanaging them. See Bryan Caplan's interview with 80k about Selfish Reasons To Have More Kids.
Caplan went over how most of a child's behavior is shaped by genes, not by parenting. This lets parents 80/20 it and focus on the most cost-effective parenting behaviors if they want to raise a good child.
Also, I wouldn't be surprised if the desire to have kids is itself partly genetic. If it is, then some people really want kids and others don't, and that is okay. But in rare cases, having kids could be impactful for EA optics and productivity reasons (being more energized).
Thank you, Tom.
To clarify, I am planning to finish my undergrad at A&M, and I will use Karnofsky's aptitude-based approach. So being in an area with EAs is not a big concern right now, but it is a long-term consideration.
I also think that staying in Texas lets me network with groups that EAs have a hard time reaching, like oil & gas and conservative leaders. My university also has a killer alumni network, but only in Texas. I don't think it will help me directly, but it can build career capital quickly.
I'm going to look at what kinds of jobs the engineers in my EA group at A&M would like (High Impact Engineering). Earning to give is still an awesome option for most people.
What is your general advice for people who are still in college and want to work in EA, but are not located in any EA-heavy areas?
I am guessing the answer is building career capital, but that means accepting that I will forgo most of my impact if AGI comes in the next 10 years. That is okay; my AI timelines are pretty uncertain. I live in Texas, and moving to DC or Silicon Valley would mean losing most of my network.
For reference, I just finished my freshman year of electrical engineering. I am considering technical sales as an increasingly better way to build career capital than regular engineering (I think the most impactful roles require social skills more than technical ones), plus it has many other benefits.
I am also asking this question on behalf of students who might join my university's effective altruism group. We are struggling to find EA-aligned people at Texas A&M University, and my hypothesis is that we are attracting the wrong people. We will roll out the next round of recruitment next month, but I wonder what kinds of majors are a good fit for EA.
There could be another reason EAs and rationalists specifically value life a lot more. Suppose there's at least a 1% chance that AI goes well, we live in a utopia, achieve immortality, and can live for 1 billion years at a really high quality of life. Then the expected value of a life is 10,000,000 life-years. It could be much greater too; see Deep Utopia.
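Spelled out, using only the 1% and 1-billion-year figures above and ignoring whatever value the other 99% of outcomes carry:

$$\mathbb{E}[\text{life-years}] \geq 0.01 \times 10^9 \text{ years} = 10^7 \text{ life-years}$$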
Anecdotally, I agree with the secularization hypothesis. Does this imply people should be more religious?
While I like it, stories often sensationalize these issues ("AI by 2027, we're all gonna die") without providing good actionable steps. It almost feels like the climate messaging from environmentalists who say "We're all gonna die by 2030 because of climate change! Protest in the streets!"
I know stories are very effective at communicating the urgency of AGI, and the end of the video has some resources pointing to 80k. Nonetheless, I feel some dread along the lines of "oh gosh, there's nothing I can do," and that is likely compounded by YouTube's younger audience (for example, college students who will graduate after 2027).
Therefore, I suggest that later videos give actionable steps or areas to work in for someone who wants to reduce risks from AI. Not only would this relieve the doomerism, it would also give relevant advice for people who want to work on AI.
While I do agree with your premise on arithmetic, the more valuable tools are arithmetic-adjacent: game theory, Bayesian reasoning, probability, expected value, decision modeling, and so on. These are closer to algebra and high school math, but still pretty accessible. See this post.
The main reason people struggle to apply arithmetic to world modeling is that transfer of learning is really difficult, and EAs/rationalists are much better at it than the average person. I notice this in my EA group: engineering students who aced differential equations and random variables still struggle with Bayesian reasoning, even though they learned Bayes' theorem.
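As a small illustration of the gap I mean (all numbers here are made up for the example): the mechanics of Bayes' theorem are a one-liner, but the transfer step, turning a fuzzy real-world question into a prior and likelihoods, is where students get stuck.

```python
# Bayes' theorem with made-up numbers: "A student aced the probability exam.
# How likely are they to be comfortable with applied Bayesian reasoning?"

prior = 0.2              # P(comfortable) - assumed base rate
p_ace_given_yes = 0.9    # P(aced exam | comfortable) - assumed
p_ace_given_no = 0.6     # P(aced exam | not comfortable) - assumed

# Law of total probability: P(aced exam)
p_ace = p_ace_given_yes * prior + p_ace_given_no * (1 - prior)

# Bayes' theorem: P(comfortable | aced exam)
posterior = p_ace_given_yes * prior / p_ace
print(f"P(comfortable | aced exam) = {posterior:.2f}")  # ~0.27
```

The calculation itself is trivial; the hard part is deciding that 0.2, 0.9, and 0.6 are the numbers the question actually needs.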
I feel like many of these risks could go either way: annihilation or immortality. For example, changing fundamental physics or triggering vacuum decay could unlock infinite energy, which could lead to an infinitely prosperous (and protected) civilization.
Essentially, just as there are galactic existential risks, there are galactic existential security events. One potential idea would be extracting dark energy from space to self-replicate in the intergalactic void and keep expanding forever.
Some potential ideas, coming from someone at a politically active university:
- Sister organizations that are politically focused on a cause area. One example is FAI, which used to be funded by OpenPhil.
- I wonder if EAs should be openly partisan instead of just hiding their political viewpoints. Of course, to prevent the EA Forum from becoming a Reddit thread, the amount of posting should be roughly equal across the two political parties.
-- Example: Suppose EA is made up of 80% X and 20% Y. Members of Y should post at four times the frequency of members of X, so we get a roughly 50/50 split (quick check below).
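A quick check of that arithmetic, treating posting frequency as posts per member over some period:

$$80 \times 1 = 80 \text{ posts from } X, \qquad 20 \times 4 = 80 \text{ posts from } Y$$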
Some other thoughts: To reduce polarization, EA could deprioritize some areas that are seen as very partisan and not as effective. A concrete example: my university is funded by factory farms, and we're proud of it. We also hold the worldview that since humans are made in the image of God, humans are infinitely more valuable than animals (animals have only instrumental value). Thus calling to "abandon factory farming" would be reputational suicide, as it would amount to attacking the foundation my university is built on.