
Patrick Hoang

Electrical Engineering Student @ Texas A&M University
135 karma · Pursuing an undergraduate degree · College Station, TX, USA

Bio

Howdy!

I am Patrick Hoang, a student at Texas A&M University.

How others can help me

Others can probably help me with community building at Texas A&M University.

How I can help others

I am planning to start an Effective Altruism group at Texas A&M. This is my plan:

Summer 2024: Non-Trivial Fellowship

Early Fall 2024 Q1: Find existing organizations at Texas A&M and understand Texas A&M culture/values

Late Fall 2024 Q2: Finding people who might be interested in EA; networking

Early Spring 2025 Q3: Get some people to do the EA Introductory Fellowship

Late Spring 2025 Q4: Start an MVP, such as a 6-8 week reading group. 

Summer 2025: Do behind-the-scenes preparation for advertising the group

Fall 2025: Launch!

Comments (14)

While I do agree with your premise on arithmetic, the more valuable tools are arithmetic-adjacent: game theory, Bayesian reasoning, probability, expected value, decision modeling, and so on. These are closer to algebra and high-school math, but still pretty accessible. See this post.

The main reason people struggle to apply arithmetic to world modeling is that transfer learning is really difficult, and EAs/rationalists are much better at it than the average person. I notice this in my EA group: students who are engineers and aced differential equations and random variables still struggle with Bayesian reasoning, even though they learned Bayes' theorem.
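To make the kind of transfer I mean concrete, here is a toy base-rate calculation in Python (the sensitivity, false-positive rate, and 1% prior are numbers I made up purely for illustration). Knowing the formula is the easy part; the transfer step is noticing that everyday questions have this structure.

```python
# Toy Bayes example (illustrative numbers only):
# a signal that is 90% sensitive with a 5% false-positive rate,
# applied to a hypothesis with a 1% base rate.
prior = 0.01            # P(H)
p_e_given_h = 0.90      # P(E | H)
p_e_given_not_h = 0.05  # P(E | not H)

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(f"P(H | E) = {posterior:.3f}")  # ~0.154, far lower than most people's gut answer
```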

60% disagree

I feel like many of these risks could cut either way, toward annihilation or toward immortality. For example, changing fundamental physics or triggering vacuum decay could unlock infinite energy, which could lead to an infinitely prosperous (and protected) civilization.

Essentially, just as there are galactic existential risks, there are galactic existential security events. One potential idea would be extracting dark energy from space to self-replicate in the intergalactic void, allowing civilization to keep expanding forever.

Even if the goal is communication, normalizing strongly attention-grabbing titles could lead to more clickbait-y EA content. For example, we could get: "10 Reasons Why [INSERT_PERSON] Wants to Destroy EA."

Of course, we still need some prioritization system to determine which posts are worth reading (typically via number of upvotes).

I enjoyed reading this post!

One thing I would like to add: getting a job is fundamentally a sales process. This 80k article really highlighted this for me. Sales and interpersonal communication also play a huge role in the currently neglected EA skills (management, communication, founder, generalist). I'm currently writing a forum post, so hopefully I can get that out soon.

I was among the three that defected. I warned y'all!

I defected! Everyone, if you want to lose, choose DEFECT

50% ➔ 57% disagree

I think the most likely outcome is not necessarily extinction (I estimate <10% risk from AI) but rather unfulfilled potential: humans simply losing control over the future and becoming mere spectators, with the AI that takes over not being morally significant in some way.

I feel like this is too short notice for EAG conferences. Three weeks between receiving your decision and flying to the Bay Area is not a lot of time to make arrangements. Maybe it is because I am a student.

What should EAs who are not in a position to act under short AI timelines do? You can read my response here, but not all of us are working in AI labs, nor do we expect to break in anytime soon.

You also suggested having a short-timeline model to discount things after 5+ years:

Plans relying on work that takes several years should be heavily discounted - e.g. plans involving ASL5-level security of models if that’s 5+ years away

But I wouldn't apply such a huge discount if one still assigns meaningful probability to longer AGI timelines. For example, if you believe AGI only has a 25% chance of occurring by 2040, you should discount 15+ year plans by only 25%. The real reason to discount certain long-term plans is that they are not tractable (i.e. I think executing a five-year career plan is tractable, but ASL5-level security probably is not, given how slowly governments move).
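A minimal sketch of that arithmetic (toy numbers; it deliberately ignores tractability, which I think is the real crux):

```python
# Rough sketch of the discount logic above (toy numbers, not a real model).
# If P(AGI by 2040) = 0.25, a 15+ year plan only "loses" the worlds where AGI
# arrives before the plan pays off, so its expected value takes a ~25% haircut,
# not a ~100% one.
p_agi_by_2040 = 0.25
value_if_plan_completes = 100  # arbitrary units of impact

ev_long_plan = (1 - p_agi_by_2040) * value_if_plan_completes
print(f"EV of a 15+ year plan: {ev_long_plan:.0f} (a 25% haircut, not a write-off)")
```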

A lot of EAs do think AI safety could become ridiculously important (i.e. they put some probability mass on very short timelines) but are not in a position to do anything about it, which is why they focus on more tractable areas (e.g. global health, animal welfare, EA community building) under the assumption of longer AI timelines, especially because there's a lot of uncertainty about when AGI will come.

My internal view is a 25% chance of TAI by 2040 and a 50% chance by 2060, where I define TAI as an AI that can autonomously perform AI research. These estimates may have shifted in light of DeepSeek, but what am I supposed to do? I'm just a freshman at a non-prestigious university. Am I supposed to drop all my commitments, speed-run my degree, get myself into a highly competitive AI lab that would probably require a Ph.D., and work on technical alignment hoping for a breakthrough? If TAI comes within 5 years, that would be the right move, but if I'm wrong I would end up with very shallow skills and not much experience.

We have the following Pascal-style decision matrix (drafted by GPT):

Decision             | AGI Comes Soon (~2030s)                     | AGI Comes Late (~2060s+)
Rush into AI Now     | 🚀 Huge impact, but only if positioned well | 😬 Career stagnation, lower expertise
Stay on Current Path | 😢 Missed critical decisions, lower impact  | 📈 Strong expertise, optimal positioning

I know the decision is not binary, but I am definitely willing to forfeit 25% of my impact by betting on the "AGI comes late" scenario. I do think non-AI cause areas should use AI projections in their deliberations and theories of change, but I think it is silly to cut out everything that happens after 2040 with respect to the cause area.
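For what it's worth, here is the same bet as a rough expected-value calculation (the impact scores are placeholders I made up; only the structure of the comparison matters):

```python
# Back-of-the-envelope version of the matrix above (impact numbers are
# invented placeholders; only the comparison structure matters).
p_soon = 0.25          # my P(TAI by ~2040)
p_late = 1 - p_soon

# Hypothetical impact scores for each cell of the matrix.
impact = {
    ("rush", "soon"): 100,  # huge impact, if positioned well
    ("rush", "late"): 20,   # career stagnation, shallow skills
    ("stay", "soon"): 40,   # missed the critical window
    ("stay", "late"): 90,   # strong expertise, good positioning
}

for decision in ("rush", "stay"):
    ev = p_soon * impact[(decision, "soon")] + p_late * impact[(decision, "late")]
    print(f"EV({decision}) = {ev:.1f}")
# With these placeholder numbers, "stay" wins (77.5 vs 40.0), which is the bet I'm making.
```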

However, I do think EAs should have a contingency plan where they pivot hard into AI safety if, and only if, certain conditions occur (e.g. even conservative superforecasters project AGI before 2040, or a national emergency is declared). And we can probably hedge against the "AGI comes soon" scenario by buying long-term NVIDIA call options.
