Bio

I am a third-year grad student, now studying Information Science, and I am hoping to pursue full-time roles in technical AI Safety from June '25 onwards. I am spending my last semester at school working on an AI evaluations project and pair programming through the ARENA curriculum with a few others from my university. Before making a degree switch, I was a Ph.D. student in Planetary Science where I used optimization models to physically characterize asteroids (including potentially hazardous ones). 

Historically, my most time-intensive EA involvement has been organizing Tucson Effective Altruism — the EA university group at the University of Arizona. If you are a movement builder, let's get in touch! 

Career-wise, I am broadly interested in x/s-risk reduction and earning-to-give for animal welfare. Always happy to chat about anything EA!

How others can help me

Career-related:

  • How to better upskill for AI Safety engineering positions.
  • Full-time internships, research opportunities, or jobs from Summer 2025 onwards.
  • Entrepreneurial project ideas related to AI Safety
    • But broadly tech-y ideas are welcome!
  • Networking with those interested or involved in entrepreneurship for earning-to-give. 

Other: 

  • Meeting other EA university group organizers, learning about post-introductory fellowship activities, and learning how to scale up a university group into an academic program or something adjacent.
  • Chatting about different visions of avant-garde EA!

How I can help others

  • I can share my experience running an EA group at a US public university.
  • I can share the reasons I chose to attend graduate school, information about the application process, the state of academia, and whether EAs should consider this an option.
  • I consider myself decently well-versed with core EA ideas, and I'm happy to chat with newer EAs and point them to the right resources/people.
  • I can give people insights into my career planning process, key decisions I have taken and why (like switching out of my Ph.D.), and plans I have for the future.
  • I can share my experience upskilling in AI Safety, including mistakes and triumphs. Specifically, I am happy to chat about the paper replications, projects, and courses (such as ARENA) I have pursued so far.

Comments

Thanks, great post! 

A few follow-up questions and pushbacks:

  • Even if cannibalization happens, here are three questions that the claim "multiple well-designed studies analyzing substitution effects demonstrated that cultivated and plant-based meats appeal to sufficiently different consumer segments" may not answer:
     
    • Would commercially viable cultivated meat more favorably alter consumer preferences over time?
    • A non-negligible portion of veg*ns abandon their veg*nism — would introduction of cultivated meat improve retention of animal-free consumption patterns?
    • How would introduction of cultivated meat affect flexitarian dietary choices? Flexitarians eat a combination of animal- and plant-based meat. When cultivated meat becomes commercially viable, would flexitarians replace the former or the latter with cultivated meat?

      If the answer to any of these is yes, I think that is a point in favor of cultivated meat. I expect cultural change to be a significant driver of reduced animal consumption, and this cultural change will only be possible if there is a stable class of consumers who normalize consumption of animal-free products.

> To draw a historical parallel, when industrial chicken farming developed in the second half of the 20th century, people didn't eat less of other meats; they just ate chicken in addition.

  • Is this true? It seems that chicken did displace beef consumption by ~40% (assuming consumption ~ supply), or am I grossly misunderstanding the chart above?

    • Further, isn't there an upper bound to how much addition can happen? Meat became cheap and widely available, incomes rose, people started eating more of everything, so consumption increased. But there is only so much more that one can eat, so at some point people started making cost-based trade-offs between beef and chicken. If cultivated chicken were to be cheaper than animal-based beef or chicken, shouldn't we expect people to start making similar trade-offs?
70% disagree

AGI by 2028 is more likely than not

I hope to write about this at length once school ends, but in short, here are the two core reasons I feel AGI in three years is quite implausible:
 

  • The models aren't generalizing. LLMs are not stochastic parrots; they are able to learn, but the learning heuristics they adopt seem to be random or imperfect. And no, I don't think METR's newest benchmark is evidence against this.[1]
     
  • It is unclear if models are situationally aware, and currently, it seems more likely than not that they do not possess this capability. Laine et al. (2024) shows that current models are far below human baselines of situational awareness when tested on MCQ-like questions. I am unsure how models would be able to perform long-term planning—a capability I consider crucial for AGI—without being sufficiently situationally aware.

 

  1. ^

    As Beth Barnes put it, their latest benchmark specifically shows that "there's an exponential trend with doubling time between ~2–12 months on automatically-scoreable, relatively clean + green-field software tasks from a few distributions." Real-world tasks rarely have such clean feedback loops; see Section 6 of METR's RE-bench paper for a thorough list of drawbacks and limitations.
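
To make concrete how wide that quoted range is, here is a minimal, purely illustrative Python sketch of what a 2-month versus a 12-month doubling time would imply over roughly three years. The 1-hour starting task horizon and the assumption that the trend simply continues are hypothetical, not figures from METR or from this comment:

```python
# Purely illustrative arithmetic, not METR's methodology.
# Assumptions (hypothetical): current task horizon ~1 hour; trend holds for 3 years.
start_horizon_hours = 1
years = 3

for doubling_months in (2, 12):  # endpoints of the quoted ~2-12 month range
    doublings = years * 12 / doubling_months
    horizon = start_horizon_hours * 2 ** doublings
    print(f"doubling every {doubling_months:>2} mo -> {doublings:4.0f} doublings, "
          f"~{horizon:,.0f} hour task horizon")
```

Under these assumptions, the two endpoints imply task horizons that differ by several orders of magnitude, which is part of why the benchmark trend alone pins down so little about timelines.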

70% ➔ 50% disagree

Should EA avoid using AI art for non-research purposes?

Voting under the assumption that by EA, you mean individuals who are into EA or consider themselves to be a part of the movement (see "EA" is too vague: let's be more specific).

Briefly, I think the market/job displacement and environmental concerns are quite weak, although I do think EA professionals should avoid using AI art unless necessary, due to reputational and aesthetic concerns. However, for images generated in a non-professional context, I do not think avoidance is warranted.

(meta: why are people downvoting this comment? I disagree-voted, but there is nothing in this comment that makes me go, "I want fewer comments like this on the Forum.")

This helps. That is not at all how I interpreted 'our answer to both of your questions is "no."' Apologies!

> our answer to both of your questions is "no."

As much as I appreciate the time and effort you put into the analysis, this is a very revealing answer and makes me immediately skeptical of anything you will post in the future. 

The linked article really doesn't justify why you effectively think that not a single piece of information would change the results of your analysis. This makes me suspect that, for whatever reason, you are pre-committed to the belief "Sinergia bad."

Correct me if I am misinterpreting something or if you have explained why you are certain beyond an ounce of doubt that 1) there is no piece of information that would lead to different conclusions or interpretations of claims, and 2) there is no room for reasonable disagreement.

[This comment is no longer endorsed by its author]

> it's also a big clear gap now on the trusted, well-known non-AI career advice front

From the update, it seems that:

  • 80K's career guide will remain unchanged
    • I especially feel good about this, because the guide does a really good job of emphasizing the many approaches to pursuing an impactful career
    • n = 1 anecdotal point: during tabling early this semester, a passerby mentioned that they knew about 80K because a professor had prescribed one of the readings from the career guide in their course. The professor in question and the class they were teaching had no connection with EA, AI Safety, or our local EA group.
      • If non-EAs also find 80K's career guide useful, that is a strong signal that it is well-written, practical, and not biased to any particular cause
      • I expect and hope that this remains unchanged, because we prescribe most of the career readings from that guide in our introductory program
  • Existing write-ups on non-AI problem profiles will also remain unchanged
  • There will be a separate AGI career guide
  • But the job board will be more AI-focused

Overall, this tells me that groups should still feel comfortable sharing readings from the career guide and the other problem profiles, but recommend the job board selectively, primarily to those interested in "making AI go well" or to mid/senior non-AI people. Probably Good has compiled a list of impact-focused job boards here, so this resource could be highlighted more often.

  1. Another place people could be directed for career advice: https://probablygood.org/
  2. Since last semester, we have made career 1-on-1s a mandatory part of our introductory program.

    1. This semester, we will have two 1-on-1s
      1. The first one will be a casual conversation where the mentee and mentor get to learn more about each other
      2. The second one will be more in-depth: we will share this 1-on-1 sheet (shamelessly poached from 80K), the mentee will fill it out before the meeting and have a ≤1-hour conversation with a mentor of their choice, and after the meeting the mentor will add further resources to the sheet that may be helpful.

    The advice we give during these sessions ends up being broader than just the top EA ones, although we are most helpful in cases where:

    — someone is curious about EA/adjacent causes
    — someone has graduate school-related questions
    — someone wants general "how to best navigate college, plan for internships, etc." advice

    Do y'all have something similar set up? 

Makes sense. Just want to flag that tensions like these emerge because 80K is simultaneously a core part of the movement and also an independent organization with its own goals and priorities.

Upvoted and I endorse everything in the article barring the following:

> If you are reasonably confident that what you are doing is the most effective thing you can do, then it doesn’t matter if it fully solves any problem

I think most people in playpump-like non-profits and most individuals who are doing something feel reasonably confident that their actions are as effective as they could be. Prioritization is not taken seriously, likely because most haven't entertained the idea that differences in impact might be huge between the median and the most impactful interventions. On a personal level, I think it is more likely than not that people often underestimate their potential, are too risk-averse, and do not sufficiently explore all the actions they could be taking and all the ways their beliefs may be wrong. 

IMO, even if you are "reasonably confident that what you are doing is the most effective thing you can do," it is still worth exploring and entertaining alternative actions that you could take.
