Bio


I recently graduated with a master's in Information Science. Before making a degree switch, I was a Ph.D. student in Planetary Science where I used optimization models to physically characterize asteroids (including potentially hazardous ones). 

Historically, my most time-intensive EA involvement has been organizing Tucson Effective Altruism — the EA university group at the University of Arizona. If you are a movement builder, let's get in touch! 

Career-wise, I am broadly interested in capital generation, x/s-risk reduction, and earning-to-give for animal welfare. Always happy to chat about anything EA!

How others can help me

Career-related:

  • Tech-y entrepreneurial project ideas.
    • Ideas related to AIS are welcome!
  • How to upskill better for AI Safety engineering positions.
  • Full-time internship, research opportunities, or jobs from Summer 2025 onwards.
  • Networking with those interested or involved in entrepreneurship for earning-to-give. 

Other: 

  • Meeting other EA university group organizers, learning about post-introductory fellowship activities, and learning how to scale up a university group into an academic program or something adjacent.
  • Chatting about different visions of avant-garde EA!

How I can help others

  • I can share my experience running an EA group at a US public university.
  • I can share the reasons I chose to attend graduate school, information about the application process, the state of academia, and whether EAs should consider this an option.
  • I consider myself decently well-versed with core EA ideas, and I'm happy to chat with newer EAs and point them to the right resources/people.
  • I can give people insights into my career planning process, key decisions I have taken and why (like switching out of my Ph.D.), and plans I have for the future.
  • I can share my experience upskilling in AI Safety so far — mistakes, triumphs, and more. Specifically, I am happy to chat about paper replications, projects, and courses (such as ARENA) that I have pursued so far.

Comments

we may take action up to and including building new features into the forum’s UI, to help remind users of the guidelines.

Random idea: for new users and/or users with less than some threshold level of karma and/or users who use the forum infrequently, Bulby pops up with a little banner that contains a tl;dr on the voting guidelines. Especially good if the banner pops up when a user hovers their cursor over the voting buttons. 

EAG London would be the perfect place to talk about this with OP folks. Either way, all the best fundraising!

There is going to be a Netflix series on SBF titled The Altruists, so EA will be back in the media. I don't know how EA will be portrayed in the show, but regardless, now is a great time to improve EA communications — more specifically, to be a lot louder about historical and current EA wins. We just don't talk about them enough!

A snippet from Netflix's official announcement post:

Are you ready to learn about crypto?

Julia Garner (Ozark, The Fantastic Four: First Steps, Inventing Anna) and Anthony Boyle (House of Guinness, Say Nothing, Masters of the Air) are set to star in The Altruists, a new eight-episode limited series about Sam Bankman-Fried and Caroline Ellison.

Graham Moore (The Imitation Game, The Outfit) and Jacqueline Hoyt (The Underground Railroad, Dietland, Leftovers) will co-showrun and executive produce the series, which tells the story of Sam Bankman-Fried and Caroline Ellison, two hyper-smart, ambitious young idealists who tried to remake the global financial system in the blink of an eye — and then seduced, coaxed, and teased each other into stealing $8 billion.

Assuming this is true, why would OP pull funding? I feel Apart's work strongly aligns with OP's goals. The only reason I can imagine is that they want to move money away from the early-career talent-building pipeline to more mid/late-stage opportunities.

90% disagree

The next existential catastrophe is likelier than not to wipe off all animal sentience from the planet

Intuitively seems very unlikely.

  1. The Chicxulub impact wiped out the dinosaurs but not smaller mammals, fish, and insects. Even if a future extinction event caused a total ecosystem collapse, I would expect that some arthropods will be able to adapt and survive.
  2. I feel a goal-driven, autonomous ASI won't care much about the majority of non-humans. We don't care about the anthills we trample when constructing buildings (ideally, we should); similarly, an ASI would not intentionally target most non-humans — they aren't competing for the same resources or obstructing the ASI's goals.

Thanks, great post! 

A few follow-up questions and pushbacks:

  • Even if cannibalization happens, here are three questions that "multiple well-designed studies analyzing substitution effects demonstrated that cultivated and plant-based meats appeal to sufficiently different consumer segments" may not answer:
     
    • Would commercially viable cultivated meat more favorably alter consumer preferences over time?
    • A non-negligible portion of veg*ns abandon their veg*nism — would the introduction of cultivated meat improve retention of animal-free consumption patterns?
    • How would the introduction of cultivated meat affect flexitarian dietary choices? Flexitarians eat a combination of animal- and plant-based meat. When cultivated meat becomes commercially viable, would flexitarians replace the former or the latter with cultivated meat?

      If the answer is a yes to any of these, I think that is a point in favor of cultivated meat. I expect cultural change to be a significant driver of reduced animal consumption, and this cultural change will only be possible if there is a stable class of consumers who normalize consumption of animal-free products.

To draw a historical parallel, when industrial chicken farming developed in the second half of the 20th century, people didn't eat less of other meats; they just ate chicken in addition.

  • Is this true? It seems that chicken did displace beef consumption by 40% (assuming consumption ~ supply) — or am I grossly misunderstanding the chart above?

    • Further, isn't there an upper bound to how much addition can happen? Meat became cheap and widely available, incomes rose, people started eating more of everything, so consumption increased. But there is only so much more that one can eat, so at some point people started making cost-based trade-offs between beef and chicken. If cultivated chicken were to be cheaper than animal-based beef or chicken, shouldn't we expect people to start making similar trade-offs?
70% disagree

AGI by 2028 is more likely than not

I hope to write about this at length once school ends, but in short, here are the two core reasons I feel AGI in three years is quite implausible:
 

  • The models aren't generalizing. LLMs are not stochastic parrots — they are able to learn — but the learning heuristics they adopt seem to be random or imperfect. And no, I don't think METR's newest benchmark is evidence against this.[1]
     
  • It is unclear if models are situationally aware, and currently, it seems more likely than not that they do not possess this capability. Laine et al. (2024) show that current models are far below human baselines of situational awareness when tested on MCQ-like questions. I am unsure how models would be able to perform long-term planning — a capability I consider crucial for AGI — without being sufficiently situationally aware.

 

  1. ^

    As Beth Barnes put it, their latest benchmark specifically shows that "there's an exponential trend with doubling time between ~2 -12 months on automatically-scoreable, relatively clean + green-field software tasks from a few distributions." Real world tasks rarely have such clean feedback loops; see Section 6 of METR's RE-bench paper for a thorough list of drawbacks and limitations.

70% ➔ 50% disagree

Should EA avoid using AI art for non-research purposes?

Voting under the assumption that by EA, you mean individuals who are into EA or consider themselves to be a part of the movement (see "EA" is too vague: let's be more specific).

Briefly, I think the market/job displacement and environmental concerns are quite weak, although I think EA professionals should avoid using AI art unless necessary due to reputational and aesthetic concerns. However, for images generated in a non-professional context, I do not think avoidance is warranted.
