kta

432 karma · Joined · Working (0-5 years)

Participation (5)

  • Completed the Introductory EA Virtual Program
  • Attended an EA Global conference
  • Attended an EAGx conference
  • Attended more than three meetings with a local EA group
  • Received career coaching from 80,000 Hours

Posts (2)

Comments (47)

A lot of people have said sharing these notes was helpful, so I'm sharing them here on the EAF! Here are notes on NTI | bio's recent event with Dr. Lu Borio on H5N1 Bird Flu, in case anyone here finds them useful!

I... read this just today... and I was like wut???

...Until I saw the hat, then the date XD

Thanks for making this, Willem and David! Really interesting. After seeing the IRI statements and then the mean scores, they didn't initially look low to me, but compared to the average IRI scores across the years, that's certainly a difference! It made me a little sad, actually. I wonder what it might look like at present; part of me feels it could still look similar, despite the changes in the EA community. Also, while it might be relatively low-priority, I for one would personally be fascinated/excited to see this done again!

(Not an AI welfare/safety expert by any stretch, just adding my two cents here! Also, the banner really piqued my curiosity, and I loved hovering over the footnote! I've thought about digital sentience before, but this banner and this week really put me into a "hmm..." state.)

My view leans towards "moderately disagree." (I fluctuated between this, neutral, and slightly agree.) For context, when it comes to AI safety, I'd say "highly agree." Thoughts behind my current position:

Why I'd prioritize it less: 

  • I consider myself a longtermist, but I have always grappled with the opportunity costs of highly prioritizing more "speculative" areas. I care about high-EV areas, but I also grapple with deprioritizing very tangible cause areas, with existing beings, that have high EV too. Looking at the table below, I'd lean towards giving more resources to AW rather than making AI welfare a priority right now.
  • I also worry about the ramifications of diverting more EAs into a very dense, specialized sector if we pour more resources into AI welfare. While that specialization is important, I'm concerned it might sometimes lead to a narrower focus that doesn't fully account for the broader, interconnected systems of the world. In contrast, fields like biosecurity often consider a wider range of factors and take a more integrative perspective. That more holistic view can be crucial in addressing complex, multifaceted issues, and one reason I'd prioritize AI welfare less is the opportunity cost to areas that may be more holistic (not saying AI welfare has no claim to being holistic).
  • I have some concerns that trying to help AI right now might make things worse, since we don't yet fully know which of today's efforts could make things riskier. (Nathan said something to this effect in this thread.)
  • I don't know to what extent AI welfare failures would be irreversible compared to unaligned AI.
  • It seems less likely that multiplanetary civilizations would develop alongside advanced AI, which reduces the likelihood of AI systems spread across the universe, and in turn how much I'd prioritize AI welfare on a universal scale.

Why I'd still prioritize it: 

  • I can't see myself assigning a 0% chance that AI would be sentient, and I can't see myself allocating less than (edit:) 2% of effective altruism's resources and talent to something wide-scale that I'd hold a possibility of being sentient, even if it might be less standard (i.e., further outside average moral circles), because of big value creation, generally preventing suffering, and potentially enabling additional happiness, all of which I'm highly for.
  • I think more exploratory and relatively untapped work needs to be done, and just establishing enough baseline infrastructure is important for this high-EV type of cause (assuming we expect AI to be very widespread).
  • I like trammell's animal welfare analogy.

Overall, I agree that resources and talent should be allocated to AI welfare because it's prudent and can prevent future suffering. However, I moderately disagree with making it an EA priority, due to its current speculative nature and how I weigh it against AI safety. I think AI safety and solving the alignment problem should be a priority, especially in these next few years, and I hold some confidence that this would help prevent digital suffering.

Other thoughts:

  • I wonder if there'd ever be a conflict between AI welfare and human welfare, or the welfare of other beings. I haven't put much thought in here. Something that immediately comes to mind is that advanced AI systems might require substantial energy and infrastructure, potentially competing with human needs. From a utilitarian point of view, this presents a significant dilemma. However, there's the argument that solving AI alignment could mitigate these issues, ensuring that AI systems are developed and managed in ways that do not harm human welfare. My current thinking is that conflict between AI and human welfare is less likely if we solve the alignment problem and improve the policy infrastructure around AI. One might also look to historical precedents in bioethics, which suggest that ethical alignment leads to better welfare outcomes.
  • Some media that have made me truly feel for AI welfare are "I, Robot," "Her," Black Mirror's "Joan Is Awful," and "Klara and the Sun"!

This is super helpful! It would be cool if we could see the %s given to insect sentience or other smaller sub-cause areas like that. Does anyone have access to that?

Ah okay, good to know. Thanks, Henri!

Thanks for making this 🥺 Honestly, just reading your words about RSI and not getting out of bed, and then having you even recommend rest, for some reason hits me hard? 🥺

Ah okie cool, and yeah for sure!

Cool that you did this, Oscar! What prompted you to make it?

Regarding EA engagement, it seems like well-organized city groups in smaller countries can have a significant, concentrated impact. I read up a bit on EA Estonia/Estonia as a result of this post (I didn't know much about them before!). Estonia is a relatively small country with efforts concentrated in key urban centers (Tallinn, the capital, and Tartu, a university city). The synergy between the two seems to have the potential to create a concentrated and cohesive national EA network. The idea of cohesive communities <> smaller countries makes sense too.

Also, I imagine smaller countries with a single (or a few) concentrated, influential universities/intellectual hubs can see higher EA visibility, network cohesion, and potential EA engagement. E.g., Estonia with the University of Tartu? New Zealand and the University of Auckland? Switzerland and ETH Zurich? Norway and the University of Oslo? (People with more knowledge here, please correct me if I'm wrong!)
