yanni kyriacos

Co-Founder & Director @ AI Safety ANZ, GWWC Advisory Board Member (Growth)
1097 karma · Joined · Working (15+ years)


Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (n.b. we already have AGI).




Good point. In that case the hypothetical user isn't using it as a forum (i.e. for discourse).

Something bouncing around my head recently ... I think I agree with the notion that "you can't solve a problem at the level it was created".

A key point here is the difference between "solving" a problem and "minimising its harm".

  • Solving a problem = engaging with the problem by going up a level from the one at which it was created
  • Minimising its harm = trying to solve it at the level it was created

Why is this important? Because I think EA and AI Safety have historically focussed on (and have their respective strengths in) harm minimisation.

This obviously applies at the micro level. Here are some bad examples:

  • Problem: I'm experiencing intrusive + negative thoughts
    • Minimising its harm: engage with the thought using CBT
    • Attempting to solve it by going meta: apply meta cognitive therapy, see thoughts as empty of intrinsic value, as farts in the wind
  • Problem: I'm having fights with my partner about doing the dishes
    • Minimising its harm: create a spreadsheet and write down every thing each of us does around the house and calculate time spent
    • Attempting to solve it by going meta: discuss our communication styles and emotional states when frustration arises

But I also think this applies at the macro:

  • Problem: People love eating meat
    • Minimising harm by acting at the level the problem was created: asking them not to eat meat
    • Attempting to solve by going meta: replacing the meat with lab grown meat
  • Problem: Unaligned AI might kill us
    • Minimising harm by acting at the level the problem was created: understand the AI through mechanistic interpretability
    • Attempting to solve by going meta: probably just Governance

as of comment, 6 agrees and 6 disagrees. perfect :) 

I think it is good to have some ratio of upvoted/agreed : downvoted/disagreed posts in your portfolio. I think if all of your posts are upvoted or get high agreement, then you're either playing it too safe or you've eaten the culture without chewing first.

Media is often bought on a CPM basis (cost per thousand views). A display ad on LinkedIn, for example, might cost $30 CPM. So yeah, I think merch is probably underrated.
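A minimal sketch of the CPM arithmetic behind this comparison, using the $30 LinkedIn figure above (the figure and function names are illustrative, not from any ad platform's API):

```python
def cost_per_impression(cpm: float) -> float:
    """Convert a CPM rate (cost per 1,000 impressions) to cost per single impression."""
    return cpm / 1000


def impressions_for_budget(budget: float, cpm: float) -> int:
    """How many impressions a given ad budget buys at a given CPM rate."""
    return int(budget / cpm * 1000)


# At $30 CPM, each view costs 3 cents, and $300 buys 10,000 views.
print(cost_per_impression(30.0))          # cost per view in dollars
print(impressions_for_budget(300.0, 30.0))  # views bought with $300
```

This is the baseline a piece of merch competes against: a t-shirt seen repeatedly over its lifetime can plausibly beat a 3-cents-per-view display ad.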

Thanks for pointing this out :)

I think longview philanthropy might look after HNW individuals in the EA space?

Basically, I think there is a good chance we have 15% unemployment rates in less than two years caused primarily by digital agents.

Totally different. I had a call with a voice actor who has colleagues hearing their voices online without remuneration. Tip of the iceberg stuff.

Yeah the problem with some surveys is they measure prompted attitudes rather than salient ones.
