yanni kyriacos

Director & Movement Builder @ AI Safety ANZ, GWWC Advisory Board Member (Growth)
819 karma · Joined Dec 2020 · Working (15+ years)

Bio

Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (n.b. we already have AGI).

Posts
20

Comments
147

Hi Jason! Thanks for the reply. Would you mind laying out why you believe it is impractical to ignore Émile? I think this is a crux.

Hi Ben! I suppose it depends on what each of us means by "taken seriously". I would prefer a post like this didn't get 185 karma, because I want us to collectively agree to make Émile irrelevant. 

I worry that, due to their high levels of openness and conscientiousness, EAs set an overly high "burden of proof" bar for considering someone a malicious actor. Taking one look at this Émile person's online content tells me he shouldn't be taken seriously. I can't tell whether EAs lack a "gut instinct" OR they have one but ignore it to a harmful degree!

There have been multiple occasions where I've copied and pasted email threads into an LLM and asked it things like:

  1. What is X person saying?
  2. What are the cruxes in this conversation?
  3. Summarise this conversation
  4. What are the key takeaways?
  5. What views are missing from this conversation?

I really want an email plugin that basically brute forces rationality INTO email conversations.

Tangentially - I wonder if LLMs can reliably convert people's claims into a % through sentiment analysis? This would be useful for forecasters, I believe (and for rationality in general).
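As a toy, non-LLM sketch of this claim-to-percentage idea: a simple keyword heuristic can map common hedging phrases to rough probabilities. The phrase-to-probability mapping below is invented for illustration only (a real system would use an LLM or calibrated data, e.g. surveys of how people interpret words like "likely"):

```python
# Toy baseline: map verbal-confidence phrases to rough probabilities.
# The numbers here are illustrative assumptions, not calibrated values.
VERBAL_PROBABILITIES = {
    "almost certainly": 0.95,
    "very likely": 0.85,
    "likely": 0.70,
    "probably": 0.65,
    "maybe": 0.50,
    "unlikely": 0.30,
    "very unlikely": 0.10,
}

def estimate_confidence(claim: str):
    """Return a rough probability for the strongest hedge found, or None."""
    text = claim.lower()
    # Check longer phrases first so "very likely" matches before "likely",
    # and "very unlikely" before "unlikely".
    for phrase in sorted(VERBAL_PROBABILITIES, key=len, reverse=True):
        if phrase in text:
            return VERBAL_PROBABILITIES[phrase]
    return None

print(estimate_confidence("I think this will very likely succeed"))  # 0.85
print(estimate_confidence("It is unlikely we agree on this"))        # 0.3
```

This obviously misses negation, context, and sarcasm, which is exactly where an LLM-based approach could plausibly do better.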

"And as I said, I expect such a project to appear soon."

I don't know whether to read this as "Zach has some inside information that gives him high confidence it will exist", "Zach is doing wishful thinking", or something else!

Hi Zach! To clarify, are you basically saying you don't want to improve the project much more than where you've got it to? I think it is possible you've tripped over a highly impactful thing here!

Hi Guy! Thanks for commenting :) I am a bit confused by the analogy. Would you mind explaining it further?

I think this is a great idea! Is there a way to have two versions:

  1. The detailed version (with %'s, etc)
  2. And the meme-able version (which links to the detailed version)

Content like this is only as good as the number of people who see it, and while its detail would necessarily be reduced in the meme-able version, I think it is still worth doing.

The Alliance for Animals does this in the lead up to elections and it gets spread widely: https://www.allianceforanimals.org.au/nsw-election-2023

Thanks for taking the time to comment Michael! I appreciate it :)

I probably should have mentioned in my post that I've spent more than 1,000 hours consuming Buddhist-related content and/or meditating, which gives me a narrow and deep "inside view" on the topic. My views (and comments below) are heavily informed by Tibetan Buddhism in particular. Regarding your points:


"As I understand, enlightenment doesn't free you from all suffering. Enlightenment is better described as "ego death"

  • My 2 cents is that the path to Enlightenment can be started (but not fully realised) by glimpsing the illusory nature of subject/object duality. The self is the ultimate "subject", so I agree that "ego death" is a viable path!
  • I think full Enlightenment frees someone from basically all unnecessary suffering (which in Buddhism is distinguished from pain). A simple formula is something like "discomfort × resistance = suffering". An enlightened person, in my view, wouldn't be attached to a particular moment or its content, and therefore wouldn't "cling" to or "resist" it.

"Enlightenment is extremely hard to achieve (it requires spending >10% of your waking life meditating for many years) and doesn't appear to make you particularly better at anything. Like if I could become enlightened and then successfully work 80 hours a week because I stop caring about things like motivation and tiredness, that would be great, but I don't think that's possible."

  • I think full Enlightenment is extremely hard to achieve, like you said, but getting 10% of the way there is totally within a normal person's grasp. I think it is plausible this could increase the average person's wellbeing as much as a good diet and exercise combined. Maybe more.
  • I think becoming partly Enlightened could make a person more altruistic but less driven. Hard to say!

If you're interested in exploring this further from a personal perspective, I recommend checking out Loch Kelly :)

Hi Matthew! I'd be curious to hear your thoughts on a couple of questions (happy for you to link if you've posted elsewhere): 

1/ What is the risk level above which you'd be OK with pausing AI?

2/ Under what conditions would you be happy to attend a protest? (LMK if you have already attended one!)
