Joseph_Chu

616 karma · Ontario, Canada
jlcstudios.com

Bio


An eccentric dreamer in search of truth and happiness for all. I formerly posted on Felicifia back in the day under the name Darklight and still use that name on Less Wrong. I've been loosely involved in Effective Altruism to varying degrees since roughly 2013.

Comments (119)

How much of a post are you comfortable for AI to write?

100% disagree

 

I have a particular writing style that I consider my "voice", and I fundamentally take pride in my writing skill and see writing as a craft and art form, so I refuse to use AI to write a single word of what I would publish to the world.

To me, using AI for writing is equivalent to having someone else write it for you.

Not quite a draft amnesty thing, but I have been playing with the idea of writing short stories or perhaps even novels that use time travellers as a vehicle for Longtermism. The idea is that time travellers from the far distant future are our descendants, the very people that Longtermism cares about, so their perspective could be something worth exploring in fiction.

Granted, I'm more of a soft Longtermist, and creative writing is notoriously hard to make any kind of living out of, so I'm not sure to what extent this is worth doing/trying/exploring, even as just a side project.

I've been on this forum since 2014 and I -still- feel this way sometimes. Although, I will say Less Wrong is notably worse for this.

It does get better after you make a few comments/posts and notice people aren't jumping all over you. I used to be much more terrified, but now, I'm only kinda apprehensive whenever I post.

I've explored very similar ideas before in things like this simulation based on the Iterated Prisoner's Dilemma but with Death, Asymmetric Power, and Aggressor Reputation. Long story short, the cooperative strategies do generally outlast the aggressive ones in the long run. It's also an idea I've tried to discuss (albeit less rigorously) before as The Alpha Omega Theorem and Superrational Signalling. The first of those was from 2017 and got downvoted to oblivion, while the second was probably too long-winded and got mostly ignored.
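For readers unfamiliar with the setup, here's a minimal sketch of an iterated prisoner's dilemma of the kind the linked simulation builds on. The original also models death, asymmetric power, and aggressor reputation; this stripped-down version only shows the basic iterated dynamic, and the strategy names and payoff values are standard textbook choices, not taken from the original.

```python
# Standard prisoner's dilemma payoffs: (my_move, their_move) -> my payoff.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game and return each side's total score."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two cooperators prosper together; the defector wins its first exchange
# but forfeits the much larger long-run cooperative surplus.
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
print(play(tit_for_tat, always_defect))  # (99, 104)
```

Even in this toy version you can see the core result: mutual cooperation accumulates far more total payoff over time than exploitation does, which is the dynamic the full simulation extends with reputation and power asymmetries.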

There are a bunch of random people like James Miller and A.V. Turchin and Ryo who have had similar ideas that can broadly be categorized under Bostrom's concept of Anthropic Capture, or Game Theoretic Alignment, or possibly a subset of Agent Foundations. The ideas are mostly not taken very seriously by the greater LW and EA communities, so I'd be prepared for a similar reception.

Answer by Joseph_Chu

We tried earlier. Carrick Flynn received substantial support from EA and the result was mediocre, with criticisms of EA actually having a negative effect on his campaign, as people pointed out the connection to the "billionaires and techbros" who apparently fund EA and such.

Also, the head of RAND, Jason Matheny, is an EA, and there are some connections between EA and the American NatSec establishment. CSET, for instance, was funded partly by OpenPhil. There is a tendency among a lot of EAs to try not to be partisan and to mostly support effective governance and policy kinds of things.

That being said, Dustin Moskovitz, the billionaire who is the main donor behind what was previously called Open Philanthropy and is now Coefficient Giving, has donated significantly and repeatedly to Democrats. OpenPhil has historically been by far the largest funder of EA stuff, particularly since SBF fell from grace, so Dustin's contributions can be seen tacitly as EA support for the Dems.

So, I don't think it's accurate to say EAs have made absolutely no effort on this front. We have, and it has stupidly backfired before and we're in this very awkward position politically where the whole TESCREAL controversy makes the EA brand tarnished to the Left, even though past surveys have shown that most rank and file EAs are centre-left to left. It's a frustrating situation.

Oh man, I remember the days when Eliezer still called it Friendly and Unfriendly AI. I actually used one of those terms in a question at a Q&A after a tutorial by the then-less-famous Yoshua Bengio at the 27th Canadian Conference on AI in 2014. He jokingly replied by asking if I was a journalist, before giving a more serious answer saying we were so far away from having to worry about that kind of thing (AI models back then were much more primitive; it was hard to imagine an object recognizer being dangerous). Fun times.

Strong upvoted as that was possibly the most compelling rebuttal to the simulation argument I've seen in quite a while, which was refreshing for my peace of mind.

That being said, it mainly targets the idea of a large-scale simulation of our entire world. What about the possibility that the simulation is for a single entity, with the rest of the world simulated at a lower fidelity? I had the thought that a way to potentially maximize future lives of good quality would be to contain each conscious life in a separate simulation where they live reasonably good lives catered to their preferences, with the apparent rest of the world being virtual. Granted, I doubt this conjecture because, in my own opinion, my life doesn't seem that great, but it seems plausible at least?

Also, that line about the diamond statue of Hatsune Miku was very, very amusing to this former otaku.

If I recall correctly, the old Felicifia forums (archive) had a lot of debates between negative and other utilitarians about this exact thing. There are also lots of other thought-experiment-like "repugnant conclusions" that go with various forms of utilitarianism, including the "Hedonium Shockwave" idea, where you tile the universe with happiness-generating computronium as the most efficient way to maximize utility.

The reality is, it's very hard to avoid weird hypothetical conclusions when you take as your ethics a simple rule like "minimize suffering" or "maximize happiness". This is a known problem with consequentialist ethics, and it's up to you if you want to bite the bullet or follow your moral intuitions.

I've thought about this a lot too. My general response is that it is very hard to see what one could do differently at a moment to moment level even if we were in a simulation. While it's possible that you or I are alone in the simulation, we can't, realistically, know this. We can't know with much certainty that the apparently sentient beings who share our world aren't actually sentient. And so, even if they are part of the simulation, we still have a moral duty to treat them well, on the chance they are capable of subjective experiences and can suffer or feel happiness (assuming you're a Utilitarian), or have rights/autonomy to be respected, etc.

We also have no idea who the simulators are and what purpose they have for the simulation. For all we know, we are a petri dish for some aliens, or a sitcom for our descendants, or a way for people's minds on colony ships travelling to distant galaxies to spend their time while in physical stasis. Odds are, if the simulators are real, they'll just make us forget about it if we finally figure it out, so they can continue it for whatever reasons.

Given all this, I don't see the point in trying to defy them or doing really anything differently than what you'd do if this was the ground truth reality. Trying to do something like attempting to escape the simulation would most likely fail AND risk getting you needlessly hurt in this world in the process.

If we're alone in the sim, then it doesn't matter what we do anyway, so I focus on the possibility that we aren't alone, and everything we do does, in fact, matter. Give it the benefit of the doubt.

At least, that's the way I see things right now. Your mileage may vary.

One thing we could do to help EA seem more cool without compromising at all on truth and intellectual integrity is to emphasize that what we're doing is actually heroic. Like, we are literally saving lives (bednets) and protecting the helpless (animals) and trying to save the world from potential doom (AI safety).

That leans into our altruist angle. I think we could also lean into the effectiveness angle by comparing ourselves to heroic characters in fiction who use their intelligence to outwit the bad guys. I'm thinking BBC's Sherlock, Spock from Star Trek, Lelouch from Code Geass, Tony Stark aka Iron Man, HPMOR, etc. In fact, EAs are kinda like combining Tony Stark's genius with the sense of morality and decency of Steve Rogers aka Captain America.

We are like Lawful Good D&D Paladins in the sense of championing a righteous cause, and D&D Wizards in the sense of using our intelligence to solve the problems.

So, I think we should lean into the idea that being EA is heroic. We're trying to save the world. Many of us make real sacrifices (10% to charity, veganism, career pivots, etc.) to make the world a better place.

As for villains, I mean, there are many we could point to other than just Altman. Elon Musk is basically a caricature at this point. Not only is he racing to ASI with the least safety of any of the frontier competitors, but as leader of DOGE he cut USAID and essentially killed or at least abandoned all the people depending on it. Another obvious choice would be an unaligned ASI itself.

But I think it's actually more important to show ourselves as the heroes we are than to name villains. People get mad at villains. People connect with heroes.

You might argue that AI safety in particular already sounds too sci-fi. I think we can't avoid that, and we may as well take advantage of the tropes our culture offers to make connections that resonate with people. "Heroes saving the world" is a lot more exciting and cool a frame than "maximizing impact through targeted donations and direct work", but in effect, in the real world, they are the same thing.

This is not PR or spinning facts. At the risk of sounding cheesy, our efforts really are heroic, and we deserve for our society and culture to appreciate that, and to recognize that they, too, can become heroes in our world.
