Ebenezer Dukakis

I'm honestly a little confused about why AI would inspire people to pursue money and power. Technological abundance should make both a lot less important. Relationships will be much more important for happiness. Irresponsible AI development is basically announcing to the world that you're a selfish jerk, which won't be good for your personal popularity or relationships.

I wish people would talk more about "sensitivity analysis".

Your parameter estimates are just that, estimates. They probably result from intuitions or napkin math. They probably aren't that precise. It's easy to imagine a reasonable person generating different estimates in many cases.

If a relatively small change in parameters would lead to a relatively large change in the EV (for example: in Scenario 3, shift the estimated "probability of harm" just slightly, adding a few more 9s, and the action looks far less attractive!), then you should either (a) choose a different action, or (b) validate your estimates quite thoroughly, since the value of information is very high. Also beware of the Unilateralist's Curse in this scenario, since other actors may be making parallel estimates for the action in question.
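To make this concrete, here is a toy sensitivity check. All numbers are made up for illustration (they are not from any real scenario): a modest benefit if the action goes well, a large loss if it goes badly. Sweeping the probability-of-harm estimate across nearby values shows the sign of the EV flipping, which is exactly the situation where more validation is warranted.

```python
# Toy sensitivity analysis for an expected-value estimate.
# All parameter values are hypothetical, chosen only to illustrate
# how a small change in one estimate can flip the EV's sign.

def expected_value(p_harm, harm, benefit):
    """EV of taking the action: gain `benefit` if no harm, lose `harm` otherwise."""
    return (1 - p_harm) * benefit - p_harm * harm

benefit = 1_000      # value if the action goes well
harm = 1_000_000     # loss if it goes badly

# Sweep the probability-of-harm estimate across plausible nearby values.
for p_harm in [0.0001, 0.0005, 0.001, 0.005]:
    ev = expected_value(p_harm, harm, benefit)
    print(f"p_harm={p_harm:.4f}  EV={ev:+,.1f}")
```

With these numbers, the action looks positive at p_harm = 0.0001 but clearly negative at 0.005, even though both estimates might come from the same napkin math.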

AI governance could be much more relevant in the EU, if the EU were willing to regulate ASML. Tell ASML they can only service compliant semiconductor foundries, where a "compliant semiconductor foundry" is defined as a foundry which only allows its chips to be used by compliant AI companies.

I think this is a really promising path for slower, more responsible AI development globally. The EU is known for its cautious approach to regulation. Many EAs believe that a cautious, risk-averse approach to AI development is appropriate. Yet EU regulations are often viewed as less important, since major AI firms are mostly outside the EU. However, ASML is located in the EU, and serves as a chokepoint for the entire AI industry. Regulating ASML addresses the standard complaint that "AI firms will simply relocate to the most permissive jurisdiction". Advocating this path could be a high-leverage way to make global AI development more responsible without the need for an international treaty.

Note that Moskovitz's involvement in Asana probably influences his perspective on AI:

For us, there was only one answer. We quickly assembled a dedicated AI team with a clear mandate: find innovative ways to weave the frontier models into the very fabric of Asana in a safe and reliable way. We collaborated closely with leading AI pioneers like Anthropic and OpenAI to accelerate our efforts and stay at the forefront. We launched Asana AI and brought to market powerful generative AI features like smart status updates, smart summaries, and smart chat. The process of taking these first steps unlocked our learning and led us to countless new ideas. We knew this was just the beginning.

...

...We could see the power and potential of integrating AI into Asana's core collaborative features. With our Work Graph data model providing a perfect foundation for AI to understand the complex connections between goals, portfolios, projects, and tasks, we realized AI could work alongside human teammates in a natural way...

...

...We invite you to be the first to know about the latest developments in AI-powered collaborative work here...

...

https://asana.com/inside-asana/ai-transforming-work

Even if it just gets a few upvotes, it is still likely to show up when people search the forum or read all of the posts under a particular tag. Arguably, those are the readers you care most about, since they are most likely to be working in the area you're writing about.

There tends to be a winner-take-all effect for post upvotes. Perhaps it makes sense to assess quality according to log(upvotes)? A post with 100 upvotes is probably not 10x as good as a post with 10 upvotes. It just happened to hit the frontpage at a time when there was little competition or something like that.
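A minimal sketch of what log-scaling does to the comparison (the post names and counts are invented for illustration):

```python
import math

# Illustrative only: raw upvote counts vs. log-scaled quality scores.
posts = {"frontpage hit": 100, "solid niche post": 10}

for name, upvotes in posts.items():
    score = math.log10(upvotes)
    print(f"{name}: {upvotes} upvotes -> log10 score {score:.1f}")
```

Under log10 scoring, a 10x difference in upvotes becomes only a one-point difference in score, which better matches the intuition that the 100-upvote post is not 10x better.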

This is a more general problem with social media. Many users are either getting more attention than they want, or less attention than they want. It should really be called "DIY broadcast media", not "social media".

Vote power should scale with karma

I think vote power has helped maintain a culture of expertise here on the forum.  It's hard to find quality discussion on the internet.

However, I think concerns about groupthink are valid.  We could try having a section of the forum with no voting as a "control group", for instance.  Or tweak the algorithm to penalize feedback loops.

The examples I gave (downvoting based on opinion rather than content, downvoting based on ideology, upvoting your ingroup, upvoting because they're your friends) are all things that can be done while staying anonymous.

Your initial complaint was mass-downvoting, which is explicitly called out in the FAQ (based on your own quote!) as something the admins are willing to de-anonymize for, no?

You think I haven't done that?

If you had done it, I would expect your initial comment to contain something along the lines of: "I reached out privately to the admins, through standard channels, to complain about mass-downvoting. Despite the forum guidelines, they didn't do anything. Their stated reason was X."

Up to you. But I think voting does a tremendous amount to influence the forum's culture. Nudging people towards voting wisely, and talking about how to vote, seems pretty high-leverage to me. Right now, my sense is we're in a bit of a bad place, where people take karma scores too seriously given the low amount of thought that goes into them.

Voting is anonymous

I don't believe that is true for admins:

We will try to maximally protect privacy and pseudonymity, as long as it does not seriously interfere with our ability to enforce important norms on the Forum.

https://forum.effectivealtruism.org/posts/yND9aGJgobm5dEXqF/guide-to-norms-on-the-forum

This forum is fairly small. It seems relatively feasible for the admins to enforce norms manually.

But in any case, I encourage you to prove me wrong. I encourage you to reach out to the admins, and then report back here when nothing useful happens, as you seem to be predicting.

Retributive downvoting appears to be a bannable offense, according to the forum guide:

https://forum.effectivealtruism.org/posts/yND9aGJgobm5dEXqF/guide-to-norms-on-the-forum#Voting_norms 

I suggest you take your case up with the admins.

More generally, perhaps it would be valuable to publicize the voting guide better?  E.g. every time my mouse hovers over a voting widget, a random voting guideline could pop up, so over time I would learn all of the guidelines.  @Sarah Cheng 

I think the risk of groupthink death spirals is real, and I suspect I've been on the receiving end of it. "With great power comes great responsibility."

In any case, we should expect heavy survivorship bias here in favor of the status quo, since EAs or potential EAs who are turned off by the karma system will largely or entirely leave the forum (e.g. me).

Do you post on the EA subreddit?  Everyone's vote power is equal there:

https://reddit.com/r/EffectiveAltruism/ 

IMO, the discussion quality on the subreddit is not great. I'm unsure if that's because it lacks scaled vote power, or simply because it has fewer serious EAs and more random redditors. I wonder what would happen if serious EAs made a dedicated effort to post on the subreddit and bring the random redditors up to speed.
