Ebenezer Dukakis

1299 karma · 172 comments

FWIW I certainly wouldn't tell anyone not to boycott ChatGPT. Decreasing OpenAI's revenue is good for the world.

I suppose if you're using a free account and blocking ads, you are adding costs without adding revenue. The important thing is to boycott acts which put money in OpenAI's pocket, which is not necessarily the same thing as boycotting all of their offerings.

It occurs to me that a person could create a nonprofit competitor to ChatGPT, which makes use of open models and donates excess revenue to AI alignment research. That way you can pay for a chatbot without contributing to AI advancement.

(I think the US example is perhaps a bit more complicated. It's not just very wealthy, it's also highly unequal and offers much weaker safety nets than most other liberal democracies. So the bitter politics may have more to do with material insecurity than with post-scarcity boredom.)

As I linked in my comment, ideologues in the US tend to be rather wealthy:

Progressive Activists have strong ideological views, high levels of engagement with political issues, and the highest levels of education and socioeconomic status. Their own circumstances are secure. They feel safer than any group, which perhaps frees them to devote more attention to larger issues of social justice in their society.

https://hiddentribes.us/profiles/#progressive-activists

The Devoted Conservatives are the counterpart to the Progressive Activists, but at the other end of the spectrum. They are one of the highest-income groups, and they feel happier and more secure than most other Americans.

https://hiddentribes.us/profiles/#devoted-conservatives

I worry that American ideologues have got all the lower levels of Maslow's hierarchy satisfied, and they are now pursuing self-actualization through partisanship.

Furthermore, there appear to be a number of "urban legends" about the United States floating around the internet which are not true, or at least not as obviously true as you've been led to believe. One blogger claims:

Common measures of poverty in the U.S. do not factor in taxes and transfers. The very things implemented to address the issue. We already “won” the “war on poverty” in absolute terms to reduce suffering—as measured by consumption. The same stunt is often done for inequality. If you don’t move the goalposts and count existing policy interventions, we’re already largely post-scarcity and highly egalitarian—to the extent the U.S. is more progressive and redistributive than any European country. Which is why poverty became positively correlated with obesity about the same time that bottom line dropped below 5% in the 1990s.

source, see also

One possibility I worry about is that as scarcity recedes, people will be relatively less motivated to play positive-sum cooperation games. With material goods less of a bottleneck, there's less motivation to cooperate in order to accumulate more of them. Such positive-sum games could be replaced by zero-sum petty status games or political hobbyism, like you see on social media for example. The US is an interesting case study, as a very wealthy country with bitter, Manichean politics -- there may be a connection.

If this theory is true, the influence of fanaticism could increase in the future as global economic growth progresses. Economic growth is probably helpful in the short term, to show people that positive-sum games are possible and worth playing. But the "hedonic treadmill" or diminishing marginal returns could dominate in the longer term. Sort of like how coffee stops working as well if you drink 4 cups every day.

The best approach might be to create and popularize more institutions which harmlessly dissipate human tribal instincts, e.g. sports fandom.

I'm typically a non-interventionist when it comes to foreign policy (probably fairly extreme by EA standards; I support US withdrawal from NATO). But it seems to me that the evaluation of a given foreign policy depends largely on what baseline you use for comparison purposes. If North Korea is used as the baseline for what communism can do to a country, modern Indonesia seems preferable by comparison.

Critics of US foreign policy typically use a high implicit baseline which allows them to blame the US no matter what the US does.

Consider a country with a bad government or some other sort of political disaster.

  • If the US opposes the country's government, the US is to blame because it is "destabilizing" the country. ("The US destabilized Iraq.")

  • If the US collaborates with the country's government, the US is to blame because it is "propping up" an odious regime. ("The US propped up Suharto.")

  • If the US does nothing, the US is "complicit" through its inaction. ("The US is complicit in Russia's invasion of Ukraine.")

I suspect that this little trifecta is leading to increasing nihilism in US foreign policy circles.

Seems to me that if you do it right, there's a self-correcting element to emphasizing rationality. If an idea is wrong, it ought to be possible to refute it through rational argument. And if you aren't able to refute it--why are you so certain that it's a bad idea?

Seems to me that a lot of EA experience could actually be a negative, if it worsens organizational groupthink.

Number one is, we hire for capability and learning ability before we hire for expertise. We actually would rather hire smart, curious people than people who are deep, deep experts in one area or another... somebody who's been doing the same thing forever will typically just replicate what they've seen before. You need a mix, but we skew heavily towards people who are kind of open to new ideas and creative.

— Google VP of People Operations, 2013

In general EA orgs seem really overconfident to me about the quality of their candidate evaluation metrics. How many of these metrics have proven external validity? Seems kinda pointless to put a ton of effort into optimizing a metric, if you don't even know if it corresponds to what you actually want...

I find myself wondering if EAs have been doing the same thing forever and they typically just replicate what they've seen before 😁

"We need more people working on these neglected issues" doesn't necessarily mean that orgs have the management capacity to absorb more people.

Imagine I'm running a vegan restaurant. I've started serving my customers jackfruit tacos. They really like the tacos. So I run a giant advertising campaign all over the city telling people about my tacos. Come the weekend, my restaurant is flooded with customers. But after the first 50 customers, I run out of jackfruit, and the rest of the customers don't get to try the tacos. How do you think those customers would feel about my restaurant?

How would you feel, if you drove across town to try some jackfruit tacos which you learned about in an advertisement, and the restaurant was all out? You'd probably feel a sense of disappointment, and conclude that the restaurant is not very well run. If I told you "well the advertisement was technically right, the tacos are truly delicious" you'd probably be even more annoyed.

If you advertise EA as a place where talented people are needed in order to make the world a better place, and talented people arrive in EA, and they don't feel at all needed... they might not come back. Even if it's true in some technical sense that more people are needed in the abstract. Same way you might not come back to my vegan restaurant, even if it's technically true that the tacos are delicious. Replies like this miss the point, and give you a reputation for callous mismanagement. Eventually you burn through your entire potential customer base.

That doesn't necessarily mean you need to stop advertising. Just give people an accurate idea of what to expect, instead of hiding behind "it was technically correct". If the advertisement says "Jackfruit tacos available for first 50 customers", you won't be as annoyed if they are all out by the time you arrive.

The reason to fix voting power is that post-AGI, rapid population growth will become possible (whether of digital citizens, or biological ones via artificial wombs and robot child-rearers). If project voting were one-person one-vote, then whichever country grew its population the fastest could seize power.

This seems like a consideration against empowering democracies more broadly, if democracies would be controlled by the internal factions which grow their populations fastest.
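As a toy illustration of how quickly a fast-growing faction could capture a one-person-one-vote majority (all starting populations and growth rates here are hypothetical, chosen only to show the compounding dynamic):

```python
def periods_until_majority(a0: float, b0: float, ga: float, gb: float) -> int:
    """Number of growth periods until faction B outnumbers faction A,
    given starting populations a0, b0 and per-period growth rates ga, gb."""
    a, b, t = a0, b0, 0
    while b <= a:
        a *= 1 + ga  # incumbent majority grows slowly
        b *= 1 + gb  # fast-growing faction compounds
        t += 1
    return t

# A faction starting at 10% of the electorate but growing 20% per period
# (say, via digital citizens) overtakes a 1%-per-period majority:
print(periods_until_majority(100, 10, 0.01, 0.20))  # prints 14
```

The point of the sketch is just that under exponential growth, the initial size advantage matters far less than the growth-rate gap.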

It seems plausible to me that if you consider the combined citizenry of modern democratic nations, the first principal component of political disagreement is likely to cut within nations rather than between them. (People often agree more with ideologically similar foreigners than with ideologically dissimilar co-nationals.)

In the same way US citizens often view state politics with an eye to affecting federal politics, citizens in democratic nations might view their national politics with an eye to affecting global governance. You might essentially be left with a single global polity with a single point of failure.

You argue that democracies are designed and tested to govern political power. But this sort of weird hypothetical seems fairly far from the regime that democracies have been designed and tested for.

I would suggest a very different approach: trying to move away from single-point-of-failure to the greatest possible extent, and designing global governance so it can withstand as many simultaneous failures as possible. It's especially important to reduce vulnerability to correlated failures.

On the bright side, we might end up getting an AI pause out of this, if the Netherlands wakes up and decides that it no longer wants to help supply the equipment used to make chips for advanced AI, which could end up either (a) misaligned or (b) controlled by Trump. See previous discussion, protest. I reckon this moment represents a strong opportunity for Dutch EAs concerned with AI risks. Maybe get a TV interview where you explain how ASML supplies the machines used to fabricate cutting-edge AI chips for the US, then explain AI risk, etc.

In terms of red-teaming my own suggestion, I am somewhat worried about further politicizing the issue of AI / highlighting national rivalries. Seems best to push for symmetric restrictions on China--they are directly supplying materials to Russia for its war in Ukraine, after all. Eliezer Yudkowsky could be an interesting person to contact for red-teaming purposes, since he's strongly in favor of an AI pause, but also seems to resist any "international rivalry" framing of AI risk concerns?
