This is a special post for quick takes by Clara Torres Latorre 🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Just noticed that I tend to up/downvote and agree/disagree-vote more or less depending on what the current vote count is.

Standard herding bias at work.

Hoping that saying it out loud will make it weaker, and maybe other people can relate.

(Not trying to represent an institutional take here, other mods may disagree)

Would you mind spelling out the problem a bit? In my view, the current karma total is important information when I'm deciding whether to up- or downvote something.

For example, I might have downvoted this quick take if it were over 60 or so (because quick takes above 60 are generally worth reading for a wide group), and yet I wouldn't downvote it at the karma I found it at (1), because it doesn't deserve to have negative karma[1]

In other words, I think of karma almost as the question "is this post/comment under-, over-, or correctly rated?", and I don't currently think that that's a problem. 

  1. ^

    Also TBF I generally avoid upvoting or downvoting anything about the Forum itself, since I might be biased. 

I would say the problem is doing the opposite: upvoting something partly because it already has positive karma, on the reasoning that "this must be valuable".

I'm not actively doing this or endorsing it; I just caught myself having this reflex.

I try to avoid downvoting things that are already in the red, personally. Unless it's very bad. 

For context: Clara is right, there is good experimental evidence that this occurs in online comment forums. This is on top of the simple mechanism that more highly upvoted content is more likely to be seen for various reasons.
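As a toy illustration of that feedback loop, here is a minimal sketch (the numbers and the `simulate_votes` helper are invented for this example; this is not the design of the cited experiments): each simulated voter upvotes with base probability 0.5, nudged toward the sign of the current score.

```python
import random

def simulate_votes(n_voters=1000, herding=0.0, seed=0):
    """Toy model: each voter upvotes with base probability 0.5,
    nudged toward the sign of the current score by `herding`."""
    rng = random.Random(seed)
    score = 0
    for _ in range(n_voters):
        # Herding shifts the upvote probability toward the current majority.
        bias = herding if score > 0 else -herding if score < 0 else 0.0
        p_up = min(max(0.5 + bias, 0.0), 1.0)
        score += 1 if rng.random() < p_up else -1
    return score

# Without herding, final scores hover near zero; with a small herding
# term, an early random streak tends to snowball into a large score.
print([simulate_votes(herding=0.0, seed=s) for s in range(5)])
print([simulate_votes(herding=0.1, seed=s) for s in range(5)])
```

Even a small nudge makes early random streaks snowball, which is roughly the qualitative pattern the herding experiments report.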

I'd assume this holds true for EA forum content. I do the same thing @Toby Tremlett🔹 is describing to some extent, but I'd be surprised if my system 2 thinking outweighs my system 1 on net in this regard. I suspect I personally do this most with very low Karma posts, which I neglect to upvote because of a vague embarrassment over the possibility of promoting content with some flaw I missed. 

I'm not sure I understood what you are saying here. Do you vote more in the direction the count is already tilting, or are you just more likely to vote at all if it's a high-vote-volume post?

I am aware I vote based upon the current karma count.  If someone has a bunch of karma, then I don't mind downvoting. If the post or user has super little karma, I upvote it much more readily. Something has to be truly egregious for me to push it further into negative karma.

In the midrange I am less likely to vote at all, and I vote more accurately: if it was personally valuable to me, if I feel it's underrepresented, or if I feel it would be better for more eyes to see it, then I upvote. My favorite thing is to disagree-vote and then give karma for a valuable contribution. Then I feel like I'm (a True Rationalist =P) counteracting the natural "like+agree+karma" impulse. I try to vote like this as often as possible.

I would say I have a tendency to go with the crowd, yes: voting in the same direction as the votes that are already there.

Which is the opposite of taking the current vote count into account in the way you suggest.

I think this (the first one) is a failure mode.

I'm speaking from what I've personally seen, but I think it's reasonable to assume it generalizes.

There's a sizeable pool of burned-out knowledge workers, and one of the major causes is a lack of value alignment, i.e. working for companies that only care about profits.

I think this cohort would be a good target for a campaign:

  • Effective giving can provide meaning for the money they make
  • Dedicating some time to take on voluntary challenges can help them with burnout (if it's due to meaninglessness)

Tentatively and naively, I think this is accurate.

I'm wondering if there would be any way to target/access this population. If this campaign existed, what action would it take? Some groups of people are relatively easy to access/target due to physical location or habits (college-aged people often congregate at/around college, vegan people often frequent specific websites or stores, etc.).

I imagine that someone much more knowledgeable about advertising/marketing than I am would have better ideas. All I can come up with off the top of my head is targeted social media advertisements: people who work at one of these several companies and who have recently searched for one of these few terms, etc.

Question: how to reconcile the fact that expected value is linear with preferences being possibly nonlinear?

Example: people are typically willing to pay more than expected value for a small chance of a big benefit (lottery), or to remove a small chance of a big loss (insurance).

This example could be dismissed as a "mental bias" or as "irrational". However, it is not obvious to me that linearity is a virtue, and even if it is, we are human and our subjective experience is not linear.

  1. Look into logarithmic utility of money; there is a rich literature here.
  2. For an altruistic actor, money becomes more linear again, but I don't have a quick reference here.

  1. Thank you for pointing out log utility; I am aware of this model (and also of other utility functions). Any reasonable utility function is concave (diminishing returns), which can explain insurance to some extent but not lotteries (see the worked example below).
  2. I could imagine that, for an altruistic actor, altruistic utility becomes "more linear" if it's a linear combination of the utility functions of the recipients of help. This might be defensible, but it is not obvious to me unless that actor is utilitarian, at least in their altruistic actions.
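To make that insurance-vs-lottery asymmetry concrete, here is a worked example with invented numbers (wealth 100, log utility); it is only a sketch of the standard expected-utility calculation, not anything specific to this thread:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

Take wealth $w = 100$ and utility $u(x) = \ln x$.

\textbf{Insurance.} A 10\% chance of losing 50 gives expected utility
\[
0.9\,\ln(100) + 0.1\,\ln(50) \approx 4.536,
\]
with certainty equivalent $e^{4.536} \approx 93.3$. The agent will pay up to
about $100 - 93.3 = 6.7$ for full insurance, more than the expected loss of
$5$: concavity explains paying above expected value to remove a risk.

\textbf{Lottery.} A ticket costs 1 and pays 50 with probability $0.01$
(expected value $0.5$). Playing gives
\[
0.99\,\ln(99) + 0.01\,\ln(149) \approx 4.599 < \ln(100) \approx 4.605,
\]
so a concave $u$ rejects this lottery (by Jensen's inequality it rejects even
actuarially fair ones); explaining lottery-buying needs something beyond
diminishing marginal utility.

\end{document}
```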

(just speculating, would like to have other inputs)

 

I get the impression that sexy ideas get disproportionate attention, and that this may be contributing to the focus on AGI risk at the expense of AI risks coming from narrow AI. Here I mean AGI x-risk/s-risk vs. x-risk/s-risk from narrow AI (possibly combined with malevolent actors or coordination issues).

I worry about prioritising AGI when doing outreach because it may make the public dismiss the whole thing as a pipe dream. This happened to me a while ago.

My take: I think there are strong arguments for why AI x-risk is overwhelmingly more important than risks from narrow AI, and those arguments are the main reason x-risk gets more attention among EAs.

Thank you for your comment. I edited my post for clarity. I was already thinking of x-risk or s-risk (both in AGI risk and in narrow AI risk).

Ah I see what you're saying. I can't recall seeing much discussion on this. My guess is that it would be hard to develop a non-superintelligent AI that poses an extinction risk but I haven't really thought about it. It does sound like something that deserves some thought.

When people raise particular concerns about powerful AI, such as risks from synthetic biology, they often talk about them as risks from general AI, but they could come from narrow AI too. For example, some people have talked about the risk that narrow AI could be used by humans to develop dangerous engineered viruses.

My uninformed guess is that an automated system doesn't need to be superintelligent to create trouble; it only needs some specific abilities (depending on the kind of trouble).

For example, the machine doesn't need to be agentic if there is a human agent deciding to make bad stuff happen.

So I think it would be an important point to discuss, and maybe someone has done it already.
