I dropped out of an MSc in mathematics at a top university in order to focus my time on AI safety.
Executives are under intense pressure to make a profit to keep the business from going bankrupt, and maybe to earn bonuses or reputation, but the pressure to avoid being voted out by shareholders is weaker by comparison.
Charities have a lot of the same pressures (minus the bonuses).
I don't have any expertise here, so I may be totally wrong.
How would you rate the current AI labs by how good or bad their influence is? E.g. Anthropic, OpenAI, Google DeepMind, DeepSeek, xAI, Meta AI.
Suppose the worst lab has a -100 influence on the future for each $1 it spends. A lab half as bad has a -50 influence on the future for each $1 it spends. A lab that's actually good (by half as much) might have a +50 influence for each $1.
What numbers would you give to these labs?[1]
It's possible this rating is biased against smaller labs, since spending even a tiny amount increases "the number of labs" by 1, which is a somewhat fixed cost. To avoid this bias against smaller labs, maybe pretend each lab was scaled to the same size.
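To make the scoring concrete, here's a minimal sketch of how the per-dollar ratings and the scale-equalization could combine. Every lab name, rating, and spend figure below is a made-up placeholder, not an actual estimate.

```python
# Minimal sketch of the proposed scoring. All names and numbers are
# hypothetical placeholders, not actual ratings or budgets.

labs = {
    "LabA": {"influence_per_dollar": -100, "annual_spend_usd": 1e9},
    "LabB": {"influence_per_dollar": -50, "annual_spend_usd": 5e8},
    "LabC": {"influence_per_dollar": 50, "annual_spend_usd": 2e8},
}

REFERENCE_SPEND_USD = 1e9  # "pretend each lab was scaled to the same size"

for name, lab in labs.items():
    total = lab["influence_per_dollar"] * lab["annual_spend_usd"]
    scale_equalized = lab["influence_per_dollar"] * REFERENCE_SPEND_USD
    print(f"{name}: total influence {total:+.2e}, "
          f"scale-equalized influence {scale_equalized:+.2e}")
```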
(Kind of crossposted from LessWrong)
My silly idea is that your voting power should not scale with your karma directly, but should scale with the number of unique upvotes minus the number of unique downvotes you received. This prevents circular feedback.
Reasons
Hypothetically, suppose you had two factions which consistently upvote themselves: A with 67 people, and B with 33 people. People in A will have twice as many unique upvotes as people in B, so their comments can have up to 4 times more karma (in the simplistic case where voting power scales linearly with unique upvotes).
However, if voting power depends not on unique upvotes but on karma, then at first people in A will still have twice as many unique upvotes as people in B, and their comments will still have more than 4 times more karma. But that karma advantage now feeds back into voting power (in the simplistic case where voting power scales linearly with karma), so their comments end up with 8 times more karma, which in turn pushes them to 16 times more karma, and so on.
This doesn't happen in practice because voting power doesn't scale linearly with karma (thank goodness), but circular feedback is still partially a problem.
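Here's a rough simulation of both rules under the simplistic assumptions above (two factions that only upvote their own faction's comments, and voting power scaling linearly with either unique upvotes or karma). The round count and the `simulate` helper are just illustrative.

```python
# Rough simulation of the two voting-power rules, under the simplistic
# assumptions above: faction A (67 people) and faction B (33 people) each
# upvote only their own faction's comments, and voting power scales linearly.

def simulate(power_rule, rounds=4, size_a=67, size_b=33):
    """Return the ratio of an A comment's karma to a B comment's karma, per round."""
    power_a, power_b = 1.0, 1.0  # everyone starts with voting power 1
    ratios = []
    for _ in range(rounds):
        # every member of a faction upvotes that faction's comment
        karma_a = size_a * power_a
        karma_b = size_b * power_b
        ratios.append(karma_a / karma_b)
        if power_rule == "unique_upvotes":
            # proposed rule: power scales with unique upvotes received (= faction size)
            power_a, power_b = float(size_a), float(size_b)
        else:  # "karma"
            # karma-based rule: power scales with the karma of your own comments
            power_a, power_b = karma_a, karma_b
    return [round(r, 1) for r in ratios]

print("unique upvotes:", simulate("unique_upvotes"))  # [2.0, 4.1, 4.1, 4.1]
print("karma:         ", simulate("karma"))           # [2.0, 4.1, 8.4, 17.0]
```

Under the unique-upvote rule the gap stabilises at roughly 4x; under the karma rule it roughly doubles every round.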
Technically I agree that 100% consequentialists should be strong longtermists, but I think if you are moderately consequentialist, you should only sometimes be a longtermist. When it comes to choosing your career, yes, focus on the far future. When it comes to abandoning family members to squeeze out another hour of work, no. We're humans, not machines.