I'm a senior software developer in Canada (earning ~US$70K in a good year) who, being late to the EA party, earns to give. Historically I've had a chronic lack of interest in making money; instead I've developed an unhealthy interest in foundational software that free markets don't build because their effects would consist almost entirely of positive externalities.
I dream of making the world better by improving programming languages and developer tools, but AFAIK no funding is available for this kind of work outside academia. My open-source projects can be seen at loyc.net, core.loyc.net, ungglish.loyc.net and ecsharp.net (among others).
Also: critical feedback can be good. Even if painful, it can help a person grow. But downvotes communicate nothing to a commenter except "f**k you". So what are they good for? Text-based communication is already quite hard enough without them, and since this is a public forum I can't even tell if it's a fellow EA/rat who is voting. Maybe it's just some guy from SneerClub―but my amygdala cannot make such assumptions. Maybe there's a trick to emotional regulation, but I've never seen EA/rats work that one out, so I think the forum software shouldn't help people push other people's buttons.
I haven't seen such a resource. It would be nice.
My pet criticism of EA (forums) is that EAs seem a bit unkind, and that LWers seem a bit more unkind and often not very rationalist. I think I'm one of the most hardcore EA/rationalists you'll ever meet, but I often feel unwelcome when I dare to speak.
Like:
Look, I know I'm too thin-skinned. I was once unable to work for an entire day due to a single downvote (I asked my boss to take it from my vacation days). But wouldn't you expect an altruist to be sensitive? So, I would like us to work on being nicer, or something. Now if you'll excuse me... I don't know how I'll get back into a working mood so I can get Friday's work done by Monday.
Okay, not a friendly audience after all! You guys can't say why you dislike it?
Story of my life... silent haters everywhere.
Sometimes I wonder, if Facebook groups had downvotes, would it be as bad, or worse? I mean, can EAs and rationalists muster half as much kindness as normal people for saying the kinds of things their ingroup normally says? It's not like I came in here insisting alignment is easy actually.
I only mentioned human consciousness to help describe an analogy; hope it wasn't taken to say something about machine consciousness.
I haven't read Superintelligence but I expect it contains the standard stuff―outer and inner alignment, instrumental convergence etc. For the sake of easy reading, I lean into instrumental convergence without naming it, and leave the alignment problem implicit as a problem of machines that are "too much" like humans, because
I don't incorporate Yudkowsky's ideas because I found the List of Lethalities to be annoyingly incomplete and unconvincing, and I'm not aware of anything better (clear and complete) that he's written. Let me know if you can point me to anything.
My feature request for EA Forum is the same as my feature request for every site: you should be able to search within a user (i.e. a user's page should have a search box). This is easy to do technically; you just have to add the author's name as one of the words in the search index.
(Preferably do it in such a way that a normal post cannot do the same. For example, you might record "foo authored this post" in the index as `@author:foo`, but if a normal post contains the literal text "@author:foo", then perhaps the index only ends up with `@author` (or `author`) and `foo`, while the full string is not in the index (or, if it is in the index, can only be found by searching with quotes a la Google: `"@author:foo"`).)
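A minimal sketch of the idea, assuming a simple word-based tokenizer (all names here are illustrative, not any real forum's code): the indexer adds one synthetic `@author:` term per post, and because the tokenizer splits ordinary body text on punctuation, a post that contains the literal string "@author:alice" can never forge that term.

```python
import re

AUTHOR_PREFIX = "@author:"

def tokenize(text):
    # Lowercase and keep only alphanumeric runs. A body containing
    # "@author:alice" breaks apart into "author" and "alice", so the
    # special term below cannot be spoofed by ordinary post text.
    return re.findall(r"[a-z0-9]+", text.lower())

def index_terms(post_body, author):
    # Normal body tokens...
    terms = set(tokenize(post_body))
    # ...plus one synthetic term that only the indexer can add.
    terms.add(AUTHOR_PREFIX + author.lower())
    return terms

# A post whose body tries to spoof the author marker:
terms = index_terms("Hello @author:alice world", author="bob")
# The real author term survives intact; the spoofed one does not.
```

Searching within a user is then just an extra AND term (`@author:bob cats`) on the existing index, which is why this is cheap to implement.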
I didn't see a message about kneecaps, or those other things you mentioned. Could you clarify? However, given Torres' history of wanton dishonesty ― I mean, prior to reading this article I had already seen Torres lying about EA ― and their history of posting under multiple accounts to the same platform (including sock puppets), if I see an account harassing Torres like that, I would (1) report the offensive remark and (2) wonder if Torres themself controls that account.
Sorry if I sounded redundant. I'd always thought of "evaporative cooling of group beliefs" like "we start with a group with similar values/goals/beliefs; the least extreme members gradually get disengaged and leave; which cascades into a more extreme average that leads to others leaving"―very analogous to evaporation. I might've misunderstood, but SBF seemed to break the analogy by consistently being the most extreme, and actively and personally pushing others away (if, at times, accidentally). Edit: So... arguably one can still apply the evaporative cooling concept to FTX, but I don't see it as an explanation of SBF himself.
You know what, I was reading Zvi's musings on Going Infinite...
Q: But it’s still illegal to mislead a bank about the purpose of a bank account.
Michael Lewis: But nobody would have cared about it.
He seems to not understand that this does not make it not a federal crime? That ‘we probably would not have otherwise gotten caught on this one’ is not a valid answer?
Similarly, Lewis clearly thinks ‘the money was still there and eventually people got paid back’ should be some sort of defense for fraud. It isn’t, and it shouldn’t be.
...
Nor was Sam a liar, in Lewis’s eyes. Michael Lewis continued to claim, on the Judging Sam podcast, that he could trust Sam completely. That Sam would never lie to him. True, Lewis said, Sam would not volunteer information and he would use exact words. But Sam’s exact words to Lewis, unlike the words he saw Sam constantly spewing to everyone else, could be trusted.
It’s so weird. How can the same person write a book, and yet not have read it?
And it occurred to me that all SBF had to do was find a few people who thought like Michael Lewis, and people like that don't seem rare. I mean, don't like 30% of Americans think that the election was stolen from Trump, or that the cases against Trump are a witch hunt, because Trump says so and my friends all agree he's a good guy (and they seek out pep talks to support such thoughts)? Generally the EA community isn't tricked this easily, but SBF was smarter than Trump and he only needed to find a handful of people willing to look the other way while trusting in his Brilliance and Goodness. And since he was smart (and overconfident) and did want to do good things, he needed no grand scheme to deceive people about that. He just needed people like Lewis who lacked a gag reflex at all the bad things he was doing.
Before FTX I would've simply assumed other EAs had a "moral gag reflex" already. Afterward, I think we need more preaching about that (and more "punchy" ways to hammer home the importance of things like virtues, rules, reputation and conscientiousness, even or especially in utilitarianism/consequentialism). Such preaching might not have affected SBF himself (since he cut so many corners in his thinking and listening), but someone in his orbit might have needed to hear it.
I've been thinking that there is a "fallacious, yet reasonable as a default/fallback" way to choose moral circles based on the Anthropic principle, which is closely related to my article "The Putin Fallacy―Let’s Try It Out". It's based on the idea that consciousness is "real" (part of the territory, not the map), in the same sense that quarks are real but cars are not. In this view, we say: P-zombies may be possible, but if consciousness is real (part of the territory), then by the Anthropic principle we are not P-zombies, since P-zombies by definition do not have real experiences. (To look at it another way, P-zombies are intelligences that do not concentrate qualia or valence, so in a solar system with P-zombies, something that experiences qualia is as likely to be found alongside one proton as any other, and there are about 10^20 times more protons in the sun than there are in the minds of everyone on Zombie Earth combined.) I also think that real qualia/valence is the fundamental object of moral value (also reasonable IMO, for why should an object with no qualia and no valence have intrinsic worth?).
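The "about 10^20" figure can be sanity-checked with a crude back-of-envelope calculation. Assumed inputs (mine, not from the original comment): solar mass ~2×10^30 kg, ~8×10^9 humans with ~1.4 kg brains, and the same rough protons-per-kilogram figure for both, so the ratio reduces to a mass ratio to first order.

```python
# Crude order-of-magnitude check; composition differences between
# solar hydrogen and brain tissue are ignored (roughly a factor of 2).
PROTONS_PER_KG = 6.0e26           # ~ protons per kg, treating matter as hydrogen

protons_in_sun = 2.0e30 * PROTONS_PER_KG           # solar mass ~2e30 kg
protons_in_brains = 8.0e9 * 1.4 * PROTONS_PER_KG   # ~8e9 brains at ~1.4 kg each
ratio = protons_in_sun / protons_in_brains
# ratio is on the order of 10^20, consistent with the figure above.
```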
By the Anthropic principle, it is reasonable to assume that whatever we happen to be is somewhat typical among beings that have qualia/valence, and thus, among beings that have moral worth. By this reasoning, it is unlikely that the sum total |W| of all qualia/valence in the world is dramatically larger than the sum total |H| of all qualia/valence among humans, because if |W| >> |H|, you and I are unlikely to find ourselves in set H. I caution people that while reasonable, this view is necessarily uncertain and thus fallacious and morally hazardous if it is treated as a certainty. Yet if we are to allocate our resources in the absence of any scientific clarity about which animals have qualia/valence, I think we should take this idea into consideration.
P.S. given the election results, I hope more people are now doing the soul-searching we should've done in 2016. I proposed my intervention "Let's Make the Truth Easier to Find" on EA Forum in March 2023. It's necessarily a partial solution, but I'm very interested to know why EAs generally weren't interested in it. I do encourage people to investigate for themselves why Mr. Post Truth himself has now twice achieved roughly the same popularity as the average Democrat.