Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.
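For illustration, here is a minimal sketch of a programmatic query. It assumes this bot site exposes the same GraphQL endpoint (at /graphql) as the main Forum, and the query shape follows common ForumMagnum examples; the hostname placeholder, query fields, and User-Agent string below are assumptions, not confirmed details of this site.

```python
# Minimal sketch of fetching recent posts via the Forum's GraphQL API.
# Assumptions: this bot site serves a /graphql endpoint like the main Forum,
# and a ForumMagnum-style "posts" query is available. Adjust as needed.
import requests

GRAPHQL_URL = "https://<bot-site-host>/graphql"  # replace with this site's hostname

query = """
{
  posts(input: {terms: {view: "top", limit: 5}}) {
    results {
      _id
      title
      pageUrl
    }
  }
}
"""

response = requests.post(
    GRAPHQL_URL,
    json={"query": query},
    headers={"User-Agent": "example-bot (contact: you@example.com)"},  # identify your bot
    timeout=30,
)
response.raise_for_status()

for post in response.json()["data"]["posts"]["results"]:
    print(post["title"], "-", post["pageUrl"])
```

A polite, identifiable User-Agent and modest request rates are good practice for any bot, even on a site set aside for programmatic access.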

Some quick responses to Nuño’s article about EA Forum stewardship

I work on the CEA Online Team, which runs the Forum, but I am only speaking for myself. Others on my team may disagree with me. I wrote this relatively quickly, so I wouldn’t be surprised if I changed my mind on things upon reflection.

Overall, I really appreciated Nuño’s article, and did not find it to be harsh, overly confrontational, or unpleasant to read. I appreciated the nice messages that he included to me, a person working on the Forum, at the start and end of the piece.

On the design change and addition of features: People have different aesthetic preferences, and I personally think the current Forum design looks nicer than the 2018 version, plus I think it has better usability in various ways. I like minimalism in some contexts, but I care more about doing good than about making the Forum visually pleasing to me. To that end, I think it is correct for the Forum to have more than just a simple frontpage list of posts plus a “recent discussion” feed (which seems to be the entirety of the 2018 version).

For example, I think adding the “quick takes” and “popular comments” sections to the home page has been really successful. By making quick takes more salient, we’ve encouraged additional discussions on the site (since quick takes are intended for less polished posts). “Popular comments” helps to highlight ongoing discussions in posts that may not be visible elsewhere. I take the fact that LessWrong borrowed these sections for their site as further evidence of their value, and in fact LessWrong has had some impactful discussions happen in their quick takes section.

As another example, features like the “Groups directory” and the “People directory” are not available anywhere else online, and I view them as more like “essential infrastructure for the EA community”. I think it’s reasonable for those to live on the EA Forum, where people already gather to talk about EA things and look for some E
14 karma · Lizka · 7h · 0 comments
A note on how I think about criticism

(This was initially meant as part of this post,[1] but while editing I thought it didn't make a lot of sense there, so I pulled it out.)

I came to CEA with a very pro-criticism attitude. My experience there reinforced those views in some ways,[2] but it also left me more attuned to the costs of criticism (or of some pro-criticism attitudes). (For instance, I used to see engaging with all criticism as virtuous, and have changed my mind on that.) My overall takes now aren’t very crisp or easily summarizable, but I figured I'd try to share some notes.

...

It’s generally good for a community’s culture to encourage criticism, but this is more complicated than I used to think. Here’s a list of things that I believe about criticism:

1. Criticism or critical information can be extremely valuable. It can be hard for people to surface criticism (e.g. because they fear repercussions), which means criticism tends to be undersupplied.[3] Requiring critics to present their criticisms in specific ways will likely stifle at least some valuable criticism. It can be hard to get yourself to engage with criticism of your work or things you care about. It’s easy to dismiss true and important criticism without noticing that you’re doing it.
   → Making sure that your community’s culture appreciates criticism (and earnest engagement with it), tries to avoid dismissing critical content based on stylistic or other non-fundamental qualities, encourages people to engage with it, and disincentivizes attempts to suppress it can be a good way to counteract these issues.
2. At the same time, trying to actually do anything is really hard.[4] Appreciation for doers is often undersupplied. Being in leadership positions or engaging in public discussions is a valuable service, but opens you up to a lot of (often stressful) criticism, which acts as a disincentive for being public. Psychological safety is important in teams (and communities), so it’s u
10 karma · Lizka · 7h · 0 comments
A note on mistakes and how we relate to them

(This was initially meant as part of this post[1], but I thought it didn't make a lot of sense there, so I pulled it out.)

“Slow-rolling mistakes” are usually much more important to identify than “point-in-time blunders,”[2] but the latter tend to be more obvious.

When we think about “mistakes”, we usually imagine replying-all when we meant to reply only to the sender, using the wrong input in an analysis, including broken hyperlinks in a piece of media, missing a deadline, etc. I tend to feel pretty horrible when I notice that I've made a mistake like this. I now think that basically none of my mistakes of this kind — I’ll call them “point-in-time blunders” — mattered nearly as much as other "mistakes" I've made by doing things like planning my time poorly, delaying for too long on something, setting up poor systems, or focusing on the wrong things.

This second kind of mistake — let’s use the phrase “slow-rolling mistakes” — is harder to catch; I think sometimes I'd identify them by noticing a nagging worry, or by having multiple conversations with someone who disagreed with me (and slowly changing my mind), or by seriously reflecting on my work or on feedback I'd received.

...

This is not a novel insight, but I think it was an important thing for me to realize. Working at CEA helped move me in this direction. A big factor in this, I think, was the support and reassurance I got from people I worked with.

This was over two years ago, but I still remember my stomach dropping when I realized that instead of using “EA Forum Digest #84” as the subject line for the 84th Digest, I had used “...#85.” Then I did it AGAIN a few weeks later (instead of #89). I’ve screenshotted Ben’s (my manager’s) reaction.

...

I discussed some related topics in a short EAG talk I gave last year, and also touched on these topics in my post about “invisible impact loss”.

An image from that talk.

1. ^ It was there because
Of 1,500 climate policies implemented over the past 25 years, the 63 most successful ones are covered in this article (which I don't have access to, but a good summary is here). Those 63 policies reduced emissions by between 0.6 and 1.8 billion metric tonnes of CO2, and their typical effects, if matched elsewhere, could close the emissions gap by 26%-41%. Pricing is most effective in developed countries, while regulations are the most effective policies in developing countries. The climate policy explorer shows the best policies for different countries and sectors. I just wanted to flag this in case EAs who are interested in climate change and policy have missed it. Kind regards, Ulf Graf
I've been looking at the numbers on how many GPUs it would take to train a model with as many parameters as the human brain has synapses. The human brain has about 100 trillion synapses, and they are sparse and very efficiently connected; a regular AI model fully connects every neuron in a given layer to every neuron in the previous layer, which is less efficient.

An H100 has 80 GB of VRAM, so assuming each parameter is 32 bits (4 bytes), you can fit about 20 billion parameters per GPU. That means you'd need roughly 5,000 GPUs just to fit a single instance of a human-brain-sized model in memory. If you ballpark another order of magnitude for inefficiencies and for keeping data in memory as well, something like 50,000 GPUs might be needed.

For comparison, it's widely believed that OpenAI trained GPT-4 on about 10,000 A100s that Microsoft let them use from their Azure supercomputer, most likely the one listed as third most powerful in the world on the Top500 list. More recently, Microsoft and Meta have both moved to acquire GPUs in the 100,000 range, and Elon Musk's xAI recently brought a 100,000-H100 supercomputer online in Memphis. So, in terms of memory at least, we are roughly at the point where a human-brain-sized model could be trained.

However, keep in mind that training such a model would take a ton of compute time. I haven't done the calculations for FLOPS yet, so I don't know whether it's feasible. Just some quick back-of-the-envelope analysis.
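To make the arithmetic above easy to check or tweak, here is a small back-of-the-envelope script. It is only a sketch: the synapse count, bytes per parameter, and x10 overhead factor are the same rough assumptions as in the text, not measured values.

```python
# Back-of-the-envelope estimate: GPUs needed just to hold a model with as
# many parameters as the human brain has synapses. All inputs are rough
# assumptions from the text above, not measured values.

SYNAPSES = 100e12        # ~100 trillion synapses -> target parameter count
BYTES_PER_PARAM = 4      # 32-bit (fp32) parameters
GPU_VRAM_BYTES = 80e9    # H100 with 80 GB of VRAM
OVERHEAD_FACTOR = 10     # ballpark x10 for activations, optimizer state, data

params_per_gpu = GPU_VRAM_BYTES / BYTES_PER_PARAM
gpus_to_fit_weights = SYNAPSES / params_per_gpu
gpus_with_overhead = gpus_to_fit_weights * OVERHEAD_FACTOR

print(f"Parameters per GPU:       {params_per_gpu:,.0f}")       # ~20 billion
print(f"GPUs to fit weights only: {gpus_to_fit_weights:,.0f}")  # ~5,000
print(f"GPUs with x10 overhead:   {gpus_with_overhead:,.0f}")   # ~50,000
```

Halving BYTES_PER_PARAM to 2 (16-bit weights) would cut these counts in half, which is one reason the estimate should only be read as an order-of-magnitude figure.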