New & upvoted

25
New? Start here! (Useful links)
Lizka
· 1y ago · 2m read
24
Open Thread: April — June 2023
Lizka
· 2mo ago · 1m read

Posts tagged community

Shortform

6
14h
1
I've been reading about performance management, and a section of the textbook I'm reading focuses on The Nature of the Performance Distribution. It reminded me a little of Max Daniel's and Ben Todd's How much does performance differ between people? [https://forum.effectivealtruism.org/posts/ntLmCbHE2XKhfbzaX/how-much-does-performance-differ-between-people], so I thought I'd share the content here [https://dark-sand-537.notion.site/The-Nature-of-the-Performance-Distribution-e25acae2be094f51b29a0843d0d722fb] for anyone who is interested. The focus is less on true outputs and more on evaluated performance within an organization, and it is a fairly short and light introduction.

A theme that jumps out at me is situational specificity [https://psycnet.apa.org/record/2008-13470-042]: some scenarios follow a normal distribution, some are heavy-tailed, and some probably have a strict upper limit. This echoes the emphasis that an anonymous commenter shared on Max's and Ben's post: I'm roughly imagining an organization in which there is a floor to performance (maybe people beneath a certain performance level aren't hired), and there is some type of barrier that creates a ceiling to performance (maybe people who perform beyond a certain level would rather go start their own consultancy than work for this organization, or they get promoted to a different department/team). But the floor or the ceiling could also arise more naturally from the nature of the work, as in the scenario of an assembly worker who can't go faster than the speed of the assembly line.

This idea of situational specificity is paralleled in hiring/personnel selection, in which a particular assessment might be highly predictive of performance in one context and much less so in a different context. This is the reason...
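To make the floor/ceiling point concrete, here is a minimal simulation sketch (not from the post or the textbook; the distribution parameters and cutoffs are purely illustrative assumptions) showing how an organizational floor and ceiling can make an underlying heavy-tailed output distribution look much more compressed in the evaluated data:

```python
# Illustrative sketch only: parameters, cutoffs, and variable names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two hypothetical "true output" distributions
normal_perf = rng.normal(loc=100, scale=15, size=n)              # e.g. pace-limited work
heavy_tailed_perf = rng.lognormal(mean=4.6, sigma=0.5, size=n)   # e.g. multiplicative drivers

# Hypothetical organizational floor (hiring bar) and ceiling (promotion/attrition above a level)
floor, ceiling = 80, 160
observed = heavy_tailed_perf[(heavy_tailed_perf >= floor) & (heavy_tailed_perf <= ceiling)]

for name, sample in [("normal output", normal_perf),
                     ("underlying heavy-tailed output", heavy_tailed_perf),
                     ("evaluated performance inside the org", observed)]:
    # Share of total output produced by the top 1% of the sample
    top_share = np.sort(sample)[-max(len(sample) // 100, 1):].sum() / sample.sum()
    print(f"{name}: top 1% account for {top_share:.1%} of the total")
```

Running this shows the top-1% share shrinking once the floor and ceiling are applied, which is one way the same underlying talent pool can look normal, heavy-tailed, or tightly bounded depending on the situation.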
5
18h
2
I suggest there is waaaay too much to be on top of in EA, and no one knows who is checking what. So some stuff goes unchecked. If there were a narrower set of "core things we study", then it seems more likely that those things would have been gone over by someone in detail, and hence there would be fewer errors in core facts.
11
2d
1
Why doesn't EA focus on equity, human rights, and opposing discrimination (as cause areas)? KJonEA asks: 'How focused do you think EA is on topics of race and gender equity/justice, human rights, and anti-discrimination? What do you think are factors that shape the community's focus? [https://forum.effectivealtruism.org/posts/zgBB56GcnJyjdSNQb/how-focused-do-you-think-ea-is-on-topics-of-race-and-gender]' In response, I ended up writing a lot of words, so I thought it was worth editing them a bit and putting them in a shortform. I've also added some 'counterpoints' that weren't in the original comment.

To lay my cards on the table: I'm a social progressive and leftist, and I think it would be cool if more EAs thought about equity, justice, human rights and discrimination - as cause areas to work in, rather than just within the EA community. (I'll call this cluster just 'equity' going forward.) I also think it would be cool if left/progressive organisations had a more EA mindset sometimes. At the same time, as I hope my answers below show, I do think there are some good reasons that EAs don't prioritize equity, as well as some bad reasons.

So, why don't EAs prioritize gender and racial equity as cause areas?

1. Other groups are already doing good work on equity (i.e. equity is less neglected). The social justice/progressive movement has got feminism and anti-racism pretty well covered. On the other hand, the central EA causes - global health, AI safety, existential risk, animal welfare - are comparatively neglected by other groups. So it kinda makes sense for EAs to say 'we'll let these other movements keep doing their good work on these issues, and we'll focus on these other issues that not many people care about'. Counter-point: are other groups using the most (cost-)effective methods to achieve their goals? EAs should, of course, be epistemically modest; but it seems that (e.g.) someone who was steeped in both EA and feminism might have some great suggestions...
7
1d
TL;DR: Someone should probably write a grant to produce a spreadsheet/dataset of past instances where people claimed a new technology would lead to societal catastrophe, with variables such as "multiple people working on the tech believed it was dangerous."

Slightly longer TL;DR: Some AI risk skeptics are mocking people who believe AI could threaten humanity's existence, saying that many people in the past predicted doom from some new tech. There is seemingly no dataset which lists and evaluates such past instances of "tech doomers." It seems somewhat ridiculous* to me that nobody has grant-funded a researcher to put together a dataset with variables such as "multiple people working on the technology thought it could be very bad for society."

*Low confidence: could totally change my mind

———

I have asked multiple people in the AI safety space if they were aware of any kind of "dataset for past predictions of doom (from new technology)". There have been some articles and arguments floating around recently, such as "Tech Panics, Generative AI, and the Need for Regulatory Caution [https://datainnovation.org/2023/05/tech-panics-generative-ai-and-regulatory-caution/]", in which skeptics say we shouldn't worry about AI x-risk because there are many past cases where people in society made overblown claims that some new technology (e.g., bicycles, electricity) would be disastrous for society.

While I think it's right to consider the "outside view" on these kinds of things, I think that most of these claims 1) ignore examples where there were legitimate reasons to fear the technology (e.g., nuclear weapons, maybe synthetic biology?), and 2) imply the current worries about AI are about as baseless as claims like "electricity will destroy society," whereas I would argue that the claim "AI x-risk is >1%" stands up quite well against most current scrutiny. (These claims also ignore the anthropic argument/survivor bias—that if they ever were right about doom we wouldn't...
11
3d
3
One of my current favorite substacks [https://weibo.substack.com/]: this author just takes a random selection of Weibo posts every day and translates them to English, including providing copies of all the videos. Weibo is sort of like "Chinese Twitter". One of my most consistently read newsletters! H/T to @JS Denain [https://forum.effectivealtruism.org/users/js-denain-1?mention=user] for recommending this newsletter to me a while ago :)

Recent discussion

Hi All, just seeing if there's a particular reason why development and growth for lower income countries is still lumped in with global health in the EA nomenclature (in the forums, the button only says "global health"!)

I think there's a growing consensus that there is a lot of low-hanging fruit the EA community could pick to increase growth in low- and lower-middle-income countries. By lumping these together, it almost makes it seem that anything targeting these areas is focused only on health.

It would be good to see this distinction and more of a focus overall on global development priorities.

Answer by JeremyR · May 31, 2023 · 10

There's an "economic growth" topic on the EA Forum (under the parent topic of Global Health & Development). Is that distinct from what you mean by Global Development? 

In a separate but related vein, are there any organizations  / funds that are EA-aligned and working in this area? 

1 · Answer by Constance Li · 2h
I agree that this should be its own field! I've noticed that I will often conflate the two for ease of speaking/writing. I feel similarly when I need to refer to folks that are not men: instead of saying women/nonbinary/trans, I will just say "women" for short. I don't like that I do this and have reflected on it extensively. I think it is because my brain is wired to prioritize getting my original thought out there instead of being sensitive to the nuances of a group's identity and structure. However, there is a lot that can be done to make the distinction more automatic and less cognitively demanding through setting/reinforcing social norms and environmental structure.

I think having a "global development" button and category on the EA Forum would be a low-effort way to kick it off as a social norm and make it easier for people to start separating the two in their minds. I think it's also a much more tractable switch than with gender, since when we refer to social groups, we usually want to be more inclusive. In EA NYC, we have a subgroup meeting for Women and Nonbinaries of EA NYC, which we just have to abbreviate as "WANBEANY", and even that is not optimally inclusive: what if a trans man wants to join, or what if a trans femme person doesn't want to identify as either woman or nonbinary?

In the case of global health and global development, it would be good to talk about the two more independently, since they take very different approaches to the same problem.

As you step into the bustling streets of a vibrant city, you come across a small, unassuming building nestled between two trendy coffee shops. Its simple sign reads "Effective Altruism Infoshop." Intrigued by the term you've heard buzzing around, you decide to step inside and explore.

As you open the door, a wave of warmth and intellectual curiosity greets you. The space is well-lit, with shelves filled with books, pamphlets, and colorful posters adorning the walls. Soft instrumental music plays in the background, providing a soothing ambiance. The room is abuzz with conversation as individuals engage in lively discussions about various global issues and their potential solutions.

At the entrance, you're greeted by a friendly volunteer who introduces themselves as Alex. They offer you a warm smile and kindly...

1. TL;DR

Many reports indicate that indoor air quality (IAQ) interventions are likely to be effective at reducing respiratory disease transmission. However, to date there’s been very little focus on the workforce that will implement these interventions. I suggest that the US Heating, Ventilation and Air Conditioning (HVAC) and building maintenance workforces have already posed a significant obstacle to these interventions, and broad uptake of IAQ measures will be significantly hindered by them in the future. The impact will vary in predictable ways depending on the nature of the intervention and its implementation. We should favor simple techniques with improved oversight and outsource or crosscheck technically complex work to people outside of the current HVAC workforce. We should also make IAQ conditions and devices as transparent as possible to both experts...

Lizka
2h · 20

I appreciate this post, thanks for sharing it! I'm curating it. I should flag that I haven't properly dug into it (curating after a quick read), and don't have any expertise in this. 

Another flag is that I would love to see more links for parts of this post like the following: "In an especially egregious example, one of the largest HVAC companies in the state had its Manual J submission admin go on vacation. The temporary replacement forgot to rename files and submitted applications named for their installed capacity (1 ton, 2 ton, 3 ton, etc.), revea... (read more)


When I first join Anne Nganga on our video call, she apologizes for the background noise of a fan running. “The weather here is hot and humid,” she says. “I have to have a fan or the AC on at all times if I am to enjoy being indoors.”

Anne is originally from Kenya, but she’s calling me from the island of Zanzibar, where she’s been facilitating the 2023 Effective Altruism Africa Residency Fellowship. It took place in the first three months of 2023, connecting EAs working on projects “addressing the most pressing problems in Africa”.

Residencies have become an increasingly popular option for Effective Altruism community building. They typically involve a group of people who work professionally on Effective Altruism or related topics, working and living in the...

I would like to connect with the cohort members or the team organising this community.

59
Yelnats T.J.
13h
I think the title is misleading. Africa is a large continent, and this was just one fellowship of ~15 people (of which I was one). There are some promising things going on in EA communities in Africa. At the same time, and I speak for several people when I say this, EA community building seems quite neglected in Africa, especially given how far purchasing power goes. And many community building efforts to date have been off the mark in one way or another. I expect this to improve with time. But I think a better barometer of the health of EA in Africa is the communities that have developed around African metropolises (e.g. EA Abuja, EA Nairobi).

I also dislike Fumba being framed to the broader EA community as the perfect compromise. Fumba Town was arguably the thing that the residents most disliked. There are a lot of valid reasons why the residency took place in Fumba, but this generally rosy framing of the residency overlooks the issues it had and, more importantly, the lessons learned from them.
6
Kirsten
6h
I appreciate the feedback, and would love to hear more about your experience (I think many of us would!)

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

Subscribe here to receive future versions.


Yoshua Bengio makes the case for rogue AI

AI systems pose a variety of different risks. Renowned AI scientist Yoshua Bengio recently argued for one particularly concerning possibility: that advanced AI agents could pursue goals in conflict with human values.

Human intelligence has accomplished impressive feats, from flying to the moon to building nuclear weapons. But Bengio argues that across a range of important intellectual, economic, and social activities, human intelligence could be matched and even surpassed by AI.

How would advanced AIs change our world? Many technologies are tools, such as toasters and calculators, which humans use to accomplish our goals....

9
Linch
9h
Unless I'm misunderstanding something important (which is very possible!), I think Bengio's risk model is missing some key steps. In particular, if I understand the core argument correctly, it goes like this: As stated, I think this argument is unconvincing, because for superhuman rogue AIs to be catastrophic for humanity, they need to be catastrophic not only for 2023_Humanity but also for humanity even after we have the assistance of superhuman or near-superhuman AIs.

If I were trying to argue for Bengio's position, I would probably go down one (or more) of the following paths:

1. Alignment being very hard/practically impossible: If alignment is very hard and nobody can reliably build a superhuman AI that's sufficiently aligned that we trust it to stop rogue AI, then the rogue AI can cause a catastrophe unimpeded.
   1. Note that this is not just an argument for the possibility of rogue AIs, but an argument against non-rogue AIs.
2. Offense-defense imbalance: Perhaps it's easier in practice to create rogue AIs to destroy the world than to create non-rogue AIs to prevent the world's destruction.
   1. Vulnerable world: Perhaps it's much easier to destroy the world than to prevent its destruction.
      1. Toy example: Suppose AIs with a collective intelligence of 200 IQ are enough to destroy the world, but AIs with a collective intelligence of 300 IQ are needed to prevent the world's destruction. Then the "bad guys" will have a large head start on the "good guys."
   2. Asymmetric carefulness: Perhaps humanity will not want to create non-rogue AIs because most people are too careful about the risks. E.g., maybe we have an agreement among the top AI labs to not develop AI beyond capabilities level X without alignment level Y, or something similar in law (and suppose in this world that normal companies mostly follow the law and at least one group building rogue AI...
aogara
3h · 20

That's a good point! Joe Carlsmith makes a similar step-by-step argument, but includes a specific step about whether the existence of rogue AI would lead to catastrophic harm. It would have been nice for Bengio's argument to include that step.

Carlsmith: https://arxiv.org/abs/2206.13353

This post was written by Simon Goldstein, associate professor at the Dianoia Institute of Philosophy at ACU, and Cameron Domenico Kirk-Giannini, assistant professor at Rutgers University, for submission to the Open Philanthropy AI Worldviews Contest. Both authors are currently Philosophy Fellows at the Center for AI Safety. This is a crosspost from LessWrong.

 

Abstract: Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they have desires and beliefs, and...
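As a rough illustration of the architecture the abstract describes (this is my own sketch, not the authors' code; `call_llm` and every other name here are hypothetical placeholders), a language agent keeps its goal and memory as plain natural-language text and repeatedly calls an LLM for the "cognitive" steps:

```python
# Hypothetical sketch of a language-agent loop; all names and prompts are illustrative.

def call_llm(prompt: str) -> str:
    # Stand-in for any LLM completion call; returns a canned reply so the sketch runs.
    return "Goal achieved (stub reply)."

def run_language_agent(goal: str, max_steps: int = 10) -> list[str]:
    memory: list[str] = [f"Goal: {goal}"]  # human-readable memory store
    for _ in range(max_steps):
        context = "\n".join(memory)
        # Planning step: ask the LLM what to do next, in natural language
        plan = call_llm(f"{context}\n\nWhat should the agent do next? Reply in one sentence.")
        # Acting/reflection step: ask the LLM to carry out the plan and report back
        observation = call_llm(f"{context}\n\nExecute this step and report the result: {plan}")
        memory.append(f"Plan: {plan}")
        memory.append(f"Observation: {observation}")
        if "goal achieved" in observation.lower():  # crude stopping criterion
            break
    return memory

print(run_language_agent("Summarize a paper"))
```

The point of the sketch is only that the agent's goals, plans, and beliefs live in readable text rather than opaque weights, which is the property the safety argument in the post relies on.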

1
ShayBenMoshe
18h
Thanks for responding so quickly. I think the following might be a difference in our views: I expect that people will try (or are already trying) to train LLM variants that are RLHFed to express agentic behavior. There's no reason to have one model to rule them all - it only makes sense to have distinct models for short conversations and for autonomous agents. Maybe the agentic version would get a modified prompt including some background. Maybe it will be given context from memory as you specified. Do you disagree with this?

Given all of the above, I don't see a big difference between this and how other agents (humans/RL systems/what have you) operate, aside maybe from the fact that the memory is more external. In other words, I expect your point (i) to be handled in the prompt/LLM weights variant (via RLHF or some other modification), (ii) to be the standard convergent instrumental goals argument (which is relevant to these systems as much as to others, a priori), and (iii) to again be addressed by this external memory (which could, for example, be a chain of thought or otherwise).
1
cdkg
16h
Hello,

If you're imagining a system which is an LLM trained to exhibit agentic behavior through RLHF and then left to its own devices to operate in the world, you're imagining something quite different from a language agent. Take a look at the architecture in the Park et al. paper, which is available on arXiv — this is the kind of thing we have in mind when we talk about language agents.

I'm also not quite sure how the point that doing RLHF on an LLM could make a dangerous system is meant to engage with our arguments. We have identified a particular kind of system architecture and argued that it has improved safety properties. It's not a problem for our argument to show that there are alternative system architectures that lack those safety properties. Perhaps there are ways of setting up a language agent that wouldn't be any safer than using ordinary RL. That's ok, too — our point is that there are ways of setting up a language agent that are safer.

Thanks Cameron. I think that I understand our differences in views. My understanding is that you argue that language agents might be a safe path (I am not sure I fully agree with this, but I am willing to be on board so far).

Our difference then is, as you say, in whether there are models which are not safe and whether this is relevant. In Section 5, on the probability of misalignment, and in your last comment, you suggest that it is highly likely that language agents are the path forward. I am not at all convinced that this is correct (e.g., I think that i... (read more)

TL;DR

Everything that looks like exponential growth eventually runs into limits and slows down. AI will quite soon run into limits of compute, algorithms, data, scientific progress, and predictability of our world. This reduces the perceived risk posed by AI and gives us more time to adapt.
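As a standard illustration of this kind of saturation (my addition, not from the post), exponential-looking growth with a resource limit is often modeled by the logistic equation:

$$\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)$$

where $r$ is the initial exponential growth rate and $K$ is the carrying capacity (here, standing in for the compute, data, and other limits the author lists): growth looks exponential while $N \ll K$ and flattens as $N$ approaches $K$.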

Disclaimer

Although I have a PhD in Computational Neuroscience, my experience with AI alignment is quite limited. I haven't engaged with the field much except for reading Superintelligence and listening to the 80k Hours podcast. Therefore, I may duplicate or overlook arguments obvious to the field, or use the wrong terminology.

Introduction

Many arguments I have heard around the risks from AI go a bit like this: We will build an AI that will be as smart as humans, then that AI will be able...

I'm excited to announce the opportunity to join the Nomadic Effective Altruism House for any duration between January 2024 and April 2024. If you're interested in being part of this unique experience, please fill out this form!

Decisions will be made on a rolling basis with the main emphasis being choosing a cohort that will be able to work and live productively together.

The final selection will be completed by November 1, 2023. 

The estimated time to complete the form is 15-30 minutes.
 

Description:

Escape the winter in the northern regions and join in an extraordinary Effective Altruism co-living community. The plan is to rent a house in a warm location with a low cost of living and tourist-friendly visa requirements. Some potential locations include Mexico City, Costa Rica, Thailand, and more.

The...

1
Akash Kulgod
18h
FYI, I was intrigued by this post/concept, but am thrown off that the vision is application-based. I don't think this is the optimal way to get this going, but I could be wrong.
Jason
3h · 20

What alternative screening/filtering mechanism would you suggest?

4
Constance Li
17h
I think it's good to be thoughtful about who comes into a co-living community. There can be a lot of friction if there is even one person who doesn't get along well with the others. Please feel free to start another housing community that is not application-based if you want to experiment with something more open, though! (We could even coordinate for the communities to be in the same city.)

CEARCH is currently researching hypertension and diabetes mellitus (type 2) as potentially impactful philanthropic cause areas. In particular, we have identified taxes on sodium, as well as on sugar-sweetened beverages (i.e. soda), as potentially extremely cost-effective interventions that the philanthropic community should support through the provision of additional funding and talent.

However, taxation does have the downside of reducing freedom of choice, and we are interested in getting the community's moral weights on the value of such freedom of choice (i.e. getting a sense of how bad we think this downside is, relative to the health benefits).

Hence, we would be grateful if the EA community (and indeed, the broader public) took the time to fill out this moral weights survey (perhaps 1-5 minutes of your time): https://docs.google.com/forms/d/1Wgszgv7u3PLBRYLd92hqoDrpj4DkdxU0cC0Y3bx0mHo/. This will directly inform our CEAs and our future recommendations to Charity Entrepreneurship, the donors we work with, and our partners in government and the policy advocacy space.

 


 

Oh dear, sorry for the mistake. Thanks Jereon for flagging it, and Edo for fixing it!