Quick takes

Any hints / info on what to look for in a mentor / how to find one? (Specifically for community building.)

I'm starting as a national group director in September, and among my focus topics for EAG London are group-focused things like "figuring out pointers / out-of-the-box ideas / well-working ideas we haven't tried yet for our future strategy", but also trying to find a mentor.

These were some thoughts I came up with when thinking about this yesterday:
 - I'm not looking for accountability or day-to-day support. I get that from inside our local group.
... (read more)

Having a savings target seems important. (Not financial advice.)

I sometimes hear people in/around EA rule out taking jobs due to low salaries (sometimes implicitly, sometimes a little embarrassedly). Of course, it's perfectly understandable not to want to take a significant drop in your consumption. But in theory, people with high salaries could be saving up so they can take high-impact, low-paying jobs in the future; it just seems like, by default, this doesn't happen. I think it's worth thinking about how to set yourself up to be able to do it if you do ... (read more)

There is going to be a Netflix series on SBF titled The Altruists, so EA will be back in the media. I don't know how EA will be portrayed in the show, but regardless, now is a great time to improve EA communications. More specifically, we should be a lot louder about historical and current EA wins — we just don't talk about them enough!

A snippet from Netflix's official announcement post:

Are you ready to learn about crypto?

Julia Garner (Ozark, The Fantastic Four: First Steps, Inventing Anna) and Anthony Boyle (House of Guinness, Say Nothing, Masters of the

... (read more)
Showing 3 of 12 replies
4
Eevee🔹
Oooh, I'd better get to work on my SBF musical 😂
3
Charlie_Guthmann
https://suno.com/song/be4cc4e2-15b2-42e7-b87f-86e367c0673d 

I worry that the pro-AI / slow-AI / stop-AI divide has the salient characteristics of a tribal dividing line that could tear EA apart:

  • "I want to accelerate AI" vs "I want to decelerate AI" is a big, clear line in the sand that allows for a lot clearer signaling of one's tribal identity than something more universally agreeable like "malaria is bad"
  • Up to the point where AI either kills us or doesn't, there's basically no way, even in principle, to verify that one side or the other is "right", which means everyone can keep arguing about it forever
  • The discourse around it is mo
... (read more)

I want to clarify, for the record, that although I disagree with most members of the EA community on whether we should accelerate or slow down AI development, I still consider myself an effective altruist in the senses that matter. This is because I continue to value and support most EA principles, such as using evidence and reason to improve the world, prioritizing issues based on their scope, not discriminating against foreigners, and antispeciesism.

I think it’s unfortunate that disagreements about AI acceleration often trigger such strong backlash withi... (read more)

Showing 3 of 34 replies
1
dirk
Holly herself believes standards of criticism should be higher than what (judging by the comments here without being familiar with the overall situation) she seems to have employed here; see Criticism is sanctified in EA, but, like any intervention, criticism needs to pay rent.
6
Erich_Grunewald 🔸
Hmm, that seems off to me? Unless you mean "severe disloyalty to some group isn't Ultimately Bad, even though it can be instrumentally bad". But to me it seems useful to have a concept of group betrayal, and to consider doing so to be generally bad, since I think group loyalty is often a useful norm that's good for humanity as a whole. Specifically, I think group-specific trust networks are instrumentally useful for cooperating to increase human welfare. For example, scientific research can't be carried out effectively without some amount of trust among researchers, and between researchers and the public, etc. And you need some boundary for these groups that's much smaller than all humanity to enable repeated interaction, mutual monitoring, and norm enforcement. When someone is severely disloyal to one of those groups they belong to, they undermine the mutual trust that enables future cooperation, which I'd guess is ultimately often bad for the world, since humanity as a whole depends for its welfare on countless such specialised (and overlapping) communities cooperating internally.

It's not that I'm ignoring group loyalty, just that the word "traitor" seems so strong to me that I don't think there's any smaller group here that's owed that much trust. I could imagine a close friend calling me that, but not a colleague. I could imagine a researcher saying I "betrayed" them if I steal and publish their results as my own after they consulted me, but that's a much weaker word.

[Context: I come from a country where you're labeled a traitor for having my anti-war political views, and I don't feel such usage of this word has done much good for society here...]

AI-generated video with human scripting and voice-over celebrates Vasili Arkhipov’s decision not to start WWIII.
https://www.instagram.com/p/DKNZkTSOsCk/

The EA Forum moderation team is going to experiment a bit with how we categorize posts. Currently there is a low bar for a Forum post being categorized as “Frontpage” after it’s approved. In comparison, LessWrong is much more opinionated about the content they allow, especially from new users. We’re considering moving in that direction, in order to maintain a higher percentage of valuable content on our Frontpage.

To start, we’re going to allow moderators to move posts from new users from “Frontpage” to “Personal blog”[1], at their discretion, but starting ... (read more)

I'm a 36-year-old iOS engineer/software engineer who switched to working on image classification systems via TensorFlow a year ago. Last month I was made redundant with a fairly generous severance package and a good buffer of savings to get me by while unemployed.

The risky step I had long considered, quitting my non-impactful job, was taken for me. I'm hoping to capitalize on my free time by determining which career path best fits my goals. I'm pretty excited about it.

I created a weighted factor model to figure out what projects or learnin... (read more)
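
For anyone unfamiliar with weighted factor models, the mechanics are simple: rate each option against a set of factors, multiply each rating by the factor's weight, and sum. Below is a minimal sketch; the factors, weights, options, and ratings are purely illustrative and not taken from the model described above.

```python
# Minimal weighted factor model: rate each option on each factor,
# multiply by the factor's weight, and sum to get a ranking.
factors = {"impact": 0.4, "personal_fit": 0.3, "learning_value": 0.2, "cost": 0.1}

# Hypothetical options and 0-10 ratings per factor (illustrative only).
options = {
    "ML safety upskilling":    {"impact": 7, "personal_fit": 8, "learning_value": 9, "cost": 6},
    "Open-source ML project":  {"impact": 5, "personal_fit": 9, "learning_value": 7, "cost": 8},
    "Direct job applications": {"impact": 6, "personal_fit": 6, "learning_value": 4, "cost": 9},
}

def weighted_score(ratings: dict) -> float:
    return sum(factors[f] * ratings[f] for f in factors)

for name, ratings in sorted(options.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(ratings):.2f}")
```

The arithmetic is the easy part; most of the value comes from choosing the factors and weights carefully.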

3
Toby Tremlett🔹
I love the model - and I'm happy to give feedback on ideas for EA Forum posts if that would ever be helpful! (I'm the Content Strategist for the Forum). 
1
Deco 🔹
That would be really useful! Some of my ideas for Forum or blog posts are:

  • Bi-weekly updates on what I've been working on
  • Posting stuff I've worked on (mostly ML related)
  • Miscellaneous topics such as productivity and ADD
  • Reviews of EA programmes I've taken part in or books I've read
  • Dumping my thoughts on a topic

I'm also interested in how you differentiate between content better suited for a blog and content better suited for the Forum.

Out of that list I'd guess that the fourth and fifth (depending on topics) bullets are most suitable for the Forum. 


The basic way I'd differentiate content is that the Forum frontpage should all be content that is related to the project of effective altruism, the community section is about EA as a community (i.e. if you were into AI Safety but not EA, you wouldn't be interested in the community section), and "personal blog" (i.e. not visible on frontpage) is the section for everything that isn't in those categories. For example posts on "Miscellaneous... (read more)

Elon Musk recently presented SpaceX's roadmap for establishing a self-sustaining civilisation on Mars (by 2033 lol). Aside from the timeline, I think there may be some important questions to consider with regard to space colonisation and s-risks:

  1. In a galactic civilisation of thousands of independent and technologically advanced colonies, what is the probability that one of those colonies will create trillions of suffering digital sentient beings? (probably near 100% if digital sentience is possible… it only takes one; see the quick calculation below)
  2. Is it possible to create a gover
... (read more)
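
To make the "it only takes one" point in (1) concrete: with N independent colonies, each with some small probability p of taking that step, the chance that at least one does is 1 - (1 - p)^N, which approaches 1 quickly as N grows. A quick illustration with placeholder numbers, not estimates:

```python
# P(at least one of N independent colonies does it) = 1 - (1 - p)^N
def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Placeholder numbers: a 0.1% per-colony chance across 5,000 colonies
# already gives roughly a 99.3% chance that at least one does it.
print(p_at_least_one(0.001, 5_000))  # ~0.993
```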

Looks like Mechanize is choosing to be even more irresponsible than we previously thought. They're going straight for automating software engineering. Would love to hear their explanation for this.

"Software engineering automation isn't going fast enough" [1] - oh really?

This seems even less defensible than their previous explanation of how their work would benefit the world.

  1. ^

    Not an actual quote

Showing 3 of 6 replies
2
Chris Leong
I bet the strategic analysis for Mechanize being a good choice (net-positive and positive relative to alternatives) is paper-thin, even given his rough world view.
6
Ryan Greenblatt
Might be true, doesn't make that not a strawman. I'm sympathetic to thinking it's implausible that mechanize would be the best thing to do on altruistic grounds even if you share views like those of the founders. (Because there is probably something more leveraged to do and some weight on cooperativeness considerations.)

Sometimes the dollar signs can blind someone and cause them not to consider obvious alternatives. And they will feel that they made the decision for reasons other than the money, but the money nonetheless caused the cognitive distortion that ultimately led to the decision.

I'm not claiming that this happened here. I don't have any way of really knowing. But it's certainly suspicious. And I don't think anything is gained by pretending that it's not.

As part of MATS' compensation reevaluation project, I scraped the publicly declared employee compensations from ProPublica's Nonprofit Explorer for many AI safety and EA organizations (data here) in 2019-2023. US nonprofits are required to disclose compensation information for certain highly paid employees and contractors on their annual Form 990 tax return, which becomes publicly available. This includes compensation for officers, directors, trustees, key employees, and highest compensated employees earning over $100k annually. Therefore, my data does not... (read more)
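
For anyone wanting to do a similar lookup, here is a rough sketch of the programmatic route. It assumes ProPublica's public Nonprofit Explorer v2 API (search by organization name, then fetch filings by EIN); the exact endpoint and field names are my assumptions and may need adjusting, and the per-person figures in Form 990 Part VII / Schedule J generally still have to be read from the filing documents themselves, so this only locates the relevant filings.

```python
# Rough sketch: look up an organization in ProPublica's Nonprofit Explorer API
# and list its available Form 990 filings. Endpoints and field names are
# assumptions based on the public v2 API and may need adjusting.
import requests

BASE = "https://projects.propublica.org/nonprofits/api/v2"

def find_org(name: str) -> dict:
    """Return the first search result for an organization name."""
    resp = requests.get(f"{BASE}/search.json", params={"q": name}, timeout=30)
    resp.raise_for_status()
    return resp.json()["organizations"][0]

def list_filings(ein: int) -> list:
    """Return filing metadata (tax year, PDF link) for a given EIN."""
    resp = requests.get(f"{BASE}/organizations/{ein}.json", timeout=30)
    resp.raise_for_status()
    return [
        {"year": f.get("tax_prd_yr"), "pdf": f.get("pdf_url")}
        for f in resp.json().get("filings_with_data", [])
    ]

# Example query; any US nonprofit that files a Form 990 should work.
org = find_org("Machine Intelligence Research Institute")
for filing in list_filings(org["ein"]):
    print(filing)
```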

Productive conference meetup format for 5-15 people in 30-60 minutes

I ran an impromptu meetup at a conference this weekend, where 2 of the ~8 attendees told me that they found this an unusually useful/productive format and encouraged me to share it as an EA Forum shortform. So here I am, obliging them:

  • Intros… but actually useful
    • Name
    • Brief background or interest in the topic
    • 1 thing you could possibly help others in this group with
    • 1 thing you hope others in this group could help you with
    • NOTE: I will ask you to act on these imminently so you need to pay attent
... (read more)

I guess orgs need to be more careful about who they hire as forecasting/evals researchers in light of a recently announced startup.

Sometimes things happen, but three people at the same org...

This is also a massive burning of the commons. It is valuable for forecasting/evals orgs to be able to hire people with a diversity of viewpoints in order to counter bias. It is valuable for folks to be able to share information freely with folks at such forecasting orgs without having to worry about them going off and doing something like this.

However, this only works... (read more)

Showing 3 of 15 replies

Short update - TL;DR - Mechanize is going straight for automating software engineering.

 

4
David Mathers🔸
Presumably there are at least some people who have long timelines, but also believe in high risk and don't want to speed things up. Or people who are unsure about timelines, but think risk is high whenever it happens. Or people (like me) who think X-risk is low* and timelines very unclear, but even a very low X-risk is very bad. (By very low, I mean like at least 1 in 1000, not 1 in 10^17 or something. I agree it is probably bad to use expected value reasoning with probabilities as low as that.)

I think you are pointing at a real tension though. But maybe try to see it a bit from the point of view of people who think X-risk is real enough and raised enough by acceleration that acceleration is bad. It's hardly going to escape their notice that projects at least somewhat framed as reducing X-risk often end up pushing capabilities forward. They don't have to be raging dogmatists to worry about this happening again, and it's reasonable for them to balance this risk against risks of echo chambers when hiring people or funding projects.

*I'm less sure merely catastrophic biorisk from human misuse is low, sadly.
-1
Yarrow🔸
So, you want to try to lock in AI forecasters to onerous and probably illegal contracts that forbid them from founding an AI startup after leaving the forecasting organization? Who would sign such a contract? This is even worse than only hiring people who are intellectually pre-committed to certain AI forecasts. Because it goes beyond a verbal affirmation of their beliefs to actually attempting to legally force them to comply with the (putative) ethical implications of certain AI forecasts. If the suggestion is simply promoting "social norms" against starting AI startups, well, that social norm already exists to some extent in this community, as evidenced by the response on the EA Forum. But if the norm is too weak, it won’t prevent the undesired outcome (the creation of an AI startup), and if the norm is too strong, I don’t see how it doesn’t end up selecting forecasters for intellectual conformity. Because non-conformists would not want to go along with such a norm (just like they wouldn’t want to sign a contract telling them what they can and can’t do after they leave the forecasting company).

So, I have two possible projects for AI alignment work that I'm debating between focusing on. Am curious for input into how worthwhile they'd be to pursue or follow up on.

The first is a mechanistic interpretability project. I have previously explored things like truth probes by reproducing the Marks and Tegmark paper and extending it to test whether a cosine similarity based linear classifier works as well. It does, but not any better or worse than the difference of means method from that paper. Unlike difference of means, however, it can be extended to mu... (read more)
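
For concreteness, the two probe types being compared are roughly the following; this is a generic sketch over cached activations, not the actual code from the paper or the reproduction described above.

```python
import numpy as np

# X: (n_samples, d_model) hidden activations at some layer; y: 1 = true, 0 = false.
def diff_of_means_probe(X: np.ndarray, y: np.ndarray):
    mu_true, mu_false = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
    direction = mu_true - mu_false
    midpoint = (mu_true + mu_false) / 2
    # Classify by which side of the midpoint an activation falls along the direction.
    return lambda X_new: ((X_new - midpoint) @ direction > 0).astype(int)

def cosine_similarity_probe(X: np.ndarray, y: np.ndarray):
    mu_true, mu_false = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
    def cos(A, b):
        return (A @ b) / (np.linalg.norm(A, axis=-1) * np.linalg.norm(b) + 1e-8)
    # Assign whichever class mean the activation is more cosine-similar to.
    return lambda X_new: (cos(X_new, mu_true) > cos(X_new, mu_false)).astype(int)
```

Both classifiers are built from the same two class means, which may be part of why they perform similarly.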

I'm organizing an EA Summit in Vancouver, BC, for the fall and am looking for ways for our attendees to come away from the event with concrete opportunities to look forward to. Most of our attendees will have Canadian but not US work authorization. Anyone willing to meet potential hires, mentees, research associates, funding applicants, etc., please get in touch!

why do i find myself less involved in EA?

epistemic status: i timeboxed the below to 30 minutes. it's been bubbling for a while, but i haven't spent that much time explicitly thinking about this. i figured it'd be a lot better to share half-baked thoughts than to keep it all in my head — but accordingly, i don't expect to reflectively endorse all of these points later down the line. i think it's probably most useful & accurate to view the below as a slice of my emotions, rather than a developed point of view. i'm not very keen on arguing about any of th... (read more)

Showing 3 of 4 replies

Thanks for sharing your experiences and reflections here — I really appreciate the thoughtfulness. I want to offer some context on the group organizer situation you described, as someone who was running the university groups program at the time.

On the strategy itself:
 At the time, our scalable programs were pretty focused, based on evidence we had seen that much of the impact came from the organizers themselves. We of course did want groups to go well more generally, but in deciding where to put our marginal resources we were focusing on group organizers. I... (read more)

1
Mikolaj Kniejski
"why do i find myself less involved in EA?" You go over more details later and answer other questions like what caused some reactions to some EA-related things, but an interesting thing here is that you are looking for a cause of something that is not. > it feels like looking at the world through an EA frame blinds myself to things that i actually do care about, and blinds myself to the fact that i'm blinding myself. I can strongly relate, had the same experience. i think it's due to christian upbringing or some kind of need for external validation. I think many people don't experience that, so I wouldn't say that's an inherently EA thing, it's more about the attitude.   
5
Owen Cotton-Barratt
I appreciated you expressing this. Riffing out loud ... I feel that there are different dynamics going on here (not necessarily in your case; more in general):

  1. The tensions where people don't act with as much integrity as is signalled
    • This is not a new issue for EA (it arises structurally despite a lot of good intentions, because of the encouragement to be strategic), and I think it just needs active cultural resistance
    • In terms of writing, I like Holden's and Toby's pushes on this; my own attempts here and here
    • But for this to go well, I think it's not enough to have some essays on reading lists; instead I hope that people try to practice good orientation here at lots of different scales, and socially encourage others to
  2. The meta-blinding
    • I feel like I haven't read much on this, but it rings true as a dynamic to be wary of! Where I take the heart of the issue to be that EA presents a strong frame about what "good" means, and then encourages people to engage in ways that make aspects of their thinking subservient to that frame
  3. As someone put it to me, "EA has lost the mandate of heaven"
    • I think EA used to be (in some circles) the obvious default place for the thoughtful people who cared a lot to gather and collaborate
    • I think that some good fraction of its value came from performing this role?
    • Partially as a result of 1 and 2, people are disassociating with EA; and this further reduces the pull to associate
    • I can't speak to how strong this effect is overall, but I think the directionality is clear

I don't know if it's accessible (and I don't think I'm well positioned to try), but I still feel a lot of love for the core of EA, and would be excited if people could navigate it to a place where it regained the mandate of heaven.

I think it might be cool if an AI Safety research organization ran a copy of an open model or something and I could pay them a subscription to use it. That way I know my LLM subscription money is going to good AI stuff and not towards AI companies that I don't think I like or want more of on net.

Idk, existing independent orgs might not be the best place to do this bc it might "damn them" or "corrupt them" over time. Like, this could lead them to "selling out" in a variety of ways you might conceive of.

Still, I guess I am saying that to ... (read more)

Showing 3 of 5 replies
8
Zach Stein-Perlman
fwiw I think you shouldn't worry about paying $20/month to an evil company to improve your productivity, and if you want to offset it I think a $10/year donation to LTFF would more than suffice.
3
harfe
Can you say more on why you think a 1:24 ratio is the right one (as opposed to lower or higher ratios)? And how might this ratio differ for people who have different beliefs than you, for example about xrisk, LTFF, or the evilness of these companies?

I haven't really thought about it and I'm not going to. If I wanted to be more precise, I'd assume that a $20 subscription is equivalent (to a company) to finding a $20 bill on the ground, assume that an ε% increase in spending on safety cancels out an ε% increase in spending on capabilities (or think about it and pick a different ratio), and look at money currently spent on safety vs capabilities. I don't think P(doom) or company-evilness is a big crux.
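
Spelled out, that heuristic gives an offset of subscription × (safety spending / capabilities spending) per year, since a donation of that size raises safety spending by the same fraction that the subscription raises capabilities spending. A quick illustration with placeholder figures, not estimates:

```python
# Offset heuristic: donate subscription * (safety_spend / capabilities_spend),
# so safety spending rises by the same percentage as capabilities spending.
# The spending figures below are placeholders, not real estimates.
subscription_per_year = 20 * 12           # $240/year
capabilities_spend = 100e9                # hypothetical $/year on capabilities
safety_spend = 1e9                        # hypothetical $/year on safety

offset_donation = subscription_per_year * (safety_spend / capabilities_spend)
print(f"${offset_donation:.2f}/year")     # $2.40/year with these placeholder numbers
```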

Mini Forum update: Draft comments, and polls in comments

Draft comments

You can now save comments as permanent drafts:

After saving, the draft will appear for you to edit:

1. In-place if it's a reply to another comment (as above)

2. In a "Draft comments" section under the comment box on the post

3. In the drafts section of your profile

The reasons we think this will be useful:

  • For writing long, substantive comments (and quick takes!). We think these are some of the most valuable comments on the Forum, and want to encourage more of them
  • For starting a comment on
... (read more)

A summary of my current views on moral theory and the value of AI

I am essentially a preference utilitarian and an illusionist regarding consciousness. This combination of views leads me to conclude that future AIs will very likely have moral value if they develop into complex agents capable of long-term planning, and are embedded within the real world. I think such AIs would have value even if their preferences look bizarre or meaningless to humans, as what matters to me is not the content of their preferences but rather the complexity and nature of their ... (read more)

Showing 3 of 7 replies
3
akash 🔸
How confident are you about these views?
5
Matthew_Barnett
I'm relatively confident in these views, with the caveat that much of what I just expressed concerns morality, rather than epistemic beliefs about the world. I'm not a moral realist, so I am not quite sure how to parse my "confidence" in moral views.

From an antirealist perspective, at least on the 'idealizing subjectivism' form of antirealism, moral uncertainty can be understood as uncertainty about the result of an idealization process. Under this view, there exists some function that takes your current, naive values as input and produces idealized values as output—and your moral uncertainty is uncertainty about the output. 

80,000 Hours has completed its spin-out and has new boards

We're pleased to announce that 80,000 Hours has officially completed its spin-out from Effective Ventures and is now operating as an independent organisation.

We've established two entities with the following board members:

80,000 Hours Limited (a nonprofit entity where our core operations live):

  • Konstantin Sietzy — Deputy Director of Talent and Operations at UK AISI
  • Alex Lawsen — Senior Program Associate at Open Philanthropy and former 80,000 Hours Advising Manager
  • Anna Weldon — COO at the Centre for Ef
... (read more)