Any hints / info on what to look for in a mentor / how to find one? (Specifically for community building.)
I'm starting as a national group director in September, and among my focus topics for EAG London are group-focused things like "figuring out pointers / out-of-the-box ideas / well-working ideas we haven't tried yet for our future strategy", but also trying to find a mentor.
These were some thoughts I came up with when thinking about this yesterday:
- I'm not looking for accountability or day-to-day support. I get that from inside our local group.
Having a savings target seems important. (Not financial advice.)
I sometimes hear people in/around EA rule out taking jobs due to low salaries (sometimes implicitly, sometimes a little embarrassedly). Of course, it's perfectly understandable not to want to take a significant drop in your consumption. But in theory, people with high salaries could be saving up so they can take high-impact, low-paying jobs in the future; it just seems like, by default, this doesn't happen. I think it's worth thinking about how to set yourself up to be able to do it if you do ...
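To make "set yourself up" concrete, here's a toy runway calculation with made-up numbers (again, not financial advice):

```python
# Back-of-the-envelope savings-target arithmetic; every figure is hypothetical.
current_salary = 120_000      # annual salary before the move
target_salary = 60_000        # annual salary at the lower-paying, high-impact job
annual_consumption = 80_000   # what you actually spend per year
savings = 100_000             # buffer earmarked for the transition

# The shortfall your savings must cover each year at the new job:
annual_gap = max(annual_consumption - target_salary, 0)

# How long you could hold the low-paying job without cutting consumption:
years_of_runway = savings / annual_gap if annual_gap else float("inf")
print(f"Runway: {years_of_runway:.1f} years")  # -> Runway: 5.0 years
```

The savings target then falls out of the reverse calculation: pick how many years you'd want to be able to commit, multiply by the expected consumption-minus-salary gap, and that's the buffer to aim for.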
There is going to be a Netflix series on SBF titled The Altruists, so EA will be back in the media. I don't know how EA will be portrayed in the show, but regardless, now is a great time to improve EA communications. More specifically, we should be a lot louder about historical and current EA wins — we just don't talk about them enough!
A snippet from Netflix's official announcement post:
...Are you ready to learn about crypto?
Julia Garner (Ozark, The Fantastic Four: First Steps, Inventing Anna) and Anthony Boyle (House of Guinness, Say Nothing, Masters of the
I worry that the pro-AI/slow-AI/stop-AI split has the salient characteristics of a tribal dividing line that could tear EA apart:
I want to clarify, for the record, that although I disagree with most members of the EA community on whether we should accelerate or slow down AI development, I still consider myself an effective altruist in the senses that matter. This is because I continue to value and support most EA principles, such as using evidence and reason to improve the world, prioritizing issues based on their scope, not discriminating against foreigners, and antispeciesism.
I think it’s unfortunate that disagreements about AI acceleration often trigger such strong backlash withi...
It's not that I'm ignoring group loyalty, just that the word "traitor" seems so strong to me that I don't think there's any smaller group here that's owed that much trust. I could imagine a close friend calling me that, but not a colleague. I could imagine a researcher saying I "betrayed" them if I steal and publish their results as my own after they consulted me, but that's a much weaker word.
[Context: I come from a country where people are labeled traitors for holding the anti-war political views I hold, and I don't feel such usage of this word has done much good for society here...]
AI-generated video with human scripting and voice-over celebrates Vasili Arkhipov’s decision not to start WWIII.
https://www.instagram.com/p/DKNZkTSOsCk/
The EA Forum moderation team is going to experiment a bit with how we categorize posts. Currently there is a low bar for a Forum post being categorized as “Frontpage” after it’s approved. In comparison, LessWrong is much more opinionated about the content they allow, especially from new users. We’re considering moving in that direction, in order to maintain a higher percentage of valuable content on our Frontpage.
To start, we’re going to allow moderators to move posts from new users from “Frontpage” to “Personal blog”[1], at their discretion, but starting ...
I'm a 36-year-old iOS Engineer/Software Engineer who switched to working on image classification systems via TensorFlow a year ago. Last month I was made redundant with a fairly generous severance package and a good buffer of savings to get me by while unemployed.
The risky step I had long considered, quitting my non-impactful job, was taken for me. I'm hoping to capitalize on my free time by determining which career path best fits my goals. I'm pretty excited about it.
I created a weighted factor model to figure out what projects or learnin...
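For anyone unfamiliar with the format, a weighted factor model is just a weighted sum of scores over options; here's a minimal sketch with hypothetical factors, weights, and options (not my actual ones):

```python
# Toy weighted factor model; factors, weights, ratings, and options are all
# made up for illustration.
weights = {"impact": 0.4, "personal_fit": 0.3, "learning_value": 0.2, "cost": 0.1}

options = {
    "AI safety upskilling":  {"impact": 8, "personal_fit": 7, "learning_value": 9, "cost": 6},
    "ML contract work":      {"impact": 5, "personal_fit": 8, "learning_value": 6, "cost": 9},
    "Independent research":  {"impact": 7, "personal_fit": 6, "learning_value": 8, "cost": 5},
}

# Score each option as the weighted sum of its factor ratings.
scores = {
    name: sum(weights[factor] * ratings[factor] for factor in weights)
    for name, ratings in options.items()
}

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```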
Out of that list, I'd guess that the fourth and fifth bullets (depending on topics) are most suitable for the Forum.
The basic way I'd differentiate content is that the Forum frontpage should all be content that is related to the project of effective altruism, the community section is about EA as a community (i.e. if you were into AI Safety but not EA, you wouldn't be interested in the community section), and "personal blog" (i.e. not visible on frontpage) is the section for everything that isn't in those categories. For example, posts on "Miscellaneous...
Elon Musk recently presented SpaceX's roadmap for establishing a self-sustaining civilisation on Mars (by 2033 lol). Aside from the timeline, I think there may be some important questions to consider with regards to space colonisation and s-risks:
Looks like Mechanize is choosing to be even more irresponsible than we previously thought. They're going straight for automating software engineering. Would love to hear their explanation for this.
"Software engineering automation isn't going fast enough" [1] - oh really?
This seems even less defensible than their previous explanation of how their work would benefit the world.
[1] Not an actual quote
Sometimes the dollar signs can blind someone and cause them not to consider obvious alternatives. And they will feel that they made the decision for reasons other than the money, but the money nonetheless caused the cognitive distortion that ultimately led to the decision.
I'm not claiming that this happened here. I don't have any way of really knowing. But it's certainly suspicious. And I don't think anything is gained by pretending that it's not.
As part of MATS' compensation reevaluation project, I scraped the publicly declared employee compensation from ProPublica's Nonprofit Explorer for many AI safety and EA organizations (data here) for 2019-2023. US nonprofits are required to disclose compensation information for certain highly paid employees and contractors on their annual Form 990 tax return, which becomes publicly available. This includes compensation for officers, directors, trustees, key employees, and highest compensated employees earning over $100k annually. Therefore, my data does not...
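If you want to reproduce or extend the scrape, ProPublica exposes Nonprofit Explorer data through a public JSON API. A minimal sketch follows; the EIN is a placeholder and the response field names are from memory, so verify them against the API docs, and note that per-employee compensation lives in the Form 990 filings themselves rather than in the summary fields:

```python
# Hedged sketch: fetch an organization's Form 990 filing metadata from
# ProPublica's Nonprofit Explorer API (v2). The EIN below is a placeholder;
# find real EINs via the search endpoint. Per-employee compensation
# (Form 990 Part VII / Schedule J) must still be read from the filings
# themselves, e.g. via the linked PDFs.
import requests

EIN = 123456789  # hypothetical placeholder EIN

resp = requests.get(
    f"https://projects.propublica.org/nonprofits/api/v2/organizations/{EIN}.json",
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

print(data["organization"]["name"])
for filing in data.get("filings_with_data", []):
    # Field names as I recall them; verify against the API's response schema.
    print(filing.get("tax_prd_yr"), filing.get("pdf_url"))
```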
Productive conference meetup format for 5-15 people in 30-60 minutes
I ran an impromptu meetup at a conference this weekend, where 2 of the ~8 attendees told me that they found this an unusually useful/productive format and encouraged me to share it as an EA Forum shortform. So here I am, obliging them:
I guess orgs need to be more careful about who they hire as forecasting/evals researchers in light of a recently announced startup.
Sometimes things happen, but three people at the same org...
This is also a massive burning of the commons. It is valuable for forecasting/evals orgs to be able to hire people with a diversity of viewpoints in order to counter bias. It is valuable for people to be able to share information freely with folks at such forecasting orgs without having to worry about them going off and doing something like this.
However, this only works...
So, I have two possible projects for AI alignment work that I'm debating between. I'm curious for input on how worthwhile they'd be to pursue or follow up on.
The first is a mechanistic interpretability project. I have previously explored things like truth probes by reproducing the Marks and Tegmark paper and extending it to test whether a cosine similarity based linear classifier works as well. It does, but not any better or worse than the difference of means method from that paper. Unlike difference of means, however, it can be extended to mu...
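For concreteness, here's a toy version of the two probes on synthetic data (real usage would extract residual-stream activations over true/false statements, as in the paper; everything below is a stand-in):

```python
# Toy comparison of a difference-of-means truth probe and a cosine-similarity
# variant, on synthetic "activations". With a zero threshold the two decision
# rules provably agree (norms are positive), which matches the "works as well,
# no better or worse" finding; differences only appear with calibrated
# thresholds.
import numpy as np

rng = np.random.default_rng(0)
d = 512                                        # hidden dimension
true_acts = rng.normal(0.5, 1.0, (200, d))     # stand-in "true" activations
false_acts = rng.normal(-0.5, 1.0, (200, d))   # stand-in "false" activations

# Difference-of-means probe direction: vector between the class means.
theta = true_acts.mean(axis=0) - false_acts.mean(axis=0)

def probe_dot(x):
    # Classify by the sign of the projection onto theta.
    return x @ theta > 0

def probe_cosine(x):
    # Cosine-similarity variant: normalize both sides before projecting.
    x_unit = x / np.linalg.norm(x, axis=-1, keepdims=True)
    return x_unit @ (theta / np.linalg.norm(theta)) > 0

test = np.vstack([true_acts, false_acts])
labels = np.array([True] * 200 + [False] * 200)
print("dot accuracy:   ", (probe_dot(test) == labels).mean())
print("cosine accuracy:", (probe_cosine(test) == labels).mean())
```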
I'm organizing an EA Summit in Vancouver, BC, for the fall, and am looking for ways our attendees can come away from the event with concrete opportunities to look forward to. Most of our attendees will have Canadian but not US work authorization. Anyone willing to meet potential hires, mentees, research associates, funding applicants, etc., please get in touch!
epistemic status: i timeboxed the below to 30 minutes. it's been bubbling for a while, but i haven't spent that much time explicitly thinking about this. i figured it'd be a lot better to share half-baked thoughts than to keep it all in my head — but accordingly, i don't expect to reflectively endorse all of these points later down the line. i think it's probably most useful & accurate to view the below as a slice of my emotions, rather than a developed point of view. i'm not very keen on arguing about any of th...
Thanks for sharing your experiences and reflections here — I really appreciate the thoughtfulness. I want to offer some context on the group organizer situation you described, as someone who was running the university groups program at the time.
On the strategy itself:
At the time, our scalable programs were pretty focused on organizers, based on evidence we had seen that much of the impact came from the organizers themselves. We of course did want groups to go well more generally, but in deciding where to put our marginal resources we were focusing on group organizers. I...
I think it might be cool if an AI Safety research organization ran a copy of an open model or something and I could pay them a subscription to use it. That way I know my LLM subscription money is going to good AI stuff and not towards AI companies that I don't think I like or want more of on net.
Idk, existing independent orgs might not be the best place to do this bc it might "damn them" or "corrupt them" over time. Like, this could lead to them "selling out" in a variety of ways you might conceive of.
Still, I guess I am saying that to ...
I haven't really thought about it and I'm not going to. If I wanted to be more precise, I'd assume that a $20 subscription is equivalent (to a company) to finding a $20 bill on the ground, assume that an ε% increase in spending on safety cancels out an ε% increase in spending on capabilities (or think about it and pick a different ratio), and look at money currently spent on safety vs capabilities. I don't think P(doom) or company-evilness is a big crux.
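Spelling that heuristic out with made-up numbers (both spending figures are assumptions, not estimates I'd defend):

```python
# Toy version of the "$20 bill on the ground" heuristic; all figures invented.
subscription = 20.0          # what the company effectively gains from you
capabilities_spend = 50e9    # assumed total annual capabilities spending
safety_spend = 0.5e9         # assumed total annual safety spending

# Fractional (epsilon) increase the $20 makes to capabilities spending:
eps = subscription / capabilities_spend

# Safety spending would need to rise by the same fraction to cancel it:
offset = eps * safety_spend
print(f"${offset:.2f} to safety offsets the $20 subscription")  # -> $0.20
```

Under that 100:1 spending ratio, a $0.20 safety donation per $20 subscription would cancel the harm, which is why the safety-vs-capabilities spending ratio, not P(doom) or company-evilness, does the work in this argument.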
Draft comments
You can now save comments as permanent drafts:
After saving, the draft will appear for you to edit:
1. In-place if it's a reply to another comment (as above)
2. In a "Draft comments" section under the comment box on the post
3. In the drafts section of your profile
The reasons we think this will be useful:
A summary of my current views on moral theory and the value of AI
I am essentially a preference utilitarian and an illusionist regarding consciousness. This combination of views leads me to conclude that future AIs will very likely have moral value if they develop into complex agents capable of long-term planning, and are embedded within the real world. I think such AIs would have value even if their preferences look bizarre or meaningless to humans, as what matters to me is not the content of their preferences but rather the complexity and nature of their ...
From an antirealist perspective, at least on the 'idealizing subjectivism' form of antirealism, moral uncertainty can be understood as uncertainty about the result of an idealization process. Under this view, there exists some function that takes your current, naive values as input and produces idealized values as output—and your moral uncertainty is uncertainty about the output.
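In symbols (my notation, not anything from the original comment): writing $v$ for your current naive values and $I$ for the unknown idealization function,

```latex
\[
  v^{*} \;=\; I(v), \qquad
  p(v^{*} \mid v) \;=\; \sum_{I} p(I)\,\mathbf{1}\!\left[\,I(v) = v^{*}\,\right]
\]
```

so a credence distribution over idealization procedures $p(I)$ induces the distribution over idealized values that we ordinarily gloss as "moral uncertainty".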
80,000 Hours has completed its spin-out and has new boards
We're pleased to announce that 80,000 Hours has officially completed its spin-out from Effective Ventures and is now operating as an independent organisation.
We've established two entities with the following board members:
80,000 Hours Limited (a nonprofit entity where our core operations live):