Anthropic issues questionable letter on SB 1047 (Axios). I can't find a copy of the original letter online.
I've been reviewing some old Forum posts for an upcoming post I'm writing, and incidentally came across this heuristic from Howie Lempel for noticing in what spirit you're engaging with someone's ideas:
"Did I ask this question because I think they will have a good answer or because I think they will not have a good answer?"
I felt pretty called out :P
To be fair, I think the latter is sometimes a reasonable persuasive tactic, and it's fine to put yourself in a teaching role rather than a learning role if that's your endorsed intention and the other party is on board...
Quick[1] thoughts on the Silicon Valley 'Vibe-Shift'
I wanted to get this idea out of my head and into a quick-take. I think there's something here, but there's a lot more to say, and I really haven't done the in-depth research for it. I had an idea for a longer post on this, but honestly, diving into it more deeply than I have here isn't a good use of my life, I think.
The political outlook in Silicon Valley has changed.
Since the assassination attempt on President Trump, the mood in Silicon Valley has changed. There have been open endorsements, e/acc ...
What is it for EA to thrive?
EA Infrastructure Fund's Plan to Focus on Principles-First EA includes a proposal:
The EA Infrastructure Fund will fund and support projects that build and empower the community of people trying to identify actions that do the greatest good from a scope-sensitive and impartial welfarist view.
And a rationale (there's more detail in the post):
...
- [...] EA is doing something special.
- [...] fighting for EA right now could make it meaningfully more likely to thrive long term.
- [...] we could make EA
If your employer/manager/funder/relevant people said something like: ‘We have full confidence in you, your job is guaranteed, and we want you to focus on whatever you think is best’ - would that change what you focus on? How much?
My personal impression is that significant increases in unrestricted funding (even if it were a 1-1 replacement for restricted funding) would dramatically change orgs and individual prioritisations in many cases.
To the extent that one thinks that researchers are better placed to identify high value research questions (which, to be clear, one may not in many cases), this seems bad.
Reading and engaging with the Forum is also good for a meta reason: visible engagement encourages other people to keep making posts, since they see an audience exists and are incentivized to write. It also encourages more junior people to try to contribute. Idk what the EA Forum felt like ~10 years ago, but the bar for engagement was probably lower.
Hey everyone, in collaboration with Apart Research, I'm helping organize a hackathon this weekend to build tools for accelerating alignment research. This hackathon is very much related to my effort in building an "Alignment Research Assistant."
Here's the announcement post:
2 days until we revolutionize AI alignment research at the Research Augmentation Hackathon!
As AI safety researchers, we pour countless hours into crucial work. It's time we built tools to accelerate our efforts! Join us in creating AI assistants that could supercharge the very research w...
Mental health org in India that follows the paraprofessional model
https://reasonstobecheerful.world/maanasi-mental-health-care-women/
#mental-health-cause-area
Meta has just released Llama 3.1 405B. It's open-source and in many benchmarks it beats GPT-4o and Claude 3.5 Sonnet:
Zuck's letter "Open Source AI Is the Path Forward".
A lot of people have said these notes were helpful, so I'm sharing them here on the EAF! Here are notes on NTI | bio's recent event with Dr. Lu Borio on H5N1 bird flu, in case anyone here finds them useful!
‘Five Years After AGI’ Focus Week happening over at Metaculus.
Inspired in part by the EA Forum’s recent debate week, Metaculus is running a “focus week” this week, aimed at trying to make intellectual progress on the issue of “What will the world look like five years after AGI (assuming that humans are not extinct)[1]?”
Leaders of AGI companies, while vocal about some things they anticipate in a post-AGI world (for example, bullishness in AGI making scientific advances), seem deliberately vague about other aspects. For example, power (will AGI companies hav...
I am very concerned about the future of US democracy and the rule of law, and their intersection with US dominance in AI. On my Manifold question, forecasters (n=100) estimate a 37% chance that the US will no longer be a liberal democracy by the start of 2029 [edit: as defined by V-Dem political scientists].
Project 2025 is an authoritarian playbook, including steps like installing 50,000 political appointees (there are ~4,000 appointable positions, of which ~1,000 change hands in a normal presidency). Trump's chances of winning are significantly above 50%, and even if he loses, Republic...
I'm not really sure this contradicts what I said very much. I agree the V-Dem evaluators were reacting to Trump's comments, and this made them reduce their rating for America. I think they will react to Trump's comments again in the future, and this will again make them likely reduce their rating for America. This will happen regardless of whether policy changes, and be poorly calibrated for actual importance - contra V-Dem, Trump getting elected was less important than the abolition of slavery. Since I think Siebe was interested in policy changes rather than commentary, this means V-Dem is a bad metric for him to look at.
I wanted to figure out where EA community building has been successful. Therefore, I asked Claude to use EAG London 2024 data to assess the relative strength of EA communities across different countries. This quick take is the result.
The report presents an analysis of factors influencing the strength of effective altruism communities across different countries. Using attendance data from EA Global London 2024 as a proxy for community engagement, we employed multiple regression analysis to identify key predictors of EA participation. The model incorpo...
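For readers curious what this kind of analysis looks like mechanically, here is a minimal sketch of a multiple regression in the spirit described above. All numbers and predictor choices below are invented for illustration; they are not the data or variables from the actual report.

```python
# Sketch of a multiple regression like the one described above.
# Predictor values and attendance figures are INVENTED for illustration,
# not the real EAG London 2024 data.
import numpy as np

# Rows: countries; columns: [GDP per capita ($k), number of EA groups]
X = np.array([
    [83.0, 15.0],   # hypothetical country A
    [49.0, 40.0],   # hypothetical country B
    [52.0, 25.0],   # hypothetical country C
    [61.0, 10.0],   # hypothetical country D
])
# Outcome: EAG attendees per million residents (invented)
y = np.array([9.0, 30.0, 5.0, 12.0])

# Add an intercept column and fit by ordinary least squares
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
intercept, b_gdp, b_groups = coef
print(f"intercept={intercept:.2f}, GDP coef={b_gdp:.2f}, groups coef={b_groups:.2f}")
```

The fitted coefficients would then indicate which predictors are associated with higher per-capita attendance, with the usual caveats about small samples and confounding.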
I'm curious if you fed Claude the variables or if it fetched them itself? In the latter case, there's a risk of having the wrong values, isn't there?
Otherwise, really interesting project. Curious about the insights to take away from this, especially the fact that Switzerland comes up first. Also surprising that Germany's not on the list, maybe?
Thanks!
Looking for people (probably from US/UK) to do donation swaps with. My local EA group currently allows tax-deductible donations to:
However, I would like to donate to the following:
If anyone is willing to donate these sums and have me donate an equal sum to one of the funds mentioned above - please contact me.
This could be a long slog but I think it could be valuable to identify the top ~100 OS libraries and identify their level of resourcing to avoid future attacks like the XZ attack. In general, I think work on hardening systems is an underrated aspect of defending against future highly capable autonomous AI agents.
@Peter Wildeford @Matt_Lerner interested in similar. This in-depth analysis was a bit strict, in my opinion, in looking at file-level criteria:
https://www.metabase.com/blog/bus-factor
These massive projects were mostly maintained by 1 person last I checked a year ago:
https://github.com/curl/curl/graphs/contributors
https://github.com/vuejs/vue/graphs/contributors
https://github.com/twbs/bootstrap/graphs/contributors
https://github.com/laravel/laravel/graphs/contributors
https://github.com/pallets/flask/graphs/contributors
https://github.com/expressjs/express/graphs/...
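A crude version of the "bus factor" idea from the Metabase post above can be computed from per-contributor commit counts (the figures you can read off a repo's /graphs/contributors page). The counts in this sketch are invented, not any real project's figures:

```python
# Hypothetical sketch: smallest number of top contributors whose commits
# cover a given share of all commits. The example counts are invented.

def bus_factor(commit_counts, threshold=0.5):
    """Return the minimum number of top contributors accounting for
    at least `threshold` of total commits. A value of 1 means a single
    person dominates the project's maintenance."""
    total = sum(commit_counts)
    covered = 0
    for i, n in enumerate(sorted(commit_counts, reverse=True), start=1):
        covered += n
        if covered / total >= threshold:
            return i
    return len(commit_counts)

# One dominant maintainer plus a long tail of drive-by contributors
print(bus_factor([12000, 300, 150, 90, 40, 10]))  # → 1
```

A file-level analysis like Metabase's is stricter because it asks who can maintain each part of the codebase, not just who has committed the most overall.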
I'm extremely excited that EAGxIndia 2024 is confirmed for October 19–20 in Bengaluru! The team will post a full forum post with more details in the coming days, but I wanted a quick note to get out immediately so people can begin considering travel plans. You can sign up to be notified about admissions opening, or to express interest in presenting, via the forms linked on the event page:
https://www.effectivealtruism.org/ea-global/events/eagxindia-2024
Hope to see many of you there!!
Do you like SB 1047, the California AI bill? Do you live outside the state of California? If you answered "yes" to both these questions, you can e-mail your state legislators and urge them to adopt a similar bill for your state. I've done this and am currently awaiting a response; it really wasn't that difficult. All it takes is a few links to good news articles or opinions about the bill and a paragraph or two summarizing what it does and why you care about it. You don't have to be an expert on every provision of the bill, nor do you have to have a group of people backing you. It's not nothing, but at least for me it was a lot easier than it sounded like it would be. I'll keep y'all updated on if I get a response.
I can highly recommend following Sentinel's weekly minutes, a weekly update from superforecasters on the likelihood of events that could plausibly cause worldwide catastrophe.
It's perhaps the newsletter I most look forward to each week at this point. Read previous issues here:
https://sentinel-team.org/blog/
IDK if this actually works since I only just signed up, but the "Join us" button in the top right leads to "https://sentinel-team.org/contact/".
Seems you can add yourself to the mailing list from there.