I used to visit SF in the summers from 2022 to 2024, and the scene there felt quite disconnected compared to Berkeley's. But now there is this 4-floor building where a lot of interesting people work, eat, and hang out every day.
Mox feels like this cozy coworking space where you can both work and relax. I know at least two people who describe it as their living room, but better and bigger.
I personally think that without Mox I'd have a much harder time finding a productive spot to work from in the Bay, would not have met a bunch of the people I work with every day, and would not have been able to smoothly organize some cool events, like two of my documentary screenings where 100+ people showed up.
Wouldn't investors fire Dario and replace him with someone who would maximize profits?
Note: My understanding is that, as of November 2024, the Long-Term Benefit Trust controls 3 of 5 board seats, so investors alone cannot fire him. However, a supermajority of voting shareholders could potentially amend the Trust structure first, then replace the board and fire him.
I understand how Scott Wiener can be considered an "AI Safety champion". However, the title you chose feels a bit too personality-culty to me.
I think the forum would benefit from more neutral post titles, such as "Consider donating to congressional candidate Scott Wiener" or even "Reasons to donate to congressional candidate Scott Wiener".
Hi Alice, thanks for the datapoint. It's useful to know you have been a LessWrong user for a long time.
I agree with your overall point that the people we want to reach would be on platforms that have a higher signal-to-noise ratio.
Here are some reasons why I think it might still make sense to post short-form (not trying to convince you; I just think these arguments are worth mentioning for anyone reading this):
Thanks! Just want to add some counterpoints and disclaimers to that:
- 1. I want to flag that although I've filmed & edited ~20 short-form clips in the past (e.g. from June 2022 to July 2025) around things like AI policy and protests, most of the content I've recently been posting has just been clips from other interviews. So I think it would also be unfair to compare my clips to original content (both short-form and long-form), which is why I wrote this post. (I started doing this because I ran out of footage to edit short-form videos while trying to publish one TikTok a day; these clips eventually reached way more people than what I was doing before, so I transitioned to doing that.)
- 2. Regarding the comparison to high-production videos: I don't want to come across as saying we shouldn't compare work of different lengths or budgets. I think Marcus and Austin's attempt is honorable. Also, correctly using a large budget to make a high-production video that reaches as many people as many lower-budget videos requires a lot of skill. That said, once you have that level of skill, the extra time you spend making a video really good leads to outsized returns in views (if you make something 10% better, YouTube will push it much more than 10% more).
Glad you're working with some of the people I recommended to you; I'm very proud of that SB-1047 documentary team.
I would add to the list Suzy Shepherd who made Writing Doom. I believe she will relatively soon be starting another film. I wrote more about her work here.
For context, you asked me for data for something you were planning (at the time) to publish day-of. There's no easy way to get watch time on TikTok (which is why I had to add things up manually on a computer), and I was not on my laptop, so I couldn't do it when you messaged me. You didn't follow up to clarify that watch time was actually the key metric in your system and that you actually needed that number.
Good to know that the 50 people were 4 Safety people and 46 people who hang out at Mox and Taco Tuesday. I understand you're trying to reach the MIT graduate working in AI who might somehow transition to AI Safety work at a lab / constellation. I know that Dwarkesh & Nathan are quite popular with that crowd, and I have a lot of respect for what Aric (& co) did, so the data you collected makes a lot of sense to me. I think I can start to understand why you gave a lower score to Rational Animations or other stuff like AIRN.
I'm now modeling you as trying to answer something like "how do we cost-effectively feed AI Safety ideas to the kind of people who walk in at Taco Tuesday, who have the potential to be good AI Safety researchers". Given that, I can now understand better how you ended up giving some higher score to Cognitive Revolution and Robert Miles.
In two days (March 21st, 12-4pm), about 140 of us (event link) will be marching on Anthropic, OpenAI and xAI in SF asking the CEOs to make statements on whether they would stop developing new frontier models if every other major lab in the world credibly does the same. This comes after Anthropic removed its commitment to pause development from their RSP.
We'll be starting at 500 Howard St, San Francisco (Anthropic's office; full schedule and more info here). This is shaping up to be the biggest US AI Safety protest to date, with a coalition including Nate Soares (MIRI), David Krueger (Evitable), Will Fithian (Berkeley professor), and folks representing PauseAI, QuitGPT, and Humans First.