Jonny Spicer 🔸

Founder @ All In Labs
482 karma · Working (6-15 years) · London, UK
jonnyspicer.com

Bio

I founded All In Labs, a for-profit venture studio.

I'm based in London and interested in AI safety, forecasting, community building, rationality, mental health, games, running and probably other stuff.

Before I was a programmer I was a professional poker player; that's where I picked up the habit of calculating the EV of almost every action in my life, and how I subsequently discovered EA.

If you want to learn more about me, you can check out my website here: https://jonnyspicer.com

Comments (38)

"Bluesky is overwhelmingly negative about AI. 64% of Bluesky claims were classified as 'very negative' versus only 15% on Truth Social."

I am confused by this claim - the graph above it suggests that 64% of Bluesky claims were classified as somewhat negative, and only 15% as very negative. While I agree with your analysis that sentiment on Bluesky skews much more negative than on Truth Social, I think it's notable that a greater proportion of Truth Social posts were very negative compared to Bluesky posts.

CE Incubated Charities Fund & RP

Tanaka Toshiko and Ogawa Tadayoshi, both Hibakusha (survivors of the atomic bombings of Hiroshima and Nagasaki), spoke at the closing ceremony of EAG London this year, although I couldn't find any explicit link between them and Nihon Hidankyo.

Have you considered talking/working with Sage on this? It sounds like something that would fit well with the other tools on https://www.quantifiedintuitions.org/

I'd be interested to see you weigh the pros and cons of making it easier to contribute - you don't explicitly say it in the post, but you imply that this would be a good thing by default. The forum is the way it is for a reason, and there are mechanisms put in place both by the forum team and by the community in order to try to keep the quality of the discussion high. 

For example, I would argue that having a high bar for posting isn't a bad thing, and that the sliding-scale karma system that helps regulate it is, by extension, valuable. If writing a full post of sufficient quality is time-consuming, there is the quick takes section.

The Alignment Forum has a significantly higher barrier to entry than this one does, but I think that is fairly universally regarded as an important factor in facilitating a certain kind of discussion there. I can see a lot of value in the EA Forum maintaining its current norms so that it retains the potential for productive discussion between people who are sufficiently well-researched. I think meaningfully lowering the bar for participation would mean the forum losing some of its ability to generate anything especially novel or useful to the community, and I think the quote you included:

"For an internet forum it's pretty good. But it's still an internet forum. Not many good discussions happen on the internet."

somewhat points to that too. I think there should be other forums where people less familiar with EA can participate in discussions, and whether or not those currently exist is an interesting question.

Having said all that, I do wonder if that leaves the current forum community particularly vulnerable to groupthink. I'm not really sure what the solution to that is though.

My biggest takeaway from EA so far has been that the difference in expected moral value between the consensus choice and its alternative(s) can be vastly larger than I had previously thought.

I used to think that "common sense" would get me far when it came to moral choices. I even thought that the difference in expected moral value between the "common sense" choice and any alternatives was negligible, so much so that I made a deliberate decision not to invest time into thinking about my own values or ethics. 

EA radically changed my opinion, and now I hold the view that the consensus view is frequently wrong, even when the stakes are high, and that it is possible to make dramatically better moral decisions by approaching them with rationality and a better-informed ethical framework.

Sometimes I come across people who are familiar with EA ideas but don't particularly engage with them or the community. I often feel surprised, and I think the above is a big part of why. Perhaps more emphasis could be placed on this expected moral value gap in EA outreach?

Thanks for the feedback - it has indeed been a long time since I did high school statistics!

I specified that the numbers I gave were "approximations to prove my point" because I know I don't have a formal statistical model in my head, and I didn't want to pretend that I did. Given that this is a non-technical, shortform post, I thought it was clear what I meant - apologies if that wasn't so.

This is a good idea, thanks for the suggestion! I've never really tried any of the CFAR stuff but this seems like a good place to start. 

I'll give it a go over the weekend and if I'm struggling then I'll let you know and we can do it together :)

It means something like "my 90% confidence interval is 80% - 95%, with 90% as the mean".
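To illustrate how a statement like that can be made concrete, here is a minimal sketch in Python. The beta-distribution model, the scipy-based fitting, and the starting guess are all my own illustrative assumptions, not anything from the original comment:

```python
# Minimal sketch: fit a Beta(a, b) whose 5th and 95th percentiles are
# 0.80 and 0.95 (i.e. a 90% interval of 80%-95%), then inspect the mean
# that such a distribution implies. The beta model is an assumption made
# purely for illustration.
from scipy import optimize, stats

def quantile_gap(params):
    a, b = params
    d = stats.beta(a, b)
    # Difference between this distribution's 90% interval and the target one
    return [d.ppf(0.05) - 0.80, d.ppf(0.95) - 0.95]

# Solve for (a, b); x0 is an arbitrary starting guess
a, b = optimize.fsolve(quantile_gap, x0=[20.0, 3.0])
fitted = stats.beta(a, b)
print(f"Beta({a:.1f}, {b:.1f}): "
      f"90% interval = ({fitted.ppf(0.05):.3f}, {fitted.ppf(0.95):.3f}), "
      f"mean = {fitted.mean():.3f}")
```

Note that the printed mean need not land exactly on 90%: an asymmetric interval like 80%-95% doesn't pin the mean to its midpoint, which is why stating the mean alongside the interval carries extra information.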

Thanks for the suggestion! I have actually spent quite a lot of time thinking about this - I had my 80k call last April and this was their advice. I've hesitated to do it for a number of reasons:
 

  • I'm worried that even if I do upskill in ML, I won't be a good enough software engineer to land a research engineering position, so part of me wants to improve as a SWE first
  • At the moment I'm very busy and a marginal hour of my time is very valuable; upskilling in ML would likely take 200-500 hours, and I would struggle to commit even 5 hours per week
  • I don't know whether I would enjoy ML, whereas I know I somewhat enjoy at least some parts of the SWE work I currently do
  • Learning ML potentially narrows my career options vs learning broader skills, so it's hard to hedge
  • My impression is that there are a lot of people trying to do this right now, and it's not clear to me that doing so would be my comparative advantage. Perhaps carving out a different niche would be more valuable in the future.

There are probably good rebuttals to at least some of these points, and I think that is adding to my confusion. My intuition is to keep doing what I'm currently doing, rather than go try and learn ML, but maybe my intuition here is bad.

Edit: writing this comment made me realise that I ought to write a proper doc with the pros/cons of learning ML and get feedback on it if necessary. Thanks for helping pull this useful thought out of my brain :)
