Quick takes

Quick[1] thoughts on the Silicon Valley 'Vibe-Shift'

I wanted to get this idea out of my head and into a quick take. I think there's something here, but there's a lot more to say, and I really haven't done the in-depth research for it. I had an idea for a longer post on this, but honestly, diving into it more deeply than I have here isn't a good use of my life, I think.

The political outlook in Silicon Valley has changed.

Since the assassination attempt on President Trump, the mood in Silicon Valley has changed. There have been open endorsements, e/acc ... (read more)

Across all of this my impression is that, just like with Torres, there was little to no direct pushback

Strongly agree. I think the TESCREAL/e/acc movements badly mischaracterise the EA community with extremely poor, unsubstantiated arguments, but there doesn’t seem to be much response to this from the EA side.

I think this is very much linked to playing a strong 'inside game' to access the halls of power and no 'outside game' to gain legitimacy for that use of power

What does this refer to? I'm not familiar. 

Other thoughts on this:

Publicly, the qu... (read more)

anormative
I've often found it hard to tell whether an ideology/movement/view has just found a few advocates within a group, or whether it has totally permeated that group. For example, I'm not sure that Srinivasan's politics have really changed recently, or that it would be fair to generalize from his beliefs to the whole valley. How much of this is actually Silicon Valley's political center shifting to e/acc and the right, as opposed to people just having the usual distribution of political beliefs (on top of a decline of the EA brand that isn't specific to the valley)?
David Mathers
A NYT article I read a couple of days ago claimed Silicon Valley remains liberal overall.

I am very concerned about the future of US democracy and the rule of law, and their intersection with US dominance in AI. On my Manifold question, forecasters (n=100) estimate a 37% chance that the US will no longer be a liberal democracy by the start of 2029 [edit: as defined by V-DEM political scientists].

Project 2025 is an authoritarian playbook, including steps like installing 50,000 political appointees (there are ~4,000 appointable positions, of which ~1,000 change in a normal presidency). Trump's chances of winning are significantly above 50%, and even if he loses, Republic... (read more)

Fermi–Dirac Distribution
V-Dem indicators seem to take into account statements made by powerful politicians, not only their policies or other deeds. For example, I found this in one of their annual reports:

My guess is that statements made by Trump were extreme outliers in how little respect they betrayed for democratic institutions, compared to statements made by earlier US presidents, and that this affected their model. I think that's reasonable. It might not be fully reflective of lived reality for US citizens at the moment the statements are made, but it sure captures the beliefs and motives of powerful people, which is predictive of their future actions. Indeed, one way to see the drop in 2017 is that it was able to predict a major blow to American democracy (Trump refusing to concede an election) four years in advance.

I'm not really sure this contradicts what I said very much. I agree the V-Dem evaluators were reacting to Trump's comments, and this made them reduce their rating for America. I think they will react to Trump's comments again in the future, and this will again likely make them reduce their rating for America. This will happen regardless of whether policy changes, and it will be poorly calibrated for actual importance: contra V-Dem, Trump getting elected was less important than the abolition of slavery. Since I think Siebe was interested in policy changes rather than commentary, this means V-Dem is a bad metric for him to look at.

SiebeRozendal
I don't really understand why so many people are downvoting this. If anyone would like to explain, that'd be nice!

What is it for EA to thrive? 

EA Infrastructure Fund's Plan to Focus on Principles-First EA includes a proposal:

The EA Infrastructure Fund will fund and support projects that build and empower the community of people trying to identify actions that do the greatest good from a scope-sensitive and impartial welfarist view.

 

And a rationale (there's more detail in the post):

 

  • [...] EA is doing something special. 
  • [...]  fighting for EA right now could make it meaningfully more likely to thrive long term.
  • [...]  we could make EA
... (read more)

If being thoughtful, sincere and selfless is a core value, it seems like it would be more of a problem if influential people in the community felt they had to embrace the label even if they didn't think it was valuable or accurate.

I suspect a lot of the 'EA adjacent' description comes from question marks about particular EA characteristics, stances, or image, rather than from doubting that some of their friends could benefit from participating in the community, and that part of that is less a rejection of EA altogether and more an acknowledgement that they often find themse... (read more)

hbesceli
Some EA psychological phenomena

Some things that people report in EA:
  • Impostor Syndrome
  • Impact obsession
  • Burnout
  • EA Disillusionment

Are these EA phenomena? Also, are they psychological phenomena? These things (I guess excluding EA disillusionment) don’t just exist within EA, they exist within society in general, so it’s plausibly unfair to call them EA phenomena. Though it also seems to me that for each of these things, there’s a somewhat strong fit with EA and EA culture.

Taking impostor syndrome as an example: EA often particularly values ambitious and talented people. Also, it seems to me there’s something of a culture of assessing and prioritising people on this basis. Insofar as it’s important for people to be successful within EA, it’s also important for people to be seen in a certain way by others (talented, ambitious etc.). In general, the stronger the pressure for people to be perceived in a certain way, the more prominent I expect impostor syndrome to be. (I’m a bit wary of ‘just so’ stories here, but my best guess is that this is in fact explanatory.)

I think impostor syndrome and other things in this ballpark are often discussed as individual/psychological phenomena. I think such framings are pretty useful. And there’s another framing which sees them instead as ~sociological phenomena: these are things which happen in a social context, as a result of different social pressures and incentives within the environment.

I don’t know quite what to conclude here, in large part because I don’t know how common these things are within EA, and how this compares to other places (or even what the relevant comparison class is). Though tentatively, if I’m asking ‘What does it look like for EA to thrive?’, then part of my answer is ‘being an environment where impostor syndrome, burnout, impact obsession and EA disillusionment are less common’.

I wanted to figure out where EA community building has been successful. Therefore, I asked Claude to use EAG London 2024 data to assess the relative strength of EA communities across different countries. This quick take is the result. 

The report presents an analysis of factors influencing the strength of effective altruism communities across different countries. Using attendance data from EA Global London 2024 as a proxy for community engagement, we employed multiple regression analysis to identify key predictors of EA participation. The model incorpo... (read more)
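For concreteness, here is a minimal sketch of the kind of multiple regression involved, assuming a hand-assembled country-level table; the file and column names below are placeholders, not the actual variables Claude used.

```python
# Hypothetical sketch of the regression described above.
# The CSV file and column names are assumptions, not Claude's actual inputs.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("eag_london_2024_by_country.csv")  # assumed: one row per country

# Outcome: attendees per million residents; predictors are illustrative guesses.
y = df["attendees_per_million"]
X = sm.add_constant(df[["gdp_per_capita", "english_proficiency", "distance_to_london_km"]])

model = sm.OLS(y, X).fit()
print(model.summary())  # which factors predict EAG participation, and how strongly
```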

I'm curious if you fed Claude the variables or if it fetched them itself? In the latter case, there's a risk of having the wrong values, isn't there?

Otherwise, really interesting project. Curious about the insights to take from this, especially (for me) the fact that Switzerland comes up first. Also surprising that Germany's not on the list, maybe?

Thanks!

Lorenzo Buonanno🔸
I'm surprised that the "top 10" doesn't include Denmark, Austria, Belgium, and Germany, since they all have more population-adjusted participants than Ireland, are not English-speaking, are more distant from London, and have lower GDP per capita.[1] Are we using different data?

In general, I'm a bit sceptical of these analyses, compared to looking at the countries/cities with the most participants in absolute terms. I also expect Claude to make lots of random mistakes.

1. ^ But of course, Ireland's GDP is very artificial
James Herbert
But absolute terms aren’t very useful if we’re trying to spot success stories, right? Or am I misunderstanding something? But yeah, something seems off about Ireland. The rest of the list feels quite good though. David Moss said they have some per capita estimates in the pipeline, so I’m excited to see what they produce!

Looking for people (probably from US/UK) to do donation swaps with. My local EA group currently allows tax-deductible donations to:

  1. GiveWell - Top Charities Fund
  2. Animal Charity Evaluators - Top Charities Fund
  3. Against Malaria Foundation
  4. Good Food Institute
  5. <One other org that I don't want to include here>

However, I would like to donate to the following:

  1. GiveWell - All Grants Fund (~$1230)
  2. GiveDirectly (~$820)
  3. The Humane League (~$580)

If anyone is willing to donate these sums and have me donate an equal sum to one of the funds mentioned above - please contact me.

This could be a long slog, but I think it could be valuable to identify the top ~100 open-source (OS) libraries and assess their level of resourcing, to avoid future attacks like the XZ attack. In general, I think work on hardening systems is an underrated aspect of defending against future highly capable autonomous AI agents.

Joseph_Chu
Relevant XKCD comic.

To further comment, this seems like it might be an intractable task, as the term "dependency hell" kind of implies. You'd likely have to scrape all of GitHub and calculate which libraries are used most frequently across all projects to get an accurate assessment. Then it's not clear to me how you'd identify their level of resourcing. Number of contributors? Frequency of commits?

Also, with your example of the XZ attack, it's not even clear who made the attack. If you suspect it was, say, the NSA, would you want to thwart them if their purpose was to protect American interests? (I'm assuming you're pro-American.) Things like zero-days are frequently used by various state actors, and it's a morally grey question whether or not those uses are justified.

I also, as someone with a comp sci background and a programmer, have doubts you'd ever be able to 100% prevent the risk of zero-days or something like the XZ attack from happening in open source code. Given how common zero-days seem to be, I suspect there are many in existing open source work that still haven't been discovered, and that XZ was just a rare exception where someone was caught.

Yes, hardening these systems might somewhat mitigate the risk, but I wouldn't know how to evaluate how effective such an intervention would be, or even how you'd harden them exactly. Even if you identify the at-risk projects, you'd need to do something about them. Would you hire software engineers to shore up the weaker projects? Given the cost of competent SWEs these days, that seems potentially expensive, and could compete for funding with actual AI safety work.
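To make the measurement question concrete, here's a rough sketch of what ranking and flagging could look like, assuming you already had a dependents/maintainers export from somewhere (a dependency-graph service, say) rather than scraping GitHub yourself; the file and column names are made up.

```python
# Rough sketch: rank open-source libraries by how widely they're depended on,
# then flag the most-depended-on ones that look under-resourced.
# The CSV and its columns are assumptions, not a real dataset.
import pandas as pd

df = pd.read_csv("oss_dependency_dump.csv")
# assumed columns: package, dependent_repos, active_maintainers, commits_last_year

top = df.sort_values("dependent_repos", ascending=False).head(100)

# Crude "resourcing" heuristic: huge blast radius, very few active maintainers.
at_risk = top[(top["active_maintainers"] <= 2) & (top["commits_last_year"] < 50)]
print(at_risk[["package", "dependent_repos", "active_maintainers"]].to_string(index=False))
```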
Matt_Lerner
I'd be interested in exploring funding this and the broader question of ensuring funding stability and security robustness for critical OS infrastructure. @Peter Wildeford is this something you guys are considering looking at?

I'm extremely excited that EAGxIndia 2024 is confirmed for October 19–20 in Bengaluru! The team will post a full forum post with more details in the coming days, but I wanted to get a quick note out immediately so people can begin considering travel plans. You can sign up to be notified about admissions opening, or to express interest in presenting, via the forms linked on the event page:

https://www.effectivealtruism.org/ea-global/events/eagxindia-2024

Hope to see many of you there!!

Do you like SB 1047, the California AI bill? Do you live outside the state of California? If you answered "yes" to both these questions, you can e-mail your state legislators and urge them to adopt a similar bill for your state. I've done this and am currently awaiting a response; it really wasn't that difficult. All it takes is a few links to good news articles or opinions about the bill and a paragraph or two summarizing what it does and why you care about it. You don't have to be an expert on every provision of the bill, nor do you have to have a group of people backing you. It's not nothing, but at least for me it was a lot easier than it sounded like it would be. I'll keep y'all updated on whether I get a response.

Both my state senator and my state representative have responded to say that they'll take a look at it. It's non-committal, but it still shows how easy it is to contact these people.

Ideas of posts I could write in comments. Agreevote with things I should write. Don't upvote them unless you think I should have karma just for having the idea, instead upvote the post when I write it :P

Feel encouraged also to comment with prior art in cases where someone's already written about something. Feel free also to write (your version of) one of these posts, but give me a heads-up to avoid duplication :)

(some comments are upvoted because I wrote this thread before we had agreevotes on every comment; I was previously removing my own upvotes on these but then I learned that your own upvotes don't affect your karma score)


Assessments of non-AI x-risk are relevant to AI safety discussions because some of the hesitance to pause or slow AI progress is driven by a belief that it will help eliminate other threats if it goes well.

I tend to believe that risk from non-AI sources is pretty low, and I'm therefore somewhat alarmed when I see people suggest or state relatively high probabilities of civilisational collapse without AI intervention. Could be worth trying to assess how widespread this view is and trying to argue directly against it.

Ben Millwood
This one might be for LW or the AF instead / as well, but I'd like to write a post about:
  • should we try to avoid some / all alignment research casually making it into the training sets for frontier AI models?
  • if so, what are the means that we can use to do this? how do they fare on the ratio between reduction in AI access vs. reduction in human access?
Ben Millwood
I made this into two posts, my first LessWrong posts:
  • Keeping content out of LLM training datasets
  • Should we exclude alignment research from LLM training datasets?

Lab-grown meat approved for pet food in the UK 

"The UK has become the first European country to approve putting lab-grown meat in pet food.

Regulators cleared the use of chicken cultivated from animal cells, which lab meat company Meatly is planning to sell to manufacturers.

The company says the first samples of its product will go on sale as early as this year, but it would only scale its production to reach industrial volumes in the next three years."

https://www.bbc.co.uk/news/articles/c19k0ky9v4yo

Also in the article "The Animal and Plant Health Agency - part of the Department for Environment, Food & Rural Affairs - gave the product the go-ahead."

I think there are a bunch of EAs working at Defra - I wonder if they helped facilitate this?

Something bouncing around my head recently ... I think I agree with the notion that "you can't solve a problem at the level it was created".

A key point here is the difference between "solving" a problem and "minimising its harm".

  • Solving a problem = engaging with a problem by going up a level from the one at which it was created
  • Minimising its harm = trying to solve it at the level it was created

Why is this important? Because I think EA and AI Safety have historically focussed on (and have their respective strengths in) harm-minimisation.

This applies obviously to the micro. ... (read more)

To me, your examples at the micro level don't make the case that you can't solve a problem at the level it's created. I'm agnostic as to whether CBT or meta-cognitive therapy is better for intrusive thoughts, but lots of people like CBT; and as for 'doing the dishes', in my household we did solve the problem of conflicts around chores by making a spreadsheet. And to the extent that working on communication styles is helpful, that's because people (I'd claim) have a problem at the level of communication styles.
 

I think it is good to have some ratio of upvoted/agreed : downvoted/disagreed posts in your portfolio. I think if all of your posts are upvoted/highly agreed with, then you're either playing it too safe or you've eaten the culture without chewing first.

Ben Millwood
I think some kinds of content are uncontroversially good (e.g. posts that are largely informational rather than persuasive), so I think some people don't have a trade-off here.

Good point. In that case the hypothetical user isn't using it as a forum (i.e. for discourse).

yanni kyriacos
As of this comment, 6 agrees and 6 disagrees. Perfect :)

What is "capabilities"? What is "safety"? People often talk about the alignment tax: the magnitude of capabilities/time/cost a developer loses by implementing an aligned/safe system. But why should we consider an unaligned/unsafe system "capable" at all? If someone developed a commercial airplane that went faster than anything else on the market, but it exploded on 1% of flights, no one would call that a capable airplane.

This idea overlaps with safety culture and safety engineering and is not new. But alongside recent criticism of the terms "safety" and "alignment", I'm starting to think that the term "capabilities" is unhelpful, capturing different things for different people.

skluug
I don’t think the airplane analogy makes sense because airplanes are not intelligent enough to be characterized as having their own preferences or goals. If there were a new dog breed that was stronger/faster than all previous dog breeds, but also more likely to attack their owners, it would be perfectly straightforward to describe the dog as “more capable” (but also more dangerous).
sawyer
I think people would say that the dog was stronger and faster than all previous dog breeds, not that it was "more capable". It's in fact significantly less capable at not attacking its owner, which is an important dog capability. I just think the language of "capability" is somewhat idiosyncratic to AI research and industry, and I'm arguing that it's not particularly useful or clarifying language. More to my point (though probably orthogonal to your point), I don't think many people would buy this dog, because most people care more about not getting attacked than they do about speed and strength. As a side note, I don't see why preferences and goals change any of this. I'm constantly hearing AI (safety) researchers talk about "capabilities research" on today's AI systems, but I don't think most of them think those systems have their own preferences and goals. At least not in the sense that a dog has preferences or goals. I just think it's a word that AI [safety?] researchers use, and I think it's unclear and unhelpful language. #taboocapabilities

I think game playing AI is pretty well characterized as having the goal of winning the game, and being more or less capable of achieving that goal at different degrees of training. Maybe I am just too used to this language but it seems very intuitive to me. Do you have any examples of people being confused by it?

David Rubinstein recently interviewed Philippe Laffont, the founder of Coatue (probably worth $5-10b). When asked about his philanthropic activities, Laffont basically said he’s been too busy to think about it, but wanted to do something someday. I admit I was shocked. Laffont is a savant technology investor and entrepreneur (including in AI companies) and it sounded like he literally hadn’t put much thought into what to do with his fortune.

Are there concerted efforts in the EA community to get these people on board? Like, is there a google doc with a six ... (read more)

Ozzie Gooen
I've been thinking about this issue recently too. I think it's pretty clear in the case of Warren Buffett and other ultra-wealthy people. Generally, I think EAs sort of live and breathe this stuff, while billionaires/major donors are typically in a completely different world and generally barely care about it.

I've been asking around about efforts to get more rich donors. I think Longview is often heralded as the biggest bet now, though of course it's limited in size. My guess is that there should be much more work done here - though at the same time, I think that this sort of work is quite difficult, thankless, risky (very likely to deliver no results), often a big culture clash, etc. Like, we need to allocate promising people to spend huge amounts of time with a lot of mostly-apathetic and highly selfish (vs. what we are used to around EA) people, with a high likelihood of seeing no results after 5-30 years.
david_reinstein
I think this is in the vein of what Jack Lewars is doing? https://www.linkedin.com/company/ultraphilanthropy/

Could be! 

I assume the space is big enough that it could absorb another 20-60+ people.

I've also heard of some other high-net-worth donor projects coming from Charity Entrepreneurship, but I haven't investigated.

titotal

I want to make my prediction about the short-term future of AI, partially sparked by this entertaining video about the nonsensical AI claims made by the Zoom CEO. I am not an expert on any of the following, of course; I'm mostly writing for fun and for future vindication.

The AI space seems to be drowning in unjustified hype, with very few LLM projects having a path to consistent profitability, and applications that are severely limited by the problem of hallucinations and the general fact that LLMs are poor at general reasoning (compared to humans). It see... (read more)

Ben Millwood
For publicly traded US companies there are ways to figure out the variance of their future value, not just the mean, mostly by looking at option prices. Unfortunately, OpenAI isn't publicly traded and (afaik) has no liquid options market, but maybe other players (Nvidia? Microsoft?) can be more helpful there.
David Mathers
If you know how to do this, maybe it'd be useful to do it. (Maybe not though; I've never actually seen anyone defend "the market assigns a non-negligible probability to an intelligence explosion".)

It's not really my specific area, but I had a quick look. (Frankly, this is mostly me just thinking out loud to see if I can come up with anything useful, and I don't promise that I succeed.)

Yahoo Finance has option prices with expirations in Dec 2026. We're mostly interested in upside potential rather than downside, so we look at call options, for which we see data up to strike prices of 280.[fn 1]

In principle I think the next step is to do something like invert Black-Scholes (perhaps (?) adjusting for the difference between European- and American-style o... (read more)
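To illustrate the mechanics with invented numbers, here's a minimal sketch of backing out an implied volatility from a quoted call price by inverting Black-Scholes (European-style, no dividend adjustment); none of the inputs below are real Dec 2026 quotes.

```python
# Minimal sketch: solve for the volatility at which the Black-Scholes call
# price matches an observed market price. All numbers are illustrative.
from math import exp, log, sqrt

from scipy.optimize import brentq
from scipy.stats import norm


def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)


def implied_vol(market_price, S, K, T, r):
    """Find the sigma that makes the model price equal the market price."""
    return brentq(lambda sig: bs_call_price(S, K, T, r, sig) - market_price, 1e-4, 5.0)


# Illustrative inputs: spot 120, strike 280, ~2.5 years to expiry, 4% risk-free rate.
print(implied_vol(market_price=10.0, S=120.0, K=280.0, T=2.5, r=0.04))
```

A high implied volatility at far out-of-the-money strikes would be (weak) evidence that the market puts real probability on extreme upside scenarios, though it says nothing about why.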

Hey everyone, my name is Jacques, I'm an independent technical alignment researcher (primarily focused on evaluations, interpretability, and scalable oversight). I'm now focusing more of my attention on building an Alignment Research Assistant. I'm looking for people who would like to contribute to the project. This project will be private unless I say otherwise.

Side note: I helped build the Alignment Research Dataset ~2 years ago. It has been used at OpenAI (by someone on the alignment team), (as far as I know) at Anthropic for evals, and is now used as t... (read more)

jacquesthibs
As an update to the Alignment Research Assistant I'm building, here is a set of shovel-ready tasks I would like people to contribute to (please DM if you'd like to contribute!):

Core Features

1. Set up the Continue extension for research: https://www.continue.dev/
   * Design prompts in Continue that are suitable for a variety of alignment research tasks and make it easy to switch between these prompts
   * Figure out how to scaffold LLMs with Continue (instead of just prompting one LLM with additional context)
   * Can include agents, search, and more
   * Test out models to quickly help with paper-writing
2. Data sourcing and management
   * Integrate with the Alignment Research Dataset (pulling from either the SQL database or Pinecone vector database): https://github.com/StampyAI/alignment-research-dataset
   * Integrate with other apps (Google Docs, Obsidian, Roam Research, Twitter, LessWrong)
   * Make it easy to look at and edit long prompts for project context
3. Extract answers to questions across multiple papers/posts (feeds into Continue)
   * Develop high-quality chunking and scaffolding techniques
   * Implement multi-step interaction between researcher and LLM
4. Design Autoprompts for alignment research
   * Creates lengthy, high-quality prompts for researchers that get better responses from LLMs
5. Simulated Paper Reviewer
   * Fine-tune or prompt an LLM to behave like an academic reviewer
   * Use OpenReview data for training
6. Jargon and Prerequisite Explainer
   * Design a sidebar feature to extract and explain important jargon
   * Could maybe integrate with some interface similar to https://delve.a9.io/
7. Set up an automated "suggestion-LLM"
   * An LLM periodically looks through the project you are working on and tries to suggest *actually useful* things in the side-chat. It will be a delicate balance to make sure not to share too much and cause a loss of focus. This could be custom for the researcher, with an option only to give automated suggestions post-research

We're doing a hackathon with Apart Research on the 26th. I created a list of problem statements for people to brainstorm off of.

Pro-active insight extraction from new research

Reading papers can take a long time and is often not worthwhile. As a result, researchers might read too many papers or almost none. However, there are still valuable nuggets in papers and posts. The issue is finding them. So, how might we design an AI research assistant that proactively looks at new papers (and old) and shares valuable information with researchers in a naturally consumab... (read more)

jacquesthibs
I've created a private discord server to discuss this work. If you'd like to contribute to this project (or might want to in the future if you see a feature you'd like to contribute to) or if you are an alignment/governance researcher who would like to be a beta user so we can iterate faster, please DM me for a link!

A couple takes from Twitter on the value of merch and signaling that I think are worth sharing here:

1) [embedded tweet not shown]

2) [embedded tweet not shown]

Media is often bought on a CPM basis (cost per thousand views). A display ad on LinkedIn, for example, might cost $30 CPM. So yeah, I think merch is probably underrated.
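As a back-of-envelope comparison (all numbers made up for illustration):

```python
# Back-of-envelope CPM comparison (all numbers invented for illustration).
ad_cpm = 30.0                  # $ per 1,000 impressions for a LinkedIn display ad
merch_cost = 20.0              # $ for one hoodie
merch_impressions = 50 * 100   # assumed: worn ~50 days, seen by ~100 people per day

merch_cpm = merch_cost / (merch_impressions / 1000)
print(f"ad CPM ${ad_cpm:.2f} vs merch CPM ${merch_cpm:.2f}")  # $30.00 vs $4.00
```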

I'm pretty confident that a majority of the population will soon have very negative attitudes towards big AI labs. I'm extremely unsure about what impact this will have on the AI Safety and EA communities (because we work with those labs in all sorts of ways). I think this could increase the likelihood of "Ethics" advocates becoming much more popular, but I don't know if this necessarily increases catastrophic or existential risks.


Basically, I think there is a good chance we have 15% unemployment rates in less than two years caused primarily by digital agents.

yanni kyriacos
Totally different. I had a call with a voice actor who has colleagues hearing their voices used online without remuneration. Tip of the iceberg stuff.
yanni kyriacos
Yeah the problem with some surveys is they measure prompted attitudes rather than salient ones.

What's the lower bound on vaccine development? Toby Ord writes in a recent post:

The expert consensus was that it would take at least a couple of years for Covid, but instead we had several completely different vaccines ready within just a single year

My intuition is that there's a lot more we can shave off from this. The reason I think this is that it seems like vaccine development is mostly bottlenecked by the human-trial phase, which can take many months, whereas developing the vaccine itself can be done in far less time (perhaps a month, but som... (read more)
