Thank you for writing out this argument! I had a quick question about #2. The earlier naive screening would catch a pandemic, the faster that pandemic is likely to be spreading. So even though early detection buys less calendar time in those cases, it might still buy plenty of value, because each doubling of transmission happens so quickly.
This still depends on mitigating the concerns you raised in #1 and #3, though.
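To put rough numbers on the doubling-time point, here's a toy back-of-the-envelope sketch in Python. All parameters are invented for illustration, and it assumes simple constant exponential growth:

```python
def doublings_avoided(lead_time_days, doubling_time_days):
    """Doublings of spread avoided by detecting `lead_time_days` earlier,
    under a crude constant-exponential-growth assumption."""
    return lead_time_days / doubling_time_days

# Invented numbers: a fast pathogen (2-day doubling) caught only 4 days
# before naive screening would catch it, vs. a slow pathogen (10-day
# doubling) caught a full 14 days earlier.
for doubling_time, lead_time in [(2, 4), (10, 14)]:
    d = doublings_avoided(lead_time, doubling_time)
    print(f"doubles every {doubling_time:>2}d, detected {lead_time:>2}d earlier: "
          f"{d:.1f} doublings avoided (~{2 ** d:.1f}x fewer cases at detection)")
```

On these made-up numbers, the fast pathogen buys fewer calendar days but avoids more doublings, which is the quantity that matters.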
Quick question on the intuition pump about 7 minutes of the worst conceivable experience every day. Would you be aware, after the fact, that it had happened? For me at least, a lot of what would make the randomly chosen day of torture so terrible is how it would affect the rest of my day, rather than the excruciating moments of pain themselves.
Thanks for releasing this. I'm curious which is the more interesting sample here: somewhat established alignment researchers (using "has published a paper" as a proxy), or the general population of survey respondents (including those with briefer prior engagement)?
I filled out this survey because it got signal-boosted in the AI Safety Camp Slack. At the time, there were questions about the funding viability of AI Safety Camp, so I was strongly motivated to fill it out for the $40 donation. At the same time, I'm not sure that I have engaged deeply enough with alignment research to be called an "alignment researcher." Given that AISC was the most common donation target, this dynamic may have skewed the sample in general.
Skimming through the visualization tool (very cool, thank you!), I noticed that the personality questions don't seem to differ between the two groups, but the political questions do vary a bit. For instance, among alignment researchers who have published a paper, around 55% support or strongly support pausing, whereas among those who haven't published a paper, it's around 75%. Which population should this analysis rely on?
I don't have a good answer to this, but I did recently read a blog post that might be relevant. In it, two philosophers summarize their paper, which argues against concluding that longtermists should hasten extinction rather than prevent it. (Their paper was prompted by this paper by Richard Pettigrew, which argued that longtermism should be highly risk-averse. I realize that this is a slightly separate question, but the discussion seems relevant.) Hope this helps!
Thanks for the piece. I think there's an unexamined assumption here about the robustness of non-Earth settlement. It may be possible to maintain a settlement on another world for a long time, but unless we get insanely lucky, it seems unlikely to me that anyone could live on another planet without sustaining technology at or above our current capabilities. It may also be that, in the medium term, these settlements depend on Earth for manufacturing, resources, etc., which reduces their independence.
This isn't fatal to your thesis (especially over the very long term), but I think a high minimum technology threshold does undercut it to some extent.
tldr: A mathematics major graduating in May. Looking for next steps, in AI or elsewhere, but unsure exactly what I want. Happy in general quantitative, policy, or operations roles.
Skills: Strong math background (plus familiarity with stats). Research skills (in math and AI safety), including some coding (esp. Python) and clear writing (won an outstanding-poster award at a math conference). Project management from an Amazon operations internship; ran a painting business for two summers and managed finances for an independent debate club at my school.
Location/remote: Currently in Philadelphia. Would move for a good opportunity, but prefer East Coast US or remote.
Availability & type of work: I can start in mid-to-late June (or later). Prefer full-time roles or volunteering.
Resume: Resume link. LinkedIn here.
Email/contact: DM me, or message me via LinkedIn.
Other notes: I have a great deal of uncertainty about my longer-term plans. I'd welcome any suggestions, particularly ones that would help me figure out whether I should return to graduate school at some point.
I generally like the innovation-as-mining hypothesis as applied to the sciences and, to some extent, the arts, but I think there is one issue in the logical chain.
You said that "[i]f not for this phenomenon [that ideas get harder to find], sequels should generally be better than the original," but I don't think this is necessarily true. A more likely reason that sequels aren't generally better than the original is a combination of two effects: regression to the mean and selection effects.
I think a relevant example here is albums. There is this idea of a "sophomore slump," where a band's second album tends to be worse than its first. I don't think this is because it's hard to make good albums after your first (quality generally seems to improve over the next few albums after that), but because of a shrinking pool of songs to choose from. On an artist's debut album, they can choose from pretty much any song they've ever written. On the second, however, they are restricted to songs written after the first album, plus anything that wasn't good enough to make it onto the first. Even though they don't face the constraint of working within a pre-existing world, quality decreases. I suspect the same dynamic is at work here.
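As a sanity check on this selection-effect story, here's a quick toy simulation. All numbers are invented: song quality drawn i.i.d. from a standard normal, a 50-song backlog before the debut, 20 new songs per album cycle, and 10 tracks per album:

```python
import random

random.seed(0)

def make_album(pool, n_tracks=10):
    """Pick the best n_tracks songs from pool; return (mean quality, leftovers)."""
    pool = sorted(pool, reverse=True)
    return sum(pool[:n_tracks]) / n_tracks, pool[n_tracks:]

debut_sum, second_sum, trials = 0.0, 0.0, 10_000
for _ in range(trials):
    backlog = [random.gauss(0, 1) for _ in range(50)]    # everything written pre-debut
    debut, leftovers = make_album(backlog)
    new_songs = [random.gauss(0, 1) for _ in range(20)]  # written between albums
    second, _ = make_album(leftovers + new_songs)
    debut_sum += debut
    second_sum += second

print(f"average debut quality:  {debut_sum / trials:.2f}")
print(f"average second quality: {second_sum / trials:.2f}")
```

Even with songwriting ability held perfectly constant, the second album scores lower, purely because the debut skimmed the cream of the backlog.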
As such, I'm not sure why "ideas get harder to find" should be the default explanation. Another reason per-capita great successes could be lower than in the past has to do with competition and the importance of being the best. In sports, a lot of attention is paid to the greats of the past, even though current athletes perform at a higher level. The fiftieth-best quarterback in the NFL today could very well be better than the best quarterback in 1980, but we remember the best quarterback then and not the fiftieth-best now. Because the middlebrow gets crowded (essentially, to be successful in the middlebrow, you need to be successful everywhere), there is much stiffer competition to be the best of the middlebrow today.
The result is likely more total success but less success per capita, which is roughly the pattern we see right now. That being said, I enjoyed the post.
Thanks for the post! This may not be helpful, but one thing I'd be curious to see is how the dispersion parameter k (discussed here; I'm sure there's a better reference source) affects the importance of having many sites. With COVID, a lot of transmission came from superspreader events, which intuitively would increase the variance in how quickly it spreads across different sites. The flu, on the other hand, has a low proportion of superspreader events, so testing at a single well-connected site might explain more of the variance?
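To gesture at what I mean, here's a rough branching-process sketch with invented parameters (R = 2, six generations). Secondary cases are negative-binomial with dispersion k, implemented as a gamma-Poisson mixture, so small k means a few superspreaders do most of the transmitting:

```python
import numpy as np

rng = np.random.default_rng(0)

def outbreak_size(R=2.0, k=0.1, generations=6, cap=50_000):
    """Total cases from one introduction at a site, as a branching process.

    Each case gets an individual infectiousness drawn from Gamma(k, R/k)
    (mean R) and infects Poisson(infectiousness) others, i.e. negative-
    binomial secondary cases with dispersion k.
    """
    cases, total = 1, 1
    for _ in range(generations):
        if cases == 0 or total > cap:
            break
        rates = rng.gamma(shape=k, scale=R / k, size=cases)
        cases = int(rng.poisson(rates).sum())
        total += cases
    return total

for k in (0.1, 1.0, 10.0):  # ~0.1 is often quoted for SARS-CoV-2; flu is higher
    sizes = np.array([outbreak_size(k=k) for _ in range(2000)])
    print(f"k = {k:>4}: {np.mean(sizes < 10):.0%} of sites see <10 cases; "
          f"site-to-site coefficient of variation = {sizes.std() / sizes.mean():.2f}")
```

On this toy model, low k means most introductions fizzle while a few sites explode, which would seem to favor spreading tests over many sites; high k makes sites more interchangeable. Real parameter choices would matter a lot, of course.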