niplav

1014 karma · Joined · niplav.site

Bio

I follow Crocker's rules.

Comments (167)

Nope, it was Yudkowsky in a Facebook group about AI x-risk around 2015 or 2016. He specifically said he didn't think deep learning was the royal road to AGI.

This is a narrow point[1], but I want to note that [not deep learning] is extremely broad, and the usage of the term "good old-fashioned AI" has been moving around between [not deep learning] and [deduction on Lisp symbols]. I think there's a huge space of techniques in between (probabilistic programming, program induction/synthesis, support vector machines, dimensionality reduction à la t-SNE/UMAP, evolutionary methods…).


  1. A hobby-horse of mine. ↩︎

I find "epistemics" neat because it is shorter than "applied epistemology" and reminds me of "athletics" and the resulting (implied) focus on being more focused on practice. I don't think anyone ever explained what "epistemics" refers to, and I thought it was pretty self-explanatory from the similarity to "athletics".

I also disagree with the general notion that jargon specific to a community is necessarily bad, especially if that jargon has fewer syllables than the phrase it replaces. Most subcultures, engineering disciplines, and sciences invent words or abbreviations for more efficient communication, and while some of that may be due to trying to gatekeep, the practice is so universal that I'd be surprised if it didn't carry value. There can be better and worse coinages of new terms, and three-to-five-letter abbreviations such as "TAI", "PASTA", "FLOP", or "ASARA" are worse than words like "epistemics" or "agentic".

I guess ethics makes the distinction between normative ethics and applied ethics. My understanding is that epistemology is not about practical techniques, and that one can make a distinction here (just like the distinction between "methodology" and "methods").

I tried to figure out whether there's a pair of suffixes that expresses the difference between the theoretical study of some field and its applied version. Claude suggests "-ology"/"-urgy" (as in metallurgy, dramaturgy) and "-ology"/"-iatry" (as in psychology/psychiatry), but notes that no such general pattern exists.

Yep, I wouldn't have predicted that. I guess the standard retort is: Worst case! Existing large codebase! Experienced developers!

I know there are software tools I use >once a week that wouldn't have existed without AI models. They're not very complicated, but they'd have been annoying to code up myself, and I wouldn't have done it. I wonder whether there's a slowdown in less harsh scenarios too, but the value of information probably isn't high enough to be worth running such a study.

I dunno. I've done a bunch of calibration practice[1]; this feels like a 30%, so I'm calling 30%. My probability went up recently, mostly because some subjectively judged capabilities I was expecting didn't show up.


  1. My Metaculus calibration around 30% isn't great; I'm overconfident there, and I'm trying to keep that in mind. My Fatebook record is slightly overconfident in that range, and who can tell with Manifold. ↩︎
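As a concrete illustration of what checking calibration "in that range" cashes out to, here is a minimal sketch (the data and function names are made up by me, not anything exported from Metaculus or Fatebook): group resolved forecasts into probability buckets and compare the average stated probability with the observed resolution rate in each bucket.

```python
from collections import defaultdict

# (stated probability, resolved true?) pairs -- made-up example data
forecasts = [(0.30, False), (0.35, True), (0.30, True), (0.25, False),
             (0.70, True), (0.65, True), (0.30, False), (0.35, False)]

def calibration_by_bucket(forecasts, bucket_width=0.1):
    """Group forecasts into probability buckets and compare the average
    stated probability with the observed frequency of resolution."""
    buckets = defaultdict(list)
    for prob, outcome in forecasts:
        buckets[round(prob / bucket_width)].append((prob, outcome))
    for index in sorted(buckets):
        entries = buckets[index]
        stated = sum(p for p, _ in entries) / len(entries)
        observed = sum(o for _, o in entries) / len(entries)
        print(f"bucket ~{index * bucket_width:.1f}: "
              f"stated {stated:.2f}, observed {observed:.2f}, n={len(entries)}")

calibration_by_bucket(forecasts)
```

A gap between the stated and observed columns in the ~0.3 bucket is the kind of miscalibration I'm trying to keep in mind here.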


What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?

I put 30% on this possibility, maybe 35%. I don't have much more to say than "time horizons!", "look how useful they're becoming in my dayjob & personal life!", "look at the qualitative improvement over the last six years", "we only need to automate machine learning research, which isn't the hardest thing to automate".

Worlds in which we get a bubble pop are worlds in which we don't get a software intelligence explosion, and in which either useful products come too late for the investment to sustain itself, or there aren't really many useful products beyond what we already have. (This is tied in with "are we getting TAI through the things LLMs make us able to do/are able to do, without fundamental insights?".)

Right, I'd forgotten that betting on this is hard. I was wondering whether one could do a sort of cross-over between an end-of-the-world bet and betting a specific proportion of one's net worth. This is the most fleshed-out proposal I've seen so far.

But I don’t want to give a stranger from another country a 7-year loan that I wouldn’t be able to compel them to repay once the time is up.

I wonder if this could be solved via a trusted third person who knows both bettors. (I think there are possible solutions here via blockchains, e.g. the ability to unilaterally destroy an escrow, but I guess that would become quite complicated, wouldn't be worth the setup, and would use a technology I guess you're skeptical of anyway.)
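To make the "unilaterally destroy an escrow" idea concrete, here is a toy sketch of the mechanism (plain Python rather than an actual smart contract; the class and names are made up for illustration): the stake is paid out only if both bettors approve the same recipient, but either bettor alone can burn it, which removes the incentive to simply run off with the money.

```python
class BurnableEscrow:
    """Toy model of a two-party escrow: funds can be released only with
    both parties' approval, but either party alone can burn them."""

    def __init__(self, party_a, party_b, amount):
        self.parties = {party_a, party_b}
        self.amount = amount
        self.approvals = set()
        self.state = "locked"        # locked -> released | burned

    def approve_release(self, party, recipient):
        assert self.state == "locked" and party in self.parties
        self.approvals.add((party, recipient))
        # pay out only once both parties have approved the same recipient
        if {(p, recipient) for p in self.parties} <= self.approvals:
            self.state = "released"
            return f"{self.amount} paid out to {recipient}"
        return "waiting for the other party"

    def burn(self, party):
        """Either party can unilaterally destroy the stake."""
        assert self.state == "locked" and party in self.parties
        self.state = "burned"
        return "stake destroyed, nobody gets paid"


# usage: both bettors must agree on the winner before anything is paid out
escrow = BurnableEscrow("alice", "bob", amount=100)
print(escrow.approve_release("alice", recipient="alice"))  # waiting
print(escrow.approve_release("bob", recipient="alice"))    # paid out
```

A trusted third person plays roughly the same role with less machinery, which is why the blockchain version probably isn't worth the setup.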

I find this post quite good, especially the section I linked, which specifically notes that solidarity ≠ altruism. Also this post.

I've been confused about the "defense-in-depth" (Swiss cheese) analogy. The analogy works in two dimensions, and we can visualize how constructing multiple barriers with holes blocks every path from a point out of a three-dimensional sphere.

(What follows is me trying to think through the mathematics, but I lack most of the knowledge to evaluate it properly. Johnson-Lindenstrauss may be involved in solving this? (it's not, GPT-5 informs me))

But plans in the real world are very high-dimensional, right? So we're imagining a point (let's say at the origin) in a high-dimensional space (let's say ℝ^n for large n, as an example), and an n-sphere around that point. Our goal is that there is no straight path from the origin to somewhere outside the sphere. Our possible actions are that we can block off sub-spaces within the sphere, or construct n-dimensional barriers with "holes" inside the sphere, to prevent any such straight paths. Do we know the scaling properties of how many such barriers we have to create, given such-and-such "moves" with some number of dimensions/porosity?
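One crude way to poke at the scaling question numerically, as a toy sketch rather than an answer: suppose each barrier is a concentric spherical shell with a single circular "hole" of fixed angular radius, oriented at random, so a straight ray from the origin escapes only if its direction fits through every hole. This model and all names below (e.g. escape_probability) are my own simplification, not anything from the original analogy.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(num, dim):
    """Sample `num` uniformly random directions in dim-dimensional space."""
    v = rng.normal(size=(num, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def escape_probability(dim, num_barriers, hole_angle, num_rays=100_000):
    """Estimate the chance that a straight ray from the origin escapes.

    Toy model: each barrier is a concentric spherical shell with one
    circular hole of angular radius `hole_angle` (radians), oriented
    uniformly at random; a ray escapes only if its direction lies
    inside the hole of every barrier.
    """
    rays = random_unit_vectors(num_rays, dim)        # candidate escape directions
    holes = random_unit_vectors(num_barriers, dim)   # hole centres, one per barrier
    cosines = rays @ holes.T                         # cos(angle) between rays and hole centres
    passes_every_barrier = np.all(cosines > np.cos(hole_angle), axis=1)
    return passes_every_barrier.mean()

for dim in (2, 3, 10, 50):
    p = escape_probability(dim, num_barriers=3, hole_angle=np.pi / 4)
    print(f"dim={dim:3d}  estimated escape probability: {p:.5f}")
```

In this particular toy model, the fraction of directions that fit through a hole of fixed angular size collapses as the dimension grows, so a fixed number of barriers blocks almost everything in high dimensions, though that leans entirely on the barriers being full concentric shells rather than partial sheets.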

My purely guessed intuition is that, at least if you're given porous (n−1)-dimensional "sheets" you can place inside the n-sphere, you need increasingly many of them as the dimensionality n grows. Never mind, I was confused about this.

Whereas many people in EA seem to think the probability of AGI being created within the next 7 years is 50% or more, I think that probability is significantly less than 0.1%.

Are you willing to bet on this?

Yeah, I goofed by using Claude for math, not any of the OpenAI models, which are much better at math.
