Matrice Jacobine🔸🏳️‍⚧️

Student in fundamental and applied mathematics
746 karma · Joined · Pursuing a graduate degree (e.g. Master's) · France

Bio

Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist

Posts: 41
Comments: 113
Topic contributions: 1

Social media recommendation algorithms are typically based on machine learning and generally fall under the purview of near-term AI ethics.

Sorry, I don't know where I got that R from.

I'm giving a ∆ to this overall, but I should add that conservative AI policy think tanks like FAI are probably accelerating the AI race on net, which should worry both AI x-risk EAs and near-term AI ethicists.

You can formally, mathematically prove a programmable calculator correct. You just can't formally prove every possible programmable calculator correct. On the other hand, if you can't mathematically prove a given programmable calculator, it might be a sign that your design is a horrible sludge. On the other other hand, deep-learned neural networks are definitionally horrible sludge.
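For concreteness, here is a minimal sketch of what that first sentence means, using the z3 SMT solver in Python; the doubling routine and its specification are illustrative assumptions, not anything from the original thread.

```python
# Minimal sketch: formally verifying one specific "calculator" routine
# with the z3 SMT solver. The routine and its spec are illustrative only.
# pip install z3-solver
from z3 import BitVec, Solver, unsat

x = BitVec("x", 32)  # a symbolic 32-bit input

def double_impl(v):
    # Implementation under test: doubling via a left shift.
    return v << 1

spec = 2 * x  # Specification: the routine should multiply its input by two.

s = Solver()
s.add(double_impl(x) != spec)  # Ask z3 for any counterexample to the spec.
result = s.check()
print("verified" if result == unsat else "counterexample found")
# unsat means no 32-bit input violates the spec, i.e. this routine is verified.
```

The asymmetry is the point: this one specific program is easy to verify, while no general procedure can decide the same kind of property for every possible program (Rice's theorem).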

Yes, those quotes do refer to the need for a model to develop heterogeneous skills based on private information, and to adapt to changing situations in real life with very little data. I don't see your problem.

How are "heterogeneous skills" based on private information and "adapting to changing situations in real time with very little data" not what continual learning means?

1) physical limits to scaling, 2) the inability to learn from video data, 3) the lack of abundant human examples for most human skills, 4) data inefficiency, and 5) poor generalization

All of those except 2) boil down to "foundation models have to learn once and for all through training on collected datasets instead of continually learning for each instantiation". See also AGI's Last Bottlenecks.

But the environment (and animal welfare) is still worse off in post-industrial societies than in pre-industrial societies, so you cannot credibly claim that going from pre-industrial to industrial (which is what we generally mean by global health and development) is an environmental issue (or an animal welfare issue). It's unclear whether helping societies go from industrial to post-industrial is tractable, but that would typically fall under progress studies, not global health and development.

I don't think Karpathy would describe his view as involving any sort of discontinuity in AI development. If anything, his is the most central no-discontinuity, straight-lines-on-graphs view (no intelligence explosion accelerating the trends, no winter decelerating them). And if you think the mean date for AGI is 2035, then it would take extreme confidence (a standard deviation on the order of a year or less) to claim AGI is less than 0.1% likely by 2032!
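To spell out the arithmetic behind that last sentence, here is a rough sketch that assumes, purely for illustration, a normally distributed AGI arrival year with mean 2035 (nothing in the comment commits to that exact model):

```python
# Rough sketch: under an assumed, illustrative normal distribution for the
# AGI arrival year with mean 2035, how tight must it be for P(AGI by 2032)
# to fall below 0.1%?
from scipy.stats import norm

mean_year, cutoff_year, target_prob = 2035.0, 2032.0, 0.001

# P(X < cutoff) = Phi((cutoff - mean) / sigma) < target_prob
# => sigma < (cutoff - mean) / Phi^{-1}(target_prob)
max_sigma = (cutoff_year - mean_year) / norm.ppf(target_prob)
print(f"needs sigma below ~{max_sigma:.2f} years")  # ~0.97 years

# For comparison, a 3-year standard deviation already puts ~16% of the
# probability mass before 2032.
print(f"P(before 2032 | sigma=3y) = {norm.cdf(cutoff_year, mean_year, 3):.1%}")
```

So, under this toy model, ruling out 2032 at the 0.1% level with a 2035 mean requires a spread of under a year, which is the "extreme confidence" in question.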
