I'm particularly interested in sustainable collaboration and the long-term future of value. I'd love to contribute to a safer and more prosperous future with AI! Always interested in discussions about axiology, x-risks, s-risks.
I enjoy encountering new perspectives and growing my understanding of the world and the people in it. I also love to read - let me know your suggestions! In no particular order, here are some I've enjoyed recently:
Cooperative gaming is a relatively recent but fruitful interest for me. Here are some of my favourites:
People who've got to know me only recently are sometimes surprised to learn that I'm a pretty handy trumpeter and hornist.
I like this, and it's simultaneously exciting and bewildering to take seriously the prospect of punting difficult things.
It could be worth emphasising more clearly that this is about (futurist) strategy, which is about as cognitive as things get. Other types of preparation and problem-solving have other critical inputs, and may face ~inherent delays. For those, 'punting' can look risky, especially if you expect later phases to move quite fast. This has bearing on strategy: it's worth attempting to foretell the kinds of lead-time-constrained preparation that might be needed to face upcoming challenges.
(A concrete example that stands out to me is bio monitoring and defenses. But in general I'd love to see more and richer work on characterising emerging threats, especially technological. Not necessarily from Forethought! Other kinds of lead-time-constrained activities might involve coalition building and spreading well-informed takes about important topics.)
Knowing these authors, my guess is that, on the ontology question, they might say it could be instrumental in things like
These all look like activities with bearing on how to tackle 'early' challenges.
Helpful, thanks, I think I understand a little bit better now (still not yet sure what the specific tuple elements are doing)!
In case it's inspiring or provokes useful critique, here are some areas where I think compounding/reuse can be really useful in epistemic activities:
See also the collective epistemics discussion, if you haven't already, which I suspect might also be of interest to you!
Could you explain the community reuse thing again? I don't understand the tuples, but is the idea that query responses (which yield something like document sets?) can be cached with some identifiers? This helps future users by...? (Thinking: it can serve as a tag to a reproducible/amendable/updateable query, it can save someone running the exact same query again, ...)
That looks ambitious and awesome! I haven't looked deeply, but a few quick qs:
Basically +1 here. I guess some relevant considerations are the extent to which a tool can act as an antidote to its own (or related) misuse - and under what conditions of effort, attention, compute, etc. If that can be arranged, then 'simply' making sure that access is somewhat distributed is a help. On the other hand, it's conceivable that compute advantages or structural advantages could make misuse of a given tech harder to block, in which case we'd want to know that (without, perhaps, broadcasting it indiscriminately) and develop responses. Plausibly those dynamics might change nonlinearly with the introduction of epistemic/coordination tech of other kinds at different times.
In theory, it's often cheaper and easier to verify the properties of a proposal ('does it concentrate power?') than to generate one satisfying given properties, which gives an advantage to a defender if proposals and activity are mostly visible. But subtlety and obfuscation and misdirection can mean that knowing what properties to check for is itself a difficult task, tilting the other way.
Likewise, narrowly facilitating coordination might produce novel collusion with substantial negative externalities on outsiders. But then ex hypothesi those outsiders have an outsized incentive to block that collusion, if only they can foresee it and coordinate in turn.
It's confusing.
I appreciate this discussion a lot. Two things stand out to me as deserving more emphasis.
First though, a quick framing: 'good epistemic outcomes' as something like the product of 'people trying to understand clearly' and 'people being able to do that effectively'. (Of course these are interrelated, because people's willingness is obviously affected by the practicalities - more on that in point 2.)
OK, the things:
It looks to me like most of the object-level task of collective epistemics is checking and piecing together good 'secondary research' (broadly construed): looking at provenance, tracking the evidence and reasoning dependencies for a claim, proactively gathering the best arguments for and against, identifying reasons to downweight certain testimony, etc.
Most of the overall task of collective epistemics may be motivational, i.e. having more people, more of the time, actually trying to understand things accurately, rather than retreating into one or another alternative cognitive mode.