
TL;DR

Anyone I met at EAG 2024 with whom I had a meaningful conversation (broadly defined: not limited to formal 1-on-1s, and including office hours and the like) can contact me to get a small (free) painting as an adventure in relational aesthetics (see below). Search ‘kanad’ on the Alignment Slack, or DM ukc10014 on the EA Forum/LessWrong.

Contextual soapbox

This last EAG got me thinking again about aesthetics in the movement.  A few people, including me, have argued that art/aesthetics and a deep engagement with culture are important for projects like EA, which I see as both a philosophical and a practical project.  Besides their instrumental value in communicating ideas and building a sense of community identity[1], aesthetics (on some views) might be connected to developing and deliberating moral intuitions.  There is a lot written on this from a philosophy perspective, but a relatively accessible version comes from Wittgenstein.[2]

The paragraph above contextualises a small artistic ‘experiment’ (or gesture, in art-speak) I am trying after EAG 2024.  It is inspired by Gwern’s story about AGI, Richard Ngo’s stories, Jack Clark’s stories, and of course Eliezer and Bostrom; but I note that relatively little visual material exists in EA/AI-adjacent spaces (there are documentaries and immersive projects like Aron Mill’s).[3]

Clippy Technical Objects

I had a lot of valuable interactions with people, within and outside AI safety, at EAG 2024, and was interested in what ‘residue’ those interactions leave, on either side: does anyone think differently about what they’re doing? Do worldviews change?  Mostly, this will play out over weeks and months, and while my impact on others might be minimal, a few chats will likely have been very useful for me.  Anyway, I wanted to explore this idea of residue or ‘trace’ through a tangible thing.[4]

If I met you at EAG this last weekend (or you are a past or future collaborator), please DM me.[5] I have a series of lightweight (approx. 18x24x1cm) paperclip-objects (paintings, at present) that I'm making, and I would send you one (within the UK I’ll absorb the cost of shipping; for overseas I might need to charge). Importantly, please don’t feel obliged to ask for one (I personally hate getting gifts!).[6]

I think of these as objects that materialise/prompt thinking about aesthetics/philosophy of post-AGI worlds.[7] They encode the following propositions/ideas:

  • Firstly, an intelligence that tiles the universe with paperclips, squiggles, hedonium, etc. seems straightforwardly bad.  But as a (roughly as improbable) provocation: what about one that tiles it with paintings, sculptures, or vast planet-sized biological paperclip-shaped gardens full of diverse, happy plants and insects?[8]
  • Secondly, I see a lot of parallels between art objects, the art market, and financial markets (many of which have, at best, ethically ambiguous implications).  At the same time, there are alternative, non-financial, non-object, or gift-based currents in art (particularly over the past few decades) that might be interesting to some - happy to discuss.
  • Thirdly, what connection does aesthetics have to ethics? For instance, is there any reason to believe that people who appreciate art are ‘better’ people? I happen not to think so, but maybe there is.[9]
  • Fourthly, and more speculatively, maybe an advanced intelligence would still keep humans around as a sort of source of creativity (as someone at EAG suggested to me), or as a rather special source of randomness; this would be interesting to discuss.[10]
  • Lastly, an artwork can serve as an object of contemplation or meditation (Western medieval relics, certain everyday objects in Japanese culture), and as a reminder of the striving for a better future that most people in EA share, even if they differ on cause areas or methods.
  1. ^

    With the appropriate caveat about going too far, i.e. avoiding cult-like behaviour.

  2. ^

    I also find Alva Noe useful; or this book on posthumanism, or for a more concrete/less philosophical analysis in the context of surveillance systems and media studies.

  3. ^

    A difference between what I’m doing and these examples is I’m not explicitly trying to change anyone in particular’s mind about x- or s-risks, AI, or EA - this is much closer to ‘art for art’s sake’ and therefore might not fit narrow ‘theory of change’ or utility-/effectiveness-based value criteria.

  4. ^

    This roughly falls within the field of relational aesthetics - see accessible and academic explanations.

  5. ^

    Collaborations might include AI Safety Camp, the PIBBSS reading group, or informal research pairings. I plan to continue the project for the rest of 2024, assuming it doesn’t become too expensive in money or time, and subsequent reflection/conversation doesn’t cause me to update downward on how interesting the idea is.

  6. ^

    These are objects in the sense that I might not make them like traditional paintings - parts could be 3D printed, or could have embedded computation - depends on cost/time.

  7. ^

    I use the word ‘object’ intentionally to get away from words like ‘art’ that have loaded meanings in culture (or just are overused and ambiguous); but I’m also getting at the difference between aesthetic things and technical things; a discussion more relevant to computer science/AI is here.  In twentieth-century minimalist and post-minimalist sculpture, there is also the idea of the specific object, e.g. Donald Judd.

  8. ^

    Think Lem’s Solaris or Iain M. Banks’ Ronte with their dancing spaceships in Hydrogen Sonata. In other words, is there an aesthetic angle, perhaps from the Fun Theory perspective, to the standard argument?  I'm ignoring the fact that the paperclip/squiggle-maximiser thought experiment has become more nuanced - from a story about outer alignment to one of inner alignment.

  9. ^

    Current world events often take me back to the opening quote in this essay, which analyses a question from literary critic George Steiner: “How can men come home from their day’s butchery and falsehood to weep over Rilke or play Schubert?”

  10. ^

    Specifically, in what ways is the multi-scale randomness generated by 8 billion humans different from that of, say, 8 billion AIs somewhat more advanced than today’s SOTA, with robust multi-modal, ~1-month planning, modest goal-formation, and online-learning abilities?
