Tom Ough

Let me jump straight to the more interesting conceptual question! I think that EA, for all its interest in the future, is not very good at envisaging futures people actually want to live in, or might feel at home in. I think Anglofuturism does a much better job of that. I think the more tasteful versions of Anglofuturism draw on Lindy-er aesthetics than does EA, and are more responsive to the natural human inclination towards kinship.

This might, to an EA, be what Anglofuturism gets wrong. It depends on your moral calibration. If you are truly impartial then you would probably be unimpressed by a worldview that is informed by kinship. An EA could also accuse Anglofuturism of being overly whimsical in a world that's on fire.

I'll sign off here with sincere thanks to everyone who's participated in this AMA. 

To your first question: I think it's pretty clearly the case that religion, specifically Christianity and Anglicanism, has been central to British culture and heritage for a very long time. It's only very recently that Christianity has lost its place at the heart of British culture. (I'd like to read Bijan Omrani's book on this topic.)

To your second: I'm one of a few people who've banged on about Anglofuturism, so I won't speak for all of them. Speaking for myself, though, I'm not religious, and I have no sense that being religious is a requirement for Anglofuturism. (My podcast co-host Calum is a God-fearing man and might disagree.) Naturally, Anglofuturist depictions of the future invoke the country's heritage, including its religious heritage, but I don't think you have to be religious to enjoy the idea of a Bishopric of Mars.

Very good questions. They go a little beyond my expertise. It's striking that, in Britain, recent government success stories (ARIA, AISI, the vaccines taskforce) have existed outside the normal bureaucratic structures. That principle probably holds for geoengineering projects. But geoengineering projects would have the additional complexity of affecting everyone, which seems to demand accountability. In practice, though, I'm not convinced that there'll ever be a situation where everyone on the planet gets a vote on something like stratospheric aerosol injection. I gather the Degrees Initiative is trying to address that question of accountability. I also think that advocates for geoengineering should be wary of appearing astroturfed.

Thanks for your questions. I worry that even if AI ends up being safe in an X-risk sort of way, it might nevertheless create a world we wouldn't want to bring into existence. I think recommender algorithms are a good example of this: they are already extractive, and it's easy to imagine that more powerful AI will intensify such dynamics. I would like to see more attention being given to AI that is not only "helpful, honest and harmless", but conducive to fulfilment. What that AI looks like, I have no idea, and I don't want to invoke paternalism and nannying. But I'll give a hat-tip to my friends at the Meaning Alignment Institute, who have done a lot of work on this.

The single attribute most likely to make the difference between the success and failure of a journalistic career is the ability to bring in original reporting: hitherto-untold stories that editors want to commission and readers want to read. I'd advise that journalists wanting to write about the topics you refer to (I feel I'm better-placed to advise young journalists than young EAs) think about them in those terms. As an editor, I'm less interested in commissioning pieces on interesting topics than I am in commissioning original treatments of interesting topics — especially when those treatments involve new reporting.

I'd call the book a work of journalism rather than activism, give or take the odd bit of loose talk about how stupid it is that we don't build more nuclear power stations. For that reason, I don't have any particular goals in mind. That said, I would certainly like to live in a world that's safe from GCRs! 

For the same reason, I don't have a well-formed idea of a desired audience. (Naturally, I hope that millions of highly discerning readers find something worthwhile in it.) I'm sure my publisher has ideas regarding who's most likely to buy it. It's been said it's a book for "dads", though I want to make it clear to any mothers who happen to be EA Forum users that they are very welcome to read it too!

Hmm. I think it's always important for writers to think about the ways that human brains like to ingest information: via stories, e.g. stories about unusual people whose place in the world is at stake. Steven Pinker and Will Storr are both quite illuminating on this kind of thing. EA writing, in its natural form, has lots going for it, but it can be quite prolix and abstract for lay audiences, and indeed for EAs!

A classic bit of writerly advice is to tell stories (and I use that term in the most capacious way possible) the way you would in the pub: you reduce them to the bits people find most interesting, knowing that you've got to be brief, and you speak in a conversational way. This approach is particularly valuable when the topic is unfamiliar, complex or abstract, as many EA ideas can be at first blush.

I think that EA, to the extent it's a coherent entity, has a pretty good mental map of who its potential collaborators are. I expect that mental map is more detailed than one I could knock up off the top of my head, so it probably won't be very useful for me to speculate on EA's behalf. I'll try anyway. The task of anyone trying to do anything is usually to convince governments to take their thing seriously, e.g. by setting up wastewater monitoring. Particularly when it comes to pandemics, there's probably lots of useful stuff that just hasn't quite been invented yet. Far-UVC, for instance, is a relatively recent innovation. Perhaps there are people in university science departments sitting on similarly good ideas.

As for EA x Progress, there's certainly overlap. Of course there are lots of different kinds of EA and lots of different kinds of progress-head. Both groups think of problems in fairly numerical terms. Both are animated by big ideas. I think there's probably some EA-flavoured work to be done on how good it'd be for the world if the West got its act together in economic terms. I also think there's a lot of overlap in terms of interest in AI, and that, between them, there's a plausible EA-Progress double act that tries to make AI go well without going badly, so to speak.

Thanks for having me. I think it might still be the case that most British journalists don't know what EA is. Journalists are naturally sceptical, and some will think immediately of SBF. There's also the contrarian school of thought that longtermists are a bunch of sci-fi fantasists doing the bidding of Big Tech. I know at least some of those journalists are kindly disposed towards EAs, even if they don't share the EA worldview.

As for what my fellow journalists made of the project – there's probably a bit of a selection effect here! Maybe ask them when I'm not around...
