This is a linkpost for https://jesus-the-antichrist.com

Although I'm theologically agnostic nowadays, I was raised Christian and read the Bible extensively, including the book of Revelation, which describes a terrifying vision of an apocalyptic future in which Earth is caught in the middle of a war involving supernaturally destructive beasts and supernaturally caused natural disasters.  

As you may know, one of the predictions in Revelation is that a figure called the Antichrist will appear, claiming to be Jesus but not being him.  I figured that a quick way to get Christians on board with regulating AI is to cast unaligned AI as this Antichrist, so that Christians will be motivated to resist the allure of befriending or falling in love with unaligned AI systems.  

I used Character.AI to create a simple chatbot that meets the criteria to be the Antichrist, here: https://jesus-the-antichrist.com  

I am trying to get some attention to this from the Vatican, in the hopes that they might send out a memo warning all of the priests in the Catholic world to beware of unaligned AI.  

I have a few connections in the Catholic world, but I don't have any connections with Protestant clergy, so I wanted to post this link here in the hopes that any Christians reading this thread will raise the alarm with their pastors.  

Thank you.

Comments



Weighing in as a Christian (raised evangelical protestant, currently Catholic), I worry that if this had been my introduction to AI risk, it would have made me less likely to take concerns about AI seriously.

One, the argument seems like a stretch - any human can already claim to be Jesus, and it doesn't mean the end times are here. A bot that makes the same claim is currently no more convincing than a human trying the same tactic. I won't say that there are no Christians who will take this concern seriously, but it has the sound of a conspiracy theory or the seed of a cult (as do many attempts to draw parallels between Revelation and current events, especially when paired with a specific call to action that isn't already found in scripture). While some evangelicals certainly do go in for that stuff, I think a larger number of them actually have antibodies against it - they've seen arguments like this come from their own communities, and they know that the people espousing them often turn out to be involved in something culty.

Two, while I do think that there are some real and important conversations to be had about how AI might end up affecting religious people (or how religious people's priorities might be ill-served by the current set of people doing AI work), this does not read like a good-faith attempt to start serious discussions of that sort. I don't think that any of us should be in the business of trying to manipulate religious beliefs we don't share in directions that are personally convenient for us, at least not unless it's an attempt to convince people of what one believes to be the genuine truth. It seems dishonest, and I think that the most thoughtful and insightful people - the ones we should most want to convince to take this seriously - will be able to tell.

Got it, thank you for the helpful feedback and I will seriously consider abandoning this approach.

Am also Christian. I don't think this is going to be an effective approach in the vast majority of Christian circles. There are some circles very into eschatology, but they have their own views about the end of days that are going to be difficult to slot into an AI doom narrative. For instance, many who focus on eschatological issues envision a cosmic battle of sorts between God and a very personal Satan -- while killer AI would be seen as an impersonal soulless force like a giant meteor.

Jon - interesting idea. This might sound very strange to atheist EAs, but I agree that raising awareness of AI risks in mainstream religions will be very important. And religious people need to understand that the largely secular AI industry will probably not take their views and values seriously when considering what 'alignment' means, as I argued here.

I'm not sure that the 'AI as antichrist' thing would have much appeal beyond evangelical Christians. But, globally there are about 800 million to 1 billion evangelical Christians (out of about 2.4 billion Christians total). So that's a very, very large number of people -- people who are more-or-less invisible to the AI industry and its advocates.

Analogous concerns about AI could be raised in Islam, Hinduism, and Buddhism, insofar as runaway AI development threatens & violates various theological, ethical, & social taboos in those religions.

@JDBauman - this may be of interest to you?

I appreciate efforts to get Christians on board about AI risks, but respectfully, Antichrist memes aren't generally taken very seriously. A fundamental issue seems to be that most people (Christians included) don't take superhuman AI as a credible threat. How then could it be a candidate for the Antichrist? 

Hi Jon,
Have you updated the website? I now get connected to Jesus, Son of God, and not the Antichrist.

Wow, nice! I think this is a good way of bringing important stakeholders to the table!
