Beige rocks over beige earth. Augustin walks and, being one of those for whom the horizon lies on the ground, shades of beige are all he sees. “Remember that time I dove into the ocean to catch that pearl you had spotted?” Ian, framing his words with wide, quick gestures, asks his twin brother. Augustin, elbows tucked and hands in his pockets, follows the path, not the overwrought words. Ian, the older twin by four minutes, taps his brother’s nonchalance on the shoulder: “Remember that time I dove into the ocean to catch that pearl you had spotted?” “You almost died”, Augustin lets an answer escape between his teeth. “It was the biggest pearl ever!” Ian presses on. “It wasn’t a pearl”, Augustin insists. “It was!” A pattern arises: “Pearls do not die!” “After all these years, you are still angry?” Ian slaps his brother’s shoulder. “Do not blame me. I told you, you should have given the pearl to that girl of yours!” Augustin winks, “Nah, she wasn’t the one”. “Who cares!” Ian almost screams, “the pearl rotted anyway!” “That is my point. Pearls do not die”, he claims, once again having to repeat the same answer to his narrow-minded brother. After a blink, his eyes spot a brighter beige in the soil. Without taking his hands out of his pockets, he lets his right foot nose it.

“A lamp!” Ian screams at his brother’s foot before jumping right at it. He grabs the lamp, cleans it, and raises the object against the sun. The glare challenges Augustin’s vision. “A genie!” Ian exclaims. The spectre murmurs, “Who found me?” Ian expresses his sincere belief through an exclamation: “I!” Augustin’s eyebrows convey his knowledge that it was he who had done it. The genie, practical as any disembodied being, proposes a solution: “Instead of the usual three wishes to one person, I will concede two wishes, one for each of you. So, my advice is, think wise and-” “I want to have everything that I wish, whenever I wish it”, Ian jumps with his answer into the middle of the genie’s sentence. “You wish it. You get it”, the genie concedes. The spectre and the brother now look at Augustin. He calmly says, “I want to have the right wishes”, which the spectre concedes before disappearing. Augustin looks around only to find no sign of Ian. He has no desire to know where his brother went.

The sweetest fig travels in the hand of a charming woman through the warm waters of a pool to find Ian’s mouth. Years pass as fast as the breath between sentences when one instantly gets everything one wishes. Or so Augustin supposes before he, after five years without any contact, enters his brother’s palace. The once mirror-image twins now look like complementary opposites, the bouba and the kiki. Ian, enjoying the lack of sufficient reason characteristic of every excess, has barely moved ever since. “May I offer you something to drink, brother?”, he asks in a fluffy voice. “Needless to say, you can have whatever you want.” “Water is fine”, Augustin answers. His perfectly exercised looks and wise desire discharge a feeling in Ian with which he had long lost touch: envy. He reacts accordingly, “A wise choice, indeed. I offer you the purest water of the purest source, a taste as immaculate as a mother’s kiss”. “Good choice”, Augustin agrees. “Well, what can I say,” Ian chuckles, “no one in the history of humanity has had their aesthetic pleasures as trained as mine.”

A cup of crystalline water materializes as Ian raises his hand. Looking through it, he watches the sharp silhouette of his brother become much wider. Ian confesses to himself, “It is a trap!” Augustin does not move to grab the water. “Forget it”, Ian continues, “you would never understand how hard it is to refrain when you always have what you wish.” Some minutes putrefy until Augustin, in a suffocated voice that seems to rebel against his appearance, reveals the only thing he ever envied among everything his brother possessed and enjoyed: “But, at least, you could have changed your wishes.”

[This fable started as an idea to convey the importance of getting things right before getting things, which suits the EA approach. As it developed, however, it also came to contemplate a potential problem with only getting what is right, inspired by Bostrom’s “all-work-and-no-fun” dystopia. As such, the characters came to represent either Narrow AIs that allow us to get whatever we want, or a General AI that determines what we should want in our own best interests.]
