
Originally published here: https://guzey.com/ai/planes-vs-birds/

Note: Parts of this essay were written by GPT-3, so it might contain untrue facts.

Introduction

Many of my friends are extremely excited by planes, rockets, and helicopters. They keep showing me videos of planes flying at enormous speeds, rockets taking off from the ground while creating fiery infernos around them, and helicopters hovering in midair, seemingly defying the laws of gravity.

I've been on a plane already, and it was nothing special. It was just a big metal tube with a bunch of people inside. It was loud and it smelled weird and I had to sit in a tiny seat for hours. So what is it that makes planes so special? Is it the fact that they're machines? Is it the fact that they're big? Is it the fact that they cost a lot of money?

Here's the thing: all human-built artificial flight (AF) machines are incredibly specialized and are far away from being able to perform most of the tasks birds -- the only general flight (GF) machines we are aware of -- can perform.

More than 200 years after hot air balloons became operational and more than 100 years after the first planes flew, it's clear that building a GF machine is much harder than anticipated and that we are nowhere close to reaching bird-level abilities.

1. Planes vs eagles

First, take a look at this video of an eagle catching a goat, throwing it off a cliff, and then feasting on it:

I haven't ever seen a plane capable of catching a live animal and deliberately throwing it off a cliff. Not in 1922, not in 2022. Not even a tech demo. Such a feat vastly exceeds the abilities of any planes we have built, however fast they can fly.

2. Planes vs cuckoos

Second, let's watch this video of a cuckoo chick ejecting the eggs of its competitors out of a nest:

You could say that this ability has nothing to do with flight but, again, this misses the forest for the trees. Building a GF machine is not about Goodharting random "flight" benchmarks by flying high and fast; it's about real-world performance on the tasks GF machines created by nature are capable of. And, however impressive planes are, as soon as we try to see how well they perform in the real world, they can't even match a cuckoo chick.

3. Planes vs hummingbirds

Third and final example. Take a look at the hummingbird's amazing ability to maintain stability in the harshest aerial conditions:

Take any plane we have built and it stands no chance of survival if placed in anything even close to these kinds of conditions, while a tiny-yet-mighty hummingbird doesn't break a sweat navigating what is essentially a tornado.

Future of bird jobs: no plane danger

Birds can flap their wings up to three times per second, whereas the fastest human-made aircraft only flaps its wings at 0.3 times per second. Birds can fly for long periods of time, whereas airplanes need to refuel regularly. Birds use orders of magnitude less energy to lift the same amount of mass in the air, compared to planes.

Planes, rockets, and helicopters are (optimistically) decades away from being able to carry out most of the tasks birds are capable of. Therefore, for the foreseeable future, most bird jobs such as carrying messages (pigeons), carrying cargo (pigeons), hunting (hawks), and others, will remain safe from being displaced by human-built AF machines.

Even if planes start to approach birds in some of their abilities, birds will be able to simply move towards performing other jobs. For example, planes can't navigate by themselves. So perhaps they will carry messages in simple conditions or over short distances, while pigeons will move towards specializing in complex message carrying or will learn to supervise plane routing, e.g. by piloting planes or by flying alongside and course-correcting them.

Birds can further make themselves safe from future job displacement by investing in their children's education, ensuring their long-term employability in the face of the rise of AF machines.

Conclusion

At the end of the day, I just don't see how the human-built AF machines we are building right now could fundamentally change the way wars are fought or the way business and travel are conducted, or how they would allow us to do anything even close to true spaceflight (if you want to venture into truly lunatic territory).

After all, if human-built AF machines are unable to match the abilities of a bird toddler, how could they possibly displace most bird jobs?

Comments



I must disagree.  I roasted a large plane for Thanksgiving yesterday and it was incomparable to a bird.  For tips on brining your plane, see here: https://en.wikipedia.org/wiki/US_Airways_Flight_1549
