Suffering should not exist.
Suffering risks have the potential to be far, far worse than the risk of extinction. Negative utilitarians and EFILists may also argue that human extinction and biosphere destruction would be a good thing, or at least morally neutral, since a world with no life would have a complete absence of suffering. Whether to prioritize extinction risk depends on the expected value of the far future: if that expected value is close to zero, it could be argued that improving the quality of the far future, conditional on our survival, matters more than making sure we survive at all.
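To make the expected-value point concrete, here is a toy comparison in Python; every number in it is purely hypothetical and only illustrates the structure of the argument, not an estimate of anything.

```python
# Toy expected-value comparison (all numbers are made up for illustration).
# EV of the future = P(survival) * E[value | survival]

p_survive = 0.5          # hypothetical probability humanity survives
value_if_survive = 0.0   # hypothetical expected value of the far future, near zero

# Intervention A: raise survival probability by 10 percentage points.
ev_a = (p_survive + 0.10) * value_if_survive

# Intervention B: improve the quality of the future conditional on survival.
ev_b = p_survive * (value_if_survive + 1.0)

print(ev_a)  # 0.0 -> extra survival probability buys nothing if E[value|survival] ~ 0
print(ev_b)  # 0.5 -> quality improvements still pay off via the survival branch
```

If the expected value conditional on survival is near zero (or negative), extra survival probability multiplies against roughly nothing, while quality improvements still pay off through the branch where we survive.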
A lot of people will probably dismiss this because it was written by a domestic terrorist, but Ted Kaczynski's book Anti-Tech Revolution: Why and How is worth reading. He explains why he thinks the technological system will destroy itself and why he thinks society cannot be subject to rational control, going into detail on the nature of chaotic systems and self-propagating systems, and he heavily criticizes figures like Ray Kurzweil. Robin Hanson critiqued Kaczynski's collapse theory a few years ago on Overcoming Bias. It's an interesting read if nothing else, and it makes some arguments worth engaging with.
I suspect there's a good chance that populations in Western nations could end up significantly higher than your link predicts. The reason is that we should expect natural selection to favor whatever traits maximize fertility in the modern environment, such as higher religiosity, which will likely lead to fertility rates rebounding over the next several generations. The sorts of people who aren't reproducing in the modern environment are being weeded out of the gene pool, and we are likely undergoing selection pressure for "breeders" with a strong instinctive desire to have as many biological children as possible. Certain religious groups, like the Old Order Amish, Hutterites, and Haredim, are also growing exponentially and will likely be demographically dominant in the future.
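As a rough sketch of how the compounding works, the following toy projection tracks the population share of a small high-fertility subgroup; the starting sizes and growth rates are illustrative assumptions rather than demographic data.

```python
# Toy projection of a high-fertility subgroup's share of a national population.
# All starting sizes and growth rates are illustrative assumptions, not real data.

subgroup_start = 400_000     # assumed starting size of the high-fertility subgroup
rest_start = 330_000_000     # assumed size of the rest of the population

subgroup_growth = 0.035      # ~3.5%/yr, i.e. doubling roughly every 20 years (assumed)
rest_growth = -0.002         # slight decline from sub-replacement fertility (assumed)

for year in range(0, 201, 50):
    subgroup = subgroup_start * (1 + subgroup_growth) ** year
    rest = rest_start * (1 + rest_growth) ** year
    share = subgroup / (subgroup + rest)
    print(f"year {year:3d}: subgroup share = {share:.1%}")
```

Under these assumed rates, the subgroup goes from roughly 0.1% of the population to a majority within about two centuries, which is the sense in which exponential growth from a small base can become demographically dominant.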
For alignment to be solved at all, it would have to be solvable with human-level intelligence. Even though IQ-augmented humans wouldn't be "superintelligent", they would have additional intelligence they could put toward solving alignment. Additionally, it probably takes more intelligence to build an aligned superintelligence than to build an arbitrary one, so without alignment, chances are the first superintelligence to exist will be whichever one is easiest to build.