Hello! These are great questions, and I'll do my best to explain this from a technical perspective. I'm not the best at explaining complex things, so please ask if you want me to clarify anything.
I've noticed there's often a big gap between how AI feels to use and how it actually works. When you interact with AI through chat, it feels very much like talking to another person, and that's intentional: these systems were designed to communicate naturally. But under the hood, from a programmer's perspective, AI works quite differently from how our minds work.
The key thing is that AI systems don't experience reward and punishment the way humans, animals, insects, or even plants do. Some training methods do use numerical "reward" signals, but those are just numbers steering an optimization process; nothing in the system actually feels them. When programmers created these AI systems, they didn't build a "brain" that experiences rewards or punishments. Instead, AI is more like an incredibly advanced autocomplete that is very, VERY good at predicting what words should come next in a sentence.
Think of it this way: AI has been trained on huge amounts of text, basically the entire internet, books, newspapers, articles, you name it. When you ask it something, it generates the most statistically likely response based on patterns learned from all that text. It's not actually "thinking" about the answer or feeling any particular way about it.
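If you're curious what "predicting the next word" looks like in its most stripped-down form, here's a deliberately tiny Python sketch (my own toy illustration, not how any real model works; real systems use neural networks with billions of parameters instead of raw word counts, but the underlying idea of "continue with a statistically likely word" is the same):

```python
# Toy next-word predictor: count which word follows which in some training
# text, then "predict" by picking the most frequent follower. This is a vast
# simplification of what real language models do, but the spirit is similar.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat . the dog sat on the rug ."

follows = defaultdict(Counter)          # word -> counts of words seen after it
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` (first-seen wins ties)."""
    if word not in follows:
        return None                     # never saw this word during "training"
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> "on" (it followed "sat" twice in the text)
```

Nothing in there feels anything about "cat" or "sat"; it's just counting and looking up numbers. Scale that idea up enormously and you get something much closer to a modern chatbot.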
Like, imagine a dictionary. When you look up the word "puppy", the dictionary page itself doesn't feel happy or excited about teaching you that word. When you look up something sad, the page doesn't feel sad. It's a piece of paper, a tool. It's just showing you information. AI works similarly, but it's so much better at showing you information and presenting it in the form of a natural, human-like conversation.
AI systems can't experience happiness or suffering because, at their core, they are electrical circuits processing data. When you flip a light switch, the switch doesn't feel happy about turning on the light, right? It's just completing an electrical connection. AI works the same way, except instead of one switch with two possible states, you have billions of tiny switches (transistors) producing an astronomically large number of possible outputs.
Some people argue that AI might be sentient. While I respect that this is still being debated, from a technical standpoint current AI systems aren't complex enough for that kind of experience. They don't have physical senses to feel pain or pleasure the way humans do. We understand pain and happiness because they come from our physical bodies reacting to different situations. Computer circuits are just ways to transport information and data. They do not react to the information that is being transported.
That said, maybe someday we'll develop truly sentient machines, or maybe we'll have to rethink what we mean by sentience if technology and machines evolve in ways we can't yet understand or predict.
Yes, small steps, small steps.
It's a shame that phage therapy is still not well known, and that makes it harder to get support.
Have you already thought about how to approach the electricity problems? I suppose a possible solution could be something like solar panels or wind turbines with batteries, although the initial cost can be quite high.
Also, I’d definitely look into as many funding avenues as possible.
For crowdfunding, you could look into Experiment (which is specifically for scientific research), as well as project-based platforms like Kickstarter and Indiegogo.
For general crowdfunding, you can use GoFundMe, and even Facebook (I think it has its own fundraising platform). You can set small but specific and concrete goals (because sometimes big, vague goals can put people off), such as buying certain equipment, fixing the electricity problems, and so on.
Ideally, in time, these smaller steps will amount to a well-running phage therapy center. :)
Wishing you resilience and success, and I’m really rooting for you!
First of all, congratulations on your initiative and for pushing through despite the challenges you've faced! It's frustrating that you have so little systemic support, but it honestly makes me happy to see people still putting in the effort to do meaningful work, even when it really shouldn’t be this hard.
As for your questions, I wish I had a clear answer, but I’ve seen the same issues in Eastern Europe too, and no one seems to have figured it out yet. There is nothing more discouraging than wanting to do something, only to hit wall after wall just trying to get started. This causes people to quickly give up on their ambitions, which is a shame.
Very interesting article. I knew about cluster headaches from my research into migraines (which I suffer from, though luckily a lot less nowadays than in the past).
I would never have thought of this condition as a possible EA cause area, but now I'm eager to read more about it and see how I can support the efforts to improve its treatment.
Thank you for the great read!
I do see where you're coming from in terms of focusing on effectiveness, especially the idea that acting without understanding consciousness might lead to wasted effort. That makes sense from a resource-allocation standpoint.
One thing that stands out in this framing is the separation between humans and animals. We humans usually talk about animals from an outside perspective, and in most contexts that works fine: us, the humans, and them, the animals. But if we want to bring biology into this, specifically the biology of consciousness, that separation is scientifically incorrect, because biologically we are animals too.
Framing the question as “do animals have consciousness?” as if they’re a completely different category overlooks the fact that consciousness likely exists on a continuum. Humans didn’t just suddenly become conscious — it likely developed gradually across species. So instead of treating animal consciousness as an all-or-nothing question, it may make more sense to think in terms of degrees and types of experience, with humans being part of that same spectrum.
Now this is an exciting topic, and I'm glad you've decided to share this with the EA forum.
I really agree with the core idea of living "like you only have 10 years left", which to me speaks to living with intention and a sort of "aware urgency" (being aware of the limited time you have, and of the general narrowing of choices as time goes by) rather than just going with the flow. I honestly think more people should adopt this way of living. It's a good reminder to be intentional and to stop wasting time.
But I do have to disagree with some points that, in my opinion, do the argument more harm than good.
The idea that accelerating your personal speed somehow translates to better outcomes is a rather bold assumption, because speed (or even optimization) isn't the same as impact. There's no real argument for why consuming more inputs or rejecting anything “slow” leads to better thinking, better judgment, or better decisions. In fact, the symptoms you describe in the article are exactly the things that degrade decision quality.
The framing of the “fast world vs. slow world” creates a false binary. Simplifying things for the sake of an argument can be fine, but not when the rest of your ideas rest on that simplification. Also, from personal experience, any serious attempt to engage with complex problems requires not just urgency but stability: you need feedback loops, error correction, reflection, and the ability to course-correct at any point based on concrete information, because these problems usually don't have a one-and-done solution. I think speed-running through these kinds of situations brings along "tech debt" (or its mental equivalent).
I also think what you’re describing isn’t really speed, it’s just some degree of lack of prioritization. Because it describes reacting to urgency by cramming in more input, not by deciding what is actually a priority.
But I’m definitely with you on the need to treat time seriously.