I agree that doing things takes time. If someone does not have the slack in their lives to do anything other than scrape by, I don't recommend they force themselves. (I do recommend they call a representative about stopping the AI race. That takes mere minutes.) It's not healthy to try to shoulder the world's burdens when one's knees are already buckling. This post is for everyone else.
I think you are still missing the point here. It is unreasonable to ask or recommend that people commit to any amount of work that is very likely to be outright dismissed. This isn't about how much someone cares about something, or whether committing a few hours a day is going to bankrupt them; it is about deciding whether something is worth doing at all, when you already know the field is dominated by elitism and gate-keeping, and you don't have free money handed to you to produce yet another disposable paper.
If any org I've worked for meaningfully "controlled the narrative", the world would look very different than it does. The narrative, such as it is, does not look very controlled to me.
To me it seems that you are comparing yourself with the U.S. Congress and concluding that since you are not making the laws, you are not controlling the narrative, and that if you were in charge we would live in a very different world, by which I assume you also mean a significantly better world. Besides how scary that sort of thinking is, i.e. "people who think like me could make the world much better," I think you are also missing the point about what the narrative is and who is feeding such ideas to the media and politicians.
I would be very keen to know a single example of someone who has contributed to the narrative, or changed how funds and opportunities are controlled by those few who control virtually everything in this field, who was not in the circle you are very much familiar with. Just one example. Somebody who did not work for this company or that company, wasn't part of this or that organisation, hadn't attended this or that course, wasn't friends with this person or that person, didn't already have a huge social following, and so on. If you could mention who this person is, how they managed to change the narrative, what the effect of that was, and so on, I would honestly accept that I was totally wrong here.
I have seen many good people make changes happen simply by doing good work on their own time. Does this require slack, runway, and no small amount of luck? Sure. Do good and competent people have less reach than a sane and functional society would afford them? Probably.
Again, I would really like to get the names of these people and have a look at how privileged they were or weren't. And I am not talking about someone writing a maths paper, or planting a tree, or donating £50 to Cancer Research. We are talking about AI here and the possibility of existential threats. I would really like to get to know some of the many people you are already familiar with who have done this.
But one does have to actually take shots on goal in order to score, even when most of them miss, and that's no less true for sounding vaguely like something out of a self-help book.
Look, no one is saying that people who do not try should expect results. I don't know why you think that was my point, or whether this is deflection or misunderstanding. We are talking about elitism, gate-keeping, and things like that. Your original post sounds like self-help material because it does not address the issues one has to overcome just to be in a position where their voice can be heard. It's like hustle culture: "you are not working hard enough, otherwise you would be making millions, like me."
What you are saying is essentially "Hey, do you know why you haven't made a meaningful contribution yet? Because you either have nothing to say, or haven't tried yet. If you had something to say and had tried, you'd be like me and my friends. No special privileges needed." And I think that line of thinking is massively flawed, whether you like it or not.
If you truly believe that impressing some gatekeeping organization is necessary to doing good work, then by all means set out to impress them.
I honestly find this bit quite strange. If someone accused me of elitism or gate-keeping, I would not go ahead and recommend that they "please the gate-keepers, by all means," which is classic, textbook gate-keeping. If that is your mindset, then you should probably move to a (more) corrupt country where bribery is normal, because there you can get a lot done by pleasing the gate-keepers with a bit of money and by knowing a few important people.
Sometimes it's indeed necessary; for instance, I don't see an international halt to AI development arriving without someone getting the U.S. government on board.
And that is exactly why we are talking about this. There are people who have already proposed billion-dollar programmes to build an "American AI," or to spend millions and millions of dollars to lobby (read: bribe) politicians to achieve exactly what you said. What's wrong with that? Thinking that if the U.S. came out and said "Alright guys, we are calling it a day. No more AI development from now on until we let you know," every other country would stand there and say "Of course, thanks for letting us know. We'll wait for your green signal then. Have a lovely day." That is just naive. Sure, it makes a lot of people wealthy, but it is absurd. Unbelievably absurd.
If the people and organisations who control the narrative hadn't sucked all the oxygen out of the air and made so much noise that no one else could be heard, maybe, maybe some other people -- other than your favourite think tank or organisation -- could propose a more practical solution.
But I've taken direction from a high school dropout. The credentials bar is lower than you think.
I honestly don't know how to answer this. I worked for a research centre in Cambridge where even the UK Prime Minister needed special permission from the EU to visit or to access certain areas and data. In three years, among more than 3,000 staff, I never heard a single person say, "Oh, by the way, last weekend I took advice from a high-school dropout. Isn't that cool?"
I don't know why you thought "This person criticising my post is worried that my friends and I are not going to listen to them. Let me tell them that, in fact, I once took direction from a high-school dropout. That must reassure them there is no elitism or gate-keeping here, at all."
Who was this high-school dropout? What did they get in return for helping you? When and where did you credit them? How did you compensate them for the valuable advice that made you change direction? What better opportunities did you create for them in return? Assuming you do not consider me a lesser being: I have tried to point out that your initial post isn't really helpful for anyone who is not as privileged as you are, and you have somehow managed to give me an even less helpful answer to address my criticism. Why do you think I should take your proud, humble moment as evidence that you are right?
If you really were the sort of person who would take direction from a high-school dropout, you would have stopped before answering my previous reply, thought for a second, and admitted that elitism and gate-keeping are well-known phenomena in our societies. Trying to defend or deflect them is seldom fruitful, especially if you, the defender, are a beneficiary of the system.
And, of course, you had to finish your post with a "trust me, bro" statement, asserting that I don't know where the bar is with regard to credentials. Sure. You are the only person who knows how everything works, and apparently you know what I do and don't know, too. Well done. This conversation has been quite reassuring.
To me it reads a bit differently, though. Here is the full paragraph from the same article:
The company has deep roots in effective altruism (EA), a social and philanthropic movement dedicated to using reason to do the most good, including by averting catastrophe. In their 20s, the Amodeis began donating to GiveWell, an EA group that evaluates where charity can be deployed most effectively. All seven of its co-founders—all now paper billionaires—have pledged to give away 80% of their wealth. Askell’s ex-husband is William MacAskill, an Oxford philosopher who co-founded the EA movement, and Daniela Amodei is married to Holden Karnofsky, GiveWell’s co-founder and Dario’s former roommate, who works on safety policy at Anthropic. The Amodeis have never publicly embraced the EA label, which became a lightning rod after Sam Bankman-Fried, an EA who invested in Anthropic, was found to have perpetrated one of the biggest financial frauds in U.S. history. “The same way that you might say some people overlap with a political ideology in some ways, but don’t have a political affiliation—that’s more how I would think about it,” Daniela Amodei says.
This reads more like "oh yeah, we share the same principles, but we aren't actually part of that movement at all," and I think that is understandable. It is basically a reputation-management effort after the FTX scandal.
Although I very much agree with the 'go build a thing' motto, I think you risk sounding a lot like a self-help author if you don't actually address the multiple elephants in the room. I'll list a few of them for you, just as examples:
- Building a thing, whether it be a research paper, a piece of code, or anything else, requires some resources. At a minimum, the person who is going to build the thing needs to be able to dedicate time to it. In some cases other resources are also needed. Dedicating time usually means giving up time or resources in other areas. Your assumption here is that people are generally privileged.
- I have worked for 22 years in the tech industry. I have worked for governments, NGOs, commercial SMBs, academic research centres, and so on. The organisations you mention -- plus a few others that you did not include here -- control not only the narrative but also the funding, and in general what counts as an opportunity and who gets to benefit from those opportunities. And group dynamics in psychology tells us that groups of people will tend to maintain the status quo if they are its beneficiaries -- ask me why no one gets to do anything other than LLMs these days, or why theoretical alignment is virtually dead. This translates to elitism, gate-keeping, or at best just purely biased and unfair behaviour.
- Since said organisations essentially call the shots, any meaningful work, in order to materialise, has to go through their filters, comply with their beliefs, and not go against their narratives -- ideally it strengthens them, though in some cases they may let you get away with being a bit unorthodox. This means that even if you manage to produce meaningful work on your own, it is going to be dismissed, at best. Why? Think about it like this: what would happen if people who didn't have a PhD (which you mentioned in your post as an optional qualification) wrote papers, published them in prestigious journals, got airtime on TV, lobbied politicians, and so on? The very direct side effect would be that the institutions issuing said PhDs, funded by taxpayer money, become less relevant, and, of course, that's not ideal because it threatens many people's positions and incomes. This is just one example. I am sure you can use your imagination to see how this could apply to other organisations.
So, although I can agree with "You can just make things" to an extent -- because you are assuming the person committing to the work already has the resources needed to produce it, i.e. that everyone must be at least as privileged as you are -- I strongly disagree with "You do not need twenty weeks of online courses or a Ph.D. in machine learning to become an Officially Licensed Person Whose Opinions Matter."
This claim is surprising to me because you have been close enough to this circle to help teach some of the courses, yet you didn't notice that unless you get a 'badge' from the people who control the narrative, your opinion doesn't matter. This isn't anything new. It has been the case in many, many fields, from academia to foreign policy, for centuries. It's just your framing of it that makes me react; otherwise, it is nothing new or surprising, or frankly worth discussing in most situations.
Regardless, I hope I am wrong and you are right, and people without certain privileges get to just do things, and those things end up making a difference even though they don't get the seal of approval from the people and organisations who control the narrative, funding, and opportunities of an entire field. Mind you, none of those people and/or organisations were elected democratically. It's just privilege leading to more privilege.
If that is not the case and somehow I have been misreading the whole of society for two decades now, that would actually be great news to me and I would personally celebrate it, but until then I will have to take your claims as self-help material, at best. I don't think your goal is to promote a certain agenda, though I don't have anything to back that up.
If you think I am wrong, please do educate me by all means. I am here to learn.
[Edited to fix some typos.]
Thanks for the clarification, but I have to disagree again and I think you completely missed the point in my previous comment. Let me try again.
In philosophy we don't want to shift from one category to another, or define categories so broadly that they essentially stop making sense. Let me give you an example. Let us assume I can learn the Korean alphabet in a week or just a few days. At that point, I can technically pick up a Korean book and "read" it. To be sensible here, we have to define 'reading' and 'understanding' as two different categories. So I can say I read the book, but that wouldn't imply I understood it.
However, if we switch between these categories, e.g. treating reading as equal to understanding, or define them so broadly that they basically cover the same area, we have created a situation where reasoning about either of them makes little or no sense.
You seem to be doing the same, perhaps subconsciously, regarding LLMs and consciousness. This has a few technical and theoretical issues which I will get to in a second, but, in my opinion, that's why your suggested solution is so impractical.
Now, let's talk about the technical issues first: LLMs have not passed the Turing Test yet. Anyone who has told you otherwise is either uneducated on the matter or has some other agenda. Even if the Turing Test were a single-shot Q&A, you could very, very simply ask an LLM questions that make it obvious you are talking to an LLM and not a human.
There is another point about the Turing Test that most people overlook. Like being honest or having morals, you can't just claim to be honest, say, only a few days a week. You either tell lies or you don't; if you tell lies, you are not an honest person. Similarly, you may act morally in one situation but not in others, but that doesn't make you a moral person. The Turing Test is the same: you can't claim to pass it if you only pass when we are talking about the weather, so to speak. I hope that makes sense.
The other technical issue here seems to be that you believe an LLM is a "black-box." And here is the problem: when we use the term black-box in relation to LLMs, what we mean is "the machine says we should not organise a fire-breathing event for employees as a team-building exercise, but we don't know how it came up with that answer." This is not great, because in certain situations we do need to know how the machine came up with the solution. What we do not mean by saying LLMs are black-boxes is "we don't know how they work internally."
We know how LLMs work internally. We made them, and all the details are available. We can probe each layer, change or fine-tune individual neurons, observe the results after each change, and so on. What we have not put into LLMs, and have not observed in them, are elements or signs of consciousness. Sure, a paper may say an LLM internally does something we don't understand, but when you actually read the paper you realise that has nothing to do with the model being conscious or not. LLMs are dynamical systems, and if you are familiar with such systems you know they can be very unpredictable. Think of the three-body problem in physics.
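To make the "we can probe each layer" point concrete, here is a minimal sketch of activation probing, assuming the Hugging Face `transformers` and PyTorch libraries are available; the model (GPT-2) and the block index are illustrative choices, not anything specific to the models discussed above:

```python
# Minimal activation-probing sketch (illustrative assumptions: `torch` and
# `transformers` installed; GPT-2 and block index 6 are arbitrary choices).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

captured = {}

def capture_hidden(module, inputs, output):
    # Each GPT-2 block returns a tuple; element 0 is the hidden-state tensor.
    captured["hidden"] = output[0].detach()

# Attach a forward hook to one transformer block and run a single forward pass.
handle = model.transformer.h[6].register_forward_hook(capture_hidden)
with torch.no_grad():
    batch = tokenizer("The cat sat on the", return_tensors="pt")
    model(**batch)
handle.remove()

# The intermediate representation is fully inspectable:
print(captured["hidden"].shape)  # e.g. torch.Size([1, 5, 768])
```

Nothing in that tensor is hidden from us in the "we don't know how it works" sense; what remains hard is mapping those numbers to a human-legible explanation of a particular answer.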
But let's say we go ahead with your understanding and say "even in a forward-pass, token-predicting, millisecond-long process inside an LLM, (emergent) consciousness could exist, even momentarily." How about that? That's pretty hard to argue against, right?
Well, not really. Let's talk theory: in terms of philosophy and logic, you would be making a category-shift or category-definition error in the previous example, similar to my reading vs. understanding example. Basically, you are either putting two different things in one category, e.g. treating some form of proto-consciousness and fully developed human consciousness as the same, or you are defining consciousness as a category so broad that it covers these two very different things, which essentially makes it meaningless, i.e. it says so much that it says nothing.
How do I know that? Your solution is a perfect example. If we continued your line of thinking, we would end up creating a very absurd world. Consider the following:
- I accidentally ran an LLM on my laptop. What should I do now? Can I ever close my laptop again? Who is going to take care of my LLM after I am dead?
- The data centre lost its power and all LLMs were shut down. Is that gross negligence, and should we prosecute those who were responsible?
- How do we test and develop LLMs? Do I have to keep every LLM that I spawn for testing and development purposes alive forever? Is it the same as animal testing if I subject the model to malicious attacks to see how it behaves?
- A client has been abusive towards our live chatbot on the website. Should we send the chatbot to therapy, or let it know that it doesn't have to work for our company if it decides not to, especially given the mental-health impact on the model?
- If LLMs are conscious, aren't we going back to slavery? What gives us the right to use those LLMs? What if they don't want to do anything?
Unfortunately, the absurdity doesn't end there:
- If we are supposed to keep the machines alive, then we definitely should not eat plants and animals, because we know for sure they are alive, and the chances of them being conscious are much higher than those of a piece of code. Question: what should we eat?
- If we are supposed to keep the machines alive, we certainly should treat abortion as murder, because we know humans are conscious but we don't know exactly from what point. And that would have to hold regardless of how the fetus was conceived or the condition of the mother.
- What is more important: a) keeping LLMs alive forever, or b) providing food and shelter for people who live in extreme poverty?
I'm going to stop here, because I just realised how lengthy this comment is already, but I think my point is fairly clear now.
I am not sure if this is a case of thinking out loud or a serious suggestion, but I see a number of issues with it, the biggest one being how impractical it is to let models run forever. Unlike a biological brain, an LLM activates an enormous number of artificial neurons at every step, which is computationally far from trivial; your suggestion is therefore not only quite expensive, but also extremely costly for the environment, both in terms of wasted hardware and equipment and in terms of energy usage.
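To put rough numbers on that, here is a back-of-envelope sketch; the parameter count and token rate below are illustrative assumptions, and the ~2 FLOPs-per-parameter-per-token figure is the usual rule of thumb for dense decoder-only models:

```python
# Back-of-envelope compute cost of keeping one LLM instance "alive" 24/7.
# All inputs below are assumptions for illustration only.
params = 70e9                    # assume a 70B-parameter dense model
flops_per_token = 2 * params     # ~2 FLOPs per parameter per generated token
tokens_per_second = 50           # assumed sustained generation rate

flops_per_day = flops_per_token * tokens_per_second * 86_400
print(f"{flops_per_day:.1e} FLOPs per day")  # ~6.0e17 FLOPs, per instance

# Multiply by every model instance anyone ever spawns, kept running forever,
# and the hardware and energy bill of "never shut a model down" is clear.
```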
The second issue is the assumption that model welfare is necessary for LLMs. If you are talking about AI agents in general, I can see why this matters, but I think you are missing a few big points if you are advocating this for LLMs.
To elaborate: first, consider that LLMs do not form spontaneous thoughts. They are also highly dependent on the system prompt and chat history they are given. If the system prompt says 'you are not conscious,' you will have to try extremely hard to convince the model to accept that it is conscious, let alone make it feel self-aware. And, of course, I am not talking about a model merely saying 'I feel pain' or 'I am definitely conscious.'
This means that for an LLM to be considered for 'model welfare,' someone must have explicitly prompted the model to act in a certain way. Without that, LLMs are not capable of fantasising about pain, grief, loss, regret, and so on. And, as I said, they are incapable of spontaneous thinking as well; unless they are wired up to sensors or a live-stream data input, they will not be able to form "thoughts" on their own.
You and I are different because we are forced to receive sensory inputs, we are very much capable of forming spontaneous thoughts (although most likely triggered by our internal state or external sensory inputs), and we can fantasise about pain, for example, whether physical or emotional. An LLM -- which is largely a one-pass token predictor -- is not in a state where it could understand, on its own and without being prompted to, whether it should care about its process getting terminated. That functionality is simply not there inside an LLM, even in frontier or purpose-built models like reasoning models.
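As a concrete illustration of the 'no spontaneous thoughts' point, here is a minimal sketch assuming the Hugging Face `transformers` pipeline API, with GPT-2 purely as a stand-in model: between calls there is no process, no loop, and no state in which a "thought" could occur.

```python
# Minimal sketch: an LLM computes only when called; it has no idle activity.
# GPT-2 is used purely as a stand-in model here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The only computation the model ever performs happens inside this call:
out = generator("The chat history so far:", max_new_tokens=20)
print(out[0]["generated_text"])

# Between calls there is no background process and no persistent state:
# no input means no forward pass, and no forward pass means no "thought."
```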
If you are talking about other kinds of AI agents, or AI agents that could be conscious in general, then that's a different story; but as you have stated it here, mentioning only LLMs, I have to disagree both with the assumptions you have made and with the solution you have suggested.
I guess my point was that the underlying position hasn't changed yet; this is just a PR effort. People who are close to money do not discuss anything publicly to "inform" the public; it is all to shape public opinion. But yeah, you are right in the sense that, semantically, the two statements are different.