
JackM

4121 karma

Bio

Feel free to message me on here.

Comments (709)

> I think animals very likely don't have that kind of experience

Why?

Can you expand on why you don't think most animals are moral patients?

I have downvoted the LLM answers. I don't like your approach of simply posting long conversations with LLMs on a forum, for a couple of reasons. Firstly, your prompts are such that the LLM gives very broad answers that don't go very deep into specific points and often don't engage with the specific arguments people have put forward. Secondly, your prompts are worded in a leading, biased way.

Here is an LLM opining on this very question (I know this is hypocritical, but I thought it would be an amusing and potentially effective way to illustrate the point). Note the conclusion: "leveraging the LLM as a tool, not as a crutch".

Me: In an online debate someone is posting their conversations with a large language model to articulate key points. Is this a helpful approach?

ChatGPT: Using a large language model (LLM) to articulate key points in an online debate can be helpful in several ways, but it also has limitations. Here's a breakdown of both the advantages and potential drawbacks:

Helpful Aspects:

  1. Clarification and Articulation: An LLM can help users articulate complex ideas more clearly. If someone is struggling to express a nuanced point, an LLM can assist in providing structured arguments or phrasing that makes their case more understandable.
  2. Information Gathering: LLMs can provide quick access to relevant information, sources, and arguments, which can be useful in a debate. The user can get an overview of a topic or even a counterargument to refine their position.
  3. Balanced Perspectives: LLMs are often trained to offer balanced or well-rounded views on contentious topics. This can help individuals see both sides of an argument, leading to a more informed debate.
  4. Speed and Efficiency: Instead of spending time crafting long responses, individuals can use an LLM to generate content quickly. This can help sustain momentum in fast-paced online debates.

Drawbacks:

  1. Over-reliance on AI: There is a risk that someone using LLM responses might fail to engage deeply with the topic themselves, which can limit genuine intellectual exchange. If they're simply copying and pasting, the discussion can lose the personal insight or emotional nuance that human debates often require.
  2. Lack of Accountability: Since LLMs generate text based on patterns in data, they might present information that is not entirely accurate, up-to-date, or contextually appropriate. If someone relies heavily on these responses, they may inadvertently spread misinformation or make arguments that aren't fully grounded in fact.
  3. Detachment from Personal Viewpoint: Debates are often about more than just the articulation of key points—they're about personal beliefs, experiences, and the nuance of human emotion. Using an LLM could distance a participant from their personal views, making the discussion feel less genuine or relatable.
  4. Ethical Concerns: Not everyone may agree with using an AI-generated argument in a debate, particularly if the other side feels that they are debating a machine rather than a person. This could raise issues about transparency and fairness in discourse.

Overall, using LLMs can be helpful if integrated thoughtfully into personal arguments, but relying on them exclusively may weaken the authenticity and intellectual engagement of a debate. The key is balance: leveraging the LLM as a tool, not as a crutch.

OK, thanks for your perspective, although it doesn't seem convincing to me. I might be more convinced by an argument that inequality or poverty in rich countries results in poor decision-making in those same rich countries.

> If extinction and non-extinction are "attractor states", which from what I gather means states that are expected to last an extremely long time, what exactly isn't an attractor state?

Any state that isn't very persistent. For example, an Israel-Gaza ceasefire: we could achieve one, but history tells us it's unlikely to last very long. The fact that it is unlikely to last makes it less desirable to work towards than if we were confident it would last a long time.

The extinction vs non-extinction example is the classic attractor-state example, but not the only one. Another one people talk about is stable totalitarianism. Suppose either China or the US wins the race to superintelligence. Whichever country wins essentially controls the world for a very long time, given how powerful superintelligence would be. So we have two different attractor states: one where China wins and has long-term control, and one where the US wins and has long-term control. Longtermist EAs generally think the state where the US wins is the much better one, since the US is a liberal democracy whereas China is an authoritarian state. So if we just manage to ensure the US wins, we would experience the better state for a very long time, which seems very high value.

There are ways to counter this. You can argue the states aren't actually that persistent, e.g. you don't think superintelligence is that powerful, or even realistic in the first place. Or you can argue one state isn't clearly better than the other. Or you can argue that there's not much we can do to achieve one state over the other. You touch on this last point when you say that longtermist interventions may be subject to washing out themselves, but it's important to note that longtermist interventions often aim to achieve short-term outcomes that persist into the long term, as opposed to long-term outcomes (I explain this better here).

Saving a life through bed nets just doesn't seem to me to put the world in a better attractor state, which leaves the benefit vulnerable to washing out. Medical research doesn't either.

I'm skeptical of this link between eradicating poverty and reducing AI risk. Generally, richer countries' governments are not very concerned about extreme poverty, and to the extent that they are, it is the remit of agencies like USAID that have little if any link to AI development. If we have an AI catastrophe, it will probably be the fault of a leading AI lab like OpenAI and/or the relevant regulators or legislators not doing their jobs well enough. I just don't see why these actors would do any better simply because there is no extreme poverty halfway across the world; as I say, global poverty is way down their priority list, if it is on it at all.

Sorry, a convo with an LLM isn't likely to convince me of anything. For starters, the response on hedonism mainly consists of assertions that some philosophers hold views opposed to hedonism. I knew that already...

> Is not a life which has a few moments of glory, perhaps leaves some lasting creative achievement, but has a sum of negative hedonistic experiences, a life worth living?

I would personally say no unless the moments of glory help others sufficiently to offset the negative experiences of the life in question.

In other words, I am a hedonist and I suspect a lot of others in this thread are too.

This moral theory just seems too ad hoc and convoluted to me, and it ultimately leads to conclusions I find abhorrent, i.e. that because animals can't speak up for themselves in a way that is clearly intelligible to humans, we are at liberty to inflict arbitrary amounts of suffering on them.

I personally find a utilitarian ethic much more intuitive and palatable, but I'm not going to get into the weeds trying to convince you to change your underlying ethic. 
