
Tristan D

68 karma · Joined · Working (0-5 years) · Seeking work · Australia

Comments (24)

Yes, I agree it's bad to support a race, but it's not as simple as that.

Is OpenAI's move to a for-profit structure, in order to attract more investment and talent, a good or bad thing for AI safety?

On the one hand, people want American companies to win the AGI race, and this could contribute to that. On the other hand, OpenAI would then be more tied to making a profit, which could conflict with AI safety goals.

It seems to me that the limiting step here is the ability to act like an agent. If we already have AI that can reason and answer questions at a PhD level, why would we need reasoning and question answering to be any better?

The point is, there are 8.7 million species alive today, so there is a possibility that a significant number of them play important, high-impact roles.

I have the opposite intuition for biodiversity. People have been studying ecosystem services for decades, and higher biodiversity is associated with increased ecosystem services such as clean water, air purification, and waste management. Higher biodiversity is also associated with reduced transmission of infectious diseases, by creating more complex ecosystems that limit pathogen spread. Then there are the actual and potential discoveries of medicinal compounds, and the links between biodiversity and mental health. These are high-level examples of the benefits. The linked article illustrates the potential for impact by considering two effects, from bats and vultures. Multiply that effect by 1000+ other species, include all the other impacts mentioned above, and I can see how this could be high impact.

There are a variety of views on the potential moral status of AI/robots/machines into the future.

From a quick search, it seems there are arguments for moral agency if an AI's functionality is equivalent to a human's, or when/if AIs become capable of moral reasoning and decision-making. Others argue that consciousness is essential for moral agency, and that the current AI paradigm is insufficient to generate consciousness.

I was also interested in following this up. As the source of this claim, he cites another article he has written, 'Is it time for robot rights? Moral status in artificial entities' (https://link.springer.com/content/pdf/10.1007/s10676-021-09596-w.pdf).

This is fantastic! 

Do you know if anything like this exists for other cause areas, or the EA world more broadly? 

I have been compiling and exploring the resources available for people interested in EA and different cause areas. There are a lot of organisations and opportunities to get career advice, undertake courses, or get involved in projects, but it is all scattered, and there is no central repository or guide for navigating the EA world that I know of.

We are talking about making decisions whose outcome is one of the best things we can do for the far future.

An option can be the best thing you can do because it averts a terrible outcome, as opposed to achieving the best possible outcome.

This is probably a semantic disagreement, but averting a terrible outcome could be viewed as one of the best things we can do for the far future. The part I was disagreeing with was when you said, "I'm just saying one attractor state is better than the other in expectation, not that one of them is so great." This gives the impression that longtermism is satisfied with prioritising one option over another, regardless of the context of other options which, if considered, would produce outcomes that are "near-best overall". And so it's a somewhat strange claim that one of the best things you could do for the far future is in actuality "not so great".

I don't understand some of what you're saying including on ambiguity.

My point could ultimately be summarised by asking: how do you know that freedom (or any other value) will even make sense in the far future, let alone be valued? You don't. You're just assuming it makes sense and will be valued, because it makes sense and is valued now. While that may be sufficient for an argument about the near future, I think it's a very weak argument for its relevance to the far future.

At its heart, the "inability to predict" argument rests on the sense that the far future is likely to be radically different, and that you are therefore claiming knowledge of what is 'good' in this radically different future.

Could I be wrong? Sure, but we are doing things based on expectation.

I feel like "expectation" is doing far too much work in these arguments. It's not convincing to simply claim that something is likely or expected; that just begs the question: why is it likely or expected?

Nevertheless, I think the focus on non-existential-risk examples, like the US having dominance over China, is a red herring for defending longtermism. I think the strongest claims are those for taking action to prevent existential risk. But even there, the actions are still subject to the same criticisms regarding the inability to predict how they will actually positively influence the far future.

For example, take reducing existential risk by developing some sort of asteroid defense system. While in the short term an asteroid defense system might seem to contribute straightforwardly to the goal of reducing existential risk, it's unclear how asteroid defense systems or other mitigation policies might interact with other technologies or societal developments in the far future. For instance, advanced asteroid deflection technologies could have dual-use potential (like space weaponization) that creates new risks or unforeseen consequences. Thus, while reducing the risk associated with asteroid impacts has immediate positive effects, the net effect on the far future is more ambiguous.

There is also an accounting issue that distorts estimates of the impact of particular actions on the far future. Calculating the expected value of minimising the existential risk from an asteroid impact, for example, doesn't take into account changes in expected value over time. For a simple example: as soon as humans start living comfortably beyond Earth as well as on it (for example on Mars), the existential risk from an asteroid impact declines dramatically, and it declines further as we extend out through the solar system and beyond. Yet the expected value is calculated over a time horizon on which the value of this action, reducing risk from asteroid impact, endures for the rest of time, when in reality the value of the action, as originally calculated, will probably endure for less than 50 years.
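To make the accounting point concrete, here is a toy sketch of the two calculations. Every number is invented purely for illustration (they are not real risk or value estimates), and the 50-year cutoff is just a hypothetical point at which off-Earth settlement makes the asteroid defense system largely redundant:

```python
# Toy illustration only: all numbers below are made up for the sake of the argument.
annual_risk_reduction = 1e-8   # assumed annual extinction probability averted by the system
value_per_year = 1e12          # assumed value of one extra year of civilisation's survival

def expected_value(years_of_relevance: int) -> float:
    """Sum the averted-risk value over the years the intervention actually matters (no discounting)."""
    return annual_risk_reduction * value_per_year * years_of_relevance

ev_perpetual = expected_value(10_000)  # horizon implied when the benefit is assumed to endure indefinitely
ev_realistic = expected_value(50)      # horizon if off-Earth settlement makes the system redundant after ~50 years

print(ev_perpetual / ev_realistic)     # -> 200.0: the open-ended horizon inflates the estimate ~200-fold
```

The absolute numbers are meaningless; the point is only that the ratio between the two estimates is driven entirely by the assumed horizon of relevance, which is exactly the quantity the standard calculation leaves out.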

How did the book club on Deep Utopia go? Is there an online discussion of the book somewhere?
