Tristan D

My understanding of their claim is more something like:

There will never be an AI system that is generally intelligent in the way a human is. Put differently, we can never have synoptic models of the behaviour of human beings, where a synoptic model is one that can be used to engineer the system or to emulate its behaviour.

This is because, in order for deep neural networks to work, they need to be trained on data with the same variance as the data to which the trained algorithm will be applied, i.e. training data which is representative. General human intelligence is a complex system whose behaviour doesn't follow such a distribution, and there are mathematical limits on our ability to predict the behaviour of a system that lacks one. Therefore, whilst we can have gradually more satisfactory simple models of human behaviour (as ChatGPT is for written language), they will never reach the same level as humans.

To put it simply: We can't create AGI by training algorithms on datasets, because human intelligence does not have a representative data set.
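
To make the "representative data" point concrete, here is a minimal toy sketch of distribution shift. This is my own illustration, not anything from Smith's lecture or paper, and the function, interval, and library choices are arbitrary assumptions: a small network fit on a narrow input range does fine on inputs from that range but fails badly on inputs it was never trained on.

```python
# Illustrative only: a model trained on a narrow input distribution
# performs poorly when the deployment inputs are not representative of it.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Training data drawn only from [-2, 2] -- the distribution the model "knows".
X_train = rng.uniform(-2, 2, size=(2000, 1))
y_train = np.sin(X_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# In-distribution test inputs vs. out-of-distribution inputs from [8, 12].
X_in = rng.uniform(-2, 2, size=(500, 1))
X_out = rng.uniform(8, 12, size=(500, 1))

err_in = np.mean((model.predict(X_in) - np.sin(X_in).ravel()) ** 2)
err_out = np.mean((model.predict(X_out) - np.sin(X_out).ravel()) ** 2)

print(f"mean squared error, in-distribution:     {err_in:.4f}")   # small
print(f"mean squared error, out-of-distribution: {err_out:.4f}")  # much larger
```

The analogy being drawn in the argument is that general human intelligence never supplies a "training interval" that is representative in this sense, so the failure mode can't be trained away by collecting more data.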

As such, I think your response "It is possible to create new things without fundamentally understanding how they work internally" misses the mark. The claim is not that "we can't understand how to model complex systems, therefore they can't be adequately modelled". It's more something like "there are fundamental limits to the possibility of adequately emulating complex systems (including humans), regardless of how well we come to understand them".

My personal take is that I'm unsure how important it is to be able to accurately model human intelligence. Perhaps modelling some approximation of human intelligence (in the way that ChatGPT is approximately good enough as a written chat bot) is sufficient to stimulate the creation of something more intelligent than that, and so on, in the same way that ChatGPT can answer PhD-level questions that the majority of humans cannot.

Note: My understanding of Barry's argument is limited to this lecture (https://www.youtube.com/watch?v=GV1Ma2ehpxo) and this article (http://www.hunfi.hu/nyiri/AI/BS_paper.pdf). 

I think I might be missing what's distinctive here. A lot of the traits listed — strong knowledge of the field, engagement with the community, epistemic humility, broad curiosity — seem like general predictors of success in many fields.

Are you pointing at something that’s unusually clustered in EA, or is your claim more about how trainable and highly predictive this combination is within the EA context?

Have you had a chat with the 80k hours team?

Yes I agree it's bad to support a race, but it's not as simple as that.

Is OpenAI's move to a for-profit structure in order to attract more investment/talent a good or bad thing for AI safety?

On the one hand, people want American companies to win the AGI race, and this could contribute to that. On the other hand, OpenAI would then be more tied to making a profit, which could conflict with AI safety goals.

It seems to me that the limiting step here is the ability to act like an agent. If we already have AI that can reason and answer questions at a PhD level, why would we need reasoning and question answering to be any better?

The point is that there are 8.7 million species alive today, so there is a possibility that a significant number of these play important, high-impact roles.

I have the opposite intuition for biodiversity. People have been studying ecosystem services for decades, and higher biodiversity is associated with increased ecosystem services, such as clean water, air purification, and waste management. Higher biodiversity is also associated with reduced transmission of infectious diseases, because more complex ecosystems limit pathogen spread. Then there is the actual and potential discovery of medicinal compounds, and the links between biodiversity and mental health. These are high-level examples of the benefits. The linked article gives the possibility of impact by considering two effects, from bats and vultures. Multiply that effect by 1000+ other species, include all the other impacts previously mentioned, and I can see how this could be high impact.

There are a variety of views on the potential moral status of AI/robots/machines in the future.

With a quick search it seems there are arguments for moral agency if their functionality is equivalent to that of humans, or when/if they become capable of moral reasoning and decision-making. Others argue that consciousness is essential for moral agency and that the current AI paradigm is insufficient to generate consciousness.

I was also interested to follow this up. As the source of this claim, he cites another article he has written, 'Is it time for robot rights? Moral status in artificial entities' (https://link.springer.com/content/pdf/10.1007/s10676-021-09596-w.pdf).
