Tristan D

70 karma · Joined · Working (0-5 years) · Seeking work · Australia

Comments (26)

I think I might be missing what’s distinctive here. A lot of the traits listed — strong knowledge of the field, engagement with the community, epistemic humility, broad curiosity — seem like general predictors of success in many fields.

Are you pointing at something that’s unusually clustered in EA, or is your claim more about how trainable and highly predictive this combination is within the EA context?

Have you had a chat with the 80k hours team?

Yes, I agree it's bad to support a race, but it's not that simple.

Is OpenAI going for profit in order to attract more investment/talent a good or bad thing for AI safety? 

On the one hand, people want American companies to win the AGI race, and this could contribute to that. On the other hand, OpenAI would then be more tied to making profit, which could conflict with AI safety goals.

It seems to me that the limiting step here is the ability to act like an agent. If we already have AI that can reason and answer questions at a PhD level, why would we need reasoning and question answering to be any better?

The point is that there are an estimated 8.7 million species alive today, so there is a possibility that a significant number of them play important, high-impact roles.

I have the opposite intuition for biodiversity. People have been studying ecosystem services for decades, and higher biodiversity is associated with increased ecosystem services, such as clean water, air purification, and waste management. Higher biodiversity is also associated with reduced transmission of infectious diseases, by creating more complex ecosystems that limit pathogen spread. Then we have the actual and potential discovery of medicinal compounds, and the links between biodiversity and mental health. These are high-level examples of the benefits. The linked article suggests the possibility of impact by considering two effects, from bats and vultures. Multiply that effect by 1000+ other species, include all the other impacts previously mentioned, and I can see how this could be high impact.

There are a variety of views on the potential moral status of AI/robots/machines into the future.

From a quick search, it seems there are arguments for moral agency if an AI's functionality is equivalent to a human's, or when/if AIs become capable of moral reasoning and decision-making. Others argue that consciousness is essential for moral agency and that the current AI paradigm is insufficient to generate consciousness.

I was also interested in following this up. As the source of this claim he cites another article he has written, 'Is it time for robot rights? Moral status in artificial entities' (https://link.springer.com/content/pdf/10.1007/s10676-021-09596-w.pdf).

This is fantastic! 

Do you know if anything like this exists for other cause areas, or the EA world more broadly? 

I have been compiling and exploring the resources available for people interested in EA and different cause areas. There are a lot of organisations and opportunities to get career advice, undertake courses, or get involved in projects, but it is all scattered, and there is no central repository or guide for navigating the EA world that I know of.
