SiobhanBall


I agree with you. I think in EA this is especially the case because much of the community-building work is focused on universities/students, and because of the titling issue someone else mentioned. I don't think someone fresh out of uni should be head of anything, wah. But the EA movement is young and was started by young people, so it'll take a while for career-long progression funnels to develop organically. 

Hi Deena, first of all, congratulations on your new arrival! Fellow EA mum here.

This is a cool business I was previously unaware of, so thanks for posting.

A key question that came to mind when reading your post and site was: what’s stopping clients from going straight to EASE/your partners? I see that you offer a matchmaking service, but for clients who are as unfamiliar with you as they are with your partners, the level of trust is the same either way.

Also, how do you untangle the overlapping roles? For example, some of your individual partners now work as employees for some of your organisation partners, offering similar services; could there be conflicts of interest there?

Ok but will you commit to funding Ubers home for all your Conference guests? #Stewardship 

I went even further. I added adjacent three times and ended up back where I started. 

I'm hoping for a SPIDER EDITION next year. P.S. Love your work!!

Don't do better. Is that better?

No I can't see it. Do better 

I thought I had gone off the deep end. I mean, I have but... even more...

I agree with these two points raised by others:

"we already can't agree as humans on what is moral"

"Why would they build something that could disobey them and potentially betray them for some greater good that they might not agree with?"

I’m mindful of the risk of confusion: as one commenter mentioned, MA could be read as synonymous with social alignment. I think a different term is needed. I personally liked your use of the word ‘sentinel’. Sentinel —> sentience. Easy to remember what it means in this context: protecting all sentient life (through judicious development of AI). ‘Moral’ is too broad in my view; there are fields of moral consideration that have little to do with non-human sentient life/animals. So, again, I would change the name of the movement to more accurately and succinctly fit what it’s about. Not sure how far along you are with the MA terminology, though!

You’ve said:

"If humans agree they want an AI that cares about everyone who feels, or at least that is what we are striving for, then classical alignment is aligned with a sentient centric AI."

In a world with much more abundance and less scarcity, and with fewer conflicts of interest between humans and non-humans, I suspect this view would be very popular, and I think it is already popular to an extent.

I fear it is not yet popular enough to work on the basis that we can skip humanity’s recognition of animal sentience and go straight to developing AI with that in mind. Unfortunately, the vast majority of humans still don’t rate animal sentience as a good enough reason to stop killing animals en masse, so it’s unlikely that they’re going to care about it when developing AI. I agree with your second point: AI will probably usher in an era where morals come easier because of abundance. But that’s going to happen after AGI, not before. To the extent that it’s possible for non-human animals to be considered now, at this stage of AI development, I think AI for Animals is already making waves there.

So my key question is: what does MA seek to achieve that isn’t already the focal point of AI for Animals? If I’ve understood correctly, you want MA to be a broader umbrella term for work that AI for Animals contributes to.

What I don’t understand is, what else is under that umbrella? 

Of all the possible directions, I think your suggestion of creating an ethical pledge is by far the strongest. That’s something tangible that we can get working on right away. 

TLDR: MA seems to be about developing AI with the interests of animals in mind. I have a hard time comprehending what else there is to it (I'm a bit thick though, so if I'm missing the point, please say!). If it is about animals, then I don’t think we need to obscure that behind broader notions of morality; we can be on-the-nose and say ‘we care about animals. We want everyone to stop harming them. We want AI to avoid harming them, and to be developed with a view to creating conditions whereby nobody is harming them anymore. Sign our pledge today!’ 
