Aïda Lahlou

Concert Pianist and Founder @ Oxford Concert Circle
65 karma · Joined · Working (0-5 years)

Comments (21)

IMO, the worst sub-group is the intersection of three groups: people who call themselves 'rationalists', people who hold sexist views, and people looking to be edgy or gain notoriety. This group will often use their 'rationalism' to justify their harmful '-isms' (sexism, racism, etc.) by appealing to 'data' and 'reason', which generates a lot of controversy, which in turn helps them build more of a platform.

In my opinion this is the most dangerous subcategory of sexist people (as opposed to those who are just casually sexist out of convenience, or because they can, but have no further motives). If you dare question their methods or conclusions, they call you 'woke', 'irrational', or 'unscientific' (by contrast, the former category will just accuse you of being uptight, lacking a sense of humour, or making a mountain out of a molehill). These pseudo-rationalists are dangerous because they are not simply being sexist; they are actively constructing a defence of sexism. As a woman, you can't win against them: either you agree with them that women are less smart/capable/intelligent, or you disagree with their 'highly rationalist proof', which they will claim proves their point, since you're 'clearly' not clever or free-thinking enough to appreciate the 'evidence'. This of course earns them more attention, as more and more people want to strongly agree or strongly disagree with them. Online, this behaviour drives comments, likes, and algorithmic traffic towards their profiles, which serves their notoriety goals.

I've met my fair share of these over the years.

I also think that AGI is altogether still quite unlikely in the next decade, but I don't need AGI happening in the next decade to be worried about AI's current ability to destabilise our world in a meaningful and potentially catastrophic way. 

My main concern is that the pre-AI world was, IMO, not as prepared as it could have been even on "traditional risks": cyber attacks, geopolitical instability, military escalation, democratic erosion, and so on. I see AI as a complicating factor and a multiplier of those risks, and my cautious nature makes me think we should be moving even faster on disaster preparedness in general.

Even without AGI in the picture, I think we are underprepared to deal with the risks of misuse of current AI capabilities, which really just make it cheaper and easier to carry out things like cyber attacks and disinformation campaigns at scale (among other things, like building biological weapons). I'm also very concerned about models being used by militaries to launch missiles and eliminate targets without human oversight. These things are already happening, and I think we are still not devoting enough attention to them.

In summary, because I feel we are not prepared enough TODAY, I see efforts to 1) limit the growth of AI capabilities and 2) have better safeguards against misuse of current capabilities as still important and valuable. 

It's very possible that the growth of AI capabilities will be halted or massively slowed anyway, due to a number of factors you have already discussed (such as the AI bubble popping, or bottlenecks in hardware materials), and I would cautiously welcome those as net positive, for the reasons I mentioned. But I would also welcome any voluntary efforts to curtail the growth of future AI capabilities, and to increase safety, international cooperation, and regulation around current capabilities, as a way to buy us time to become better prepared.

Thank you for this important and courageous testimony. The response from the CEA leadership and the individual co-workers involved was nowhere near appropriate, and you are absolutely right to call it out. It sounds like at least some people have shown some level of contrition after the fact, which makes me hope that some lessons have been learned and internalised at a level that goes deeper than making light amendments to the sexual harassment policy.

This is enraging and I am really sorry you had to go through this. 

Hi, fellow Oxford neighbour here!

The AI Safety Atlas is an amazing resource, and just the type I think you are looking for (understandable by me, a pianist with zero STEM background beyond high-school). 

For your purposes I'd recommend Chapter 1.4 + maybe one or two extra chapters specifically around capabilities and 'the bitter lesson', then perhaps this video by Rational Animations about goal misgeneralisation.

Since you're in Oxford, I'd also recommend reaching out to the Oxford AI Safety Initiative, a student-led group doing amazing work to educate around issues of AI Safety. 

I've been doing their Core Fellowship this term and it has been amazing. 

But the obvious next question is: why are so many EA organisations located in extremely-high-cost cities?


Very fair challenge. I think the EA movement is quick to 'justify' being based in very expensive areas on the grounds that talent is more concentrated there, but there is an argument that it would be cheaper to replicate this kind of 'talent cauldron' in a less expensive location than to sustain even 'decent' standards of living in some of the world's most expensive places, such as the Bay Area.

That's an excellent challenge and an argument for which I have great sympathy.

THIS. 

The 'hyper-optimisation' approach that organisations adopt when trying to recruit the 'best' talent comes at the cost of a huge waste of time and energy for countless candidates who don't even stand a chance of getting the job. It is, in my view, a textbook example of maximisation gone wrong.

What you suggest (capping applications at a given number, say the first 70, and then closing the process) is in my view a good compromise, since, as you say, past a certain point you're unlikely to get a noticeably better sample. Meanwhile, candidates who wouldn't realistically stand a chance save themselves some time and don't apply.

I'd like to contribute my two cents in the form of a meta comment on the discussion above, particularly on the points made by @Yarrow Bouchard 🔸 and @David Mathers🔸.

What you are both doing is the very valuable job of sifting through evidence and signals for factors that could stall or accelerate progress towards AGI, and making an epistemic analysis of which evidence deserves more weight when thinking about timelines. This in turn informs very pragmatic day-to-day decisions, like how best to spend our money and time to have the best shot at creating good for humanity.

I have my own views on each individual point you raised, but regardless of my opinions, I'd like to talk about the practical uses of such analyses, and about the next step: namely, what do we do with all this?

My best shot at a guiding principle for action in times of uncertainty is to try and act with the following reasoning:
What actions can I take, so that even if I turn out to have been completely wrong in my 'predictions', my actions are still very likely to make a positive impact on humanity / not be wasted?

In light of this: 
- Investing lots of money in stocks and shares of frontier AI companies would not be a wise course of action, because if an AI bubble popped, I would have lost money that I could instead have invested in other impactful causes.
- Investing in AI safety and AI security in the broad sense would be a wise course of action, because even if we turn out to be massively wrong about when AGI will come, our safety investments would not be wasted and would deliver actual benefits to society (e.g. improved democratic processes around AI policy, better cybersecurity, better biorisk security, etc.).

To illustrate this further, let's exaggerate ad absurdum and imagine for a moment that, despite all the evidence we thought we had, human-made climate change was actually a hoax, and the planet is fine without intervention.
Even if that were the case, the efforts made to combat climate change (making sure people and companies pollute less, preserving spaces for nature and biodiversity, and so on) would still not have been in vain, as in doing so we delivered genuinely good things for people and animals.

In other words, I think we should try to pick actions that address the worst case scenario, but also simultaneously wouldn't go to waste if we turned out to be massively wrong on how likely that worst case scenario is. 

Hi @abrahamrowe, would you be willing to share more information on this point?



Organizations wanted this to exist.

  • Organizations would be happy to recruit candidates out of a shared hiring pool.

I'm preparing an article with @Anaeli V. 🔹 and others about this and would love some more evidence that organisations are looking for a simplified system. 

Could you also clarify this point? Why do you think the cost-effectiveness would be low despite organisations reporting that it would save them a lot of time?

  • While this process seems like it might produce savings, based on the time savings organizations reported this would generate for them, my estimate was that the cost-effectiveness of a funder paying for this service to exist was pretty low.


I see and agree with your point regarding credibility. Would you mind sharing why you think your organisation didn't achieve the necessary credibility in the eyes of recruiters, and what you see as conducive to reaching it?

Thanks in advance for your help! :D 

Thank you so much for this. Commenting for reach, and also because I want to re-read it in depth later. I very much agree the system is broken, although the problem is more general and not EA-specific. That said, I do agree with you that the EA ecosystem has huge potential for streamlining the process, given its shared values and usually similar recruitment processes.

I'm preparing a piece about it and will DM you the draft - I would love to get your input on it.
