I am currently co-organiser of EA St Andrews and am studying Philosophy and Economics.
Otherwise, I am very much focused on personal development, object-level knowledge, and testing personal fit. Right now, I am volunteering for the Shrimp Welfare Project and am part of a research project within the Oxford Biosecurity Group.
In my free time, I really enjoy sports, going out, and music :))
Hey, thank you for this post!
It sounds extremely plausible that avoiding speciesism (on the grounds of intelligence and other factors) in AI, and reducing it in humans, should be a priority.
Just out of curiosity, it seems important to identify what our current species bias on the grounds of intelligence actually looks like:
a) 'Any species with an average intelligence lower than average human intelligence is viewed as morally less significant.'
b) 'Any species that is (on average) less intelligent than one's own species is viewed as morally less significant.'
If (a), would this imply that an AI would not harm us based on speciesism on the grounds of inferior intelligence, since human intelligence would remain the benchmark and humans would not fall below it?
Would love to hear people's thoughts on this (although I realise that such a discussion, in a general context, might be a distraction from the most important aspect to focus on: avoiding speciesism).
Yes, super interesting to see the transition in @tobyj's thoughts on AI!
I wonder how much time it takes for the average EA without a technical background or an AI-related job to fully wrap their mind around the complexities of AI (given that there are now many more resources and discussions on the topic).
Obviously, there are many factors playing into this, but I would love to hear some rough estimates about this :))
This is super insightful and definitely sounds highly valuable to do in order to make decisions grounded in higher-credence beliefs. Thank you!
I am wondering what people's thoughts on the following are:
TLDR: current undergraduate student looking for work experience in EA (or EA-related) jobs; operations, communications, research
Skills & background: experience in EA community building and operations; volunteering for the Shrimp Welfare Project; participant in an Oxford Biosecurity Group research project; helping to organise EAGxLondon (admissions, marketing, production); interested in research and/or operations and open to any new experience; excellent academic background; stronger involvement with EA since summer 2023
Location/remote: flexible/no strong preference; if in person then preferably in Germany, the UK, or neighbouring countries
Availability & type of work: full-time internship (or volunteering) between mid-May and the beginning of September 2024
Resume/CV/LinkedIn: https://docs.google.com/document/d/1KMQNPZKC7CPJUCgJGA6xLf335bJI7bI8/edit?usp=share_link&ouid=102732152920544712596&rtpof=true&sd=true
Email/contact: elisabeth03rieger@gmail.com; Slack; DM on the Forum
Other notes: I would like to upskill and gain valuable experience to make considered, high-impact career choices, and I am simultaneously looking to work for a high-impact org/employer
In short, for me it is much less about feeling that the EA Forum is unwelcoming of posts from people with less expertise (e.g. me) than others. It is more that if I share a post, I want it to add significantly new insights and not be a mere iteration of what has already been written. Knowing roughly what content already exists consequently takes time. When I come across something that is novel to me, my default assumption is that it has probably been covered by others already.
However, I have not figured out whether I should start posting anyway, because the personal benefits (having a project to work towards, formulating arguments, etc.) might be sufficient on their own, without paying too much attention to the benefits for other Forum readers.