My background is in Industrial Engineering. I'm currently upskilling independently in AI Safety through project-based learning. I aim to combine broad project management abilities with AI Safety research skills, ideally oriented towards a medium-term role in research management. Alternatively, I'm considering AI governance or founding an organisation that addresses a bottleneck in the AI Safety ecosystem.
I'm also involved in EA community building, co-organising the EA university group at UPF and facilitating introductory courses.
– Anything that can be helpful for breaking into AI Safety: advice, mentors, project ideas, collaboration, roles I should apply to, connections...
– Advice on doing independent work and, more broadly, on pursuing a career in EA
– Advice on community building
– Expose me to new ideas, challenge assumptions
– Meet people, get inspired, have fun!
– To others starting out: I'd love to connect! Have a low bar for reaching out
Maybe I can offer my humble advice for navigating early-career decisions and jumping into independent work and AI Safety. I'm always up for friendly chats. It would also be great to explore ways to collaborate on projects and build momentum together to do good.
– To more established professionals: I'd be keen to support any meaningful projects or roles where I could contribute.
I'm committed, curious, organised, and eager to learn. I bring strong analytical reasoning skills and a broad technical background, and I'm currently upskilling in AI Safety with the aim of producing concrete output.
Maybe what humans need more than more advice is advice on how to actually apply advice — that is, better ways to bridge the gap between hearing it and living it?
So not just a list of steps or clever tips, but skills and mindsets for truly absorbing what we read, hear, and discuss, and turning that into action. I feel this might mean shifting from passively waiting for something to "click" to actively digging for what someone is trying to convey and figuring out how it could work for us, just as it worked for them.
Of course, not all advice will fit us, and that's fine. We can't expect to apply all advice we get, not even all advice that really resonates. Often, the greatest act of kindness we can do for ourselves isn't working to make ourselves more perfect, but understanding and accepting our imperfection and limitations.
However, realistically, I think the bigger reason we ignore most advice isn't that it's not for us — it's that we rarely pause to ask ourselves how it might look in practice or remind ourselves to follow through. As a result, we waste the immense potential for transformation and for acquiring new habits and behaviours that's already out there.
Very good questions, thanks for asking.
Reading MASFA was one reason, certainly, although it wasn't enough. I already knew deep inside that social pressure was not a sufficient argument to justify the suffering of trillions of animals, and that it was necessary for people to go out of their way and face social disapproval if we wanted to make moral progress as a society. The book made these arguments more vivid and concrete, and I started to think more about them — but they weren't enough on their own, because I already had the intuition. I had to make the connection.
Probably community played the biggest part. Engaging more with EAs in person, being an EA university group organiser. Knowing that, while I'd face social disapproval at times, there was a community where I could feel safe and at ease.
We are social animals and, for most of us, moral arguments don't have sufficient force for us to make such big decisions.
There was also the realisation that I believed in taking one's moral values and reasoning seriously in practice, yet I wasn't acting accordingly. And I knew that I wanted to change that — to be consistent with myself, to set an example, and to be ahead of my time in a way. We could call this "ego judo": these were egoic motivations, but if they could be redirected toward a positive outcome, then so be it. We need to use all leverage points.
But of course every case is different, and what worked for me will be different for others, so I'd be interested in hearing about others' cases.
I think it's very important to ask what makes some people make the connection and take their moral intuitions seriously, rather than ignore them. It's at the core of applied moral philosophy and, in the end, EA.