Sudhanshu Kasewa

Advisor @ 80,000 Hours

Comments
Thanks for doing this, Ben! 

Readers: Here's a spreadsheet with the above taxonomy, plus some columns which I'm hoping we can collectively populate with useful pointers for each topic:

  1. Does [academic] work in this topic help with reducing GCRs/X-risks from AI?
  2. What's the theory of change[1] for this topic?
  3. What skills does this build, that are useful for AI existential safety?
  4. What are some Foundational Papers in this topic?
  5. What are some Survey Papers in this topic?
  6. Which academic labs are doing meaningful work on this topic?
  7. What are the best academic venues/workshops/conferences/journals for this topic?
  8. What other projects are working on this topic?
  9. Any guidance on how to get involved with this topic, who to speak with, etc.?

For security reasons, I have not made it 'editable', but please comment on the sheet and I'll come by in a few days and update the cells.

[1] softly categorised as Plausible, Hope, Grand Hope

Hi Sven: Unfortunately this is a bit outside my wheelhouse, but you might want to reach out to the folks behind Integral -- I bet Milan would have great ideas about what skills to build in order to work on this cause.

Hi! Thanks for sharing your story. Some quick thoughts:

  1. Could you quit your part-time job and instead use that time better? Could you take on a different part-time position that lets you build useful skills and networks?
  2. I think it's probably okay to finish your master's (since you're already halfway through it); if you can find a job that would give you hands-on experience with, say, LLMs, and/or build skills that could be robustly useful in a variety of roles, that might tip the scales in favour of dropping out (but by then you'll be even closer to finishing, so maybe it makes sense to just complete it).
  3. With or without your master's, with or without your job, I think it's useful to become situationally aware of what's happening in AI, and to get comfortable (proficient, even) with using AI tools to enhance your own productivity and growth.
  4. Benjamin Todd wrote about how not to lose your job to AI. I might not fully endorse this piece, but I think it's directionally correct and has some good ideas to think about.

All the best!

Hi Marc, thanks for the question.

A lot has been said about the value of PhDs:

  1. Lewis Hammond gives advice here about doing PhDs
  2. Rohin Shah talks a little about PhDs here in relation to AI safety work
  3. Adam Gleave makes a positive case here for doing a PhD for AI safety

[Caveat: having dropped out of a PhD myself, I might be biased against doing one.] I think our piece on doing PhDs mostly holds up, but I'd make a few updates away from doing one:

  1. Big AI developments might happen soon, and in many worlds it would be better to do good soon rather than build a lot of skill first. (This obviously doesn't apply if you are able to do good via your PhD as well.)
  2. For many of us, we're likely to really succeed at a PhD only if we're obsessed with it: an advisee once said to me, "Only do a PhD if it's something that you would do for free, in your free time, after work, on nights and weekends."
    1. A softer version of this: I think many of us are not calibrated on what attitudes and behaviours it takes to do a PhD -- I certainly wasn't! Before committing, try to come to grips with what you're signing up for.
  3. Finally, I subscribe to the view that a PhD is not an end in itself, but a way to get some role/job/opportunity which might otherwise be very unlikely without it. In impact-focussed spaces, I don't think there are that many opportunities gate-kept by a PhD credential: academia/professorship, the "research scientist" title (but not necessarily the work), and maybe some policy positions. Orgs and managers who care about impact care more about "can you do the work?" than "do you have the credential?"; could you get the legible skills to "do the work" in a paid job with better hours in fewer years, or do you have to do a PhD instead?

Hope this helps! All the best.

Hi Adam, so exciting that you want to use your skills for doing good. I'd go even further and say that "doing good" is its own goal to shoot for, and I want more folks thinking about "What are the best opportunities to do the most good?" first, and only then filtering by some subset of their relevant skills that might make them a good fit. 

This is described more in Part 4 of our career guide, where we outline a framework -- Scale, Tractability, Neglectedness -- to identify global issues offering some of the highest opportunities for positive impact. We've applied this analysis in our work for many years, and have in-depth articles on what we believe to be some of the world's most pressing problems.

We also host a job board to aggregate opportunities aimed at helping with these. If you're looking for more "goal-directed" next steps, perhaps scanning through those jobs can give you a sense of what the world needs right now, and how you can help. All the best!

Hi alphaplus, thank you for the questions. I'm glad to hear your health is improving.

I want to start by saying: your (written) English seems fine! Even if you're concerned about your speaking skills, you can always lean into your written ability to connect, exchange ideas, and grow.

Without knowing more, it's hard to give very tailored advice, so here are some messages I think more folks should take seriously:

  1. AI could be a big deal, soon. It could create huge dangers.
    1. In light of this, lots of stuff needs doing, e.g. technical research, governance, cybersecurity, international cooperation.
      1. Even if the most egregious risks of AI don't materialise soon (or at all), I'll claim (albeit without justification) that having an understanding of how these technologies are transforming the world puts one in a good place to help out in many future scenarios.

As a result, I advocate more people develop 'situational awareness', and make their plans keeping (the possibility of) rapid AI progress in mind.

To your main question of "What would you do if you were in my position?", there are several ways to progress. One procedure is articulated here:

  1. Make some best guesses (hypotheses) about which options seem best.
  2. Identify your key uncertainties about those hypotheses.
  3. Go and investigate those uncertainties.

The key point is to try things, get feedback, update your beliefs, and try again. Once you have more clarity, you'll be able to aim for and commit to specific paths.

Finally, there are no real barriers to entry to engaging with Effective Altruism! If you think you'll find value in connecting with folks in the community, you absolutely should. In addition to this Forum, there are plenty of other spaces, e.g. EAG(x) events you can attend, or Slack channels you can join.

Hi Stephanie,

Thanks for writing in. It's great that you're thinking of using your career to help AI go well, and have been building skills and applying for roles to that end. 

I'm sorry to hear you've been struggling with landing a role. Here are some ideas:

  1. Are you getting enough feedback on your application materials? It's good to solicit input from trusted sources -- mentors, and others on journeys similar to yours -- so you can be sure you're not missing some key pieces in your applications.
  2. Are you talking/connecting to people in roles/orgs you're interested in? As we get more experience, it's more likely that we'll find roles through our network and collaborators; so it's useful to invest in those relationships, not just by being a jobseeker, but by aiming to help out and add value wherever you can (maybe by giving feedback/critiques, exchanging your expertise for something you want to learn, or volunteering in some way).
  3. Are you continuing to make your skills and experience legible? As my colleague @Matt Beard puts it, "you should obsessively improve at an in-demand skill set in a legible way". Those skills could be in writing, speaking, research, analysis, code, hardware, interpersonal collaboration, project management, organisation building, strategic thinking, and so on. The idea here is analogous to "build it and they will come", and countless folks have translated their visible expertise into high-impact roles. Check out our skills pages for more details.
    1. A related thing is to use such public productivity and output as a way to increase your feedback surface area, to pick up on where you can grow. Aim to post your work in places where folks are happy to engage in good faith and offer constructive input. The EA Forum and LessWrong are great places for this!
  4. Are you maybe the right person to start a new org working on something that's incredibly important but that nobody else is doing? I often say in advising calls "You can just do things", because it's true and sometimes we forget that. Yes, it can be daunting, and it's worth considering your own personal circumstances, but all things considered, I want more people to be willing to take those kinds of risks.
    1. Similarly, you might even want to implement someone else's idea, or replicate or improve on an existing project -- there are plenty of excellent ideas out there that need more people executing them.
  5. Some more resources:
    1. Advice for mid-career folks looking for high-impact roles
    2. What to do when the job market isn't cooperating
    3. A (maybe slightly stale) list of social impact job boards

Hope this helps! All the best.

Quick thoughts:

  1. Great that you have work like the arXiv paper! You could even explicitly ask for feedback on that work.
  2. Make it easy for people to understand your work: Try and answer questions like "Why did I do this? What did I learn and/or what update did I make? What is my theory of change?", and so on...
  3. Make it easy for people to engage with your work: Display it prominently, tweet about it, write a blogpost on LessWrong about it. Polish and publish the code base (see an example here), and so on...
  4. Everyone has their own style of building relationships. I think a powerful way to do so is to try and add value to others: can you summarise/discuss their work in public, or give them feedback, or extend it in an interesting way? Are there volunteer or part-time opportunities that you can help out with? Can you identify issues in their codebases and improve them?

Hi, thanks for the question!

I'm not sure anyone is confident about what "optimal policy" is! Developing your own views on optimal policy can take some time; in the meantime you could check out what some researchers and think tanks have to say, e.g.:

  1. CLTR's policy proposals
  2. CAIS's policy proposals
  3. ARI's recommendations

Some longer reads:

  1. International AI safety report
  2. MIRI's AI governance research agenda
  3. OpenPhil's AI Governance RFP might have pointers to some reasonable ideas

Finally, for U.S. citizens, Emerging Tech Policy Careers is great! Check out their AI policy resources.

In general, applying is a great way to get feedback and get calibrated on what you bring to the table and what you need to work on, as mentioned elsewhere in these comments. So, yes, I think you should be biased towards applying to things. There are some nuances to this, including being aware of when a rejection is likely to result in a 'cooling off period' during which you may not be able to reapply for 6-12 months.

Hi Edy,

I'm afraid I don't have any special insight into "how likely..." such funding is, but since the application isn't very costly (between 1 and 4 hours), it probably makes sense to apply anyway? And you can quite reasonably reuse a lot of material across applications, which brings the investment on your end down further.

Exciting that you've been doing so many things! I think it helps your chances if there is legible evidence of how your skills and thinking have evolved over the course of this year. I think it's also useful for grantmakers to know what you're aiming at: Which research agendas and theories of change do you think are worth working on, and why? Do your career transition plans align with those?

Finally, I think you should also consider applying to jobs! You've done quite a bit already, and you might be a good fit for some roles, so you should apply to things to get calibrated on whether you even need the funding.
