I'm a first-year psychology Ph.D. student who has become quite down about the academic path. AI and LLMs, in particular, seem poised to make reading, writing, and storing information (i.e., a lot of what academics do) much less important. I also recently heard a talk where this team found that GPT-4 outperformed experts in predicting the results of unpublished psychology experiments. I have difficulty believing that in 20 years, there will still be a robust job market for academics. However, as I consider changing paths, I also think many other fields will experience the same. I have no clue what to pursue right now.


This is an important question! I think you're right to imagine that traditional career paths are likely to shift a lot.

In terms of skills which are likely to remain useful for a good while, I want to highlight four areas:

First, general analytical skills and the ability to recognise important (vs unimportant or flawed) arguments. Even with assistance from language models, there will be important challenges in knowing which arguments and evidence deserve attention.

Second, skills around in-person human interaction. These will be slow to be replaced by AI, and they remain crucial in several domains.

Third, and relatedly (since in-person interactions are an important component of how people understand and relate to things), developing social or political influence, broadly understood. Having a position to help others focus on what's important and make connections to try to ensure good outcomes could matter in lots of futures. Of course, this one comes with significant caveats: even well-intentioned influence may cause harm as well as help things; and jostling for influence can easily be negative sum. Approach with care!

Fourth, knowing how to get good use out of language models themselves. There is likely to be a period where centaurs (human-AI teams) outperform either pure AI or pure human teams. Having experience with the latest models and knowing how to get the best out of them will be helpful for staying at the forefront of the relevant labour force.

I think it should be possible to practice and develop these four classes of skill in many different local career paths, so I wouldn't want to make strong statements about what you should or shouldn't be pursuing in the short term.

I think an important thing to remember is that drastic change both obsoletes and creates jobs. I know that AI is not the same as prior technologies, but we've had very similar situations before with the first, second, and third industrial revolutions - with mass production and computing in particular. Many of the jobs we know of today will disappear, though new ones will appear (some are already starting to).

I think AI will struggle with a lot of the soft skills of academia, but I do agree that the field will be extensively changed. This isn't always a bad thing - I think the LLM and automation aspects will make scientific participation and discovery much easier for large sections of society. My father is one of the most intelligent and driven men I know, but is mostly illiterate (and I mean this literally) for a variety of reasons, so for people like him I think AI will be genuinely useful.

To answer your question most directly, I'd say do what you love. Yes, that's annoying and something you'd find on a motivational poster, but if you genuinely love psychology academia, stick with it - just keep an agile scout mindset and roll with the punches. If it was always just a job, play to very human strengths: look for roles which require a distinctly human understanding of the world. Roles in HCI fit this, but there are also many roles in engineering and frontline science that are almost impossible to automate within the next 20 years. Project management and people management roles fit into this category too.

Lots of this answer is opinion-based (and will be, naturally, given the request for opinions!) so others may disagree - but this is how I see it.


If you haven't had 80,000 Hours advising before, that could be a great place to start!
