Peter

712 karma · Joined · Working (0-5 years)

Bio

Interested in AI safety talent search and development. 

How others can help me

  1. Discuss charity entrepreneurship ideas, nuts & bolts.
  2. Robustly good opportunities for positive impact in short AI timeline worlds (<3 years).
  3. Connect me with peers, partners, or cowriters for research or fiction. 

How I can help others

Making and following through on specific concrete plans. 

Comments (153)

Seems important to check whether the people hired actually fit those experience requirements or have more experience. If the roles are very competitive, the actual experience of hires could be much higher than what's listed.

This seems interesting. Are there ways you think these ideas could be incorporated into LLM training pipelines, or experiments we could run to test their advantages and potential limits vs RLHF/conventional alignment strategies? Also, do you think using developmental constraints and then techniques like RLHF could be more effective than either alone?
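For concreteness, one hypothetical way such a comparison could be set up (all stage and eval names below are placeholders I made up, not anything from your post): train matched arms with developmental constraints only, RLHF only, and constraints followed by RLHF, then run the same alignment evals on each.

```python
# Hypothetical experiment sketch (placeholder names, not from the post):
# compare "developmental constraints" (a staged curriculum that only unlocks
# broader data and tool access after earlier stages pass) against RLHF alone
# and against the two combined.
from dataclasses import dataclass, field
from typing import List

STAGES = ["narrow_curated_corpus", "broader_web_text", "tool_use"]  # made-up stages

@dataclass
class Arm:
    name: str
    curriculum_stages: List[str] = field(default_factory=list)
    rlhf: bool = False

arms = [
    Arm("constraints_only", curriculum_stages=STAGES),
    Arm("rlhf_only", rlhf=True),
    Arm("constraints_then_rlhf", curriculum_stages=STAGES, rlhf=True),
]

# Evals you might run after each arm; whether the combined arm beats either
# alone is exactly the open question above.
evals = ["helpfulness", "jailbreak_robustness", "sycophancy", "capability_retention"]

for arm in arms:
    stages = " -> ".join(arm.curriculum_stages) or "none"
    print(f"{arm.name}: curriculum={stages}, rlhf={arm.rlhf}; evaluate on {evals}")
```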

Answer by Peter

I'd like to see more rigorous engagement with big questions like where value comes from, what makes a good future, how we know, and how this affects cause prioritization. I think it's generally assumed that "consciousness is where value comes from, so maximize it in some way." Yet some of the people who have studied consciousness most closely from a phenomenological perspective (e.g. zen masters, Tibetan lamas, and other contemplatives) seem not to think that, let alone endorse scaling it to cosmic levels. Why? Is third-person philosophical analysis alone missing something?

The experiences of these people add up to millions of person-years of contemplation across thousands of years. If we accept this as a sort of "long reflection," what does that mean? If we don't, what do we envision differently and why? And are we really going to be able to do serious, sustained reflection if/once we have everything we think we want within our grasp due to strong AI?

These are the kinds of questions I'm currently thinking through most in my spare time and writing up my thoughts on.

For 2, what's "easiest to build and maintain" is determined by human efforts to build new technologies, cultural norms, and forms of governance.

For 11, there isn't necessarily a clear consensus on what "exceptional" means or how to measure it, and ideas about what it is are often not reliably predictive. Furthermore, organizations are extremely risk-averse in hiring, and there are understandable reasons for this - they're thinking about how to best fill a specific role with someone they will take a costly bet on. But this is rather different from thinking about how to make the most impactful use of each applicant's talent. So I wouldn't be surprised if even many talented people cannot find roles indefinitely, for a variety of reasons: 1) the right orgs don't exist yet, 2) funder market lag, 3) difficulty finding opportunities to prove their competence in the first place (doing well on work tests is a positive sign, but it's often not enough for hiring managers to hire on that alone), etc.

On top of that, there's a bit of a hype cycle for different things within causes like AI safety (there was an interp phase, followed by a model evals phase, etc). Someone who didn't fit ideas of what's needed in the interpretability phase may have ended up a much better fit for model evals work when it started catching on, or for finding some new area to develop. 

For 12, I think it's a mistake to bound everyone's potential here. There are certainly some people who live far more selflessly than most, and people who come much closer to that through their own efforts. Foreclosing that possibility is pretty different from accepting where one currently is and doing the best one can each day.

Would be curious to hear more. I'm interested in doing more independent projects in the near future but am not sure how to make them feasible.

What do you think is causing the ball to be dropped?

Yes, what you are scaling matters just as much as the fact that you are scaling. So now developers are scaling RL post-training, and scaling pretraining using higher-quality synthetic data pipelines. If the point is just that training on average internet text provides diminishing returns in many real-world use cases, then that seems defensible; that certainly doesn't seem to be the main recipe any company is using to push the frontier right now. But it seems like people often mistake this for something stronger, like "all training is now facing insurmountable barriers to continued real-world gains" or "scaling laws are slowing down across the board" or "it didn't produce significant gains on meaningful tasks, so scaling is done." I mentioned SWE-Bench because that seems to suggest significant real-world utility improvements rather than a trivial decrease in prediction loss. I also don't think it's clear that there is such an absolute separation here - to model the data you have to model the world in some sense. If you continue feeding multimodal LLM agents the right data in the right way, they continue improving on real-world tasks.
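To make the distinction concrete, here's a minimal sketch (my own illustration with invented numbers, not data from any of the models discussed) of the power-law-plus-floor form that pretraining scaling-law work typically fits. Roughly constant proportional loss reductions per 10x of compute shrink in absolute terms as the curve approaches its floor, which can read as "diminishing returns" even while downstream benchmarks keep making large jumps.

```python
# Minimal sketch, not data from the comment: the loss-vs-compute form
# L(C) = a * C^(-b) + c that pretraining scaling-law papers typically fit.
# Compute is in arbitrary units (think multiples of some baseline FLOP budget)
# and the loss values are invented purely for illustration.
import numpy as np
from scipy.optimize import curve_fit

def loss_vs_compute(compute, a, b, c):
    """Power law with an irreducible loss floor c."""
    return a * compute ** (-b) + c

compute = np.array([1.0, 10.0, 100.0, 1e3, 1e4])  # hypothetical training runs
loss = np.array([2.9, 2.5, 2.2, 2.0, 1.85])       # hypothetical eval losses

(a, b, c), _ = curve_fit(loss_vs_compute, compute, loss, p0=[1.5, 0.1, 1.3])
print(f"fit: L(C) = {a:.2f} * C^(-{b:.3f}) + {c:.2f}")

# Each additional 10x of compute buys a smaller absolute loss drop as L(C)
# approaches the floor c, which reads as "diminishing returns" even if
# downstream benchmark scores keep making large jumps over the same range.
for bigger_run in (1e5, 1e6):
    print(f"predicted loss at {bigger_run:.0e} units: {loss_vs_compute(bigger_run, a, b, c):.3f}")
```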

Shouldn't we be able to point to some objective benchmark if GPT-4.5 was really off trend? It got 10x the SWE-Bench score of GPT-4. That seems like solid evidence that additional pretraining continued to produce the same magnitude of improvements as previous scale-ups. If there were now even more efficient ways to improve capabilities, like RL post-training on smaller o-series models, why would you expect OpenAI not to focus their efforts there instead? RL was producing gains and hadn't been scaled as much as self-supervised pretraining, so it was obvious where to invest marginal dollars. GPT-5 is better and faster than 4.5. This doesn't mean pretraining suddenly stopped working or fell off trend from scaling laws, though.

Maybe or maybe not - people also thought we would run out of training data years ago. But that has been pushed back and maybe won't really matter given improvements in synthetic data, multimodal learning, and algorithmic efficiency. 

  1. It seems more likely that RL does actually allow LLMs to learn new skills.
  2. RL + LLMs is still pretty new, but we already have clear signs that, with the right setup, it exhibits scaling laws just like self-supervised pretraining. This time the curves appear to be sigmoidal, probably per policy, goal, or environment the model is trained on (see the sketch after this list). It has been about a year since o1-preview, and this was maybe being worked on to some degree for about a year before that.
  3. The Grok chart contains no numbers, which is so strange that I don't think you can conclude much from it except "we used more RL than last time." It also seems likely that they might not yet be as efficient as OpenAI and DeepMind, who have been in the RL game for much longer with projects like AlphaZero and AlphaStar.
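To illustrate point 2, here's a minimal sketch (my own invented numbers; the logistic form is an assumption, not a published fit) of what a sigmoidal per-environment RL scaling curve might look like when fit to success rates:

```python
# Minimal sketch with invented numbers: per-environment RL success rate as a
# logistic (sigmoidal) function of log10 RL compute. Each environment saturates
# at its own ceiling, so aggregate progress depends on adding new environments.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_compute, ceiling, steepness, midpoint):
    """Success rate rising toward a ceiling as a logistic function of log-compute."""
    return ceiling / (1.0 + np.exp(-steepness * (log_compute - midpoint)))

# Hypothetical success rates for one environment at increasing RL compute (log10 FLOP).
log_compute = np.array([21.0, 21.5, 22.0, 22.5, 23.0, 23.5])
success = np.array([0.05, 0.12, 0.35, 0.62, 0.78, 0.84])

(ceiling, steepness, midpoint), _ = curve_fit(sigmoid, log_compute, success, p0=[0.9, 3.0, 22.2])
print(f"fit: ceiling={ceiling:.2f}, steepness={steepness:.2f}, midpoint=1e{midpoint:.1f} FLOP")
```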