
I would like to gain mastery in the domain of alignment research. Deliberate practice is a powerful sledgehammer for gaining mastery. But unlike in chess or piano, it's not clear to me how to swing that sledgehammer in this domain. The feedback loops are extremely long, and the "correct action" is almost never known ahead of time, or even right after taking the action.

What are some concrete ways I could apply deliberate practice to alignment research?

One way would be to apply it to skills that are sub-components of research, rather than trying to rapidly practice research end-to-end.

The sub-skill I've thought of that best fits deliberate practice is solving math and physics problems, à la Thinking Physics or other textbook exercises. Being better at this would certainly make me a better researcher, but it might not be worth the opportunity cost, and if I ask myself, "Is this cutting the enemy with every strike?" the answer comes back no.

Another thing I can think of is trying to deliberately practice writing, which is a big part of my research. I could try to be more like John, and write a post every week, to get lots of quick feedback. But is this fast enough for deliberate practice? I get the sense that the feedback cycle has to be almost real-time. Maybe doing tweet explanations is the minimal version of this?

I'd appreciate any other concrete ideas! (Note that my research style is much more mathy/agent-foundations flavored, so programming is not really a sub-skill of my research.)

Comments

Not directly relevant to the OP, but another post covering research taste: An Opinionated Guide to ML Research (also see Rohin Shah's advice about PhD programs, search "Q. What skills will I learn from a PhD?", for some commentary).

"a la Thinking Physics or other textbook exercises."

I very much think this is the wrong move, for the reason you mention: it doesn't even have a clear intended path to cutting the enemy. For projects where there's an imaginable, highly detailed endstate you're trying to reach (as opposed to chess, where there are a million different checkmate patterns with few shared features to guide your immediate next moves), I would advise starting by mapping out the endstate. From there, you can backchain until you see a node you could plausibly forward-chain to, aka "opportunistic search".

I think the greatest bottleneck to producing more competent alignment researchers is basically self-confidence. People are too afraid of embarrassment, so they don't trust their own judgment, so they won't try to follow it, so they never grow better judgment by successively making embarrassing mistakes and correcting themselves. It's socially frowned upon to innocently take your own impressions seriously when there exist smarter people than you, and that reflects an oppressive "thou shalt fall in line" group mentality that I find really unkind.

Like a GAN that wants to produce art but doesn't trust its own discriminator, so the discriminator atrophies and the only source of feedback left for the generator is the extremely slow loop of outside, low-bandwidth opinion. Or like the pianist who's forgotten how to listen, and looks to their parent after every press of a key to infer whether it was beautifwl or not.

I think that researchers who intend to produce something should forget about probability. You're not optimising for accurate forecasts; you're optimising for building new models that can be tested and iteratively modified or abandoned until you have something that seems robust to all the evidence it catches. It's the difference between searching for sources of Bayesian evidence related to specific models you already know about, vs searching for information that maximises the expected Kullback-Leibler divergence between all your prior and posterior intuitions, in order to come up with new models no one's thought of before.
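To make that second search criterion concrete, here's a minimal sketch in generic notation (hypotheses $\theta$, a probe or experiment $e$ with outcome $y$; nothing alignment-specific): the quantity being gestured at is expected information gain, the KL divergence you expect between your posterior and your prior once you see the outcome.

$$
\mathrm{EIG}(e) \;=\; \mathbb{E}_{y \sim p(y \mid e)}\!\left[\, D_{\mathrm{KL}}\big( p(\theta \mid y, e) \,\|\, p(\theta) \big) \,\right],
\qquad
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{\theta} P(\theta)\, \log \frac{P(\theta)}{Q(\theta)}.
$$

The first framing fixes the hypothesis set and only scores evidence for or against it; the second rewards probes whose outcomes you expect to reshape the whole distribution over hypotheses.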

That means you have to just start trying to make your own models at some point, and you have to learn to trust your impressions so you're actively motivated to build them. Which also means your forecasting ability will probably suffer for a while until you get good enough. But if you're always greedily following the estimated-truth-gradient at every step, you have no momentum to escape local optima.

I realise you were asking for concrete advice, but I usually don't think people are bottlenecked by lack of ideas for concrete options. I think the larger problem is upstream, in their generator, and resolving it lets them learn to generate and evaluate-but-not-defer-to ideas on their own.[1]

  1. ^

    Of course, my whole ramble lacks plenty of nuance and disclaimers, and doesn't apply to everything it may look like I'm saying it applies to. But I'm not expecting you to defer to me; I'm revealing patterns that I hope people will steal and apply for themselves. Whether the lack of nuance makes me literally wrong is irrelevant. I'm not optimising for being judged "right" or "wrong" (this isn't a forecasting contest); I'm just trying to be helpfwl by revealing tools that may be used.
