https://cs.stanford.edu/~jsteinhardt/ResearchasaStochasticDecisionProcess.html

Via Gwern.

In this post I will talk about an approach to research (and other projects that involve high uncertainty) that has substantially improved my productivity. Before implementing this approach, I made little research progress for over a year; afterwards, I completed one project every four months on average. Other changes also contributed, but I expect the ideas here to at least double your productivity if you aren't already employing a similar process.

Many EA-type activities could benefit from this framework!


Even smart people will often intuitively (that is, without realizing it, or only dimly realizing it) shy away from the part of the project that would provide information telling them they're doing the wrong thing. This is part of the value of things like Gantt charts and other project maps: even though the plans they typically generate fail on collision with reality, they can alert you to ways you are fooling yourself about the most uncertain parts of a project.

My own approach I'd describe as multiobjective optimization (but based more on simulated annealing/statistical mechanics), and it deals with 'stopping times' rather than 'fail rates', though the two are closely connected. I think many EA-affiliated people will not go through that whole paper -- at least the few I've met. (I was told to get a CS degree either at UCSF, where I had a job in theoretical biology, or at Stanford, so I chose the 'stopping time' or 'fail rate'. I was pretty successful at failing: I completed failing at 4 projects in 4 months. Condoleezza Rice also teaches at Stanford now -- she helped win the war in Afghanistan, Iraq, etc. No good deed goes unrewarded.)

Could you please elaborate on how I could apply this? Or where can I learn more?

I just noticed your question, since I've only recently started looking at the EA forums, and I mostly look at the discussions on science, economics, climate change, and EA methodology and practice (e.g. the recent one about basic income projects in Malawi by GiveDirectly or some similarly named group). One reference is https://en.wikipedia.org/wiki/Stopping_time . I am mostly self-educated in stochastic processes, but this is a standard topic in texts. It basically means that if you are doing a search -- or many searches -- you try to estimate how much time and how many resources you will spend pursuing one search (or how to allocate them among several alternative searches) before you call it a success or a failure and give up.
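The stopping-time idea above can be sketched in a few lines. This is a toy simulation with made-up numbers (the 5% per-step success chance and the 30-step budget are hypothetical), not anyone's actual model: each step of a search succeeds independently with some small probability, and you give up after a fixed budget.

```python
import random

def run_search(p_success, budget, rng):
    """Pursue one search, one step at a time.

    Each step succeeds with probability p_success; if no step has
    succeeded after `budget` steps, declare failure and give up.
    Returns (succeeded, steps_spent); steps_spent is the stopping time.
    """
    for step in range(1, budget + 1):
        if rng.random() < p_success:
            return True, step
    return False, budget

def estimate(p_success, budget, trials=100_000, seed=0):
    """Monte Carlo estimate of the success rate and mean stopping time."""
    rng = random.Random(seed)
    wins, total_steps = 0, 0
    for _ in range(trials):
        ok, steps = run_search(p_success, budget, rng)
        wins += ok
        total_steps += steps
    return wins / trials, total_steps / trials

# With a 5% per-step success chance and a 30-step budget, about
# 1 - 0.95**30 (roughly 79%) of searches succeed before the cutoff.
success_rate, mean_stop = estimate(p_success=0.05, budget=30)
```

Running `estimate` with different budgets is one way to see the trade-off the comment describes: a longer budget raises the success rate but also raises the expected resources sunk into searches that ultimately fail.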

I sort of know this 'intuitively' from hiking in mountains -- I have sometimes had to check out several different paths to get where I want to go. You try following one for a while, then decide whether to keep going because it seems to be heading the right way, or go back and repeat the search on other trails. (At times all the trail options turned out to 'fail' -- they headed to cliffs that were impassable to me -- so you end up going nowhere, at least for a while.)

Multiobjective optimization is another standard topic -- utility maximization in economics is one example (often solved via the calculus of variations, or for more complex problems via computer algorithms). Intuitively, for me this is like a hike where I have several attractive places to go (different scenic high spots, waterfalls, valleys, or areas with special kinds of flora and fauna -- I'm a sort of amateur naturalist), and usually I can't go everywhere (especially with time constraints), so I have to select some subset which is 'optimal' (and maybe save the ones I missed for another day).
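The hiking analogy above maps onto a small knapsack-style problem: pick the subset of destinations that maximizes value within a time budget. Here is a toy sketch with entirely made-up destinations, hours, and scenic values, brute-forcing all subsets (fine at this scale, though real multiobjective problems need scalarization or Pareto methods rather than a single value score):

```python
from itertools import combinations

# Hypothetical hiking destinations: (name, hours_needed, scenic_value)
spots = [
    ("waterfall", 3, 8),
    ("summit",    5, 10),
    ("meadow",    2, 4),
    ("grove",     1, 3),
]

def best_subset(spots, time_budget):
    """Brute-force the subset of destinations with the highest total
    scenic value that still fits within the time budget."""
    best, best_value = (), 0
    for r in range(len(spots) + 1):
        for combo in combinations(spots, r):
            hours = sum(s[1] for s in combo)
            value = sum(s[2] for s in combo)
            if hours <= time_budget and value > best_value:
                best, best_value = combo, value
    return [s[0] for s in best], best_value

# With a 6-hour budget, waterfall + meadow + grove (6 hours, value 15)
# beats the single high-value summit (5 hours, value 10).
choice, value = best_subset(spots, time_budget=6)
```

The point of the sketch is the structure, not the numbers: you can't visit everything, so the constraint forces an explicit trade-off among objectives.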

Since my math skills are 'suboptimal', I have been trying to develop my own formalism, which is a 'toy model' (like many in stochastic processes -- e.g. random walks, urn models, etc.) but may capture the essence of more complex ones. It's a 'labor of love', may go nowhere, and is somewhat out of the mainstream. It is also an attempt to make these types of problems relatively simple, so you don't need a PhD to get the idea, and maybe even apply it. The analogy might be a GPS on your phone or in your car -- it gives you directions on where to go and what to do.

I have been trying, off and on, to find people interested in this model -- possibly as collaborators (but the few people I've talked to either work on their own models, or else work using standard heavy-duty computational or high-level math formalisms). Also, few of them work on applications of the kind I am interested in (which are close to some EA projects) -- more often they are into investing, sometimes product development, or allocating resources to best find terrorist cells and such.

Thank you!!
