This is a linkpost for https://markxu.com/value-time

Summary

  • People often seem to implicitly value their time at the rate at which they can currently convert hours into dollars.
  • However, saving a marginal hour today increases the total number of one's working hours by one, adding a new hour at the end of one's career, not a new hour at one's current skill level.
  • This suggests that people who expect their time to become valuable in the future must think their time is approximately just as valuable now, because saving time now gets them to the point where their time is valuable faster and gives them more of such time.
  • This analysis is complicated by various temporal dependencies (e.g. time discounting) that push the value of the current hour up or down compared to the value of the marginal hour at the end of one's career.
  • Under such a view, finding promising young altruists and speeding up their careers represents a significant value add.

Intro

Many people in my social circles have an amount at which they "value their time." Roughly speaking, if someone values their time at $50/hr, they should be willing to pay $50 to save an hour of time, or be paid $50 to do work that has negligible non-monetary value. Knowing this value provides a simple decision rule for deciding which opportunities to trade money for time are efficient to take. I will argue that naive estimates of this value are generally underestimates. This analysis suggests that altruistic actors with large amounts of money giving or lending money to young, resource-poor altruists might produce large amounts of altruistic good per dollar. I will analyze the situation mostly in terms of wages expressed in dollars; however, readers might want to substitute "altruistic impact" instead. I will begin by analyzing a simplified situation, adding nuance later.

The value of your time is the value of the marginal hour at the end of your career

If I currently have a job that lets me convert one hour of time into $50, then it's clear that I should take all time-saving opportunities that cost less than $50. (Note that this doesn't mean I should pay $50 to save an hour of furniture assembly. Furniture assembly might be enjoyable, teach me valuable skills, etc.) However, this assumes that the benefits I receive from my job are entirely monetary. For most jobs, this will not be the case. If one is a software engineer, then much of the benefit of one hour of working as a software engineer will be the skills and experience gained during that hour. More specifically, the hourly rate a software engineer commands depends on their skill, which in turn depends on training and experience. Thus an hour of software engineering might increase expected future compensation by more than $50 (in fact, under plausible assumptions, this will be the primary benefit of the early part of most careers).

To be more quantitative, let $w(h)$ be the wage an employee with $h$ hours of experience can earn per hour. Suppose that you currently have $h_0$ hours of experience and your career will be $H$ hours long in total. The amount of money you expect to earn in the future is $\int_{h_0}^{H} w(h)\,dh$. (Note that a more precise analysis would include a discount rate: money now is worth more than money later because of investment possibilities.) A naive model of saving an hour of present time calculates the total earnings of your career as $w(h_0) + \int_{h_0}^{H} w(h)\,dh$, meaning you should take an opportunity to save an hour at cost $c$ if and only if $c \leq w(h_0)$.

However, as stated above, this suggests that the marginal hour at present is worth $w(h_0)$, your current wage. This is not what actually happens when you save one hour at present. What actually happens is that the total earnings of your career will now be $\int_{h_0}^{H+1} w(h)\,dh$, for a difference of approximately $w(H)$ instead of $w(h_0)$. Since one's expected wage at the end of a career is likely substantially higher than one's current wage (especially for people at the beginning of their careers), treating the value of one's time as $w(h_0)$ instead of $w(H)$ leads to an underestimate of $w(H) - w(h_0)$.

For example, suppose that one is a quantitative trader. They currently earn $100/hr. However, with 20,000 hours (10 years, assuming 2000 working hours a year) of experience, they expect to earn $1000/hr. If they have no time-discount rate on money, then they should be willing to pay up to $1000 to save an hour of time presently, despite the fact that they will be net down $900 if they use that time to do work. Another way of seeing this is that saving an hour of time for your present self is in some sense the same thing as saving an hour of time for your future self, because it causes the future to arrive one hour earlier and be one hour longer. Thus, if you would be willing to trade an hour for $1000 in the future, you should also be willing to do so now.
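To make this concrete, here is a minimal numeric sketch of the model above, using the trader's numbers. The linear-then-flat wage curve, the zero starting experience, and the 40,000-hour remaining career are illustrative assumptions, not anything from the original example:

```python
# Minimal sketch: naive vs. actual value of saving one hour, under a
# hypothetical wage curve (linear growth from $100/hr to $1000/hr over
# 20,000 hours of experience, then flat; all numbers illustrative).

def wage(h: float) -> float:
    """Hourly wage after h hours of experience (hypothetical curve)."""
    return min(100 + (1000 - 100) * h / 20_000, 1000)

def career_earnings(h0: float, total: float, step: float = 1.0) -> float:
    """Approximate the integral of wage(h) from h0 to total."""
    n = int((total - h0) / step)
    return sum(wage(h0 + i * step) * step for i in range(n))

h0, H = 0, 40_000  # current experience and total career length, in hours

naive_value = wage(h0)  # value of "an hour at the current wage"
actual_value = career_earnings(h0, H + 1) - career_earnings(h0, H)

print(f"naive value of a saved hour:  ${naive_value:,.0f}")   # ~$100
print(f"actual value of a saved hour: ${actual_value:,.0f}")  # ~$1,000
```

The saved hour shows up as one extra hour at the end of the career, so its value is the end-of-career wage, not the current one.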

This also suggests that working twice as much produces much more than twice the value. Naively, a 160,000 hour career produces the same value as two 80,000 hour careers. However, in reality, the second half of the 160,000 hour career starts with 80,000 hours of experience! This doesn't account for various relative factors (being faster than competitors can produce much higher amounts of value) or aging-out effects like getting worse at working as you work more. A rough sketch of this corollary appears below.
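Here is that sketch, under a hypothetical wage curve where hourly wages double every 20,000 hours of experience (the specific curve is an assumption for illustration, not a claim about real careers):

```python
# Sketch: one long career vs. two short ones, under a hypothetical
# wage curve where the hourly wage doubles every 20,000 hours.

def wage(h: float) -> float:
    return 50 * 2 ** (h / 20_000)  # illustrative exponential growth

def career_earnings(total_hours: int, step: int = 100) -> float:
    return sum(wage(h) * step for h in range(0, total_hours, step))

one_long = career_earnings(160_000)
two_short = 2 * career_earnings(80_000)

print(f"one 160,000-hour career: ${one_long:,.0f}")
print(f"two 80,000-hour careers: ${two_short:,.0f}")
print(f"ratio: {one_long / two_short:.1f}x")  # well above 2 under this curve
```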

A corollary is that burning out for a year is a disaster, because it's equivalent to losing a final career year. Similarly, vacations and other such leisure activities have larger costs than one might have naively expected, since they delay career growth and shorten careers. For example, if someone who could have had a 40 year career burns out for a year, their career is now 39 years and is missing the year where they would have had 39 years of experience.

Temporal Dependence

One key factor missing in the above analysis is a temporal dependence on the value of wages. (The substitution of wages for altruistic impact is going to break down slightly and depend on complicated factors like the flow-through effects of altruism and whether standard investment returns are higher than altruistic flow-through effects. See Flow-Through Effects of Innovation Through the Ages and Giving Now vs. Later for a more nuanced discussion.) The most obvious form of temporal dependence is a monetary discount rate controlled by the ability to turn money now into more money later via standard investments. Such a discount rate suggests that our theoretical quantitative trader discussed above should not be willing to spend $1000 to save an hour of time at the present day, but rather spend an amount that would be equivalent to $1000 after 10 years of investment (approximately $500 at 7% yearly returns). I could write an equation expressing this, but I don't think it would lend much clarity.
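For what it's worth, the present-value arithmetic behind the "approximately $500" figure is a one-liner; the 7% return and 10-year horizon are the assumptions from the example above:

```python
# Sketch of the discounting adjustment: the most the trader should pay
# today to save an hour whose $1000 value is realized ~10 years out.

future_value = 1000   # $/hr expected at the end of the 10-year curve
annual_return = 0.07  # assumed yearly investment return
years = 10

present_value = future_value / (1 + annual_return) ** years
print(f"${present_value:,.0f}")  # ~$508, matching the "~$500" figure above
```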

Less standard but more accurate analyses would incorporate the relative differences in the value of money over time for your particular goals. For instance, it might be that the altruistic discount rate on dollars is much higher than the standard discount rate because there are altruistic opportunities available now that won't be available later, even if you had double the money. Another salient example is effective altruism movement building (meta-EA), which might get most of its value early on. One way to model this is that instead of producing value directly, people in meta-EA save other people's time (by getting them into the movement earlier), enabling them to produce more value later. If you think, for example, that this century is particularly important, then saving an early-career altruistic professional 1 year of time in 2090 is going to get you the marginal year of someone with ~10 years of experience, compared to saving such a person 1 year in 2080, which gets the marginal year with ~20 years of experience. Depending on how quickly you think the value of someone's work goes up with respect to experience, this might suggest large discount rates.
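As a rough sketch of how experience-driven growth translates into a discount rate, suppose (hypothetically) that the value of a year of work doubles with every 10 years of experience; the 2080-vs-2090 comparison above then implies an annual discount rate of about 7%:

```python
# Sketch of the implied discount rate in the meta-EA example above.
# Assumes (hypothetically) that the value of a year of work doubles
# with every 10 years of experience, with careers cut off at 2100.

def value_per_year(experience_years: float) -> float:
    return 2 ** (experience_years / 10)  # illustrative growth curve

# Saving someone a year in 2080 buys the marginal year at ~20 years of
# experience; saving a year in 2090 buys it at ~10 years of experience.
v_2080 = value_per_year(20)
v_2090 = value_per_year(10)

# The annual discount rate that equates the two over the 10-year gap:
implied_rate = (v_2080 / v_2090) ** (1 / 10) - 1
print(f"implied annual discount rate: {implied_rate:.1%}")  # ~7.2% here
```

Steeper assumed growth curves would imply correspondingly larger discount rates.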

As another example, people working in AI Alignment (like me) might think that most valuable alignment work is going to be done in the ~10 years preceding transformative AI (TAI). If you think this date is about 2055 (see Holden's summary of Ajeya's Forecasting TAI from Biological Anchors), then the most important thing is to maximize your abilities as a researcher from 2045-2055. (It's possible that you should be making different bets, e.g. if you think you have more influence in worlds where TAI is sooner.) Since I'll probably still be working in 2055, saving a marginal year of time today gives me one extra year of research experience during the decade preceding TAI, but not any extra marginal years during that decade. (This does suggest that saving time during that decade is very valuable, though.) Of course, I am not modeling various effects that current research has on things like field building, which potentially dominates the value of my current work.

Actionables

This analysis suggests that people with the potential to earn high salaries or have high altruistic impact have high time value, not because of the work they can produce now, but because saving their time gets them to where they will eventually end up faster. Provided this holds qualitatively, it suggests a couple of things:

  • Care about the value of your time more and try to aggressively take opportunities to save it or spend it more effectively, even if this doesn't make that much sense in terms of the value you think you can currently generate.
    • This might mean that you should take out loans and such, so you have resources. If your expected future earnings are high, then things like hiring tutors to get through school faster are likely worth the interest on the loan.
  • What you spend your free time doing actually kind of matters. Developing some skill one year faster increases the amount of value you produce on the margin by quite a bit.
  • Standard advice that young people have time to explore potential career options should be weighed against the cost of becoming less awesome in the option they eventually pursue because too much time was spent exploring.
    • For example, if someone is potentially a promising AI alignment researcher and they take a year off from college to travel the world and see the sights, this decreases the amount of research experience they have during the period around TAI by a year.
    • Exploration is still probably a good idea, but it should be traded off not against the value one would have produced directly, but rather against the marginal increase in value that would have resulted from the added growth and experience had that time not been spent exploring.
  • Finding promising young people and using large amounts of resources to speed up their careers probably has pretty good altruistic returns.
    • (If you think you're such a person and could benefit from additional resources, feel free to send me an email and I'll see what I can do.)
Comments

This analysis suggests that altruistic actors with large amounts of money giving or lending money to young, resource-poor altruists might produce large amounts of altruistic good per dollar.

A suspicious conclusion coming from a young altruist! (sarcasm)
