
Is anyone familiar with H.R. 485? It has been introduced in the House, but it is not yet law.

According to the CRS "This bill prohibits all federal health care programs, including the Federal Employees Health Benefits Program, and federally funded state health care programs (e.g., Medicaid) from using prices that are based on quality-adjusted life years (i.e., measures that discount the value of a life based on disability) to determine relevant thresholds for coverage, reimbursements, or incentive programs".

I think the motivation might be to prevent discrimination against people with disabilities, but it seems to me like it goes too far.

It seems to me it would prevent the use of QALYs for making decisions such as whether a particular cure for blindness is worthwhile, and how it might compare to treatments for other diseases and conditions.

Is anyone familiar with this bill and able to shed more light on it?

3 Answers

I don't know a lot about this bill specifically, but here's my sense:

This bill has been pushed by disability activists, who are opposed to things like QALYs, which they consider ableist. Steve Pearson nicely summarizes why here:

Since the early days of CEA experts recognized that any extension of life for patients with a persistent disability would be “weighted” in the QALY by the (lower) quality of life assigned to that health state. For example, a treatment that extends life — but does not improve quality of life — for patients with a condition that requires mechanical ventilation would be assigned a lower QALY gain than a treatment that extends life exactly the same amount for patients with rheumatoid arthritis or cancer.
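The weighting Pearson describes is simple arithmetic, and a small sketch may make it concrete. The quality weights below (0.4 and 0.8) are hypothetical illustrative numbers, not values taken from the bill, ICER, or any published utility study:

```python
def qaly_gain(years_gained: float, quality_weight: float) -> float:
    """QALYs gained = life-years gained x quality weight,
    where the weight runs from 0 (death) to 1 (full health)."""
    return years_gained * quality_weight

# Both treatments extend life by the same two years; only the quality
# weight assigned to the patients' ongoing health state differs.
ventilated = qaly_gain(2.0, 0.4)  # persistent disability, hypothetical weight 0.4
arthritis = qaly_gain(2.0, 0.8)   # rheumatoid arthritis, hypothetical weight 0.8

print(ventilated)  # 0.8 QALYs gained
print(arthritis)   # 1.6 QALYs gained
```

Under a cost-per-QALY threshold, the identical two-year life extension for the ventilated patient buys half as many QALYs, which is exactly the asymmetry disability advocates object to.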

This bill currently has no Democratic co-sponsors in the House (although this article says that there is "bipartisan interest"), and I do not think it has been introduced in the Senate. Thus, I suspect this bill is unlikely to get passed under a Democratic administration, but I'm not sure about that.

Here is some background:

  1. US health care spending is out of control ($4.3 trillion; 18.3% of GDP in 2021); this is a massive, intractable problem, and this bill would certainly not help. 
  2. I do not think QALYs are super widely used in the US health care system as is. H.R. 485 represents an expansion of existing restrictions (see page 47) on the use of cost-effectiveness analysis in Medicare and other federal programs; the goal of this legislation is to fully ban the use of QALYs across all federal programs, which I think would include state Medicaid programs, since Medicaid is jointly financed by states and the federal government.
  3. Even in the absence of this bill, there is significant public opposition to the use of QALYs and similar metrics in US health care. (Health care rationing remains a very loaded issue in US politics.)
  4. In terms of things that are wrong with the US health care system, failure to use QALYs is a problem, but I think other things are bigger contributors to the widespread provision of low-value care.
  5. If this bill were to pass, I think (?) it'd still be possible to use things like evLYGs, which could play a similar role as QALYs in cost-effectiveness analysis, but "evenly measure any gains in length of life, regardless of the treatment’s ability to improve patients’ quality of life."
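The QALY/evLYG distinction in point 5 can be illustrated with hypothetical numbers. This is a simplified sketch of the idea behind ICER's "equal value of life-years gained" metric, not its full methodology:

```python
def qalys_gained(years: float, quality_weight: float) -> float:
    # QALY: each added life-year is discounted by the quality weight
    # (0 = death, 1 = full health) of the patient's health state.
    return years * quality_weight

def evlygs_gained(years: float) -> float:
    # evLYG: each added life-year counts at full weight, regardless of
    # the quality weight of the health state it is lived in.
    return years * 1.0

years = 2.0          # life extension from a hypothetical treatment
weight = 0.4         # hypothetical quality weight for a persistent disability

print(qalys_gained(years, weight))  # 0.8 -- discounted by disability weight
print(evlygs_gained(years))         # 2.0 -- same gain as for any other patient
```

The point is that an evLYG-based analysis values the two extra years identically for a disabled and a non-disabled patient, which is why it might survive a QALY ban while still supporting cost-effectiveness analysis.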

Tl;dr: I think passing this bill might be akin to shooting holes in the tires of a car that only had two wheels to begin with, and it currently looks unlikely to pass.

If it makes you feel any better, 90-95% of bills are never passed into law.

I don't have any additional information about this specific bill, but I'd guess that prevention of discrimination is exactly the point.

It seems to me it would prevent the use of QALYs for making decisions such as whether a particular cure for blindness is worthwhile,

Yeah, I think this is probably a good thing - QALY-based prices probably shouldn't be used to "determine relevant thresholds for coverage, reimbursements, or incentive programs" (ETA: particularly for federal programs in a health care system like that in the US).

and how it might compare to treatments for other diseases and conditions.

I'm not sure this would come into play here, since insurance coverage etc. (which, to my understanding, is what is in question here) doesn't operate with that logic, but I could be lacking in imagination.

Comments

Thanks for sharing, gordoni.

I am not familiar with the bill. However, I think wellbeing-adjusted life years are a better measure than quality-adjusted life years (see here).

WELLBYs are proposed in the doc you link as a measure specifically for non-health and non-pecuniary benefits. QALYs already take subjective well-being into account, alongside physical health metrics, through the psychological component of HRQoL, so a shift to WELLBYs in this context would just exclude the physical health component of QALYs when pricing physical health interventions.

Hi there,

Thanks for clarifying. In any case, I think we should only care about health and pecuniary benefits to the extent they affect wellbeing, so using WELLBYs still seems better than QALYs for assessing those too. In addition, I would prefer a wellbeing metric that measured happiness instead of life satisfaction.

This is getting into philosophical territory, so here’s a thought experiment. Let’s say you’d lost your legs. You had to choose between a $10 pill that instantly regrew your legs and restored your subjective well-being, and a $0 pill that only corrected any loss in subjective well-being from having lost your legs. Do you really choose the well-being only pill in this case?

Thanks for the thought experiment! 

So, in the example you described, I would pay the $10 to get my legs back. However, this is just because I am altruistic, and with my legs minus $10 I would have a greater positive impact on the world than without my legs (not having legs surely implies a loss in productivity due to e.g. having to spend more time moving from one place to another).

If the expected total hedonistic utility (ETHU) for all the moral patients excluding me was the same in both scenarios, I would be totally indifferent between the 2 options.

Interesting! Do you think that is a common view? And do you think that federal healthcare policy should be made by somehow tapping into commonsense moral intuitions? Or should a winning, even if unpopular, argument determine policy options?

Edit: perhaps we can value QALYs on the principle that we’re unlikely to be able to accurately track all contributors to total ETHU in practice, but having people maintain physical health is probably an important contributor to it in practice. Physical health has positive externalities that go beyond subjective well-being and therefore we should value it in setting healthcare policy.

Do you think that is a common view?

In the general population, no. It is hard to imagine wellbeing being the same without two legs, so people would answer the question while ignoring the stipulation that wellbeing would be the same.

And do you think that federal healthcare policy should be made by somehow tapping into commonsense moral intuitions? Or should a winning, even if unpopular, argument determine policy options?

I think commonsense moral intuitions should absolutely be taken into account. However, our intuitions can easily be misleading, so we should check whether they are consistent. For example, humans find it much more intuitive to compare mean levels of wellbeing than total levels, which often results in people rejecting the Repugnant Conclusion, even though it follows from pretty indisputable premises.

Personally, I find expectational total hedonistic utilitarianism being true as intuitive as 1 = 1 being true. So, when asked about my preference about 2 situations in which ETHU is constant, I am always indifferent between them.

I also believe most disagreements about these thought experiments come from different interpretations of the meaning of wellbeing. For example, it is often said that wellbeing does not allow for intrinsically valuing relationships, beauty, and freedom. However, all of these are words we use to describe conscious states, i.e. wellbeing. Another common argument is that people value unconscious objects for their own sake (not just for the sake of the observer). However, all things we call unconscious are actually conscious in expectation, because they have a nonzero chance of being conscious, so they also relate to wellbeing in expectation.

Edit: perhaps we can value QALYs on the principle that we’re unlikely to be able to accurately track all contributors to total ETHU in practice, but having people maintain physical health is probably an important contributor to it in practice.

Great point! I agree there is a positive correlation between QALYs and ETHU, but I guess the correlation between WELLBYs and ETHU is stronger. Anyway, I am not confident about this. I am mainly in favour of a more widespread usage of WELLBYs in order to shift the focus to what actually matters, wellbeing. Even if WELLBYs are not a great measure of it, adopting them would hopefully lead to the adoption of better metrics in the future.

Physical health has positive externalities that go beyond subjective well-being and therefore we should value it in setting healthcare policy.

I think ETHU is all that matters (see this related episode of The 80,000 Hours Podcast), and in that sense there are no positive/negative externalities that go beyond it. I suppose you are alluding to e.g. better health leading to economic growth, which tends to increase wellbeing even if it does not immediately impact it in the short term. However, I am generally quite uncertain about whether economic growth is good or bad. While subjective wellbeing has been increasing with greater consumption (say, since at least the industrial revolution), existential risk has increased too. In other words, improving health does not look to me like a robust way of achieving differential progress.

So maybe the focus should not be on QALYs nor WELLBYs, but on good metrics to achieve differential progress. Maybe ones about rationality? Being more rational means being better at achieving goals. So, to the extent high existential risk is not aligned with our goals, greater rationality will tend to decrease it. I guess this is part of the motivation for 80,000 Hours having epistemics and institutional decision-making as one of its most pressing problems.

In addition, I believe the attention should shift (on the current margin) from gross domestic product to things like total amount of compute, or cost of DNA screening, which are much more informative about the greater x-risks, advanced AI and engineered pandemics.
