
jacobpfau

PhD Student @ NYU Alignment Research Group
127 karma · Joined
Interests: Forecasting

Comments: 47

The Google form link seems not to work.

I would be particularly interested to know whether 'technical AI academic' meant just professors, or included post-docs/PhDs.

Also, are we to assume that any question not annotated with '1 person*year' meant bringing into existence an entirely new career's worth of work (up to doom/TAI)?

Life Extension cites this study (https://pubmed.ncbi.nlm.nih.gov/24715076/), claiming "The results showed that when the proper dose of zinc is used within 24 hours of first symptoms, the duration of cold miseries is cut by about 50%." I'd be interested to see a dig through the citation chain; the Life Extension page has a number of further links.

QALY/$ for promoting zinc as a common cold intervention

Epistemic status: Fun speculation. I know nothing about public health and grabbed numbers from the first source I could find for every step below. I link to the sources which informed my point estimates.

Here’s my calculation broken down into steps:

  1. Health-related quality-of-life effect for one year of common cold: -0.2

  2. Common cold prevalence in the USA: 1.2 colds per person per year

  3. Modal symptom duration: 7 days, each carrying the -0.2 effect

  4. ~1.5 million QALY burden per year when aggregated across the US population

    1. This is the average of the estimate implied by the above (1e6) and what I get (2e6) when deriving the US slice of the total DALY burden from Global Burden of Disease data showing that 3% of global DALYs come from upper respiratory infections
    2. There’s probably a direct estimate out there somewhere
  5. 50% probability that the right zinc lozenges with proper dosing can prevent >90% of colds. This comes from here, here, and my personal experience of taking zinc lozenges on ~10 occasions.

  6. 15% best-case adoption scenario, from taking a log-space mean of

    1. Mask adoption: 5%
    2. General compliance rates: 10-90%

100,000 QALYs/year is my estimate for the expected value of some all-or-nothing action to promote zinc lozenges (without the possibility of cheaply confirming whether they work) which successfully changes public knowledge and medical advice to promote our best-guess protocol for taking zinc.

$35 million is my estimate for how much we should be willing to spend while remaining competitive with GiveWell's roughly 1 QALY per $71. This assumes a 5-year effect duration. I have no idea how much such a campaign would cost, but I'd guess at most 1 OOM of value is being left on the table here, so I'm a bit less bullish on zinc than I was before calculating.

EDIT: I calculated the cost of supplying the lozenges themselves. Going off these per-lozenge prices, a 5-year USA supply of lozenges costs ~$35 million alone. Presumably this doesn't need to meet the GiveWell spending bar, just the bar for US government healthcare spending.
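The chain of estimates above can be sketched as a short script. The ~330M US population figure is my own assumption (it isn't stated in the comment); the other numbers are the point estimates from the numbered steps:

```python
# Back-of-the-envelope reproduction of the zinc-lozenge QALY estimate.
# US_POPULATION is an assumption; all other values are the point
# estimates stated in the steps above.

US_POPULATION = 330e6     # assumed, not from the comment
QOL_EFFECT = 0.2          # quality-of-life decrement while symptomatic (step 1)
COLDS_PER_YEAR = 1.2      # colds per person per year (step 2)
SYMPTOM_DAYS = 7          # modal symptom duration (step 3)

# Bottom-up US burden; the comment averages a figure like this (1e6)
# with a GBD-derived figure (2e6) to get ~1.5e6 QALYs/yr (step 4).
bottom_up_burden = US_POPULATION * COLDS_PER_YEAR * (SYMPTOM_DAYS / 365) * QOL_EFFECT
us_burden = 1.5e6         # QALYs/yr, the averaged figure the comment uses

P_ZINC_WORKS = 0.5        # probability lozenges prevent >90% of colds (step 5)
COLDS_PREVENTED = 0.9
ADOPTION = 0.15           # best-case adoption (step 6)

expected_qalys_per_year = us_burden * P_ZINC_WORKS * COLDS_PREVENTED * ADOPTION

EFFECT_YEARS = 5
GIVEWELL_DOLLARS_PER_QALY = 71
budget = expected_qalys_per_year * EFFECT_YEARS * GIVEWELL_DOLLARS_PER_QALY

print(f"bottom-up burden:  {bottom_up_burden:.2e} QALYs/yr")
print(f"expected value:    {expected_qalys_per_year:.2e} QALYs/yr")
print(f"GiveWell-par spend: ${budget:.2e}")
```

Multiplying out gives ~101,000 QALYs/yr and ~$36M, matching the rounded 100,000 QALYs/yr and $35 million figures above.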

Disagree. The natural no-Anthropic counterfactual is one in which Amazon invests billions in an alignment-agnostic AI company. On this view, Anthropic is levying a tax on AI interest, where the tax pays for alignment. I'd put this tax at 50% (a rough order-of-magnitude number).

If Anthropic were solely funded by EA money and didn't capture unaligned tech funds, this would be worse. Potentially far worse, since Anthropic's impact would then have to be measured against the best alternative altruistic use of the money.

I suppose you see this Amazon investment as evidence that Anthropic is profit-motivated, or likely to become so. This is possible, but you'd need to explain what further factors outweigh the above. My vague impression is that outside investment rarely accidentally costs existing stakeholders control of privately held companies. Is there evidence on this point?

My deeply concerning impression is that OpenPhil (and the average funder) has timelines 2-3x longer than the median safety researcher. Daniel has his AGI training-requirements parameter set to 3e29, and I believe the 15th-85th percentile range among safety researchers would span 1e31 +/- 2 OOMs. On that view, Tom's default values are out in the tails.

My suspicion is that funders write off this discrepancy, if they notice it, as inside-view bias, i.e. safety researchers self-selecting for scaling optimism. My (admittedly very crude) mental model of an OpenPhil funder makes two further mistakes in this vein: (1) mistakenly taking the Cotra report's biological-anchors weighting as a justified default parameter setting, rather than an arbitrary choice which should be updated given recent evidence; (2) far over-weighting the semi-informative priors report, despite semi-informative priors having abjectly failed to predict Turing-test-level AI progress. Semi-informative priors apply to large-scale engineering efforts, which in the AI domain has meant AGI and the Turing test. Insofar as funders admit that the engineering challenges involved in passing the Turing test have been solved, they should discard semi-informative priors as failing to predict AI progress.

To be clear, I see my empirical claim about disagreement between the funding and safety communities as most important -- independently of my diagnosis of that disagreement. If this empirical claim is true, OpenPhil should investigate the cruxes separating them from safety researchers, and at least allocate some of their budget to the hypothesis that the safety community is correct.

In my opinion, the applications of prediction markets are much more general than these. I have a bunch of AI-safety-inspired markets up on Manifold and Metaculus. I'd say the main purpose of these markets is to direct future research and study; I'd phrase this use of markets as "a sub-field prioritization tool". The hope is that markets would help me integrate information such as (1) a methodology's scalability, e.g. in terms of data, compute, and generalizability, (2) research directions' rates of progress, and (3) diffusion of a given research direction through the rest of academia and into applications.

Here are a few more markets to give a sense of what other AI research-related markets are out there: Google Chatbot, $100M open-source model, retrieval in gpt-4

Seems to me safety timeline estimation should be grounded in a cross-disciplinary research-timeline prior. Such a prior would be determined by identifying a class of research proposals similar to AI alignment in terms of how applied/conceptual/mathematical/funded/etc. they are, and then collecting data on how long they took.

I'm not familiar with meta-science work, but this would probably involve doing something like finding an NSF (or DARPA) grant category where grants were made public historically and then tracking down what became of those lines of research. Grant-based timelines are likely more analogous to individual sub-questions of AI alignment than the field as a whole; e.g. the prospects for a DARPA project might be comparable to the prospects for working out the details of debate. Converting such data into a safety timelines prior would probably involve estimating how correlated progress is on grants within subfields.

Curating such data and constructing such a prior would be useful both for informing the above estimates and for identifying factors of variation which might be intervened on--e.g. how many research teams should be funded to work on the same project in theoretical areas? This timelines-prior problem seems like a good fit for a prize, where entries would look like recent progress-studies reports (c.f. here and here).
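As a very rough illustration of what converting grant outcomes into a timelines prior might look like: fit a distribution to historical completion times and read off a median and interval. The durations below are made-up placeholders (not real grant data), and the lognormal form is just one defensible modeling choice:

```python
import math
import statistics

# Hypothetical completion times (years) for past grants judged similar to an
# alignment sub-problem. Placeholder numbers for illustration only.
durations = [3, 5, 8, 4, 12, 6, 7, 10]

# Fit a lognormal by matching the moments of the log-durations.
logs = [math.log(d) for d in durations]
mu = statistics.mean(logs)
sigma = statistics.stdev(logs)

median_years = math.exp(mu)
# Rough 80% interval (10th-90th lognormal percentiles, z ~= 1.28).
low = math.exp(mu - 1.28 * sigma)
high = math.exp(mu + 1.28 * sigma)

print(f"prior median: {median_years:.1f} years")
print(f"80% interval: {low:.1f} - {high:.1f} years")
```

A real entry would of course need far more data, a defense of the reference class, and some modeling of correlations between grants within a subfield, as discussed above.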

Do you have a sense of which argument(s) were most prevalent, and which were most frequently the interviewees' crux?

It would also be useful to get a sense of which arguments are only common among those with minimal ML/safety engagement. If basic AI safety engagement reduces the appeal of a certain argument, then there's little need for further work on messaging in that area.

A few thoughts on ML/AI safety which may or may not generalize:

You should read successful candidates' SOPs to get a sense of style, level of detail, and content, c.f. 1, 2, 3. Ask current EA PhDs for feedback on your statement. Probably avoid writing a statement focused on an AI safety/EA idea which is not in the ML mainstream, e.g. IDA, mesa-optimization, etc. If you have multiple research ideas, consider writing more than one (i.e. tailored) SOP and submitting the SOP most relevant to the faculty at each university.

Look at groups' pages to get a sense of the qualification distribution among successful applicants; this is a better way to calibrate where to apply than looking at rankings, IMO. It is also a good way to calibrate how much experience you're expected to have pre-PhD. My impression is that in many ML programs it is very difficult to get in directly out of undergrad unless you have an exceptional track record, e.g. top publications or high Putnam scores.

For interviews, bringing up concrete ideas on next steps for a professor's paper is probably very helpful.

My vague impression is that financial security and depression are less relevant than in other fields here, as you can probably find job opportunities partway through if either becomes problematic. Would be interested to hear disagreement.
