Comments

dy

One needs to consider that instead of hiring X people at salary Y, one could have hired 2X people at salary Y/2. My factual claim is that whether many of the "best people" take the deal depends less on the salary than on how helpful they expect the work environment to be for their output, and (more cynically) on how prestigious they perceive the institution to be.

In fact, being supported by an environment of 2X researchers (vs. X) may be a stronger incentive than being paid Y/2 (vs. Y) is a disincentive, provided those 2X people actually are more supportive.

Considering the ratio of a Google researcher's salary to a median PhD student's salary, I believe that basing salaries on need rather than on the tech market* - and allowing or encouraging people or groups to come from, or stay in, cheaper places such as India, the CIS, or even Europe - would result in higher-quality output. This is especially true given that people from these places move to the Bay Area to get these sorts of jobs even when they would have preferred to stay where they are.

 

*Of course, once salary expectations are set, people will get angry if salaries are decreased, so cutting current salaries is a different matter.
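
For concreteness, here is a back-of-envelope sketch in Python of the headcount arithmetic above. All of the figures (the budget, the "tech-market" salary, the needs-based salary) are invented placeholders, not actual salary data.

```python
# Back-of-envelope sketch of the headcount-vs-salary tradeoff discussed above.
# All figures are invented placeholders, not actual salary data.
budget = 2_000_000            # hypothetical annual research budget (USD)
tech_market_salary = 400_000  # hypothetical "Google minus a discount" salary
needs_based_salary = 100_000  # hypothetical needs-based salary in a cheaper location

researchers_at_market_rate = budget // tech_market_salary  # 5 researchers
researchers_at_needs_rate = budget // needs_based_salary   # 20 researchers

print(f"Tech-market salaries: {researchers_at_market_rate} researchers")
print(f"Needs-based salaries: {researchers_at_needs_rate} researchers")
```

Whether the larger group actually produces better output then rests on the claim above: that environment and prestige, rather than salary, are what attract and support the "best people".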

dy

Thanks for your research and the comment(s) there - I heard that number this year.

I also replied to one of the comments there, here.

dy

I think the crux here is that I think AI alignment probably requires really focused attention, and research done by people who are trying to do something else will probably end up not being very helpful for some of the core problems.

Considering the research necessary to "solve alignment for the AIs that will actually be built" as a set of nodes in the directed acyclic graph of scientific and engineering progress, another crux seems to me to be how effectively an org focused specifically on AI alignment can do that research with the input nodes available today:

My intuition there is that progress on fundamental, mathematically hard, or even philosophical questions is likely to come serendipitously from people with academic freedom who happen to have the relevant input nodes in their heads. On the other hand, for an actual huge Manhattan-like engineering project to build AGI, making it safe might be a large sub-project in itself - but only the engineers involved can understand what needs to be done to do so, just as the Wright brothers would not have had much to say about making a modern jet plane safe.
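
To make the DAG metaphor above a bit more concrete, here is a toy sketch (all node names, dependencies, and availability flags are invented purely for illustration): research results are nodes, edges point from prerequisites to the results that build on them, and a question is workable today only if all of its prerequisite nodes are already available.

```python
# Toy illustration of the "research progress as a DAG" metaphor above.
# Node names and dependencies are invented; nothing here reflects real research.
prerequisites = {
    "decision_theory": [],
    "logical_uncertainty": [],
    "scalable_ml_systems": [],
    "agent_foundations_theory": ["decision_theory", "logical_uncertainty"],
    "alignment_for_deployed_ais": ["agent_foundations_theory", "scalable_ml_systems"],
}

# Input nodes assumed to be "in people's heads" today (also invented).
available_today = {"decision_theory", "scalable_ml_systems"}

def workable_now(node: str) -> bool:
    """A node can be worked on today only if every prerequisite is already available."""
    return all(dep in available_today for dep in prerequisites.get(node, []))

for node in prerequisites:
    if node not in available_today:
        status = "workable now" if workable_now(node) else "missing prerequisites"
        print(f"{node}: {status}")
```

On this toy picture, the crux is whether the nodes an alignment-focused org would need are reachable from what is available today, or whether they only become available once the surrounding engineering project exists.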

dy

I am less convinced than the median around here that donating to AI safety research organizations is the most important thing to do, and I'd like to share some of my unfiltered thoughts and ask for feedback.

  • I think that one can't solve a philosophy or math problem by throwing money at it. This works for engineering problems such as the moon landing or nuclear weapons - but if someone funded an institute with $1 billion and gave it the task of proving P != NP, I doubt it would succeed. In the history of mathematics, the ideas necessary for big advances have tended to come serendipitously from individual people who, for some reason, had unique insights into problems they may have encountered while thinking about something else entirely, rather than from big focused research programs.
  • Relatedly, I hear claims to the effect that "only 100 people in the world work on AI safety, and therefore a marginal person is extremely likely to make a difference." I think that number is off. Anyone who does AI research, is somewhat aware of the problem, and is interested in publishing academic-style results (rather than being forced to work on concrete business applications) may come up with ideas relevant to AI alignment.
  • Finally, I have a bad feeling about funding researchers in the most expensive places in the world and calculating their salaries based on what they could earn at Google, minus a discount. PhD students and postdocs - from whom much or most of the important progress in academia comes, as opposed to the work of organizing and systematizing that progress - work for a lot less money, mostly for the prestige (and, possibly, the concomitant future earning opportunities) associated with making discoveries. I think a serious "effective" altruist organisation should take this into account.