
Dan Oblinger

2 karma · Joined

Comments (3)

Here I think the EA movement has a very substantial role to play. My hypothesis is thus that the EA message is very fruitful even though it may be philosophically trivial.

 

It seems correct that, given EA's goals, its effectiveness should not be measured philosophically -- instead it should be assessed practically. If EA fails, it will likely be because it becomes meta-discussion (like this one) and fails to make a difference in this world. (This is not intended as a dig against the present discussion.) My sense is that EA sometimes involves interested parties who are not directly involved in DOING the relevant activities in question; thus it is a kind of meta-discussion by its nature. I think this is fine... as an AI guy, I notice that practitioners rarely ask the hardest questions about what they are doing. As a former DARPA guy, I saw the same myopia in the defense sphere. So outsiders may well be the right ingredient to add.

Personally, I would assess EA on the basis of its subjectively or objectively assessed movement of the Overton window for relevant decision makers, e.g. company owners, voters, activists, researchers, etc. The issues EA takes on are really quite large, and it seems hard to directly move that needle. Still, it seems plausible that EA could end up being transformative by changing the very thinking of humanity. And it seems possible that it gets wrapped up in its own sub-communities, whose beliefs end up diverging from those of humanity at large and are thus ignored by humanity at large.

When I look at questions around AGI safety, I think the tiny amounts of human effort and money expended by EA can perhaps be counted as a "win": humanity's thinking is moving in directions that will affect large-scale policy. (On this particular issue, I fall into the "too little, too late" camp.) Still, I have to acknowledge the apparently real impact EA has had in legitimizing this topic in practically impactful ways.


 


Supporting the community with this new competition is quite valuable.  Thanks!

 

Here is an idea for how your impact might be amplified: for every researcher who somehow has full-time funding to do AI safety research, I suspect there are 10 qualified researchers with interest and novel ideas to contribute, but who will likely never be funded full time for AI safety work. Prizes like these can enable this much larger community to participate in a very capital-efficient way.

But such "part-time" contributions are likely to unfold over longer periods, and would ideally involve significant feedback from the full-time community in order to maximize the value of those contributions.

The previous prize required that all submissions be never-before-published work. I understand the reasoning here: they wanted to foster NEW work. Still, this rule throws a wet blanket on any part-timer who might want to gain feedback on ideas over time.

Here is an alternate rule that might have fewer unintended side effects: only the portions of one's work that have never been awarded prize money in the past are eligible for consideration.
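
To make the proposal concrete, here is a minimal sketch of how such an eligibility check might look. This is purely illustrative: the `Section` type and `eligible_portions` function are hypothetical names I'm inventing here, not anything from the actual prize rules.

```python
from dataclasses import dataclass

@dataclass
class Section:
    """One portion of a submission, tagged with its prize history."""
    title: str
    previously_awarded: bool  # True if this portion has won prize money before

def eligible_portions(submission: list[Section]) -> list[Section]:
    """Keep only the portions never awarded prize money in the past."""
    return [s for s in submission if not s.previously_awarded]

# Example: a refined submission where the core idea won a prior prize,
# but the newly added extensions remain eligible.
work = [
    Section("Core idea (won a prior prize)", previously_awarded=True),
    Section("New formal analysis", previously_awarded=False),
    Section("New empirical results", previously_awarded=False),
]
print([s.title for s in eligible_portions(work)])
# -> ['New formal analysis', 'New empirical results']
```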

Such a rule would allow a part-timer to refine an important contribution with extensive feedback from the community over an extended period of time. Biasing toward fewer, higher-quality contributions in a field with so much uncertainty seems a worthy goal. Biasing toward greater numbers of contributors in such a small field also seems valuable from a diversity-of-thinking perspective.