It seems I'm getting the knack of it now...
So your argument here is that if we are going to go this route, then interpretability techniques should be used as much to ensure the safety of these agentic AI systems in the future as they are currently being used to improve their "planning capabilities".
I understand the reservation about donations from AI companies because of the conflict of interest, but I still think the larger driver of this intervention area (the AI cause area) should largely be this company... Who else has the funds that could drive it? Who else has the ideological initiative necessary for change in this area?
While it may be counterintuitive to have them on board, they are still the best bet for now.
Hi Tosin,
We are currently reviewing applications and you will get a response in due course. We apologize for any inconvenience.