
Remmelt

Research Coordinator @ "Do Not Build Uncontrollable AI" area for AI Safety Camp
970 karma · Joined Feb 2017 · Working (6-15 years)

Bio

See explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable


Note: I am no longer part of EA because of the community’s/philosophy’s overreaches. I still post here about AI safety. 

Sequences
3

Bias in Evaluating AGI X-Risks
Developments toward Uncontrollable AI
Why Not Try Build Safe AGI?

Comments
216

Topic contributions
3

Thank you for the incisive questions.

What is the current funding status of AISC? 

We received $57k through Manifund plus a $5k donation from a private donor.

Which funding bodies have you asked for funding from and do you know why they are not funding this (assuming they chose not to fund this)?

  • For LTFF and SFF, Oliver Habryka was our main evaluator. See his comment here.
  • For OpenPhil, see my comment here.
  • For Nonlinear, that's a network of donors who I guess mostly don't have that much money to spend. But I don't know which donors there, if any, tried evaluating AISC, or what their reasons were for not funding us.

My understanding is you only just managed to get enough funding to run a budget version of AISC 10, so I presume that means you'll be looking for funding for AISC 11.

Yes, this is correct. Even then, it is stretching it, because we haven't received any income for running the just-finished 150-participant edition (AISC 9). Backpay would be reasonable – to maintain our personal runways.

Good question!

I haven't written up a separate post on UCF and how it compares to other charity interventions.  I'd consider it, but I am already stretching myself with other work. 

I spent time digging into Uganda Community Farm’s plans last year, and ended up becoming a regular donor. From reading the write-ups and later asking Anthony about the sorghum training and grain-processing plant projects, I understood Anthony to be thoughtful and strategic about actually helping relieve poverty in the Kamuli & Buyende region.

Here are short explainers worth reading:

UCF focusses on training farmers and giving them the materials and tools needed to build up their own incomes, which is a much more targeted approach than just transferring money (though one also needs to account for differences in local income levels).

Personally, I think the EA community has often focussed on measuring and mapping out the consequences of global poverty interventions from afar, and not as much on enabling charity entrepreneurs on the ground who have first-hand contextual knowledge of what's holding their community back. My sense is that robust approaches will tend to consider both.

Is there an argument that it is impossible?

There is actually an impossibility argument. Even if you could robustly specify goals in AGI, there is another convergent phenomenon that would cause misaligned effects and eventually remove the goal structures.

You can find an intuitive summary here: https://www.lesswrong.com/posts/jFkEhqpsCRbKgLZrd/what-if-alignment-is-not-enough

Thanks! Also a good example of the many complaints now being prepared by individuals.

Actually, looks like there is a thirteenth lawsuit that was filed outside the US.

A class-action privacy lawsuit filed in Israel back in April 2023.

Wondering if this is still ongoing: https://www.einpresswire.com/article/630376275/first-class-action-lawsuit-against-openai-the-district-court-in-israel-approved-suing-openai-in-a-class-action-lawsuit

I agree that this implies those people are more inclined to spend the time to consider options. At the least, they like listening to other people give interesting opinions about the topic.

But we’re all just humans, interacting socially in a community. I think it’s good to stay humble about that.

If we’re not, then we make ourselves unable to identify and deal with any information cascades, peer proof, and/or peer group pressures that tend to form in communities.

Three reasons come to mind why OpenPhil has not funded us.

  1. Their grant programs don't match, and we have therefore not applied to them. They have funds for individuals making early career decisions, for university-based courses, for programs that selectively support "highly talented" young people, or for "high quality nuanced" communication. We don't fit any of those categories.
    1. We did send in a brief application in early 2023 though, for a regrant covering our funds from FTX, which was not granted (the same happened to at least one other field-building org I'm aware of).
  2. AISC wasn't contacted for bespoke grants – given OpenPhil's field-building focuses shown above, and their focus on technical research, academic programs, and governance organisations for the rest.
    1. Also, even if we engage with OpenPhil staff, I heard that another AIS field-building organisation had to make concessions and pick research focusses that OpenPhil staff like, in order to ensure they get funding from OpenPhil. Linda and I are not prepared to do that.
  3. I did not improve things by critiquing OpenPhil online for supporting AGI labs. I personally stand by the content of the critiques, but they were also quite in-your-face, and I can imagine they did not like that.
    1. Whatever I critique about collaborations between longtermist orgs and AGI labs can be associated back to AI Safety Camp, or the area I run at AI Safety Camp. I want to be more mindful of how I word my critiques in the future.

Does that raise any new questions?

They're not quite doing a brand partnership. 

But 80k has featured various safety researchers working at AGI labs over the years. E.g. see OpenAI.

So it's more like 80k has created free promotional content and given their stamp of approval to working at AGI labs (of course, only 'if you weigh up your options and think it through rationally', like your friends).

Do you mean OP, as in Open Philanthropy?
