Jason

17864 karma · Joined · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I had occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . . 

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Comments
2108

Topic contributions
2

40% agree

Giving meaningful advance notice of a post that is critical of an EA person or organization should be

I think it's a good default rule, but there are circumstances in which that presumption is rebutted.

My vote is also influenced by my inability to define "criticism" with much precision -- the resulting ambiguity and possible overinclusion push my vote toward the midpoint.

Done! The wording was trickier than I expected, but I decided it was better to post than not.

However, I think the cost of this position is non-negligible. Given the power-law distribution of impact among people, and given the many rounds of vetting that employees at EA organizations allegedly undergo, a democratic vote would probably yield a much less discerning choice (as most people wouldn't spend more than 30 minutes picking a candidate). I'm not sure to what extent the wisdom of the crowd might apply here.

 

Important characteristics of the ambassador include that the community trusts this person and that this person is aligned with the community's interests and concerns. A community vote is ~authoritative on the first question and strongly probative on the second. If someone independent of the community picked the ambassador, in a real sense they wouldn't be the community's ambassador.

You could also do a two-step selection process here; the community selects a committee (and perhaps does approval voting for candidates), and the committee selects the ambassador after more thought. That would allow the more detailed evaluation for finalists while maintaining at least indirect community selection.

I think that depends a lot on the specifics of the organization in question. For example, I think defining the electorate is a hard problem if the organization is devoted to spending lots of donor money. In that scenario, there are good reasons for people to seek a vote for reasons other than membership in the community.

But beyond that, most institutions in civil society do not impose demanding entry requirements. The US Chess Federation grants membership to anyone who pays a fee (and hasn't been banned for misconduct), without any concerns that the checkers crowd will stage a hostile takeover. To join a church with congregationalist governance (where the risk of hostile takeover is greater), you might need to attend a few classes, sign a statement agreeing with some core principles, and attend an interview with a group leader. 

It's not clear to me why the techniques that work for the rest of civil society would fail for EA. Most candidates would pass on Forum karma, EAG/EAGx attendance, or other easily verifiable criteria.

This is more a copyright law question than a First Amendment one, at least under current law. E.g., https://www.trails.umd.edu/news/ai-imitating-artist-style-drives-call-to-rethink-copyright-law.

Whether the 1A requires this outcome is, I believe, unclear at present. Of course, there's a lot of activity protected by the 1A that is nevertheless horrible to do.

So I think we may have a crux -- are "independent experiences" necessary for work to be transformative enough to make the use of existing art OK? If so, do the experiences of the human user(s) of AI count?

Here, I suspect Toby contributed to the Bulby image in a meaningful way; this is not something the AI would have generated itself or on bland, generic instructions. To be sure, the AI did more to produce this masterpiece than a camera does to produce a photograph -- but did Toby do significantly less than the minimum we would expect from a human photographer to classify the output as human art? (I don't mean to imply we should treat Bulby as human art, only as art with a human element.)

That people can prompt an AI to generate art in a way that crosses the line into so-called "stylistic forgeries" doesn't strike me as a good reason to condemn all AI art output. It doesn't undermine the idea that an artist whose work is only a tiny, indirect influence on another artist's work has not suffered a cognizable injury, because that kind of influence is inherent in how culture is transmitted and developed. Rather, I think the better argument there is that too much copying from a particular source makes the output insufficiently transformative.

Also, we'd need to consider the environmental costs of creating Bulby by non-AI means. Even assuming they are lower than AI generation now, I could see the argument flipping into a pro-AI art argument with sufficient technological advancement. 

How different, for ethical purposes, is the process by which AIs "learn" to draw from how humans learn? It seems to me that we consciously or unconsciously "scrape" art (and writing) we encounter to develop our own artistic (or writing) skills. The scraping student then competes with other artists. In other words, there's an element of human-to-human appropriation that we have previously found unremarkable as long as it doesn't come too close to outright copying. Moreover, this process strikes me as an important mechanism by which culture is transmitted and developed.

Of course, one could try to identify problematic ways in which AI learning from images it encounters differs from the traditional way humans learn. But for me, I think there needs to be that something more, not just the use in training alone.

Most art is, I think, "decoration" -- and that way of characterizing most art is a double-edged sword for your argument, to me. It reduces the cost of abstaining from AI art, but it also makes me think protecting human art is less important.

That's what I did for my recent critical review of one of Social Change Lab's reports.

One of the challenges here is defining what "criticism" is for purposes of the proposed expectation. Although the definition can be somewhat murky at the margin, I think the intent here is to address posts that are more fairly characterized as critical of people or organizations, not those that merely disagree with intellectual work product like an academic article or report. 

For what it's worth, I think your review was solidly on the "not a criticism of a person or organization" side of the ledger.

Second: A big reason to reach out to people is to resolve misunderstandings. But it's even better to resolve misunderstandings in public, after publishing the criticism. Readers may have the same misunderstandings, and writing a public back-and-forth is better for readers.

That's consistent with reaching out, I think. My recollection is that people who advocate for the practice have generally affirmed that advance notification is sufficient; the critic need not agree to engage in any pre-publication discourse.
