I lead the DeepMind mechanistic interpretability team.
I struggle to imagine Qf 0.9 being reasonable for anything on TikTok. My model of TikTok is that most viewers idly scroll through their feed, watch your thing for a bit as part of that endless stream, then continue; even those who decide to stop and get interested take long enough to switch out of endless-scrolling mode that they fail to properly engage with large chunks of the video. Is that a correct model, or do you think that, e.g., most of your viewer minutes come from people who stop and engage properly?
Innocent until proven guilty is a fine principle for the legal system, but I do not think it is obviously reasonable to apply it to evaluating content made by strangers on the internet. It is not robust to people quickly and cheaply generating new identities and new content of questionable truth. Further, the whole point of the principle is that it's really bad to unjustly convict people, along with other factors like wanting to be robust to governments persecuting civilians. Incorrectly dismissing a decent post is really not that bad.
Feel free to call discriminating against AI content prejudice if you want, but I think this is a rational and reasonable form of prejudice, and I disagree with the moral analogy you're trying to draw by using that word and example.
I empathise but strongly disagree. AI has lowered the costs of making superficially plausible but bad content. The internet is full of things that are not worth reading and people need to prioritise.
Human-written text has various cues that practiced readers can use to identify bad writing, often quickly: e.g. local incoherence, bad spelling, poor flow. These are obviously not perfect heuristics, but they convey real signal. AI has made it much easier to avoid all these basic heuristics, without making it much easier to produce good content and ideas. Therefore, if AI wrote a text, identifying bad quality is more costly than if a human wrote it: AI text often looks good at first glance but is BS when you look into it deeply.
People are rationally responding to the information environment they find themselves in. If cheap tests are less effective conditional on a text being AI-written, then you should be more willing to judge a text harshly, or to ditch it entirely, once you conclude it was AI-written. Having higher standards of rigour here is just rational.
Your points seem pretty fair to me. In particular, I agree that putting your videos at 0.2 seems pretty unreasonable and out of line with the other channels - I would have guessed that you're sufficiently niche that a lot of your viewers are already interested in AI Safety! TikTok I expect is pretty awful, so 0.1 might be reasonable there.
Agreed with the other comments on why this is doomed. The thing closest to this that I think might make sense is something like: "conditional on the following assumptions/worldview, we estimate that this intervention can have the following effect for an extra million dollars". I think that anything that doesn't acknowledge that there are enormous fundamental cruxes here is pretty doomed, but there might be something productive about clustering the space of worldviews and talking about what makes sense by the lights of each.
My null hypothesis is that any research field is not particularly useful until proven otherwise. I am certainly not claiming that all economics research is high quality, but I've seen some examples that seemed pretty legit to me. For example, RCTs on direct cash transfers seem pretty useful and relevant to EA goals. And I think tools like RCTs are a pretty powerful way to find true insights into complex questions.
I largely haven't come across insights from other social sciences that seem useful for EA interests. I haven't investigated this much, and I would happily be convinced otherwise, but a lot of the stuff I've seen doesn't seem like it is tracking truth. You're the one writing a post trying to convince people that there is useful content here, and I didn't see evidence of this in your post, though I may have missed something. If you have some examples in mind, I would be interested in seeing them.
(I didn't downvote you, and don't endorse people doing that)
This post is too meta, in my opinion. The key reason EA discusses economics a lot more is that, if you want to have true beliefs about how to improve the world, economics can provide a bunch more useful insights than other parts of the social sciences. If you want to critique this, you need to engage with the actual object-level claims: how useful the fields are, how good their scientific standards are, and how much value there actually is. And I didn't feel like your post spent much time arguing for this.
Your poll seems to exclude non-US-citizen US residents, who are the most interesting category imo.