pretraining data safety; responsible AI/ML
Agreed on many points, and I also generally agree on 1:1 talks. I think one of the biggest issues with brainstorming is that some of these group discussions do not really allow time for more in-depth independent thinking first, and are thus counter-productive. I once attended a business school class, and our professor (I think for information systems and operations management) insightfully mentioned something for our group project - there was research (I don't have the citation yet, but I should find it later) showing that the best ideas come from individuals first trying to think up 1-2 ideas independently (allowing time for research and for coming up with concrete ideas), and then perhaps having group discussions going through each of these. This learning still echoes with me a lot.
Thanks for the piece! I was thinking about this potential effect the other day as well, also for literature. I would think repetition could matter too - a single exposure to one documentary may not be helpful, but multiple different ones may be. Additionally, it would probably be more effective if some part of the documentary makes the viewer feel personally connected. But these are conjectures and I am not sure.
This seems to be linked to a classic problem in social science research: finding causal factors. For the most effective strategies, one key is to focus on individual cases - in other words, the key strategies differ for each case, even within the same cause area (for example, across different regions). This does require lots of manual/field research to find nuances, in my own opinion.
Appreciate the post. https://www.pewresearch.org/social-trends/2020/01/09/trends-in-income-and-wealth-inequality/ This in-depth research article suggests the rich are getting richer faster, noting that "Economic inequality, whether measured through the gaps in income or wealth between richer and poorer households, continues to widen." It matches your intuition.
I wonder what could be done to really incentivize powerful/high-income people to care about contributing more.
With a long timeline and less than 10% probability: my hot take is that these are co-dependent - prioritizing only extinction is not feasible. Additionally, does a scenario where only one human survives while all others die count as non-extinction? What about only a group of humans surviving? How should that group be selected? It could dangerously/quickly fall back to fascism. It would likely only benefit the group of people with currently low to no suffering risks, which unfortunately correlates with the wealthiest group. When we are "dimension-reducing" the human race to one single point, we ignore the individuals. This, to me, goes against the intuition of altruism.
I fundamentally disagree with the winner-take-all type of cause prioritization - instead, allocate resources to each area, even though, unfortunately, that may mean multiple battles to fight.
To analyze people's responses, I can see this question being adjusted to account for prior assumptions: 1. How satisfied are you with how we are currently doing in the world? What are the biggest gaps relative to your ideal world? 2. What is your assessment of the timeline, plus the current probability of extinction risk, and due to what?
An example of large-scale deepfakes that is pretty messed up: https://www.pbs.org/newshour/world/in-south-korea-rise-of-explicit-deepfakes-wrecks-womens-lives-and-deepens-gender-divide
Other examples off the top of my head are the fake LinkedIn profiles.
Not sure how to address the question otherwise; one thought is that there might be deepfakes that we cannot yet detect or identify as deepfakes.
I generally agree with some of the specific points, but I would not think they correlate with an anti-fragile culture (unless it deviates from the regular definition, in which case I would recommend a different term); and I disagree with Fire Fast.
On Mission First: I agree, but in practice, an anti-fragile environment is usually not mission first; instead it enables blame more easily, and this distracts people from focusing on the mission.
On Fire Fast: I disagree with fire fast, since it only creates uncertainty and a bad environment where people are again distracted by their own performance rather than putting the mission first. It is very easy for managers to judge performance on a single instance, even though that might be noise rather than true signal. The real solution is to make sure not to hire too fast or recklessly in the first place.
On Work sustainably and avoid burnout: I agree, but in practice, an anti-fragile culture usually fosters status races and increases the risk of burnout. Anti-fragile seems to convey the idea that when facing stress, one should endure it, as opposed to reducing stress factors in the organization in the first place. This does not usually work in the long term.
Usually, to achieve the specific points you mentioned, I see organizations hire based on a common goal/intrinsic motivation, embrace collaboration, allow mistakes, foster honesty with non-violent communication, and encourage a growth mindset. These are the values that usually promote peace and retain talent. It is really about balancing multiple values, where any single value may be limited. We cannot go to an extreme/one-sided version of "anti-fragile".