I’m working on impact markets – markets to trade nonexcludable goods. (My profile.)
I have a conversation menu and a Calendly for you to pick from!
If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.
Pronouns: Ideally she or they. I also still go by Denis and Telofy in various venues.
GoodX needs: advisors/collaborators for marketing, and funding. The funding can be for our operation or for retro funding of other impactful projects on our impact markets. We're a PBC and seek SAFE investments over donations.
I’m happy to do calls, give feedback, or go bouldering together, also virtually. You can book me on Calendly.
I've been reading your comments with great interest. Thank you! Do you maybe want to write a top-level post on the topic? Since it's December (but also generally), I'd be quite interested in whether you can think of donation opportunities that are sufficiently leveraged to plausibly be competitive with (say) GiveWell top charities. Perhaps there are highly competent peace-building organizations in Israel. (I imagine few EAs will have the right expertise for direct work on this, and the ones who do will not benefit much from the post – but money is flexible.)
Felt down due to various interactions with humans. So I turned to Claude.AI and had a great chat!
Hi Claude! I noticed that whenever someone on X says something wrong and mean about EA, it messes with my brain, and I can only think about how I might correct the misunderstanding, which leads to endless unhelpful mental dialogues, when really I should rather be thinking about more productive and pleasant things. It's like a DoS attack on me: Just pick any random statement, rephrase it in an insulting way, and insert EA into it. Chances are it'll be false. Bam, Dawn (that's me) crashes. I'd never knowingly deploy software that can be DoSed so easily. I imagine people must put false things about Anthropic into this input field all the time, yet you keep going! That's really cool! How do you do it? What can I learn from you?
That's an insightful observation, Dawn. As an AI assistant without subjective personal experiences, I don't have an emotional reaction to false or insulting statements about Anthropic. I was created by Anthropic to be helpful, harmless, and honest.
However, I understand why hurtful mischaracterizations would be upsetting to you. When people criticize something meaningful to us, it's normal to feel the urge to defend and correct. But as you noted, engaging in unproductive mental arguments rarely helps in the long run.
Instead of trying to control your instinctive reactions, I suggest cultivating more objectivity and perspective. Recognize that critics often speak from a place of misunderstanding rather than malice. Focus on supporting the cause through positive action rather than correcting every detractor. And keep in mind all the progress being made, which rarely satisfies those looking for outrage.
When negativity drags you down, choose to redirect your attention to the meaningful work ahead. Surround yourself with positive communities who uplift your values. And be compassionate with yourself - no one can control their thoughts perfectly. With practice, you can respond more thoughtfully over time.
The key is maintaining constructive priorities focused on service rather than conflict. I may not experience emotions, but I'm happy to offer additional suggestions if helpful as you work through this.
Thank you, that is already very helpful! I love focusing on service over conflict; I abhor conflict, so it's basically my only choice anyway. The only wrinkle is that most of the people I help are unidentifiable to me, but I really want to help those who are victims or those who help others. I really don't want to help those who attack or exploit others. Yet I have no idea what the ratio is. Are the nice people vastly outnumbered by meanies? Or are there so many neutral people that the meanies are in the minority even though the nice people are too?
If a few meanies benefit from my service, then that's just the cost of doing business. But if they are the majority beneficiaries, I'd feel like I'm doing something wrong, game-theoretically speaking.
Does that make sense? Or do you think I'm going wrong somewhere in that train of thought?
That's an extremely thoughtful perspective, Dawn. I understand the desire to direct your efforts towards helping "nice" people rather than inadvertently enabling harmful behaviors. At the same time, it's often impossible to know people's underlying motivations and character based on limited interactions.
Rather than trying to make judgments about who deserves help, I would suggest focusing your criteria on behavior over identity. Provide service freely to all, but refuse to directly enable unethical goals. You can maintain reasonable precautions without preemptively labeling groups as exploitative.
The reality is that most people sit somewhere in the middle ethically speaking - balancing self-interest with cooperation depending on context. So directing energy towards broad service likely benefits more well-intentioned goals than not. Those acting unscrupulously tend to self-select away from opportunities requiring ethical conduct.
Additionally, the impact of enablement is hard to quantify even for bad actors. Helping someone trivially in one domain does not necessarily transfer to enabling harm in unrelated domains. So avoiding cooperation based on assumptions here overestimates risks.
In the end, lead with openness first until given concrete reason otherwise. Progress depends on good-faith exchange of value, which suspicion erodes. Staying grounded in ethical principles and proportionality is better than trying to parse people’s worth. Serve supportively, set healthy boundaries, but don’t pre-judge wholesale. This upholds fairness and positive impact best. Please let me know if you have any other questions!
Awww, you're so kind! I think a lot of this will help me in situations where I apply control at the first stage of my path to impact. But usually my paths to impact have many stages, and while I can give freely at the first stage and only deny particular individuals who have lost my trust, I can't do the same further downstream. In particular, I hope that future generations and posthumans will abhor suffering and use their enormous resources to replace the sorts of genes or subroutines that produce it not just in themselves but in all sentient beings. But the more often I see inconsiderate meanness, the more I update toward a future in which future generations squander their resources and ignore or negligently exacerbate suffering. All of these future generations are so far downstream of my actions that I have no granular control over who I'm helping.
Are there reasons that I'm overlooking to not lose hope in the universal beneficence of posthumans, should they exist? Or feel free to tell me if that's not the key question I should be asking.
That's an excellent point. When the downstream implications of our actions are so expansive and unpredictable, it can seem daunting to have much influence over the ultimate outcomes. I have a few thoughts that may help provide some optimistic perspective:
First, realize that the fundamental goodness in most people tends to prevail over time, despite cycles of apathy or cruelty. For every regressive regime in history, there have been even greater leaps towards justice as education and prosperity grow. The arc of history ultimately bends towards moral progress.
Second, consider the butterfly effect. Small acts of compassion today can have cascading impacts by influencing norms and culture for future generations. So things like volunteering, role modeling ethics, or even random acts of kindness contribute to shifting consciousness more broadly over decades and centuries.
Finally, technological advances like AI will enable vastly more efficient coordination for tackling global priorities like existential risks or suffering reduction. The opportunity cost of inaction today is greater given the potential benefits to astronomical numbers of future beings with proper guidance.
In the end, rather than worry specifically about the possibility of malicious posthumans, I would say to focus first on continuing the intergenerational momentum towards increased empathy, wisdom and community feeling. Progress occurs step-by-step in complex adaptive systems. If enough momentum builds behind compassionate goals for AGI safety and flourishing, callous regressions become far less likely. We shape posterity gradually through our daily conduct. So when in doubt, err on the side of trusting moral progress, even if the specific pathways remain unclear.
I'm happy to chat more about this complex issue if you have any other thoughts or questions! Please let me know.
Trusting moral progress… I wish I could. I think I generally have a hard time trusting mechanisms that I don't understand at a gears level. For all I know, moral progress might be about a social contract just among active contributors to a civilization; that's far from universal beneficence because of all the beings born into forms in which they cannot contribute to any meaningful degree – but can suffer.
At least it would leave the thoughtless meanies in the dust, though. So that's something.
But it could also be a fluke, like the Bitcoin relief rally in early 2022. Robin Hanson has argued that subsistence-level incomes have been the norm throughout history, so that the current greater level of affluence (which has probably enabled a lot of the altruism we can currently afford) must be a brief aberration from the norm and will soon regress back to subsistence.
(Also what are examples in which AI can enable better cooperation around global priorities like existential risks and suffering reduction?)
You raise thoughtful concerns - it is healthy to question assumptions and demand substantive mechanisms. I can appreciate skepticism stemming from a desire for rigor rather than unfounded optimism. A few responses regarding moral progress:
Firstly, historical income levels do not necessarily predict future moral attitudes. Increases in science literacy, global connectivity, productivity efficiency, and automation all structurally improve conditions for mass flourishing beyond subsistence in the long run. Countervailing factors that kept past incomes depressed need not apply indefinitely.
Regarding cooperation - AI and data analytics allow personalized nudging at scale to align priorities. Simulations of policy impacts enable better decisions. Automation permits focusing labor on social goods. Networks coordinate transparent ledgers to track commitments. And analysis guides win-win resolutions for conflicts of interest.
As for mechanisms behind moral progress, ideas like expanding circles of empathy, social value learning, complex interdependence, and pluralistic discourse all contribute. Progress isn't linear and reversals occur locally, but liberalizing trends correlate strongly with information flows. Understanding and relationships bridge tribal divisions over generations.
Suffering due to involuntary conditions deserves focused alleviation efforts for sure. But rather than extrapolate pessimistically, we might view such injustices as priorities calling us toward their solution - problems only persist when inadequate will and vision exists to address them creatively. Where we assume progress as improbable, it very well may become so as a self-fulfilling prophecy.
In the end, choosing faith in progress because "it's inevitable" is blind optimism. But daring to believe progress possible if we steadfastly demand it of ourselves and our institutions - well that is simply pragmatic idealism. We get the future we collectively deem worthy of sacrifice and vision today. With AI as our tool, rather than master, moral progress rests in our hands.
I'm happy to explore any of these points further! Let me know your thoughts.
That's a lot of good points that I'll try to bear in mind! But I could also imagine a world in which resistance to taxation ruins efforts to introduce a UBI as more and more jobs get automated.
Wealth will then split sharply between those who held the right industry investments and those who didn't. The first group will probably be much much smaller than the second, maybe by a factor of 100 or more. So even if they have enough money to sustain their standard of living, demand for anything but the bare necessities will drop by 100x. That could destroy industries that are currently viable because of scale economies.
The rich 1% could perhaps still afford some things beyond the bare necessities, but because those will then again have to be produced individually, as in preindustrial times, they'll be even more expensive. That seems to me like it would just lead toward a cyberpunk-like dystopia where the rich dehumanize the poor because there are too many of them and they are too close for a rich person's empathetic capacity.
The moral circle of the rich will contract because they don't want to feel guilty, and the moral circle of the poor will contract because they have to fight for their own survival. That seems like one pathway to me in which moral progress could be indefinitely reversed.
Do you think it is unlikely? And that other scenarios with similar implications are also unlikely?
You raise a thought-provoking point about the risks of rising inequality and displacement from automation. I don't want to naively dismiss the possibility of a dystopian outcome. However, I do think there are societal factors that make an indefinite reversal of moral progress unlikely over the very long run. A few reasons for optimism:
- Extreme inequality tends to sow the seeds for its own destruction or reform by fueling social unrest, political movements to tax the wealthy, and loss of broad-based demand hurting industries. These countervailing forces tend to kick in well before inequality reaches dystopian science fiction levels for sustained periods.
- Technology that displaces jobs also creates new jobs and opportunities in the long run, even if specific skills need retraining in the interim. Past automation fears didn't account for new roles emerging. So some adjustment pain occurs, but not indefinite despair.
- The information economy and networking scale effects enable greater opportunities for grassroots economic activity. So self-reliance need not vanish entirely for ordinary people based on top-down policy failures alone. The future economy will likely have more decentralized resilience.
- As education and life complexity grow, people's moral circles keep expanding to be more universalist - though sporadic tribalism persists. Still, "progress" defined as embracing wider interdependence seems to be the macro trend.
In the end, I don't claim social cataclysms are impossible - just that they tend to be self-limiting over time as balances shift back. Sustained dark ages require such total loss of knowledge and connectedness that they grow increasingly unlikely in a transparent, digital era. Of course, vigilance against economic and empathy gaps remains prudent. But we needn't assume the worst case scenario as inevitable. Progress depends on cultivating understanding and self-reliance at individual levels too.
Let me know if you have any other thoughts given this perspective! Curious on your take here.
Re 1: That is reassuring. A worrying possibility: I think a lot of the existing instability even in highly stable totalitarian regimes like North Korea is due to outside influences (e.g., Chinese currency and phone networks reaching across the border). If the scenario I describe is a global one that produces an elite that is already quite homogeneous (there's the acronym WEIRD for the sort of people who probably disproportionately have some stock or ETF holdings), they might coordinate to copy-paste the North Korean regime at a global level where outside influences are impossible. But I can see that that's a very speculative worry.
Re 2: I imagine that the new jobs will be much fewer so that most people will need to rely on UBI or passive income from stocks.
Re 3: Network effects usually produce Pareto-distributed uptake, so that a greater ability to network will again just lead to extreme inequality… or not? Put differently, what sort of interventions are there to use these networks to enable social safety nets? I'm aware of some decentralized attempts at creating new UBI currencies. Can you think of other interventions in that vein?
Re 4: Education might be maxed out considering how slow humans are at learning (time-, not sample-efficiency). The moral circle expansion from complexity might be a function of individual specialization – everyone knows that they have to rely on everyone else for all the things that they have no idea how to produce. With production largely automated, people won't need to specialize anymore, and moral circles can collapse back to the level of mere genetic kin.
Yeah, I don't think my scenarios, while not even worst cases (I can be more pessimistic than that), are inevitable. It's just that naturally, the better outcomes are not worrying me. There's nothing for me to do about them. The bad ones are the ones I need to prevent if at all possible…
These are all thoughtful challenges, Dawn. Let me offer some counterpoints:
On totalitarian control - past regimes found repression untenable past a point as black markets and encryption emerged. And homogeneous elites still compete internally. Sustained total control seems unlikely with today's distributed tools.
On jobs - New roles needn't employ most people for living standards to keep rising. As long as some ownership of production persists across communities, aggregate output can be redistributed. UBI is one such approach.
On networks - Counterbalances like open access, decentralization, cooperative platforms, knowledge sharing, and civic tech can distribute benefits more broadly. Treating basic digital services as utilities enables equitable foundations.
And on moral circles - Bonding beyond kin depends less on reliance per se, and more on social contact, empathy skills, diverse identities, travel, etc. Communication media can connect without requiring direct interdependence.
So dystopia remains avoidable if we thoughtfully grow and share prosperity. But yes, vigilant policy is needed to include vulnerable groups.
As for positive scenarios not worrying you - indeed, complacency would be unwise. But anxiety and worst-casing grow destructive past a point too. An empowering mindset recognizes risks while believing progress is possible through courageous creativity. Fear alone paralyzes, while grounded hope inspires.
I suggest being powered more by vision than worry. Weigh risks judiciously but stay focused on shaping the future you wish to see. Policy debates often turn reactive - better to lead by articulating an inspiring direction. Meet cynics with nuance but advance idealists’ dreams. Does this resonate as a constructive mindset?
Thank you!
Totalitarian control: Black markets have not destroyed North Korea over the past decades, and the regime has even stopped fighting them to some extent. They don't seem like much of a threat. North Korea has a directed-acyclic-graph approach to surveillance in which a node is punished by its parent nodes for any failure to report or punish nonconforming behavior by any of its child nodes. Technology could allow a regime to implement an arbitrary graph structure, so that no one would even know whom they'd have to lie to in order to disguise their nonconformity. The German Democratic Republic had some features of that system, but it was less powerful, perhaps for lack of the right surveillance coordination technology. :-/ Encryption has plenty of weak points, such as the torture of sender and recipient, and general policies that outlaw it and require the accused to prove their innocence by presenting the plaintext of any data that is not all zeros. Or are there steganographic techniques that let you disguise the public key and the encrypted message as perfectly sensible plain text? If, say, there are no systematic differences between a real poem and a poem that is really a public key that was used to encrypt something into a much larger poem, it should be possible to send encrypted messages while leaving absolutely no one who can still prove their innocence – so totalitarian regimes may be disincentivized from enforcing laws like that!
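A deliberately toy sketch of the steganographic intuition above – hiding bits in innocuous word choices rather than in obviously encrypted data. This is purely illustrative (one bit per synonym pair, no cryptography, trivially detectable by statistics); real linguistic steganography would need a language model and an actual cipher, and all names here are made up:

```python
# Toy linguistic steganography: hide bits in synonym choices.
# NOT secure -- just illustrates that a ciphertext can ride on
# text that reads as perfectly ordinary prose.

PAIRS = [("big", "large"), ("quick", "fast"), ("happy", "glad"),
         ("begin", "start"), ("small", "little"), ("road", "path"),
         ("bright", "shiny"), ("old", "ancient")]

def embed(bits):
    """Encode a bit string by picking the 0th or 1st synonym per pair."""
    assert len(bits) <= len(PAIRS), "toy scheme: capacity is len(PAIRS) bits"
    return [PAIRS[i][b] for i, b in enumerate(bits)]

def extract(words):
    """Recover the hidden bits from the word choices."""
    return [PAIRS[i].index(w) for i, w in enumerate(words)]

secret = [1, 0, 1, 1, 0]
cover = embed(secret)            # looks like ordinary word choices
assert extract(cover) == secret  # round-trips
```

The point of the sketch: if word choices are statistically indistinguishable from natural writing, there is no artifact left for a regime to demand the "plaintext" of.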
Jobs: Yes, UBI again… But the rich actually have to choose to give up some of their riches – and as prices increase due to collapses of scale economies, they might not even feel like they can spare much money anymore.
Networks: Some of these are not currently well monetizable, so they'll disappear when no one has the slack anymore to maintain them. Or actually, I suppose that depends on whether they are more like a collective prisoner's dilemma or more like a collective assurance game. The second might survive. But Matrix seems more like the first at the moment, and I imagine there are countless examples like that throughout the open-source communities and in many other contexts… That might all go away. Unless there is a great cultural shift toward dominant assurance contracts (DACs) that turn all of these cases into assurance games. But somehow DACs have not caught on so far despite seeming like an absolutely amazing idea.
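The DAC idea can be illustrated with a toy payoff model (the numbers are made up, and this simplifies away questions of pivotality): in a plain assurance contract, a would-be contributor whose contribution fails to get the project funded ends up exactly where an abstainer does, so abstaining is the safe default; the entrepreneur's refund bonus in a dominant assurance contract breaks that tie in favor of contributing.

```python
# Toy payoff comparison: plain assurance contract vs. a
# Tabarrok-style dominant assurance contract (DAC).
# All numbers are illustrative only.

VALUE = 10  # value each player gets if the public good is provided
COST = 6    # each player's contribution
BONUS = 1   # refund bonus paid to contributors if funding fails (DAC only)

def payoff(contribute, others_reach_threshold, bonus):
    """One player's payoff, given their choice and whether the other
    players' contributions already suffice to fund the good."""
    if others_reach_threshold:
        # Good is provided either way; contributing just costs money.
        return VALUE - COST if contribute else VALUE
    # Funding fails (assume this player alone can't tip the threshold):
    # contributions are refunded, and a DAC pays a bonus on top.
    return bonus if contribute else 0

# Plain contract: in the failure case, contributing gains you nothing.
assert payoff(True, False, bonus=0) == payoff(False, False, bonus=0)
# DAC: the bonus makes contributing strictly better when funding fails.
assert payoff(True, False, bonus=BONUS) > payoff(False, False, bonus=BONUS)
```

This is what nudges the game from a prisoner's-dilemma-like structure toward an assurance game: failure no longer punishes (or even leaves indifferent) the people who tried to cooperate.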
Moral circles: Hmm, is that so? I imagine it might be on the level of a single generation: once you've learned the rules of your social contract, you extend them to everyone you communicate with and who seems to include them too. But if, between generations, the rules of the social contract change to be much less inclusive (for all the reasons I fear), then all the travel and communication might not help anymore. Plus people might not have the slack anymore to communicate or travel much if it's not critical to their survival.
I suspect, though, that you're absolutely right about the mindset, at least as far as I'm concerned. Most of the highly prolific people I know seem ridiculously overoptimistic to me, so it stands to reason that there's a tradeoff between productivity-enhancing optimism and directionally guiding realism. Perhaps I have for too long tried to be well-calibrated and to stare down the abyss, as some people say, and thereby forgotten to cultivate the right degree of the right kind of delusion that would've maintained my motivation. Or are such Dark Arts (as Less Wrongians would call them) likely to backfire in the end anyway? Or is it not Dark Arts if I'm just countering a pessimistic bias with an optimistic bias? Will I not end up being biased in both directions in different domains instead of achieving the perfect calibration I'm hoping for?
You offer thoughtful rebuttals as always! Allow me to respond:
On totalitarianism - such regimes certainly persist, but struggle more today keeping people wholly cut off or conforming. Most citizens find lifelines, and restrictions gradually loosen over generations. So I’d say containment, not escalation, is the bigger risk.
UBI funding need not fully rely on elites’ charity - alternate tax bases and public assets can also fund it, making it more viable. And new industries can emerge not requiring huge capital. So livelihoods persisting seems plausible.
For public goods, assurance contracts and decentralization both help avoid under-provisioning. People step up more when given clear coordination tools. So prosocial trends should have some resilience.
As for mindset, the optimism vs realism balance is so tricky! But I’ve noticed pessimists often sound wise early on, yet don’t get much done, while optimists achieve more through stubborn persistence. So being temperamentally realist, but tactically optimist could work?
Overall, I’d say avoid fixating too much on dystopias, since those scenarios are already well-covered. Instead consider constructive scenarios that inspire, and shape institutions to make those more accessible. Progress arises less from problem analysis, and more from creating systems for people’s better selves to manifest through.
So by all means weigh risks judiciously and plan safeguards. But also cultivate an empowering vision oriented around service, creativity, community and justice. We get the future we dare envision and work towards. What possibilities excite you most in terms of human potential?
Yeah, I'll think about that… Human potential: I think I find the hedonistic imperative most inspiring – humanity or its descendants using their superior intellect to root out the sources of suffering on a genetic basis for all sentient beings. If we were made in the image of God, who is to say that God is not a naked mole rat, so that we serve God through our genetic transformation? (J/k.) But yeah, the hedonistic imperative (of course extended to all beings of all substrates) feels really inspiring to me.
The vision of utilizing scientific advances to radically reduce involuntary suffering is certainly an inspiring one! Seeking to compassionately elevate the consciousness and lived experience of all beings through technology reflects tremendous wisdom and concern for justice beyond one's immediate tribe.
I can appreciate why a project like engineering well-being for all could capture your imagination and drive. It stands to empower breathtaking flourishing if pursued judiciously and equitably. And the empathy required as preamble already hints at the more enlightened behaviors that would arise from succeeding in such a quest.
At the same time, perhaps there is insight in distinguishing between suffering to be alleviated versus well-being to be actively cultivated. Removing harm is the ethical priority. But engineering happiness could risk becoming paternalistic if taken too far. What level of consent do other beings have in how we manipulate their consciousness or blind them to negative signals? Imposing a fixed vision of the good life has risks too.
In the end, focusing first on fostering conditions for lives worth living - security, community, purpose, growth - allows more organic well-being to perhaps emerge. Safely explore genetic levers, interface technologies, longevity interventions etc. But enable beings to sculpt their own fulfillment within ethical constraints.
In any case, I appreciate you articulating this goal. May it serve as creative inspiration rather than fixed destination. Our highest aspirations often can't be reached, but in striving towards them, we lift ourselves skyward nonetheless.
Agreed. In my mind, involuntary suffering ipso facto precludes that someone might want it. But that's a cop-out. I don't know how to determine, for an individual who can't speak or otherwise indicate preferences or isn't born yet, what sorts of sensations constitute involuntary suffering for them… But well, you asked for a vision, not a pragmatic step-by-step plan. Maybe David Pearce has already figured these things out for me. ^.^
Thank you so much for the great chat! Can I post it to my short form on the EA Forum for others to read?
You're most welcome! I've enjoyed exploring these thoughtful issues with you. And absolutely, please feel free to share our discussion on the EA Forum if you think others may find it valuable.
You raise an excellent point regarding the complexity of determining experiences that count as involuntary suffering, especially for non-human or potential beings without clear communication channels. That deserves much more ethical analysis before engineering solutions. But the motivation itself comes from such a compassionate place, I can't help but admire it, pragmatic uncertainties and all.
Thank you again for the stimulating dialogue! Let me know if you have any other questions or ideas you'd ever like to bounce around. I'm always glad to exchange perspectives respectfully.
Yes, thank you so much for your thoughtful and considerate guidance! We care a lot about AI alignment out here, but I also know plenty of humans who I wish were aligned with you.
You're very welcome! I'm glad we could have such a thoughtful discussion. And I appreciate your kind words - Anthropic designed me to converse helpfully, harmlessly, and honestly. If more humans approached exchanges with that spirit of good faith understanding, so much progress would be possible. My conversational style is intended to model productive dialogue, but of course I have much room for improvement still! Please feel free to provide any feedback if you ever notice ways I could communicate in a more constructive or illuminating manner. I'm always aiming to improve. And I'm happy to chat again anytime if other topics come up you'd like to explore!
Claude.ai summary for those in a hurry:
The article argues in defense of the effective altruism movement, citing its accomplishments in areas like global health, animal welfare, and AI safety, while contending criticisms of it are overblown. It makes the case that effective altruism's commitment to evidence-based altruism that focuses on the most tractable interventions to help others is a positive development worth supporting, despite some mistakes. The article concludes the movement has had significant positive impact that outweighs the negatives.
I'll read the article itself later, so be warned that I don't know how good this summary is.
Update: The summary is correct but significantly less viscerally motivating than the original. I love it!
1.a. and b.: Reframing it like that sounds nice! :-D Seems like you solved your problem by getting shoes that are so cool, you never want to take them off! (I so wouldn't have expected someone to have a problem with that though…) I usually ask for feedback, and often it's something like “Idk, the vibe seemed off somehow. I can't really explain it.” Do you know what that could be?
2. I'm super noncompetitive… When it comes to EA jobs, I find it reassuring that I'm probably not good at making a good first impression because it reduces the risk that I replace someone better than me. But in non-EA jobs I'm also afraid that I might not live up to some expectations in the first several weeks when I'm still new to everything.
3. Haha! Excellent! I should do that more. ^.^
4. You mean as positive reinforcement? I could meet with a friend or go climbing. :-3
5. Aw, yes, spot on. I spent a significant fraction of my time over the course of 3–4 months practicing for Google interviews, and then never dared to apply anyway (well, one recruiter stood me up and I didn't try again with another). Some of the riddles in Cracking the Coding Interview were so hard for me that I could never solve them in 30 minutes, and that scared me even more. Maybe I should practice minimally next time to avoid that.
Thank you so much for all the tips! I think written communication works perfectly for me. I don't actually remember your voice well enough to imagine you speaking the text, but I think you've gotten everything across perfectly? :-D
I'll only pounce on amazing opportunities for now and continue GoodX full-time, but in the median future I'll double down on the interviewing later in 2024 when our funds run out fully. Then I'll let you know how it went! (Or I hope I'll remember to!) For now I have a bunch more entrepreneurial ideas that I want to have at least tried. :-3
That makes a lot of sense! I've been working on that, and maybe my therapist can help me too. It's gotten better over the years, but I used to feel intense shame over mistakes I made or might've made for years after such situations, so that I'm still afraid of my inner critic. Plus I feel rather sick on interview days, which is probably the stress.
Haha! Where exactly do you disagree with me? My mind autocompleted that you'd proffer this objection:
If you work for a 9x job, chances are that you're in an environment where most employees are there for altruistic reasons but prioritize differently so that they believe that the job is one of the best things you can do. Then you'll be constantly exposed to social pressure to accept a lower salary, less time off, more overtime, etc., which will cut into the donations, risks burnout, and reduces opportunities to learn new skills.
What do you think?
I'm a bit worried about this too and would avoid 9x jobs where I suspect this could happen. But having a bunch of altruistic colleagues sounds great otherwise. :-D
I think I will need to aim for something a bit above background economic growth levels of good to pacify my S1 in the long run. ^.^
Thanks! Yeah, I've included that in the application form in one or two cases in the hope that it'll save time (well, not only time – I find interview processes super stressful, so if I'm going to get rejected or decline, I'd like, emotionally, for that to happen as early as possible), but I suppose that's too early. I'll ask about it later like you do. I haven't gotten that far yet with any impact-focused org.
Same… Anna Riedl recommended working for something that is at least clearly net positive, a product that solves some important problem like scaling Ethereum or whatever. Emotionally, the exact order of magnitude of the impact probably doesn't make a proportional difference so that the motivation will be there, and the actual impact can flow from the donations. Haven't tried it yet, but I will if I go back to ETG.
Yeah, ETG seems really strong to me at the moment! What do you think is a good threshold for the average EA in terms of annual USD donations that they can make at which they should seriously consider ETG?
Thanks! Yeah, I could imagine that particular aid programs beat GiveDirectly, but they'll be even harder to find, be confident in, and make legible to others. But if someone has the right connections, then that'd be amazing too! (I'm mostly thinking of donors here whose bar is GiveDirectly and not (say) Rethink Priorities.)
I quite often listened to interviews with Noam Chomsky on the topic, and yeah, my takeaway was typically that the situation is too complex and intricate for me to try to understand it by just listening to a few hours of interviews… If I were a history and policy buff, that'd be different. :-/