Indra Gesink 🔸

Co-founder and Organiser, Local Politics @ Effective Altruism Tilburg, Party for the Animals
114 karma · Joined · Working (0-5 years) · Breda, Netherlands

Bio


BSc Econometrics and Operations Research (focus: econometrics)

MSc Systems Biology (focus: evolutionary game theory for adaptive cancer treatment)

MSc Econometrics and Operations Research (focus: operations research, with a mathematical economics thesis on social influence and change)

Premaster Executive Master in Actuarial Science

Premaster and master Amsterdam Master in Actuarial Science

 

Teaser essay bundle "Ideas to Secure Our Future": 

Comments (26)

Producing reasoning transparency would, I think, yield echo-chamber-reduction effects, and it would also inform the powerful person practicing it how to weigh (for themselves, for starters) the pros and cons of transferring power. Moreover, without it, I don't see the use of reason and evidence to do the most good being practiced, nor, with that, a license to be a powerful EA, as opposed to simply powerful. And if EA membership were instead just about trying to do the most good, that would include all of humanity minus some deviants.

I appreciate your point that people who donate are under no obligation. As such, an advisory (instead of instructing) role toward them seems fitting. On the other hand, the intellectual EA community should also have the freedom to not take on certain money, to not take on certain money coupled to certain actions, or to disassociate from people, e.g. when doing otherwise would put the community's (intellectual) integrity, e.g. its reasoning transparency, at risk. (And even chosen intransparency is something one can be transparent about at a higher level.) Given that much of EA charity work is research-based, an analogy to the scientific community, where such integrity risks are also paramount, seems quite fitting.

All in all, there should, I think, be some balance in the democratic power at both ends, including on the burden of proof, instead of this being fully one-sided. Take FTX, perhaps, as another (historical) example. And ideally both sides practice reasoning transparency and get better at being informed by reason and evidence to do good better. Potentially this identifies (and resolves?) some (but not all) cruxes, and fleshes out new ones, while also responding to some of your encouragements to move the conversation (or reasoning transparency) forward?

Interesting, thanks, will try to find more info!

"OpenPhil’s Worldview Investigations team" refers I think to Rethink Priorities', or another one at Open Philanthrophy? Thanks!

Hi Joe,

I much appreciated your post on deep atheism, and will still finish it. I also found this post above, and I thought I could contribute to the understanding at some points (as a Thesean myself).

  • You seem to move to discussing content without distinguishing it from consciousness (as in Dennett's work), and perhaps even to conflate the two concepts. By consciousness I would refer to the mere platform, the capability that enables content to be featured. The self/I, Dennett (r.i.p.) would conceive of as a "center of narrative gravity", in line with your reflections.
  • Pragmatically, there are tradeoffs in rightly or wrongly having, or not having, confidence in your first-person perspectives. Most commonly, a belief in the reality of (the content of) consciousness co-occurs with the belief that one simply cannot be wrong about certain aspects of it, as opposed to these being "mere illusions" that do not necessarily have realism. This is, I think, particularly relevant in, among many other highly relevant ethical applications, interpersonal power dynamics, where assertions of the necessary realism of some of (the content in) consciousness can only be effectively countered with a retaliatory healthy skepticism on this front. I also agree that how this should look exactly is not yet clear, as is even the case for a very senior scientist like Anil Seth, according to his reflections prompted by Daniel Dennett's passing.

Best,

Indra

"many causes we choose to support tend to be the result of" should I think be "many choices to support a cause tend to be the result of" as what follows should I think refer to the choice as opposed to the cause or charity.

“So holding lifespans fixed, a greater capacity for synchronic welfare does entail a greater capacity for diachronic welfare.” I’m missing here a discussion of adaptation: e.g. I might really like my first donut, but with more donuts my welfare capacity from another donut rapidly declines. The rate of this decline might differ across species. As such, momentary peaks might be higher in one species, while a lower rate of decline and less variance in another species yields larger diachronic welfare, despite lower synchronic welfare, at times, or sustainably.
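A minimal numerical sketch of that point, with made-up parameters (the peak values and adaptation rates below are illustrative assumptions, not figures from the post):

```python
# Hypothetical illustration: momentary welfare from repeated identical stimuli
# (e.g. donuts), with geometric adaptation. peak = synchronic welfare capacity,
# decay = adaptation rate per repetition; both numbers are made up.

def welfare_stream(peak, decay, periods):
    # Momentary welfare in period t declines geometrically with adaptation.
    return [peak * (1 - decay) ** t for t in range(periods)]

species_a = welfare_stream(peak=10.0, decay=0.50, periods=20)  # high peak, fast adaptation
species_b = welfare_stream(peak=4.0, decay=0.05, periods=20)   # lower peak, slow adaptation

print(max(species_a), max(species_b))  # synchronic peaks: 10.0 vs 4.0
print(sum(species_a), sum(species_b))  # diachronic totals: ~20.0 vs ~51.3
```

Under these assumed numbers, the species with the higher synchronic peak ends up with the lower diachronic total, which is the possibility the quoted sentence seems to rule out.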

Thanks for this post and for drawing attention to the topic. I specialized in OR within my master's degree, with much enjoyment. I would very much enjoy a seminar series with EA-aligned or -adjacent OR talks and/or OR research projects. Happy to connect on these topics!

In addition, we might also want to use, and take into account, our abilities to look ahead. Suppose, for example, a worthwhile task that requires two people to engage in it. The first person to engage gains zero marginal returns, while the latter gets everything (all of the returns as marginal returns). The first person might, however, predict the second person's behavior and, based on the resulting expectation, engage with the task anyway. By contrast, chimpanzees are not able to do this; you would never see two of them cooperate to, e.g., carry a log together (research by Joseph Henrich).
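A minimal sketch of that look-ahead reasoning, with made-up numbers (the task value, effort cost, and the probability that the second person joins are illustrative assumptions):

```python
# Hypothetical two-person task: it only pays off if both engage.
# The first mover's marginal return at the moment of acting is zero;
# the case for starting rests entirely on predicting that the other will join.

TASK_VALUE = 10.0   # total return if both engage (made-up number)
EFFORT_COST = 2.0   # each person's cost of engaging (made-up number)

def first_mover_expected_value(p_second_joins):
    # Expected value of engaging first, given a prediction about the other person.
    return p_second_joins * (TASK_VALUE / 2) - EFFORT_COST

print(first_mover_expected_value(0.9))  # 2.5  -> worth starting if cooperation is expected
print(first_mover_expected_value(0.1))  # -1.5 -> not worth it without that expectation
```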
