This is a special post for quick takes by Ben Yeoh. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I was listening to and chatting with the British philosopher Jonathan Wolff about how to value life. We went through some expected value and cost-benefit theory as applied to philosophy and healthcare spending. 


 

We then came across the challenge of potentially incorporating “society preferences”. For instance, in the UK it seems - from surveys and outreach as well as policy practice - that much more money is spent on pre-term babies and also on rare-disease healthcare than would be suggested by the more “normal” quality-adjusted life year (QALY) calculation for, e.g., diabetes treatment.
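As a rough illustration of the tension, here is a hedged sketch (all figures hypothetical - not from the podcast or from NICE guidance) of how a societal-preference weight could reconcile observed spending with a standard cost-per-QALY calculation:

```python
# Hedged sketch: cost-per-QALY comparison with a hypothetical
# "society preference" weight. All numbers are illustrative only.

def cost_per_qaly(cost_gbp, qalys_gained):
    """Standard cost-effectiveness ratio: pounds spent per QALY gained."""
    return cost_gbp / qalys_gained

# Illustrative interventions (made-up figures).
diabetes = cost_per_qaly(cost_gbp=20_000, qalys_gained=1.0)       # 20,000/QALY
rare_disease = cost_per_qaly(cost_gbp=300_000, qalys_gained=2.0)  # 150,000/QALY

# A hypothetical societal-preference multiplier: if society "values"
# rare-disease QALYs 3x, the effective cost per valued QALY falls,
# mimicking the higher observed spending.
societal_weight = 3.0
adjusted = rare_disease / societal_weight  # 50,000 per society-weighted QALY
```

The open question the post raises is precisely whether (and how) a weight like `societal_weight` should be elicited and used at all.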


 

We didn’t discuss this much further, but it seems challenging to me that people/society do seem to value some types of health/life differently, and to what extent we need to think about that when allocating healthcare or other scarce but beneficial resources.


 

It does seem to me something EA or society has to account for - or not? And then also how? Revealed preference is tricky here.


 

Podcast with philosopher here: https://www.thendobetter.com/arts/2021/9/26/jonathan-wolff-valuing-life-philosophy-disability-models-society-of-equals-musical-performance-podcast


 

I'm organising a meet-up. It's not only EA, but EA and EA-curious people come along. It's more about getting curious-minded thinkers together. Thought I'd leave it here in short form.

 https://www.eventbrite.com/e/mingle-for-the-curious-tickets-407058341457

These meet-ups are a small experiment in having interesting chats across investing, arts, theatre, long-termism, progress, sustainability, life... a way of cross-pollinating ideas across those worlds.

So this post re: disability is mostly not very EA at all - but at the meta level it's very interesting on "social movements" (which is basically what EA is), and so interesting on the learnings from the disability rights movement. Cross-post: https://www.thendobetter.com/arts/2023/6/5/david-ruebain-disability-protest-movements-law-equality-inclusion-interdependence-podcast

 

David Ruebain is one of the most thoughtful thinkers I know on disability, equality and the law. He is currently a Pro-Vice Chancellor at the University of Sussex with strategic responsibility for Culture, Equality and Inclusion, including dignity and respect. He is an adviser to the football Premier League, the former director of legal policy at the Equality and Human Rights Commission, and has been named among the top 25 most influential disabled people in the UK. 

We chat on:

Social change seems to come about in a complex way. But peaceful protests seem to have had influence on some social topics. What is the importance of protest? In particular, thinking about the disability rights movement.

David gives insights into his role in, and view of, the UK disability rights movement. The roles of agency and simplicity of message. The comparison with the climate protest movements. 

David’s work with the UK football premier league and also the equality commission. What types of policies are successful for equality and diversity. What challenges are structural and what that implies for solutions.

The role of interdependence and what that means at the moment. Whether the law can deliver inclusion and what that means.

How ordinary talking about equality seems now vs the 1970s. But how equality by itself will not be enough for humanity. 

“Equality is what we all wanted in the seventies; for those of us who considered ourselves progressive. But now it feels fairly vanilla really as an idea. Equality is simply about level playing fields, with its sort of a zero sum game approach to if two people are in a race, nobody should be unfairly disadvantaged for any relevant consideration, which of course is true. It's sort of almost unarguable. But it isn't especially ambitious. … But if we are really to bring about the change which will ensure the survival of the species and other species, it will need more than equality, I think.” 

We end on David’s current projects and life advice.

“....do what you need to do to believe in yourself because so many of us don't or doubt ourselves. That doesn't mean to say-- I think first of all, that knowing there's nothing profoundly wrong with anyone, including whoever you are. But secondly, knowing that from that perspective you get to learn and evolve; it doesn't mean you say rigid in the position. So there's something the risk of sounding like a not very good therapist. There's something about really believing in yourself…”

Listen below (or wherever you listen to pods), or watch on video (above or on YouTube); the transcript is below.

Climate regulation. Relevance to EA: if climate is a top-10 existential risk, and if the SEC is a form of meta-regulator, then we should be ensuring this data and regulation come into existence, as they would touch much of, or all, the world.

In the sustainability world, the SASB-VF-ISSB met, and the ISSB announced it will be working with GRI. All those acronyms… but essentially it means sustainability standards are progressing, and many of the entrenched arguments - for instance between a “double materiality” viewpoint and an “investor-centric” viewpoint - might be a little closer to some reconciliation.

Most investors pay limited attention to the nuances of those arguments but do pay attention to data - especially “material” data: the data we want/need to make investment-relevant decisions.

This makes the SEC announcement that it will require carbon-emission disclosure very significant. There is hardly a sustainability investor who has not heard, but the recap is:


 

· Board and management oversight and governance of climate-related risks

· How any climate-related risks have had or are likely to have a material impact on its business and financial statements over the short-, medium-, or long-term

· How any identified climate-related risks have affected or are likely to affect strategy, business model, and outlook

· Processes for identifying, assessing, and managing climate-related risks and whether such processes are integrated into the overall risk management system or processes

· The impact of climate-related events

· Scopes 1 and 2 GHG emissions metrics, separately disclosed, expressed both by disaggregated constituent greenhouse gases and in the aggregate, and in absolute and intensity terms

· Scope 3 GHG emissions and intensity, if material, or if there is a GHG emissions reduction target or goal that includes its Scope 3 emissions; and

· Any climate-related targets or goals, or transition plan
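The "disaggregated and aggregate, absolute and intensity" requirement boils down to simple arithmetic. A hedged sketch of what a filer would compute - company figures are hypothetical; the GWP conversion factors are the IPCC AR5 100-year values:

```python
# Sketch of the disclosure arithmetic: per-gas tonnes are disclosed
# separately, aggregated into CO2-equivalents, and expressed in both
# absolute and intensity terms. All company figures are hypothetical.

GWP_100 = {"CO2": 1, "CH4": 28, "N2O": 265}  # IPCC AR5 100-year factors

def co2_equivalent(emissions_tonnes):
    """Convert a dict of per-gas tonnes into aggregate tonnes CO2e."""
    return sum(t * GWP_100[gas] for gas, t in emissions_tonnes.items())

scope1 = {"CO2": 100_000, "CH4": 500, "N2O": 20}  # disaggregated, tonnes
scope2 = {"CO2": 40_000}

absolute_tco2e = co2_equivalent(scope1) + co2_equivalent(scope2)  # 159,300
revenue_musd = 1_250  # hypothetical revenue, $ millions
intensity = absolute_tco2e / revenue_musd  # tCO2e per $m revenue
```

The Scope 3 line item is the same calculation extended up and down the value chain, which is why it is hedged with the "if material" qualifier.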


 

Columnist Matt Levine has several takes on this, but one intriguing idea (which he floats from time to time) is that the SEC is a form of global “meta-regulator”: because US business touches the whole world (and so many “stakeholders” - customers, employees, suppliers etc.) in so many ways, the way you regulate US business will regulate the world.


 

In that sense, by demanding climate data, the SEC is suggesting climate is relevant for US business and thus the world. There is significant pushback on this, probably best summed up from a regulator's view by Hester Peirce, who essentially argues the SEC is not an “Environment Commission”. She argues:


 

“...the proposal will not bring consistency, comparability, and reliability to company climate disclosures.  The proposal, however, will undermine the existing regulatory framework that for many decades has undergirded consistent, comparable, and reliable company disclosures…”


 

If you believe Levine’s view, or even weight it a little, then this disclosure proposal is quite a significant battle. Do feel free to comment your support (or not) here: https://www.sec.gov/rules/submitcomments.htm


 

Press release with links to full report here. 



 

Matt Levine also highlights a somewhat new piece of thinking on the idea of “Universal Ownership” and how this is different (recall certain passive investors may own 3-5% of all American companies through their index-tracking mandates):


 

Several large institutional investors and financial institutions, which collectively have trillions of dollars in assets under management, have formed initiatives and made commitments to achieve a net-zero economy by 2050, with interim targets set for 2030. These initiatives further support the notion that investors currently need and use GHG emissions data to make informed investment decisions. These investors and financial institutions are working to reduce the GHG emissions of companies in their portfolios or of their counterparties and need GHG emissions data to evaluate the progress made regarding their net-zero commitments and to assess any associated potential asset devaluation or loan default risks.

Notice that this is weird. This is not “investors need this information to understand the company providing the information,” but rather “look, investors these days are diversified, and many of them care about the systemic risks to their portfolios, not about how any one company runs its business.” If it’s material to an institutional investor that its portfolio be carbon-neutral, then it needs to know the carbon emissions of each portfolio company, even if those emissions are not actually material to that company.

This strikes me as very new! And basically correct, I mean: Investors are often diversified and systemic these days, so the SEC’s rules might as well reflect how investing actually works. Still it is a novel and surprising concession, asking a company to disclose stuff because it is useful to its shareholders as universal shareholders, not (just) because it is relevant to the company’s own business.
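Levine's "universal owner" point can be made concrete: a diversified investor's financed emissions are an ownership-weighted sum over the whole portfolio, so every company's disclosure matters to it even when emissions are immaterial to any single company. A minimal sketch with made-up holdings:

```python
# Sketch of the "universal owner" arithmetic: financed emissions are
# each company's emissions weighted by the stake owned, summed across
# the portfolio. Holdings and figures below are hypothetical.

def financed_emissions(holdings):
    """Ownership-weighted sum of portfolio companies' emissions (tCO2e)."""
    return sum(h["stake"] * h["emissions_tco2e"] for h in holdings)

portfolio = [
    {"name": "UtilityCo",  "stake": 0.04, "emissions_tco2e": 50_000_000},
    {"name": "SoftwareCo", "stake": 0.05, "emissions_tco2e": 30_000},
    {"name": "AirlineCo",  "stake": 0.03, "emissions_tco2e": 12_000_000},
]

total_tco2e = financed_emissions(portfolio)  # ~2.36m tCO2e attributable
```

Note how the software company's emissions barely register - the point of the universal-owner argument is that the investor still needs the number to know that.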



Open Space, Quadratic Voting, Polis, Citizen Assemblies and learning from Taiwan

 

I believe that four adjacent systems of idea generation and decision making - namely (1) Open Space Technology, (2) Quadratic Voting, (3) Polis and (4) Citizen Assemblies - have the ability to unlock decision-making processes that are currently logjammed. 

We have proven use cases, led by the work of Audrey Tang (Digital Minister) in Taiwan*, and many previous smaller examples. I judge the chances of success to be high (68%) based on the Taiwan experience, but the chances of implementation in other nation states are currently low (2%). There may be higher chances at the city-state level, or for moderately challenging mid-sized organisations or stakeholder populations. 
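Of the four systems, quadratic voting is the easiest to make concrete: each voter pays the square of the votes they cast on any option, so expressing intensity of preference gets progressively more expensive. A minimal sketch (hypothetical ballots and credit budget):

```python
# Minimal quadratic-voting sketch: casting v votes on an option costs
# v**2 credits, so strong preferences are possible but costly.
# Ballots, options and the budget below are all hypothetical.

def qv_tally(ballots, budget=100):
    """Tally QV ballots; reject any voter who exceeds the credit budget."""
    totals = {}
    for voter, votes in ballots.items():
        cost = sum(v * v for v in votes.values())  # quadratic cost rule
        if cost > budget:
            raise ValueError(f"{voter} spent {cost} > {budget} credits")
        for option, v in votes.items():
            totals[option] = totals.get(option, 0) + v
    return totals

ballots = {
    "alice": {"wind_farm": 6, "pension_reform": 3},   # cost 36 + 9 = 45
    "bob":   {"wind_farm": -2, "pension_reform": 9},  # cost 4 + 81 = 85
}
result = qv_tally(ballots)  # wind_farm: 4, pension_reform: 12
```

The quadratic cost is what distinguishes it from one-person-one-vote: alice can register that she cares twice as much about the wind farm, but doing so eats disproportionately into her budget.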

This essay will briefly discuss the background to the challenge (stagnation, lack of institutional decision making, inability to form consensus) and why this barrier is plausibly holding back substantial value (and why it matters for the long-term survival of humanity). 

I will then apply this thinking to moderately hard (and somewhat esoteric) problems: (i) deciding the pension status of >476,000 university pension members and £80bn in assets; (ii) decision making for UK wind farms and planning in general; (iii) deciding which projects IARPA, the UK innovation agency, should fund; and (iv) deciding and allocating which projects government funding should focus on to ensure the thriving and survival of humanity. 

I conclude that, with moderate amounts of resources (time and money), equivalent to or lower than what is already being spent, these processes and techniques could bring significantly better outcomes and include greater proportions of the population in decision making.

 

Putting this out here:

[I am very busy at the moment; I'd be interested in potentially having someone else collaborate on or research this idea for me. If they are keen and have time, I'd consider paying a fee for them to write this up, once they have discussed it with me and if they are interested in pursuing it.]

*Background here would be Rob Wiblin's recent podcast with Audrey and many of Audrey's podcasts/YTs and work.


 
