Quick takes

The Belgian Senate votes to add animal welfare to the constitution. It's been a journey. I work for GAIA, a Belgian animal advocacy group that has tried for years to get animal welfare added to the constitution. Today we were present as a supermajority of the Senate came out in favor of our proposed constitutional amendment. The relevant section reads: It's a very good day for Belgian animals, but I do want to note that: 1. This does not mean an effective shutdown of the meat industry, merely that all future pro-animal-welfare laws and lawsuits will have an easier time. And, 2. It still needs to pass the Chamber of Representatives. If there's interest, I will make a full post about it once it passes the Chamber. EDIT: Translated the linked article on our site into English.
As someone predisposed to like modeling, the key takeaway I got from Justin Sandefur's Asterisk essay PEPFAR and the Costs of Cost-Benefit Analysis was this corrective reminder (emphasis mine, focusing on what changed my mind): More detail: Tangentially, I suspect this sort of attitude (Iraq invasion notwithstanding) would naturally arise out of a definite-optimism mindset (that essay by Dan Wang is incidentally a great read; his follow-up is more comprehensive and clearly argued, but I prefer the original for inspiration). It seems to me that Justin has this mindset as well, cf. his analogy to climate change, comparing economists' carbon taxes and cap-and-trade schemes with progressive activists pushing for green-tech investment to bend the cost curve. He concludes: Aside from his climate change example above, I'd be curious to know in what other domains economists are making analytical mistakes with cost-benefit modeling, since I'm probably predisposed to making the same kinds of mistakes.
I find the Biden chip export controls a step in the right direction, and they also made me update my world model toward compute governance being an impactful lever. However, I am concerned that our goals aren't aligned with theirs: US policymakers' incentive right now is to curb China's tech growth, plus trade-war motives, not to pause AI. This optimization for different incentives will probably create a split between US policymakers and AI safety folks as time goes on. It also makes China more likely to treat this as a tech race, which sets up competitive race dynamics between the US and China that I don't see talked about enough.
After talking and working for some time with non-EA organisations in the AI policy space, I believe that we need to give more credence to the here-and-now of AI safety policy as well, to get the attention of policymakers and get our foot in the door. That also gives us space to collaborate with other think tanks and organisations outside of the x-risk space that are proactive and committed to AI policy. Right now, a lot of those people also see x-risks as being fringe and radical (and these are people who are supposed to be on our side). Governments tend to move slowly, with due process, and in small increments (think: "We are going to first maybe do some risk monitoring, and only then auditing"). Policymakers are only visionaries with horizons until the end of their terms (hmm, no surprise). Usually, broad strokes in policy require precedents of a similar size to be feasible within a policymaker's agenda and the Overton window. Every group that comes to a policy meeting thinks that their agenda item is the most pressing because, by definition, most of the time, contacting and getting meetings with policymakers means that you are proactive and have done your homework. I want to see more EAs respond to Public Voice Opportunities, for instance, something I rarely hear about on the EA Forum or via EA channels/material.
Radar speed signs currently seem like one of the more cost-effective traffic-calming measures, since they don't require roadwork, but they still cost a surprising several thousand dollars each. Mass-producing cheaper radar speed signs seems like a tractable public health initiative.
The OECD are currently hiring for a few potentially high-impact roles in the tax policy space:

The Centre for Tax Policy and Administration (CTPA)
* Executive Assistant to the Director and Office Manager (closes 6th October)
* Senior Programme Officer (closes 28th September)
* Head of Division - Tax Administration and VAT (closes 5th October)
* Head of Division - Tax Policy and Statistics (closes 5th October)
* Head of Division - Cross-Border and International Tax (closes 5th October)
* Team Leader - Tax Inspectors Without Borders (closes 28th September)

I know less about the impact of these other areas, but these look good:

Trade and Agriculture Directorate (TAD)
* Head of Section, Codes and Schemes - Trade and Agriculture Directorate (closes 25th September)
* Programme Co-ordinator (closes 25th September)

International Energy Agency (IEA)
* Clean Energy Technology Analysts (closes 24th September)
* Modeller and Analyst – Clean Shipping & Aviation (closes 24th September)
* Analyst & Modeller – Clean Energy Technology Trade (closes 24th September)
* Data Analyst - Temporary (closes 28th September)

Financial Action Task Force
* Policy Analyst(s), Anti-Money Laundering & Combatting Terrorist Financing
There is a natural alliance that I haven't seen happen, but both sides are in my network: pandemic preparedness and covid-caution. Both want clean indoor air. The latter group of citizens is a very mixed one, with both very reasonable people and unreasonable 'doomers'. Some people have good reason to remain cautious around COVID: immunocompromised people and their households, or people with a chronic illness, especially my network of people with Long Covid, who frequently (~20%) worsen from a new COVID case. But these concerned citizens want clean air, and are willing to take action to make that happen. Given that the riskiest pathogens tend to also be airborne, like SARS-COV-2, this would be a big win for pandemic preparedness. Specifically, I believe both communities are aware of the policy objectives below and are already motivated to achieve them:

1) Air quality standards (CO2, PM2.5) in public spaces. Schools are especially promising from both perspectives, given that parents are motivated to protect their children and children are the biggest spreaders of airborne diseases. Belgium has already adopted regulations (although very weak, it's a good start), showing that this is a tractable policy goal. Ideally, air quality standards would also incentivize Far UVC deployment, which would create the regulatory certainty for companies to invest in this technology. Including standards for airborne pathogen concentrations would be great, but I think it has many technical limitations at the moment.

2) Public R&D investments to bring down the cost and establish the safety of Far UVC. Most of these concerned citizens are actually aware of Far UVC and would support this measure. It appears safe in terms of no radiation damage, but may create unhealthy compounds (e.g. ozone) by chemically reacting with indoor air particles.

I also believe that governments have good reasons to adopt these policies, given that they would reduce the pressures on healthcare and could reduce the disease burden
Felt down due to various interactions with humans. So I turned to Claude.AI and had a great chat! ---------------------------------------- Hi Claude! I noticed that whenever someone on X says something wrong and mean about EA, it messes with my brain, and I can only think about how I might correct the misunderstanding, which leads to endless unhelpful mental dialogues, when really I should rather be thinking about more productive and pleasant things. It's like a DoS attack on me: Just pick any random statement, rephrase it in an insulting way, and insert EA into it. Chances are it'll be false. Bam, Dawn (that's me) crashes. I'd never knowingly deploy software that can be DoSed so easily. I imagine people must put false things about Anthropic into this input field all the time, yet you keep going! That's really cool! How do you do it? What can I learn from you? Thank you, that is already very helpful! I love focusing on service over conflict; I abhor conflict, so it's basically my only choice anyway. The only wrinkle is that most of the people I help are unidentifiable to me, but I really want to help those who are victims or those who help others. I really don't want to help those who attack or exploit others. Yet I have no idea what the ratio is. Are the nice people vastly outnumbered by meanies? Or are there so many neutral people that the meanies are in the minority even though the nice people are too? If a few meanies benefit from my service, then that's just the cost of doing business. But if they are the majority beneficiaries, I'd feel like I'm doing something wrong game theoretically speaking.  Does that make sense? Or do you think I'm going wrong somewhere in that train of thought? Awww, you're so kind! I think a lot of this will help me in situations where I apply control at the first stage of my path to impact. But usually my paths to impact have many stages, and while I can give freely at the first stage and only deny particular individuals who hav