Matt Beard

Political Science MA Student @ Carleton University (Ottawa)
35 karma · Joined Aug 2022 · Pursuing a graduate degree (e.g. Master's)

How others can help me

Seeking an employment opportunity that sponsors a work visa for a Canadian to come to the United States.

How I can help others

Happy to chat about AI governance, clean tech policy, or connect you with the Canadian EA community!

Comments (4)

Interesting post! Smil is great on this. His (poorly named) book How The World Really Works is excellent and has a chapter on fertilizers.

I've done some research on sustainable transitions in the concrete industry, which is another high capital expenditure/low margin product that requires innovation. Concrete contributes roughly 4-8% of global CO2 emissions, a share expected to rise. I wouldn't say concrete is as important for wellbeing as fertilizer, but it follows the same pattern: the developing world needs green innovations, not degrowth.

I'm skeptical that direct investments in fertilizer or R&D would meet GiveWell's 10x cash transfers threshold, at least in the short to medium term. For the fertilizer itself, it might be cheaper to subsidize food imports from more productive regions. For R&D, cleantech breakthroughs typically start in university or private research labs, requiring significant investment without guaranteed returns. One approach might be to follow the Good Food Institute's model of spending $ persuading governments to unlock $$$ at the scale these problems require. (Exciting to note that alternative proteins are also a high capital expenditure/low margin product that requires innovation to overcome climate barriers. I think this is a common pattern policymakers should be more aware of, trying to capture the same learning-curve benefits solar had.) Low-emission concrete is often held back by regulatory barriers; I wonder whether the same is true for fertilizer.

Lastly, I'm not sure how neglected this is in the overall development/climate space. You describe a lot of ongoing research and investment; what is the marginal benefit of the next EA dollar compared to other causes? I'd be interested to hear more about that aspect of the problem!

Congrats on admission to Carleton! I'm finishing my MA in political science there this summer. We'd be happy to have you in the EA Carleton Discord if you haven't joined yet :) I'm not aware of any specific internships, but I can connect you with some people who might be. Feel free to reach out!

If it passes, Canada's proposed AI and Data Act (part of Bill C-27) will almost certainly involve hiring new employees at Innovation Canada. ISED also has staff working to support the AI startup ecosystem in Canada. Effective Altruism Canada is building momentum, and I know AI Governance and Safety (AIGS) Canada is working on advocacy.

Thanks for the feedback. I agree that trying to present an alternative worldview ends up being quite broad, though with some good counterexamples. And I certainly didn't want to give this impression:

it's largely hopeless to make decision-informing predictions about what to do in the short term to increase the chance of making the long-run future go well.

Instead, I'd say that it is difficult to make these predictions based on a priori reasoning, which this community often attempts for AI, and that we should shift resources toward rigorous empirical evidence to better inform our predictions. I tried to give specific examples: Anthropic-style alignment research is empiricist, while Yudkowsky-style theorizing is a priori rationalist. This sort of epistemological critique of longtermism is fairly common.

Thanks for the feedback! Definitely a helpful question. The error bars answer was aimed at OpenPhil, based on what I've read from them on AI risk plus the prompt in their essay question. I'm sure many others can answer the "what is the probability" forecasting question better or more directly than I can, but my two cents was to step back and question underlying assumptions about forecasting that seem common in these conversations.

Hume wrote that "all probable reasoning is nothing but a species of sensation." This doesn't mean we should avoid probable reasoning (we can't), but I think we should recognize that it is based only on our experiences and observations of the world, and question how rational its foundations are. I don't think anyone at this stage actually has the empirical basis to give a meaningful % for "AI will kill everyone." Call it 0.5 or 1 or 7 or whatever, but my essay is about trying to take a step back and question epistemological foundations. Anthropic seems much better at this so far (if they mean it when they say they'd stop given further empirical evidence of risk).

I did list two premises from Hume that I think are true (or truer than the average person concerned about AI x-risk holds them to be), so I suppose those were my TL;DR as well.