As someone who is not an AI safety researcher, I've always had trouble knowing where to donate if I wanted to reduce x-risk specifically from AI. I think I would have donated a much larger share of my donations to AI safety over the past 10 years if something like an AI Safety Metacharity existed. Nuclear Threat Initiative tends to be my go-to for x-risk donations, but I'm more worried about AI specifically lately. I'm open to being pitched on where to give for AI safety.
Regarding the model, I think it's good to flesh things out like this, so thank you for undertaking the exercise. I had a bit of a play with the model, and one thing that stood out to me is that the impact of an AI safety professional at different percentiles doesn't seem to depend on the ideal size, which doesn't seem right (I may be missing something). Shouldn't the marginal impact of one AI safety professional be lower if it turned out the ideal size of the AI safety workforce were 10 million rather than 100,000?
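To illustrate what I mean with a toy example (a minimal sketch of my reasoning, not of how your model actually works; all the numbers are made up): if the total achievable value is fixed and accrues with diminishing returns up to the ideal workforce size, then the marginal professional is worth less when the ideal size is larger.

```python
import math

# Toy illustration only: assume total value V(n) = v_max * log(1 + n) / log(1 + k_ideal),
# i.e. diminishing returns that saturate around the ideal workforce size k_ideal.
def marginal_impact(n_current, k_ideal, v_max=1.0):
    total = lambda n: v_max * math.log(1 + n) / math.log(1 + k_ideal)
    return total(n_current + 1) - total(n_current)

n_now = 500  # made-up figure for the current AI safety workforce
print(marginal_impact(n_now, k_ideal=100_000))     # larger marginal impact
print(marginal_impact(n_now, k_ideal=10_000_000))  # smaller, since the same value is spread more thinly
```

Under any model of this flavour, I'd expect the per-percentile impact figures to shift when the assumed ideal size changes.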
Applying remote sensing to fish welfare is a neat idea! I've got a few thoughts.
I'm surprised that temperature had no/low correlation with the remote sensing data. My understanding is that using infrared radiation to measure water surface temperature is quite robust. The skin depth of these techniques is quite small, e.g., measuring the temperature in the top 10 μm. Do you have a sense of the temperature profile with respect to depth for these ponds? Perhaps you were measuring the temperature below the surface, and the surface temperature as predicted by the satellite was different. Then again, you might expect that to be a systematic offset, which would still give you some kind of correlation.
The methodology used by Captain Fresh is a black box as you say, but maybe you could ask for more detail. When I was working for an exploration company, specialist contractors who gave us data were usually eager to present the minutiae of their data and methodology and answer our questions, because they wanted our future business.
Do you know what water depth your on-site measurements were taken at? Ensuring this was consistent seems important, and it's also worth keeping in mind the depth of penetration of the remote sensing data; ideally you could ask Captain Fresh for this figure, but it's typically quite shallow. I'm less familiar with best practice for on-site data collection, e.g., how important it is to collect readings from as close as possible to the surface, but these might be important considerations. Did Captain Fresh or ProDigital give any guidance on this? (I didn't see anything in a brief skim of the user manual.)
You might also want to consider doing more detailed on-site measurements at a few sites to see how well each water property at depth x correlates with its value at depth y. If the remote sensing data gives you good predictions of the properties at the surface but those properties vary greatly with depth, it's probably not a very useful prediction, unless they vary in a systematic or predictable way.
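Concretely, I'm picturing something like the check below (a rough sketch; the file and column names are hypothetical stand-ins for whatever your on-site profile data actually looks like):

```python
import pandas as pd

# Hypothetical CSV of paired on-site readings taken at two depths per site/visit.
df = pd.read_csv("onsite_profiles.csv")

for prop in ["temperature", "ph", "dissolved_oxygen"]:
    near_surface = df[f"{prop}_0.1m"]   # reading near the surface (hypothetical column name)
    at_depth = df[f"{prop}_1.0m"]       # reading at ~1 m, say
    r = near_surface.corr(at_depth)     # Pearson correlation by default
    print(f"{prop}: near-surface vs depth correlation r = {r:.2f}")
```

A high correlation (or at least a stable offset) would give you more confidence that surface-level remote sensing predictions say something useful about the conditions the fish actually experience.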
This study was able to predict pH levels in lakes using Landsat data with an R² of 0.81, but the lakes were quite large, on the scale of several km wide. I intuitively but weakly suspect that this method would be less effective for small farmed fish ponds.
I'm surprised to see salinity missing from this list. Predicting water salinity with remote sensing also seems to be quite robust, and it seems to be quite important for monitoring fish welfare. Was this omitted just due to limitations of the Captain Fresh data? Your ProDigital seems to be capable of measuring water salinity on-site.
Happy to chat some more if any of this was helpful. It's been quite a while since I actually did any remote sensing myself, but I've relied on remote sensing data for other work from time to time.
Point 4, Be cautious and intentional about mission creep, makes me think of environmental- and animal-focused political parties such as the Greens and the Animal Justice Party in Australia, and the Party for the Animals in the Netherlands. The first formed as an environmental party, and the latter two formed as animal protection parties.
All three of these have experienced a lot of mission creep since then (the Animal Justice Party to a lesser extent than the other two). The prevailing wisdom from many is that this is a good thing; a serious political party should have a position on every issue, some will say. But the sense I get from your post is that this may not be the case: just as with a movement, a political party can become partisan by taking a position on every issue and adopting a political leaning of some kind.
Thanks for writing this! I had one thought about how relevant saying no to some of the technologies you listed is to AGI.
In the case of nuclear weapons programs, fossil fuels, CFCs, and GMOs, we actively used these technologies before we said no (fossil fuels and GMOs we still use despite the 'no', and nuclear weapons we still have and could use at a moment's notice). With AGI, once we start using it, it might be too late. Geo-engineering experiments are the most applicable example of these, as we actually did say no before any (much?) testing was undertaken.
I supplement iron and vitamin C, as my iron is currently on the lower end of normal (after a few years of being vegan it was too high, go figure).
I tried creatine for a few months but didn't notice much difference in the gym or while rock climbing.
I drink a lot of B12-fortified soy milk, which seems to cover that.
I have about 30g of protein powder a day, with a good range of different amino acids, to help hit 140g of protein a day.
I have a multivitamin every few days.
I have iodine-fortified salt that I cook with sometimes.
I've thought about supplementing omega 3 or eating more omega 3 rich foods but never got around to it.
8 years vegan for reference.
I strongly agree that current LLMs don't seem to pose a risk of a global catastrophe, but I'm worried about what might happen when LLMs are combined with things like digital virtual assistants that have outputs other than generating text. Even if it can only make bookings, send emails, etc., I feel like things could get concerning very fast.
Is there an argument for having AI fail spectacularly in a small way that raises enough global concern to slow progress/increase safety work? I'm envisioning something like an LLM virtual assistant that leads to a lot of lost productivity and some security breaches but nothing too catastrophic, and which makes people take AI safety seriously, perhaps slowing progress on more advanced AI.
A complete spitball.
This is cool! I came across EA in early 2015, and I've sometimes been curious about what happened in the years before then. Books like The Most Good You Can Do sometimes incidentally give anecdotes, but I haven't seen a complete picture in one public place. Not to toot our own horn too much, but I wonder if there will one day be a documentary about the movement itself.
Luke, you've been so strong at the helm of GWWC for so long that I'm often guilty of thinking of you and GWWC as synonymous (that's a compliment, I swear!). Well done on all your amazing work, and enjoy a well-deserved break. I can't wait to see what you do next.