
Hey all. Our organization, Fish Welfare Initiative, works to improve the welfare of farmed fishes, mainly through improving water quality on farms in India. We're still generally pretty uncertain about the best ways of helping these animals, so we invest heavily in R&D to better understand their welfare issues and to identify promising programs to help them.

One of the recent R&D studies we ran looked at using satellite imagery technology to identify water quality issues remotely (see the study post here). The idea is that, if this tech works, it would enable us to identify issues without ever visiting the farm, and to thus significantly increase our scalability.

We're looking for crowdsourced input on this study and on our possible next steps. If you're interested in giving input here—or just generally interested—read on!

The Study

We collected on-the-ground water quality data and satellite-predicted water quality data from 20 fish farms over 3 weeks, and later compared the satellite predictions against the ground-truth data to assess their accuracy.

The Results: Satellite tech seems to work for predicting water quality

We found that the satellite data and the ground-truth data correlated quite strongly in the cases of dissolved oxygen, ammonia, and some phytoplankton measures. Specifically:

For four of the six water quality parameters—ammonia, DO, Chl-a, and PC—predicted and empirical data were sufficiently correlated to suggest that remote monitoring has utility. Most encouragingly, the predicted and empirical data for both PC and Chl-a showed high correlations, with r values of 0.99 and 0.96, respectively, and R² values of 0.98 and 0.92, respectively. The data for DO and ammonia were also strongly correlated, with r values of 0.90 and 0.92, and R² values of 0.81 and 0.85, respectively.
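For readers who want to sanity-check the statistics, here's a minimal sketch of how r and R² are computed for a predicted-vs-empirical comparison. The numbers below are made up for illustration, not the study's data; note that the reported R² values are consistent with simply squaring r (e.g., 0.99² ≈ 0.98, 0.90² = 0.81).

```python
import numpy as np
from scipy import stats

# Hypothetical illustrative values, NOT the study's measurements.
empirical = np.array([4.1, 5.3, 6.0, 4.8, 5.5, 6.2])  # e.g. ground-truth DO (mg/L)
predicted = np.array([4.0, 5.1, 6.3, 4.6, 5.7, 6.0])  # satellite-predicted DO

# Pearson correlation between predictions and ground truth
r, p = stats.pearsonr(predicted, empirical)

print(f"r = {r:.2f}, R² = {r**2:.2f}, p = {p:.3g}")
```

A caveat worth keeping in mind when interpreting these numbers: a high r only says the prediction tracks the ground truth linearly; a systematically biased predictor (e.g., always 1 mg/L too high) can still score r ≈ 1.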

If you're interested in digging into this, I recommend reading our full results post and/or the PDF linked in it.

Caveats to the Results

The following are some reasons we've identified why this technology might actually be less promising than our initial results suggest:

Caveat 1 - Limited sample size and geographic range

We only ran this study with 20 fish farms over 3 weeks (with ~100 total ground measurements). We also only operated in a relatively small geographic region, as all farms were in the Kolleru region of Andhra Pradesh, India.

We probably need more data, and more diverse data, to further validate this technology.

Caveat 2 - We don't actually have access to the algorithm here

We do not actually have access to the algorithm used to map satellite images to predicted water quality parameter values: it is a proprietary product of our corporate partner for this study. This means that we can't see how they did what they did.

If we are to proceed, we believe we will likely need to recreate this technology in-house, to ensure that it is operating as expected.

Caveat 3 - We used the same farms for both training and validation purposes

From our report:

It’s also important to note that the methodology used to train and subsequently validate the models may have resulted in correlations that are higher than they should be. Specifically, ponds used for training the models were not separated from ponds used for validating the models. Using data from the same ponds—albeit collected at different time points—to train and validate the models may have overfit the models to specific characteristics of those ponds.

This was simply a mistake we made while conducting the study.
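For anyone redoing the analysis, the standard fix for this kind of leakage is to split at the pond level, so that no pond contributes measurements to both training and validation. Here's a minimal sketch with hypothetical data (names and numbers are illustrative, mirroring the study's rough scale of ~100 measurements across 20 ponds), using scikit-learn's GroupShuffleSplit:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)

# Hypothetical dataset: ~100 measurements across 20 ponds.
n_measurements = 100
pond_ids = rng.integers(0, 20, size=n_measurements)  # which pond each row came from
X = rng.normal(size=(n_measurements, 5))             # e.g. satellite band values
y = rng.normal(size=n_measurements)                  # e.g. measured water quality

# Hold out entire ponds for validation, so no pond appears in both sets.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, val_idx = next(splitter.split(X, y, groups=pond_ids))

train_ponds = set(pond_ids[train_idx])
val_ponds = set(pond_ids[val_idx])
assert train_ponds.isdisjoint(val_ponds)  # no pond-level leakage
```

Validation scores computed this way estimate performance on unseen ponds, which is the quantity that actually matters for scaling the program to new farms.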

How You Can Help

1 - Redteam our results

As these results were surprisingly positive, and we're new to this technology, we'd love to hear any ways our methodology or results might be flawed (in addition to the caveats already discussed above). If you have thoughts on this, please feel free to comment below or DM me.

2 - Suggest avenues forward

Internally, we're currently thinking about our next steps: a) further testing, and b) provided that testing continues to be promising, later implementing this technology to scale our programs.

We're interested in hearing the EA community's input on either a) or b). For instance, what kind of expanded test might make sense to run now?

To give a sense of what sort of programs we'd later consider implementing with this tech, here are some of the ideas we've thought of exploring:

  1. Large-scale monitoring: We use satellites to monitor water quality across a large range of farms (e.g. potentially across a whole state in India or even larger) and then only contact or travel to the farms where we detect issues. (Right now, the majority of site visits we conduct do not result in us detecting an issue we are able to solve.)
  2. Identifying new regions to expand into: We use satellites to determine areas with clusters of farms with particularly and/or consistently bad water quality, and then prioritize these areas for expanding our in-person operations.
  3. App/auto-generated reports: We automate monitoring over a vast region and automatically send farmers water quality reports along with suggested corrective actions, perhaps via a smartphone app. The humans on our team are nudged to take action when particularly concerning water quality/welfare issues are detected.

We'd love to hear other, random ideas you might have!

3 - Connect us with experts in this area

Though our need here is diminishing somewhat as we share these results and meet more relevant people, we're still looking to connect with experts who might a) be contracted by us to build in-house software for using this technology, and/or b) offer pro bono advice on how best to proceed. People with the following skills would be particularly useful:

  • Use of satellite imagery (particularly with water quality!)
  • Machine learning

If you know someone like this, please feel free to connect them to us/me! I'm at haven@fishwelfareinitiative.org. 

 

Lastly, if you’re interested in more of these sorts of studies, you can check out the other research that FWI is doing on the ground in India on our blog. Thanks!


Comments



Applying remote sensing to fish welfare is a neat idea! I've got a few thoughts.

I’m surprised that temperature had no/low correlation with the remote sensing data. My understanding is that using infrared radiation to measure water surface temperature is quite robust. The skin depth of these techniques is quite small, e.g., measuring the temperature in the top 10 μm. Do you have a sense of the temperature profile with respect to depth for these ponds? Perhaps you were measuring the temperature below the surface, and the surface temperature as predicted by the satellite was different. Then again, you might expect some systematic error here that still gives you some kind of correlation.

The methodology used by Captain Fresh is a black box as you say, but maybe you could ask for more detail. When I was working for an exploration company, specialist contractors who gave us data were usually eager to give us presentations on the minutia of the data and methodology and answer our questions because they wanted our future business.

Do you know what water depth your on-site measurements were taken at? Ensuring that this was consistent seems important, and it’s important to remember the depth of penetration of the remote sensing data. If you could ask Captain Fresh for this, that would be ideal, but it’s typically quite small/shallow. I’m less familiar with best practice for data collection, e.g., how important it is to collect on-site data from as close as possible to the surface, but these might be important considerations. Did Captain Fresh or ProDigital give any guidance on this? (I didn't see anything in a brief skim of the user manual.)

You might also want to consider doing more detailed on-site measurements at a few sites to see how well each water property at depth x correlates to depth y. If the remote sensing data gives you good predictions of the properties at the surface but the properties vary greatly at depth, it's probably not a very useful prediction, unless they vary in a systematic or predictable way.

This study was able to predict pH levels in lakes using Landsat data with an R² of 0.81, but the lakes were quite large, on the scale of several km wide. I intuitively but weakly suspect that this method would be less effective for small farmed fish ponds.

I’m surprised to see salinity missing from this list. Predicting water salinity with remote sensing also seems to be quite robust, and it seems quite important for monitoring fish welfare. Was this omitted just due to limitations of the Captain Fresh data? Your ProDigital device seems to be capable of measuring water salinity on-site.

Happy to chat about this some more if any of this was helpful. It's been quite a while since I actually did any remote sensing myself, but I've relied on remote sensing data for other work from time to time.

Thanks for taking the time to look at the report and respond with your thoughts. We very much appreciate it!

Specific to temperature, we do not know how our partner extracted data from images to determine temperature (or any parameter). We have already followed up with them to get more specific information about what exactly they did. 

Regarding the depth of measurements, our “ground-truthed” data were collected at a depth of approximately 0.5-1 m. The sensor of the handheld device, which collected data for all parameters except ammonia, was submerged just below the water surface. For ammonia, a sample of water was collected from the same site at approximately the same depth. This aspect of the study protocol was designed to match the procedures conducted by the ARA.

No problem! I think my main concern is just that you make sure the water properties at 0.5-1m depth match the water properties at the surface, or at least, you can work out how they vary to apply corrections to the satellite data. But overall I'm positive about this venture.

Executive summary: Fish Welfare Initiative conducted a promising study using satellite imagery to remotely monitor water quality on fish farms, and is seeking input on next steps to validate and potentially implement this technology at scale.

Key points:

  1. Initial study showed strong correlations between satellite-predicted and ground-truth water quality data for key parameters like dissolved oxygen and ammonia.
  2. Caveats include limited sample size, proprietary algorithm, and potential overfitting of the model.
  3. Organization is seeking input on further testing methodology and ideas for implementing the technology, such as large-scale monitoring or identifying new regions to expand operations.
  4. Potential applications include automated farm monitoring, targeted interventions, and providing water quality reports to farmers.
  5. FWI is looking to connect with experts in satellite imagery, water quality analysis, and machine learning for advice or potential collaboration.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
