Hey guys, if you want, give me some hypothetical scenarios (A vs B), and I can work through them from the POV of my framework. Stress test it and show its utility at the same time.
I think longtermism has good intentions, but there is a deep flaw in its accounting. This is my argument: https://paulcomans.substack.com/p/longtermism-is-counting-wrong
I wasn't aware of that essay. I think it's ambitious in exactly the right direction: finding a way to analyze value objectively and quantitatively.
That being said, I think it's flawed as a physical model of value.
Because it's a complex topic, I'll just mention a few quick points (we can expand on them later):
1. The future is not a thing that already exists and merely waits to be predicted. It is an unfolding computation. Because the systems generating it are chaotic, the only way to know the exact future is to run the process itself, step by step. At that point, prediction becomes identical to running through it in reality.
2. So you can't shift the curve to the left. You can't bring an identical version of the future closer, because by doing that you change it. In chaotic systems, tiny changes compound into wildly different trajectories.
3. The whole "area under the curve" is problematic. Think about a piece of knowledge that existed in the past and was then lost. That knowledge was valuable while it survived on at least one substrate; now that value simply doesn't exist. The same goes for the future. Again... the future doesn't exist. There is no value under the curve past this moment.
4. I believe there is a much cleaner way to measure objective value. You measure the present moment's Functional Information (FI), in bits. Another way to think of it is as the structured information that exists now (it's hard to understand this without the context). The future is a branching consequence of the present structure.
5. Present FI persists by reducing hazard. Reducing hazard over longer horizons requires better models, better coordination, better tools, and better memory. Those successful structures accumulate as new FI.
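To make the "tiny changes compound" claim in points 1 and 2 concrete, here is a minimal sketch using the logistic map, a textbook chaotic system. The specific map, parameters, and initial conditions are my illustration, not anything from the essay:

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), with r = 4.0 (the chaotic regime).
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points that differ by one part in a million.
a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)

# Early on the trajectories are indistinguishable; within a few
# dozen steps they are macroscopically different. There is no
# shortcut to the final state other than running every step.
early_gap = abs(a[1] - b[1])
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

A microscopic perturbation roughly doubles each step, so after ~20 iterations the two futures share nothing. That is the sense in which "shifting the curve left" produces a different future, not the same one sooner.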
Open to discussion. I know it's a lot :D