Retired president of LSE EA with overly contrarian takes, hoping to get some attention.
I have a challenge to write 10 whitepapers by the end of the week to apply to YC. It is indeed not a rigorous proof - that's why it's called a whitepaper :) Still, I thought it was interesting enough to be worth sharing. And the "overconfident lingo" is because the post is fully written by AI - even so, I believe people would prefer interesting AI slop to a confusing mathematical paper. Thank you for your feedback though!
Nice to hear from you, Noah! "One can be inclined to say something like 'I'm an actor, so I don't have to do any of the accuracy-based forecasting, etc.,' which I think is definitely wrong." Totally agree.
One should choose which direction to act in based on some mix of accuracy/EV and vibes. Otherwise, there are infinite ways to act, and most of them, without any foresight, fail even with the type of agency you're describing. Let's write something on bounded rationality together.
In addition, once you are in the project you are acting on, you (of course) shouldn't constantly be doing the accuracy/EV calculation. Sometimes, though, it's probably worth taking a step back and doing it to consider opportunity cost (on some marked date every so often, to avoid decision/acting paralysis). Agree.
I am solving the art market today! Let's chat: https://www.twitch.tv/meugenn1924
On AI alarmists:
A fair-sized stream seems vast to one who until then
Has never seen a greater; so with trees, with men.
In every field each man regards as vast in size
The greatest objects that have come before his eyes
(Lucretius)
My critique of the forecasting and rationality communities can be summed up as "penny wise, pound foolish." And this style of thinking can be automated by AI, unlike more unstructured software thinking.
These are great thoughts, thank you so much for the comment Eli!