Hey there~ I'm Austin, currently building https://manifund.org. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold!
I believe this! Anecdotally:
For example, Substack is a bigger deal now than a few years ago, and if the Forum becomes a much worse platform for authors by comparison, losing strong writers to Substack is a risk to the Forum community.
I've proposed to the LW folks and I'll propose to y'all: make it easy to import/xpost Substack posts into the EA Forum! Right now a lot of my writing goes from Notion draft => our Substack => LW/EAF, and getting the formatting exactly right (esp. around images, spacing, and footnotes) is a pain. I would love to just drop in our Substack link and have it automatically, correctly import the article into these places.
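For the curious, here's a minimal sketch of what such an importer might start from, assuming the publication's public RSS feed (publication.substack.com/feed) carries the full post HTML, and leaning on the third-party `feedparser` and `markdownify` packages; the feed URL below is just a placeholder:

```python
# Hypothetical sketch: pull the latest post from a Substack's public RSS feed
# and convert its HTML body to Markdown for pasting into LW/EAF.
import feedparser
from markdownify import markdownify as to_markdown

FEED_URL = "https://example.substack.com/feed"  # placeholder publication

feed = feedparser.parse(FEED_URL)
latest = feed.entries[0]

# Public Substack posts usually ship their full body as HTML in the content field.
html = latest.content[0].value if "content" in latest else latest.summary
markdown = to_markdown(html, heading_style="ATX")

print(latest.title)
print(markdown[:500])  # preview; images and footnotes may still need manual cleanup
```

(The hard part, as noted above, is exactly the stuff a naive converter fumbles: image hosting, spacing, and footnote formats.)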
I'm also not sure if this is what SWP is going for, but the entire proposal reminds me of Paul Christiano's post on humane egg offsets, which I've long been fond of: https://sideways-view.com/2021/03/21/robust-egg-offsetting/
With Paul's, the egg certificate solves the buyer's problem of "I want humane eggs, but can't easily buy them directly; I can buy regular eggs + a humane cert = humane eggs". Maybe the same would apply for stunned shrimp, e.g. a supermarket might say "I want to brand my shrimp as stunned, for marketing or for commitments; I can buy regular shrimp + a stun cert = stunned shrimp".
Really appreciate this post! I think it's really important to try new things, and also have the courage to notice when things are not working and stop them. As a person who habitually starts projects, I often struggle with the latter myself, haha.
(speaking of new projects, Manifund might be interested in hosting donor lotteries or something similar in the future -- lmk if there's interest in continuity there!)
Hey! Thanks for the thoughts. I'm unfortunately very busy these days (including with preparing for Manifest 2025!) so can't guarantee I'll be able to address everything thoroughly, but a few quick points, written hastily and without strong conviction:
Thank you Caleb, I appreciate the endorsement!
And yeah, I was very surprised by the dearth of strong community efforts in SF. Some guesses at this:
Hey Trevor! One of the neat things about Manival is that you can create custom criteria to surface the supporting information you, as a grantmaker, want to weigh heavily, such as signs of adverse selection. For example, you could create your own scoring system that includes a data fetcher node or a synthesizer node, looking for signals like "OpenPhil funded this two years ago, but has declined to fund this now".
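To make that concrete, here's a purely illustrative sketch of the fetch-then-synthesize shape of such a criterion. This is not Manival's actual API; the dataclass, node functions, and placeholder data are all hypothetical:

```python
# Illustrative sketch only, not Manival's actual API: the names and data
# below are hypothetical, just to show a data fetcher node feeding a
# synthesizer node that scores one custom criterion.
from dataclasses import dataclass, field

@dataclass
class GrantContext:
    project_name: str
    notes: dict = field(default_factory=dict)  # signals accumulated by fetcher nodes

def fetch_funder_history(ctx: GrantContext) -> GrantContext:
    """Data fetcher node: pull prior funding decisions for this project."""
    # Placeholder data; a real node would query a grants database or public writeups.
    ctx.notes["funder_history"] = [
        {"funder": "Open Philanthropy", "year": 2023, "decision": "funded"},
        {"funder": "Open Philanthropy", "year": 2025, "decision": "declined"},
    ]
    return ctx

def score_adverse_selection(ctx: GrantContext) -> float:
    """Synthesizer node: turn raw signals into a 0-1 score the grantmaker can weigh."""
    history = ctx.notes.get("funder_history", [])
    # Flag the "funded before, declined now" pattern described above.
    funded_before = any(d["decision"] == "funded" for d in history)
    declined_now = any(d["decision"] == "declined" for d in history)
    return 0.2 if (funded_before and declined_now) else 0.8

score = score_adverse_selection(fetch_funder_history(GrantContext("Example Project")))
print(score)  # 0.2 -> a signal worth a closer look, not an automatic rejection
```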
Re: adverse selection in particular, I still believe what I wrote a couple years ago: adverse selection seems like a relevant consideration for longtermist/xrisk grantmaking, but not one of the most important problems to tackle (which, off the top of my head, I might identify as "not enough great projects", "not enough activated money", and "long and unclear feedback loops"). Or: my intuition is that the amount of money wasted, or impact lost, to adverse selection is pretty negligible compared to the upside potential of growing the field. I'm not super confident in this, though, and curious if you have different takes!