I remain a non-doomer (and have been considering such bets more recently), but I support this comment. I don't think the above criticisms hold up, with a couple of caveats:
1) Zach Stein-Perlman's point above about borrowing seems reasonable in general. If your response is that borrowing is too high-risk, then making a bet is de facto asking the other bettor to shoulder that risk for you.
2) 'This would not be good for you unless you were an immoral sociopath with no concern for the social opprobrium that results from not honouring the bet.' - I know you were responding to his 'can't possibly be good for you' comment (emphasis mine), but I don't see why defaulting isn't rational behaviour if you think the world is going to end in <4 years. Both from a selfish perspective - why would a couple of years of reduced reputation matter when you expect extinction beyond that? - and from an altruistic perspective: if you think the world is almost certainly doomed, that the counterfactual world in which we survive is extremely +EV, and that spending the extra money could move the needle on preventing doom, it seems crazy not to just spend it and sort out the reputational details in the slim chance we survive.
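To make the altruistic half of 2) concrete (a rough sketch with entirely made-up symbols: p for the probability of doom, δ for how much the extra spending reduces it, V for the value of the surviving future, C for the repayment and reputational costs):

$$ \underbrace{\delta V}_{\text{expected gain from spending now}} \quad \text{vs.} \quad \underbrace{(1-p)\,C}_{\text{costs paid only in surviving worlds}} $$

With p near 1 and V assumed to be astronomical, the left-hand side dominates for almost any C - which is exactly the reasoning that makes defaulting look sensible to a committed doomer.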
That second point is one of the main sources of counterparty risk that make me wary of such bets - it seems like it would be irrational for anyone who genuinely expected doom to accept one with me in good faith.
It's difficult if the format requires a one-dimensional sliding scale. Reasonable positions can be opposed along several axes: AI vs other GCRs vs infrastructure vs evidenced interventions; 'the future (if it exists) is bad by default' vs 'good by default'; and perhaps 'future generations should be morally discounted' vs not.
I'm going to struggle to cast a meaningful vote on this, because I find the 'existential risk' terminology as used in the OP more confusing than helpful: it includes non-existential considerations, and in practice it excludes non-extinction catastrophes from a discussion they should very much be part of, in favour of work chosen on the heuristic-but-insufficient grounds of focusing on the events with the highest extinction probability (i.e. AI).
I've argued here that non-extinction catastrophes could be as valuable to work on as immediate extinction events, or more so, even if all we care about is the probability of very long-term survival. For this reason I actually find Scott's linked post extremely misleading: it frames his priorities as 'existential' risk, then pushes people entirely towards working on extinction risk - while giving reasons that would apply just as well to non-extinction GCRs. I gave some alternative terminology here, and while I don't want to insist on my own clunky suggestions, I wish serious discussions would be more precise.
For what it's worth, there used to be an 80k pledge along similar lines. They quietly dropped it several years ago, so you might want to find someone involved in that decision to try to understand why (I suspect, and dimly remember, that it was some combination of non-concreteness and concerns about it reducing other altruistic activity).
I happen to strongly agree that the moral discount rate should be 0, but a) it's still worth acknowledging that as an assumption, and b) I think it's easy for both sides to equivocate between it and risk-based discounting. It seems like you're de facto doing so when you say 'Under that assumption, the work done is indeed very different in what it accomplishes' - this is only true if risk-based discounting is also very low. See e.g. Thorstad's Existential Risk Pessimism and the Time of Perils and Mistakes in the Moral Mathematics of Existential Risk for formalisms of why it might not be - I don't agree with his dismissal of a time of perils, but I do agree that the presumption that explicitly longtermist work is actually better for the long term than short-to-medium-term-focused work is based on little more than Pascalian handwaving.
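To spell out the equivocation I mean (my own minimal formalism, not Thorstad's): even with a moral discount rate of zero, a constant per-period survival probability s < 1 acts as an effective discount factor on future value:

$$ \text{Expected value} \;=\; \sum_{t=0}^{\infty} s^{t}\, v_t, $$

where v_t is the undiscounted moral value realised in period t. Unless s is very close to 1 once the current 'time of perils' is behind us, the far-future terms contribute little, so a zero moral discount rate on its own doesn't get you the conclusion that explicitly longtermist work dominates.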
I'm confused by your paragraph about insurance. To clarify:
Of course you can disagree about the high risk to flourishing from non-existential catastrophes, but that's going to be a speculative argument about which people might reasonably differ. To my knowledge, no-one's made the positive case in depth, and the few people who've looked seriously into our post-catastrophe prospects seem to be substantially more pessimistic than those who haven't. See e.g.:
Reading the Eliezer thread, I think I agree with him that there's no obvious financial gain for you if you hard-lock the money you'd have to pay back.
I don't follow this comment. You're saying Vasco gives you X now, and you pay back 2X after k years. You plan to spend X/2 now and lock up X/2, but somehow also borrow 3X/2 now, such that you can pay the full amount back in k years? I'm presumably misunderstanding - I don't see why you'd make the bet at all if you could just borrow that much, or why anyone would be willing to lend to you against money that you were legally/technologically committed to giving away in k years.
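Spelling out the arithmetic as I'm reading it (same symbols as above, and this is just my reconstruction):

$$ \underbrace{X/2}_{\text{locked now}} \;+\; \underbrace{3X/2}_{\text{borrowed now}} \;=\; 2X \;=\; \text{amount owed at year } k, $$

which only balances because of the borrowing - at which point the bet itself doesn't seem to be doing any work, hence my confusion.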
One version that makes more sense to me is planning to pay back in installments, on the understanding that you'd be making enough money to do so at the agreed rate - though a) that comes with obviously increased counterparty risk, and b) it still doesn't make much sense if your moneymaking strategy is investing money you already have rather than selling services/labour, since, again, it seems irrational for you to have any money left at the end of the k-year period.