This is a special post for quick takes by quinn. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I'm pretty confident that people who prioritize their health or enjoyment of food over animal welfare can moral handshake with animal-suffering vegans by tabooing poultry in favor of beef. So a non-vegan can "meet halfway" on animal suffering by preferring beef over chicken.

Presumably, a similar moral handshake would work with climate vegans that just favors poultry over beef.

Is there a similar moral handshake between climate ameliatarians (who have a little chicken) and animal-suffering ameliatarians (who have a little beef)?

I'm pretty confident that people who prioritize their health or enjoyment of food over animal welfare can moral handshake with animal suffering vegans by tabooing poultry at the expense of beef.


Generally disagree, because the meat eaters don't get anything out of this agreement. "We'll both agree to eat beef but not poultry" doesn't benefit the meat eater. The one major possible exception imho is people in relationships – I could imagine a couple where one person is vegan and the other is a meat eater where they decide both doing this is a Pareto improvement.

While I think the fuzzies from cooperating with your vegan friends should be considered rewarding, I know what you mean: it's not a satisfying moral handshake if it relies on a foundation of friendship! 

Beef + unrelated environmental action?

CC'd to lesswrong.com/shortform

Positive and negative longtermism

I'm not aware of a literature or a dialogue on what I think is a very crucial divide in longtermism.

In this shortform, I'm going to take a polarity approach. I'm going to bring each pole to its extreme, probably beyond positions that are actually held, because I think median longtermism, or the longtermism described in The Precipice, is a kind of average of the two.

Negative longtermism is saying "let's not let some bad stuff happen", namely extinction. It wants to preserve. If nothing gets better for the poor or the animals or the astronauts, but we dodge extinction and revolution-erasing subextinction events, that's a win for negative longtermism.

In positive longtermism, such a scenario is considered a loss. From an opportunity cost perspective, the failure to erase suffering or bring agency and prosperity to 1e1000 comets and planets hurts literally as bad as extinction.

Negative longtermism is a vision of what shouldn't happen. Positive longtermism is a vision of what should happen.

My model of Ord says we should lean at least 75% toward positive longtermism, but I don't think he's an extremist. I'm uncertain if my model of Ord would even subscribe to the formation of this positive and negative axis.

What does this axis mean? I wrote a little about this earlier this year. I think figuring out what projects you're working on and who you're teaming up with strongly depends on how you feel about negative vs. positive longtermism. The two dispositions toward myopic coalitions are "do" and "don't". I won't attempt to claim which disposition is more rational or desirable, but I'll explore each branch.

When Alice wants future X and Bob wants future Y, but if they don't defeat the adversary Adam they will be stuck with future 0 (containing great disvalue), Alice and Bob may set aside their differences and choose whether or not to form a myopic coalition to defeat Adam.

  • Form myopic coalitions. A trivial case where you would expect Alice and Bob to tend toward this disposition is if X and Y are similar. However, if X and Y are very different, Alice and Bob must each believe that defeating Adam completely hinges on their teamwork in order to tend toward this disposition, unless they're in a high trust situation where they each can credibly signal that they won't try to get a head start on the X vs. Y battle until 0 is completely ruled out.
  • Don't form myopic coalitions. A low trust environment where Alice and Bob each fully expect the other to try to get a head start on X vs. Y during the fight against 0 would tend toward the disposition of not forming myopic coalitions. This could lead to great disvalue if a project against Adam can only work via a team of Alice and Bob.

An example of such a low-trust environment is, if you'll excuse political compass jargon, watching bottom-lefts online debate among themselves the merits of working with top-lefts on projects against capitalism. The argument for coalition is that capitalism is a formidable foe and they could use as much teamwork as possible; the argument against coalition is historical backstabbing and pogroms when top-lefts take power and betray the bottom-lefts.

For a silly example, consider an insurrection against broccoli. The ice cream faction can coalition with the pizzatarians if they do some sort of value trade that builds trust, like the ice cream faction eating some pizza and the pizzatarians eating some ice cream. Indeed, the viciousness of the fight after broccoli is abolished may have nothing to do with the solidarity between the two groups under broccoli's rule. It may or may not be the case that the ice cream faction and the pizzatarians can come to an agreement about how best to increase value in a post-broccoli world. Civil war may follow revolution, or not.

Now, while I don't support the long reflection (TLDR: I think a collapse of diversity sufficient to permit a long reflection would be a tremendous failure), I think elements of positive longtermism are crucial if things are to improve for the poor or the animals or the astronauts. I think positive longtermism could outperform negative longtermism when it comes to finding synergies between the extinction-prevention community and the suffering-focused ethics community. However, I would be very upset if I turned around in a couple of years and positive longtermists were, like, the premier face of longtermism. The reason is that once you admit positive goals, you have to deal with everybody's political aesthetics: a philosophy professor's preference for a long reflection, an engineer's preference for moar spaaaace, a conservative's preference for retvrn to pastorality, a liberal's preference for intercultural averaging. A negative goal like "don't kill literally everyone" largely lacks this problem. That said, I would change my mind if, say, 20% of global defense expenditure were targeted at defending against extinction-level or revolution-erasing events; then the neglectedness calculus would lead us to focus the comparatively smaller EA community on positive longtermism.

The takeaway from this shortform should be that quinn thinks negative longtermism is better for forming projects and teams.

Stem cell slowdown and AI timelines

My knowledge of Christians and stem cell research in the US is very limited, but my understanding is that they accomplished real slowdown. 

Has anyone looked to that movement for lessons about AI? 

Did anybody from that movement take a "change it from the inside" or "build clout by boosting stem cell capabilities so you can later spend that clout on stem cell alignment" approach? 

How are people mistreated by bell curves? 

I think this is a crucial part of a lot of psychological maladaptation and social dysfunction, very salient to EAs. If you're way more trait xyz than anyone you know for most of your life, your behavior and mindset will be massively affected, and depending on when in life / how much inertia you've accumulated by the time you end up in a different room where suddenly you're average on xyz, you might lose out on a ton of opportunities for growth. 

In other words, the concept of "big fish small pond" is deeply insightful and probably underrated. 

Some IQ-adjacent idea is sorta the most salient to me, since my brother recently reminded me "quinn is the smartest person I know", to which I was like, you should meet smarter people? Or: I kinda did feel unusually smart before I was an EA; I can only reasonably claim to be average if you condition on EA or something similar. But this post is extremely important in terms of each of the Big 5, "grit"-adjacent things, etc. 

For example, when you're way more trait xyz than anyone around you, you form habits around adjusting for people to underperform relative to you at trait xyz. Sometimes those habits run very deep in your behavior and worldview, and sometimes they can be super ill-tuned (or at least a bit suboptimal) for becoming average. Plus, you develop a lot of "I have to pave my own way" assumptions about growth and leadership. Related to growth, you may cultivate lower standards for yourself than you otherwise might have. Related to leadership, I expect many people in leader roles at small ponds would be more productive, impactful, and happy if they had access to averageness. Pond size means they don't get that luxury! 

There's a tightly related topic about failure to abolish meatspace / how you might think the internet corrects for this but later realize how much it doesn't. 

I've had the thought recently that people in our circles underrate the benefits of being a big fish in a small pond. Being a small fish in a bigger pond means fiercer competition relative to others. Being the dumbest person in the room becomes mentally taxing. It's literally an invitation to be lower status, one of the most important commodities for an ape brain besides food. Of course there are still the benefits to associating with your equals or superiors, which probably outweigh the harms, but some nuanced balance is called for. It makes any zero sum dynamics more fierce and any positive sum dynamics more magnanimous.

this is clearly a law of opposite advice situation. 

My guess is that being a big fish in a small pond for most of my childhood was on net beneficial for me. If I were to hazard a guess at the effects, I'd say something like: greater overall confidence (particularly on intellectual matters), greater self-identification with intellectual matters, worse overall sociability, and increased ambition in some ways and decreased ambition in others.
 

It seems like a super quick habit-formation trick for a bunch of socioepistemic gains is just saying "that seems overconfident". The old Sequences/Methods version is "just what do you think you know, and how do you think you know it?" 

A friend was recently upset about his epistemic environment, like he didn't feel like people around him were able to reason and he didn't feel comfortable defecting on their echo chamber. I found it odd that he said he felt like he was the overconfident one for doubting the reams of overconfident people around him! So I told him, start small, try just asking people if they're really as confident as they sound. 

In my experience, it's a gentle nudge that helps people be better versions of themselves. Tho I said "it seems" cuz I don't know how many different communities it would reliably work in; the case here is someone almost 30 at a nice college with very few grad students in an isolated town. 

In negative longtermism, we sometimes invoke this concept of existential security (which I'll abbreviate to xsec): the idea that at some point the future is freed from xrisk, or that we have in some sense abolished the risk of extinction. 

One premise for the current post is that, in a veil-of-ignorance sense, affluent and smart humans alive in the 21st century have duties/responsibilities/obligations (unless they're simply not altruistic at all) derived from Most Important Century arguments. 

I think it's tempting to say that the duty -- the ask -- is to obtain existential security. But I think this is wildly too hard, and I'd like to propose a kind of different framing.

Xsec is a delusion

I don't think this goal is remotely obtainable. Rather, I think the law of mad science implies that either we'll obtain a commensurate rate of increase in vigilance or we'll die. "Security" implies that we (i.e. our descendants) can relax at some point (as the minimum IQ it takes to kill everyone drops further and further). I think this is delusional, and Bostrom says as much in the Vulnerable World Hypothesis (VWH).

I think the idea that we'd obtain xsec is unnecessarily utopian, and very misleading. 

Instead of xsec summed over the whole future, zero in on subsequent 1-3 generations, and pour your trust into induction

Obtaining xsec seems like something you don't just do for your grandkids, or for the 22nd century, but for all the centuries in the future. 

I think this is too tall an order. Instead of trying something that's too hard and that we're sure to fail at, we should initialize a class or order of protectors who zero in on getting their first 1-3 successor generations to make it. 

In math/computing, we reason about infinite structures (like the whole numbers) by asking what we know about "the base case" (i.e., zero) and by asking what we know about constructions assuming we already know stuff about the ingredients to those constructors (i.e., we would like for what we know about n to be transformed into knowledge about n+1). This is the way I'm thinking about how we can sort of obtain xsec just not all at once. There are no actions we can take to obtain xsec for the 25th century, but if every generation 1. protects their own kids, grandkids, and great-grandkids, and 2. trains and incubates a protector order from among the peers of their kids, grandkids, and great-grandkids, then overall the 25th century is existentially secure. 
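To make the analogy explicit (this formalization is my own gloss, not part of the original argument), read P(n) as "generation n is protected and trains a protector order for generation n+1":

```latex
\begin{align*}
&\text{Base case: } P(0) && \text{(we directly protect our kids, grandkids, great-grandkids)}\\
&\text{Inductive step: } \forall n.\; P(n) \Rightarrow P(n+1) && \text{(each protected generation repeats the move one step forward)}\\
&\text{Conclusion: } \forall n.\; P(n) && \text{(so e.g. the 25th century is covered, with no action aimed at it directly)}
\end{align*}
```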

Yes, the realities of value drift make it really hard to simply trust induction to work. But I think it's a much better bet than searching for actions you can take to directly impact arbitrary centuries. 

I think when sci-fi like Dune or Foundation reasoned about this, there was a sort of intergenerational lock-in; people are born into the order, they have destinies and fates and so on, whereas I think in real life people can opt in and opt out of it. (But I think the 0 IQ approach to this is to just have kids of your own and indoctrinate them, which may or may not even work.) 

But overall, I think the argument that accumulating cultural wisdom among cosmopolitans, altruists, whomever is the best lever we have right now is very reasonable (especially if you take seriously the idea that we're in the alchemy era of longtermism). 

open problems in the law of mad science

The law of mad science (LOMS) states that the minimum IQ needed to destroy the world drops by n points every t years. 

My sense from talking to my friend in biorisk and honing my views of algorithms and the GPU market is that it is wise to heed this worldview. It's sort of like the vulnerable world hypothesis (Bostrom 2017), but a bit stronger. VWH just asks "what if nukes but cost a dollar and fit in your pocket?", whereas LOMS goes all the way to "the price and size of nukes is in fact dropping".

I also think that the LOMS is vague and imprecise. 
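For concreteness, here is the kind of naive, explicit version I have in mind. The linear form, the start year, the starting threshold, and the default n and t (the often-quoted "one point per eighteen months") are all placeholder assumptions of mine, not claims from Bostrom or anyone else:

```python
def min_iq_to_destroy_world(year, start_year=1945, start_threshold=180.0, n=1.0, t=1.5):
    """Naive linear LOMS: the minimum IQ needed to destroy the world drops by
    n points every t years. Every number here is a placeholder for illustration;
    the open problems below are precisely about whether n and t are even constant."""
    return start_threshold - n * ((year - start_year) / t)

for year in (1945, 2000, 2050, 2100):
    print(year, min_iq_to_destroy_world(year))
```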

I'm basically confused about a few obvious considerations that arise when you begin to take the LOMS seriously.

  1. Are n (step size) and t (dropping time) fixed from empiricism to extinction? This is about as plausible as P = NP; obviously Alhazen (or an xrisk community contemporaneous with Alhazen) didn't have to deal with the same step size and dropping time as Shannon (or an xrisk community contemporaneous with Shannon), but it needs to be argued. 
  2. With or without a proof of 1's falseness, what are step size and dropping time a function of? What are changes in step size and dropping time a function of? 
  3. Assuming my intuition that the answer to 2 is mostly economic growth, what is a moral way to reason about the tradeoffs between lifting people out of poverty and making the LOMS worse? Does the LOMS invite the xrisk community to join the degrowth movement? 
  4. Is the LOMS sensitive to population size, or relative consumption of different proportions of the population? 
  5. For fun, can you write a coherent scifi about a civilization that abolished the LOMS somehow? (This seems to be what Ord's gesture at "existential security" entails.) How about merely reversing its direction, or mere mitigation? 
  6. My first guess was that empiricism is the minimal civilizational capability that a planet-lifeform pair has to acquire before the LOMS kicks in. Is this true? Does it, in fact, kick in earlier or later? Is a statement of the form "the region between an industrial revolution and an information or atomic age is the pareto frontier of the prosperity/security tradeoff" on the table in any way?

While I'm not 100% sure there will be actionable insights downstream of these open problems, it's plausibly worth researching. 

As far as I know, this is the original attribution. 

Scattered and rambly note I jotted down in a slack in February 2023, and didn't really follow up on


thinking of jotting down some notes about "what AI pessimism funding ought to be" that take into account forecasting and values disagreements. The premises:
 

  • threatmodels drive research. This is true on lesswrong when everyone knows it and agonizes over "am I splitting my time between hard math/cs and forecasting or thinking about theories of change correctly?" and it's true in academia when people halfass a "practical applications" paragraph in their paper.
  • people who don't really buy into the threatmodel they're ostensibly working on do research poorly
  • social pressures like funding and status make it hard to be honest about what threatmodels motivate you.
  • I don't overrate democracy or fairness as terminal values, I'm bullish on a lot of deference and technocracy (whatever that means), but I may be feeling some virtue-ethicsy attraction toward "people feeling basically represented by governance bodies that represent them", that I think is tactically useful for researchers because the above point about research outputs being more useful when the motivation is clearheaded and honest.
  • fact-value orthogonality, additionally the binary is good and we don't need a secret third thing if we confront uncertainty well enough

The problems I want to solve:
 

  • thinking about inclusion and exclusion (into "colleagueness" or stuff that funders care about like "who do I fund") is fogged by tribal conflict where people pathologize each other (salient in "AI ethics vs. AI alignment". twitter is the mindkiller but occasionally I'll visit, and I always feel like it makes me think less clearly)
  • no actual set of standards for disagreement to take place in; instead we have wishy-washy stuff like "the purple hats undervalue standpoint epistemology, which is the only possible reason they could take extinction-level events seriously" or "the yellow hats don't unconsciously signal that they've read the sequences in their vocabulary, so I don't trust them". i.e. we want to know whether disagreements are of belief (anticipation constraints) or values (what matters), and we might want to coalition with people who don't think super clearly about the distinction.
  • standard "loud people (or people who are really good at grantwriting) are more salient than polling data" problems
  • standard forecasting error bar problems
  • funding streams misaligned with on the ground viewpoint diversity

I'm foggy-headed about whether I'm talking about "how openphil should allocate AI funds" vs. "how DARPA should allocate AI funds" vs. "how an arbitrary well-meaning 'software might be bad' foundation should allocate AI funds", sorry. The desiderata for the solution:
 

  • "marketplace of ideas" applied to threatmodels has a preference aggregation (what people care about) part and forecasting part (what people think is gonna go down)
  • preference aggregation part: it might be good for polling data about the population's valuation of future lives to drive the proportion of funding that goes to extinction-level threatmodels.
  • forecasting part: what are the relative merits of different threat models?
  • resolve the deep epistemic or evidentiary inequality between threatmodels for which the ship sailed in 2015, ones where we might think crunch time is right now or next year, and ones we won't know about until literally 2100.
  • mediating between likelihood (which is determined by forecasters) and importance (which is determined by polling data) for algorithmic funding decisions. No standard EV theory, because values aren't well typed (not useful to "add" the probability of everything Alice loves being wiped out times its disvalue to the probability of everything Bob loves being reduced by 60% times its disvalue)
  • some related ideas to Nuno's reply to Dustin on decentralization, voting theory / mechdzn, etc. to a minor degree. https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money?commentId=SuctaksGSaH26xMy2 
  • unite on pessimism. go after untapped synergies within the "redteaming software" community, be able to know when you have actual enemies besides just in the sense that they're competing against you for finite funding. Think clearly about when an intervention designed for Alice's threatmodel also buys assurances for Bob's threatmodel, when it doesn't, when Alice's or Bob's research outputs work against Sally's threatmodel. (an interesting piece of tribal knowledge that I don't think lesswrong has a name for is if you're uncertain about whether you'll end up in world A or world B, you make sure your plan for improving world A doesn't screw you over in the event that you end up in world B. there's a not very well understood generalization of this to social choice, uncertainty over your peers' uncertainty over world states, uncertainty over disagreements about what it means to improve a state, etc.)
  • the only people that should really be excluded are optimists who think everything's fine, even tho people whose views aren't as popular as they think they are will feel excluded regardless.
  • an "evaluations engineering stack" to iterate on who's research outputs are actually making progress on their ostensible threatmodels, over time.

This institution couldn't possibly be implemented in real life, but I think if we got even one desideratum at least a little institutionalized it'd be a big W.

I'm predicting that Eli Lifland wants to be a part of this conversation. Maybe Ozzie Gooen's podcast is the appropriate venue? I feel stronger about my ability to explore verbally with someone than my ability to just drag the post into existence myself (I managed to paint a pretty convincing picture of the institution I'm dreaming about to Viv, my housemate who some of you know, verbally yesterday). Critch obviously has a lot to say, too, but he may not care about the "write down fantasy world desiderata" approach to communicating or progressing (idk, I've never actually talked to him).

Related notes I jotted down on EA Forum yesterday: Is it acceptable to platform "not an AI Gov guy, applied type theorist who tried a cryptography-interp hybrid project and realized it was secretly a governance project a few weeks ago" (who, to be fair, has been prioritizing Critch-like threatmodels ever since ARCHES was published) instead of an AI Gov expert? This is also something that could frame a series of "across the aisle" dialogues, where we find someone who doesn't get extinction-level software threatmodels at all, or who has a disgust reaction at any currently-alive vs. future-lives tradeoff, and invite them onto the pod or something? Maybe that's a stretch goal lol.

We need an in-depth post on moral circle expansion (MCE), minoritarianism, and winning. I expect EA's MCE projects to be less popular than the anti-abortion position is in the US (37% say abortion ought to be illegal in all or most cases, while veganism, for one example, is at 6%). I guess the specifics of how the anti-abortion movement operated may be too in the weeds of contingent and peculiar pseudodemocracy, winning elections with less than half of the votes and securing judges and so on, but it seems like we don't want to miss out on studying this. There may be insights. 

While many EAs would (I think rightly) consider the anti-abortion people colleagues as MCE activists, some EAs may also (I think debatably) admire Republicans for their ruthless, shrewd, occasionally thuggish commitment to winning. Regarding the latter, I would hope to hear out a case for principles over policy preference, keeping our hands clean, refusing to compromise our integrity, and so on. I'm about 50:50 on where I'd expect to fall personally on the playing-fair-and-nice stuff. I guess it's a question of how much Republicans expect to suffer from externalities of thuggishness, if we want to use them to reason about the price we're willing to put on our integrity. 

Moreover, I think this "colleagues as MCE activists" stuff is under-discussed. When you steelman the anti-abortion movement, you assume that they understand multiplication as well as we do, and are making a difficult and unhappy tradeoff about the QALYs lost to abortions needed for pregnancies gone wrong, or to unclean black-market abortions, or what have you. I may feel like I oppose the anti-abortion people on multiplicationist/consequentialist grounds (I also just don't think reducing incidence of disvaluable things by outlawing them is a reasonable lever), but things get interesting when I model them as understanding the tradeoffs they're making. 

(To be clear, this isn't "EA writer, culturally coded as a Democrat for whatever college/lgbt/atheist reasons, is using a derogatory word like 'thuggish' to describe the outgroup"; I'm alluding to empirical claims about how the structure of the government interacts with population density to create minority rule, and making a moral judgment about the norm-dissolving they fell back on when Obama appointed a judge.) 

(I also just don't think reducing incidence of disvaluable things by outlawing them is a reasonable lever)

This is a pretty strong stance to take! Most people believe that it is reasonable to ban at least some disvaluable things, like theft, murder, fraud etc., in an attempt to reduce their incidence. Even libertarians who oppose the existence of the state altogether generally think it will be replaced by some private alternative system which will effectively ban these things.

right, yeah, I think it's a fairly common conclusion regarding a reference class like drugs and sex work, but for a reference class like murder and theft it's a much rarer (harder to defend) stance.

I don't know if it's on topic for the forum to dive into all of my credences over all the claims and hypotheses involved here, I just wanted to briefly leak a personal opinion or inclination in OP. 

CW death

I'm imagining myself having a 6+ figure net worth at some point in a few years, and I don't know anything about how wills work. 

Do EAs have hit-by-a-bus contingency plans for their net worths? 

Is there something easy we can do to reduce the friction of the following process: ask five EAs with trustworthy beliefs and values to form a grantmaking panel in the event of my death. This grantmaking panel could meet for thirty minutes and make a weight-allocation decision on the Giving What We Can app, or they could accept applications and run it that way, or they could make an investment decision that interprets my net worth as seed money for an ongoing fund; it would be up to them. 

I'm assuming this is completely possible in principle: I solicit those five EAs, who have no responsibilities or obligations as long as I'm alive; if they agree, I get a lawyer to write up a will that describes everything. 

If one EA has done this, the "template contract" would be available to other EAs to repeat it. Would it be worth lowering the friction of making this happen? 

Related idea: I can hardcode a weight assignment for the Giving What We Can app into my will; surely a non-EA will-writing lawyer could wrap their head around this quickly. But is there a way to avoid soliciting the lawyer every time I want to update my weights in response to my beliefs and values changing while I'm alive? 

On the face of it, it sounds like the second idea is lower friction and almost as valuable as the first idea for most individuals. 

Will @Austin’s ‘In defense of SBF’ have aged well? [resolves to poll]

Posting here because it's an underrated post that's well worth reading, and the poll is currently active. The real reason I'm posting here is so that I can find the link later, since searching over Manifold's post feature doesn't really work, and searching over markets is unreliable. 

The poll is here, closing November 25th https://manifold.markets/NicoDelon/has-in-defense-of-sbf-by-austin-age?referrer=Quinn

Feel free to have discourse in the comments here. 

The article was bad when it was written and it has aged like milk.

Any good literature reviews of feed conversion ratio you guys recommend? I found myself frustrated that it's measured in mass, I'd love a caloric version. The conversion would be straightforward given a nice dataset about what the animals are eating, I think? But I'd be prone to steep misunderstandings if it's my first time looking at an animal agriculture dataset. 
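For what it's worth, the conversion I have in mind is just a unit change; the sketch below uses entirely made-up energy densities as stand-ins for whatever a real feed-composition dataset would say:

```python
def caloric_fcr(mass_fcr, feed_kcal_per_kg, product_kcal_per_kg):
    """Convert a mass-based feed conversion ratio (kg feed per kg product)
    into a caloric one (kcal of feed in per kcal of edible product out)."""
    return mass_fcr * feed_kcal_per_kg / product_kcal_per_kg

# Placeholder numbers only -- swap in figures from an actual dataset.
print(caloric_fcr(mass_fcr=1.8, feed_kcal_per_kg=3300, product_kcal_per_kg=1650))  # chicken-ish guess
print(caloric_fcr(mass_fcr=6.0, feed_kcal_per_kg=2800, product_kcal_per_kg=2500))  # beef-ish guess
```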

I'm willing to bite the tasty bullets on caring about caloric output divided by brain mass, even if it recommends the opposite of what feed conversion ratios recommend. But lots of moral uncertainty / cooperative reasons to know in more detail how the climate-based agricultural reform people should be expected to interpret the status quo. 

Why have I heard about Tyson investing in lab-grown meat, but I haven't heard about big oil investing in renewables?

Tyson's basic insight here is not to identify as "an animal agriculture company". Instead, they identify as "a feeding people company". (Which happens to align with doing the right thing, conveniently!)

It seems like big oil is making a tremendous mistake here. Do you think oil execs go around saying "we're an oil company" when they could instead be going around saying "we're a powering-stuff company"? Being a powering-stuff company means you have fuel-source indifference!

I mean if you look at all the money they had to spend on disinformation and lobbying, isn't it insultingly obvious to say "just invest that money into renewable research and markets instead"?

Is there dialogue on this? Also, have any members of "big oil" in fact done what I'm suggesting, and I just didn't hear about it?

CC'd to lesswrong shortform

This happens quite widely to my knowledge and I've heard about it a lot (but I'm heavily involved in the climate movement so that makes sense). Examples:

  • BP started referring to themselves as "Beyond Petroleum" rather than "British Petroleum" over 20 years ago.
  • A report by Greenpeace found that, on average amongst a few "big oil" businesses, 63% of their advertising was classed as "greenwashing" when only approx. 1% of their total portfolios were renewable energy investment.
  • Guardian article covering analysis by Client Earth who are suing big oil companies for greenwashing
  • A lawsuit by Client Earth got BP to retract some greenwashing adverts for being misleading
  • Examples of oil companies promoting renewables
  • Another article on marketing spending to clean up the Big Oil image

Another CCing of something I said on discord to shortform

If I was in comms at Big EA, I think I'd just say "EAs are people who like to multiply stuff" and call it a day

I think the principle that is both 1. as small as possible and 2. shared as widely among EAs as possible is just "multiplication is morally and epistemically sound". 

It just seems to me like the most upstream thing. 

That's the post. 

It's not really Swapcard's fault if a Salesforce admin at CEA wrote some interaction with a Swapcard API? 

Getting Swapcard to overwrite old data with new data is extremely hard every time I have to do it; it is way worse than having no persistence or autofill at all. 

Or could CEA Events emails come with really bold announcements telling people not to type their bio / paragraph answers directly in the browser and to use a notepad instead? The risk of stupid users like me thinking that the button labeled "keep what I just typed, overwrite the old data with it, do not overwrite what I just typed with the old data" will do what it says on the label seems pretty bad. 

Demoralized cuz I wrote thoughtfully about my current projects, uncertainties, bottlenecks, harebrained schemes et al. and doing it again feels bad :( 

Hi Quinn,

I'm Ivan and I'm responsible for the systems-side of things on the CEA Events team. 

I understand and share your frustration about the confusing User Interface of Swapcard's "Update your details?" modal. This has been confusing for multiple users in the past, and is something I've been pushing Swapcard to improve for a while now (I even sent them an email with a screenshot of the modal we use to do this on the EA Forum — which is magnitudes better)...

If you are referring to data completed via the registration form, I can have this data re-pushed to Swapcard for you so that you don't have to re-type it (note that you'll need to select the "Accept new data" option for this to work). The reason this whole thing exists is because of GDPR and so we can't technically edit user profile data without their consent.

If you encountered the modal after filling out your profile on Swapcard, then please let me know — as this would indicate that the modal is showing at the wrong time, and I can follow up with Swapcard about this.

Hope this helps!

"EV is measure times value" is a sufficiently load-bearing part of my worldview that if measure and value were correlated or at least one was a function of the other I would be very distressed. 

Like in a sense, is John threatening to second-guess hundreds of years of consensus on is-ought? 

post idea: based on interviews, profile scenarios from software (exploit discovery, responsible disclosure, coordination of patching, etc.) and try to analyze them with an aim toward understanding what good infohazard protocols would look like. 

(I have a contact who was involved with a big patch, if someone else wants to tackle this reach out for a warm intro!)

Don't Look Up might be one of the best mainstream movies for the xrisk movement. Eliezer said it's too on the nose to bear/warrant actually watching. I fully expect to write a review for EA Forum and lesswrong about xrisk movement building.

cool projects for evaluators

Find a Nobel prizewinner and come up with a more accurate distribution of Shapley points. 

The Norman Borlaug biography (the one by Leon Hesser) really drove home for me that, in this case, there was a whole squad behind the Nobel Prize, but only one guy got it. Tons of people moved through the Rockefeller Foundation and institutions in Mexico to lay the groundwork for the Green Revolution; Borlaug was the real deal, but history should also appreciate his colleagues. 

It'd be awesome if evaluators could study high-impact projects and come up with Shapley point allocations. It'd really outperform the simple prizes approach. 
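As a concrete illustration of what "Shapley point allocation" could mean mechanically, here's a minimal sketch with entirely made-up numbers (not a real evaluation of the Green Revolution), crediting each party with its average marginal contribution across join orders:

```python
from itertools import permutations
from math import factorial

def shapley_values(players, coalition_value):
    """Average each player's marginal contribution over all orders of joining."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = coalition_value(coalition)
            coalition = coalition | {p}
            after = coalition_value(coalition)
            totals[p] += after - before
    return {p: totals[p] / factorial(len(players)) for p in players}

# Toy model: the project only succeeds with the lead scientist, but funders and
# local institutions scale the impact. Numbers are illustrative placeholders.
def value(coalition):
    if "borlaug" not in coalition:
        return 0.0
    return 40.0 + 30.0 * ("rockefeller" in coalition) + 30.0 * ("mexican_institutions" in coalition)

print(shapley_values(["borlaug", "rockefeller", "mexican_institutions"], value))
```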

Thanks to the discord squad (EA Corner) who helped with this. 

Casual, not-resolvable-by-bet prediction: 

Basically EA is going to splinter into "trying to preserve permanent counter culture" and "institutionalizing"

I wrote yesterday about "the borg property", that we shift like the sands in response to arguments and evidence, which amounts to assimilating critics into our throngs.

As a premise, there exists a basic march of subcultures from counterculture to institution: abolitionists went from wildly unpopular to champions of commonsense morality over the course of some hundreds of years; I think feminism is reasonably institutionalized now but had countercultural roots, let's say 150 years; drugs from weed to hallucinogens have counterculture roots, and are still a little counterculture, but may not always be; BLM has gotten way more popular over the last 10 years. 

But the borg property seems to imply that we'll not ossify (into, to begin a metaphor-torturing sequence: rocks) enough to follow that march, not entirely. Rocks turn into sand via erosion; we should expect bottlenecks to reverse erosion (sand turning into rocks), i.e. the constant shifting of the dunes with the wind. 

Consequentialist cosmopolitans, rats, people who like to multiply stuff, whomever else may have to rebrand if institutionalized EA got too hegemonic, and I've heard a claim that this is already happening in the "rats who aren't EAs" scene in the bay, that there are ambitious rats who think the Ivy League & Congress strategy is a huge turn-off. 

Of interest is the idea that we may live in a world where "serious careerists who agree with leadership about PR are the only people allowed in the Moskovitz, Tuna, SBF ecosystems"; perhaps this is a cue from the Koch or Thiel ecosystems (perhaps not: I don't really know how they operate). Now the core branding of EA may align itself with that careerist ecosystem, or it may align itself with higher-variance stuff. I'm uncertain what will happen; I only expect splintering, not any proposition about who lands where. 

Expected and obligatory citation.

Ok, maybe a little resolvable by bet

A manifold market could look like "will there exist charities founded and/or staffed by people who were high-engagement EAs for a number of years before starting these projects, but are not endorsed by EA's billionaires". This may capture part of it. 

One brief point against Left EA: solidarity is not altruism.

low effort shortform: do pingback to here if you steal these ideas for a more effortful post

It has been said in numerous places that leftism and effective altruism owe each other some relationship, stemming from common goals and so on. In this shortform, I will sketch one way in which this is misguided. 

I will be ignoring cultural/social effects, like bad epistemics, because I think bad epistemics are a contingent rather than necessary feature of the left. 

Solidarity appeals to skin-in-the-game. Class awareness is good for teaming up with your colleagues to bargain for higher wages, but it's literally orthogonal to cosmopolitanism/impartiality. Two objections are mutual aid and some form of "no actually leftism is cosmopolitanism". Under mutual aid, at least as it was taught at the Philly Food Not Bombs chapter back in my sordid past, we observe the hungry working alongside the fed to feed even more of the hungry: you can coalition across the hierarchical barrier between charitable action and skin in the game, or reject the barrier flatly. While this lesson works great for meals or needle exchanges, I'm skeptical about how well it generalizes even to global poverty, to say nothing of animals or the unborn. The other objection, that leftism actually is cosmopolitan, only really makes sense to the thought-leaders of leftism and is dissonant with theories of change that involve changing ordinary people's minds (which is most theories of change). A common pattern for leftist intellectuals to take is "we have to free the whole world from the shackles of capitalism, working-class consciousness shows people that they can fight to improve their lot" (or some flavor of "think global act local"). It is always the intellectual who's thinking about that highfalutin improving the lot of others, while the pleb rank and file is only asked to advocate for themselves. Instead, EAs should be honest: that we do not fight via skin in the game, we fight via caring about others; EA thought leaders and EA rank and file should be on the same page about this. This is elitist to only the staunchest horizontalist. (However, while I think we defer to standpoint epistemology only sparingly, for good reason, it's very plausible that it has its moments to shine, and plausible that we currently don't standpoint epistemology enough, but that's getting a bit afield.) 

What's the latest on moral circle expansion and political circle expansion? 

  • Were slaves excluded from the moral circle in ancient Greece or the US antebellum South, and how does this relate to their exclusion from the political circle? 
  • If AIs could suffer, is recognizing that capacity a slippery slope toward giving AIs the right to vote? 
  • Can moral patients be political subjects, or must political subjects be moral agents? If there was some tipping point or avalanche of moral concern for chickens, that wouldn't imply arguments for political representation of chickens, right? 
  • Consider pre-suffrage women, or contemporary children: they seem fully admitted into the moral circle, but only barely admitted to the political circle. 
  • A critique of MCE is that history is not one march from worse to better (smaller to larger); there are in fact false starts, moments of retrograde, etc. Is PCE the same but even more so? 

If I must make a really bad first approximation, I would say a rubber band is attached to the moral circle, and on the other end of the rubber band is the political circle, so when the moral circle expands it drags the political circle along with it on a delay, modulo some metaphorical tension and inertia. This rubber band model seems informative in the slave case, but uselessly wrong in the chickens case, and points to some I think very real possibilities in the AI case. 

idea: taboo "community building", say "capacity building" instead. 

https://en.wikipedia.org/wiki/Capacity_building 

We need a name for the following heuristic, I think. I think of it as one of those "tribal knowledge" things that gets passed on like an oral tradition without being citeable in the sense of being part of a literature. If you come up with a name I'll certainly credit you in a top-level post!

I heard it from Abram Demski at AISU'21. 

Suppose you're either going to end up in world A or world B, and you're uncertain about which one it's going to be. Suppose you can pull lever L_A, which will be 100 valuable if you end up in world A, or you can pull lever L_B, which will be 100 valuable if you end up in world B. The heuristic is that if you pull L_A but end up in world B, you do not want to have created disvalue; in other words, your intervention conditional on the belief that you'll end up in world A should not screw you over in timelines where you end up in world B.

This can be fully mathematized by saying "if most of your probability mass is on ending up in world A, then obviously you'd pick a lever L such that V_A(L) is very high, just also make sure that V_B(L) is nonnegative or creates an acceptably small amount of disvalue", where V_A(L) is read "the value of pulling lever L if you end up in world A". 
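A minimal sketch of that mathematized version (the lever names, payoffs, probabilities, and tolerance threshold are all invented for illustration):

```python
def pick_lever(levers, p_world_a, disvalue_tolerance=0.0):
    """Among levers that don't create unacceptable disvalue in either world,
    pick the one with the highest expected value. Each lever is (name, V_A, V_B)."""
    p_world_b = 1.0 - p_world_a
    acceptable = [lv for lv in levers if min(lv[1], lv[2]) >= -disvalue_tolerance]
    return max(acceptable, key=lambda lv: p_world_a * lv[1] + p_world_b * lv[2])

levers = [
    ("L_A", 100.0, -40.0),      # great if world A, actively harmful if world B -> filtered out
    ("L_A_robust", 80.0, 0.0),  # a bit worse in A, harmless in B
    ("L_B", 0.0, 100.0),
]
print(pick_lever(levers, p_world_a=0.7))  # -> ('L_A_robust', 80.0, 0.0)
```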

Is there an econ major or geek out there who would like to 

  1. accelerate my lit review as I evaluate potential startup ideas in prediction markets and IIDM by writing paper summaries
  2. occasionally tutor me in microeconomics and game theory and similar fun things 

something like 5 hours/week, something like $20-40/hr

(EA Forum DMs / quinnd@tutanota.com / disc @quinn#9100) 

I'm aware that there are contractor-coordinating services for each of these asks, I just think it'd be really awesome to have one person to do both and to keep the money in the community, maybe meet a future collaborator! 
