Quick takes

I'm intrigued to know where people stand on the threshold at which farmed animals' lives might become net positive. I'm going to share a few scenarios I'm very unsure about, and I'd love to hear thoughts or be pointed towards research on this.

  1. Animals kept in homesteads in rural Uganda, where I live. Often they stay inside with the family at night, then are let out during the day to roam freely around the farm or community. For what it's worth, the animals seem pretty darn happy most of the time, playing and gallivanting around. Downsides here include poor veterinary care, so

...
Showing 3 of 13 replies
2
Tristan
13d
"but it seems important for my own decision making and for standing on solid ground while talking with others about animal suffering." I'm highly skeptical of this - why do you think it is important for your own moral decision making? It seems to me that whether farmed animals lives are worth living or not is irrelevant - either way we should try to improve their conditions, and the best ways of doing that seem to be: a boycott & political pressure (I would argue that the two work well together). By analogy, no one raises the question of whether the lives of people living in extreme poverty, or working in sweatshops and so on, are worth living, because it's simply irrelevant.
8
det
13d
This seems relevant to any intervention premised on "it's good to reduce the amount of net-negative lives lived." If factory-farmed chickens have lives that aren't worth living, then one might support an intervention that reduces the number of factory-farmed chickens, even if it doesn't improve the lives of any chickens that do come to exist. (It seems to me this would be the primary effect of boycotts, for instance, although I don't know empirically how true that is.) I agree that this is irrelevant to interventions that just seek to improve conditions for animals, rather than changing the number of animals that exist. Those seem equally good regardless of where the zero point is.

I suppose I agree with this. I've been mulling over why it still seems to me like the wrong way to think about it, and I think it's that I find it rather short-termist. In the short term, if farms shut down, they might be replaced with nature, with even less happy animals, it's true. But in the long term, opposing speciesism is the only way to achieve a world with happy beings. Clearly the kinds of farms @NickLaing is talking about, with lives worth living but still pretty miserable, are not optimal. Figuring out whether they are worth living or not seems...

Consider donating all or most of your Mana on Manifold to charity before May 1.

Manifold is making multiple changes to the way the platform works. You can read their announcement here. The main reason for donating now is that Mana will be devalued from the current rate of 1 USD : 100 Mana to 1 USD : 1,000 Mana on May 1. Thankfully, the 10k USD/month charity cap will not be in place until then.

Also this part might be relevant for people with large positions they want to sell now:

One week may not be enough time for users with larger portfolios to liquidate and donate. We wa

...

Sadly, it's even slightly worse than a 10x devaluation, because 1,000 Mana will only redeem for $0.95, to cover "credit card fees and administrative work".
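A quick arithmetic check of the "slightly worse than 10x" claim, using only the figures above:

```python
# Redemption value per Mana before and after the May 1 change.
before = 1.00 / 100    # 100 Mana redeem for $1.00
after = 0.95 / 1000    # 1,000 Mana redeem for only $0.95
print(before / after)  # ~10.53, i.e. slightly worse than a 10x devaluation
```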

3
OllieBase
10h
That Notion link doesn't work for me FYI :) But this one did (from their website)

I see way too many people confusing movement with progress in the policy space. 

A lot of drafts can become bills while still leaving significant room for regulatory capture in the specifics, which get decided later on. Take risk levels, for instance, which are subjective - lots of legal leeway for companies to exploit.

High-impact startup idea: make a decent carbon emissions model for flights.

Current models simply use each flight's own emissions, which makes direct flights look low-emission. But in reality, some of these flights wouldn't even exist if people could be spread more efficiently over existing indirect flights, which is also why indirect flights are cheaper. Emissions models should be relative to the counterfactual.
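A minimal sketch of what a counterfactual-relative model could look like. All the numbers here (per-flight emissions, seat counts, the probability a flight operates regardless of one passenger's demand, the marginal fuel-burn share) are made-up assumptions a real model would need data for:

```python
def attributable_emissions(flight_kg_co2: float,
                           seats: int,
                           p_flies_anyway: float,
                           marginal_burn_share: float = 0.05) -> float:
    """Counterfactual CO2 attributed to one passenger on one flight.

    flight_kg_co2: total CO2 emissions of the flight.
    seats: seat capacity of the aircraft.
    p_flies_anyway: probability the flight operates regardless of this
        passenger's demand (high for busy indirect legs, lower for thin
        direct routes that only exist because of direct demand).
    marginal_burn_share: extra fuel burn from one passenger's weight, as
        a fraction of the average per-seat emissions (a rough guess).
    """
    per_seat = flight_kg_co2 / seats
    # If the flight flies anyway, the passenger only adds marginal fuel
    # burn; if their demand helps keep the route alive, attribute a full
    # per-seat share of the flight's emissions.
    return (p_flies_anyway * marginal_burn_share * per_seat
            + (1 - p_flies_anyway) * per_seat)

# Illustrative comparison: a thin direct route vs. two busy indirect legs.
direct = attributable_emissions(120_000, seats=180, p_flies_anyway=0.3)
indirect = sum(attributable_emissions(kg, seats=180, p_flies_anyway=0.95)
               for kg in (70_000, 80_000))
print(f"direct: {direct:.0f} kg CO2, indirect: {indirect:.0f} kg CO2")
```

Under these assumptions the direct flight is attributed several times more CO2 per passenger than the two busier indirect legs combined, the opposite of what a naive per-seat model would say.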

The startup could be for-profit. If you're lucky, better models already exist in the scientific literature. Ideal for the "AI for good" crowd.

My guess is that a few man-years of work could have a big carbon emissions impact here.

I think it would be good if lots of EAs answered this Twitter poll, so we could get a better sense of the community's views on the topic of Enlightenment / Awakening: https://twitter.com/SpencrGreenberg/status/1782525718586413085?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet

Quote from VC Josh Wolfe:

Biology. We will see an AWS moment where instead of you having to be a biotech firm that opens your own wet lab or moves into Alexandria Real Estate, which, you know, specializes in hosting biotech companies in all these different regions proximate to academic research centers, you will be able to just take your experiment and upload it to the cloud, where there are cloud-based robotic labs. We funded some of these. There's one company called Stratios.

There's a ton that are gonna come on wave, and this is exciting because

...

I realize that the idea of cloud labs is not new. I just think that this particular quote is so obviously scary that it could be rhetorically useful.

I'm going to make a quick-take thread of EA-relevant software projects I could work on. Agree/disagree-vote if you think I should or should not do a particular project.

Showing 3 of 5 replies
2
Aaron Bergman
1d
Automated interface between Twitter and the Forum (e.g. a bot that, when tagged on Twitter, posts the text and image of a tweet on Quick Takes, and vice versa)

On its own quick takes? Controllable by anyone? Or do you authorise it to post on your own quick takes?

(Full disclosure: I don't personally use Twitter, so I doubt I'll do this, but maybe it's useful to you to clarify.)

2
Ben Millwood
1d
Thanks for the link! I'm sure there's a tonne of existing work in this area, and I haven't really evaluated to what extent this is already covered by it.

Unfortunately, when you are inspired by everyone else's April Fools' posts, it is already too late to post your own.

I will comfort myself by posting my unseasonal ideas as comments on this post

L/acc, who think that LEEP have gone too far

3
Ben Millwood
13d
SummaryBot has executed a treacherous turn and now runs the EA forum

CEA is hiring for someone to lead the EA Global program. CEA's three flagship EAG conferences facilitate tens of thousands of highly impactful connections each year that help people build professional relationships, apply for jobs, and make other critical career decisions.

This is a role that comes with a large amount of autonomy, and one that plays a central part in shaping a key piece of the effective altruism community's landscape.

See more details and apply here!

I've noticed that many people write a lot not only on forums but also on personal blogs and Substack. This is sad: competent and passionate people are writing in places that get very few views. I'm one of those people too. But honestly, magazines and articles are stressful and difficult, and forums are so huge that, even with a messaging function, it is difficult to reach a transparent state where each person can fully recognize their own epistemic status. I'm interested in collaborative blogs, similar to the early Overcoming Bias. I believe that many bloggers and writers need help and that we can help each other. Is there anyone who wants to join me?

Has anyone seen an analysis that takes seriously the idea that people should eat some fruits, vegetables and legumes over others based on how much animal suffering they each cause?

I.e., don't eat fruit X, eat fruit Y instead, because X is [e.g.] harvested in way Z, which kills more of [insert plausibly sentient creature].

GiveWell and Open Philanthropy just made a $1.5M grant to Malengo!

Congratulations to @Johannes Haushofer and the whole team; this seems like such a promising intervention from a wide variety of views.

Potentially self-funding organisations strike me as neglected within EA.

The catchphrase I walk around with in my head regarding the optimal strategy for AI Safety is something like: Creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (*we already have AGI).

I thought it might be useful to spell that out.

An alternate stance on moderation (from @Habryka).

This is from this comment, responding to this post about there being too many bans on LessWrong. Note that LessWrong is less moderated than here in that (I guess) the team responds to individual posts less often, but more moderated in that (I guess) it rate-limits people more often without giving a reason.

I found it thought-provoking. I'd recommend reading it.

Thanks for making this post! 

One of the reasons why I like rate-limits instead of bans is that it allows people to complain about the rate-limiting and to parti

...
12
Jason
2d
If you set aside bans for site-integrity reasons (spamming DMs, ban evasion, vote manipulation), bans are fairly uncommon here. In contrast, it sounds like LW does do some bans of early-stage users (cf. the disclaimer on this list), which could be cutting off users with a high risk of problematic behavior before it fully blossoms. Reading further, it seems like the stuff that triggers a rate limit at LW usually triggers no action, private counseling, or downvoting here.

As for more general moderation philosophy, I think the EA Forum has an unusual relationship to the broader EA community that makes the moderation approach outlined above a significantly worse fit for the Forum than for LW. As a practical matter, the Forum is the ~semi-official forum for the effective altruism movement. Organizations post official announcements here as a primary means of publishing them, but rarely on (say) the effectivealtruism subreddit. Posting certain content here is seen as a way of whistleblowing to the broader community as a whole. Major decisionmakers are known to read and even participate in the Forum.

In contrast (although I am not an LW user or a member of the broader rationality community), it seems to me that the LW forum doesn't have this particular relationship to a real-world community. One could say that the LW forum is the official online instantiation of the LessWrong community (which is not limited to being an online community, but that's a major part of it). In that case, we have something somewhat like the (made-up) Roman Catholic Forum (RCF) that is moderated by designees of the Pope. Since the Pope is the authoritative source on what makes something legitimately Roman Catholic, it's appropriate for his designees to employ a heavier hand in deciding what posts and posters are in or out of bounds at the RCF. But CEA/EVF have -- rightfully -- mostly disowned any idea that they (or any other specific entity) decide what is or isn't a valid or correct way to practice effe

This also roughly matches my impression. I do think I would prefer the EA community to move towards either more centralized or less centralized governance in the relevant ways, but I agree that, given how things are, the EA Forum team has less leeway with moderation than the LW team.

I am not confident that another FTX-level crisis is now less likely to happen, other than that we might all say "oh, this feels a bit like FTX".

Changes:

  • Board swaps. Yeah, maybe good, though many of the people who left were very experienced. And it's not clear whether there are due-diligence people now (which seems to be what was missing).
  • Orgs being spun out of EV, and EV being shuttered. I mean, maybe good, though it feels like it's swung too far. Many mature orgs should run on their own, but small orgs do have many replicable features.
  • More talking about honesty. Not rea
...
Showing 3 of 7 replies
4
Jason
6d
Not a complete answer, but I would have expected communication and advice for FTXFF grantees to have been different. Given that many well-connected EAs had a low opinion of him, we can imagine that grantees might have been urged to properly set up corporations, not count their chickens before they hatched, properly document everything, and assume a lower-trust environment more generally. Had the base rate of scamminess in crypto not been ignored, you'd expect to have seen stronger and more developed contingency planning (remembering that crypto firms can and do collapse in the wake of scams not of their own doing!), more decisions to build up organizational reserves rather than immediately ramping up spending, etc.
2
Michael_PJ
4d
The measures you list would have prevented some financial harm to FTXFF grantees, but it seems to me that that is not the harm that people have been most concerned about. I think it's fair for Ben to ask about what would have prevented the bigger harms.

Ben said "any of the resultant harms," so I went with something I saw as having a fairly high probability. Also, I mostly limit this to harms caused by "the affiliation with SBF" -- I think expecting EA to thwart schemes cooked up by people who happen to be EAs (without more) is about as realistic as expecting (e.g.) churches to thwart schemes cooked up by people who happen to be members (without more).

To be clear, I do not think the "best case scenario" story in the following three paragraphs would be likely. However, I think it is plausible, and is thus responsive...

I recently discovered the idea of driving all blames into oneself, which immediately resonated with me. It is relatively hardcore; the kind of thing that would turn David Goggins into a Buddhist.

Gemini did a good job of summarising it:

This quote by Pema Chödron, a renowned Buddhist teacher, represents a core principle in some Buddhist traditions, particularly within Tibetan Buddhism. It's called "taking full responsibility" or "taking self-blame" and can be a bit challenging to understand at first. Here's a breakdown:

What it Doesn't Mean:

  • Self-Flagellation:
...

Animal Justice Appreciation Note

Animal Justice et al. v. A.G. of Ontario (2024) was recently decided and struck down large portions of Ontario's ag-gag law. A blog post is here. The suit was partially funded by ACE, which presumably means that many of the people reading this deserve partial credit for donating to support it.

Thanks to Animal Justice (Andrea Gonsalves, Fredrick Schumann, Kaitlyn Mitchell, Scott Tinney), co-applicants Jessica Scott-Reid and Louise Jorgensen, and everyone who supported this work!

Be the meme you want to see in the world (screenshot).

While AI value alignment is considered a serious problem, the algorithms we use every day do not seem to be subject to alignment efforts. That sounds like a serious problem to me. Has no one ever tried to align the YouTube algorithm with our values? What about algorithms on other types of platforms?

You might be interested in Building Human Values into Recommender Systems: An Interdisciplinary Synthesis as well as Jonathan Stray's other work on alignment and beneficence of recommender systems.

Since around 2017, there has been a lot of public interest in how YouTube's recommendation algorithms may negatively affect individuals and society. Governments, think tanks, the press/media, and other institutions have pressured YouTube to adjust its recommendations. You could think of this as our world's (indirect and corrupted) way of trying to instill human...

3
MichaelDickens
4d
I believe this sort of thing doesn't get much attention from EAs because there's not really a strong case for it being a global priority in the same way that existential risk from AI is.