Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.
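For example, here is a minimal sketch of a programmatic query. It assumes this site exposes the same GraphQL endpoint (/graphql) and `posts` query shape as other ForumMagnum-based sites; the base URL below is a placeholder, so verify both against the live site before relying on this:

```python
# Minimal sketch of fetching recent posts programmatically.
# Assumptions (verify before use): the bot site serves a ForumMagnum-style
# GraphQL endpoint at /graphql, and the schema exposes this `posts` query.
import requests

BOT_SITE = "https://example-bot-site.effectivealtruism.org"  # placeholder URL

QUERY = """
{
  posts(input: {terms: {view: "new", limit: 5}}) {
    results { title pageUrl baseScore }
  }
}
"""

def fetch_recent_posts():
    """POST the GraphQL query and return the list of post records."""
    resp = requests.post(f"{BOT_SITE}/graphql", json={"query": QUERY}, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["posts"]["results"]

if __name__ == "__main__":
    for post in fetch_recent_posts():
        print(f'{post["baseScore"]:>4}  {post["title"]}  {post["pageUrl"]}')
```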

New & upvoted


Quick takes

Kaleem · 3d
EZ#1 The world of Zakat is really infuriating/frustrating. There is almost NO accountability/transparency demonstrated by orgs which collect and distribute zakat - they don't seem to feel any obligation to show what they do with what they collect. Correspondingly, nearly every Muslim I've spoken to about zakat/effective zakat has expressed that their number 1 gripe with zakat is the strong suspicion that it's being pocketed or corruptly used by these collection orgs. Given this, it seems like there's a really big niche in the market to be exploited by an EA-aligned zakat org. My feeling at the moment is that the org should focus on, and emphasise, its ability to be highly accountable and transparent about how it stores and distributes the zakat it collects. The trick here is finding ways to distribute zakat to eligible recipients in cost-effective ways. Currently, possibly only two of the several dozen 'most effective' charities we endorse as a community would likely be zakat-compliant (New Incentives and GiveDirectly), and even then, only one or two of GiveDirectly's programs would qualify. This is pretty disappointing, because it means the EA community would probably have to spend quite a lot of money either identifying new highly effective charities which are zakat-compliant, or starting new highly effective zakat-compliant orgs from scratch.
titotal
I'm a little disheartened at all the downvotes on my last post. I believe an EA public figure used scientifically incorrect language in his public arguments for x-risk, and I put quite a bit of work into explaining why in a good-faith and scientifically sourced manner. I'm particularly annoyed that a commenter (with relevant expertise!) was at one point heavily downvoted just for agreeing with me (I saw him at -16 at one point). Fact checking should take precedence over fandoms.
Ramiro · 4d
Idea for free (feel free to use, abuse, steal): a tool to automate donations + birthday messages. Imagine a tool that captures your contacts and their corresponding birthdays from Facebook; then you make (or schedule) one or more donations to a number of charities, and the tool customizes birthday messages with a card mentioning that you donated $ in their honor and sends each one on the corresponding birthday. For instance: imagine you use this tool today; it'll then map all the birthdays of your acquaintances for the next year. Then you select donating, e.g., $1000 to AMF, and 20 friends or relatives you like; the tool will write 20 draft messages (you can pick from different templates the tool suggests... there's probably someone already doing this with ChatGPT), one for each of them, including a card certifying that you donated $50 to AMF in honor of their birthday, and send the message on the corresponding date (the tool could let you revise it the day before). There should be some options to customize messages and charities (I think it might be important that you choose a charity the other person would identify with a little bit - maybe Every.org would be more interested in this than GWWC). So you'll save a lot of time writing nice birthday messages for those you like. And, if you only select effective charities, you could deduct that amount from your pledge. Is there anything like that already?
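A minimal sketch of the core scheduling/templating logic such a tool would need, assuming an even split of the budget; contact import (e.g. from Facebook) and the actual donation/sending steps are stubbed out, since those depend on third-party APIs not specified here:

```python
# Sketch of the tool's core: map birthdays, split a donation budget,
# and draft a card message per contact. Contact import and the actual
# donation/email calls are stubbed out (they'd depend on external APIs).
from dataclasses import dataclass
from datetime import date

@dataclass
class Contact:
    name: str
    birthday: date  # year is ignored; only month/day matter

TEMPLATE = ("Happy birthday, {name}! In honor of your birthday, "
            "I donated ${amount:.2f} to {charity}.")

def draft_birthday_donations(contacts, total_budget, charity):
    """Split the budget evenly; return sorted (send_date, message) pairs."""
    per_person = total_budget / len(contacts)
    today = date.today()
    drafts = []
    for c in contacts:
        # Schedule for this year's birthday, or next year's if it has passed.
        # (A real tool would also need to handle Feb 29 birthdays.)
        send_on = c.birthday.replace(year=today.year)
        if send_on < today:
            send_on = send_on.replace(year=today.year + 1)
        drafts.append((send_on, TEMPLATE.format(
            name=c.name, amount=per_person, charity=charity)))
    return sorted(drafts)

if __name__ == "__main__":
    friends = [Contact("Alice", date(1990, 3, 14)),
               Contact("Bob", date(1988, 11, 2))]
    for when, msg in draft_birthday_donations(friends, 1000, "AMF"):
        print(when, "->", msg)
```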

Recent discussion

Nick K. commented on titotal's quick take 16h ago

I'm a little disheartened at all the downvotes on my last post. I believe an EA public figure used scientifically incorrect language in his public arguments for x-risk, and I put quite a bit of work into explaining why in a good faith and scientifically sourced manner. ...


That's fair enough, and levels of background understanding vary (I don't have a relevant PhD either), but then the criticism should be about this point being easily misunderstood, rather than making a big deal about the strawman position being factually wrong. Framed that way, it would also be much more constructive than adversarial criticism.

I think part of titotal's point is that it's not the 'strawman' interpretation but the straightforward one, and having it framed that way would understandably be frustrating. It sounds like he also disagrees with Eliezer's actual, badly communicated argument [edit: about the size of potential improvements on biology] anyway though, based on the response to Habryka?

Yeah, I think it would have been much better for him to say "proteins are shaped by..." rather than "proteins are held together by...", and to give some context for what that means. Seems fair to criticize his communication. But the quotes and examples in the linked post are more consistent with him understanding that and wording it poorly, or assuming too much of his audience, rather than him not understanding that proteins use covalent bonds. 

The selected quotes do give me the impression Eliezer is underestimating what nature can accomplish relative to design, but I haven't read any of them in context so that doesn't prove much. 

This is about a personal experience - rescuing a dog on a trip in Mexico - that helped me realize how I wrestle with being effective. 

My girlfriend and I were recently in Mexico. After speaking at a conference, we took two weeks off in Yucatan. We had both been aware...


Perhaps another way to frame it might be to count the time and money outside of your donation bucket? As in, the donation budget is rational/effective, and everything else can be included as part of discretionary spending on personal wellbeing - e.g. the same bucket as hobbies, travel, etc.

Not sure if this might simplify things mentally and guard against motivated reasoning and slippery-slope concerns.

Ulrik Horn · 6h
I mostly agree with the other commenters here, and thought I might perhaps have an additional perspective: I think both your way of being an EA and how your EA friends chose to live are important to EA. I could, when reading your post, feel the pull to divide EA into the "too much" camp and the "EA light" camp, but I think this initial us-vs-them reaction is unnecessary and perhaps even counterproductive.

The more "hard core" among us help the more "EA light" among us (to which I think I might belong) prevent value drift, and encourage us to take helping as much as we can even more seriously. At the same time, I think us "EA light" folks have value to add to the "hard core" faction by making them feel more ok about the occasional "mistake" they might make. I also think us "EA light" folks can help grow the movement, as I think it is much easier to recruit "EA light" people to our causes and organizations than the hard core types. I think movement growth is super important if we want to create a radically better world and play a big part in solving some of the world's biggest problems.

I really hope you and your friends can come to some sort of understanding like this, where you can both live guilt- and judgement-free as EAs and feel like you belong in this movement. For what it is worth, I think you are probably being more than EA enough - just the fact that you thought through this and wrote this post seems to demonstrate that to me. In my book, it is completely ok that EAs act ineffectively, even frequently! I sure do!
Karthik Tadepalli · 7h
In your position, my response to the question "why spend so much on this one dog?" would be "because I wanted to lol". You don't have to justify yourself to anyone, and you don't have to reconstruct some post-hoc justification as to why you did it. I understand that's not a satisfying solution, in that it doesn't preclude a slippery slope into "all of my money goes into arbitrary things that tug my heartstrings and none of it goes to the most effective causes". You may be seeking guardrails against that possibility. There are none. Which is okay, because you probably don't need them! You identify as EA for a reason. I'm going to guess it's because the suffering of people and animals tugs at your heartstrings, even when you don't see them. As long as that's true, it seems extremely unlikely that you will fall off this slippery slope. Moreover, I don't think it's healthy to try to justify all your life choices on EA grounds, a point made best here.

Who the fuck are you?

I run EA's biggest volunteer organisation. We train psychology graduate volunteers to treat mental illnesses, especially in LMICs. To lead by example, I don't take a salary despite working >50 hours per week. To pay the bills, I coach rich people's...


Hello, I'd just like to say that I enjoy your honesty almost as much as your writing style. Keep them both up. 

We are excited to announce a match for donations made towards our operations at Giving What We Can!

Starting December 1st, every dollar donated towards GWWC’s operations will be matched 1:1 up to US$200,000 until the match has been exhausted, or until January 31st 2024, ...


Thanks both. They haven't shared this with us specifically so I can't speak for them. They have been very clear that it is a conditional match.

I'll try updating the wording for clarity.

Johannes Ackva, Megan Phelan, Aishwarya Saxena & Luisa Sandkühler, November 2023      

Context for Forum Readers:

This is the methodological component of our Giving Season Updates, originally published here and leaning heavily on our recent EAGx Virtual...


Thanks for sharing your thinking in a detailed and accessible way! I think this is a great example of reasoning transparency about philanthropic grantmaking, and relevant modelling.

Similarly, the impact of any given policy depends on the quality of implementation, features of the world we do not know before, as well as general political, economic and geopolitical conditions, to name a few. Again, an uncertainty of a factor of 10x seems conservative ex ante.

How are you thinking about adaptation to climate change (e.g. more air conditioning)?

If all the...

This is the summary of the report, with additional images (and some new text to explain them). The full 90+ page report (and a link to its 80+ page appendix) is on our website.

Summary 

This report forms part of our work to conduct cost-effectiveness analyses ...


I have previously let HLI have the last word, but this is too egregious. 

Study quality: Publication bias (a property of the literature as a whole) and risk of bias (particular to each of the individual studies which comprise it) are two different things.[1] Accounting for the former does not account for the latter. This is why the Cochrane handbook, the three meta-analyses HLI mentions here, and HLI's own protocol distinguish the two.

Neither Cuijpers et al. 2023 nor Tong et al. 2023 further adjust their low risk of bias subgroup for publication b...

Gemini Pro (the medium-sized version of the model) is now available to interact with via Bard.

Here’s a fun and impressive demo video showing off Gemini’s multi-modal capabilities:

[Edit, Dec. 8 at 5:54am EST: This demo video is potentially misleading.]

How Gemini compares...

JWS · 6h
Hey Ryan, thanks for your engagement :) I'm going to respond to your replies in one go if that's ok.

#1: This is a good point. I think my argument would point to larger updates for people who put substantial probability on near-term AGI in 2024 (or even 2023)! Where do they shift that probability in their forecast? Just dropping it uniformly over their current probability would be suspect to me. So maybe it wouldn't be a large update for somebody already unsure what to expect from AI development, but I think it should probably be a large update for the ~20% expecting 'weak AGI' in 2024 (more in response #3).

#2: Yeah, I suppose ~80%->~60% is a decent update, thanks for showing me the link! My issue here would be that the resolution criteria really seems to be CoT on GSM8K, which is almost orthogonal to 'better' imho, especially given issues accounting for dataset contamination - though I suppose the market is technically about wider perception rather than technical accuracy. I think I was basing a lot of my take on the response on Tech Twitter, which is obviously unrepresentative and prone to hype. But there were a lot of people I generally regard as smart and switched-on who really over-reacted, in my opinion. Perhaps the median community/AI-Safety researcher response was more measured.

#3: I'm sympathetic to this, but Metaculus questions are generally meant to be resolved according to strict and unambiguous criteria afaik. So if someone thinks that weakly general AGI is near, but that it wouldn't do well at the criteria in the question, then they should have longer timelines than the current community response to that question imho. The fact that this isn't the case indicates to me that many people who made a forecast on this market aren't paying attention to the details of the resolution and how LLMs are trained and their strengths/limitations in practice. (Of course, if these predictors think that weak AGI will happen from a non-LLM paradigm then fine, but...

Thanks for the response!

A few quick responses:

it says 'less than 10 SAT exams' in the training data in black and white

Good to know! That certainly changes my view of whether or not this will happen soon, but also makes me think the resolution criteria is poor.

I think funding, supporting, and popularising research into what 'good' benchmarks would be and creating a new test would be high impact work for the AI field - I'd love to see orgs look into this!

You might be interested in the recent OpenPhil RFP on benchmarks and forecasting.

Perhaps the median commu...