This is a special post for quick takes by leillustrations🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I just looked at the application for the role of content specialist for CEA, which seems to involve a lot of work on this forum.

I noticed that if one indicates they have been personally referred by someone 'involved in effective altruism', one is given the option to skip 'the rest of the application' - which seems like the majority of the substantive information one is asked to give. 

This seems overtly nepotistic, and I can't think of a good reason for it - can anyone give one?

The rest of the application also seems to be optional if one indicates that one has *not* been personally referred. Do you get something different?

https://www.loom.com/share/c0ef87a96a1c4d28bfc0df2e48d7662b 

Oh I see, thanks! I didn't realise this because the statement that appears after indicating you've been personally referred is: "Since you were referred to this position, the rest of the application is optional", which makes it sound like it wouldn't be optional if you weren't referred.

I think that the shorthand of "this person vouches for this other person" is a good enough basis for a lot of pre-screening criteria. Not that it makes the person a shoo-in for the job, but it's enough that you can get by on a referral.

You might say this is a strange way to pick people, but this is how governments screen people for national security roles. They check references. They ask questions.

I imagine more questions would be asked of the third party who is 'personally referring' the applicant, leading to a slightly different series of interviews anyway. In my experience, people have to work a lot harder to get a job than to keep one. That was true of everyone who referred me to just about every position. If I perform badly, it reflects poorly on them; but after a certain time I'm the one referring people onwards, so I have to make my own assessment of whether I'm willing to put my reputation on the line.

Yeah, but I think it relies too much on a given applicant's estimate of how well CEA knows the referrer, or how much they trust the connection.

Some reasons could be

a) The purpose of the rest of the questions is to inform the initial sift, not later stages of the application. If you have been referred by a trusted colleague, the optional questions add nothing further to the initial sift, so answering them would be a waste of applicants' time.

b) Saving applicants’ time on the initial application makes you likely to receive more applications to choose from

However, these referrals could indeed have a nepotistic effect by allowing networking to have more of an influence on the ease of getting to stage 2.

I was referred to apply to this job by someone who was close to another hiring round I was in (where I reached the final stage but didn’t get an offer).

I can see that this does not feel great from a nepotism angle. However, as Weaver mentions the initial application is only a very rough pre-screening, and for that, a recommendation might tip the scales (and that might be fine).

Reasons why this is not a problem:

First, expanding on Weaver's argument:

I think that the shorthand of "this person vouches for this other person" is a good enough basis for a lot of pre-screening criteria. Not that it makes the person a shoo-in for the job, but it's enough that you can get by on a referral.


If the application process is similar to other jobs in the EA world, it will probably involve 2-4 work trials, 1-2 interviews, and potentially an on-site work trial before the final offer is made. The reference may get an applicant over the hurdle of the first written application, but it won't be a factor in the evaluation of the work trials and interviews. So it really does not influence their chances too much.
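To make "does not influence their chances too much" concrete, here is a minimal back-of-envelope sketch in Python. All of the stage pass rates are numbers I made up for illustration, not CEA's actual figures, and treating the stages as independent filters is itself a simplification:

```python
# Toy model of a multi-stage hiring funnel.
# All pass rates below are invented for illustration, not CEA's real numbers.

SIFT_PASS = 0.30                         # written application: the only stage a referral affects
LATER_STAGES = [0.50, 0.50, 0.50, 0.40]  # work trials, interviews, final on-site trial

def p_offer(sift_pass: float) -> float:
    """Overall offer probability, treating the stages as independent filters."""
    p = sift_pass
    for rate in LATER_STAGES:
        p *= rate
    return p

# Conditional on clearing the sift, chances are identical with or without a referral:
print(f"P(offer | past the sift) = {p_offer(1.0):.1%}")        # 5.0% either way

# The referral can only move the first stage:
print(f"P(offer), no referral                  = {p_offer(SIFT_PASS):.1%}")  # 1.5%
print(f"P(offer), referral guarantees the sift = {p_offer(1.0):.1%}")        # 5.0%
```

Under these assumed numbers, a referral can at most multiply an applicant's overall chances by the inverse of the sift pass rate, and it never changes how the work trials and interviews themselves are judged.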

Secondly, speaking of how I update on referrals: I don't think most referrals are super strong endorsements by the person referring, and one should not update on them too much. That is, most referrals are not of the type "I have thought about this for a couple of hours, worked with the person a lot in the last year, and think they will be an excellent fit for this role", but rather "I had a chat with this person, or I know them from somewhere, and thought they might have a 5%-10% chance of getting the job, so I recommended they apply".
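A rough way to formalise "one should not update on them too much" is to treat a casual referral as evidence with a modest likelihood ratio. The sketch below is a standard Bayes-rule calculation; the prior and the two conditional probabilities are assumptions I picked so that the output lands in the 5%-10% range described above:

```python
# Bayesian update on a casual referral. All inputs are illustrative assumptions.

prior = 0.02                # assumed base rate: a random applicant gets the offer
p_ref_given_hire = 0.40     # assumed: eventual hires often received a casual referral
p_ref_given_no_hire = 0.10  # assumed: non-hires sometimes received one too

likelihood_ratio = p_ref_given_hire / p_ref_given_no_hire  # 4.0: a weak-to-moderate signal

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"P(hire | casual referral) = {posterior:.1%}")  # ~7.5%
```

On these assumptions, a casual referral moves a 2% prior to roughly 7.5%: enough to justify a closer look at the sift stage, but nowhere near a strong endorsement.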

Other reasons why this could be bad:
1. The hiring manager might be slightly biased and keep them in the process longer than they ought to (however, I do not think this would be enough to turn a "not above the bar for hiring" candidate into a "top three" candidate). Note that this is also bad for the applicant, as they will spend more time on the application process than they should.

2. The applicant might rely too much on the name they put down and half-ass the rest of the application; if the hiring manager does not know the reference, they might then be rejected even though their full application would have been good.

Tab for a Cause - a browser extension which shows you ads and directs the ad revenue to charity - has launched a way to set GiveDirectly as the charity you want to direct your ad revenue to.

It doesn't raise a lot of money per tab opened, obviously, but I'm not using my new-tab page for anything else and find the advertising unobtrusive - it's in the corner, not taking up the whole screen. If you're like me in these respects, it could be something to add.

Thanks for pointing this out; I'll note that Partners in Health is also available, and GiveWell seems to like them but doesn't think they beat the GiveWell charity bar, at least as of when that page was written (https://www.givewell.org/international/charities/PIH#:~:text=Partners%20in%20Health%20provides%20comprehensive,network%20of%20community%20health%20workers.). I'd be interested in seeing anything about whether Partners in Health is a better option than GiveDirectly.
