Update, 12/7/21: As an experiment, we're trying out a longer-running Open Thread that isn't refreshed each month. We've set this thread to display new comments first by default, rather than high-karma comments.


If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)


If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)


Open threads are also a place to share good news, big or small. See this post for ideas.


How do EAs in SF think about local civic action and altruism? San Francisco seems a priori like a place with (a) a lot of EAs and (b) a lot of local problems. Here's a good Atlantic article, worth reading in full, on the problems of SF: theatlantic.com/ideas/archive/2022/06/how-san-francisco-became-failed-city/661199/

 

And for reference here's a post I penned recently in response to the call for EA critiques that emphasizes the importance of local as well as global altruistic action: https://forum.effectivealtruism.org/posts/LnuuN7zuBSZvEo845/why-the-ea-aversion-to-local-altruistic-action

Hi everyone! I'm Hyunjun and I live in Boston. I first came across effective altruism while reading about utilitarianism in college classes, but I just recently heard about this organization. Excited to be here!

Also, a bit of a shameless plug: I'm in the early stages of building a product that makes it much easier for people to invest their money in a socially responsible way while meeting their financial goals. If you've ever felt frustrated when thinking about how your personal investments could better line up with your values and you live in the US, I'd... (read more)

This morning, the stock and crypto markets saw large declines. BTC and ETH have fallen 17-20%.

I guess:

  • This might affect EA spend given the source of EA funds. (But it’s unclear how substantive this is, as spending still amounts to a small fraction of these funds.)
  • There might be a recession (I don’t know how likely this is)
  • In a recession, some EA orgs may benefit from donations to cover shortfalls
  • In a recession, the “talent market” (relevant to EAs where there is a shortage of leaders for new projects and tech talent) might change and talent might become easier to obtain. (Alternatively, you can imagine adverse selection from a “people seeking shelter” sort of thing).

This comment is just meant as "maybe this is relevant news; put it on your radar or something". I'm not really an expert in any of the above.

4
Charles He
FUNDS ARE SAFU: https://fortune.com/2022/06/18/ftx-sam-bankman-fried-coinbase-brian-armstrong-crypto-layoffs/ FTX Strong. If this is not the last crypto cycle, maybe the market is an opportunity for some EAs. Or EAs should help FTX or SBF in some way?

Hi, is there a way to get stats on EA membership and activity by location? I can't seem to find that from the individual local chapter pages, which might be for the best since they'd be a pain to scrape one by one. Ideally there'd be a simple table with chapter location and number of members (total is fine; ideally with a subcategory for a common definition of active members). Does anyone know where one might find such a thing?

How do you practice charity beginning at home? Do any EA folks give a set percentage of their giving locally? Has anyone seen statistics on typical breakdowns? Is the EA-recommended giving percentage 100% to the globally highest-impact charities? An EA member passed along this GiveWell post. It seems very intuitive to me that getting your own life, household and community in order is a good thing. It also seems like the more you get your immediate life and those in it in order, the more you can support people in need further away.

 

3
Henry Howard🔸
A small percentage of my donations go to local organisations. People are liable to interpret EA ideas as saying that their favourite local charity sucks. I want to emphasise to people that while their favourite local charity is great, there are even better giving opportunities out there. I think it's good messaging.
1
Locke
What's small? 1%? 10%? Do you have a sense of how typical your beliefs are in the EA community? I'd be very curious to have this type of question included in a future EA annual survey. It seems the last one was done in 2020, which means that perhaps it's timely for another?

Who are the EA folks most into the AI governance space? I'd be curious to hear their thoughts on this essay on the superintelligence issue and realistic risks: https://idlewords.com/talks/superintelligence.htm

2
Evan R. Murphy
You may have better luck getting responses by posting this on LessWrong with the 'AI' and 'AI Governance' (https://www.lesswrong.com/tag/ai-governance) tags, and/or on the AI Alignment Slack. I skimmed the article. IMO it looks like a piece from circa 2015 that is dismissive of AI risk concerns. I don't have time right now to go through each argument, but it looks pretty easily refutable, especially given all that we've continued to learn about AI risk and the alignment problem in the past 8 years. Was there a particular part of that link you found compelling?
2
Locke
Tbh the whole piece is my go-to for skepticism about AI. In particular, the analogy with alchemy seems apropos given that concepts like sentience are very ill-posed. What would you say are good places to get up to speed on what we've learned about AI risk and the alignment problem in the past 8 years? Thanks much!
7
Evan R. Murphy
I took another look at that section, interesting to learn more about the alchemists. I think most AI alignment researchers consider 'sentience' to be unimportant for questions of AI existential risk - it doesn't turn out to matter whether or not an AI is conscious or has qualia or anything like that. [1] What matters a lot more is whether AI can model the world and gain advanced capabilities, and AI systems today are making pretty quick progress along both these dimensions.  My favorite overview of the general topic is the AGI Safety Fundamentals course from EA Cambridge. I found taking the actual course to be very worthwhile, but they also make the curriculum freely available online. Weeks 1-3 are mostly about AGI risk and link to a lot of great readings on the topic. The weeks after that are mostly about looking at different approaches to solving AI alignment. As for what has changed specifically in the last 8 years. I probably can't do  the topic justice, but a couple things that jump out at me: * The "inner alignment" problem has been identified and articulated. Most of the problems from Bostrom's Superintelligence (2014) fall under the category of what we now call "outer alignment", as the inner alignment problem wasn't really known at that time. Outer alignment isn't solved yet, but substantial work has been done on it. Inner alignment, on the other hand, is something many researchers consider to be more difficult. Links on inner alignment: Canonical post on inner alignment, Article explainer,  Video explainer * AI has advanced more rapidly than many people anticipated. People used to point to many things that ML models and other computer programs couldn't do yet as evidence that we were a long way from having anything resembling AI. But AI has now passed many of those milestones. Here I'll list out some of those previously unsolved problems along with AI advances since 2015 that have solved them: Beating humans at Go (AlphaGo), beating hum

Hello.

Ideas to improve the Effective Altruism movement include:

* include scoring, ranking, and distance measures of the altruistic value of the outcome of all personal behaviors, including all spending behaviors.

* research the causal relations of personal behaviors and the altruistic value of the consequences of personal behaviors.

* treat altruistic value as a relative and subjective metric with positive, null, and negative possible values.

* provide public research and debate on the size and certainty of altruistic values assigned to all common human behav... (read more)

Hi, everyone, I'm Muireall. I recently put down some thoughts on weighing the longterm future (https://muireall.space/repugnant/). I suspect something like this has been brought up before, but I haven't been keeping up with writing on the topic for years. It occurred to me that this forum might be able to help with references or relevant keywords that come to mind. I'd appreciate any thoughts you have.

The idea is that, broadly, if you accept the repugnant conclusion with a "high" threshold (some people consensually alive today don't meet the "barely worth ... (read more)

1
Muireall
I added a more mathematical note at the end of my post showing what I mean by (2). I think in general it's more coherent to treat trajectory problems with dynamic programming methods rather than try to integrate expected value over time.
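A minimal sketch of the contrast being drawn here (my own illustration; the symbols are not from the linked note): integrating expected value scores a trajectory as ∫ E[v(t)] dt, summing per-period value, whereas a dynamic-programming treatment defines the value of a state recursively,

V(s) = max over a of [ r(s, a) + γ · E[ V(s′) | s, a ] ],

where r is per-period value, γ a discount or survival factor, and s′ the next state of the trajectory. Each choice is then evaluated against the whole continuation rather than its marginal contribution to the integral.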
2
Muireall
I'll answer my own question a bit: * Scattered critiques of longtermism exist, but are generally informal, tentative, and limited in scope. This recent comment and its replies were the best directory I could find. * A longtermist critique of "The expected value of extinction risk reduction is positive", in particular, seems to be the best expression of my worry (1). My points about near-threshold lives and procrastination are another plausible story by which extinction risk reduction could be negative in expectation. * There's writing about Pascalian reasoning (a couple that came up repeatedly were A Paradox for Tiny Probabilities and Enormous Values, In defence of fanaticism). * I vaguely recall a named paradox, maybe involving "procrastination" or "patience", about how an immortal investor never cashes in—and possibly that this was a standard answer to Pascal's wager/mugging together with some larger (but still tiny) probability of, say, getting hit by a meteor while you're making the bet. Maybe I just imagined it.

Hi all, this is my first post on the forum and I apologize for the shameless plug, but I recently came into an opportunity to work on a large project focusing on climate change and related emerging technologies, ending with a presentation to the leadership of a fund with ~$50 billion in assets under management and the ability to put reasonable portions of that to work every year.

My influence is likely to be quite limited; however, if anyone has special insight into hydrogen production, green VC firms, carbon storage tec... (read more)

2
Lorenzo Buonanno🔸
Hi Chris! You're probably already aware of this, but Founders Pledge and Giving Green are doing great research on this and might be worth contacting. You might also be interested in the forum posts tagged climate change or climate engineering, and maybe contact their authors or some commenters who seem to be subject matter experts. Good luck on the project!
1
Chris Dz
Good advice, thanks!

Hi everyone, I'm new to the EA community. My husband introduced me here, since I'm facing a career choice dilemma about helping others. I'm currently in tech, but wanted to change to a career in Coaching or Therapy. 

Why the switch: I care deeply about reducing individual human suffering and I enjoy working with people 1:1. I don't see myself in tech for my whole productive years. Causes I care most about: mental health in the workplace, career happiness, and connecting to one's true self.

My dilemma: I'm debating between a career in coaching vs. th... (read more)

4
DavidNash
I would suggest trying coaching first, as it will be much quicker to find out whether you enjoy it / find it impactful, compared to therapy, which could take years before you get a good sense of your personal fit. 80,000 Hours have a section in their career guide on exploration which might be useful here: "Later in your career, if you're genuinely unsure between two options, you might want to try the more 'reversible' one first. For instance, it's easier to move from business to nonprofits than vice versa." It's also worth reaching out to therapists and coaches here to get a better sense of your uncertainties.
1
anssya
Thanks so much for the pointers here! Super helpful

Hi EA community, I've been EA-adjacent for a while, both online and IRL. I saw the request for critiques of the EA movement on Marginal Revolution, which inspired me to come over here and finally sign up. I do have to say, though, that with so many problems in the world today, any effort that's getting people to go forth and do some good in the world is, well, a good thing! So it'll take a bit of work to come up with thoughtful critiques.

By the way, is there an EA member directory? I'd be curious to learn more about why people participate in the movement. Perhaps th... (read more)

4
Lorenzo Buonanno🔸
Hi Locke! I don't think there's a definition of "EA member". There is a list of users of this forum by location, a list of Giving What We Can pledgers, and some profiles on EA Hub. But many people very involved with the movement are not in any of these lists, and there are people in these lists who don't identify as "EA". That's an interesting question; I would make one! You would get new answers, and maybe someone will link to previous threads (I couldn't find any). You might also be interested in reading some posts tagged "Community experiences". There is one: https://forum.effectivealtruism.org/topics/effective-altruism-survey?sortedBy=new but the latest data is from 2020. Actually, now that I look at it, it includes some information on your previous question.

Hi! My name is Dev and I'm 17 years old. I'm a recent high school graduate about to start university in the fall of 2022. Looking forward to interacting here. I'm currently interested in a lot of areas - including global priorities research, AI alignment, existential and s-risks, and energy poverty - but I'm trying to figure out the best path I could take since I'm at quite an early stage in my career. Of these topics, I'd say I'm most well-informed about energy poverty, and I'm currently reading Superintelligence to get a better idea of AI alignment. Not sure what I want to do to have the most impact as of yet, but I welcome anyone who might want to have a conversation.

2
Locke
How'd you hear about the EA forum out of curiosity? 
1
Dev Sajnani
Got introduced to effective altruism by a friend and found the forum on the effectivealtruism.org website. I was a lurker for quite a while before I made this post.
1
[anonymous]
Hey Dev! I noticed you're attending Berkeley for college - just wanted to let you know that the city is a pretty large EA hub, and the university has an active EA club. Feel free to reach out if you'd like to chat more or join our student group slack :)

TL;DR: The EA Forum (EA as a whole?) should be ready for attention/influx due to political money on about a 12-month horizon from this comment (so 2023ish?). So it may be good to design/implement structure or norms now, e.g. encouraging high-quality discussion and using real names.

There is a news cycle going around that SBF will increase political spending for 2024. 

Examples:

... (read more)
3
Charles He
Two examples of newcomers whose presence seems positive or productive: * https://forum.effectivealtruism.org/users/_pk * https://forum.effectivealtruism.org/users/carol-greenough But this doesn't indicate what could happen to forum discussion after an extensive, large deployment of money. It's prudent to think about bad scenarios for the forum (e.g. a large coordinated outside response, or just ~100 outside people coming in and causing weeks of chatter). The best scenarios probably involve a forum which encourages and filters for good discussion (because the hundreds of thousands of people interested can't all be accommodated, and just relying on self-selection from a smaller group of people who wander in probably results in adverse selection). The best outcomes might include bringing in and hosting discussions with great policy expertise, getting EA candidates good exposure, and building understanding and expertise in political campaigning. I guess a bad scenario is maybe 20-30% probable? I guess most scenarios are just sort of mediocre outcomes, with "streetlight" sorts of limitations in discussion, and selection for the loud voices with fewer outside options. Very good scenarios seem unlikely without EA effort. Maybe good scenarios require active involvement and promotion of discussion.

I'm excited.

A lot changes now.

Future is really now.

What's up EAers. I noticed that this website has some issues on mobile devices: the left bar links don't work, text overlaps in several places, and tapping the search icon causes an inappropriate zoom. Is there someone currently working on this for whom it would help if I filed a ticket or reported an issue?

2
JP Addison🔸
Don't worry about finding the perfect place (here is a fine place for now). You can message us about bugs, or post in the Feature Suggestion Thread for feature requests, so that others can vote on the ideas. I'm guessing you use an iPhone? This is a longstanding issue that we really should have fixed; it used to be that you had to tap twice, though now it appears to have broken entirely. Thanks for the report. I see the behavior, thanks.
3
Kevin Lacker
Excellent, sounds like you're on it. I do in fact use an iPhone. I should have made a more specific note earlier about where I saw overlapping text; I can't seem to find it again now. I'll use the 'message us' link for any future minor UI bugs.

Hello, I'm new to this forum. I met a bunch of EA folk in London at the EA Global drinks a couple of weeks ago and have been EA-adjacent for a while, so I'm happy to chat and link up on projects of mutual interest. Most of my personal giving goes to humanitarian and development causes, and I also invest in green tech through crowdfunding platforms.

I'm currently Head of Global Health Communications & Stakeholder Engagement at the UK's National Institute for Health & Care Research (NIHR); previously I spent over 20 years in senior leadership in universities, research institutes, international NGOs, charities and funders, mainly in bioscience, health, and international development. Full career history on https://www.linkedin.com/in/patrick-wilson-323b591b/
I am active in various science communication networks and rationalist/ish groups. I enjoy football and samba. I blog at https://pathfindings.substack.com and I'm currently writing a popular (I hope!) science book on advances in bio-gerontology and the future of humanity. If you want to get a flavour of some of my writing, I just cross-posted a recent blog on https://forum.effectivealtruism.org/posts/h2EaaDchr9QYuKz9z/rabbits-robots-and-resurrection.

Are shortforms supposed to show up on the front page? I published a shortform on Sunday and noticed that it did not appear in the recent activity feed, but older material did.

Also, does anyone else think that the shortform section should be more prominent? It's a nice way to encourage people to publish ideas even if they're not confident in them, but my most recent one has gotten little to no engagement.

4
Lizka
The shortform should in fact appear in recent activity -- not sure what happened there.  And I agree that we should grow and develop low-barrier ways of interacting with the Forum.

Upcoming posts about a not-yet-created EA project or institution called the "EA common application".

I know a writer/”founder” who wrote up documents related to an “EA common application”. 

Importantly, their vision seemed to get serious interest and funding—but they later exited or got kicked off the project[1][2].

I have access to these documents written by this person. 

In the last few months, EAs have asked for these documents to read and distribute to others. Some requests have come from people I have never met. 

There seems to be a lot of interes... (read more)

2
Charles He
(Continued) For my own idiosyncratic reasons, related to this particular project of the common application, it seems bad for me to organize or put people together. Similarly, being a single point of contact, or "holding on to these documents or ideas", seems inappropriate. Yet, with the pressure/sentiment described above, it's irresponsible to do nothing or just sit on the document. So I plan to write up some posts and share the documents. I'll write this all up quickly. The resulting output might be low quality, or confusing to people not engaged or in the "common app headspace" (like 99% of forum users). The truth is that writing about this is pretty hard and orthogonal to founder skill; there's just a lot going on, it's one of the more complex projects, and the content is by its nature opinionated. All this content will be posted on a new EA Forum account, with much more conventional communication norms than the one used here. I'll write a little more below for some context, as I prepare a document.
2
Charles He
Quick, basic overview of EA Common Application (1/2) (Note that the following describes one vision of the common application, and is dependent on founding team preferences and ability. Things will be different, even if everything goes perfectly. The below content might also be wrong or misleading.) Basically, the “common application” is a common point of entry for EAs and talented individuals applying to EA organizations.  Concretely, this would include a website that is used by applicants and EA orgs. It would also become a team or institution that is universally seen as competent, principled and transparent by all EAs. To say it simply, it would be a website that everyone uses and applies to, when working in EA. It’s just the optimal thing to do.   To the organizations and applicants that are users, the common application will be simple and straightforward. But for the founders/creators, achieving this is harder than it sounds, and in the best version, there are (extraordinarily) complex considerations[1].  But as demanding as it is, it’s equally or more valuable to EA. Even in the early stages, the value of the common application includes: * A streamlined, common place for thousands of talented people looking to contribute or work at EA orgs, as well as a competent institution that provides services, advice and standards to EA organizations. * A central place that provides insights about EA recruiting (like this, but automatically for everyone, at the time), and observes and can intervene in bad outcomes ("really hard to find an EA job"). * The common application can coordinate with EA, responding to gaps as well as surpluses for talent, for example by creating grants or special programs to keep talent from bouncing off, or coordinating with headhunting or hiring agencies to fill gaps. 1. ^ To see this: * One of the key powers of the common application is sharing applicant interest and progress among organizations, e.g. there might
2
Charles He
Quick, basic overview of EA Common Application (2/2) The bread and butter of the common application is the day-to-day work to get operations running smoothly and build expertise and trust among EA orgs and applicants. While much of this seems mundane, just the basic operations and having experienced, trusted staff perform friendly check-ins with talented candidates is important (I think the focus might be on engaging and retaining highly talented "liminal" EAs, as opposed to existing highly-engaged EAs). It is key to have founder(s) who respect and will execute this unglamorous work. That being said, in the later stages (year 2 and after), the common application can provide enormous and unique value: * Working as a servant to EA organizations, the common application can develop assessment, screening and guidance tools for candidates and organizations that make EA organizations recruit more effectively and provide confidence and insight for EAs in their job search. * The common application can go far beyond streamlining recruiting, bringing strong candidates into EA, and make better matches for existing talent, for example, creating new roles, catching candidates who might bounce off EA, and building up deep pools of talent beyond any single job search. * This activity in the common application will provide a way to further develop and grow the pool of EA "vetting" and communication that is important for EA scaling, supporting existing strong EA culture, norms and institutions. This vision of the common application is unusual. It's hard to think of any other movement that has an institution like this. In later stages, some of the ideas, methods and practices could be groundbreaking. The previous writer/"founder" had interest from professors at Stanford GSB, Sloan/MIT and Penn State, as well as other schools, who expressed interest in working for free, studying and developing methods (market design, assessment) for this common application (because the
5
Greg_Colbourn ⏸️
Would it be fair to say that Triplebyte is a similar thing for the software engineering industry?
2
Charles He
I don't fully understand Triplebyte, but the common application seems more extensive in functionality.  I expect EAs who create a common application to believe they can achieve closer and more effective coordination between EA organizations than many portals or job search sites. For example, (in one vision of the common application) with the consent of organizations and explicit agreement by candidates, organizations can share (carefully controlled, positive) information about candidates who don't end up accepting a job offer, or share other expertise or knowledge about hiring or talent pools they come across. I think this post, and future, not yet posted content, by the account "che" will be more explicit and clarify the role and value of a common application.

Hi All, 

Just introducing myself! I've been an advocate of EA for a number of years but I'm new to the forum. I've spent a while reading though various posts and it's great to see a forum with such a reasonable, open minded and friendly tone. 

Like most people here I'm really interested in how humanity responds to existential threats (e.g. climate change) and global living standards (e.g. economic development in poorer regions). My background has been working in a start up - so I feel very comfortable starting projects, getting things off the ground, discovering something doesn't quite work and then consigning it to the failure list :P 

If anyone has a great idea that they want help getting off the ground then I'd love to hear from you. I'm hoping to have more free time to devote to projects soon as I'm leaving my job as a Financial Director to go back to university to retrain as a computer scientist :)  

1
Dem0sthenes
Hi Stephen! Thanks for the post. What are the typical frameworks that you use to think about existential threats? Sometimes, for instance, we use probabilities to describe the chance of, say, nuclear Armageddon, though that seems a bit off from a frequentist philosophical perspective. For example, that type of event either happens or it doesn't. We can't run 100 high-fidelity Earth simulations, count the various outcomes, and then calculate the probability of various catastrophes. I work with data in my day job, so these types of questions are top of mind.
3
Stephen Beard
Hi Dem, I don't really have a defined framework for thinking about existential threats. I have read quite a lot around AI, nuclear weapons (Command and Control is a great book on the history of nuclear weapons) and climate change. I tend to focus mainly on the likelihood of something occurring and the tractability of preventing it. On a very high level I've concluded that the AI threat is unlikely to be catastrophic, and until a general AI is even invented there is little research or useful work that can be done in this area. I think the nuclear weapons threat is very serious and likely underestimated (given the history of near misses it seems amazing to me that there hasn't been a major incident), but it is deeply tied up in geopolitics and seems highly intractable to me. For me that leaves climate change, which has ever-stronger scientific evidence supporting the idea that it will be really bad, and there is enough political support to allow it to be tractable - which is why I have chosen to make it the area of my focus. I also think economic development for poorer countries (or the failure to achieve it) is a huge issue on a similar scale to the above, but again I believe that it's too bogged down in politics and national interests to be tractable.
1
Dem0sthenes
Yes that makes sense and aligns with my thinking as well. Do you have a sense of how much the EA community gives to AI vs nuclear vs bioweapon existential risks? Or how to go about figuring that out? 
2
Linch
Until recently, the vast majority of EA donations came from Open Philanthropy, so you can look at their grants database to get a pretty good sense.
1
Locke
Do the Doomsday Clock and the Bulletin of the Atomic Scientists come up much in EA? I'm a bit new to this scene. https://thebulletin.org/ Jerry Brown's warnings about nuclear Armageddon and the slowly building climate tidal wave have definitely turned me on to that organization. Where do you see the opportunity to make a difference in the decarbonization effort?
1
Stephen Beard
Hi Locke - I'm not 100% sure how seriously nuclear Armageddon is taken in the EA community, as I'm also pretty new. I'm just starting a piece of research to try and highlight where specific decarbonisation opportunities will be found (focused on a specific country - in my case Canada). Even though I haven't started, I strongly suspect the answer will be agriculture, as it accounts for a very large proportion of emissions, there are many proven, scalable, low-cost solutions, and it seems to me to be very neglected from a funding point of view (I say that based on some brief research I did on the UK) compared to other areas like electric vehicles and renewable energy.

Are there statistics on this forum somewhere? Particularly the distribution of votes over posts?

6
Lorenzo Buonanno🔸
Hi Emrik! Is this what you're looking for?  https://effectivealtruismdata.com/#post-wilkinson-section  https://www.effectivealtruismdata.com/#forum-scatter-section
2
Emrik
Exactly! Thanks a lot.

Hi everyone!

It's been a while since I started my research on how to donate cost-effectively. That journey led me to GiveWell, The Life You Can Save, Animal Charity Evaluators and, eventually, to the EA community. I am so grateful for all the valuable resources, tools and concepts I could find thanks to the effective altruism movement. This has allowed me to start refining my mindset to maximise the positive impact not only of my donations, but of all my actions.

However, I have not found any way to donate tax-efficiently from my country (Spain). The chariti... (read more)

2
Lorenzo Buonanno🔸
Hi Liam! I usually look at this table for country-specific tax-deductibility opportunities https://donationswap.eahub.org/charities/ It seems that among the listed charities only Animal Ethics and Oxfam are tax-deductible in Spain :( You can try the donation swap (not sure how responsive they are), and of course keep in mind that donating effectively does not necessarily imply donating tax-deductibly, but you probably already thought about that.
1
Liam McHara
Hey Lorenzo, thank you for your reply! What a pity that Animal Ethics and Oxfam are the only tax-deductible listed charities in Spain :/ Lately I have been seriously thinking about starting a Spanish platform like RC Forward. I may write a post soon about that idea, asking for the community's feedback.

Hello!,

I chose a pseudonym (-dunce scout), as I'm starting a blog with the same name. There isn't a popular blog (or one that I know of) that talks about simple/big ideas like LessWrong, SSC, or the EA Forum around here. (I'm based in Kerala; I'll write mostly in ENG, maybe both ENG/MAL for region-relevant posts? Then again, typing MAL is hard.)

The blog will be a guide/map to these sites. Occasionally, I'll digest the larger/more complex posts in an original way, or maybe write/think on simple things and show a new way to think them through.

I somehow stumbled upon LessWrong and added it to my bookmarks. (This was maybe through StumbleUpon when it was free and available on the Chrome Web Store; I think I was in 5th/6th grade when that happened.) I never read it, though. When covid/online classes happened, I got time. I started with Rationality: A-Z since the posts had catchy headings, but soon realised that most posts were going over my head. Then, after a week or so, I started with the Codex and really enjoyed reading it (except for the "much more than you need to know" series). I did read some of Rationality: A-Z, but not to completion. Enjoyed HPMOR, Replacing Guilt by Nate Soares, and a few other posts on LessWrong b... (read more)

Hello! I am here to get feedback on a blog post I wrote recently (Wild Animal Suffering Should be Effective Altruism's Flagship Cause (substack.com)). I wrote it for my blog, but I ended up emailing openphil for feedback, and a rep told me to go ahead and share it here.

A summary of the article is that wild animal suffering will become much more relevant as it becomes correlated with certain engineering problems such as ecosystem design and microbiome control, and that this gives it desirable properties as a future "flagship". Therefore, we should invest in popularizing i... (read more)

Hi everyone! 

I've known about the ideas behind EA for a while now, but have just recently become aware of how much concrete organizing is going on and how many resources the movement now has.

I've got academic training in a lot of skills that are useful to EA organizations, such as cost-effectiveness analysis, decision science, and preference elicitation. My reading in the EA literature has given me a few ideas about how I might some day put those skills to work for this cause. I'm definitely open to research and project collaborations if you think I might be useful to you -- or even if you just want someone to brainstorm with!

Somewhat new EA here - I'm thinking of wearing EA gear at an upcoming livestreamed collegiate poker tournament. Any thoughts on whether that's a good idea? Seems good for the EA brand as long as I don't do/say anything too out of line (?)

Thoughts on how to talk about EA to other competitors/interviewers would be much appreciated too

Also a disclaimer that I don't expect to do very well on the tournament hahaha, I'm a pretty recreational player

7
Yhw
Update: I made it on akaNemsko's Twitch stream (287k followers!) with my EAGxBoston shirt! https://www.twitch.tv/videos/1457837512?t=02h28m34s
[anonymous]

Hello! My name is Garrett, and I am from Seattle, Washington. I have been involved in EA for about a year and was introduced to it by my closest friend while at school. He and I have both always been directly involved in humanitarian aid projects around the world for most of our lives (it's how we met, actually), and after returning from a service trip in Lesbos where he had been shaken by the suicide of a small child there, he began to wonder about the effectiveness of his efforts. This then put him on the road to finding EA. When he ran across it, he shared it with me, and I immediately fell in love with everything about EA. I was the director of the university's service department at the time, responsible for activities involving hundreds of students, and was frustrated with what I perceived to be inefficient and ineffective university policies governing funding and activity options. EA was simply too relatable to pass up. I've been heavily involved ever since, although my schooling has prevented me from attending many of the conferences that I wish to attend one day in order to make more of your acquaintances. Until then, I am happily engaged in furthering the ... (read more)

https://www.nytimes.com/2022/04/10/business/mackenzie-scott-charity.html

 

This seems like a great article and thought provoking:

  • There's a lot of attention on meta EA and EA money. The FTX grants, which might total ~$100M in a year, seem big. These grants are extremely important for the cultural effects and could be enormously impactful. 
  • Scott moved out $8.6 billion last year. If just 10% of that was directed toward very impactful causes, what would the value of that be?
  • Did Scott or her staff encounter EA? Did this happen, and if so, what did they
... (read more)

Hi everyone. I'm a therapist & academic philosopher based in Boston. I do individual therapy  and also teach philosophy at Bentley University. Further info here: https://www.jmaier.net/about-me.html

I look forward to hearing more about ideas/suggestions about how to direct my own giving. I have a strong interest in promoting effective mental health interventions at scale. I've written about this a bit in a blog for Psychology Today: https://www.psychologytoday.com/us/blog/philosophy-and-therapy. Looking forward to learning from folks on this forum.

8
Lorenzo Buonanno🔸
Hi John! You might be interested in the work of the Happier Lives Institute, they have a donation advice page https://www.happierlivesinstitute.org/donation-advice.html You can also see all forum posts tagged as "mental health" here: https://forum.effectivealtruism.org/tag/mental-health
2
John_T_Maier
Thank you, Lorenzo, this is really helpful. I'm familiar with the Happier Lives Institute and the very important work that they're doing. Looking forward to learning more.

Hey everyone! I just joined EA a few months ago and was very fortunate to attend EAGxBoston recently! I could not be more excited about discovering this community!!!

I'm doing two fellowships and working on a marketing project team in my university EA group, EA USC.

I feel very strongly about utilitarianism and am interested in physics, and as a result came to longtermism several years ago on my own. I actually wrote a book called “Ways to Save The World”, essentially about innovative broad strategies to sustainably and systemically reduce existential risk. Really excited to share it with the EA community and have my ideas challenged and improved by fellow highly intelligent, rational do-gooders!

Hi all! I'm new to the EA forum. My husband's been involved in EA for years, and I am finally in a place to want to join in as well. Specifically, I'm an efficiency consultant, specializing in operations and productivity improvement. I would love to take my talents to the EA world to make charities and the people involved more impactful.

Hello! I am the Affective Altruist, and I am building a little dating website for EAs. I'm starting off with WordPress to keep it simple. Consider this a fun side project of mine, and friendly competitor to reciprocity.io.  ^_^

Is anyone interested? What simple features would you want from such a site? Should I make a top-level question asking this?

My general advice for people building projects that require network effects is to think about how to capture 100% of a small market before you try to tackle the entire market. Peter Thiel has written about this dynamic in Zero to One. Can you get all EAs in your city/region, perhaps?

9
Affective❤️Altruist
Yeah, I've read in another book, The Cold Start Problem by Andrew Chen, that to form an atomic network you should think even more specifically than you normally would. I was considering EA as a kind of niche, but it makes sense that people generally want to date others constrained by location. Though early adopters might care a bit less if they're willing to travel or have online relationships?

I had a conversation with my partner yesterday about how we want to do good better, but at the same time nobody can do 100% and taking care of yourself is important. She described to me a concept that is a simple but important change from how I have understood EA, and I'd like to share it. While I normally thought of doing good better as

devoting more resources toward highly impact efforts

what she described was

using whatever amount of resources you are going to use for good and making sure those resources are having the greatest impact.

This isn't a ... (read more)

4
Bary Levy
For me, knowing my giving is effective makes me more confident to give more. Before learning about EA I never considered donating 10% of my income, because I never thought it would be so helpful, and I saw charity as something I was sometimes obliged to donate small amounts to.
2
Guy Raveh
I look at it this way: EA is about maximising the total amount of good you do over your lifetime. If you can do lots of good right now but it will tear you down - you may not be more impactful overall by doing it.

Hi everyone! I generally go by Velociraptor online, but if you find that too silly, please call me Lu. I had a pretty awful experience burning myself out trying to do too much volunteer work during the peak of covid, and when I was seeking more reasonable and high-impact ways to return to helping, I stumbled across effective altruism a few months ago. The ideas have really appealed to me, although I'm still uncertain about some aspects (mostly the global focus; I'm generally a proponent of local efforts, as participants tend to have more in-depth knowledge ... (read more)

2
Guy Raveh
If you're from an affluent community or country, there's a trade-off between doing things you strongly know to be good (because you're local), and helping the people who are the least fortunate (who are nowhere near you). A solution might be finding ways to elicit local knowledge and help with impactful work in other places (the Global South, the future, factory farms etc.).

Hi Everyone, this is my introduction post. I've put some info in my bio, so I'll elaborate on it here. You can find out a little more about me here https://snlawrence.com/.
I was introduced to EA through an interview with William MacAskill on Sam Harris' meditation app, Waking Up. In the interview, William mentioned 80,000 Hours, which I then googled. I began reading through their key ideas and career review articles and was quickly convinced of the value of doing impactful work over my career. The articles are well written, well researched and very hon... (read more)

1
rass
Hi Sean, we met online last year through 80,000 Hours - nice to see you on the forum! Let's keep the conversation going; I'm in a similar boat, looking to maximise exploration value over the next 24 months - keen to trade ideas.

Hi, call me Rahela. I work at Anima International and Open Cages PL as an IT manager. In my free time I write a personal blog about animals, effective helping, ethics and life in the countryside. I also host a podcast about similar topics. You can find me here: https://hodowlaslow.pl/.

I found EA thanks to my colleagues from Anima International. Before that I worked for 13 years in the fashion industry as a designer, thinking every day, "What am I doing here?" It took me a long time to become pragmatic rather than fanatic (I was a radical vegan 4 years ago).

 You can contact me about some fundraising topics and IT if you need some help. 

I love meditation and cats. Try to meditate with 3 cats!  Feel free to contact me. 

Thoughts/comments on a potential new series of posts ("Gates are Open, Come In")?

 

Someone I know has benefited a lot from interactions with major EA funders (for reasons that aren't clear, the funders just seem communicative and benevolent).

This person is thinking of writing up a series of posts about their experiences, in a positive, personally generous way, to provide value and insight to others. 

They would share actual documents (they wrote) as well as describing their views of communications and key points that seem important to their interacti... (read more)

1
DC
This looks like a great idea!

The new effectivealtruism.org homepage looks fantastic.

3
Locke
Out of curiosity, what's the logic behind making those graphs the center and focus of the homepage?

It does, but why is CEA capitalizing "effective altruism" now? 😕

2
Rahela
Wow, I didn't even know that there is a new design. Looks really good. 

Hello everyone,
I'm a PhD student using non-invasive brain stimulation to enhance human attention. I'm convinced that using non-invasive brain stimulation to enhance human intelligence has massive potential in improving productivity across the global economy. 

Unlike its productivity-enhancing counterparts (invasive brain stimulation and artificial intelligence) it is vastly underfunded, making it an ideal target for effective altruism!

Compared to current AI, human intelligence is already general, so enhancing it can be applied to all aspects of society.... (read more)

1
Luca Parodi
Hi Jack. I am really into cognitive enhancement. In 2020 (right before COVID) I did a two-month research period at Bernhard Hommel's cognitive enhancement lab in Leiden. While I was a Cognitive Science student in Milan I took an exam with Roberta Ferrucci and one with Alberto Priori, two prominent experts on tDCS as a cognitive enhancer. At the last EAGxOxford I spoke with Anders Sandberg about cognitive enhancement as an EA cause area. All to say that I am interested in what you are doing, and that it could be valuable to connect more people who are into "serious" cognitive enhancement research (i.e. not risky and unproven biohacking shit).
1
Jake Toth
Hi Luca, that sounds really interesting; it is good to hear from others in this space! I have connected with you on LinkedIn, and hopefully we can find a way to work on this together in the future.
2
Aaron Gertler 🔸
Ahead of the full post, I'd like to know what you think the most compelling evidence is for non-invasive brain stimulation actually working. This could be a paper, a blog post from some self-experimenter, or something else — whatever made you think this was important to study further. (I know nothing about this topic at all, and don't even have a mental picture of what NIBS would physically look like.)
1
Jake Toth
Thanks Aaron, I will make sure to include this information, but hopefully this will help in the meantime: Non-invasive brain stimulation is any method of causing brain activity to change without surgery. This can include using electrodes to apply a small amount of current to the scalp with a headset like this: https://www.neuroelectrics.com/solutions/starstim Creating a magnetic field in the brain with a device like this: https://www.healthline.com/health/tms-therapy#What-is-TMS-therapy? Or by using ultrasound waves with a device that looks something like the image here: https://www.semanticscholar.org/paper/Technical-Review-and-Perspectives-of-Transcranial-Yoo/c26b8b3655561cfb24dfb262d4fbf5ad76bc6867 The electrical and magnetic stimulation methods are well established, with decades of research covering tens of thousands of participants and proven safety profiles. The magnetic method is too bulky for a consumer headset, and the electrical method has issues with reliability across subjects (my research plays a small part in helping to address this). The ultrasound method is newer, but with the promise of much more accurate stimulation. Without going too deep into the technical challenges that remain, I think an electrical-stimulation-based headset that increases intelligence significantly could be available to consumers within 5 years, with an ultrasound-based headset superseding that once the research is more firmly established.
2
Charles He
Can you explain why this technology/approach is so underfunded/neglected, when some implementations seem simple/benign, and the benefits seem large?
2
Jake Toth
Great question. I think it's largely because the implementation wouldn't be as simple as it may first appear, so relatively deep pockets are required. Also, the number of researchers in this field is pretty low (low thousands?). It's still much simpler than invasive stimulation (e.g. Neuralink), but not something that can be implemented overnight. The easiest headset to initially implement would use electrical stimulation, and there are devices on the market that use electrical stimulation, for example this one for depression: https://flowneuroscience.com/ The issue is that we all have differently shaped heads, skull thicknesses, brain shapes, etc., and this can lead to up to a 100% difference in the electric field in the brain: https://www.sciencedirect.com/science/article/pii/S1935861X19304115. To phrase that differently, because our brains and heads are different, giving two people the same stimulation can mean one has improved intelligence and the other does not. But luckily there is a way around this, namely taking an MRI scan of the user's head, simulating brain stimulation, then personalising the stimulation to their head and brain. This essentially gets rid of much of this variability between people by accounting for the different shapes of the head and brain. The issue of course is that we can't go and have an MRI scan when we buy this headset; it's expensive, time-consuming, and doesn't scale across the population. This is where the field has sat for a few years: have personalised stimulation at great expense, or don't have it and get poor results. Most research groups cannot afford to put every participant through an MRI, so most research on this topic has poor results. Instead, a prospective startup needs to find a way to personalise the stimulation without an MRI scan. One way is to use AI to generate an MRI scan based on the shape of the person's head, their demographics and maybe even their DNA (see https://developer.nvidia.com/blog/kings-college-londo

Hi! Long-time listener, first-time caller. I currently work in operations in higher ed, and I just know I could be doing the same exact job in the EA community while making much more of an impact and having more opportunity to test my skills and grow into related fields. I actually just applied for a position at CEA, which would be a dream! I'm curious if anyone else from the community came into EA from student affairs or enrollment management, and if so, what are you doing now and how was the transition?

👋 I'm Seth Ariel Green, I mostly write here: https://setharielgreen.com/blog/, I'm a freelance writer currently based in New Orleans, about to go finish up a thru-hike of the Appalachian Trail that I mostly completed last year. Long-time lurker, might start posting, looking forward to getting into it with y'all

4
JP Addison🔸
Welcome! Props for that accomplishment. Our editor decided to interpret the comma after your link as part of your link. I fixed it for you, I hope you don't mind.
1
Seth Ariel Green 🔸
TY TY!

I like the new colored icons on posts with certain tags (e.g. Farmed animal welfare, Existential risk) 😀

6
JP Addison🔸
Thanks, Evelyn!

Hello everyone, my name is Emre. I am the co-founder and director of Kafessiz Türkiye, a farmed animal advocacy organisation in Turkey. Looking forward to learning from you all!

Hello everyone!

I am a human rights activist from Russia. I work as a ML scientist at a medical tech startup in Germany. When the war with Ukraine started 8 years ago, I decided to record an antiwar video as a reply to Ukrainian students. It was my first time trying to organize a protest, and it was way scarier than just participating. What if one of the students got expelled for this? What if at the rally I'd organize in their support someone got accused of hitting a cop? Suddenly it looked like my little initiative could turn into a years-long nightmare. I decided to do it and was very glad to discover that an Open Russia journalist had the same idea and we could merge our efforts.

No one got in trouble for the recording, but it didn't change anything, either. So I went looking for more effective ways to help Ukraine and free my own country. As protests in Russia dwindled, I decided that building a friendly AI was my best bet. I got into machine learning, read most books on MIRI's reading list and was in the middle of a MIRI interview when COVID struck and they stopped hiring programmers. My plan no longer called for staying in Russia, so I moved to Germany last year, to stop supporting Putin's war and oppression with my taxes.

Hello there !

I'm David, 31, French, father of 2 - recently moved to Madagascar.

I would be really interested to get in touch with EA community members in Madagascar. I also believe there's an opportunity to spread the movement here, given that the poverty and inequality issues are really tangible.

Currently, I hold the role of Chief Technology Officer at Baobab+, a social business aiming to enable access to energy, digital and finance products. We distribute our products in rural areas in Africa and sell all of them on a "pay as you go" basis (similar to leasing) to make them affordable to most people (typical cost < 0.5 USD/day).

Customers who prove their trustworthiness with good repayment enter a virtuous circle and get access to larger products (e.g. a basic phone or a fridge) or loans.

I would be thrilled to study a bit closer the impact we're having compared to other initiatives.

Why I Am (Not) a LongTermist

I am copy and pasting my newest endeavor to meditate on the meaning of "long-termism." https://whatiscalledthinking.substack.com/p/why-i-am-not-a-long-termist?s=w

1.

The Long-Term is like the Maimonidean conception of God—you know it when you don’t see it.

2.

The Divine Face, like the distant future, is hidden. But Moses is permitted to see the back of God’s face. Similarly, today’s super-forecasters cannot know the future, but they can see the back of the future.

3.

Of God we know nothing, says Franz Rosenzweig, but our ignorance is ... (read more)

Hi all! Recently found this community and I'm really impressed with the discourse here!

This is kind of meta and not about EA per se, but from a community-builder's perspective I was wondering how this forum is moderated (self or otherwise), and how it was built up to such a vibrant space! Are there other forums like this (I know lesswrong runs on a similar-looking community blogging model)? Have there been any moderation challenges? 

I read through some of these posts (https://forum.effectivealtruism.org/tag/discussion-norms) but would appreciate any o... (read more)

Hello. At age sixteen, some combination of debating a pastor about universalism, visiting worship centers of various faiths, and Rick and Morty killed my religion. With nothing remaining that seemed worthwhile, I booked a ticket to Singapore and began wandering around odd destinations for the next few years in variable states of despair. I tried to construct a new sense of meaning through pragmatic mythicalism, the idea that untestable ideas can still be believed in based on their utility. I decided it would be useful to believe that the well-being of people is worth fighting for, but still felt miserably alone.
Then I discovered EA - or rather, it discovered me as I was ranting half-crazed to someone about the Fermi paradox and great filters, to which they replied, "Oh yeah, those are called existential risks in effective altruism," to which I replied, "What the HELL is effective altruism?"
Then there was no turning back. The concept that a community exists with such a purposeful drive to improve lives gave me a rope to grasp as I clawed my way back to life like it matters. The ideology granted me a beacon to strive towards, but lacking interaction or connection with the communi... (read more)

[anonymous]

Hello everyone! I'm a member of the Polish EA community. Over the last few days we've witnessed an outpouring of support for Ukraine, which is amazing. But amid the information overload, both donors and those in need may find it difficult to single out credible forms of help.

We’re aiming to create a database of verified information to make sure people can make the biggest impact when donating.

This FORM allows those of you who have information about existing initiatives to submit them for our evaluation. Please spread it in your groups / communitie... (read more)

Sam Harris and Rob Reid just put out this podcast that seems very relevant to this community:

[The After On Podcast] 58: Recipes for Future Plagues | Kevin Esvelt #theAfterOnPodcast

https://podcastaddict.com/episode/136135023 via @PodcastAddict

Basically, the US government is trying to find all the pandemic-capable viruses it can, and it will then POST THEIR FULL GENOMES ONLINE.

This is potentially a catastrophically stupid blunder that we intend to make but have not made yet. The recommended actions from Rob are: tell USAID directly at https://www.usaid.gov/contact-us; tweet at them; if you live in a state with a senator on the Subcommittee on State Department and USAID Management (https://www.govtrack.us/congress/committees/SSFR/14), contact your senator; contact Washington State University if you have a relevant tie; and otherwise spread this, get attention, and apply whatever leverage you have.

Twitter thread from Kevin Esvelt (professor at MIT, speaker at EA global on mitigating catastrophic biorisks):
https://twitter.com/kesvelt/status/1498409798903209996

2
JMonty🔸
Here are some very well-done podcast notes if you prefer text to audio: https://docs.google.com/document/d/1ORM6XjEQCycmzBrCt_D3nyl5O_fNPGwS3kYpAAy364c/edit?usp=sharing

(X-posting from LW open thread)

 

I'm not sure if this is the right place to ask this, but does anyone know what point Paul's trying to make in the following part of this podcast? (Relevant section starts around 1:44:00)

Suppose you have a P probability of the best thing you can do and a one-minus-P probability of the worst thing you can do, what does P have to be so it’s the difference between that and the barren universe. I think most of my probability is distributed between you would need somewhere between 50% and 99% chance of good things and then put some

... (read more)
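For anyone else puzzling over that passage, here is a minimal sketch of the arithmetic it seems to be gesturing at, assuming the comparison point is a zero-value "barren" universe and writing V_best > 0 and V_worst < 0 for the values of the best and worst outcomes (the notation is an assumption of mine, not from the podcast). The gamble breaks even with the barren universe when:

P \cdot V_{\text{best}} + (1 - P) \cdot V_{\text{worst}} = 0 \quad\Longrightarrow\quad P = \frac{|V_{\text{worst}}|}{V_{\text{best}} + |V_{\text{worst}}|}

If the worst outcome is exactly as bad as the best is good, the break-even P is 50%; if the worst is 99 times worse, it is 99%, which seems to be where the "somewhere between 50% and 99%" range comes from.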
6
RyanCarey
As I understand it, he gives two possibilities. 1. Our capacity for happiness is symmetric while our "reality" (i.e. humanity's historical environment) has been asymmetric. 2. Our preferences themselves were asymmetric, because we were "trained" to suffer more from adverse events, making us have greater capacity for suffering. (1) gives more reason for optimism than (2) because we are more able to change the environment than our capability for happiness/suffering. FWIW, I think we might be able to change our capability for happiness/suffering too, and so thinking along these lines, the question might ultimately hang on energy efficiency arguments anyway.
1
Anirandis
Cheers for the response; I'm still a bit puzzled as to how this reasoning would lead to the ratio being as extreme as 1:a million/bajillion/quadrillion, which he mentions as something he puts some non-negligible credence on (which confuses me, as even a small probability of this being the case would surely dominate and make the future net-negative).
3
RyanCarey
It could be very extreme in case (2) if for some reason you think that the worst suffering is a million times worse than the best happiness (maybe you are imagining severe torture), but I agree that this seems implausibly extreme. Re how to weigh the different possibilities, it depends whether you: 1) scale it as +1 vs 1M, 2) scale it as +1 vs 1/1M, or 3) give both models equal vote in a moral parliament.
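To illustrate why the choice of scaling matters, here is a sketch with assumed numbers and my own reading of the three options, not anything stated in the thread: suppose you split your credence 50/50 between a symmetric model (best = +1, worst = -1) and an extreme model in which the worst is a million times worse than the best.

Anchoring happiness at +1 in both models (so the worst is -1 vs -10^6):
E[\text{worst}] = 0.5 \cdot (-1) + 0.5 \cdot (-10^6) \approx -5 \times 10^5,
so the break-even probability of the good outcome is roughly 0.999998.

Anchoring suffering at -1 in both models (so the best is +1 vs +10^{-6}):
E[\text{best}] = 0.5 \cdot 1 + 0.5 \cdot 10^{-6} \approx 0.5,
so the break-even probability is roughly 2/3.

A moral parliament instead gives each model an equal vote on the decision rather than folding them into a single expected value, so neither normalization automatically dominates.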

Hi, I'm Jonny, a software engineer based in London. I've recently come across EA and am looking to re-align my career along a higher-impact path, most likely focusing on AI risk; however, I've not fully bought into longtermism just yet, so I'm hedging by also considering working on climate change or global health. I look forward to using this forum to try to answer some of my questions and clarify my own thinking.

2
Chris Leong
What are you thinking about regarding next steps to become more involved with AI Safety?
9
Jonny Spicer 🔸
I've taken a few concrete steps:
* Applied for 80k career advising, which fortunately I got accepted for. My call is at the end of the month
* Learned the absolute basics of the problem and some of the attempts in progress to try and solve it, by doing things like listening to the 80k podcasts with Chris Olah/Brian Christian, watching Rob Miles' videos etc
* Clarified in my own mind that AI alignment is the most pressing problem, largely thanks to posts like Neel Nanda's excellent Simplify EA Pitches to "Holy Shit, X-Risk" and Scott Alexander's "Long-Termism" vs "Existential Risk" (I'd not spent much time considering philosophy before engaging with EA and haven't had enough time to work out whether or not I have the beliefs required in order to subscribe to longtermism. Fortunately those two posts showed me I probably don't need to make a decision about that yet and can focus on alignment knowing that it's likely the highest impact cause I can work on).
* Began cold-emailing AI safety folks to see if I can get them to give me any advice
* Signed up to some newsletters, joined the AI alignment Slack group

I plan on taking a few more concrete steps:
* Continuing to reach out to people working on AI safety who might be able to offer me practical advice on what skills to prioritise in order to get into the field and what options I might have available.
* In a similar vein to the above, try to find a mentor, who can help me both focus my technical skills as well as maximise my impact
* Getting in contact with the folks at AI Safety Support
* Complete the deep learning for coders fast.ai course

My first goal is to ascertain whether or not I'd be a good fit for this kind of work, but given that my prior is that software engineers are likely to be a good fit for working on AI alignment and I'm a good fit for a software engineer, I am confident this will turn out to be the case. If that turns out to be true, there are a few career next steps that I think seem
3
Chris Leong
Nice, I'd also recommend considering applying for the next round of the AGI Safety Fundamentals course. To be honest, I don't have much else I can recommend, as it seems like you've already got a pretty solid plan.
2
Norman Borlaug Stan
If you're interested in more resources to help you decide, may I recommend https://80000hours.org/? It has a pretty good set of decision-making tips for someone like yourself. They also occasionally give out personalized career advice, which might be of benefit.
8
JP Addison🔸
Welcome! Super exciting you're thinking of using your career for impact. I'm also a software engineer and was in the same position in 2016, and now I make this Forum. Take your time to discuss the ideas, and don't feel any pressure to come to any particular conclusions.

Hi everyone, I am Oisín from Ireland. I am relatively new to EA (about 4 months) and am currently in university studying Theoretical Physics (3rd year), though to be quite honest I'm pretty sure I won't graduate with a first. The general field of EA I would currently be most invested in is animal welfare/advocacy. I am also in the middle of the AAC training course and finding it intriguing. Would you know how someone with my sort of degree could be useful in EAA (effective animal advocacy) or other areas of EA? Thanks for all the advice!

1
Erich_Grunewald 🔸
You might want to have a look at Animal Advocacy Careers' website. They have a section for career advice as well as an introductory online course. (If you are interested in other areas too, there is also 80,000 Hours, which you've probably already heard about. They offer 1-on-1 advice too.)

Hello, I'm Timothy from Germany. I just joined the forum after finding out about EA through Peter Singer a couple of days ago. I am just 18 years old, so I still have my whole career ahead of me. I'm currently thinking about what to study and what to do in the next six months before university starts. Any suggestions are welcome, especially for what to do in the next six months. 

Hi Timothy, it's great that you found your way here! There's a vibrant German EA community (including an upcoming conference in Berlin in September/October that you may want to join). 

Regarding your university studies, I essentially agree with Ryan's comment. However, while studying in the UK and US can be great, I appreciate that doing so may be daunting and financially infeasible for many young Germans. If you decide to study in Germany and are more interested in the social sciences than in the natural sciences, I would encourage you (like Ryan) to consider undergraduate programs that combine economics with politics and/or philosophy. I can recommend the BA Philosophy & Economics at the University of Bayreuth, though you should also consider the BSc Economics at the University of Mannheim (which you can combine with a minor in philosophy or political science).

In case you are interested in talking through all this sometime, feel free to reach out to me and we'll schedule a call. :)

7
RyanCarey
It depends what your strengths and interests are, but let me give some generic thoughts. Most EA high-schoolers who like math/science should at least consider a CS degree (useful for AI safety research and job security in software development), or a math/econ double degree (useful for Econ PhD, policy, and big picture strategy research). I would recommend that a strong student apply to US universities, because they are far stronger than any outside US/UK/CH. But it's a few months past the deadline for those (and UK universities too). If you're confident you can lodge a strong application to US schools, but you didn't do it this year, then you could take a gap year, and apply in 6 months. For people who dislike maths and are excited about policy or politics, another option is law, which in a US setting could follow an undergrad in some combo of polisci, philosophy, and econ. I'd be interested to hear what others think too!

Would it be beneficial for the EA community to have dedicated financial planners who help community members invest for personal and altruistic goals (i.e. investing to give), kind of like 80K advising? I see that we have some financial planners registered on EA Hub.

3
mic
Founders Pledge thinks it's fairly difficult to make an impact through one's investments, at least in large stock markets – see Impact Investing Executive Summary | Founders Pledge.
4
Eevee🔹
I meant investing to give, not impact investing - but that's helpful!

Hello everyone,

I have a quick question: if I want to have the maximum impact on mitigating climate change, what's the best use of a small monthly donation? I was planning to pay my utility company extra every month for renewable energy, but I figured there might be a more effective use of that same money. Any suggestions?

3
saulius
This is totally not my area, but since no one else answered in six days, I'll just say that Founders Pledge has a report on the best climate change interventions, with some charity recommendations at the bottom. Also, there is this post, though I don't know if the recommendations there are up to date. And there is probably much more EA material on this topic that I don't know about.
1
Norman Borlaug Stan
Thank you!