Bio

How others can help me

SoGive is accepting clients for advising. I’d love to make connections with major donors. Book a 1-1 with me to see how I can help!

I’m also looking for advice on being a better advisor and researcher! So if you’re a fellow advisor or researcher, please do reach out.

And lastly, I’d like to stay up to date on what’s going on in the funding space. So if you’re a grantmaker or evaluator and changes are coming up at your organization, please do reach out.

How I can help others

I’m a philanthropy advisor at SoGive. I help major donors give more and give better by doing research and making recommendations based on their personal values and moral weights. If you have:
- a donation budget of $100,000 or more,
- or a foundation,
- and uncertainty stopping you from making the most of your philanthropic potential

...And you need someone who can:
- clarify your mission and strategy,
- take research off your plate,
- connect you with organizations and get your questions answered,
- translate your giving from "I donated $$$" into "I saved lives/made people happier and healthier/reduced the chance of a disaster"...

Then don't be shy, message me! I'm available to work with clients from around the world.

Comments

Hi Peter, I found this old post in my bookmarks! I went through your post history and couldn't pinpoint when you became more supportive of x-risk research, but you run IAPS now. I'm still sympathetic to a lot of what you say in this old post, so I was wondering if you could describe when and why you became more supportive of x-risk work?

I've been in awe of your team these last couple years. Thank you for your great work. It meant so much to me that I got to meet with you personally when I took the pledge and became a GWWC Ambassador in 2022! Best of luck with your next steps.

Thank you for checking it out! I'll check the settings on this. I haven't been able to find a way to make this visible yet, and I think the best that Guided Track has to offer might be to click the previous section headings...

Thanks, I largely agree with this, but I worry that a Type I error could be much worse than is implied by the model here.

Suppose we believe there is a sentient type of AI, and we train powerful (human or artificial) agents to maximize the welfare of things we believe experience welfare. (The agents need not be the same beings as the ostensibly-sentient AIs.) Suppose we also believe it's easier to improve AI wellbeing than our own, either because we believe they have a higher floor or ceiling on their welfare range, or because it's easier to make more of them, or because we believe they have happier dispositions on average.

Being in constant triage, the agents might deprioritize human or animal welfare to improve the supposed wellbeing of the AIs. This is like the paperclip-maximizer problem, but with the additional issue that extremely moral people who believe the AIs are sentient might not see a problem with it, may not attempt to stop it, or may even try to help it along.

Thank you Philippe. A family member has always described me as an HSP, but I hadn't thought about it in relation to EA before. Your post helped me realize that I hold back from writing as much as I can/bringing maximum value to the Forum because I'm worried that my work being recognized would be overwhelming in the HSP way I'm familiar with.

It leads to a catch-22 in that I thrive on meaningful, helpful work, as you mentioned. I love writing anything new and useful, from research to user manuals. But I can hardly think of something as frightening as "prolific output, eventually changing the course of ... a discipline." I shudder to think of being influential as an individual. I'd much rather contribute to the influence of an anonymous mass. Not yet sure how to tackle this. Let me know if this is a familiar feeling.

I'm also wondering whether the butcher shop and the grocery store gave different answers not because of the name you gave the store, but because you gave the quantity in pounds instead of in items.

You previously told ChatGPT, "That’s because you’re basically taking (and wasting) the whole item." ChatGPT might not associate "pound" with "item" the way it associates "calzone" with "item," so it might not use your earlier mention of "item" as something that should affect how it predicts the words that come after "pound."

Or ChatGPT might have a really strong prior association between pounds → mass → [numbers that show up as decimals in texts about shopping] that overrode your earlier lesson.

To successfully reason in the way it did, ChatGPT would have needed a meta-representation for the word “actually,” in order to understand that its prior answer was incorrect.

What makes this a meta-representation instead of something next-word-weight-y, like merely associating the appearance of "Actually," with a goal that the following words should be negatively correlated in the corpus with the words that were in the previous message?

Thank you for your integrity, and congratulations on your successful research into the cost-effectiveness of this intervention!

So true! Reading the 80k article, it looks like I'd fit well in ops, but these are two important executive function traits that make me pretty bad at a lot of ops work. I'm great at long-term system organization/evaluation projects (hence a lot of my past ops work on databases), but day-to-day firefighting is awful for me.
