Charlie_Guthmann

918 karma · Joined

Bio

Pre-doc at Data Innovation & AI Lab
Previously worked in options HFT and tried building a social media startup
Founder of the Northwestern EA club

Comments (246)

Do you have any new thoughts on the probabilities/timelines of when he is going to invoke the Insurrection Act?

Marcus, a ton of respect for your open-mindedness and prediction ability. Sort of parroting Lintz here, but if you have the time, I would greatly appreciate it if you could give some insight on how to improve the questions.

I understand that questions pertaining to 2028, and maybe even the midterms, suffer from long-term market issues. So maybe we could create a chain of conditional markets? Or at least some intermediate steps that we think are proxies and have a reasonable chance of occurring in the next few months?

Additionally, would you say you have updated your views since this comment chain?

Definitely coming in biased because of where my head is at, but I think building back the strength of small groups is a way to combat this, and it's somewhat tractable. I like the Terence Tao post below.
https://mathstodon.xyz/@tao/115259943398316677
Funnily enough, EA has a similar problem (if you consider it a problem). Lack of structure or centralization disproportionately shifts power to the wealthy and already powerful.

Don't let the perfect be the enemy of the good! I agree that the standard expectation of what a group might look like is hard to run, but see this post:

https://forum.effectivealtruism.org/posts/agFxcinYtBqjDgCNk/sam-s-hot-takes-on-ea-community-building

When I was organizing at Northwestern, we had a no-direction get-together at a house near campus every Friday night, and I'd guess this was more important than everything else we did combined.

This isn't necessarily what you're saying, but I think local EA groups focusing on local outcomes is somewhat reasonable. It would give the group more of a sense of purpose beyond discussion, let it beta test ideas and serve as a proving ground for people on a smaller scale, and build reputation in the city the group is in.

As a general rule of thumb (this doesn't really fall on the gov/not-gov axis, but can be applied to it), I feel that too many EAs work in intellectual pursuits and not enough in power/relationship pursuits.

This isn't based on a numerical analysis or anything, just my intuition of the status incentives and personal passions of group members.

So, e.g., I wouldn't necessarily expect the number of EAs in government to be too low, but maybe the number working directly in partisan politics/organizing/fundraising is. If I had to guess, we are ~properly allocating towards policymakers, both within think tanks and within executive-branch orgs.

> This is because we care about incentivising new content, rather than surfacing the best

Does this go for comments as well? I find this a bit perverse and think you have your incentives a little off, but it's definitely nuanced and I see both sides.

Is there currently an effective altruism merch/apparel store? If not, do people think there is demand? I'd be happy to run it or help someone set it up. (A quick search shows previous attempts that are now closed; if anyone knows why, that would be cool to know too.)

Way out of my depth here, but I'm not sure why feelings and valence couldn't also evolve in LLMs to "motivate choices that increase fitness (token prediction)". @Steven Byrnes might have a more coherent take here.

Disregarding "asking it what it likes": do you believe that, if an agent experiences valence, it is more likely than not to do higher-valence things? (Not sure exactly how to structure this claim, but hopefully you get the idea.)
