DavidNash

5263 karma · Joined · Working (6-15 years)
gdea.substack.com/

Bio

How others can help me

I'm considering what my next career path should be. I'm currently looking at the following areas:

-- Global development and EA meta work (connecting development professionals, events, virtual programs, info sharing)

-- AI & global development

-- EA & economic growth interventions

-- Chief of staff or philanthropic advising roles

-- Joining or founding a startup that is aiming for direct impact in LMICs

-- More structural areas that provide support for other businesses to build (fintech, communications, infrastructure, electrification/energy, import/export)

How I can help others

If you're thinking about being a community organiser or are currently organising an EA related group then I'd be happy to share ideas on strategy and community building. Especially for people working on cause specific work or in neglected regions of the world.

If you're a global development professional I'd be happy to chat about the EA & development landscape and swap ideas on how to improve this area.

Comments
336

Topic contributions
1

I think it makes more sense to consider this part of their marketing budget than their 'trying to do good' budget.

I would think it's more peer appreciation than public appreciation that matters.

Do you think CoGi need broad appeal if they're mainly looking for multi millionaire donors?

Answer by DavidNash

r/badmathematics has already looked at it.

"Being very generous, I think their attempt is to invoke this result of Chaitin to basically say "if the universe was a simulation, then there would be a formal system that described how the universe worked. By Chaitin, there's some 'complexity bound' for which statements beyond this bound are undecidable. But, these statements have physical meaning so we could theoretically construct the statement's analog in our universe, and then the simulation would have to be able to decide these undecidable statements."

What they don't explain is:

  • why we should think that we're guaranteed to be able to construct such physical analogs of these statements,
  • why they think that whatever universe that is simulating ours must have the same axioms as ours (e.g. Gödel only applies to proving statements within the formal system under consideration),
  • why they can rule out that the hypothetical simulating computer wouldn't be able to just throw some random value out when it encounters an undecidable statement (i.e. how do we know that physics is actually consistent without examining all events everywhere in the universe?),
  • ...or a bunch of other necessary assumptions that they're making and not really talking much about.

They also get into some more bad mathematics (maybe bad philosophy?) by appealing to Penrose-Lucas to claim that "human cognition surpasses formal computation," but I don't think this is anywhere near a universally accepted stance."

Thanks for adding this Nithya, I agree with both points. This post was more about raising a question I hadn't seen discussed much in EA spaces and so there is likely research that supports or weakens the argument I didn't come across.

My general impression is that non-profits tend to teach skills that are less valuable in the longer term, given that globally around 90% of jobs are in the private sector, even if the skills are valuable to some extent.

It's also about who is learning the skills: if the people who would have been the top 5% of entrepreneurs/leaders/scientists are not working in those spaces, that seems like a loss for those countries.

I haven't looked into it, but I think GiveWell-recommended charities make up a very small percentage of most countries' NGO workforces, so it seems unlikely to make much of a difference.

Doesn't that still depend on how much risk you think there is, and how tractable you think interventions are?

I think it's still accurate to say that those concerned with near-term AI risk think it is likely more cost-effective than AMF.

I think your experience matches what most people interested in EA actually do: the vast majority aren't deeply involved in the 'community' year-round. The EA frameworks and networks tend to be most valuable at specific decision points (career/cause changes, donation decisions) or if you work in niche areas like meta-EA or cause incubation.

After a few years, most people find their value shifts to specific cause communities (as you noted) or other interest-based networks. I think it might actually be a bad sign if there was more expectation of people being very involved as soon as they hear about EA and forever more.


I'd also push back on Hanson's characterisation, which was more accurate at the time it was written but less so now. The average age continues rising (mean 32.4 years old, median 31) and more than 75% aren't students.

There are people now with 15+ years of EA engagement in senior positions across business, tech and government, and there are numerous, increasingly professional and sizable, organisations inspired by EA ideas.

The methods within EA differ markedly from those of typical youth movements: there's minimal focus on protests or awareness-raising, except where it seems more strategic within specific cause areas.

I think China, in the last few years, has approved a few crops (from here, lots of interesting sections).

Maybe that's why I'm more optimistic. Despite the public being against GMOs (in 2018, 46.7% of respondents had negative views of GMOs and 14% viewed GMOs as a form of bioterrorism aimed at China), China's leadership is still pushing ahead with them as it benefits the country.

Over time, the countries that don't use GMOs will either have to import, give larger subsidies to their farmers, or have people complain about why their food costs so much compared with neighbouring countries.

I'm not so sure it has gone 'badly' wrong vs other tech innovations but I'm not as well read on tech adoption and the ups and downs of going from innovation to mass usage.

There has been less uptake than may have been hoped for (and I think animal feed is a large percentage of it, in the US at least), but it still could be considered impressive growth since the '90s.

It's hard for me to know what the expected take-off for this technology should have been and how it compares to similar things (slower than AI and smartphones but faster than TVs and electricity, though these aren't great reference classes).

I'm not as convinced by public opinion surveys, as I imagine you'd probably get a similar proportion, if not higher, who think factory farming should be banned, which doesn't stop it being used if people are prioritising price/taste/etc.


As for reducing poverty, I think that depends on a whole host of other things, so GMOs wouldn't have made much of a difference even if they were 100% of food.
