huw

Co-Founder & CTO @ Kaya Guides
1592 karmaJoined Working (6-15 years)Sydney NSW, Australia
huw.cool

Bio

I live for a high disagree-to-upvote ratio


Reposting this from Daniel Eth:

On the one hand, this seems like not much (shouldn’t AGIs be able to hit ‘escape velocity’ and operate autonomously forever?), but on the other, being able to do a month’s worth of work coherently would surely get us close to recursive self-improvement.

Some general thoughts about India specifically:

  • The EA community is slowly developing, but the biggest obstacle is the lack of a clear hub city. Government is in Delhi, tech is in Bengaluru, many orgs are also in Pune or Mumbai (such as my own).
  • The philanthropic sector isn’t tuned to EA ideas just yet, but we think it might get more feasible to find local funding. Anecdotally, this seems to be easier in mental health, which is well-understood by the traditional philanthropic sector. Further development of EGIs and the local community will help here.
  • Anecdotally, at EAGxIndia 2024, most younger attendees were interested in AI work, and far fewer in GHW/AW. There’s probably some bias here, since it was hosted in Bengaluru, which is heavier on tech. That is to say, I’m not convinced the talent pipeline for an India-based AIM-like org is quite there yet, although AIM could be nudged to incubate there more often.
  • On the other hand, legally operating in India is more complex than in almost any other country AIM incubates into, and having India-specific expertise and operational support, while expensive, would pay dividends.

Just wait until you see the PRs I wanna submit to the forum software 😛

FWIW, the point I was trying to make (however badly) was that the government clearly behaved in a way that had little regard for accuracy, and I don’t see incentives for them to behave any differently here.

The U.S. State Department will reportedly use AI tools to trawl social media accounts, in order to detect pro-Hamas sentiment to be used as grounds for visa revocations (per Axios).

Regardless of your views on the matter, regardless of whether you trust the same government that at best had a 40% hit rate on ‘woke science’ to do this: They are clearly charging ahead on this stuff. The kind of thoughtful consideration of the risks that we’d like is clearly not happening here. So why would we expect it to happen when it comes to existential risks, or a capability race with a foreign power?

Is the assumption here that they would lobby behind the scenes for carbon-neutrality? Because without a strong line in the sand, this just sounds like capitulation to me.

(Could you elaborate on ‘economics doesn’t have a very good track record on advanced AI so far’? I haven’t heard this before)

Reading between the lines, the narrative the UK wants to push here is that, due to Trump’s presumed defunding of NATO and the general U.S. nuclear umbrella, they have to increase defence spending and cut aid? So if you buy this narrative, this is a follow-on consequence of Trump’s election?

Thank you, MHFC! As with past grantees, I can attest that working with MHFC was extremely easy and pleasant. They take cost-effectiveness and future prospects seriously, but aren’t onerous in their requirements. If you’re in the mental health space, they’re an excellent partner to have!
