- Who: anyone! Software engineers will be the primary contributors, of course, but we will offer optional introductory sessions for the curious or aspiring developer. You do not need to have attended EAG Bay Area to attend the Hackathon.
- Where: Momentum office at 3004 16th St, just off the 16th St Mission BART Station
- When: Mon, 2/27 from 10am - 7pm
- What: work independently or with collaborators on an EA-aligned project of your choosing
If you would like to share your Hackathon project idea, please leave a comment!
Agenda:
- 10am-10:15 — participants arrive and get set up
- 10:15-10:20 — welcome and logistics talk by Nicole Janeway Bills of EA Software Engineers
- 10:20-10:30 — opening talk by Austin Chen of Manifold Markets on expectations and ways of working for the event
- 10:30-10:45 — project pitches — people with ideas can share them with the group
- 10:45 — start of work and learning sessions
- 12pm — lunch — vegan and nonvegan options
- 6pm — dinner and project presentations
- 6:45-7pm — prize announcements and wrap up
Learning Sessions:
- 10:45 — setting up your development environment
- 11:30 — basics of git
- 1pm — intro to frontend development
- 2pm — open source contributions in AI safety (presentation link to be added later)
Looking forward to seeing you at the event! Add your photos here.
My proposal is to build a minimal language model that can be used for inferring symbol-manipulation algorithms. Though minimal models exist for NLP and modular arithmetic, I was surprised not to find one for typed lambda calculus. I believe such a model would be particularly useful: it could serve as a universal grammar for a language-acquisition device, inductively building interpretable models.
My idea is to use a GAN to train a model that outputs valid proof terms for propositions in the simply typed lambda calculus. The generator would attempt to find proof terms that type-check when fed to a proof assistant. The discriminator would attempt to produce propositions that confuse the generator.
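To make the proof-assistant role concrete, here is a minimal sketch of a type checker for the simply typed lambda calculus. Under Curry-Howard, a proposition is a type and a candidate proof is a term; the generator "wins" on a proposition exactly when its term type-checks at that type. All names here (`Base`, `Arrow`, `infer`, `proves`, etc.) are illustrative, not from any particular proof assistant.

```python
# Sketch: the "proof assistant" check from the proposal.
# A proposition is a type; a term proves it iff it type-checks at that type.
from dataclasses import dataclass

# Types (propositions)
@dataclass(frozen=True)
class Base:
    name: str            # an atomic proposition, e.g. "A"

@dataclass(frozen=True)
class Arrow:
    src: object          # A -> B corresponds to the implication A => B
    dst: object

# Terms (candidate proofs)
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    ty: object           # annotated parameter type
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def infer(term, ctx=None):
    """Return the type of `term` in context `ctx`, or None if ill-typed."""
    ctx = ctx or {}
    if isinstance(term, Var):
        return ctx.get(term.name)
    if isinstance(term, Lam):
        body_ty = infer(term.body, {**ctx, term.param: term.ty})
        return Arrow(term.ty, body_ty) if body_ty is not None else None
    if isinstance(term, App):
        fn_ty = infer(term.fn, ctx)
        arg_ty = infer(term.arg, ctx)
        if isinstance(fn_ty, Arrow) and fn_ty.src == arg_ty:
            return fn_ty.dst
    return None

def proves(term, prop):
    """Does `term` prove the proposition `prop`?"""
    return infer(term) == prop

# The identity term \x:A. x proves A -> A:
A = Base("A")
ident = Lam("x", A, Var("x"))
print(proves(ident, Arrow(A, A)))  # True
```

In the GAN setup, this check (or a real kernel like Lean's or Coq's) supplies the ground-truth signal: it scores generator outputs without itself being learned.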
The architecture of the discriminator shouldn't really matter. Ideally, the generator would be an attention-only model, with the number of heads and layers chosen to be most informative for interpretation.
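The task the generator must learn to approximate is proof search: given a proposition, emit a term that proves it. A toy goal-directed search over the implicational fragment makes this concrete (representation and rule coverage here are simplified assumptions: atoms are strings, an implication A -> B is the pair `(A, B)`, and only single-step eliminations are tried):

```python
# Sketch: the search problem the generator network would learn to imitate.
# An atom is a str; the implication A -> B is the tuple (A, B).

def prove(goal, ctx=(), depth=3):
    """Return a proof term (as a string) for `goal`, or None if none is found."""
    # assumption rule: a hypothesis of exactly the goal type closes the goal
    for name, ty in ctx:
        if ty == goal:
            return name
    if depth == 0:
        return None
    # implication introduction: to prove A -> B, assume A and prove B
    if isinstance(goal, tuple):
        a, b = goal
        name = f"x{len(ctx)}"
        body = prove(b, ctx + ((name, a),), depth - 1)
        if body is not None:
            return f"(\\{name}:{a}. {body})"
    # implication elimination: apply a hypothesis f : A -> goal to a proof of A
    for name, ty in ctx:
        if isinstance(ty, tuple) and ty[1] == goal:
            arg = prove(ty[0], ctx, depth - 1)
            if arg is not None:
                return f"({name} {arg})"
    return None

# The K combinator proves A -> (B -> A):
print(prove(("A", ("B", "A"))))
```

Because search like this is exhaustive and blows up quickly, a learned generator is attractive; and because every candidate it emits is re-checked by the type checker, the GAN's training signal stays exact even when the network is wrong.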