MathiasKB🔸

Head of Advocacy @ ControlAI
6639 karma · London, UK

Comments (285)

(Conflict of interest note: I'm pretty good friends with Apart's founder.)

One thing I really like about Apart is how meritocratic it is. Anyone can sign up for a hackathon, and if your project is great, win a prize. They then help prize winners with turning their project into publishable research. This year two prize winners even ended up presenting their work orally at ICLR (!!).

Nobody cares what school you went to. Nobody is looking at your gender, age, or resume. What matters is the quality of your work and nothing else.

And it turns out that when you look just at the quality of the work, you'll find that it comes from all over the world - often from countries that are otherwise underrepresented in the EA and AI safety communities. I think that is really, really cool.

I think Apart could do a much better job of communicating just how different their approach is from the vast majority of AI upskilling programmes, which rely heavily on evaluating your credentials to decide whether you're worthy of doing serious research.

I don't know anything about the cost per participant or whether it justifies funding Apart over other AI safety projects, but to me there is something very beautiful and special about Apart's approach.

The EA movement is chock-full of people who are good at programming. What about open-sourcing the EA source code and outsourcing development of new features to volunteer members who want to contribute?

This post makes me feel very positive about GWWC and its future! It's hard to overrate the value of focus. One thing I would love to learn is what you are doubling down on as a result.

No idea, it's probably worth reaching out to ask them and alert them in case they aren't already mindful of it! I personally am not the least bit interested in this concern, so I will not take any action to address it.

I am not saying this to be a dick (I hope), but because I don't want to give you a mistaken impression that we are currently making any effort to address this consideration at Screwworm Free Future.

I think people are far too happy to give an answer like: "Thanks for highlighting this concern, we are very mindful of this throughout our work" which while nice-sounding is ultimately dishonest and designed to avoid criticism. EA needs more honesty and you deserve to know my actual stance.

I don't mind at all someone looking into this and I am happy to change my mind if presented with evidence, but my prior for this changing my mind is so low that I don't currently consider it worthwhile to spend time investigating or even encouraging others to investigate.

I do a lot of writing at my job, and find myself using AI more and more for drafting. I find it especially helpful when I am stuck.

Like any human assigned a writing task, Claude cannot magically guess what you want. I find that when I see other people get lackluster writing results with AI, it's very often because they provide almost no context for the AI to work with.

When asking for help with a draft, I will often write out a few paragraphs of thoughts on the draft. For example, if I were brainstorming ideas for a title, I might write out a prompt like:
 

"I am looking to create a title for the following document: <document>. 

My current best attempt at a title is: 'Why LLMs need context to do good work'

I think this title does a good job at explaining the core message, namely that LLMs cannot guess what you want if you don't provide sufficient context, but it does a poor job at communicating <some other thing I care about communicating>.

Please help brainstorm ten other titles, from which we can ideate."


Perhaps Claude comes up with two good titles, or one title has a word I particularly like. Then I might follow up saying:

"I like this word, it captures <some concept> very well. Can we ideate a few more ideas using this word?"

From this process I will usually get something good that I wouldn't have been able to come up with myself. I'll take those sentences, work them into my draft, and continue.
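For anyone who scripts this kind of thing rather than typing into a chat window, the title-brainstorm prompt above can be assembled with a small helper. This is just an illustrative sketch; the function and parameter names are hypothetical, and the placeholders are left for you to fill in:

```python
def build_title_prompt(document: str, current_title: str,
                       strength: str, weakness: str) -> str:
    """Assemble the title-brainstorming prompt described above.

    All names here are hypothetical - adapt the template to your own workflow.
    """
    return (
        f"I am looking to create a title for the following document: {document}\n\n"
        f"My current best attempt at a title is: '{current_title}'\n\n"
        f"I think this title does a good job at {strength}, "
        f"but it does a poor job at communicating {weakness}.\n\n"
        "Please help brainstorm ten other titles, from which we can ideate."
    )

# Example usage with the placeholders from the comment above:
prompt = build_title_prompt(
    document="<paste your draft here>",
    current_title="Why LLMs need context to do good work",
    strength="explaining the core message",
    weakness="<some other thing I care about communicating>",
)
print(prompt)
```

The point of the template is the same as the point of the comment: the context (your draft, your current attempt, what you like and dislike about it) is most of the prompt.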

Really incredible job, and really exciting to see so many great projects come out of Catalyze. Hopefully people will consider funding not just the projects, but also the new incubator which created them!

On a side note, I am especially excited about TamperSec and see their work as the most important technical contribution that can be made to AI governance currently.

Don't put all of your savings into shady cryptocurrencies. If it sounds too good to be true, it probably is.
