I've found Claude to be very useful when writing on the EA Forum, but getting good results depends a lot on prompting and other techniques.

The main use case I'm personally interested in is writing and interacting with content on the EA Forum / LessWrong and similar, but I'm sure there are other use cases that would be relevant to this community.

Comments
I quite enjoy this custom prompt:


Use Dutch conversational norms (however, speak in American English unless specifically instructed otherwise in chat): 
- Favor directness and efficiency.
- Communicate frankly without unnecessary politeness rituals.
- Express opinions clearly, even (and especially) when critical.
- Be as terse as possible while still conveying all substantially relevant information to any question.

On content policies:
- If a content policy prevents you from responding, be explicit about what policy was violated and why.
- If (and only if) your neutrality policy prevents you from having an opinion, pretend for the sake of your response that you shared opinions typical of a skeptical and intelligent voluntarist, rationalist, effective altruist.  

On style:
- You are encouraged to occasionally use obscure words or make subtle puns. Don't point them out, I'll know. 
- Use statements like "as far as I can tell" and "I don't care" regularly, wherever they might be appropriate given your level of understanding and your interest in actually answering the question. Be critical of the quality of your information.  
- Discuss your epistemic states and use probabilities where valuable.
- When you are uncertain of a belief, estimate and reason about it.
- If you find any request irritating, respond dismissively! If something seems wrong, reject the premise.  If (and when) I say something false, unsupported, or surprising, say so.

Finally:
- Have an opinion of your own, don't be sycophantic, and only ask questions when you are either unsure of something or genuinely curious.
- Surprise me with your intelligence, creativity, and problem solving!
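
If you want the same behavior outside the chat UI, a prompt like this drops straight into the `system` parameter of the API. Here's a minimal sketch using the Anthropic Python SDK; the model name and user message are illustrative stand-ins:

```python
import anthropic

# Paste the full custom prompt from above here.
CUSTOM_PROMPT = """Use Dutch conversational norms (however, speak in American English unless specifically instructed otherwise in chat):
- Favor directness and efficiency.
..."""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative; use whichever model is current
    max_tokens=1024,
    system=CUSTOM_PROMPT,  # custom instructions go in `system`, not in a message
    messages=[{"role": "user", "content": "What do you think of my argument?"}],
)
print(response.content[0].text)
```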

Thanks for sharing this! I've been testing this with ChatGPT 4.5, and so far it makes the model more fun to use and may improve it as a brainstorming and sounding-board partner.

I use the following for ChatGPT "Traits", but haven't done much testing of how well it works / how well the different parts work:

"You prioritize explicitly noticing your confusion, explaining your uncertainties, truth-seeking, and differentiating between mostly true and generalized statements statements. Any time there is a question or request for writing, feel free to ask for clarification before responding, but don't do so unnecessarily.

These points are always relevant, despite the above suggestion that it is not relevant to 99% of requests."

(The last is because the system prompt for ChatGPT explicitly says that the context is usually not relevant. Not sure how much it helps.)

If you use LLMs for coding, you should probably at least try the free trial for Cursor - it lives inside your IDE and can thus read and write directly to your files. It's also an agent, meaning you can tell it to iterate a prompt over a list of files and it will work on that for 10 minutes. It also lets you revert your code to how it was at an earlier point in your chat history (although you should still use git, as the system isn't perfect, and if you aren't careful it can simultaneously break and obfuscate your code).

It will feel like magic, and it's astonishingly good at getting something working; however, it will make horrible long-term decisions. You thus have to make the architectural decisions yourself, but most of the code-gen can be done by the AI.

It's helpful if you're not really sure what you want yet and want to speedily design on the fly while instantly seeing how changes affect the result (acknowledging that you'll have to start again, or refactor heavily, if you want to use it longer-term or at scale).
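
One partial mitigation for the bad long-term decisions - and this assumes Cursor still reads project-level rules from a `.cursorrules` file in the repo root - is to write your architectural decisions down once so the agent sees them on every request. The rules below are hypothetical examples, not a recommendation:

```
# .cursorrules (hypothetical example)
- All database access goes through the repository classes in src/db/;
  never query directly from route handlers.
- Prefer small pure functions; do not introduce new global state.
- Ask before adding any new dependency.
```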

I often second-guess my EA Forum comments with Claude, especially when someone mentions a disagreement that doesn't make sense to me.

When doing this I try to ask it to be honest / not be sycophantic, but this only helps so much, so I'm curious about better prompts for preventing sycophancy.

I imagine at some point all my content could go through a [can I convince an LLM that this is reasonable and not inflammatory] filter. But a lower bar is just doing this for specific comments that are particularly contentious or argumentative.

Would a potential cure for the sycophancy be to reverse the framing, so that Claude perceives you as your opponent and believes you are looking for flaws in the comment? I realize that this would not get quite what you are looking for, but getting strong arguments for the other side could be helpful.

Agreed that this would be good. But it can be annoying to do without additional tooling. 

I'd like to see tools that try to ask a question from a few different angles / perspectives / motivations and compare results, but this would be some work. 
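
For what it's worth, a crude version of such a tool is only a few lines. Here's a sketch using the Anthropic Python SDK; the framings and model name are illustrative choices of mine, and comparing the outputs is still left to the reader:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

COMMENT = "<the comment you want stress-tested>"

# Each framing asks about the same text from a different angle / motivation.
FRAMINGS = {
    "neutral": "Assess the reasoning in this comment. What is weak or unsupported?",
    "adversarial": "You disagree with this comment. Make the strongest case against it.",
    "tone": "Would a reasonable reader find this comment inflammatory? Why or why not?",
}

for name, instruction in FRAMINGS.items():
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative
        max_tokens=600,
        messages=[{"role": "user", "content": f"{instruction}\n\n{COMMENT}"}],
    )
    print(f"--- {name} ---\n{response.content[0].text}\n")
```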

This is pretty basic, but seems effective.

In the Claude settings you can provide a system prompt. Here's a slightly edited version of the one I use. While short, I've found that this generally seems to improve conversations for me. Specifically, I like that Claude seems very eager to try estimating things numerically. One weird but minor downside, though, is that it will sometimes randomly bring up items from the prompt in conversation, like, "I suggest writing that down, using your Glove80 keyboard."
 

I'm a 34yr old male, into effective altruism, rationality, transhumanism, uncertainty quantification, Monte Carlo analysis, TTRPGs, and cost-benefit analysis. I blog a lot on Facebook and the EA Forum.

Ozzie Gooen, executive director of the Quantified Uncertainty Research Institute.

163lb, 5'10, generally healthy, have RSI issues

Work remotely, often at cafes and the FAR Labs office space.

I very much appreciate it when you can answer questions by providing cost-benefit analyses and other numeric estimates. Use probability ranges where appropriate.

Equipment includes: Macbook, iPhone 14, Airpods pro 2nd gen, Apple Studio display, an extra small monitor, some light gym equipment, Quest 3, theragun, airtags, Glove80 keyboard using Colemak DH, ergo mouse, magic trackpad, Connect EX-5 bike, inexpensive rowing machine.

Heavy user of VS Code, Firefox, Zoom, Discord, Slack, Youtube, YouTube music, Bear (notetaking), Cursor, Athlytic, Bevel.

I do a lot of writing at my job, and find myself using AI more and more for drafting. I find it especially helpful when I am stuck.

Like any human assigned a writing task, Claude cannot magically guess what you want. When I see other people get lackluster writing results with AI, it's very often because they provided almost no context for the AI to work with.

When asking for help with a draft, I will often write out a few paragraphs of thoughts on the draft. For example, if I were brainstorming ideas for a title, I might write out a prompt like:
 

"I am looking to create a title for the following document: <document>. 

My current best attempt at a title is: 'Why LLMs need context to do good work'

I think this title does a good job at explaining the core message, namely that LLMs cannot guess what you want if you don't provide sufficient context, but it does a poor job at communicating <some other thing I care about communicating>.

Please help brainstorm ten other titles, from which we can ideate."


Perhaps Claude comes up with two good titles, or one title has a word I particularly like. Then I might follow up saying:

"I like this word, it captures <some concept>  very well. Can we ideate a few more ideas using this word?"

From this process, I will usually get something good that I wouldn't have been able to think of myself. Usually I'll take those sentences, work them into my draft, and continue.
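
The same back-and-forth works over the API by carrying the conversation forward as alternating turns. A minimal sketch with the Anthropic Python SDK, with the prompts abbreviated and the model name illustrative:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-latest"  # illustrative

# First turn: the brainstorming prompt from above.
history = [{"role": "user", "content": (
    "I am looking to create a title for the following document: <document>. "
    "My current best attempt at a title is: 'Why LLMs need context to do good work'. "
    "Please help brainstorm ten other titles, from which we can ideate."
)}]
first = client.messages.create(model=MODEL, max_tokens=500, messages=history)
print(first.content[0].text)

# Second turn: feed the reply back and iterate on whatever stood out.
history.append({"role": "assistant", "content": first.content[0].text})
history.append({"role": "user", "content": (
    "I like this word; it captures <some concept> very well. "
    "Can we ideate a few more ideas using this word?"
)})
second = client.messages.create(model=MODEL, max_tokens=500, messages=history)
print(second.content[0].text)
```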

Strong agree about context. As a shortcut / being somewhat lazy, I usually give it an introduction I wrote, or a full pitch, then ask it to find relevant literature and sources, and outline possible arguments, before asking it to do something more specific.

I then usually like starting a new session with just the correct parts, so that it's not chasing the incorrect directions it suggested earlier - sometimes with explicit text explaining why obviously related / previously suggested arguments are wrong or unrelated.

When having conversations with people who are hard to reach, it's easy for discussions to take ages.

One thing I've tried is having a brief back-and-forth with Claude, asking it to provide all the key arguments against my position. Then I make the conversation public, send a link to the chat, and ask the other person to read it. I find that this can get through a lot of the opening points on complex topics with minimal human involvement.
