
Excalidraw is a collaborative, shared whiteboard for typed text and sketching. It's really useful for summarising notes and making diagrams for forum posts; it's one of the tools I've used a lot to make these tutorials! Its key attractions are simplicity and easy collaboration. It's designed for quickly creating simple graphics, and has a 'sloppy/whiteboard' style.

This post is a simple tutorial on how to use Excalidraw. The landing page also has a pretty good introduction to the basic features.

When can Excalidraw be useful?

  • Visually summarise an argument 
  • Summarise notes on a topic
  • Build a logic model to help you think through a problem
  • Have a debate over Zoom, but draw diagrams for one another in real-time to help explain your point
  • Represent the AI governance or whole AI safety community diagrammatically 

Video Guide

Text Guide

Basic Features


 

If you’d prefer a worked example, you’re looking at one! The above image was made in Excalidraw. Here’s another:

 

The point here isn't that this is a perfect, wonderful diagram that's much better than the original; it isn't. The point is that it took 5-10 minutes to make without having to spend time learning a complex tool, and could be done collaboratively.

Collaboration & Export

In the top right box, you can export your whole drawing either as a file to load back into Excalidraw or as an image. You can press the ‘Live Collaboration’ button to start a session. 

Once you start a session, you can copy a link which gives anybody access to real-time collaboration on your current board. 

When any member leaves the session, they ‘take a version with them’, which they can keep editing locally. 

If you want to keep your whiteboard and re-use it later, you can save it either as an Excalidraw file (editable later) or an image file. 
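A handy side-effect of the Excalidraw file format is that it's plain JSON, so a saved board can be inspected or post-processed with a few lines of code. Here's a minimal sketch in Python; the field names (a top-level `"elements"` array of shape objects) match what excalidraw.com currently exports, but treat the exact schema as an assumption, since it may change between versions:

```python
import json

# A stripped-down stand-in for the contents of an exported .excalidraw file.
# Real exports contain many more fields per element (ids, colours, angles...).
sample = """
{
  "type": "excalidraw",
  "version": 2,
  "elements": [
    {"type": "rectangle", "x": 100, "y": 100, "width": 200, "height": 80},
    {"type": "text", "x": 120, "y": 130, "text": "Hello"}
  ]
}
"""

data = json.loads(sample)

# Count how many elements of each shape type the board contains.
counts = {}
for element in data["elements"]:
    counts[element["type"]] = counts.get(element["type"], 0) + 1

print(counts)  # {'rectangle': 1, 'text': 1}
```

For a real board you'd replace `sample` with `open("board.excalidraw").read()`. This also means saved boards diff reasonably well in version control, which is a nice bonus if you keep diagrams alongside notes.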

Libraries

You can also import libraries of icons made in Excalidraw using the book icon at the top of the screen (to the right of the other tools). There are only a few libraries available right now, but it can be useful for common icon sets.

Worked Examples

These are both examples I actually used! The first I made live with somebody while I was running a tutorial on the topic; the second I made as a quick summary for a discussion group. I've linked the source for the second, but the first was off the top of my head.

Summarise a topic (medical causes of headache):

Summarise a topic (patient philanthropy):

 

Personal Thoughts

I really like Excalidraw! It's a very polished implementation of a quick sketching tool, and the collaborative features, like the rest of Excalidraw, work exactly how I'd want them to without any fidgeting. 

At the moment, the icon libraries feature is pretty sparse, consisting mostly of UI and software architecture symbols, but there's nothing stopping you from publishing EA-related symbol libraries!

I was mildly irritated that there's no option to save canvases in the web app, which it turns out is behind a $7/month paywall. This might be worth it if you're using it a lot, but the offline saving options are pretty comprehensive. Maybe consider it to support the tool's continued existence.

Finally, Excalidraw is a somewhat limited tool not suited to really complex diagrams, but this isn't really its intended purpose. If you want to make more precise or detailed diagrams, I can personally recommend Affinity Designer, and GIMP is a reasonable free alternative. This space is pretty saturated, but they all (to my knowledge) have a much steeper learning curve than Excalidraw, and I'm not aware of more powerful tools with equivalent live collaboration.

Try it Yourself!

Try making a sketch of how your job leads to impact! Share it in the comments, so people can compare different jobs & different styles of diagram.

We'll also be running a session on Monday the 30th at 6pm GMT in the EA GatherTown to discuss Excalidraw and do a short exercise!

On Monday: a post discussing Squiggle, a coding language for model-building!

Comments



I bookmarked it and found a chance to use it already today! In general, I appreciate finding out about tools for rational discourse enhancement. I'd encourage others to post them even if they don't get too many upvotes.

There also exists an Excalidraw plug-in for Obsidian which I personally found very valuable.

Hi Nico,

You might also want to try the recently released Obsidian Canvas. I've found both Excalidraw and Canvas to be phenomenal (and free) diagramming tools.

I have a list of other diagramming tools on my public Zotero library here.


Excalidraw + Obsidian's infinite canvas core plugin is truly a delight that I'm excited to see develop further. Lots of possibilities for better epistemics/PKM, and even more incredibly underrated for public sense-making/social epistemics in Obsidian.

As a meta point for these types of tutorials, I would recommend a short section on alternative tools with a short discussion of pros and cons for each alternative.

Right now, this feels more like an advert for Excalidraw rather than an open exploration of the options out there.

Yeah, I think you're right and this was a mistake of mine. I picked this list by generating possibilities from friends, Twitter, and my own use, then asking for feedback on an epistemics Slack, and primarily picking the most easy-to-use-seeming option in each category (edited to add: and because I liked the idea of not necessarily picking the absolute best things, but just getting more of this kind of thing used). But it would have been worth doing a little digging into competitors in each category to make sure we weren't missing some good options, and to be able to give more context.

This is a really great point! Thank you for raising it. I'll see about adding it to future posts.

Does anyone know how this differs from similar-sounding options like Miro, Mural and Lucidspark?

I'm doing AGI Safety Fundamentals right now and they use Miro, and I like it a lot; for the purpose of running a class, I'd use Miro over Excalidraw based on my current experience with both. For more general diagram-making, I'm not yet sure, but if you end up having thoughts we'd love to add them to the post.

Great tool; I've enjoyed it and used it for two years. I (a random EA) would recommend it.

Would recommend it, I use it for most of my diagrams & I like it enough to have gotten a premium subscription.

I should clarify that the inspiration to pick Excalidraw originally came from Nuno's recommendation; I then played with it and liked it. Just so that people don't double update :)

I really love this piece of software, but the one thing that really gets me is the inability to take hand-written sections of notes. Sometimes I just need to sketch a graph or a visual representation of something, and Excalidraw is really too clunky for that in my experience.
