
Excalidraw is a collaborative virtual whiteboard for typed text and sketching. It's really useful for summarising notes & making diagrams for forum posts; it's one of the tools I've used a lot to make these tutorials! Its key attractions are simplicity and easy collaboration. It’s designed for quickly creating simple graphics, and has a ‘sloppy/whiteboard’ style.

This post is a simple tutorial on how to use Excalidraw. The landing page also has a pretty good introduction to the basic features.

When can Excalidraw be useful?

  • Visually summarise an argument 
  • Summarise notes on a topic
  • Build a logic model to help you think through a problem
  • Have a debate over Zoom, but draw diagrams for one another in real-time to help explain your point
  • Represent the AI governance community (or the whole AI safety community) diagrammatically

Video Guide

[Video: embedded video walkthrough]

Text Guide

Basic Features

[Image: overview of Excalidraw's basic features, made in Excalidraw]

If you’d prefer a worked example, you’re looking at one! The above image was made in Excalidraw. Here’s another:

[Image: a diagram re-created in Excalidraw]

The point here isn’t that this is a perfect, wonderful diagram that’s much better than the original; it isn’t. The point is that this took 5-10 minutes to make without having to spend time learning a complex tool, and could be done collaboratively.

Collaboration & Export

In the top right box, you can export your whole drawing either as a file to load back into Excalidraw or as an image. You can press the ‘Live Collaboration’ button to start a session. 

Once you start a session, you can copy a link which gives anybody access to real-time collaboration on your current board. 

When any member leaves the session, they ‘take a version with them’, which they can keep editing locally. 

If you want to keep your whiteboard and re-use it later, you can save it either as an Excalidraw file (editable later) or an image file. 
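If you’re curious, the saved Excalidraw file is just plain JSON describing your scene. Here’s a minimal sketch of the structure, assuming the layout used by files exported from excalidraw.com; the exact fields vary between versions, so treat the names as illustrative rather than a complete schema.

```typescript
// Minimal sketch of an .excalidraw file's JSON contents (assumed layout,
// based on files exported from excalidraw.com; not a complete schema).
const exampleScene = {
  type: "excalidraw",
  version: 2,
  source: "https://excalidraw.com",
  elements: [
    {
      id: "rect-1",                  // hypothetical element id
      type: "rectangle",
      x: 100,
      y: 100,
      width: 200,
      height: 80,
      strokeColor: "#1e1e1e",
      backgroundColor: "transparent",
      roughness: 1,                  // part of what gives the hand-drawn look
    },
  ],
  appState: { viewBackgroundColor: "#ffffff" },
};

console.log(JSON.stringify(exampleScene, null, 2));
```

Because it’s just JSON, these files are easy to version-control or share alongside a post’s source.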

Libraries

You can also import libraries of icons made in Excalidraw using the book icon at the top of the screen (to the right of the other tools). There are only a few libraries available right now, but it can be useful for common icon sets.
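For the curious, a library is also just a JSON file, with an .excalidrawlib extension. Here’s a rough sketch of the shape, hedged the same way as above since the library format has changed between versions and these field names are assumptions:

```typescript
// Rough sketch of an .excalidrawlib library file (assumed layout;
// illustrative only, not a complete schema).
const exampleLibrary = {
  type: "excalidrawlib",
  version: 2,
  libraryItems: [
    {
      id: "server-icon",   // hypothetical item id
      status: "unpublished",
      elements: [],        // the Excalidraw elements that make up the icon
    },
  ],
};

console.log(JSON.stringify(exampleLibrary, null, 2));
```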

Worked Examples

These are both examples I actually used! The first I made live with somebody while I was running a tutorial on the topic; the second I made as a quick summary for a discussion group. I've linked the source for the second, but the first was off the top of my head.

Summarise a topic (medical causes of headache):

[Image: Excalidraw diagram summarising medical causes of headache]

Summarise a topic (patient philanthropy):

[Image: Excalidraw diagram summarising patient philanthropy]

Personal Thoughts

I really like Excalidraw! It's a very polished implementation of a quick sketching tool, and the collaborative features, like the rest of Excalidraw, work exactly how I'd want them to without any fiddling.

At the moment, the icon libraries feature is pretty sparse, consisting mostly of UI and software architecture symbols, but there's nothing stopping you from publishing EA-related symbol libraries!

I was mildly irritated that there's no option to save canvases in the web app, which it turns out is behind a $7/month paywall. This might be worth it if you're using it a lot, but the offline saving options are pretty comprehensive. Maybe consider subscribing to support the tool's continued existence.

Finally, Excalidraw is a somewhat limited tool, not suited to really complex diagrams, but that isn't really its intended purpose. If you want to make more precise or detailed diagrams, I can personally recommend Affinity Designer, and GIMP is a reasonable free alternative. This space is pretty saturated, but the alternatives all (to my knowledge) have a much steeper learning curve than Excalidraw, and I'm not aware of more powerful tools with equivalent live collaboration.

Try it Yourself!

Try making a sketch of how your job leads to impact! Share it in the comments, so people can compare different jobs & different styles of diagram.

We'll also be running a session on Monday the 30th at 6pm GMT in the EA GatherTown to discuss Excalidraw and do a short exercise!

On Monday: a post discussing Squiggle, a coding language for model-building!

Comments (14)



I bookmarked it and found a chance to use it already today! In general, I appreciate finding out about tools for rational discourse enhancement. Encouraging others to post them even if they don't get too many upvotes. 

There also exists an Excalidraw plug-in for Obsidian which I personally found very valuable.

Hi Nico,

You might also want to try the recently released Obsidian Canvas. I've found both Excalidraw and Canvas to be phenomenal (and free) diagramming tools.

I have a list of other diagramming tools on my public Zotero library here.


Excalidraw + Obsidian's infinite canvas core plugin is truly a delight that I'm excited to see develop further. There are lots of possibilities for better epistemics/PKM, and it's incredibly underrated for public sense-making/social epistemics in Obsidian.

As a meta point for these types of tutorials, I would recommend a short section on alternative tools with a short discussion of pros and cons for each alternative.

Right now, this feels more like an advert for Excalidraw than an open exploration of the options out there.

Yeah, I think you're right, and this was a mistake of mine. I picked this list by generating possibilities from friends, Twitter, and my own use, then asking for feedback on an epistemics Slack, and primarily picking the ones in each category that seemed easiest to use (edited to add: and because I liked the idea of not necessarily picking the absolute best things, but just getting more of this kind of thing used). It would have been worth doing a little digging into competitors in each category to make sure we weren't missing some good options, and to be able to give more context.

This is a really great point! Thank you for raising it. I'll see about adding it to future posts.

Does anyone know how this differs from similar-sounding options like Miro, Mural and Lucidspark?

I'm doing AGI Safety Fundamentals right now and they use Miro, and I like it a lot; for the purpose of running a class, I'd use Miro over Excalidraw based on my current experience with both. For more general diagram-making, I'm not yet sure, but if you end up having thoughts we'd love to add them to the post.

Yep, Excalidraw is great! I also used it to make this post:

https://www.lesswrong.com/posts/TvrfY4c9eaGLeyDkE/induction-heads-illustrated

Great tool; I've enjoyed it and used it for two years. I (a random EA) would recommend it.

Would recommend it, I use it for most of my diagrams & I like it enough to have gotten a premium subscription.

I should clarify, just so that people don't double update, that the inspiration to pick Excalidraw originally came from Nuno's recommendation; I then played with it and liked it :)

I really love this piece of software, but the one thing that really gets me is the inability to take hand-written sections of notes. Sometimes I just need to sketch a graph or a visual representation of something, and Excalidraw is really too clunky for that in my experience.
