
Polis is a surveying platform designed for finding clusters of people with similar opinions on a topic. Participants submit short text statements (<140 characters), which are sent out semi-randomly[1] to other participants to vote on by clicking agree, disagree, or pass.
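To make the clustering idea concrete, here is a minimal sketch (my own illustration, not Polis's actual pipeline) of how opinion groups can fall out of a vote matrix: votes are coded agree = +1, disagree = −1, pass = 0, the matrix is projected down to two dimensions, and participants are grouped in that reduced space. The toy data and the library choices (numpy, scikit-learn) are just for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy vote matrix: 6 participants x 5 statements (hypothetical data).
# agree = +1, disagree = -1, pass/unseen = 0
votes = np.array([
    [ 1,  1, -1,  0,  1],
    [ 1,  1, -1,  1,  0],
    [ 1,  0, -1,  1,  1],
    [-1, -1,  1,  0, -1],
    [-1, -1,  1, -1,  0],
    [ 0, -1,  1, -1, -1],
])

# Project participants into a 2D "opinion space", then cluster them.
coords = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
print(groups)  # e.g. [0 0 0 1 1 1]: two opinion groups
```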

This post provides links to video and text tutorials for using Polis.

When can Polis be Useful?

  • Identify subgroups of readers/users who you can then target more specifically
  • Understand better the types of people attending your university EA group
  • Survey opinions on how the EA community should be structured
  • Get suggestions for a new project for your organisation that are widely agreed upon in a given community
  • Understand the beliefs of a group of people affected by your project (e.g. AI researchers, parents in Nigeria, people new to EA)

Tutorials

This tutorial from Computational Democracy is a good introduction:

Polis is a wikisurvey, because:

  • the dimensions of the survey are created by the participants themselves
  • the survey adapts to participation over time and makes good use of people's time by showing comments semi-randomly (a possible routing rule is sketched after this list)
  • participants do not need to complete the entire survey to contribute meaningfully
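To illustrate what "semi-randomly" might mean in practice, here is a hypothetical routing rule (my own sketch, not Polis's documented algorithm): weight each comment the participant hasn't voted on yet so that under-voted comments surface more often, while the selection stays randomised.

```python
import random

def pick_comment(comments, voted_ids):
    """Hypothetical semi-random routing: weight each unseen comment by
    1 / (1 + existing vote count), so under-voted comments surface more
    often while the selection stays randomised."""
    candidates = [c for c in comments if c["id"] not in voted_ids]
    if not candidates:
        return None  # participant has already voted on everything
    weights = [1.0 / (1 + c["vote_count"]) for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Toy usage with made-up comment records
comments = [
    {"id": "c1", "text": "EA groups should meet weekly", "vote_count": 40},
    {"id": "c2", "text": "We need more career talks", "vote_count": 3},
]
print(pick_comment(comments, voted_ids=set())["text"])  # usually the under-voted comment
```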

pol.is is the main instance of the technology hosted online, but there are other instances in the wild.

In its highest ambition, Polis is a platform for enabling collective intelligence in human societies and fostering mutual understanding at scale in the tradition of nonviolent communication.

A good tutorial based on a worked example is available here:

The text input for seeding statements can be found in the ‘Configure’ tab once you’ve started your conversation.

It’s usually a good idea to seed around 10–15 diverse comments. This has a powerful effect on early participation. We’ve found that about 1 in 10 people leave a comment (whereas 9 in 10 only agree, disagree, or pass on statements submitted by others). Given this ratio, if there are no statements at the outset, it can take a while for enough statements to build up to make the conversation meaningful, which is an impediment to data collection. So seed away!

Examples

There's one example worth pulling out before the others: vTaiwan. vTaiwan is an experiment in using Polis & other tools to find how opinions group, surface the least controversial statements, and iteratively set agendas for future consultations.

In 2015, Polis was used in combination with an in-person meeting of key stakeholders to make decisions about regulation of Uber in Taiwan. 

Other Examples:

Personal Experience

Polis is very easy to use & intuitive, and I like that users can submit statements for other users to vote on; the core of the tool seems broadly useful.

However, I think its real edge over other survey tools (grouping beliefs) is a little niche. It's useful for large projects like vTaiwan where identifying groups is important for setting meeting agendas, but I'm not sure how important understanding subgroups is in most circumstances where you're soliciting opinions. For instance, it's not clear to me what benefit was gained by constructing subgroups in getting charity suggestions for Charity Entrepreneurship. 

I think Polis is most useful when you need to understand who exactly believes what within a larger group. For instance, understanding whether people with little experience of EA find the phrase "EA Aligned" useful is more actionable than finding how popular it is among all survey respondents. 
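As a toy illustration of that kind of breakdown (entirely made-up numbers, not data from the actual poll), the same votes become more actionable once agreement is split by subgroup rather than averaged over everyone:

```python
import pandas as pd

# Hypothetical votes on "The phrase 'EA Aligned' is useful": agree = +1, disagree = -1
df = pd.DataFrame({
    "ea_experience": ["new", "new", "new", "veteran", "veteran", "veteran"],
    "vote": [-1, -1, 1, 1, 1, -1],
})

print(df["vote"].mean())                           # 0.0 -- overall, the phrase looks neutral
print(df.groupby("ea_experience")["vote"].mean())  # new: -0.33, veteran: +0.33 -- the split that matters
```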

Try it Yourself

Go to this Polis conversation and add your opinions on the phrase "EA Aligned"; if you scroll to the bottom of the page, you can see the groupings that have been created.

If you found that interesting, create a poll! Want to know what the forum thinks about a topic? Make a linkpost for a Polis or post it in the comments here!

 

We've reached the end of this sequence now! I'd appreciate if you could let me know how you found the sequence via Polis or in the comments below. Also, join me in the EA GatherTown tomorrow [02/02/23] at 6pm GMT to discuss the use of Polis!

Thanks for reading, and thanks to everybody who gave feedback throughout!

  1. ^

    This is only important if people don't respond to all the statements, which is especially likely in large polls. I believe the statements are ordered to help clustering.

Comments (3)



I wasn’t aware of this sequence, but I’m glad to see someone working on this topic! That being said, I am disappointed by the apparent lack of reference to Kialo or any other form of what I call argument/debate management systems.

The specific tools were mostly my choice, not done with an eye towards full categorization of the space, but taken from suggestions from people in a Slack about epistemics plus my own experiences. If you wanted to write up a tutorial/explanation, we'd be really excited to have more in this sequence written by others!

Thanks for making tools like this so much more accessible!
