
Epistemic status: an educated guess, inspired by my experience in social media startups and non-profits, by the recent discussion about activism on Mastodon, and by many discussions in Russia, Switzerland, and the Bay Area about social media, activism, polarization, and mental health online.

The writing is intentionally vague in places. This is an unfinished proposal that I hope we will all work on together. I feel that to make something like this good for all of us, all of us need to design it.

This can be seen as part of the work on the cause of "aligning recommender systems with human values".

If you do not like things that are not well defined, feel free to ignore this post instead of downvoting it. And if here is not the right place to post this, what would be?

For those who are still here, let's brainstorm!

Ask questions if some parts need more explanation!

There is a bit of text in the Prequel that could read as self-promotion. I believe the details are important for the motivation of the post.

Note that I am probably a bit more leftist than most people here; I hope this doesn't blur the discussion of the idea itself. The proposal accounts for both left- and right-leaning views and discusses how they can coexist more nicely.

License: Creative Commons Attribution

Credits: Sergia Volodin (writing and thinking about all this), Association Tournesol (first part of the story), Fave Technologies AG (worked there when thinking of the second part of the story), Mastodon conversations (tbd), discussions at EPFL and in the Bay Area.

Free to share and comment upon! Comments and feedback are welcome.

Prequel 

First, I describe my motivation, which comes from my past experience.

Social media algorithms are kind of like central planning, with a similar set of issues. Why do I believe that? Here in Switzerland, we built an academic social media non-profit, https://tournesol.app. Its algorithm uses more classical ML, so it is simple to interpret and study. It is (to my knowledge) the first alternative recommender engine for YouTube, acting as middleware, one of the proposals to improve social media, built with ethics, security, and robustness in mind. Yet, when comparing hyperparameters, analysis shows two kinds of videos: "laminar" ones (which don't change their position in the ranking much) and "turbulent" ones (which change their position a lot; the red points in Figure 8 on page 12 of our first paper). Basically, if there are two groups of people, one saying "show this video" and another saying "don't show this video", any social media platform has to decide something, for everyone: whom to listen to, what to show, and so on. For big platforms like Facebook, this seems to cause a never-ending political, legal, and social nightmare, both for the company and for people outside :( The algorithm uses state-of-the-art Byzantine-resilient ML, and this problem still exists; it's a human problem, not a tech problem. The non-profit went one way to solve this, and it is one path: create a better algorithm.

My thinking instead was: humans have to decide; otherwise, it's going to be a never-ending drama like the ones FB or YouTube have, in some form or another. (Every month there is some lawsuit or other against one big social media company or another, in some country or another, where improper moderation, a lack of it, or too much of it led to some violence or some misfortune, with proof of a cause-and-effect relationship or without. There are news stories like that in all shapes, colours, and sizes...)

This just reminds me of the USSR's central planning and the same kinds of never-ending issues, all coming from what central planning is: some people who have never seen a shoe factory in real life sit in an office building in a big city, busy making a plan on some computer about which shoe factories to shut down and which to keep running... Basically, some centralized entity tries to decide "for everyone", and it inevitably fails: the world is just too complex, and the entity has, as we all do, limited computational abilities, let alone alignment properties. There are always outliers for deep-learning-based self-driving cars. There are always things that the people who design the system just do not know of. Maybe we should decide for ourselves? Do we really need to build the algorithm in the first place?

I feel this could be done in a much easier way; otherwise, we'll never see the end of any of this... What if there were no algorithm?

Mastodon already promotes something like that. Can we take it further and add functionality, and consent, to make it nice?

What is important, I feel, is that we do it together. How do you feel about this? What do you want to see online? Most of it has not changed much in 20 years (feeds, following, algorithms for recommendations as a concept); there must be a better way. Let's create it together.

Direct Sharing

Overview of direct sharing. We had this idea, Direct Sharing, at the next social media startup I was at (Fave). Unfortunately, I left before we could try it, so we agreed to release all of this under Creative Commons Attribution.

What if there is no algorithm, just people recommending things to each other?

We did try it via Instagram stories: people were happy to share their stories with us and recommend stuff to us, and those were interesting to read and try. Basically, people recommend things to each other; we like to see that, and they like doing it!

I enjoy recommending things to people and getting feedback. I like it when a friend recommends me a book or tells me something I did not know of. Maybe we can just do it like this and not need an algorithm at all?

See the picture and text below for a more-or-less concrete proposal. None of it is tested at scale. Feedback is welcome!

For Mastodon specifically, this applies to the "Federated" timeline -- there are way too many things to show in the world, so we have to choose. For major social media platforms, this applies to... everything :) Can people choose instead of a cold-blooded algorithm?

[Image: how direct sharing looks for a sender and for a receiver]

How it looks for a sender and for a receiver. For a person who wants to recommend something, there is a new pipeline: instead of sharing something "into the Universe", one can recommend it to specific communities or people. Machine learning can be used to suggest those people or communities; by default it does not execute anything by itself (as is already the case), it only suggests. On the other end, the receiving end, the person sees who the recommendation came from. Instead of receiving things "from the algorithm", they receive them from their friend, from a fellow fan of their music band, or even from their political opponent (I bet a lot of politicians would love to recommend stuff to their own party and to other parties... :). See the picture above!
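To make the pipeline concrete, here is a minimal sketch in Python of what a direct recommendation could look like as data. All the names (`Recommendation`, `recommend`, the fields) are hypothetical, made up for illustration; this is a sketch, not a spec.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Recommendation:
    """One human recommending one item to one recipient; no algorithm decides delivery."""
    item_url: str    # the post, video, article, ... being recommended
    sender: str      # always visible to the receiver ("recommended by Sergia")
    recipient: str   # a specific person or community, never "the Universe"
    note: str = ""   # optional "why I think you'll like this"
    sent_at: datetime = field(default_factory=datetime.now)

def recommend(item_url: str, sender: str, recipients: list[str], note: str = "") -> list[Recommendation]:
    # ML may *suggest* recipients elsewhere in the UI, but nothing is sent
    # until the sender explicitly calls this with their chosen recipients.
    return [Recommendation(item_url, sender, r, note) for r in recipients]
```

The design point is that the recipient list is always an explicit human choice, never a side effect of an engagement model.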

Requesting recommendations. One can ask a person to recommend them something: a movie, an account, an article, anything! The recommender could be paid for that, as one of the options.

Consent (applicable to indirect sharing too). Consent preferences are really important here. As an activist, I notice that a lot of people who don't really want to see political posts, who don't read them, and who don't learn anything from them are bombarded by political stories and news. On the other hand, there are a lot of people (like me) who want to discover new people in that domain but just don't know how: most of the interesting people I follow, I actually met in real life, or they are close in the social graph to someone I met, *not* via the algorithm! Or I saw them replying to someone, deep inside the threads. Again, NOT recommended by the algorithm, at all. It simply did not show them to me.

All this shows how algorithmic feeds break our consent every day: at any specific moment, we don't see a lot of what we want to see, and we see things we don't want to.

Instead, a person could select, in fine-grained detail, how many posts from a topic, a person, a community, or a country they receive per day. For example, sometimes the news from the U.S. is just too much and way too frequent. I could mute it or blur it. There could be a friendly interface for that, like swiping left to mute, and a warning if too many important topics have been muted for way too long.

Fine-grained consent is also important. For example, when I am hungry or angry, I am no good at reacting to political news: I will probably write something not nice, with no value created. On the other hand, if I'm rested, full, and happy, I might donate to a charity, reply with helpful things, or recommend the post to someone who can help. I could swipe left when I'm tired and not see polarizing posts. All the muted polarizing, scary news could be queued and read later, when the person is ready: like a "do not disturb" mode or a "safe space" mode.
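Here is a minimal sketch of what these consent preferences could look like in code, assuming a per-topic daily cap and a "not now" queue. The class name, the topic taxonomy, and the cap semantics are all assumptions for illustration, not a real design.

```python
from collections import defaultdict, deque

class ConsentFilter:
    """Sketch of fine-grained consent: per-topic daily caps plus a 'do not disturb' queue."""

    def __init__(self, daily_caps: dict[str, int], do_not_disturb: set[str]):
        self.daily_caps = daily_caps          # e.g. {"us_news": 1, "music": 20}
        self.do_not_disturb = do_not_disturb  # topics muted "for now" (tired, hungry...)
        self.seen_today = defaultdict(int)
        self.queued = deque()                 # muted items wait here; read when ready

    def deliver(self, item, topic: str) -> bool:
        if topic in self.do_not_disturb:
            self.queued.append(item)          # queued, not lost: consent, not censorship
            return False
        if self.seen_today[topic] >= self.daily_caps.get(topic, float("inf")):
            return False                      # over today's cap for this topic
        self.seen_today[topic] += 1
        return True

# Example: at most one U.S. news item per day; politics muted until I'm rested.
me = ConsentFilter(daily_caps={"us_news": 1}, do_not_disturb={"politics"})
```

The queue is what makes this a "do not disturb" mode rather than a filter bubble: nothing disappears, it just waits for a moment when the person can actually engage well.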

Consent also applies to discoverability in the "who to recommend to" screen. Some people do not wish to be found, for example survivors of harassment (thanks to a conversation on Mastodon for pointing this out).

Meta: note how this feature was suggested by a member of a community different from mine. I had no idea it was even needed! Given how conversations with strangers are likely to lead to surprises, I call on you to describe what you need online. There's a book on this, Design Justice.

Some people just want their friends to comment on what they are eating right now. Direct-message etiquette (usually) prohibits just sending random stuff to people. "Feeds" are not like that: by default, no action is required, which is exactly right for commenting on someone's dinner. If someone mass-emails or mass-DMs a picture of their dinner to all their friends, that is (conventionally) not OK. If they recommend it to their friends, that is much better.

Inboxes ("feeds", "attention markets"), and fans ("viewpoint representatives"). Since there are way more things online than we can mentally deal with (or want to deal with, or would have wanted to deal with - none of us know what's out there!), we have to choose some things over others. Currently, it is chosen by a cold-blooded algorithm... That leads to issues described in the Prequel.

We could make a map of all the things online and choose from that; that is one way to do it. Another way is to have actual humans choose things. As we saw before, it feels good to do that!

To send something to a person whose attention is overwhelmed (a celebrity, business person, scientist, CEO, artist, musician, etc.), one could either pay some amount ("attention market") or get a "go" from one or more humans with a certain viewpoint (a "representative" of that viewpoint, or a "fan") who check and "approve" the post ("market regulation"). With no regulation, there would probably be a lot of ads and scams.

For example, say I want to send my painting to an artist to ask how they feel about it, whether it inspires them or not. I can recommend it to them, so that it appears in their feed ("inbox"; can we not say "feed", it sounds like we're animals) marked "recommended by Sergia". If the artist likes money and libertarianism, I'd probably have to pay for that: pay to recommend. If the artist does not like libertarianism (duh), they can appoint representatives of different viewpoints who all need to agree before the post reaches them. Mixing the options is possible too: a "partially regulated market".
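A sketch of how that admission rule could work, under assumed semantics (the function name, parameters, and the pay-or-approval rule are all illustrative; real rules would be set by the recipient):

```python
def admit_to_inbox(paid: float, approvals: set[str],
                   price: float, representatives: set[str]) -> bool:
    """Sketch of a 'partially regulated attention market'.

    A recommendation to an overwhelmed person gets through if the sender paid
    the recipient's price (pure 'attention market'), or if every viewpoint
    representative the recipient appointed has approved it ('market regulation').
    """
    paid_enough = price > 0 and paid >= price
    approved = bool(representatives) and representatives <= approvals
    return paid_enough or approved
```

Whether payment and approval combine with "or" (either is enough) or "and" (pay *and* get approved) would itself be one of the recipient's settings; that is the "mixing" mentioned above.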

Music fans all try to send their covers to the singer. Actors want to show their demos to producers. Scientists want to show their papers to other researchers.

Fans ("viewpoint representatives"). Usually, every community has its devoted people -- like, fans of music literally think about it every day. There are political people who know a lot about their parties or movements. There are activists who know about the problems they work on. Some of them are really interested in the health and well-being of their community and basically even do it for free sometimes. Those people are "in the know". They know what is up. The recipient of recommendations ("regular person") can subscribe to them, can ask them for recommendations. People who are overwhelmed can nominate fans to only send them the most important things. Celebrities can nominate those to "regulate" their "attention markets"

For example, I can say that I want the single most important post about US news for today. I ask one of the fans of that topic, and they recommend it to me. I can choose whom I ask.
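As a sketch of that request flow (the `fan.pick` interface is entirely hypothetical; the point is that a human I chose, not an algorithm, does the ranking, optionally for payment):

```python
def request_recommendation(fan, topic: str, n: int = 1, offer: float = 0.0):
    """Ask a chosen human (a 'fan' of the topic) for their top-n picks.

    `fan` is any object exposing a pick() method; `offer` is an optional
    payment, covering the 'they could be paid for that' option above.
    """
    return fan.pick(topic=topic, n=n, payment=offer)
```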

Financial side. Some people spent a lot of money to send a statue of Elon Musk to his offices. They could spend that money to recommend a post into his timeline instead. For this crowd, money-based attention seems less problematic, and I doubt they would want to regulate that market for themselves (?). They could definitely pay for all of the hosting with their activity!

A person can choose how their incoming "attention market" works: how much human moderation by viewpoint representatives (or, put simply, fans) there is, how much it depends on money, whether there is an upper bound per post, etc.

This is like ads, except one can regulate their own attention market with viewpoint representatives ("fans") that they choose, up to the point of not allowing any paid posts at all.

If a person does not have money (say, a struggling artist wants a famous one to check out their painting), crowdfunding is possible. Basically, each person determines how their attention market works, how much it is regulated, and whether there is any money in it at all.

Sheriffs and community moderation. Like the "sheriff" of a small town, some of the "fans" could do moderation for their community. They can be elected by the community, which avoids the issues of centralized planning. Those people could be nominated together by the community, or by its "celebrities".

No complications for the "regular person". The regular person doesn't really have to know the details of how all of this works, exactly as a person in real life doesn't really have to know who their sheriff is or how their government works. It is the other way around: if they do something that is not nice, then they WILL meet the sheriff. In the same way, a person in the real world doesn't need to know how the stock market works in order to buy a sandwich. On the other hand, if the "sheriff" doesn't do their work well, there's the next election to replace them. If someone doesn't like the recommendations they are receiving, they can change who recommends things to them, and how, and change the parameters of their "attention market". There could be some easy presets (a full spectrum from "libertarian" to "tight hippie community"), as sketched below. Consent preferences could have a friendly interface.
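Two hypothetical presets spanning that spectrum might look like this; every name and number here is illustrative only, not a proposed default:

```python
PRESETS = {
    "libertarian": {
        "price_per_post": 5.0,          # anyone may pay to recommend into my inbox
        "representatives": [],          # no human gatekeepers
        "daily_caps": {},               # no per-topic limits
    },
    "tight_hippie_community": {
        "price_per_post": None,         # money buys no attention here
        "representatives": ["friends", "community_sheriffs"],
        "daily_caps": {"politics": 3},  # gentle per-topic limits by default
    },
}
```

A regular person would just pick a preset; everything above (markets, representatives, caps) stays under the hood unless they want to tune it.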

What I am uncertain of:

  • Will all of the communities self-regulate like this? That is, will there be enough people who recommend things?
  • How do we get better exploration? For example, I'm thinking about where to post this proposal. On the one hand, there's the whole internet, the whole of humanity (well, some of it :). On the other hand, I only know a small part of all that. Maybe I could ask, say, a university professor to recommend a good place for it? How can one discover new people? In our proposal, we can ask someone to recommend us someone! For example, I can ask a musician to recommend me good music classes. What are other ways to do this?
  • On the one hand, the proposed method deals with polarization this way: political parties can recommend quality content to each other. In our setting, each person has only limited attention, and everyone knows that, which will probably make people think before they recommend and send only meaningful messages. On the other hand, what if everyone sets their consent preferences to 0 for the "other side"? It seems the whole of humanity, both online and offline, rests on the fact that this does not happen: that we still want to talk to each other. This is more of a social question than a tech question.
  • In the libertarian, unregulated version of this "attention market", do we select the post with the most money paid for it? Is there a better way to do this based only on money, with no human moderation?
  • In the "regulated attention market", with fans or "viewpoint representatives" giving a "go" for a busy person to see a recommended post, how do we make sure the busy person gets some diversity of views? And what if they do not wish to see a diversity of views and only want posts from a specific information bubble: do we allow that, or do we include some basic minimum (say, at least one post from outside the bubble per week) in the consent preferences?
  • What are other ways consent online could be improved?

I'm thinking of implementing something like this on Mastodon eventually (it is open-source). So, here's an EA analysis:

Neglectedness: nobody seems to be doing social media like this.

Scale: all Mastodon users who consent to be a part of this; all of social media if this really works well.

Tractability: about a year of work for a few engineers, then tons of iterations, checks, and balances.

Positive impact: hopefully this will make lives online more consensual, informative, and happy, reduce polarization, and bring people closer to each other (a hypothesis). If it works, other social media platforms will probably copy it (which is good; the CC Attribution license was chosen exactly for this).

What is important, I feel, is that we do it together. How do you feel about this? What do you want to see online? None of it has changed much in 20 years (feeds, following, algorithms for recommendations as a concept); there must be a better way. Let's create it together.

Comments

I see downvotes after my other post. Is this a "halo effect"? :)

Can there be objective feedback?

Or, how is this proposal linked to something else I said, which people here apparently mostly don't like?

Thank you.