Marc Wong

33 karma · Joined · Working (15+ years)


Inspiring people to understand and respect others has widespread and far-reaching impact. I think therefore I am. I listen (love), therefore you are. We understand and respect, therefore we are. We bring out the best in each other, therefore we thrive.
I teach people to see themselves in others, see the best in others, and bring out the best in others.


Obviously, this doesn’t just apply to idols.

Never put ideas, political allegiance, technological innovation, profits, market share, etc., on a pedestal. Always create mechanisms for safety (in AI, genetic engineering, robotics, social media, politics, etc.), even if you don’t anticipate ever needing to use them.

Allow the truth to be multifaceted. Allow yourself and others to be human, capable and flawed. Take advantage of opportunities to understand, respect, and bring out the best in others. This ultimately is the best way to foster cooperation and synergy, and to solve our biggest problems. Be humble. Be grateful for the luck you have received. Never lift yourself by stepping on others.

In general, I like to talk about Can Do attitudes. We don’t use Can Do to talk about simple things. We use it to talk about challenging goals. Examples might include running a marathon, or learning a new language.

Early on, we may convince ourselves that we’ll never run a marathon or learn a new language. We don’t have the time. We don’t have the talent. But when we put aside our preconceived notions, our fears and doubts, make the sacrifices, and do the actual work, we may actually achieve our goals. In the process we discover we have more discipline, and are smarter, stronger, and more resourceful than we ever thought.

Equally important to discover are the following:

  • We can understand why others never attempt, or give up halfway through the pursuit of their “impossible” dreams.
  • We can be kind and forgiving to, and learn from, those who ridiculed or betrayed us.
  • We can apologize and make amends, even when we really don’t want to.
  • We can learn things and change our minds even if it scares us and makes us uncomfortable.
  • We are often more capable of helping others than we realize.

I believe one way to temper fanaticism, extremism, idolatry, polarization, head-in-the-sand “I don’t want to know” attitudes, “we can’t save the world” hopelessness, etc., is to show people how to succeed outside their comfort zones, in as many ways as possible.

This, by the way, is all business-friendly. “Think outside the box”, moon-shot projects, stretch goals, going above and beyond your job description to delight a customer, Nike’s “Just do it”, Home Depot’s “You can do it. We can help.”, and Taco Bell’s “Think outside the bun” all challenge people to do more than they thought they could.

Yes, I think EA and corporations should promote these ideas to inspire the public to take on our biggest challenges.

Karnofsky wrote recently about spreading messages:

I also want to take this opportunity to list some of the things I’d like to communicate with the masses. In no particular order:

  • As long as money and power carry outsized weight in human affairs, our technologies and politics will have a tendency to produce inequality and conflict. Our priorities are not quite aligned with our own long-term interests.
  • We can and should use car safety to raise awareness of other technological risks.
  • Descartes said, “I think, therefore I am.” This can be expanded. “I listen, therefore you are.” “We understand and respect, therefore we are.” “We bring out the best in others, therefore we thrive.”
  • We’ve democratized publishing. We now “lead lives of loud desperation”. The next communication revolution will happen when we democratize understanding, respecting, and bringing out the best in others. It is in individuals’, companies’, and societies’ interest to do this.

I've written about these in more detail here:

Whether it’s a knife, a car, social media, or artificial intelligence, technology is power.

There’s no reason why we shouldn’t use the familiar and mature car safety culture and practices to improve AI (and other technologies’) safety.

This means user training (driver licenses), built-in safety features (e.g., seat belts, air bags), frequent public service announcements, independent and rigorous safety and reliability reviews, rules and regulations (traffic rules), enforcement (traffic police), insurance, development and testing in controlled environments, guards against deliberate or accidental misuse, guards against (large) advances with (large) uncertainties, and promoting safe attitudes and mutual accountability (e.g., rejecting road rage).

If we can’t educate the public, media, technologists, and politicians in simple, engaging terms, and inspire them to take action, then we’ll always be at risk.

Technology is Power: Raising Awareness Of Technological Risks

Hello, All!

I found EA via the New Yorker article about William MacAskill.
I am the author of "Thank You For Listening". 
I listen, therefore you are. We understand and respect, therefore we are. We bring out the best in each other, therefore we thrive.
Go beyond Can Do. We can understand, respect, and bring out the best in others, often beyond our expectations.
We know how to cooperate on roads. We can cooperate at home, at work, and in society. Teach everyone to listen (yield), check biases (blind spots), and reject ideological rage (road rage).
Bringing Out The Best In Humanity

Politicians like to ask, "Are you better off now than you were 4 years ago?"
I like to ask, "Do you understand and respect more people than you did 4 years ago?"
A couple can get rich and still divorce. A country can prosper and still develop many fault lines. But if we deliberately work at understanding and respecting others, we become more moderate, we enjoy better relationships, and we build a better foundation to cooperate with others.

I have a number of related ideas about this. You can read more here:

Carl Sagan was inspiring and a great educator.
We must inspire people to think better (or be less wrong), and bring out the best in others (or do good better). We can't reason or lecture people into changing their behavior.
Here's an example:
If your next car were 10x more powerful, would you want more safety features, traffic rules, and driver training? Would you trust car companies alone to address all risks created by these 10x more powerful cars? What safety features, regulations, and public education will be needed when social media, AI, nanotechnology, robotics, or genetic engineering becomes 10x more powerful? Do you trust companies alone to address all the risks created by new technology?
Perhaps more importantly, what would you do to help humanity become 10x better at being objective, understanding and respecting others, and helping others?

Lastly, (self-promotion coming...) my post about inspiring humanity to be its best:

I've proposed a number of things that I would love to see tested:
- Can we adapt Carl Rogers’s approach to real life? Inspire people to see themselves in others (empathy), see the best in others (positive regard), and bring out the best in others.
- Can we expand Can Do attitudes? Can we put aside our doubts, do the work, and end up understanding, respecting, cooperating with, and bringing out the best in others in unexpected ways?
- Can we use our complex buying behaviors to foster empathy and positive regard? We buy different things, but we often have similar coveting, budgeting, browsing, shopping, and hoarding experiences. Can knowledge and sophistication in buying widgets help people build empathy and positive regard for people who buy doodads?
- Can exposing people to different kinds of biases help them better understand different kinds of discrimination?
- Safe driving allows us to foster altruistic and cooperative behavior on a global scale. Can we expand it to general society? Teach everyone to listen (as drivers yield), check their biases (as drivers check blind spots), and reject ideological rage (as drivers reject road rage).

More details here:

What happens when we create AI companions for children that are more “engaging” than humans? Would children stop making friends and prefer AI companions?
What happens when we create AI avatars of mothers that are as or more “engaging” to babies than real mothers, and people start using them to babysit? How might that affect a baby’s development?
What happens when AI becomes as good as an average judge at examining evidence, arguments, and reaching a verdict?

What if your next car were 10 times more powerful? What new kinds of driver training, traffic rules, and safety features would you think are necessary? What kinds of public education, laws, and safety features are necessary when AI, genetic engineering, or robotics becomes 10 times more powerful? How do we determine the risks?
Bonus question: why is it important to have good analogies so the general public can understand the risks of technology?
