#5 The foundations of humane technology: Creating shared understanding

I’m taking, at my own pace, the free online course “Foundations of humane technology” by the Center for Humane Technology (which I had heard of after watching the excellent documentary “The Social Dilemma”.) These are my notes.


  1. Module 1 – setting the stage
  2. Module 2 – respecting human nature
  3. Module 3 – minimizing harmful consequences
  4. Module 4 – centering values
  5. Module 5 – creating shared understanding
  6. Module 6 – supporting fairness & justice
  7. Module 7 – helping people thrive
  8. Module 8 – ready to act

“Humanity’s biggest problems require a collective intelligence that’s smarter than any of us individually, but we can’t do that if our tools are systematically downgrading us.”

Center for Humane Technology

A shared understanding helps with making sense of and processing noisy information to inform better choices. For example, it’s one’s ability to assess the trustworthiness of claims, catalogue the values held by a community, or find common ground across multiple viewpoints. It can be achieved through responsible journalism, the scientific method, or nonviolent communication.

This module focuses on social media in particular because it warps reality and thus undermines our capacity for shared understanding and effective collaboration.

Beware the common social media distortions

These distortions, or side effects of social media and other information technologies, are manipulation artifacts that people are more or less aware of, and which alter the way people grasp concepts and understand reality.

Engaging content distortion

People’s curated best moments distort our view of their lives, and even of our own, because social media encourages a race in the virality lottery. When the attention landscape is flooded with engaging content, everyone else must mimic it to be heard and seen.

Moral outrage distortion

Algorithms optimize for moral outrage: negativity and inflammatory language spread more quickly than neutral, nuanced, or positive content. This affects television and news, which must keep up to stay relevant. As a result, negative caricatures are disproportionately represented and it feels like there is more outrage than there actually is. Public discourse dominated by artificial moral outrage makes progress impossible.

“Extreme emotion” distortion

This is leveraging subconscious trauma or fears by feeding people content they can hardly look away from.

Amplification distortion

This is when social media trains, and rewards, users to present more extreme views, which then get amplified.

Microtargeting distortion

This is the ability to deliver a specific resonant message to one group while delivering the opposite message to another. If these groups don’t overlap much, the contradictory messaging becomes invisible and social conflict can be orchestrated.

“Othering” distortion

This relates to the “fundamental attribution error” cognitive bias, whereby we are more likely to attribute our own errors to our environment or life circumstances, and others’ errors to their personality. Social media provides ample evidence (which can be, and usually is, strung together by algorithms) of the bad behaviour of particular groups of “others”.

Disloyalty distortion

More than a distortion, it’s really a side effect whereby you may be attacked by members of your own group when you express compassion for, or try to understand, another group.

Information flooding distortion

For example, fake accounts and bots that make topics trend, and thus influence what people are exposed to (hence altering their reality), can be engineered very easily.

Weak-journalism distortion

More than a distortion, it’s a side effect of the tension between substance and attention-grabbing headlines, forcing news organisations to invest less in depth and nuance.

🤔 Personal reflection: Distortions

Think of your work or consider technology products you regularly use in your personal or professional life. Where might they have contributed or amplified any of the distortions? What design features can increase or combat distortion in our shared understanding?

I have fallen more or less strongly for most of the distortions, having been an early adopter of the mainstream social media platforms. After about 15 years of being subjected to the manipulations, I had a late but exponential rise in awareness which spurred me to reclaim my mental health and my social and societal sanity. I consider myself in recovery and find it more useful to protect myself first, and then the people in my immediate circles, by choosing whether to let outside signal and noise creep onto my radar and by approaching information with care. I believe that at-scale, three-pronged methods combining education on bias, design choices that optimise for people’s benefit, and corporate communication may contribute to reducing distortions and improving shared understanding.

Rebuilding shared understanding

“Democracy cannot survive when the primary information source its citizens use is based on an operating model that customizes individual realities and rewards the most viral content.”

Center for Humane Technology

How can design choices steward trust and mutual understanding? How do we get to a healthier information ecosystem?

Fight the race for human attention
  • Helpful friction (such as sharing limits, prompts to read an article before sharing it, or prompts to revise wording when harmful language is detected) can have notable results. According to Twitter, 34% of the people who were prompted to revise their language did so, or refrained from posting.
  • Optimize algorithms to create more “small peak” winners instead of a small number of “giant peak” ones.
  • Apply to online spaces the teachings from physical spaces and how their regulation works (such as blocking political ads, or limiting their targeting precision, in times of elections.)
  • Build empathy by surfacing people’s backgrounds and conditions. Encourage curiosity, not judgement or hate.
  • Invite one-on-one conversations over public broadcast.
  • Provide avenues for de-escalation of online disagreement.
Allow for addressing crises
  • Assume that crises will happen and plan for rapid response.
  • Seek to identify negative externalities among non-users to grow the ability to make design choices that contribute to a more resilient social fabric.
  • Maintain cross-team collaboration so that no bad design decisions are made in isolation. (e.g., are the teams which may be at the origin of harmful features sufficiently in touch with those that mitigate or fix those features?)
  • Enable “blackouts” whereby features that may create harms are turned off during critical periods (e.g., in the lead-up to an election, turning off recommendations, microtargeting, trends, ads, and autocompletion suggestions.)
Heal the years of toxic conditioning and mental habits, recover and re-train
  • Call out the harms so that there is mutual recognition. Public educational materials like the documentary “The Social Dilemma” should be distributed.
  • Teach/learn to cultivate intellectual humility, explore worldviews, reject the culture of contempt.
  • Rehumanize, then de-polarize.
  • Design to reach consensus in spite of / in harmony with differences.
  • Build smaller places for facilitated conversations where participants don’t compete for the attention of large audiences, but can see and enjoy others’ humanity and rich diversity.
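The “helpful friction” idea from the list above can be sketched in a few lines. This is a toy illustration only: the word list, function names, and prompt callback are invented for the sketch, and real platforms use trained classifiers rather than a lexicon.

```python
# Hypothetical sketch of "helpful friction" before posting.
# The lexicon and API below are invented for illustration.

HARMFUL_TERMS = {"idiot", "disgusting", "traitor"}  # toy lexicon

def flags_harmful_language(draft: str) -> bool:
    """Return True when the draft contains terms worth a second look."""
    words = {word.strip(".,!?\"'").lower() for word in draft.split()}
    return not HARMFUL_TERMS.isdisjoint(words)

def publish(draft: str, confirm_anyway) -> bool:
    """Publish immediately, or first prompt the author to reconsider.

    `confirm_anyway` stands in for the UI prompt; it returns True
    when the author insists on posting the draft as written.
    """
    if flags_harmful_language(draft) and not confirm_anyway():
        return False  # author chose to revise or withhold the post
    return True
```

Declining the prompt simply returns the author to the draft, which is exactly the kind of small pause that, per Twitter’s experiment, leads a third of prompted users to revise or refrain.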

Mind the (perception) gap

A perception gap is the body of false beliefs about another party and that party’s beliefs. It leads to polarization. For instance, the perception gap leads to the incorrect belief that people hold views that are more extreme than they actually are. The negative side effect is that people see each other as enemies. Very engaged parties want to win over the others at all costs, while the exhausted majority simply tunes out. Both outcomes are negative for shared understanding (or for democracy, in the case of political polarization.)

Bridging the perception gap is a first step toward minimizing division, finding common ground, overcoming mistrust, and advancing toward progress.

🤔 Personal reflection: The perception gap

The degree of perception gap matches the degree of dehumanization of the other side: the higher the perception gap the more likely someone was to find the other side bigoted, hateful, and morally disgusting. How might you be able to reach out to someone unlike you and better learn about their perspective on key issues? How might the “perception gap” concept and measurements inform the development of technology that creates shared understanding?

Independent and neutral parties who, at critical times like election periods or social upheaval, shed a nuanced light on the gap might bridge polarised parties. Or else, a great deal of zen, because there is a cost in time and energy, and a personal psychological risk, in approaching people who may not see or recognise the good faith of a first step, or be willing to find common ground. Surfacing elements of the perception gap, to the extent that it is known or can be deduced, and displaying these as helpful indicators, would go a long way toward recalibrating one’s perspectives. Proprietary platforms may not always suggest varied sources of information to put forward, but anything that compromises between integrity and profit is a step towards progress.

Badge earned: Create shared understanding

#4 The foundations of humane technology: Centering Values

I’m taking, at my own pace, the free online course “Foundations of humane technology” by the Center for Humane Technology (which I had heard of after watching the excellent documentary “The Social Dilemma”.) These are my notes.


  1. Module 1 – setting the stage
  2. Module 2 – respecting human nature
  3. Module 3 – minimizing harmful consequences
  4. Module 4 – centering values
  5. Module 5 – creating shared understanding
  6. Module 6 – supporting fairness & justice
  7. Module 7 – helping people thrive
  8. Module 8 – ready to act

The myth of “neutral” tech

“Systemic fragility will persist as long as it is culturally and legally justifiable.”

Center for Humane Technology

The most common justification for technology’s harms is the claim that technology is “neutral”, and that whether it’s good or bad depends on how people use it. This is abdicating responsibility. Technology is not, and cannot ever be, neutral.

Example: In social media (e.g., YouTube, Instagram, LinkedIn, Twitter/X, etc.) there is no neutral way to personalize how someone receives information, because of design choices made when developing algorithms.

The goal of the humane technologist is to seek alignment (not neutrality, since it cannot exist). Alignment means being in service to the values of the users, the stakeholders, the broader community, etc. Humane technologists align the products they design with their mission statement, and endeavour for “stated” values and “demonstrated” values to match.

🤔 Personal reflection: The conditions that shaped my present

Pause to consider the conditions and people that have shaped you and your reality, such as where you were born, if it was a safe and healthy environment, who taught you to read and write, who helped you to become a technologist, which technologies were most influential in your life growing up.

I mentioned in a previous post, about an earlier module of the course, that I have been fortunate with regard to where and when I was born, and am aware of my white-and-wealthy privilege. And grateful. From my upbringing and education to the paths I chose, I have been either lucky or well informed in the unconscious and conscious choices I made and the people I met. I see this as a virtuous circle feeding virtue back along the way. Good begets good.

I’m not a technologist myself but I have been working in tech now precisely half my life. The kind of tech that begets more tech: web standardization. We seek to develop technology with the most benefits to humanity and the least friction and harms.

Metrics are not neutral either

Metrics drive decision-making in product development, but also the consumption of a product. They offer a partial and biased sense of things, though, and it’s important to keep externalities in mind. For example:

  • Time on site / engagement may lead to addiction, loneliness, polarization
  • Attention-grabbing clickbait may lead to a shift in values, the rise of extremism, or a race to grab the most attention and the loss of sight of simpler yet more meaningful matters
  • Artificial Intelligence / Machine Learning systems may self-reinforce feedback loops, exploit vulnerabilities, and amplify bias

Be metrics-informed but values-driven.

🤔 Personal reflection: Identifying gaps in metrics

Metrics are efficient, and efficiency might make decisions easier and products more profitable. But what if that increased efficiency decreases a human capacity worth protecting?

Perhaps one way to avoid or mitigate unintended harms, or to make reducing harms a goal in itself, would be to rely on enough metrics, neither too few nor too many (so as to widen the pool of trends to consider), and to include among the measured elements one or more that derive directly from the product’s stated values (so as to balance things more towards values).

Harnessing values

  • We turn what we like in the world/society into values
  • Our values shape our work/products
  • Our work/products shape the world/society

🤔 Personal reflection: Understanding values

Consider the life experiences which have shaped the values you hold. What are those experiences, and which values did they shape?

Justice, fairness and equity is probably the group of values that I’ve held the longest, because my parents based all of their parenting on those, and that has had a durable impact on their children: my twin brother and me. Altruism is probably the value I adopted next, as a result of volunteering at the French Red Cross, which was NOT motivated by altruism at all: I was convinced by my brother to accompany him because he didn’t want to take a first-aid course alone, and he was motivated by the prospect of adding something positive to his first resume. But we received so much in useful teachings, sense of worth, friendships, experiences, etc., that we both joined and committed for years into adulthood. Aspiring to make a difference is a value I picked up at work, being surrounded by so many people (internally, and externally in close circles) who do make a difference; it’s inspiring. Finally, I want to highlight integrity as a value that appeared when I became a parent myself and has grown with my child, because it’s fundamental to upbringing.

How has your personal history and the values that arise from it shaped the unique way you build products? (If you don’t build products, consider how they’ve informed your career and/or engagement with technology.)

I’m not sure. It may be the other way around, where the strict, value-based organizational process of designing web standards has shaped how I engage with technology and the care with which I approach any endeavor.

Once your product is out in the world, it will have unintended consequences and will be shaped by values other than your own. How might your product be used in ways that conflict with your own values? (If you don’t build products, think about something else you’ve previously built, organized, or supported.)

[I don’t have any experience to illustrate this aspect of understanding values]

Genuine trustworthiness

Genuine trustworthiness provides a path where success is defined by values, not markets; by a culture of values alignment; where stakeholders are partners rather than adversaries. Key factors of trustworthiness are:

  • Integrity (intentions/motivations are aligned with stated values)
  • Competency (ability to accomplish stated goals)
  • Accountability (fulfilling integrity and competency directly supports your stakeholders)
  • Education (a result of your work over time is the reduction of the asymmetries of power and knowledge)
Badge earned: Center on values

How I made the Firefox “reader view” popin NOT overlap text

I ❤️ the Firefox feature called Reader View. I’ve been a fan for many years and I use it often. A lot of the time, I click the “listen” button, and I read while the text is being read to me.

BUT, if you leave the popin open (clicking the “listen” button a second time shelves it back) in order to use the pause/skip/speed buttons and slider while you read along, it blocks some of the text as the page automatically scrolls. See what I mean:

Screenshot of a browser page on desktop in reader view with the 'listen' popin showing pause/skip/speed buttons and slider that blocks some of the text from the page.

Because there’s a lot of empty space on the left side in reader view, I thought it should be possible to get the popup drawn over that empty space rather than overlapping the text.

I initially applied this:

/** For the "reader view" popup to NOT overlap the text **/
.dropdown .dropdown-popup {
  inset-inline-start: -260px !important;
}

Then, my colleague Bert Bos (who knows a thing or two about CSS because he co-invented it with Håkon Lie) advised me to use the snippet below instead, so that the toolbar stays at the left margin; as long as that margin is wider than the popup, it will not overlap the text.


/* Put the toolbar in the reader view as far left as possible. */
#toolbar.toolbar-container .toolbar.reader-toolbar {
  margin-inline-start: 1px !important;
}

I’m much happier now; this is how it looks:

Screenshot of a browser page on desktop in reader view with the 'listen' popin showing pause/skip/speed buttons and slider at the left of the page, thus no longer blocking any of the text from the page.

Add a user stylesheet

  • In the Firefox address bar, type 'about:profiles' and click the button “Show in Finder” (or similar if not on Mac OS) next to “Local Directory”
  • Open the “chrome” directory (or create it if it does not exist)
  • Edit the file 'userContent.css' (or create it if it does not exist)
  • Paste the custom snippet and save the file:
    /** Put the toolbar in the reader view as far left as possible. **/
    #toolbar.toolbar-container .toolbar.reader-toolbar {
      margin-inline-start: 1px !important;
    }
  • In the Firefox address bar, type 'about:config' and set 'toolkit.legacyUserProfileCustomizations.stylesheets' to true
  • Open the menu Tools > Browser Tools > Web Developer Tools
  • Click the three-dot menu, and choose “Settings”. Under “Inspector”, check the “Show Browser Styles” checkbox
  • Restart Firefox
  • Enjoy!

Vote it up as an idea in Mozilla Connect

I put it on the Mozilla Idea Exchange and if you think this change should be the default, please vote it up!

#3 The foundations of humane technology: Minimizing harmful consequences

I’m taking, at my own pace, the free online course “Foundations of humane technology” by the Center for Humane Technology (which I had heard of after watching the excellent documentary “The Social Dilemma”.) These are my notes.


  1. Module 1 – setting the stage
  2. Module 2 – respecting human nature
  3. Module 3 – minimizing harmful consequences
  4. Module 4 – centering values
  5. Module 5 – creating shared understanding
  6. Module 6 – supporting fairness & justice
  7. Module 7 – helping people thrive
  8. Module 8 – ready to act

Externalities

“Our economic activity is causing the death of the living planet and economists say, ‘Yeah, yeah, that’s an environmental externality.’ There’s something profoundly wrong with our theories if we’re dismissing or just happy to label the death of the living planet as an externality.”

Kate Raworth, “renegade economist”

Negative externalities are unaccounted side effects that result in damages or harms. At small scale they may be acceptable, but aggregated they may be catastrophic. They are external in that the company causing them does not pay the costs. Social, health-related, and environmental costs usually end up borne by society, now or in the future.

Example 1: We can regularly upgrade to the latest exciting smartphone. Externality: 50 million tons of toxic e-waste globally per year (source).

Example 2: The idiots who live in the house behind mine, including their large dog, regularly make a lot of noise, day or night, despite my many complaints over the years. Externality: the additional cost of my cranking up the fan so that the resulting white noise covers theirs.

🤔 Personal reflection: Externalities

What are the conversations required to identify externalities in your own work? In what ways has your company, organization, or industry already learned lessons about externalities?

There is a proven feedback loop in place as part of our work process (https://www.w3.org/Consortium/Process/) which ensures that issues are surfaced during the stages of development of open web standards. Externalities may then become work items themselves, or may be specific to particular specifications only. Conversations take place in the open in our multi-stakeholder forum.

Our organization was founded as an industry consortium a few years after the Web took off, by the inventor of the Web himself, to collaboratively and globally create the protocols and guidelines needed for the Web to be a universal and agnostic platform, guaranteeing its interoperability and availability for everyone. The way the consortium has evolved over the years is a testament to lessons learned about externalities.
For example, in 1997, 2.5 years into its existence, W3C launched the International Web Accessibility Initiative (https://www.w3.org/Press/WAI-Launch.html) to remove accessibility barriers for all people with disabilities, with the endorsement of then US President Bill Clinton, who wrote “Congratulations on the launch of the Web Accessibility Initiative — an effort to ensure that people with disabilities have access to the Internet’s World Wide Web.” (https://www.w3.org/Press/Clinton.html)
25 years later, the W3C’s WAI is still very active, as Web accessibility goals evolve just as Web technologies are created.

🤔 Personal reflection: Externalities at scale

In an effort to help users celebrate the people in their lives, a photo sharing app unveils a “friend score” that correlates with the people whose posts you most engage with on the app. How might this situation generate serious externalities when scaled to millions or billions of users? For example, how might they impact personal relationships, mental health, shared truth, or individual well-being?

Externalities: I can think of two externalities: 1) What I post is influenced in a way that may either reinforce the feedback loop or break it, but in either case I no longer retain control, because what I post becomes driven by the score. 2) This is likely to antagonize the subset of friends who do not feature, or feature insufficiently, in the score.

In our economic systems

“unchecked economic growth can destroy more value than it creates”

Center for Humane Technology

A few concepts:

  • Extraction occurs when a company removes resources from an environment more quickly than they are replenished.
  • On the other hand, stewardship is about creating value by helping value thrive in place.
  • To create long-term value, we must balance efficiency and scale with resilience, the ability of a complex system to adapt to unexpected changes. When we steward the complex systems that we rely on, they return the favor by supporting us in unexpected ways when crisis hits.

Over-extracting may lead to collapse

Our economic systems tend to operate at an unsustainably extractive scale:

  • prioritize growth at all costs, at the expense of environmental damage, or exploitation of labor forces;
  • are based on notions such as “Nature is a stock of resources to be converted for human purposes“;
  • foster competitive behaviours whereby even companies that understand the harms of over-extraction and wish to chart a different path face a harsh reality: to stay competitive, everyone keeps engaging, even in harmful behavior, not because they want to, but because if they don’t, someone else in the market will.

“Often the best way to escape a [competitive behaviour] trap is to band together and change the rules. Competition must be balanced with collaboration and regulation, especially in places where extraction at scale creates widespread risks.
Just as businesses need markets to compete in, they need movements to collaborate in, especially when those businesses are values-driven.”

Center for Humane Technology

🤔 Personal reflection: Assess externalities

Imagine: Your product, which helps creators, mostly teens, express themselves to their friends and broader public audiences, is extremely good at training people to create compelling content and rewards them socially for doing so, but has contributed to a new externality: reports of severe mental health struggles among influencers (maintaining a large, engaged audience is causing burnout, anxiety, social isolation, etc.) Yet young people are hooked on the idea: “social media influencer” is now the fourth-highest career aspiration among elementary school students.

  1. Scale of the externality. How widespread will it be?
    One out of four is already quite widespread. Compounded with the fact that teens tend to obsess unreasonably over fads, this could spread even more to toxic levels.
  2. Stakes of the impact. For example, is the impact irreversible?
    There is a risk of real harm, including self-harm, given the small number of potentially successful social media influencers and the relatively short span of the success window. The impact may include lost opportunities to pursue a more sustainable livelihood.
  3. Vulnerability of the group or system impacted. How exposed is it?
    Teens are very vulnerable because they are easily influenced. They are at a critical point in their development, on the track to adulthood, where the choices they make are likely to shape and define them over the long term.
  4. Long-term costs. If this externality is left unaddressed, who will bear the cost and how costly will it be?
    Like over-extraction, the value will diminish as the market is flooded. The race to social media influence likely clears a path with less competition for other careers, but for as long as it takes for the race to lose its appeal, many will lose while very few succeed.
  5. Paths to reduction. How might less of this externality be created?
    Designing more around creating compelling content and less around social rewarding.

Badge earned: Minimize harmful consequences

“The ultimate goal should not be to have companies pay to mitigate the harms their products create—it should be to avoid creating harm in the first place.”

Center for Humane Technology

People and safety over profits, please

When algorithms prioritize content based on engagement, the most harmful and engaging content goes viral.

In the case of social media, it takes many orders of magnitude more effort to verify a fact than to invent a lie. Even a company spending billions on fact-checking will always have its team outnumbered by those creating disinformation. For this reason, creating structural changes to disincentivize disinformation will almost always be more effective than hiring fact checkers to address the problem, even though fact-checking is an important part of the solution.

“Facebook’s Integrity Team researchers found that removing the reshare button after two levels of sharing is more effective than the billions of dollars spent trying to find and remove harmful content.”

Center for Humane Technology, #OneClickSafer campaign
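The reshare cap described in the quote above can be sketched with a toy data model. Everything here is invented for illustration (it is not Facebook’s actual implementation): each reshare records its parent, and the one-click reshare button disappears once content is two hops from the original author.

```python
# Toy sketch of capping reshare depth; data model invented for illustration.
from dataclasses import dataclass
from typing import Optional

MAX_RESHARE_DEPTH = 2  # hide the reshare button after two levels of sharing

@dataclass
class Post:
    author: str
    reshared_from: Optional["Post"] = None  # parent post, if this is a reshare

    def depth(self) -> int:
        """Number of reshare hops from the original post."""
        return 0 if self.reshared_from is None else self.reshared_from.depth() + 1

def can_reshare(post: Post) -> bool:
    """One-click resharing is allowed only below the depth cap; spreading
    the content further requires the friction of quoting it manually."""
    return post.depth() < MAX_RESHARE_DEPTH
```

The structural change is tiny: nothing is deleted or fact-checked, but every viral chain now requires a deliberate manual step after two hops.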

The Yak-layering problem

If yak-shaving is the masterful art of removing problems one by one until you eventually get to what you originally wanted to fix, then yak-layering is the unfortunate piling of unintended consequences on top of each other, which may after years become quite complex, obscure, and difficult to change.

Less is more

Addressing harmful externalities by doing less of the activities that generate them gives humans and our ecosystem a chance to get healthy again. So, while it’s easy to think that more technology can solve our problems, rather than creating a technological solution to address an externality, we can work to reduce the externality itself.

For example, implementing energy efficiency programs or appliances so that we use less energy is often cheaper and more environmentally friendly than generating more energy from renewable sources.

Paradigm shifting

current paradigm → paradigm shifting

  • negative externalities → turn into design criteria
  • profit/growth at all costs → bind scale to responsibility
  • fix tech with more tech → create fewer risks
  • design → design for the better
  • hiding/ignoring externalities → add mitigations to road-map
  • traditional success metrics (KPIs) → align with your values