#4 The foundations of humane technology: Centering Values

I’m taking, at my own pace, the free online course “Foundations of Humane Technology” by the Center for Humane Technology (which I first heard of after watching the excellent documentary “The Social Dilemma”). These are my notes.


  1. Module 1 – setting the stage
  2. Module 2 – respecting human nature
  3. Module 3 – minimizing harmful consequences
  4. Module 4 – centering values
  5. Module 5 – creating shared understanding
  6. Module 6 – supporting fairness & justice
  7. Module 7 – helping people thrive
  8. Module 8 – ready to act

The myth of “neutral” tech

“Systemic fragility will persist as long as it is culturally and legally justifiable.”

Center for Humane Technology

The most common justification for technology’s harms is the claim that technology is “neutral” and that whether it’s good or bad depends on how people use it. This is an abdication of responsibility. Technology is not, and cannot ever be, neutral.

Example: In social media (e.g., YouTube, Instagram, LinkedIn, Twitter/X), there is no neutral way to personalize how someone receives information, because of the design choices made when developing the algorithms.

The goal of the humane technologist is to seek alignment (not neutrality, since it cannot exist). Alignment means being in service to the values of the users, the stakeholders, the broader community, etc. Humane technologists align the products they design with their mission statement, and endeavour for their “stated” values and “demonstrated” values to match.

🤔 Personal reflection: The conditions that shaped my present

Pause to consider the conditions and people that have shaped you and your reality: where you were born, whether it was a safe and healthy environment, who taught you to read and write, who helped you become a technologist, which technologies were most influential in your life growing up.

I mentioned in a previous post, about a previous module of the course, that I have been fortunate with regard to where and when I was born, and am aware of my white-and-wealthy privilege. And grateful. From my upbringing and education to the paths I chose, I have been either lucky or well informed in the conscious and unconscious choices I made and the people I met. I see this as a virtuous circle feeding virtue back along the way. Good begets good.

I’m not a technologist myself, but I have been working in tech for precisely half my life now. The kind of tech that begets more tech: web standardization. We seek to develop technology with the most benefits to humanity and the least friction and harm.

Metrics are not neutral either

Metrics drive decision-making in product development, but also the consumption of a product. They offer a partial and biased sense of things, though, and it’s important to keep externalities in mind. For example:

  • Time on site / engagement may lead to addiction, loneliness, polarization
  • Attention-grabbing clickbait may lead to a shift in values, the rise of extremism, or a race to grab the most attention and the loss of sight of simpler yet more meaningful matters
  • Artificial Intelligence / Machine Learning systems may self-reinforce feedback loops, exploit vulnerabilities, and amplify bias

Be metrics-informed but values-driven.

🤔 Personal reflection: Identifying gaps in metrics

Metrics are efficient, and efficiency might make decisions easier and products more profitable. But what if that increased efficiency decreases a human capacity worth protecting?

Perhaps one way to avoid or mitigate unintended harms, or to make harm reduction an explicit goal, would be to rely on enough metrics (not too few and not too many, so as to widen the pool of trends to consider), and to include among the measured elements one or more that derive directly from the product’s stated values (so as to balance things more towards values), as sketched below.
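As a thought experiment, here is a minimal Python sketch of that idea: a release gate that stays metrics-informed but values-driven. The metric names and numbers are hypothetical, not from the course.

```python
# Hypothetical sketch: engagement metrics are reported for information only,
# but a regression in a metric derived from the product's stated values
# blocks the release.

ENGAGEMENT_METRICS = {"time_on_site", "daily_active_users"}  # informative
VALUES_METRICS = {"self_reported_time_well_spent"}  # derives from a stated value

def release_gate(before: dict[str, float], after: dict[str, float]) -> bool:
    """Return True if the release may ship."""
    for name in sorted(VALUES_METRICS):
        if after[name] < before[name]:
            print(f"Blocked: values metric {name!r} regressed "
                  f"({before[name]:.2f} -> {after[name]:.2f})")
            return False
    for name in sorted(ENGAGEMENT_METRICS):
        print(f"FYI: {name} {before[name]:.2f} -> {after[name]:.2f}")
    return True

# Engagement is up, but the values-derived metric fell: the gate blocks.
print(release_gate(
    before={"time_on_site": 0.70, "daily_active_users": 0.60,
            "self_reported_time_well_spent": 0.50},
    after={"time_on_site": 0.82, "daily_active_users": 0.64,
           "self_reported_time_well_spent": 0.41},
))
```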

Harnessing values

  • We turn what we like in the world/society into values
  • Our values shape our work/products
  • Our work/products shape the world/society

🤔 Personal reflection: Understanding values

Consider the life experiences which have shaped the values you hold. What are those experiences, and which values did they shape?

Justice, fairness, and equity is probably the group of values that I’ve held the longest, because my parents based all of their parenting on them, and that has had a durable impact on their children: my twin brother and me. Altruism is probably the value I adopted next, as a result of volunteering at the French Red Cross, which was NOT motivated by altruism at all: I was convinced by my brother to accompany him because he didn’t want to take a first-aid course alone, and he was motivated by the prospect of adding something positive to his first resume. But we received so much in return, in useful teachings, sense of worth, friendships, experiences, etc., that we both joined and stayed committed for years into adulthood. Aspiring to make a difference is a value I picked up at work: being surrounded by so many people (internally, and externally in close circles) who do make a difference is inspiring. Finally, I want to highlight integrity as a value that appeared when I became a parent myself and that has grown with my child, because it’s fundamental to upbringing.

How has your personal history and the values that arise from it shaped the unique way you build products? (If you don’t build products, consider how they’ve informed your career and/or engagement with technology.)

I’m not sure. It may be the other way around: the strict, values-based organizational process for designing web standards has shaped how I engage with technology and the care with which I approach any endeavor.

Once your product is out in the world, it will have unintended consequences and will be shaped by values other than your own. How might your product be used in ways that conflict with your own values? (If you don’t build products, think about something else you’ve previously built, organized, or supported.)

[I don’t have any experience to illustrate this aspect of understanding values]

Genuine trustworthiness

Genuine trustworthiness provides a path where success is defined by values, not markets; where there is a culture of values alignment; and where stakeholders are partners rather than adversaries. Key factors of trustworthiness are:

  • Integrity (intentions/motivations are aligned with stated values)
  • Competency (ability to accomplish stated goals)
  • Accountability (fulfilling integrity and competency directly supports your stakeholders)
  • Education (a result of your work over time is the reduction of the asymmetries of power and knowledge)

Badge earned: Center on values

#3 The foundations of humane technology: Minimizing harmful consequences

I’m taking, at my own pace, the free online course “Foundations of Humane Technology” by the Center for Humane Technology (which I first heard of after watching the excellent documentary “The Social Dilemma”). These are my notes.


  1. Module 1 – setting the stage
  2. Module 2 – respecting human nature
  3. Module 3 – minimizing harmful consequences
  4. Module 4 – centering values
  5. Module 5 – creating shared understanding
  6. Module 6 – supporting fairness & justice
  7. Module 7 – helping people thrive
  8. Module 8 – ready to act

Externalities

“Our economic activity is causing the death of the living planet and economists say, ‘Yeah, yeah, that’s an environmental externality.’ There’s something profoundly wrong with our theories if we’re dismissing or just happy to label the death of the living planet as an externality.”

Kate Raworth, “renegade economist”

Negative externalities are unaccounted-for side effects that result in damage or harm. At small scale they may be acceptable, but aggregated they may be catastrophic. They are external in that the company causing them does not pay the costs. Social, health-related, and environmental costs usually end up borne by society, now or in the future.

Example 1: We can regularly upgrade to the latest exciting smartphone. Externality: 50 million tons of toxic e-waste globally per year (source).

Example 2: The idiots who live in the house behind mine, including their large dog, regularly make a lot of noise, day or night, despite my many complaints over the years. Externality: the additional cost of my cranking up a fan so the resulting white noise covers theirs.

🤔 Personal reflection: Externalities

What are the conversations required to identify externalities in your own work? In what ways has your company, organization, or industry already learned lessons about externalities?

There is a proven feedback loop in place as part of our work process (https://www.w3.org/Consortium/Process/) which ensures that any issue is surfaced during the stages of development of open web standards. Externalities may then become work items themselves, or may remain specific to particular specifications. Conversations take place in the open in our multi-stakeholder forum.

Our organization was founded as an industry consortium a few years after the Web took off, by the inventor of the Web himself, to collaboratively and globally create the protocols and guidelines needed for the web to be a universal and agnostic platform, guaranteeing its interoperability and availability for everyone. The way the consortium has evolved over the years is a testament to lessons learned about externalities.
For example, in 1997, 2.5 years into its existence, W3C launched the international Web Accessibility Initiative (https://www.w3.org/Press/WAI-Launch.html) to remove accessibility barriers for all people with disabilities, with the endorsement of then US President Bill Clinton, who wrote: “Congratulations on the launch of the Web Accessibility Initiative — an effort to ensure that people with disabilities have access to the Internet’s World Wide Web.” (https://www.w3.org/Press/Clinton.html)
25 years later, the W3C’s WAI is still very active, as web accessibility goals evolve just as web technologies are created.

🤔 Personal reflection: Externalities at scale

In an effort to help users celebrate the people in their lives, a photo sharing app unveils a “friend score” that correlates with the people whose posts you most engage with on the app. How might this situation generate serious externalities when scaled to millions or billions of users? For example, how might they impact personal relationships, mental health, shared truth, or individual well-being?

Externalities: I can think of two: 1) What I post is influenced in a way that may either reinforce the feedback loop or break it, but in either case I no longer retain control, because what I post becomes driven by the score. 2) This is likely to alienate me from the subset of friends who do not feature, or feature insufficiently, in the score.

In our economic systems

“unchecked economic growth can destroy more value than it creates”

Center for Humane Technology

A few concepts:

  • Extraction occurs when a company removes resources from an environment more quickly than they are replenished.
  • On the other hand, stewardship is about creating value by helping value thrive in place.
  • To create long-term value, we must balance efficiency and scale with resilience, the ability of a complex system to adapt to unexpected changes. When we steward the complex systems that we rely on, they return the favor by supporting us in unexpected ways when crisis hits.

Over-extracting may lead to collapse

Our economic systems tend to operate at an unsustainably extractive scale:

  • prioritize growth at all costs, at the price of environmental damage or exploitation of labor forces;
  • are based on notions such as “Nature is a stock of resources to be converted for human purposes”;
  • foster competitive behaviours where even companies that understand the harms of over-extraction and wish to chart a different path face a harsh reality: to stay competitive, everyone keeps engaging, even in harmful behavior, not because they want to, but because if they don’t, someone else in the market will.

“Often the best way to escape a [competitive behaviour] trap is to band together and change the rules. Competition must be balanced with collaboration and regulation, especially in places where extraction at scale creates widespread risks.
Just as businesses need markets to compete in, they need movements to collaborate in, especially when those businesses are values-driven.”

Center for Humane Technology

🤔 Personal reflection: Assess externalities

Imagine: Your product, which helps creators, mostly teens, express themselves to their friends and broader public audiences, is extremely good at training people to create compelling content and rewards them socially for doing so, but has contributed to a new externality: reports of severe mental health struggles among influencers (maintaining a large, engaged audience is causing burnout, anxiety, social isolation, etc.). Yet young people are hooked on the idea: “social media influencer” is now the fourth-highest career aspiration among elementary school students.

  1. Scale of the externality. How widespread will it be?
    One out of four is already quite widespread. Compounded with the fact that teens tend to obsess unreasonably over fads, this could spread even more to toxic levels.
  2. Stakes of the impact. For example, is the impact irreversible?
    There is a risk of real harm, including self-harm, given the small number of potentially successful social media influencers, or the relatively short span of the success window. The impact may include lost opportunities to pursue a path of more sustainable livelihood.
  3. Vulnerability of the group or system impacted. How exposed is it?
    Teens are very vulnerable because they are easily influenced. They are at a critical point in their development, on the track to adulthood, where the choices they make are likely to shape and define them in the long term.
  4. Long-term costs. If this externality is left unaddressed, who will bear the cost and how costly will it be?
    Like over-extraction, the value will diminish as the market is flooded. The race to social media influence likely clears a path with less competition for other careers, but for as long as it takes for the race to lose its appeal, many will be left losing while very few succeed.
  5. Paths to reduction. How might less of this externality be created?
    Design more around creating compelling content and less around social rewards.

Badge earned: Minimize harmful consequences

The ultimate goal should not be to have companies pay to mitigate the harms their products create—it should be to avoid creating harm in the first place.

Center for Humane Technology

People and safety over profits, please

When algorithms prioritize content based on engagement, the most harmful and engaging content goes viral.

In the case of social media, it takes many orders of magnitude more effort to verify a fact than to invent a lie. Even a company spending billions on fact-checking will always have its team outnumbered by those creating disinformation. For this reason, creating structural changes to disincentivize disinformation will almost always be more effective than hiring fact checkers to address the problem, even though fact-checking is an important part of the solution.

“Facebook’s Integrity Team researchers found that removing the reshare button after two levels of sharing is more effective than the billions of dollars spent trying to find and remove harmful content.”

Center for Humane Technology, #OneClickSafer campaign
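To make that structural change concrete, here is a minimal Python sketch of a reshare-depth cap. The Post structure and names are hypothetical illustrations of the idea, not Facebook’s actual implementation.

```python
# Hypothetical sketch of the structural change described above: hide the
# reshare button once a post is already two reshare-levels deep.

from dataclasses import dataclass
from typing import Optional

MAX_RESHARE_DEPTH = 2  # the "#OneClickSafer" threshold

@dataclass
class Post:
    author: str
    reshare_of: Optional["Post"] = None

    @property
    def reshare_depth(self) -> int:
        # 0 for an original post, +1 per reshare hop
        return 0 if self.reshare_of is None else 1 + self.reshare_of.reshare_depth

def can_reshare(post: Post) -> bool:
    """People can still copy and repost by hand, but the added friction
    slows the viral cascades that amplify harmful content."""
    return post.reshare_depth < MAX_RESHARE_DEPTH

original = Post("alice")
first_hop = Post("bob", reshare_of=original)
second_hop = Post("carol", reshare_of=first_hop)

print(can_reshare(original))    # True  (depth 0: button shown)
print(can_reshare(first_hop))   # True  (depth 1: button shown)
print(can_reshare(second_hop))  # False (depth 2: button removed)
```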

The Yak-layering problem

If yak-shaving is the masterful art of removing problems one by one until you eventually get to fix what you originally wanted to fix, then yak-layering is the unfortunate piling of unintended consequences on top of each other, which after years may become quite complex, obscured, and difficult to change.

Less is more

Addressing harmful externalities by doing less of the activities that generate them gives humans and our ecosystem a chance to get healthy again. So, while it’s easy to think that more technology can solve our problems, rather than creating a technological solution to address an externality, we can work to reduce the externality itself.

For example, implementing energy efficiency programs or appliances so that we use less energy is often cheaper and more environmentally friendly than generating more energy from renewable sources.

Paradigm shifting

current paradigm → paradigm shifting

  • negative externalities → turn into design criteria
  • profit/growth at all costs → bind scale to responsibility
  • fix tech with more tech → create fewer risks
  • design → design for the better
  • hiding/ignoring externalities → add mitigations to road-map
  • traditional success metrics (KPIs) → align with your values

#2 The foundations of humane technology: respecting human nature

I’m taking, at my own pace, the free online course “Foundations of Humane Technology” by the Center for Humane Technology (which I first heard of after watching the excellent documentary “The Social Dilemma”). These are my notes.


  1. Module 1 – setting the stage
  2. Module 2 – respecting human nature
  3. Module 3 – minimizing harmful consequences
  4. Module 4 – centering values
  5. Module 5 – creating shared understanding
  6. Module 6 – supporting fairness & justice
  7. Module 7 – helping people thrive
  8. Module 8 – ready to act

Technology vs. human nature

As humans, we inherit evolutionary conditioning that developed over millions of years.

Center for Humane Technology

🤔 Not to freak out entirely just yet, but my brain just leapt to this thought: When I think of the damage done over just ten years of asocial behaviour on so-called social media, I really worry about evolutionary conditioning and its possible acceleration in the face of individualism and instant gratification.

⚠️ Heuristics (shortcuts for making decisions, sometimes fast decisions, on incomplete data) may become cognitive biases (such as social conformity biases, where our opinions are shaped by the actions of others), which can be manipulated and exploited for profit or power.

Instead of exploiting these biases for short-term profits, humane technology uses them as design constraints for wise choices.

Center for Humane Technology

An interesting overconfidence bias is the above-average effect: most people think they’re above average, which of course cannot be statistically true. Because of it, we believe we cannot easily be manipulated, influenced, and shaped. The truth is that we make decisions based on context and/or emotions, however great we (think we) are.

For example:

  • What we think is a fair price depends on the first price we hear. This is known as price anchoring.
  • If someone has just read a paragraph containing the word “Ocean,” and then you ask them to identify their favorite clothing detergent, they’re more likely to choose “Tide.” This is known as subliminal cueing.
  • A nauseating environment inclines us toward harsher moral judgments.

🤔 Personal reflection: hijacked biases

Are there any instances in the last day or two you can think of when your brain’s heuristics may have been hijacked, by technology or anything else?

Yes. It happens at least once a day, if not more, that I pick up my smartphone to do something specific, and seeing a particular app icon or a notification totally hijacks my focus. Most of the time I don’t even accomplish the specific thing I had set out to do until after I’ve put the phone down and something else reminds me of what I had intended to do. It makes me wonder how many things I ended up never doing after all 😂

Badge earned: Respect human nature

Design to enable wise choices, NOT to exploit human vulnerabilities

Human vulnerabilities and tendencies need to be factored in as we study user behavior. Ignoring values risks trading what’s right for what works.

When a machine learning algorithm can profile someone, identify their psychological weaknesses, and flood them with information tailored to shift their behavior, manipulation of the many by the few becomes possible in a new and terrifying way.

Center for Humane Technology

Good practice: technologies or products should be capable of:

  • Receiving information from users about their experience
  • Learning how your technology does or doesn’t support user values
  • Evolving towards increasing alignment between platform design and user well-being

To “give people what they want” is often an easy way to avoid moral responsibility for exploiting human desire.

Center for Humane Technology

#1 The foundations of humane technology: setting the stage

I’m taking, at my own pace, the free online course “Foundations of Humane Technology” by the Center for Humane Technology (which I first heard of after watching the excellent documentary “The Social Dilemma”). These are my notes.


  1. Module 1 – setting the stage
  2. Module 2 – respecting human nature
  3. Module 3 – minimizing harmful consequences
  4. Module 4 – centering values
  5. Module 5 – creating shared understanding
  6. Module 6 – supporting fairness & justice
  7. Module 7 – helping people thrive
  8. Module 8 – ready to act

Setting the stage (notes from 2022-07-10)

“The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology.”

Dr. E.O. Wilson, Sociobiologist

🤔 Personal reflection: taking stock of our current situation

Looking at the UN’s Sustainable Development Goals, which of these goals are most important to you? Which challenges represented by these goals have you experienced personally (directly or indirectly)? Which challenges have you been spared from? Which issues have affected your loved ones or communities you care about?

Climate action, as well as peace, justice and strong institutions, are the most urgent goals to me, because I believe they have, respectively, the potential to give humanity the time it needs and the framework it requires to make significant progress on many of the other UN sustainable development goals.

All of the issues are truly important. I am fortunate enough, as a middle-aged white European woman, to have been born in a place and an era where I don’t suffer from poverty or hunger, or a lack of clean water. I have received a quality education, and I am healthy and live in a country where a health system and well-being are key benefits.

Personally, I am directly and negatively impacted by gender inequality. Not as much as the generations before mine, and hopefully more than the generations to come. I teach my teenage son values that put humans first and recognize that women and men are equally important. I made choices for myself and for him that speak to responsible consumption, and I hope and expect he will follow a similar or better path, if not lead others to build a better world.

What role should technology play in achieving the goals you care about most?

People rely more and more on technology, which is a blessing and a curse, because science without conscience can have disastrous consequences. There is hope where technical choices are the individual’s, but that’s not always the case. Technology can empower people through participatory governance. Safe-fails and careful design and planning are key. Technology may play an essential role in each and every one of the UN’s stated sustainable development goals. But before we get there, there needs to be a broader realization of the stakes, and global awareness and the will to put humans first.

Our shared understanding of the world is being polluted just like the Earth. The information we use to inform our choices is being mediated by technology designed to maximize engagement, not collective wisdom.

Center for Humane Technology

Systems thinking (notes from 2022-07-15; 2022-07-17)

Systems thinking is the ability to see the problems and solutions in relationship to one another rather than as isolated concerns.

Persuasive technologies use scientifically tested design strategies to manipulate human behavior towards a desired goal like increasing time on site or user engagement. The creation and amplification of these technologies have been called “the race to the bottom of the brain stem.”

The Ledger of Harms, by the Center for Humane Technology, lists the harms to society that technology platforms have created in the race for human attention, due to the immense pressure to prioritize engagement and growth:

  • The Next Generations :: From developmental delays to suicide, children face a host of physical, mental and social challenges. Exposure to unrestrained levels of digital technology can have serious long-term consequences for children’s development, creating permanent changes in brain structure that impact how children will think, feel, and act throughout their lives.
  • Making Sense of the World :: Misinformation, conspiracy theories, and fake news. A broken information ecology undermines our ability to understand and act on complex global challenges.
  • Attention and Cognition :: Loss of crucial abilities including memory and focus. Technology’s constant interruptions and precisely-targeted distractions are taking a toll on our ability to think, to focus, to solve problems, and to be present with each other.
  • Physical and Mental Health :: Stress, loneliness, feelings of addiction, and increased risky health behavior as technology increasingly pervades our waking lives.
  • Social Relationships :: Less empathy, more confusion and misinterpretation. While social networks claim to connect us, all too often they distract us from connecting with those directly in front of us, leaving many feeling both connected and socially isolated.
  • Politics and Elections :: Propaganda, distorted dialogue & a disrupted democratic process. Social media platforms are incentivized to amplify the most engaging content, tilting public attention towards polarizing and often misleading content. By selling microtargeting to the highest bidder, they enable manipulative practices that undermine democracies around the world.
  • Systemic Oppression :: Amplification of racism, sexism, homophobia and ableism. Technology integrates and often amplifies racism, sexism, ableism and homophobia, creating an attention economy that works against marginalized communities.
  • Do Unto Others :: Many people who work for tech companies — and even the CEOs — limit tech usage in their own homes. Many tech leaders don’t allow their own children to use the products they build, which implies they’re keenly aware that the products from which they make so much money pose risks, especially for young users.

Humanity has always faced difficult challenges, but never before have they been able to so quickly scale up to create truly global threats.

Center for Humane Technology

The leverage points framework

Inspired by Dr. Donella Meadows’ Leverage Points to Intervene in a System, the Center for Humane Technology developed a simplified model of leverage points for intervening in the extractive tech ecosystem. (3-minute video explaining the levers and giving simple examples.)

Leverage increases from left to right on the framework, as does the difficulty of implementing changes.

  1. Design Changes: These are adjustments that technology companies themselves make in the visual design and user experience of their platforms.
  2. Internal Governance: These changes are implemented by decision-makers within platforms to shift how internal systems and structures operate.
  3. External Regulation: This occurs when legislators or regulators pass laws that set requirements related to safety, transparency, or interoperability with competing platforms, or that create liabilities for unsafe business practices or harms.
  4. Business Model: These changes shift the fundamental operations and profit structures of a firm, for example, a company could move to a subscription model with a sliding scale to ensure broad access.
  5. Economic Goal: Redefining economic success can radically alter how systems behave.
  6. Operating Paradigm: Paradigm changes are the highest leverage point and generally the most difficult to shift. They occur when there is a widespread change in core beliefs, values, behaviors, and operating norms.

We cannot meet our biggest global challenges if our technology distracts us, divides us, and downgrades our collective ability to solve problems.

Center for Humane Technology

🤔 Personal reflection: The ledger of harms

Did any of these harms factor into your decision to take this course? Why do they matter to you personally? Are any of these harms new to you and worth deeper contemplation?

I am acutely aware of all of these harms to society. While I have personally experienced only some of them first-hand, I have witnessed all of them unfold increasingly during the past decade or two, in the three decades since the invention of the web by Tim Berners-Lee, who then created the World Wide Web Consortium (W3C), for which I work, with a mission to develop the open web standards that make the web more powerful and helpful to society.

My growing frustration and feeling of helplessness in the face of market interests factored into my decision to take a course on approaching humane technology. W3C and its stakeholders are gradually, albeit slowly, moving from purely technical standards work to values-based systems thinking. The technical work is already anchored around values that put people first: “User needs come before the needs of web page authors, which come before the needs of user agent implementers, which come before the needs of specification writers, which come before theoretical purity.” (This is called the priority of constituencies.) I am watching the social aspect of our work with hope, as there is growing interest among our stakeholders to make “sustainability” a resident among the wide horizontal review subjects, joining the long-term subjects of web accessibility (primarily for people with disabilities), internationalization (for all of the languages and scripts of the world), and security and privacy (for everyone).

Showing people that humane technology is possible can help transform consumer distrust of existing platforms into consumer demand for something else.

Center for Humane Technology

We need humane technology that:

  • Respects, rather than exploits, human vulnerabilities.
  • Accounts for and minimizes negative consequences.
  • Prioritizes values over metrics.
  • Supports the shared understanding necessary for effective democracies.
  • Promotes fairness and justice.
  • Helps humans thrive.

🤔 Personal reflection: acting with leverage

How can your own work help to push a leverage point in a meaningful way?

At my level (which isn’t that of a product designer or technical architect), I strive to influence my peers and colleagues by ensuring we keep being guided by a strong moral and ethical compass, by reminding others of relevant aspects, and in some cases by making proposals for internal discussion.

Where do you see opportunities to push on other leverage points?

The decisions I take for myself are more and more informed by my values, and manifest in whether I choose to support, or not support, certain types of activities, businesses, practices, etc. I evangelize when I can. I teach my teenager the values which I believe make the world a better place. I try to lead by example. Beyond this, I believe that exercising one’s democratic preference is the next best leverage point available to me.


A common pattern in the tech industry is thinking that systemic problems have algorithmic solutions. Sometimes it doesn’t work, per the following xkcd illustration.


Badge earned: Setting the stage