Highlights: “We need to rewild the Internet”

Today I read We Need To Rewild The Internet by Maria Farrell and Robin Berjon (April 16, 2024), and below are my personal annotated highlights from it.

Subtitle of the article, to set the tone:

The internet has become an extractive and fragile monoculture. But we can revitalize it using lessons learned by ecologists.

Concept of ‘shifting baselines’ which is useful in many other contexts when considering ‘change’:

As Jepson and Blythe wrote, shifting baselines are “where each generation assumes the nature they experienced in their youth to be normal and unwittingly accepts the declines and damage of the generations before.” Damage is already baked in. It even seems natural.

Rewilding vs. timid incremental fixes that afford no true progress:

But rewilding a built environment isn’t just sitting back and seeing what tender, living thing can force its way through the concrete. It’s razing to the ground the structures that block out light for everyone not rich enough to live on the top floor.

Deep defects run deep:

Perhaps one way to motivate and encourage regulators and enforcers everywhere is to explain that the subterranean architecture of the internet has become a shadowland where evolution has all but stopped. Regulators’ efforts to make the visible internet competitive will achieve little unless they also tackle the devastation that lies beneath. 

Public utilities need to be recognized as such, and funded as such (including the non-profit organization I work for, W3C, which develops standards for one application of the Internet: the Web):

[W]e need more publicly funded tech research with publicly released findings. Such research should investigate power concentration in the internet ecosystem and practical alternatives to it. We need to recognize that much of the internet’s infrastructure is a de facto utility that we must regain control of.

Better ways of doing it (also, read as a pair with the concluding paragraph of the article which I labeled ‘manifesto’):

The solutions are the same in ecology and technology: aggressively use the rule of law to level out unequal capital and power, then rush in to fill the gaps with better ways of doing things.

Principled robust infrastructure:

We need internet standards to be global, open and generative. They’re the wire models that give the internet its planetary form, the gossamer-thin but steely-strong threads holding together its interoperability against fragmentation and permanent dominance.

Manifesto (which I read several times and understood more of each time):

Ecologists have reoriented their field as a “crisis discipline,” a field of study that’s not just about learning things but about saving them. We technologists need to do the same. Rewilding the internet connects and grows what people are doing across regulation, standards-setting and new ways of organizing and building infrastructure, to tell a shared story of where we want to go. It’s a shared vision with many strategies. The instruments we need to shift away from extractive technological monocultures are at hand or ready to be built.


I am looking forward to a piece (or a collection of pieces) that will speak to people in a way they can hear [and understand], and that moves them to make different choices.

Much as I am doggedly and single-mindedly making different, sensible ecological choices for the planet, while I look forward to people being moved at last to durably do the same.

My thanks to Robin (who I know and work with) and to Maria (who I’d like to know now)!

Day-to-day work at W3C

Last week, an author writing an article for IEEE asked me for a high-level introduction to the World Wide Web Consortium and what its day-to-day work looks like.

Most of the time when we get asked, we pull from boilerplate descriptions, and/or from the website, and send a copy-paste and links. It takes less than a minute. But every now and then, I write something from scratch. It brings me right back to why I am in awe of what the web community does at the Consortium, and why I am so proud and grateful to be a small part of it.

Then that particular write-up becomes my favourite until the next time I’m in the mood to write another version. Here’s my current best high-level introduction to the World Wide Web Consortium and what its day-to-day work looks like, adorned with home-made illustrations I showed during a conference talk a few years ago.

World Wide What Consortium?

The World Wide Web Consortium was created in 1994 by Tim Berners-Lee, a few years after he had invented the World Wide Web. He did so to place the interests of the Web in the hands of the community.

“If I had turned the Web into a product, it would have been in people’s interest to create an incompatible version of it.”

Tim Berners-Lee, inventor of the Web

So for almost 28 years, W3C has been developing standards and guidelines to help everyone build a web that is based on crucial and inclusive values: accessibility, internationalization, privacy and security, and the principle of interoperability. Pretty neat, huh? Pretty broad too!

From the start W3C has been an international community where member organizations, a full-time staff, and the public work together in the open.

Graphic showing that the public and Members contribute to 52 working groups, and that 56 people on the W3C staff help create web standards, of which there were 400 at the time I made this drawing
W3C Overview

The sausage

In web standards folklore, the product (web standards) is jokingly called “the sausage.” (That’s one of the reasons we made black aprons with a white embroidered W3C icon on the front, as a gift to our Members and group participants when a big meeting took place in Lyon, the capital of French cuisine.)

Since 1994, W3C Members have produced 454 standards. The most well-known are HTML and CSS because they have been so core to the web for so long. But in recent years, in particular since the Covid-19 pandemic, we’ve heard a lot about WebRTC, which turns terminals and devices into communication tools by enabling real-time audio and video. Other well-known standards include XML, which powers vast data systems on the web; WebAuthn, which significantly improves security when interacting with sites; and the Web Content Accessibility Guidelines, which put web accessibility to the fore and are critical to making the web available to people of all abilities and disabilities.

The sausage factory

The day-to-day work we do is really about setting the stage to bring various groups together, in parallel, to make progress on nearly 400 specifications (at the moment), developed in over 50 different groups.

There are 2,000 participants from W3C Members in those groups, and over 13,000 participants in the public groups that anyone can create and join, where specifications are typically socialized and incubated.

There are about 50 people on the W3C staff, a quarter of whom dedicate time as helpers: they advise on the work and the technologies, and ensure smooth “travel” along the Recommendation track for groups advancing web specifications through the W3C Process (the steps a specification must go through).

Graphic showing a stick figure with 16 arms and smaller drawings of a stick figure at a computer, a stick figure talking to people, and a stick figure next to documents. The graphic lists the different roles: super interface, representing W3C in groups, participation and contribution, technical expertise, mastering the process, creating groups and managing them, liaising with other technical groups, seeking consensus.
Role of the W3C staff in work groups

The rest of the staff work on setting and tracking strategy for the technical work; on the technical soundness and integrity of the work as a whole; on meeting the particular needs of industries that rely on or leverage the web; on upholding the values that drive us: accessibility, internationalization, privacy and security; and finally on recruiting Members, marketing and communications (that’s where I fit!), running events where the working groups meet, and general administrative support.

Graphic with stick figures representing Tim Berners-Lee, the CEO and the team, and the areas of help: support, strategy, architecture & technology, industry, project.
W3C team

Why does it work?

Two of W3C’s unique strengths are our proven process, which is optimized to seek consensus and aim for quality work, and our ground-breaking Patent Policy, whose royalty-free commitments boost broad adoption: W3C standards may be used by any corporation, by anyone, at no cost. If they were not free, developers would ignore them.

Graphic showing the steps from an idea to a web standard
From an idea to a standard

There are other strengths, but in the interest of time I’ll stop at the top two. There are countless stories and many other facets; those will be for another time.

Sorry, it turned out to be a bit long because it’s hard to do a quick intro; there is so much work. If you’re still with me (hi!), did you learn anything from this post?

My other website behind the curtain

I’ve been editing the W3C website for a couple of decades now (gasp!), and in leading its redesign from the 2008 design, I am learning an astounding number of new things about it! Here are some of the things I now know.

Illustration of a spotlight lighting a man running, graphs and a book

Spotlight on the W3C website

In the 21 years I’ve been with the W3C, I remember only 3 different designs; the current one dates from a decade ago. Redesigning our website is crucial to improving the overall experience of those who depend on our web standards work.

The website is managed by W3C itself and has been up for nearly three decades. It currently contains over 2 million web pages. They’re static HTML, built in Perl or PHP, served from WordPress, or custom-built using Symfony.

Illustration showing a woman at her computer leaning against stacked objects adorned with a gear

Tech stack summary

  • Debian Linux
  • Apache for serving the static content
  • MySQL for database storage
  • Varnish HTTP Cache for full-page caching
  • HAProxy for load balancing
  • Over 3,700 Apache .htaccess files with different rewrite rules
Illustration showing hands at a keyboard in front of a screen

Hosting & content

In a large-scale hosting setup, around 100 servers run Debian Linux on OpenStack, of which 20 to 30 are related to the primary website.

Web content is stored mostly in CVS and databases via CMS tools (WordPress, Symfony), and secondarily in GitLab and GitHub.

Most content is managed as static HTML, edited locally (e.g. in emacs, vi or BlueGriffon) and committed into CVS repositories using CVS clients, the terminal, HTTP PUT or WebDAV. Other content is generated dynamically using Symfony, or statically via makefiles, XML and XSLT.
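
To give a flavour of the static XML + XSLT route, here is a minimal sketch using Python’s lxml library. The file names are hypothetical, and the real builds are driven by makefiles and an XSLT processor rather than a script like this.

```python
# Minimal sketch of turning an XML source into HTML with an XSLT stylesheet,
# similar in spirit to the makefile-driven XML + XSLT builds mentioned above.
# File names are hypothetical; the real pipeline uses its own sources and tools.
from lxml import etree  # third-party: pip install lxml

def render(xml_path: str, xslt_path: str, out_path: str) -> None:
    """Apply the XSLT stylesheet to the XML document and write the HTML result."""
    source = etree.parse(xml_path)
    transform = etree.XSLT(etree.parse(xslt_path))
    result = transform(source)
    result.write(out_path, pretty_print=True, method="html", encoding="utf-8")

if __name__ == "__main__":
    render("news.xml", "news-to-html.xsl", "news.html")
```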

25 instances of WordPress power the W3C Blog (over 950 posts) and W3C News (over 4,200 items), but also our Talks, working group blogs, a test site, and W3C Community and Business Groups.

Illustration of an alien beamed by a UFO

The W3C Homepage

The current homepage of the W3C website is a mix of HTML snippets which usually appear elsewhere on the W3C site, generated via XML, XSLT, PHP and other tools:

  • The News items are read from WordPress
    • The “homepage news” category determines what to show on the W3C homepage; we typically show up to 9 entries
    • The “top story” category determines which news item is expanded on the W3C homepage; we prefer to feature one, but have at times shown two or more
  • The right-hand side shows the last three posts from the W3C Blog
  • W3C Member Testimonials rotate from a database
  • The Events and Talks are shown from a Symfony app and WordPress respectively
  • The search bar links to an external DuckDuckGo search (which we chose for its good reputation on data privacy)
  • The rest is static

Markup errors in any of the source files will likely “break” the homepage. On average, I break the homepage 10% of the time!
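
For illustration only, here is a minimal sketch of how the “homepage news” selection described above could be pulled from a WordPress instance over its standard REST API. The base URL and category slug are assumptions, and the actual homepage is assembled with XML, XSLT and PHP rather than a script like this.

```python
# Sketch: fetch up to 9 posts from a hypothetical WordPress "homepage news"
# category via the standard WordPress REST API. Endpoint and slug are assumed.
import requests

WP_BASE = "https://example.org/news/wp-json/wp/v2"  # hypothetical base URL

def category_id(slug: str) -> int:
    """Look up a WordPress category ID by its slug."""
    resp = requests.get(f"{WP_BASE}/categories", params={"slug": slug}, timeout=10)
    resp.raise_for_status()
    return resp.json()[0]["id"]

def homepage_news(limit: int = 9) -> list[dict]:
    """Return title and link for up to `limit` posts in the homepage news category."""
    cat = category_id("homepage-news")  # hypothetical slug
    resp = requests.get(
        f"{WP_BASE}/posts",
        params={"categories": cat, "per_page": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return [{"title": p["title"]["rendered"], "link": p["link"]} for p in resp.json()]

if __name__ == "__main__":
    for item in homepage_news():
        print(f"- {item['title']} ({item['link']})")
```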

Goodbye Facebook; Hello open Web

I grew weary of Facebook a long time ago. Yet I was drawn to it all the while. There’s one thing they got right: showing me snippets of the lives of family and friends by erasing borders, overcoming distance and time zones. That is what I’ll miss: its unique ability to show me, at my own pace, glimpses that are valuable, endearing, funny.

But I grew wary of it too, because after I searched for tutorials on alcohol ink techniques in my smartphone’s browser, the Facebook app immediately suggested I join a few groups on the subject. Whether this was relevant or useful is beside the point. The Facebook app has no business spying on my browser’s history.

Screenshot of the Facebook delete page showing the pop-in to permanently delete the account

So I waved goodbye to Facebook’s intrusive practices a few days ago. So long, daily dose of comfort and social peep show.

It may take a bit of effort to write on one’s own blog or maintain a website, and probably a massive one for those unfamiliar with the open Web to open the walled garden’s door, explore the Web, and use it.

Someone lamented that they would miss seeing my drawings. But Facebook was just one more space where I shared them (a space of low-resolution images), simply because there is a world of apps on smartphones and a population of app users who find it convenient to be fed those.

My drawings go to my blog, in high resolution. My blog has a syndication feed, which means that any update to my blog is signaled, and any feed aggregator can pick up that signal and relay it. This is the principle behind RSS (Really Simple Syndication).
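
As a small illustration of that principle, here is a minimal sketch of what a feed reader does with such a feed, using Python’s feedparser library. The feed URL is a placeholder, not my blog’s actual address.

```python
# Sketch of a feed aggregator's core loop: poll an RSS/Atom feed and list
# the latest entries. The feed URL below is a placeholder.
import feedparser  # third-party: pip install feedparser

FEED_URL = "https://example.org/blog/feed/"  # hypothetical feed URL

def latest_entries(url: str, limit: int = 5):
    """Parse the feed and yield (title, link, published) for recent entries."""
    feed = feedparser.parse(url)
    for entry in feed.entries[:limit]:
        yield entry.get("title"), entry.get("link"), entry.get("published", "")

if __name__ == "__main__":
    for title, link, published in latest_entries(FEED_URL):
        print(f"{published}  {title}\n    {link}")
```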

You can read more in a recent article at Wired.