Datafication, Phantasmagoria of the 21st Century

Category: Digital Ecology

Datafied. A Critical Exploration of the Production of Knowledge in the Age of Datafication

This is the abstract of my PhD thesis, submitted in August 2022.

As qualitative aspects of life become increasingly subjected to the extractive processes of datafication, this theoretical research offers an in-depth analysis of how these technologies skew the relationship between tacit and datafied ways of knowing. Given the role tacit knowledge plays in the design process, this research seeks to illuminate how technologies of datafication are impacting designerly ways of knowing and what design can do to recalibrate this imbalance. In particular, this thesis is predicated on four interrelated objectives: (1) to understand how the shift toward the technologies of datafication has created an overreliance on datafied (i.e., explicit) knowledge; (2) to comprehend how tacit knowledge (i.e., designerly ways of knowing) is impacted by this increased reliance; (3) to critically explore technologies of datafication through the lens of Walter Benjamin’s work on the phantasmagoria of modernity; and (4) to discover what design can do to safeguard, protect and revive the production of tacit knowledge in a world increasingly dominated by datafication.

To bring greater awareness to what counts as valid knowledge today, this research begins by identifying the principles that define tacit knowledge and datafied ways of knowing. By differentiating these two processes of knowledge creation, this thesis offers a foundation for understanding how datafication not only augments how we know things, but also actively directs and dominates what we know. The research goes on to examine how this unchecked faith in datafication has led to a kind of 21st-century phantasmagoria, reinforcing the wholesale belief that technology can be used to solve some of the most perplexing problems we face today. As a result, more tacit processes of knowledge creation are increasingly being overlooked and side-lined. The discussion concludes by offering insights into how the discipline of design is uniquely situated to create a more regenerative relationship with technology, one that supports and honours the unique contributions of designerly ways of knowing.

Fundamental principles framing Grounded Theory are used as a methodological guide for structuring this theoretical research. Given the unprecedented and rapid rate at which technology is being integrated into modern life, this methodological framework provided the flexibility needed to accommodate the evolving contours of the study while also providing the systematic rigour necessary to sustain the integrity of this PhD.

Keywords: datafication, tacit knowledge, phantasmagoria, regeneration, ecology of knowledge

Chris Jones – Designing Designing

A few words from John Thackara (who wrote the afterword to Chris Jones’ “Designing Designing”) on Jones’ mission and philosophy (the full post can be found on Thackara’s blog).

As a kind of industrial gamekeeper turned poacher, Jones went on to warn about the potential dangers of the digital revolution unleashed by Claude Shannon.

Computers were so damned good at the manipulation of symbols, he cautioned, that there would be immense pressure on scientists to reduce all human knowledge and experience to abstract form.

Technology-driven innovation, Jones foresaw, would undervalue the knowledge and experience that human beings have by virtue of having bodies, interacting with the physical world, and being trained into a culture. 

Jones coined the word ‘softecnica’ to describe ‘a coming of live objects, a new presence in the world’. He was among the first to anticipate that software, and so-called intelligent objects, were not just neutral tools. They would compel us to adapt continuously to fit new ways of living. 

In time Jones turned away from the search for systematic design methods. He realized that academic attempts to systematize design led, in practice, to the separation of reason from intuition and failed to embody experience in the design process.

All of the above rings very true today: the reductionist approach to knowledge, the general disdain for the richness of human knowledge and experience, the widespread contempt for embodied knowledge, the radical separation of reason and intuition, the hidden shaping of a new belief system around the superiority of rational machines, the invisible but violent bending of human-friendly ways of living to fit machine-dominated new ways of living.

Regulation & Regeneration

In the context of an economic environment deficient in self-regulation (also called wisdom), is there a space for outer regulation in Regenerative spaces?

This question was triggered by Musk’s purchase of Twitter. Since, in regenerative communities, we use metaphors from nature, the takeover of a global platform that carries a massive chunk of the global public debate, whose algorithms are opaque and which is known to influence the results of elections, by a self-professed libertarian billionaire who has clearly indicated that he wants to restore “free speech” (whatever that means) on the platform and who is known to use it for self-serving purposes, is a bit like a human-produced toxic algal bloom spreading over living water habitats and killing all life. Even that hardly seems enough to qualify the initiative appropriately.

So, in this context, I was wondering about regulation. Living systems, when left to their own devices, self-regulate. This is what I would call “inner regulation”, or in human terms, “wisdom”. I don’t think it would be overly pessimistic to say that inner regulation is found in (very) limited quantity right now in our social and economic environments.

So what about outer regulation? There are many ways outer regulation can function, from traditional prescriptive approaches to softer ones that rely on sway and incentives. Design as a discipline employs the latter all the time. I think this is an important discussion to have in the context of a community focused on Regenerative Economics, because many projects start with the best of intentions and then fall prey to unintended consequences.

I am also interested to hear from those of us who have direct experience in designing regulation frameworks for the complex systems that are online communities sharing the same purpose. Do we combine incentives for inner regulation with outer regulation, and if so, how? Do we leave it to the invisible hand? I would love to hear different voices chime in on this topic.

The Dark Side of AI

I came across this article in the Financial Times yesterday (March 19, 2022) on the dark sides of using AI to design drugs, by Anjana Ahuja.

Scientists at Collaborations Pharmaceuticals, a North Carolina company using AI to create drugs for rare diseases, experimented with how easy it would be to create rogue molecules for chemical warfare.

As it happens, the answer is VERY EASY! The model took only 6 hours to spit out a set of 40,000 destructive molecules.

And it’s not surprising. As the French cultural theorist and philosopher Paul Virilio once said, “when you invent the ship, you invent the shipwreck”. Just as social platforms can be used both to connect with long-lost friends AND to spread fake news, AI can be used both to save lives AND to destroy them.

This is a chilling reminder about the destructive potential of increasingly sophisticated technologies that our civilisation has developed but may not have the wisdom to use well.

Web 3.0 Hype & Healthy Critical Thinking

In an article published by CIGI (the Centre for International Governance Innovation) on January 14, 2022, the AI ethics professor and researcher Elizabeth M. Renieris reminds us that “without a critical perspective, familiar harms will not only be replicated; they will be exacerbated.”

https://www.cigionline.org/articles/amid-the-hype-over-web3-informed-skepticism-is-critical/

“Learning from the past and applying those lessons requires a critical perspective. Without such perspective, proposed “solutions” can only be cosmetic, papering over root causes. Computational or technological attempts to “decentralize” power without addressing the social, political and economic enablers of concentrated power and wealth, such as decades of neo-liberal policies predicated on the illusion of individual choice and control, are bound to fail.”

This Is No Way To Be Human (The Atlantic)

This links to Alan Lightman’s article in The Atlantic from January 2022. An excerpt:

“For more than 99 percent of our history as humans, we lived close to nature. We lived in the open. The first house with a roof appeared only 5,000 years ago. Television less than a century ago. Internet-connected phones only about 30 years ago. Over the large majority of our 2-million-year evolutionary history, Darwinian forces molded our brains to find kinship with nature, what the biologist E. O. Wilson called “biophilia.” That kinship had survival benefit. Habitat selection, foraging for food, reading the signs of upcoming storms all would have favored a deep affinity with nature.

Social psychologists have documented that such sensitivities are still present in our psyches today. Further psychological and physiological studies have shown that more time spent in nature increases happiness and well-being; less time increases stress and anxiety. Thus, there is a profound disconnect between the natureless environment we have created and the “natural” affections of our minds. In effect, we live in two worlds: a world in close contact with nature, buried deep in our ancestral brains, and a natureless world of the digital screen and constructed environment, fashioned from our technology and intellectual achievements. We are at war with our ancestral selves. The cost of this war is only now becoming apparent.”

https://www.theatlantic.com/technology/archive/2022/01/machine-garden-natureless-world/621268/

And when we look at the future digital technologies being developed at the moment, what do we see?

Web 3.0 Data Ownership, Solution to the Excesses of the Data Economy?

There is much hope at the moment that Web 3.0 will provide solutions to the problems brought about by the data economy (by the way, I just realised that with one sleight of hand, hope becomes hype, and vice versa). Web 3.0 proposes that users own their own data, instead of leaving it to other actors to use freely. The reasoning is that users can then decide what they want to do with that data, and who they want to release it to and when. We often hear the expression “paradigm shift” when it comes to Web 3.0. Is it one? It proposes to solve the issues of surveillance capitalism by shifting data ownership from companies to the users themselves (i.e., users own their own data and the problem will be solved). But are we in fact trying to solve problems with the same tools that created them in the first place?

Karl Polanyi in The Great Transformation (1944) explored how capitalism creates fictitious commodities. Capitalism commodifies nature into land, human activity into work, and exchange into money. Nature, life and exchange are not tradable. Land, work and money are. The word “fictitious” is important here. It suggests that commodification creates tradable products out of something that is not tradable. In the 19th and 20th centuries, industrial capitalism commodified nature, the environment we live in. In the 21st century, surveillance capitalism is commodifying human life.

We have the environmental problems we have today because, fundamentally, we see nature as an object to be grabbed, sold and exploited. Similarly, human life today is grabbed (i.e., datafied), traded or used as raw material to create valuable behavioural products that are traded on behavioural markets for profit (see Zuboff, “The Age of Surveillance Capitalism”). The concept of ownership and property is solidly anchored in the capitalist idea that everything out there can be owned and turned into a tradable commodity. First, during the industrial revolution, it was nature that was divided, sold and exploited for its resources. Today, under surveillance capitalism, it is human life. Two different objects, but the same process. When we talk about a paradigm shift, we need to explore whether the avenues we are embarking on right now (such as ownership of one’s own data) truly represent a paradigmatic shift or whether we need to review our assumptions.

Furthermore, users’ ownership of their own data is a neat idea in principle, but its application raises many complex questions because real life is not neat. Ownership of data is not a clear-cut category. The concept of ownership is structured (a yes-or-no proposition); life is not. Ownership does not necessarily provide the type of structure that accurately reflects life. A large dimension of life happens outside of this paradigm.

First, having ownership of our own data does not mean that we will have the wisdom to use it well. For the past 20 years we have been trained and conditioned to value convenience above all else when it comes to using digital technologies. But convenience is not the value most conducive to sustainability. It is more convenient to throw garbage out of the window than to recycle, but in doing so we create an ecological crisis. By choosing convenience in our digital lives, we also create an ecological crisis. How many of us prefer to visit a website rather than use an app (apps have many more prying capacities)? How many of us take the time to change our phone settings to increase privacy, to review them regularly, or to delete apps downloaded once and never used again? How many of us read through privacy policies? How many of us just click yes out of convenience when a website asks whether to accept all cookies (instead of spending a few minutes customising them)? Not that many. So when given the choice between releasing all data or customising, how many people would actually take the time to choose which data to release and which not to?

Also, releasing one’s data “according to what’s needed” presupposes that we understand very clearly how it is being used, what the consequences of releasing it are, and what is truly needed and what is not. Say I own the data I produce and I can choose to release it or not. That does not solve the issue of what is done with aggregated data once it is released. If that data is transformed slightly in one way or another, is it still mine? Or can someone else trade it or turn it into a product to be traded?

Then there are tricky questions that pertain to the ambiguous nature of the digital terrain. How do we treat ownership of metadata (the data about data)? How do we treat data that is not about a person but about a group of people, or a community? Who owns what in this case, and who decides? And who decides who decides? Who owns data that is recorded by someone but includes someone else (for example, police patrol recordings, or a photo of myself that I post on Instagram in which my friends also appear)? And what happens to the zillions of terabytes of data that are already “out there”, irretrievable? How do we put such infrastructures in place against the backdrop of a probable huge pushback from those who benefit from the data economy? And how do we make sure that the data released is used to perform what it is supposed to perform and not used in another way? Blockchain mechanisms promise absolute certainty and privacy, but this also presupposes that absolutely everything happens on the blockchain.

How, as a caring society, do we protect the vulnerable? How about children? Or those who are not digitally literate (probably 99% of the world, because knowing how to use a smartphone does not equate to digital and technical literacy and awareness)? How about those who live at the fringe of society or at the fringe of the power centres of the world? It is all very well to say that all our data is in a little box on our phone, but that presupposes that everyone has physical access to it, and the means to get the phone and the little box. Do we think about this from the point of view of an Indian mother in a village, or a Mexican child, or anyone who is not part of the 1%, or are we (AGAIN) going to develop the next generation of technology through the eyes of a white male from a developed country?

Then, there is the essential question of translation. The digital is a translation of real life; it is not real life itself. It is just a map. From the beginning of AI, data science has been trying to create a language that could adequately reflect life, but so far it has not succeeded. For historical and technical reasons, the digital language used today has been developed along the lines of information theory, which is based on Shannon’s linear communication model. Humans, and life in general, do not communicate like this. The digital has not been able to domesticate and integrate tacit knowledge. This is visible when data science uses proxies for aspects of life that cannot be turned into discrete data, such as using a US zip code as a proxy for wealth or education.
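To make the proxy point concrete, here is a minimal, purely illustrative sketch. The zip codes, income figures and scoring rule are all invented for the example; no real system is being described, only the general pattern of standing a coarse, datafiable category in for a life that cannot be datafied.

```typescript
// Illustrative only: invented zip codes, income figures and scoring rule.
// The point is that the model never "knows" the person; it only knows a proxy.

// A lookup table standing in for census-style aggregate data.
const medianIncomeByZip: Record<string, number> = {
  "10025": 92_000, // invented figure
  "48505": 31_000, // invented figure
};

interface Applicant {
  name: string;
  zip: string;
  // Everything tacit about this person (circumstances, history, character)
  // has no field here: it cannot be turned into discrete data, so for the
  // model it simply does not exist.
}

// A toy "creditworthiness" score driven entirely by the zip-code proxy.
function proxyScore(applicant: Applicant): number {
  const income = medianIncomeByZip[applicant.zip] ?? 40_000; // default guess
  return Math.min(100, Math.round(income / 1_000));
}

// Two people with very different lives receive scores that reflect only
// where they happen to live.
console.log(proxyScore({ name: "A", zip: "10025" })); // 92
console.log(proxyScore({ name: "B", zip: "48505" })); // 31
```

The person never appears in the computation; only the proxy does. Whatever cannot be turned into a field on the record does not exist for the model.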

Furthermore, data is not information. Data is a way to classify. Classifications and standards are imbricated in our lives. They operate invisibly, yet they create social order (Bowker & Star). Despite all the hype (and the hope) about the digital revolution, the digital is still trying to fit the messiness of life into the clear-cut categories of the linear world of the industrial revolution. Data creates classifications, but data is not information. The enterprise of datafication (i.e., turning human life into discrete, computer-ready data) is essentially a reductionist enterprise; it does not create real knowledge but, as Bernard Stiegler once put it, “stupid” knowledge. This is the issue with algorithms today. Ownership of data does not address the fundamental fact that datafication creates a world that is not fit for humans, because it denies and destroys that which makes us human, i.e., tacit knowing.

Finally, as mentioned above, datafication is a process of commodification of human life. For all the benefits of Web 3.0, the decentralised blockchain-based web anchors this process even more strongly into the fabric of society.

Something Is Broken… (from The Markup)

Nonprofit Websites Are Riddled With Ad Trackers

Such organizations often deal in sensitive issues, like mental health, addiction, and reproductive rights—and many are feeding data about website visitors to corporations

By: Alfred Ng and Maddy Varner

Originally published on themarkup.org

Last year, nearly 200 million people visited the website of Planned Parenthood, a nonprofit that many people turn to for very private matters like sex education, access to contraceptives, and access to abortions. What those visitors may not have known is that as soon as they opened plannedparenthood.org, some two dozen ad trackers embedded in the site alerted a slew of companies whose business is not reproductive freedom but gathering, selling, and using browsing data. 

The Markup ran Planned Parenthood’s website through our Blacklight tool and found 28 ad trackers and 40 third-party cookies tracking visitors, in addition to so-called “session recorders” that could be capturing the mouse movements and keystrokes of people visiting the homepage in search of things like information on contraceptives and abortions. The site also contained trackers that tell Facebook and Google if users visited the site.

The Markup’s scan found Planned Parenthood’s site communicating with companies like Oracle, Verizon, LiveRamp, TowerData, and Quantcast—some of which have made a business of assembling and selling access to masses of digital data about people’s habits. 

Katie Skibinski, vice president for digital products at Planned Parenthood, said the data collected on its website is “used only for internal purposes by Planned Parenthood and our affiliates,” and the company doesn’t “sell” data to third parties. 

“While we aim to use data to learn how we can be most impactful, at Planned Parenthood, data-driven learning is always thoughtfully executed with respect for patient and user privacy,” Skibinski said. “This means using analytics platforms to collect aggregate data to gather insights and identify trends that help us improve our digital programs.” 

Skibinski did not dispute that the organization shares data with third parties, including data brokers. 

A Blacklight scan of Planned Parenthood Gulf Coast—a localized website specifically for people in the Gulf region, including Texas, where abortion has been essentially outlawed—churned up similar results. 

Planned Parenthood is not alone when it comes to nonprofits, some operating in sensitive areas like mental health and addiction, gathering and sharing data on website visitors.

Using our Blacklight tool, The Markup scanned more than 23,000 websites of nonprofit organizations, including those belonging to abortion providers and nonprofit addiction treatment centers. The Markup used the IRS’s nonprofit master file to identify nonprofits that have filed a tax return since 2019 and that the agency categorizes as focusing on areas like mental health and crisis intervention, civil rights, and medical research. We then examined each nonprofit’s website as publicly listed in GuideStar. We found that about 86 percent of them had third-party cookies or tracking network requests. By comparison, when The Markup did a survey of the top 80,000 websites in 2020, we found 87 percent used some type of third-party tracking. 

About 11 percent of the 23,856 nonprofit websites we scanned had a Facebook pixel embedded, while 18 percent used the Google Analytics “Remarketing Audiences” feature. 

The Markup found that 439 of the nonprofit websites loaded scripts called session recorders, which can monitor visitors’ clicks and keystrokes. Eighty-nine of those were for websites that belonged to nonprofits that the IRS categorizes as primarily focusing on mental health and crisis intervention issues.

“As a user of this website, by sharing your information with them, you probably don’t assume that this sensitive information is shared with third parties and definitely don’t assume that your keystrokes are recorded,” Gunes Acar, a privacy researcher who copublished a 2017 study on session recorders, said. “The more sensitive the website is, the more worried I am.” 
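For readers unfamiliar with the mechanism, the sketch below shows, in deliberately simplified form, what a session-recording script does: it hooks input events on the page and periodically ships them to a third-party server. The endpoint URL, payload shape and timing are invented for illustration; commercial session recorders are far more sophisticated, but the principle is the same.

```typescript
// Simplified, hypothetical sketch of a session-recording snippet.
// The collection endpoint and payload format are invented for illustration.

type RecordedEvent =
  | { kind: "key"; key: string; field: string; t: number }
  | { kind: "move"; x: number; y: number; t: number };

const buffer: RecordedEvent[] = [];

// Capture keystrokes, including those typed into search boxes and forms.
document.addEventListener("keydown", (e) => {
  const target = e.target as HTMLElement;
  buffer.push({
    kind: "key",
    key: e.key,
    field: target.id || target.tagName,
    t: Date.now(),
  });
});

// Capture mouse movements across the page.
document.addEventListener("mousemove", (e) => {
  buffer.push({ kind: "move", x: e.clientX, y: e.clientY, t: Date.now() });
});

// Every few seconds, ship whatever has accumulated to a third-party server
// the visitor has likely never heard of.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon("https://collector.example.com/session", JSON.stringify(buffer));
  buffer.length = 0;
}, 5_000);
```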

Tracy Plevel, the vice president of development and community relations at Gateway Rehab, one of the nonprofits with session recorders on its site, said that the nonprofit uses trackers and session recorders because it needs to stay competitive with its larger, for-profit counterparts.

“As a nonprofit ourselves, we are up against for-profit providers with large advertising budgets as well as the addiction treatment brokers who grab those seeking care with similar online advertising tactics and connect them with the provider who is offering the greatest ‘sales’ compensation,” Plevel said. “Additionally we know user experience has a big impact on following through on treatment. When someone is ready to commit to treatment, we need to ensure it [is] as easy as possible for them before they get frustrated or intimidated by the process.” 

Other nonprofits had a significant number of trackers embedded on their sites as well. The Markup found 26 ad trackers and 50 third-party cookies on The Clinic at Sharma-Crawford Attorneys at Law, a Kansas City legal clinic that represents low-income people facing deportation.

Rekha Sharma-Crawford, the board president of The Clinic, wrote in an emailed statement, “We take privacy and security concerns very seriously and will continue to work with our web provider to address the issues you have identified.”

Save the Children, a humanitarian aid organization founded more than 100 years ago, had 26 ad trackers and 49 third-party cookies. March of Dimes, a nonprofit started by President Franklin D. Roosevelt that focuses on maternal and infant care, had more than 29 ad trackers on its site and 58 third-party cookies. City of Hope, a Californian cancer treatment and research center, had 25 ad trackers and 47 third-party cookies. 

Paul Butcher, assistant vice president of global digital strategy at Save the Children, said in an emailed statement that the organization “takes data protection very seriously.” Butcher also wrote that Save the Children collects some data through ad trackers “to improve user experience” and that the organization is in the process of revamping its data retention policies and recently hired a new head of data.

March of Dimes and City of Hope did not respond to requests for comment.

State-Level Privacy Laws Miss Nonprofits

While health data is governed by HIPAA, and FERPA regulates educational records, there are no federal laws governing how websites track their visitors. Recently, a few states—California, Virginia, and Colorado—have enacted consumer privacy laws that require companies to disclose their tracking practices and allow visitors to opt out of data collection. 

But nonprofits in two of those states, California and Virginia, don’t need to adhere to the regulations. 

Sen. Ron Wyden (D-OR), who has proposed his own federal privacy legislation, said that nonprofits accrue a large amount of potentially sensitive data. 

“Nonprofits store incredibly personal information about things we’re passionate about, from political causes and social views to which charitable causes we care about,” Wyden said in an emailed statement. “If a data breach reveals someone donates to a domestic violence support group or an LGBTQ rights organization or the name of their mosque, any of that information could be incredibly private.”

Nonprofit leaders, however, argue that they lack the infrastructure and funding to comply with privacy law requirements and must gather and share information on donors in order to survive. 

“One of the most substantive and impactful uses of data by nonprofits has been our fundraising,” said Shannon McCracken, the CEO of The Nonprofit Alliance, an advocacy group made up of nonprofits and businesses. “Without the ability to cost-effectively reach prospective new donors and current donors, then nonprofits can’t continue to be as impactful as they are today.” 

But purposeful or not, privacy experts say, nonprofits are feeding personal information to data brokers and tech giants like Facebook and Google. 

“A nonprofit might share your phone number and name with LiveRamp. Tomorrow, a for-profit entity can then reuse that same data to target you,” said Ashkan Soltani, a privacy expert and former chief technologist at the Federal Trade Commission. “The data flows that go into these third-party aggregators and data brokers come often from nonprofits as well.” 

Soltani, who was appointed executive director of the California Privacy Protection Agency on Oct. 4, helped draft the California Consumer Privacy Act, which was originally introduced with the nonprofit exemptions.

Many major nonprofits work with data brokers to help organize and analyze their data, Jan Masaoka, CEO of the California Association of Nonprofits, said. 

“People that have big donor lists use them extensively, pretty much all of them use one of the services,” Masaoka said. “They don’t keep it in-house, pretty much everybody keeps it with one of these services.” 

She noted that Blackbaud is a company that nonprofits often turn to. The registered data broker’s marketing material promotes a co-op database that combines donor data from more than 550 nonprofits with public information on millions of households. 

Blackbaud didn’t respond to a request for comment.

Because of a lack of funds, nonprofits also rely on third-party platforms—which also happen to be data brokers—to manage their data’s security and privacy, McCracken said. But these kinds of companies aren’t immune to cyberattacks either: Blackbaud disclosed a ransomware attack in 2020 in which hackers stole passwords, Social Security numbers, and banking information, according to a Securities and Exchange Commission filing. Hundreds of charitable organizations, schools, and hospitals were affected, along with more than 13 million people, according to the Identity Theft Resource Center. 

“They rely on this kind of problematic ecosystem to achieve their work, and as a result, they share number lists, email addresses, or browsing behavior with third-party advertising companies and subject their members to risk,” Soltani said.

The Exception

Unlike its predecessors in California and Virginia, Colorado’s privacy bill doesn’t have an exemption for nonprofits. 

In both California and Virginia, the bills’ main supporters gave nonprofits an exemption as a political maneuver. Alastair Mactaggart, a real estate developer and founder of Californians for Consumer Privacy, who was the driving force behind the California Consumer Privacy Act, said his proposal was already facing opposition from tech giants and he didn’t want a political showdown with nonprofits, too. 

“You gotta take the first step, so we figured this was the one that would be the easiest to bounce off,” Mactaggart said. “Eventually, I hope that the big nonprofits are included as well.”

David Marsden, the state senator who introduced the Virginia Consumer Data Protection Act, echoed that sentiment, reflecting that the law wasn’t perfect but still a good start.

“Does this pick up everybody that it should, or exempt everybody who needs an exemption? Probably not, but it comes pretty close,” Marsden said. “We were able, with this bill, to get it passed without people getting up and objecting to what we were trying to do.” 

Colorado state senator Robert Rodriguez, who co-sponsored the state’s privacy bill, said he didn’t include an exemption for nonprofits because he felt that any entity that had data on more than 100,000 people should have to follow privacy protections. He also didn’t understand why other states had exemptions. 

“Someone that has over 100,000 records is a good size,” he said in an email. “They should have some protections or requirements to follow.” 

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

WhatsApp Terms of Service & Privacy Policy as of March 2021

In the post “Why I Am Quitting WhatsApp – Part II” below, I mention a link to the Terms of Service and the Privacy Policy. Since those terms change over time, I enclose below an excerpt of the terms as of 27 March 2021 in PDF format.

The clause “Information We Collect” is divided into three groups:

  1. Information you provide (hinting that the other two are information that you do NOT (and may not want to) provide),
  2. Automatically collected information,
  3. Third-party information.

Please see content below.

Information You Provide

• Your Account Information. You provide your mobile phone number to create a WhatsApp account. You provide us the phone numbers in your mobile address book on a regular basis, including those of both the users of our Services and your other contacts. You confirm you are authorized to provide us such numbers. You may also add other information to your account, such as a profile name, profile picture, and status message.

• Your Messages. We do not retain your messages in the ordinary course of providing our Services to you. Once your messages (including your chats, photos, videos, voice messages, files, and share location information) are delivered, they are deleted from our servers. Your messages are stored on your own device. If a message cannot be delivered immediately (for example, if you are offline), we may keep it on our servers for up to 30 days as we try to deliver it. If a message is still undelivered after 30 days, we delete it. To improve performance and deliver media messages more efficiently, such as when many people are sharing a popular photo or video, we may retain that content on our servers for a longer period of time. We also offer end-to-end encryption for our Services, which is on by default, when you and the people with whom you message use a version of our app released after April 2, 2016. End-to-end encryption means that your messages are encrypted to protect against us and third parties from reading them.

• Your Connections. To help you organize how you communicate with others, we may create a favorites list of your contacts for you, and you can create, join, or get added to groups and broadcast lists, and such groups and lists get associated with your account information.

• Customer Support. You may provide us with information related to your use of our Services, including copies of your messages, and how to contact you so we can provide you customer support. For example, you may send us an email with information relating to our app performance or other issues.

Automatically Collected Information

• Usage and Log Information. We collect service-related, diagnostic, and performance information. This includes information about your activity (such as how you use our Services, how you interact with others using our Services, and the like), log files, and diagnostic, crash, website, and performance logs and reports.

• Transactional Information. If you pay for our Services, we may receive information and confirmations, such as payment receipts, including from app stores or other third parties processing your payment.

• Device and Connection Information. We collect device-specific information when you install, access, or use our Services. This includes information such as hardware model, operating system information, browser information, IP address, mobile network information including phone number, and device identifiers. We collect device location information if you use our location features, such as when you choose to share your location with your contacts, view locations nearby or those others have shared with you, and the like, and for diagnostics and troubleshooting purposes such as if you are having trouble with our app’s location features.

• Cookies. We use cookies to operate and provide our Services, including to provide our Services that are web-based, improve your experiences, understand how our Services are being used, and customize our Services. For example, we use cookies to provide WhatsApp for web and desktop and other web-based services. We may also use cookies to understand which of our FAQs are most popular and to show you relevant content related to our Services. Additionally, we may use cookies to remember your choices, such as your language preferences, and otherwise to customize our Services for you. Learn more about how we use cookies to provide you our Services.

• Status Information. We collect information about your online and status message changes on our Services, such as whether you are online (your “online status”), when you last used our Services (your “last seen status”), and when you last updated your status message.

Third-Party Information

• Information Others Provide About You. We receive information other people provide us, which may include information about you. For example, when other users you know use our Services, they may provide your phone number from their mobile address book (just as you may provide theirs), or they may send you a message, send messages to groups to which you belong, or call you.

• Third-Party Providers. We work with third-party providers to help us operate, provide, improve, understand, customize, support, and market our Services. For example, we work with companies to distribute our apps, provide our infrastructure, delivery, and other systems, supply map and places information, process payments, help us understand how people use our Services, and market our Services. These providers may provide us information about you in certain circumstances; for example, app stores may provide us reports to help us diagnose and fix service issues.

• Third-Party Services. We allow you to use our Services in connection with third-party services. If you use our Services with such third-party services, we may receive information about you from them; for example, if you use the WhatsApp share button on a news service to share a news article with your WhatsApp contacts, groups, or broadcast lists on our Services, or if you choose to access our Services through a mobile carrier’s or device provider’s promotion of our Services. Please note that when you use third-party services, their own terms and privacy policies will govern your use of those services.

Algorithmic Sociality

I had a discussion with a friend about cell membranes and boundaries. The discussion arose from a quote by Fritjof Capra in his course The Systems View of Life: “Boundaries in the biological realm are not boundaries of separation but boundaries of identity”. My friend’s question was: ‘What is the function of a membrane in social dynamics?’

This discussion about social membranes creating social identity reminds me of the phenomenon of “filter bubbles” created by the algorithms of social media platforms (for those unfamiliar with the concept, Eli Pariser’s TED talk is a good entry point). Basically, by editing what information we get access to (through search or in our newsfeed), online algorithms create a membrane around us that narrowly defines our identity, and this is reinforced by constantly feeding us more of the same.
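As a rough illustration of that reinforcement loop, here is a toy sketch. The topics, items and scoring rule are invented, and real feed-ranking systems are vastly more complex, but the self-reinforcing structure is the point:

```typescript
// Invented, toy model of an engagement-driven feed: the more you click on a
// topic, the more of that topic you are shown, so the "membrane" tightens.

interface Item {
  id: number;
  topic: string;
}

const engagementByTopic = new Map<string, number>();

// Record a click as one more unit of engagement with the item's topic.
function recordClick(item: Item): void {
  engagementByTopic.set(item.topic, (engagementByTopic.get(item.topic) ?? 0) + 1);
}

// Rank candidate items purely by past engagement with their topic: nothing
// about accuracy, diversity or importance enters the score.
function rankFeed(candidates: Item[]): Item[] {
  return [...candidates].sort(
    (a, b) =>
      (engagementByTopic.get(b.topic) ?? 0) - (engagementByTopic.get(a.topic) ?? 0),
  );
}

const candidates: Item[] = [
  { id: 1, topic: "kitchen appliances" },
  { id: 2, topic: "local politics" },
  { id: 3, topic: "kitchen appliances" },
];

// After a couple of clicks on one topic, that topic crowds out everything else.
recordClick({ id: 0, topic: "kitchen appliances" });
recordClick({ id: 0, topic: "kitchen appliances" });
console.log(rankFeed(candidates).map((i) => i.topic));
// -> ["kitchen appliances", "kitchen appliances", "local politics"]
```

Once one topic dominates the engagement history, it dominates the feed, which in turn generates more engagement with that topic: the membrane tightens with every click.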


A few years ago, in 2017, I did an explorative and investigative study on Facebook to interrogate the algorithmic black box. I created two fake profiles, Samsara Marks (female) and Bertrand Wooster (male). Samsara’s profile was richly fleshed out (a highly educated professional with feminist interests), but I gave FB only minimal info about Bertrand: his age (late 40s), a (random) Hong Kong mobile number, his residency (Hong Kong) and his country of citizenship (UK) (and of course FB had pieces of digital info even though I created the profile behind the university’s firewall, from a random computer at uni).


With this limited info, FB suggested 150 friends for Bertrand at the first login (interestingly, most of them outside Hong Kong). I accepted all FB suggestions (and subsequent suggestions as well). I am not going to bore you with details, but to make a long story short, Bertrand found himself transported to the bowels of FB: explicit sexual content, prostitution and what I suspected could be pedophile networks, and “how to” videos on making weapons to shoot down missiles (I am not making this up), amongst others. Friendless but highly accomplished Samsara, on the other hand, kept receiving ads for kitchen appliances and dresses.


My purpose in posting this is to bring attention to the active role of social platforms in shaping sociability and creating social membranes around us. One of the conclusions of the experiment was that, once the algorithms have established the membrane, it takes conscious effort, extreme determination and a very consistent strategy to change what the membrane lets in and out.
