Datafication & Technology

Datafication, Phantasmagoria of the 21st Century


Web3 Analysis by Moxie Marlinspike

A must-read blog post by Moxie Marlinspike, founder of Signal, sharing his thoughts on Web3.

The basic argument: although the Web3 concept promises to decentralise the internet away from platforms, in practice it has simply reverted to Web2 (a centralised internet) with only superficial trappings of decentralisation.

His points:
1) Blockchain and “crypto” (as it’s now commonly used to mean blockchain/cryptocurrency rather than the original “cryptography”, aka encryption) are discussed in terms of “distributed”, “trustless” and “leaderless”. One might think this means that every USER involved is a peer in the chain. But practically it’s not about USERS, it’s about SERVERS. The distributed nature is based on SERVERS, not on what Moxie calls “clients” (aka YOUR computer, YOUR phone, YOUR device). So the blockchain concept is supposed to follow distributed, trustless and leaderless methods between SERVERS. The problem is that your phone is not a server. Your computer is not a server. Your devices are not servers. All of your devices are END-USER devices. Very few people will actually set up, run and maintain their own server: it’s difficult, requires technical knowledge, and is time-consuming and costly.

So what actually ends up happening is that the whole interface of Web3 becomes: Blockchain <-> Servers <-> End-user client devices. And the problem with Web3 so far is that all end-user interaction with the blockchain has consolidated onto very few servers, i.e., a return to the phenomenon of platformisation (which describes how Web2 platforms spread their APIs throughout the entire web in the 2010s in order to centralise data back to their own servers). As of now, most Web3 “decentralised apps” interact with the blockchain through two companies, Infura and Alchemy, which run the servers sitting between the blockchain and end-user client devices. So if you do something with your cryptocurrency wallet in MetaMask, MetaMask basically talks to Infura or Alchemy, who then talk to the actual blockchain.

His two sub-complaints to this are:
A) Nobody is verifying the authenticity of the information that comes from Infura / Alchemy. There is currently no system in place on the client side (i.e., in MetaMask on the user’s device) to ensure that what Infura / Alchemy return to the end-user is actually what is on the blockchain. Theoretically, if you have 5 ETH in your wallet on the blockchain and you load up MetaMask to query your balance, MetaMask contacts Infura / Alchemy, and Infura / Alchemy could respond that you have 0.1 ETH. MetaMask won’t verify whether that’s true; the response is simply taken at its word (a sketch of this blind trust follows this list).
B) Privacy concerns with routing all requests via Infura / Alchemy. Moxie’s example: imagine if every single web request you made were first routed through Google before being passed on to your actual intended destination.
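To make the shape of the problem concrete, here is a minimal sketch (in Python; the provider URL is hypothetical, though eth_getBalance is a standard Ethereum JSON-RPC method) of what a wallet-style client actually does: it POSTs a question to a hosted server and takes whatever comes back on faith.

    import requests

    # Hypothetical hosted RPC endpoint, standing in for an Infura/Alchemy-style URL.
    RPC_URL = "https://mainnet.example-provider.io/v1/YOUR-API-KEY"

    def get_balance(address: str) -> int:
        """Ask the hosted provider for an address's balance, in wei.

        The client simply trusts the JSON it gets back: nothing here checks the
        answer against the chain itself (no light client, no Merkle proof),
        which is exactly sub-complaint (A) above.
        """
        payload = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "eth_getBalance",        # standard Ethereum JSON-RPC method
            "params": [address, "latest"],
        }
        response = requests.post(RPC_URL, json=payload, timeout=10)
        response.raise_for_status()
        return int(response.json()["result"], 16)  # hex string -> wei, taken on faith

    print(get_balance("0x0000000000000000000000000000000000000000"))

Every fact the client “knows” about the chain arrives through that one HTTP endpoint, which is what makes both (A) the verification gap and (B) the privacy leak possible.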

2) He gives the example of how NFTs are in fact just URLs stored on the blockchain, and these URLs point to servers hosting the actual content. So when you buy an NFT, you only own the URL on the blockchain that DIRECTS to the artwork, NOT the “artwork” itself. He did an exercise where he made an NFT that looks like a picture when viewed through OpenSea, but looks like a poo emoji when accessed via someone’s crypto wallet. Ultimately, the server hosting the image (the one the URL on the blockchain points to) is in control of the artwork.
Even worse, his NFT ended up being deleted by OpenSea. But somehow his NFT ALSO stopped appearing in his wallet. How is this possible? Even if OpenSea deletes the NFT from their website, the NFT should still be on the blockchain, right? Why doesn’t it still show up in his wallet? Because of this centralisation of supposedly “de-centralised” apps, his wallet communicates not with the blockchain directly but through a few centralised platforms (one of which is OpenSea). So when OpenSea deleted his NFT, his wallet also stopped showing it. It doesn’t matter that the NFT still belongs to him on the blockchain if the whole end-user system is totally divorced from the blockchain and reliant instead on the middle servers.
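His experiment is easy to picture in code. Below is a toy sketch in Python (Flask); the host and file names are hypothetical, and since Moxie’s actual server logic wasn’t published, switching on the User-Agent header is an assumption. The point is structural: the blockchain stores only a URL pointing at a server like this one, so the server’s operator decides what the “artwork” is for each viewer.

    # A toy metadata server in the spirit of Moxie's experiment. The chain stores
    # only a URL pointing here, so whoever runs this server decides what the NFT is.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/nft/metadata.json")
    def metadata():
        # Serve different content depending on who is asking: a marketplace
        # crawler and a wallet app identify themselves differently, so the same
        # token can look like art on OpenSea and a poo emoji in a wallet.
        ua = request.headers.get("User-Agent", "").lower()
        image = "artwork.png" if "opensea" in ua else "poo_emoji.png"
        return {
            "name": "Mutable NFT",
            "image": f"https://nft.example.com/images/{image}",
        }

    if __name__ == "__main__":
        app.run()

And whoever can edit or delete what this server returns controls the NFT’s appearance, regardless of what the chain records.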

3) Finally, he argues that Web3 as we know it now is really just Web2 with some fancy “Web3” window dressing, and the window dressing actually makes the whole system run worse than if it had stuck to pure Web2. So why force the window dressing? Simply to sell the whole thing as a next-generation Web3 package, as part of what he calls a gold-rush frenzy over Web3.

Raising Consciousness & Spiral Dynamics®

Sunday Morning Musings on Raising Consciousness and Spiral Dynamics®.

I always have a problem with the term “raising consciousness”; first, because there’s something subtly arrogant and hubristic about it: it presupposes that A) I, as a person, know exactly at what level everybody else is (rather unlikely), and that B) some are below me and need to be lifted to my level. 😅😔 This is the vertical hierarchy of values underlying the mentality of colonisation, eugenics and commodification. God at the top, me and those like me just below, and the rest, needing to be enlightened (or exploited), below me.

But also because it implies a view of the world imbued with the idea of infinite progress. This idea is so deeply pervasive in Western civilisation that we do not even question its validity. It’s important to do so though, because infinite progress also validates a related concept: infinite growth. And while a beautiful concept, infinite progress is as unlikely as infinite growth. Progress is not a core idea in Eastern philosophies or indigenous wisdom.

This goes back to the core of the Spiral Dynamics® model and how it has been incorporated into philosophies and ideologies that have progress as their core value. As I understand it, Clare Graves developed his ECLET model not out of a concern with moving humanity up the hierarchy of values. He was more concerned with alignment within each level. His enquiry happened during a period when Maslow’s work became mainstream and the pyramid became an icon, but his question was very different. His driving metaphor was not the pyramid (a useful but somewhat basic shape). He was focusing on complexity, and more precisely on the alignment between the complexity of the environment and the capacity to deal with that level of complexity in one’s mind.

To reflect this balance, he did not use colours (which simplify but obfuscate important aspects of the purpose) but pairs of letters to describe the levels: AN for beige, BO for purple, CP for red, DQ for blue, ER for orange, FS for green, GT for yellow and HU for turquoise. One letter represented the level of complexity in the environment, the other the capacity to handle complexity. That capacity is absolutistic for DQ (either-or, good/bad, us/them), pluralistic for ER (there is a range of different possibilities and I choose the best one for me), contextual for FS (it all depends on context) and probabilistic for GT. He also said that, according to his research (and that of some of his students after his death), very (VERY) few people were truly aligned at the second tier, although higher tiers are attraction points for personal projections from lower levels. In other words, from an ER point of view GT looks extremely sexy, and someone at DQ will tend to see themselves as FS.

He wrote that a person would lead a more coherent and more fulfilled life if he or she were aligned at their level, regardless of where that level stood in the hierarchy of values. This underlies his theory of change: when someone is thrown into an environment more complex than their current capacity can handle, there is a transition period while they adapt to the new level of complexity. Similarly, one can be thrown into a less complex environment by life circumstances (in a civil war, for example, when survival becomes key), and one’s capacity to handle complexity can also go from more to less (as when an illness affects cognitive faculties). There was no inkling of the desirability of a vertically upward-moving progress in his work, and no mention of consciousness. For him, progress was synonymous with alignment. It was only later that his model was simplified into colours and became easy to integrate into an integralist view of the world that takes vertical upward progress as its core value.

So, I would propose that we need new metaphors and a new vocabulary to replace “raising consciousness” which presupposes a vertical upward moving hierarchy. Metaphors and language that flatten vertical hierarchies into multidimensional complex networks. Fractals instead of pyramids. And then (and this is where the hard work begins! 😅😜), we need to fully integrate those metaphors and language, to get so familiar with them that they become like a limb, a full part of us and how we see the world. And maybe then, only then, will we have opened our “consciousness” enough to realise that what we projected onto the world was all within ourselves. Until then, it is probably safer to see ourselves on the less evolved side of the spectrum. 🙃

Legal Frameworks

I had an interesting discussion with folks on a “Using the Law for Regeneration” call yesterday. We touched upon how legal frameworks need to shift in response to once-in-a-century, technology-induced shifts in the larger system.

In 2022, we are at a point not dissimilar to the late 19th century, when the industrial revolution was profoundly changing how people lived, worked, and thought. We are only starting to realise that digital technologies have created an unprecedented universe which we inhabit but are only beginning to fathom. Unprecedented is the keyword. In many ways, we still hold attitudes and beliefs coherent with the pre-Web 2.0, pre-data-economy world. Our legal frameworks are also lagging, playing a “blindfolded in a forest” catch-up game. Legal frameworks are not neutral. Like all systems within a system, they are cultural artefacts. They carry with them the ideological assumptions and beliefs of the age and place they emerge from.

I came across the following article, which illustrates the conundrum and raises questions around privacy. More importantly, in the larger scheme of things, this article is really about what it means to be an individual in the digital 21st century. To summarise the article (who has time to read one more article or watch one more video? 🙃): a US federal appeals court ruled against LinkedIn and in favour of data analytics firm hiQ, allowing hiQ to keep scraping data off LinkedIn. The court discounted the argument that “LinkedIn users have an expectation of privacy in a public profile” and concluded that this was justified because the survival of hiQ’s business was threatened. In other words, the US court concluded that business survival trumps privacy.

In the article “What Economists Get Wrong About Climate Change”, Steve Keen argues that many economists get climate change wrong because they have a reductionist view of climate, mistaking it for weather temperatures instead of seeing the intricate interconnection between natural phenomena, life and economic activity. According to Keen, some even argue that the impact of climate change on the economy will be minor because most economic activity takes place indoors. As the saying goes, “when a finger points to the moon, which do you look at?”

Similarly, the conclusion of the Ninth Circuit court ignores the larger picture. By endorsing data scraping, an extractive activity par excellence, it gives legal teeth to the extractive aspect of the online universe. The concept of an unalienable individual is the building block of the liberal order. In the “real” world, it is a scaffold for legal frameworks (at least in some countries). The building block of the data economy is data. In datafied worlds, law is not law: code is law (to borrow the title of legal scholar Lawrence Lessig’s 2000 article).

There is a gap that legal frameworks in different countries are only starting to fill, with more or less success. These initiatives clearly show two things: 1. much of the debate is in fact a debate about values, and 2. it’s crazy complex! 🙃 Because when prescriptive regulations or softer incentives are set up, they end up affecting the whole system. It seems to me that, as regenerative thinkers and practitioners, we need to give this important topic serious consideration.


The Dark Side of AI

I came across this article in the Financial Times yesterday (March 19, 2022), “Dark Sides of Using AI to Design Drugs”, by Anjana Ahuja.

Scientists at Collaborations Pharmaceuticals, a North Carolina company using AI to create drugs for rare diseases, experimented with how easy it would be to create rogue molecules for chemical warfare.

As it happens, the answer is VERY EASY! The model took only 6 hours to spit out a set of 40,000 destructive molecules.

And it’s not surprising. As French cultural theorist and philosopher Paul Virilio once said, “when you invent the ship, you invent the shipwreck”. Just like social platforms can be used both to connect with long lost friends AND to spread fake news, AI can be used both to save lives AND to destroy them.

This is a chilling reminder about the destructive potential of increasingly sophisticated technologies that our civilisation has developed but may not have the wisdom to use well.

Web 3.0 Hype & Healthy Critical Thinking

In an article published by CIGI (Centre for International Governance Innovation) on January 14, 2022, AI ethics professor and researcher Elizabeth M. Renieris reminds us that “without a critical perspective, familiar harms will not only be replicated; they will be exacerbated.”

https://www.cigionline.org/articles/amid-the-hype-over-web3-informed-skepticism-is-critical/

“Learning from the past and applying those lessons requires a critical perspective. Without such perspective, proposed “solutions” can only be cosmetic, papering over root causes. Computational or technological attempts to “decentralize” power without addressing the social, political and economic enablers of concentrated power and wealth, such as decades of neo-liberal policies predicated on the illusion of individual choice and control, are bound to fail.”

This Is No Way To Be Human (The Atlantic)

This is a link to Alan Lightman’s article in the Atlantic in January 2022.

“For more than 99 percent of our history as humans, we lived close to nature. We lived in the open. The first house with a roof appeared only 5,000 years ago. Television less than a century ago. Internet-connected phones only about 30 years ago. Over the large majority of our 2-million-year evolutionary history, Darwinian forces molded our brains to find kinship with nature, what the biologist E. O. Wilson called “biophilia.” That kinship had survival benefit. Habitat selection, foraging for food, reading the signs of upcoming storms all would have favored a deep affinity with nature.

“Social psychologists have documented that such sensitivities are still present in our psyches today. Further psychological and physiological studies have shown that more time spent in nature increases happiness and well-being; less time increases stress and anxiety. Thus, there is a profound disconnect between the natureless environment we have created and the “natural” affections of our minds. In effect, we live in two worlds: a world in close contact with nature, buried deep in our ancestral brains, and a natureless world of the digital screen and constructed environment, fashioned from our technology and intellectual achievements. We are at war with our ancestral selves. The cost of this war is only now becoming apparent.”

https://www.theatlantic.com/technology/archive/2022/01/machine-garden-natureless-world/621268/

And when we look at the digital technologies being developed at the moment, what future do we see?

Web 3.0 Data Ownership, Solution to the Excesses of the Data Economy?

There is much hope at the moment that web 3.0 will provide solutions to the problems brought about by the data economy (by the way, I just realised that with just one sleight of hand, hope becomes hype, and vice-versa). Web 3.0 proposes that users own their own data, instead of leaving it to other actors to use freely. The reasoning is that they can then decide what they want to do with that data, and who they want to release it to and when. We often hear the expression “paradigm shift” when it comes to Web 3.0. Is it? It proposes to solve the issues of surveillance capitalism by shifting data ownership from companies to the users themselves (i.e. users own their own data and the problem will be solved). But are we in fact trying to solve problems with the same tools that created them in the first place?

Karl Polanyi in The Great Transformation (1944) explored how capitalism creates fictitious commodities. Capitalism commodifies nature into land, human activity into work, and exchange into money. Nature, life and exchange are not tradable. Land, work and money are. The word “fictitious” is important here. It suggests that commodification creates tradable products out of something that is not tradable. In the 19th and 20th centuries, industrial capitalism commodified nature, the environment we live in. In the 21st century, surveillance capitalism is commodifying human life.

We have the environmental problems we have today because, fundamentally, we see nature as an object to be grabbed, sold and exploited. Similarly, human life today is grabbed (i.e., datafied), traded or used as raw material to create valuable behavioural products that are traded on behavioural markets for profit (see Zuboff, “The Age of Surveillance Capitalism”). The concept of ownership and property is solidly anchored in the capitalist idea that everything out there can be owned and turned into a tradable commodity. First, during the industrial revolution, it was nature that was divided, sold and exploited for its resources. Today, under surveillance capitalism, it is human life. Two different objects, same process. When we talk about a paradigm shift, we need to explore whether the avenues we are embarking on right now (such as ownership of one’s own data) truly represent a paradigmatic shift, or whether we need to review our assumptions.

Furthermore, users’ ownership of their own data is a neat idea in principle, but its application raises many complex questions, because real life is not neat. Ownership of data is not a clear-cut category. The concept of ownership is structured (a yes-or-no proposition); life is not. Ownership does not necessarily provide the type of structure that accurately reflects life. A large dimension of life happens outside of this paradigm.

First, having ownership of our own data does not mean that we will have the wisdom to use it well. We have been trained and conditioned for the past 20 years to value convenience above all other things when it comes to using digital technologies. But convenience is not the value most conducive to sustainability. It is more convenient to throw garbage out the window than to recycle, but in doing so we create an ecological crisis. By choosing convenience in our digital lives, we also create an ecological crisis. How many of us prefer to visit a website rather than use an app (apps have many more prying capacities)? How many of us take the time to change our phone’s settings to increase privacy, to review them regularly, and to delete apps downloaded once and never used again? How many of us read through privacy policies? How many of us just click yes out of convenience when a website asks us whether to accept all cookies (instead of spending a few minutes customising them)? Not that many. So when given the choice between releasing all data or customising, how many people would actually take the time to choose which data to release and which not to?

Also, releasing one’s data “according to what’s needed” presupposes that we understand very clearly how it is being used, what the consequences of releasing it are, and what is truly needed and what is not. Say I own the data I produce and I can choose to release it or not. That does not solve the issue of what is done with aggregated data once it is released. If that data is transformed slightly in one way or another, is it still mine? Or can someone else trade it or turn it into a product to be traded?

There are tricky questions that pertain to the ambiguous nature of the digital terrain. How do we treat ownership of metadata (the data about data)? How do we treat data that is not about a person but about a group of people, or communities? Who owns what in this case, and who decides? And who decides who decides? Who owns data recorded by someone but involving someone else (police patrol footage, for example, or a photo of myself I post on Instagram in which my friends also appear)? And what happens to the zillions of terabytes of data that are already “out there”, irretrievable? How do we put those infrastructures in place against the backdrop of a probable huge pushback from those who benefit from the data economy? And how do we make sure that released data is used to perform what it is supposed to perform and not used in another way? Blockchain mechanisms promise absolute certainty and privacy, but this presupposes that absolutely everything happens on the blockchain.

How, as a caring society, do we protect the vulnerable? How about children? Or those who are not digitally literate (probably 99% of the world, because knowing how to use a smartphone does not equate to digital and technical literacy and awareness)? How about those who live at the fringe of society, or at the fringe of the power centres of the world? It’s all very good to say that all our data is in a little box on our phone, but that presupposes that everyone has physical access to it, and the means to get the phone and the little box. Do we think about this from the point of view of an Indian mother in a village, or a Mexican child, or anyone who is not part of the 1%? Or are we (AGAIN) going to develop the next generation of technology through the eyes of a white male from a developed country?

Then there is the essential question of translation. The digital is a translation of real life; it is not real life itself. It is just a map. From the beginnings of AI, data science has been trying to create a language that could adequately reflect life, but so far it has not succeeded. For historical and technical reasons, the digital language used today has been developed along the lines of information theory, and information theory is based on Shannon’s linear communication model. Humans, and life in general, do not communicate like this. The digital has not been able to domesticate and integrate tacit knowledge. This is visible when data science uses proxies for aspects of life that cannot be turned into discrete data, like using a US zip code as a proxy for wealth or education.
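As a minimal illustration of that proxy move (a sketch with made-up numbers, not any particular production system), here is the kind of join a data scientist might perform: the model never sees “wealth”, only a zip code column dressed up as a stand-in for it.

    # The "proxy" move in miniature: wealth is never measured; a zip code column
    # is joined to census-style medians and stands in for it.
    # All values are made up for illustration.
    import pandas as pd

    people = pd.DataFrame({"person": ["A", "B"], "zip": ["10011", "79901"]})
    income_by_zip = pd.DataFrame({"zip": ["10011", "79901"],
                                  "median_income": [110_000, 38_000]})

    # One merge turns a postal code into a feature standing in for wealth.
    features = people.merge(income_by_zip, on="zip")
    print(features)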

Furthermore, data is not information. Data is a way to classify. Classifications and standards are imbricated in our lives. They operate invisibly, but they create social order (Bowker & Star). Despite all the hype (and the hope) about the digital revolution, the digital is still trying to fit the messiness of life into the clear-cut categories of the linear world of the industrial revolution. The enterprise of datafication (i.e., turning human life into discrete, computer-ready data) is essentially a reductionist enterprise: it does not create real knowledge but, as Bernard Stiegler once put it, “stupid” knowledge. This is the issue with algorithms today. Ownership of data does not address the fundamental fact that datafication creates a world that is not fit for humans, because it denies and destroys that which makes us human, i.e., tacit knowing.

Finally, as mentioned above, datafication is a process of commodification of human life. For all the benefits of Web 3.0, the decentralised blockchain-based web anchors this process even more strongly into the fabric of society.

Something Is Broken… (from The MarkUp)

Nonprofit Websites Are Riddled With Ad Trackers

Such organizations often deal in sensitive issues, like mental health, addiction, and reproductive rights—and many are feeding data about website visitors to corporations

By: Alfred Ng and Maddy Varner

Originally published on themarkup.org

Last year, nearly 200 million people visited the website of Planned Parenthood, a nonprofit that many people turn to for very private matters like sex education, access to contraceptives, and access to abortions. What those visitors may not have known is that as soon as they opened plannedparenthood.org, some two dozen ad trackers embedded in the site alerted a slew of companies whose business is not reproductive freedom but gathering, selling, and using browsing data. 

The Markup ran Planned Parenthood’s website through our Blacklight tool and found 28 ad trackers and 40 third-party cookies tracking visitors, in addition to so-called “session recorders” that could be capturing the mouse movements and keystrokes of people visiting the homepage in search of things like information on contraceptives and abortions. The site also contained trackers that tell Facebook and Google if users visited the site.

The Markup’s scan found Planned Parenthood’s site communicating with companies like Oracle, Verizon, LiveRamp, TowerData, and Quantcast—some of which have made a business of assembling and selling access to masses of digital data about people’s habits. 

Katie Skibinski, vice president for digital products at Planned Parenthood, said the data collected on its website is “used only for internal purposes by Planned Parenthood and our affiliates,” and the company doesn’t “sell” data to third parties. 

“While we aim to use data to learn how we can be most impactful, at Planned Parenthood, data-driven learning is always thoughtfully executed with respect for patient and user privacy,” Skibinski said. “This means using analytics platforms to collect aggregate data to gather insights and identify trends that help us improve our digital programs.” 

Skibinski did not dispute that the organization shares data with third parties, including data brokers. 

A Blacklight scan of Planned Parenthood Gulf Coast—a localized website specifically for people in the Gulf region, including Texas, where abortion has been essentially outlawed—churned up similar results. 

Planned Parenthood is not alone when it comes to nonprofits, some operating in sensitive areas like mental health and addiction, gathering and sharing data on website visitors.

Using our Blacklight tool, The Markup scanned more than 23,000 websites of nonprofit organizations, including those belonging to abortion providers and nonprofit addiction treatment centers. The Markup used the IRS’s nonprofit master file to identify nonprofits that have filed a tax return since 2019 and that the agency categorizes as focusing on areas like mental health and crisis intervention, civil rights, and medical research. We then examined each nonprofit’s website as publicly listed in GuideStar. We found that about 86 percent of them had third-party cookies or tracking network requests. By comparison, when The Markup did a survey of the top 80,000 websites in 2020, we found 87 percent used some type of third-party tracking. 

About 11 percent of the 23,856 nonprofit websites we scanned had a Facebook pixel embedded, while 18 percent used the Google Analytics “Remarketing Audiences” feature. 

The Markup found that 439 of the nonprofit websites loaded scripts called session recorders, which can monitor visitors’ clicks and keystrokes. Eighty-nine of those were for websites that belonged to nonprofits that the IRS categorizes as primarily focusing on mental health and crisis intervention issues.
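(Blacklight itself is far more sophisticated; purely as a rough sketch of the underlying idea, and not The Markup’s actual methodology, one could flag the third-party domains a page pulls resources from like this, in Python:)

    # Rough sketch only: list domains, other than the site's own, that a page
    # loads scripts, images or iframes from. Real tracker detection (as in
    # Blacklight) also inspects cookies, network requests and known tracker lists.
    from html.parser import HTMLParser
    from urllib.parse import urlparse
    import requests

    def base_domain(netloc: str) -> str:
        # Crude approximation of the registrable domain (ignores co.uk etc.).
        return ".".join(netloc.split(".")[-2:])

    class ResourceCollector(HTMLParser):
        """Collects absolute src URLs of scripts, images and iframes."""
        def __init__(self):
            super().__init__()
            self.urls = []

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "img", "iframe"):
                for name, value in attrs:
                    if name == "src" and value and value.startswith("http"):
                        self.urls.append(value)

    def third_party_domains(page_url: str) -> set:
        site = base_domain(urlparse(page_url).netloc)
        collector = ResourceCollector()
        collector.feed(requests.get(page_url, timeout=10).text)
        return {base_domain(urlparse(u).netloc) for u in collector.urls
                if base_domain(urlparse(u).netloc) != site}

    # Example: print(third_party_domains("https://www.plannedparenthood.org"))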

“As a user of this website, by sharing your information with them, you probably don’t assume that this sensitive information is shared with third parties and definitely don’t assume that your keystrokes are recorded,” Gunes Acar, a privacy researcher who copublished a 2017 study on session recorders, said. “The more sensitive the website is, the more worried I am.” 

Tracy Plevel, the vice president of development and community relations at Gateway Rehab, one of the nonprofits with session recorders on its site, said that the nonprofit uses trackers and session recorders because it needs to stay competitive with its larger, for-profit counterparts.

“As a nonprofit ourselves, we are up against for-profit providers with large advertising budgets as well as the addiction treatment brokers who grab those seeking care with similar online advertising tactics and connect them with the provider who is offering the greatest ‘sales’ compensation,” Plevel said. “Additionally we know user experience has a big impact on following through on treatment. When someone is ready to commit to treatment, we need to ensure it [is] as easy as possible for them before they get frustrated or intimidated by the process.” 

Other nonprofits had a significant number of trackers embedded on their sites as well. The Markup found 26 ad trackers and 50 third-party cookies on The Clinic at Sharma-Crawford Attorneys at Law, a Kansas City legal clinic that represents low-income people facing deportation.

Rekha Sharma-Crawford, the board president of The Clinic, wrote in an emailed statement, “We take privacy and security concerns very seriously and will continue to work with our web provider to address the issues you have identified.”

Save the Children, a humanitarian aid organization founded more than 100 years ago, had 26 ad trackers and 49 third-party cookies. March of Dimes, a nonprofit started by President Franklin D. Roosevelt that focuses on maternal and infant care, had more than 29 ad trackers on its site and 58 third-party cookies. City of Hope, a Californian cancer treatment and research center, had 25 ad trackers and 47 third-party cookies. 

Paul Butcher, assistant vice president of global digital strategy at Save the Children, said in an emailed statement that the organization “takes data protection very seriously.” Butcher also wrote that Save the Children collects some data through ad trackers “to improve user experience” and that the organization is in the process of revamping its data retention policies and recently hired a new head of data.

March of Dimes and City of Hope did not respond to requests for comment.

State-Level Privacy Laws Miss Nonprofits

While health data is governed by HIPAA and educational records by FERPA, there are no federal laws governing how websites track their visitors. Recently, a few states—California, Virginia, and Colorado—have enacted consumer privacy laws that require companies to disclose their tracking practices and allow visitors to opt out of data collection.

But nonprofits in two of those states, California and Virginia, don’t need to adhere to the regulations. 

Sen. Ron Wyden (D-OR), who has proposed his own federal privacy legislation, said that nonprofits accrue a large amount of potentially sensitive data. 

“Nonprofits store incredibly personal information about things we’re passionate about, from political causes and social views to which charitable causes we care about,” Wyden said in an emailed statement. “If a data breach reveals someone donates to a domestic violence support group or an LGBTQ rights organization or the name of their mosque, any of that information could be incredibly private.”

Nonprofit leaders, however, argue that they lack the infrastructure and funding to comply with privacy law requirements and must gather and share information on donors in order to survive. 

“One of the most substantive and impactful uses of data by nonprofits has been our fundraising,” said Shannon McCracken, the CEO of The Nonprofit Alliance, an advocacy group made up of nonprofits and businesses. “Without the ability to cost-effectively reach prospective new donors and current donors, then nonprofits can’t continue to be as impactful as they are today.” 

But purposeful or not, privacy experts say, nonprofits are feeding personal information to data brokers and tech giants like Facebook and Google. 

“A nonprofit might share your phone number and name with LiveRamp. Tomorrow, a for-profit entity can then reuse that same data to target you,” said Ashkan Soltani, a privacy expert and former chief technologist at the Federal Trade Commission. “The data flows that go into these third-party aggregators and data brokers come often from nonprofits as well.” 

Soltani, who was appointed executive director of the California Privacy Protection Agency on Oct. 4, helped draft the California Consumer Privacy Act, which was originally introduced with the nonprofit exemptions.

Many major nonprofits work with data brokers to help organize and analyze their data, Jan Masaoka, CEO of the California Association of Nonprofits, said. 

“People that have big donor lists use them extensively, pretty much all of them use one of the services,” Masaoka said. “They don’t keep it in-house, pretty much everybody keeps it with one of these services.” 

She noted that Blackbaud is a company that nonprofits often turn to. The registered data broker’s marketing material promotes a co-op database that combines donor data from more than 550 nonprofits with public information on millions of households. 

Blackbaud didn’t respond to a request for comment.

Because of a lack of funds, nonprofits also rely on third-party platforms—which also happen to be data brokers—to manage their data’s security and privacy, McCracken said. But these kinds of companies aren’t immune to cyberattacks either: Blackbaud disclosed a ransomware attack in 2020 in which hackers stole passwords, Social Security numbers, and banking information, according to a Securities and Exchange Commission filing. Hundreds of charitable organizations, schools, and hospitals were affected, along with more than 13 million people, according to the Identity Theft Resource Center. 

“They rely on this kind of problematic ecosystem to achieve their work, and as a result, they share number lists, email addresses, or browsing behavior with third-party advertising companies and subject their members to risk,” Soltani said.

The Exception

Unlike its predecessors in California and Virginia, Colorado’s privacy bill doesn’t have an exemption for nonprofits. 

In both California and Virginia, the bills’ main supporters gave nonprofits an exemption as a political maneuver. Alastair Mactaggart, a real estate developer and founder of Californians for Consumer Privacy, who was the driving force behind the California Consumer Privacy Act, said his proposal was already facing opposition from tech giants and he didn’t want a political showdown with nonprofits, too.

“You gotta take the first step, so we figured this was the one that would be the easiest to bounce off,” Mactaggart said. “Eventually, I hope that the big nonprofits are included as well.”

David Marsden, the state senator who introduced the Virginia Consumer Data Protection Act, echoed that sentiment, reflecting that the law wasn’t perfect but still a good start.

“Does this pick up everybody that it should, or exempt everybody who needs an exemption? Probably not, but it comes pretty close,” Marsden said. “We were able, with this bill, to get it passed without people getting up and objecting to what we were trying to do.” 

Colorado state senator Robert Rodriguez, who co-sponsored the state’s privacy bill, said he didn’t include an exemption for nonprofits because he felt that any entity that had data on more than 100,000 people should have to follow privacy protections. He also didn’t understand why other states had exemptions. 

“Someone that has over 100,000 records is a good size,” he said in an email. “They should have some protections or requirements to follow.” 

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

WhatsApp Terms of Service & Privacy Policy as of March 2021

In the post “Why I Am Quitting WhatsApp – Part II” below, I mention a link to the Terms of Service and the Privacy Policy. Since those terms change over time, I enclose below an excerpt of the terms as of 27 March 2021 in PDF format.

The clause “Information We Collect” is divided into three groups:

  1. Information you provide (hinting that the other two are information that you do NOT (and may not want to) provide),
  2. Automatically collected information,
  3. Third-party information.

Please see content below.

Information You Provide

• Your Account Information. You provide your mobile phone number to create a WhatsApp account. You provide us the phone numbers in your mobile address book on a regular basis, including those of both the users of our Services and your other contacts. You confirm you are authorized to provide us such numbers. You may also add other information to your account, such as a profile name, profile picture, and status message.

• Your Messages. We do not retain your messages in the ordinary course of providing our Services to you. Once your messages (including your chats, photos, videos, voice messages, files, and share location information) are delivered, they are deleted from our servers. Your messages are stored on your own device. If a message cannot be delivered immediately (for example, if you are offline), we may keep it on our servers for up to 30 days as we try to deliver it. If a message is still undelivered after 30 days, we delete it. To improve performance and deliver media messages more efficiently, such as when many people are sharing a popular photo or video, we may retain that content on our servers for a longer period of time. We also offer end-to-end encryption for our Services, which is on by default, when you and the people with whom you message use a version of our app released after April 2, 2016. End-to-end encryption means that your messages are encrypted to protect against us and third parties from reading them.

• Your Connections. To help you organize how you communicate with others, we may create a favorites list of your contacts for you, and you can create, join, or get added to groups and broadcast lists, and such groups and lists get associated with your account information.

• Customer Support. You may provide us with information related to your use of our Services, including copies of your messages, and how to contact you so we can provide you customer support. For example, you may send us an email with information relating to our app performance or other issues.

Automatically Collected Information

• Usage and Log Information. We collect service-related, diagnostic, and performance information. This includes information about your activity (such as how you use our Services, how you interact with others using our Services, and the like), log files, and diagnostic, crash, website, and performance logs and reports.

• Transactional Information. If you pay for our Services, we may receive information and confirmations, such as payment receipts, including from app stores or other third parties processing your payment.

• Device and Connection Information. We collect device-specific information when you install, access, or use our Services. This includes information such as hardware model, operating system information, browser information, IP address, mobile network information including phone number, and device identifiers. We collect device location information if you use our location features, such as when you choose to share your location with your contacts, view locations nearby or those others have shared with you, and the like, and for diagnostics and troubleshooting purposes such as if you are having trouble with our app’s location features.

• Cookies. We use cookies to operate and provide our Services, including to provide our Services that are web-based, improve your experiences, understand how our Services are being used, and customize our Services. For example, we use cookies to provide WhatsApp for web and desktop and other web-based services. We may also use cookies to understand which of our FAQs are most popular and to show you relevant content related to our Services. Additionally, we may use cookies to remember your choices, such as your language preferences, and otherwise to customize our Services for you. Learn more about how we use cookies to provide you our Services.

• Status Information. We collect information about your online and status message changes on our Services, such as whether you are online (your “online status”), when you last used our Services (your “last seen status”), and when you last updated your status message.

Third-Party Information

• Information Others Provide About You. We receive information other people provide us, which may include information about you. For example, when other users you know use our Services, they may provide your phone number from their mobile address book (just as you may provide theirs), or they may send you a message, send messages to groups to which you belong, or call you.

• Third-Party Providers. We work with third-party providers to help us operate, provide, improve, understand, customize, support, and market our Services. For example, we work with companies to distribute our apps, provide our infrastructure, delivery, and other systems, supply map and places information, process payments, help us understand how people use our Services, and market our Services. These providers may provide us information about you in certain circumstances; for example, app stores may provide us reports to help us diagnose and fix service issues.

• Third-Party Services. We allow you to use our Services in connection with third-party services. If you use our Services with such third-party services, we may receive information about you from them; for example, if you use the WhatsApp share button on a news service to share a news article with your WhatsApp contacts, groups, or broadcast lists on our Services, or if you choose to access our Services through a mobile carrier’s or device provider’s promotion of our Services. Please note that when you use third-party services, their own terms and privacy policies will govern your use of those services.
