Datafication & Technology

Datafication, Phantasmagoria of the 21st Century

Privacy Guides – Restore Your Online Privacy

Privacy Guides is a collection of cybersecurity resources and privacy-focused tools to protect yourself online.

Start your privacy journey here. Learn why privacy matters, the difference between Privacy, Secrecy, Anonymity and Security, and how to determine the threat model that best corresponds to your needs.

Here are some examples of threats. You may want to protect yourself from some but not care much about others.

  • Anonymity – Shielding your online activity from your real identity, protecting you from people who are trying to uncover your identity specifically.
  • Targeted Attacks – Being protected from hackers or other malicious actors who are trying to gain access to your data or devices specifically.
  • Passive Attacks – Being protected from things like malware, data breaches, and other attacks that are made against many people at once.
  • Service Providers – Protecting your data from service providers (e.g. with E2EE, which renders your data unreadable to the server).
  • Mass Surveillance – Protection from government agencies, organisations, websites, and services which work together to track your activities.
  • Surveillance Capitalism – Protecting yourself from big advertising networks, like Google and Facebook, as well as a myriad of other third-party data collectors.
  • Public Exposure – Limiting the information about you that is accessible online—to search engines or the general public.
  • Censorship – Avoiding censored access to information or being censored yourself when speaking online.

Here, you can read about Privacy Guides’ recommendations for a whole range of online privacy tools, from browsers to service providers (cloud storage, email services, email aliasing services, payment, hosting, photo management, VPNs etc.), software (sync, data redaction, encryption, file sharing, authentication tools, password managers, productivity tools, communication such as messaging platforms etc.) and operating systems.

You can also understand some common misconceptions about online privacy (think: “VPN makes my browsing more secure”, “open source is always secure” or “complicated is better” amongst others).

You can also find valuable information about account creation: what happens when you create an account, understanding Terms of Service and Privacy Policies, how to secure an account (password managers, authentication software, email aliases etc.). And just as important (maybe more), about account deletion (we leave A LOT of traces in the course of our digital life, and it’s important to become aware of what they are and how to reduce their number).

AND MUCH MORE!

I can’t recommend this website enough. Visit it, revisit it, bookmark it and share it with friends and enemies. 🙂

[HOW TO] Mitigating Tracking

This is from an exchange with a privacy and security expert friend. I am publishing his replies to my questions “as is” (no editing).

Many people ask me about tracking. What is it? Can we prevent it?

Meta/FB pixel and Google Analytics are the two most pervasive tracking tools that follow people all around the web. The vast majority of sites have either or both running silently in the background. And each can see down to the most minute detail everything a user does on a website – every link or page that gets clicked or accessed, your mouse movements, the data you enter into every form or text box or search bar, the credentials you input to sign up or register for a service, the time you spend viewing a certain piece of content on the site, and countless other things (visit deviceinfo.me to see an example of all the little things a site can track and recognise about your computer).

And then all that data gets recorded and associated with your identity, on either a 100% precise “deterministic” basis (meaning FB or Google know you personally are the user), or on a “probabilistic” basis (when they don’t know for a fact it is you but can infer that it is likely you based on a range of clues/patterns).

Tracking is deterministic for most internet users (i.e. those not taking precautions to prevent and block tracking). Tracking is probabilistic for the small segment that actively try to mitigate the tracking with various techniques (someone like me).

The goal for someone who cares and is operating in the probabilistic bucket is to actively thwart the tracking to the extent where FB/Google is unable to, with a good degree of confidence, link your identity to the given activity.

But there is otherwise no way to 100% prevent such tracking, to fully escape all deterministic and probabilistic tracking of your activity, other than not owning digital devices and never accessing the internet.
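To make the two buckets concrete, here is a minimal Python sketch of the linking logic described above. It is purely illustrative: the cookie and fingerprint tables, the names and the 0.5 confidence threshold are all made up, and no real tracker is this simple.

```python
# Illustrative sketch only: hypothetical server-side state a tracker might hold.
LOGIN_COOKIES = {"cookie-abc": "alice@example.com"}           # set when you sign in to FB/Google
SEEN_FINGERPRINTS = {"fp-1234": ("alice@example.com", 0.83)}  # fingerprint -> (guess, confidence)

def link_event(event: dict) -> tuple:
    """Return (identity guess, basis) for a pageview event."""
    # Deterministic: the browser sends a cookie tied to a signed-in account,
    # so the tracker knows exactly who this is.
    if event.get("login_cookie") in LOGIN_COOKIES:
        return LOGIN_COOKIES[event["login_cookie"]], "deterministic"
    # Probabilistic: no signed-in cookie, so fall back to inference from clues
    # such as the browser fingerprint, IP address and behavioural patterns.
    guess, confidence = SEEN_FINGERPRINTS.get(event.get("fingerprint"), (None, 0.0))
    return (guess if confidence > 0.5 else None), "probabilistic"

# A signed-in visit is linked with certainty; a signed-out one is only inferred.
print(link_event({"login_cookie": "cookie-abc", "page": "/shoes"}))
print(link_event({"fingerprint": "fp-1234", "page": "/shoes"}))
```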

The most basic + doable + minimal pain actions to take to move oneself away from being in the deterministic bucket and into the probabilistic category are:

  1. Practice “browser isolation”, meaning use one browser exclusively for Facebook/Meta/Instagram + Google/Gmail things, and for nothing else. Then use another, separate browser for all your other non-FB/Google internet activity. The key is to make sure you NEVER sign into your FB/Google/Gmail accounts on your non-FB/Google browser (the moment this happens, FB/Google are able to immediately link that browser and all its future activity to your personal identity).
  2. Do NOT use the Google Chrome web browser as your non-FB/Google browser. Use Firefox or Brave Browser instead. And again, NEVER log into any FB/Google account on your Firefox/Brave browser (and try to avoid, as much as possible, even visiting any FB/Google products or websites on that browser).
  3. Install and activate the browser extension uBlock Origin in your non-FB/Google browser.
  4. Do not use Google Search in your non-FB/Google browser, and don’t go to Google to make searches. Use privacy alternatives like DuckDuckGo (www.duckduckgo.com) or Brave Search. This preference can be toggled in the browser settings.

Of course, one of the most effective actions is to fully delete your accounts with, and entirely avoid using, any Facebook/Meta + Google products/services, but this is too big a jump for most people and still doesn’t mitigate the tracking 100% (even without a formal account on FB/Google, and without further mitigations in place, they are still able to identify you as a unique user and track you via a “shadow profile” they create).

All of this is only basic tracking mitigation for standard desktop web browser activity (i.e. just visiting websites on your computer). The many other ways our digital behaviour is tracked require their own sets of mitigations, so this only covers one part of it, but it is an effective and easy start.

Can you outline a complete strategy to mitigate tracking?

I’d say overall there are a few key domains to look at:

  • Web browsing (basic mitigation as above).
  • Mobile devices, because these are one of the biggest sources of privacy leakage in most people’s lives (mitigation being switching to a de-googled Android device instead of an iPhone or regular Android, and limiting installed apps to only vital ones).
  • Social media, for obvious reasons (deleting and avoiding social media, or at least Facebook, or generally being sparing in use and consciously minimising the data shared on the platform).
  • Email, because email on traditional providers is not private; all content can be and is actively read and analysed by the provider (migrate away from Gmail, Outlook, Yahoo, Apple etc. and move to trustworthy, privacy-respecting email providers like Protonmail or Tutanota).
  • Cloud storage services, for the same reason as email (migrate away from Dropbox and other big tech cloud storage providers and move to privacy-friendly ones like Proton).
  • Communications, because normal communications are either not private, not secure, or both (try to use Signal www.signal.org over WhatsApp, and Signal calls/messages over regular phone calls or SMS; even WhatsApp is better for voice calls/messaging than traditional phone calls/SMS, as at least it is end-to-end encrypted).
  • Use unique account credentials for each of your online accounts, with a different, complex password for each. Avoid using the same password (or the same password with only minor variations) across services (this is more about general security, but still important, as you cannot have privacy without security; for basic use I recommend Bitwarden www.bitwarden.com with a very strong master password that you keep close guard over).
  • Use multi-factor or two-factor authentication (MFA or 2FA) to secure accounts wherever possible (ideally use TOTP time-based codes via an app like Aegis or Ente Authenticator; there is a sketch below of how these codes are computed).

NB: The links above are clean (i.e., not affiliate links); I do not get any reward when you subscribe to those services.
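Since TOTP codes came up in the last point of the list above, here is a minimal Python sketch of how those six-digit codes are generated (the RFC 6238 computation an app like Aegis runs locally on your device). The Base32 secret below is a made-up example; a real one is displayed by the service when you enable 2FA.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # Decode the Base32 shared secret shown by the service at 2FA setup time.
    key = base64.b32decode(secret_b32.upper().replace(" ", ""))
    # Number of 30-second windows elapsed since the Unix epoch.
    counter = int(time.time()) // interval
    # HMAC-SHA1 over the big-endian counter (RFC 4226 / RFC 6238).
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: 4 bytes at an offset given by the low nibble of the last byte.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Example with a made-up secret; both your app and the server derive the same
# code from the shared secret and the current time, so no code is ever sent to you.
print(totp("JBSWY3DPEHPK3PXP"))
```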

Leaving Traces Online, Identifiers.

Visit this website (or copy and paste https://www.deviceinfo.me) and it will show you a long list of all the identifiers that every website you visit can find out about you, your location, your device etc. All these different data points are then used to create a “fingerprint” of your web browser, allowing the rest of your web activity on that same browser/device to be tracked.

NB: You can visit this website from any of your devices (mobile or desktop/laptop).
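As a rough illustration of how all those identifiers turn into a “fingerprint”, here is a small Python sketch. The attribute values are invented, and real fingerprinting scripts collect far more signals, but the principle is the same: hash the combination and you get a stable identifier, no cookie required.

```python
import hashlib, json

# Hypothetical attributes a site can read from a browser (the kind of thing
# deviceinfo.me lists); the values here are made up for illustration.
attributes = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...",
    "screen": "1440x900x30",
    "timezone": "Europe/Paris",
    "language": "en-GB",
    "fonts": ["Arial", "Helvetica Neue", "Menlo"],
    "canvas_hash": "a91c",  # placeholder for a canvas-rendering fingerprint
}

# Concatenate and hash: the same browser produces the same ID on every visit,
# which a tracker can log alongside your activity.
fingerprint = hashlib.sha256(json.dumps(attributes, sort_keys=True).encode()).hexdigest()
print(fingerprint[:16])  # a short "browser ID"
```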

A Lost Civilisation

My PhD research on the epistemological shifts that accompany the rise of the digital age has alerted me to the high value the digital civilisation puts on turning the non-quantifiable aspects of our lives and our experience into quantified, computer-ready data. I have become more aware of the numerous deleterious consequences of this phenomenon.

Qualitative dimensions of life such as emotions, relationships or intuitive perceptions (to name a few), which draw upon a rich array of tacit knowing/feeling/sensing common to all of life, are undermined and depreciated. In many areas of decision-making at the individual and collective levels, the alleged neutrality of the process of digital quantification is put forward as an antidote to the biases of the human mind, unreliable as it is, encumbered by emotions and prejudices. While certain areas of the economy lend themselves to quantitative measurement, most crucial aspects of the experience of living do not.

There is a logical fallacy in the belief that digital data are neutral. They are produced in and by social, cultural, economic, historical (etc.) contexts and consequently carry the very biases present in those contexts. There is a logical fallacy in the belief that algorithms are neutral. They are designed to optimise certain outcomes and fulfil certain agendas which, more often than not, do not align with the greater good.

Far from being a revolution, the blind ideological faith in digital data is directly inherited from the statistical and mechanistic mindset of the Industrial Revolution and supports the positivist view that all behaviours and sociality can be turned into hard data. The enterprise of eradicating uncertainty and ambiguity under the guise of so-called scientific measurement is such an appealing proposition for so many economic actors that we have come to forget what makes us human.

A civilisation that has devalued and forgotten the humanness of being human is a lost civilisation.

UK Police Urged to Double The Use of Facial Recognition (The Guardian) & Fawkes

This is an article published by The Guardian on October 29, 2023.

https://www.theguardian.com/technology/2023/oct/29/uk-police-urged-to-double-use-of-facial-recognition-software

The UK policing minister is encouraging police departments throughout the country to drastically increase their use of facial recognition software, and to include passport photos in the AI database of recognisable images.

Excerpts:

“Policing minister Chris Philp has written to force leaders suggesting the target of exceeding 200,000 searches of still images against the police national database by May using facial recognition technology.”

“He also is encouraging police to operate live facial recognition (LFR) cameras more widely, before a global artificial intelligence (AI) safety summit next week at Bletchley Park in Buckinghamshire.”

“Philp has also previously said he is going to make UK passport photos searchable by police. He plans to integrate data from the police national database (PND), the passport office and other national databases to help police find a match with the ‘click of one button’.”

If the widespread adoption of facial recognition software (which can recognise and identify a face even when it is partially covered) concerns you, you may want to consider using FAWKES, an image cloaking software developed at the SAND Lab at the University of Chicago.

The latest version (2022) includes compatibility with Apple M1 chips.

http://sandlab.cs.uchicago.edu/fawkes

This is what the SAND Lab website says:

The SAND Lab at University of Chicago has developed Fawkes, an algorithm and software tool (running locally on your computer) that gives individuals the ability to limit how unknown third parties can track them by building facial recognition models out of their publicly available photos. At a high level, Fawkes “poisons” models that try to learn what you look like, by putting hidden changes into your photos, and using them as Trojan horses to deliver that poison to any facial recognition models of you. Fawkes takes your personal images and makes tiny, pixel-level changes that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable by humans or machines and will not cause errors in model training. However, when someone tries to identify you by presenting an unaltered, “uncloaked” image of you (e.g. a photo taken in public) to the model, the model will fail to recognize you.

I downloaded FAWKES on my M1 MacBook, and while a bit slow, it works perfectly. You may have to tweak your privacy and security settings (in System Settings) to allow FAWKES to run on your computer. I also recommend using the following method to open the app the first time you use it: go to Finder > Applications > FAWKES, right-click on the app name and select “Open”.

Be a bit patient: it took 2-3 minutes for the software to open when I first used it, and it may take a few minutes to process photos. But all in all, it works very well. Please note that it only seems to work on M1 MacBooks, not iMacs.
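To give a feel for what “tiny, pixel-level changes that are invisible to the human eye” means, here is a toy Python sketch. To be clear, this is NOT the Fawkes algorithm (Fawkes computes targeted, model-aware perturbations); it only shows that shifting each pixel by a couple of intensity levels out of 255 is imperceptible. The file names are placeholders.

```python
import numpy as np
from PIL import Image

# Load a photo as integers so we can add signed perturbations (path is an example).
img = np.array(Image.open("portrait.jpg"), dtype=np.int16)

# Random perturbation of at most ±2 intensity levels per channel, out of 0-255.
# Fawkes chooses its changes carefully to mislead face-recognition models;
# here they are random, purely to visualise how small the changes are.
perturbation = np.random.randint(-2, 3, size=img.shape)
perturbed = np.clip(img + perturbation, 0, 255).astype(np.uint8)

Image.fromarray(perturbed).save("portrait_perturbed.jpg")
print("max per-pixel change:", np.abs(perturbed.astype(np.int16) - img).max())  # at most 2
```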

Siri Beerends, AI Makes Humans More Robotic

Siri Beerends is a cultural sociologist and researches the social impact of digital technology at media lab SETUP. With her journalistic approach, she stimulates a critical debate on increasing datafication and artificial intelligence. Her PhD research (University of Twente) deals with authenticity and the question of how AI reduces the distance between people and machines.

Her TEDx talk caught my attention because, as a sociologist of technology, she looks at AI with a critical eye (and we need MANY more people to do this nowadays). In this talk, she gives 3 examples illustrating how AI does not work for us (humans), but we (humans) work for it. She shows how AI changes how we relate to each other in very profound ways. Technology is not good or bad, she says; technology (AI) changes what good and bad mean.

Even more importantly, AI is not a technology, it is an ideology. Why? Because we believe that social and human processes can be captured as computer data, and we forget about the aspects that data cannot capture. AI is also based on a very reductionist understanding of what intelligence means (i.e. what computers are capable of), one that forgets about consciousness, empathy, intentionality, and embodied intelligence. Additionally, contrary to living intelligence, AI is very energy inefficient and has an enormous environmental impact.

AI is not a form of intelligence, but a form of advanced statistics. It can beat us in stable environments with clear rules – in other terms, NOT the world we live in, which is contextual, ambiguous and dynamic. In the messy REAL world, AI at best performs very (VERY!) poorly and at worst creates havoc, because it can’t adapt to context. Do we want to make the world as predictable as possible? Do we want to become data-clicking robots? Do we want to quantify and measure all aspects of our lives, she asks. And her response is a resounding no.

What then?

Technological progress is not societal progress, so we need to expect less from AI and more from each other. AI systems can help solve problems, but we need to look into the causes of these problems, the flaws in our economic systems that trigger these problems again and again.

AI is also fundamentally conservative. It is trained with data from the past and reproduces patterns from the past. It is not real innovation. Real innovation requires better social and economic systems. We (humans) have the potential to reshape them. Let’s not waste our potential by becoming robots.

Watch her TEDx talk below.

Algorithmic Bias in Education. Case Study From The MarkUp.

The MarkUp (an investigative publication focusing on tech) has released an investigation into Wisconsin’s state algorithm used to predict which middle school students will drop out before graduating from high school.

Read the whole story here.

The algorithm is called the Dropout Early Warning System (DEWS). Student dropout is an important issue that needs to be addressed, and improving the chances of students staying in school and graduating from high school is a laudable goal. The question is: are algorithms reliable tools for doing so? As it happens, it seems that they are not.

DEWS has been used for over a decade. The data used to create the scores includes test scores, disciplinary records, and race. Students scoring below 78.5% are marked as High Risk (and a red mark appears next to their name). The MarkUp reports that comparisons between DEWS predictions and state records of actual graduations show the system is wrong three-quarters of the time, especially for Black and Hispanic students. In other words, the system defeats the very purpose for which it exists.

Even more telling: the Department of Public Instruction (DPI) ran its own investigation into DEWS and concluded that the system was unfair. That was in 2021. In 2023, DEWS is still in use. Does this mean that our over-reliance on algorithmic systems has created a situation where we know they fail us, but we have no credible alternative, so we keep using them?

I am reminded of the seminal book by Neil Postman “Technopoly”. He says that in a Technopoly, the purpose of technology is NOT to serve people or life. It justifies its own existence merely by existing. In a Technopoly, technology is not a tool, “people are the tools of their tools” (p68). More importantly, and more problematically, “once a technology is admitted, it plays out its hand; it does what it is designed to do. Our task is to understand what that design is” (p128). It is safe to say that digital technologies have been admitted, but do we really have any understanding of what their design is?

[HOW TO] Manipulate Photos So They Can’t Be Reverse Engineered, Using Signal.

You want to send or post a photo, but don’t want to show the whole image. Maybe it’s a screenshot and you do not want to tell the world about your mobile provider and other personal details visible on it, or you may want to blur your background to hide your location, or or or…

Did you know that it is easy to reverse engineer cropped, blurred or otherwise manipulated photos back to their original state, thereby revealing what you wanted to hide by manipulating the photo in the first place? It is called an “exploit” (as in exploiting a loophole or weakness in a programme or app). Recently, such a weakness was found in the built-in cropping feature on Google Pixel phones, but the weakness is also present in iPhones and other Android phones (read this Wired article to know more).

While companies can patch the exploits, all redacted photos already online (and if you use a cloud service, your photos are most likely already online) remain vulnerable. When you crop a photo, the process tells the file to pretend the cropped-out section is not there, but it actually is still there.
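Here is a conceptual Python sketch of that failure mode (not the actual code of any phone’s editing tool): if the smaller “cropped” file is written over the original without truncating it, the original bytes survive past the end of the new data and can be recovered.

```python
# Stand-ins for real image bytes, to keep the example self-contained.
original = b"ORIGINAL-IMAGE-DATA-INCLUDING-THE-PART-YOU-CROPPED-OUT"
cropped = b"CROPPED-IMAGE-DATA"

with open("photo.png", "wb") as f:      # the photo as first saved
    f.write(original)

with open("photo.png", "r+b") as f:     # "r+b" overwrites in place, without truncating
    f.write(cropped)

with open("photo.png", "rb") as f:
    on_disk = f.read()

print(on_disk[:len(cropped)])   # what viewers show: the cropped image
print(on_disk[len(cropped):])   # leftover original bytes an attacker can mine
```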

As we all now know (and if you don’t, you should), if there is anything you do not want to make public, do not post it online. It is safe to consider that anything you have posted online is now in one way or another known to someone. And deleting what you have already posted does not help. You are just removing it from your view. Your photos are probably already in multiple datasets.

One way to really crop photos is to use… SIGNAL! Yes. You may know Signal as one of the most secure and private messaging platforms, but it is also a great tool to REALLY crop out stuff from your photos so they can’t be reverse engineered. How to do that? Open Signal, take a photo, open the editing tool, crop, change as needed and save. Then send it to “Note To Self” (another great feature of Signal for storing info).

If you have not downloaded Signal yet, you can find it in your app store, or here.

[Podcasts Series] Surveillance Report Podcast

In the Podcast Series, I am going to start posting links to interesting podcasts that cover topics we are interested in.

One of those is the Surveillance Report Podcast, described on their website as “weekly security and privacy news – Presented by Techlore & The New Oil”. Every week, you get about 50 minutes of news on topics around privacy and security, including data breaches, exploits, new research etc. Each episode presents and analyses a highlight story, usually a piece of news that has gone viral in the privacy and security community. It is quite informative, although sometimes a bit technical. Each episode also provides a list of sources for what is discussed.

The Surveillance Report Podcast is available on YouTube, RSS, Apple Podcasts and Spotify.

Algorithmic Technology, Knowledge Production (And A Few Comments In Between)

So, digital technologies are going to save the world.

Or are they?

Let’s have a no-nonsense look at how things really work.

A few comments first.

I am not a Luddite.

[Just a side comment here: Luddites were English textile workers in the 19th century who reacted strongly against the mechanisation of their trade, which put them out of work and left them unable to support their families. Today, they have become the poster child of anti-progress, anti-technology grumpy old bores, and “you’re a Luddite” is a common insult directed at techno-sceptics of all sorts. But the Luddites were actually behaving quite rationally. Many people in the world today react in a similar fashion in the face of the economic uncertainty brought about by technological change.]

That being said, I am not anti-technology. I am extremely grateful for the applications of digital technology that help make the world a better place in many ways. I am fascinated by the ingenuity and the creativity displayed in the development of technologies to solve puzzling problems. I also welcome the fact that major technological shifts have brought major changes in how we live in the world. This is unavoidable, it is part of the impermanent nature of our worlds. Emergence of the new is to be welcomed rather than fought against.

But I am also a strong believer in using discrimination to try to make sense of new technologies, and to critically assess their systemic impact, especially when they have become the object of such hype. The history of humanity is paved with examples of collective blindness. We can’t consider ourselves immune to it.

The focus of my research (and of this post) is Datafication, i.e., the algorithmic quantification of purely qualitative aspects of life. I mention this because there are many other domains that comfortably lend themselves to quantification.

I am using simple vocabulary in this post. This is on purpose, because words can be deceiving. Names such as Artificial Intelligence (AI) or Natural Language Processing (NLP) are highly evocative and misleading, suggesting human-like abilities. There is so much excitement and fanfare around them that it’s worth going back to basics and calling a cat a cat (or a machine a machine). There is a lot of hype around whether AI is sentient or could become sentient, but as of today, there are many simple tasks that AI cannot perform satisfactorily (recognising a non-white-male face, for one), not to mention the deeper issues that plague it (bias in the data used to feed algorithms, the illusory belief that algorithms are neutral, the lack of accountability, the data surveillance architectures… just to name a few). It is just too easy to dismiss these technical, political and social issues in the belief that they will “soon” be overcome.

But hype time is not a time for deep reflection. If the incredible excitement around ChatGPT (despite repeated calls for caution from its founder) is any indication, we are living through another round of renewed collective euphoria. A few years ago, the object of this collective rapture was social media. Today, knowing what we know about the harms they create, it is becoming more difficult to feel deliciously aroused by Facebook and co., but AI has grabbed the intoxication baton. The most grandiose claims are claims of sentience, including from AI engineers who undoubtedly have the ability to build the machines, but whose expertise in assessing their sentience is highly debatable. But in the digital age, extravagant assertions sell newspapers, make stocks shoot up, or bring fame, so it may not all be so surprising.

But I digress…

How does algorithmic technology create “knowledge” about qualitative aspects of life?

First, it collects and processes existing data from the social realm to create “knowledge”. It is important to understand that the original data collected is frequently incomplete, and often reflects the existing biases of the social milieu from which it is extracted. The idea that algorithms are neutral is sexy but false. Algorithms are a set of instructions that control the processing of data. They are only as good as the data they work with. So, I put the word “knowledge” in quotation marks to show that we have to scrutinise its meaning in this context, and use discrimination to examine what type of knowledge is created, what function it carries out, and whose interests it serves.

Algorithmic technology relies on computer-ready, quantified data. Computers are not equipped to handle the fuzziness of qualitative, relational, embodied, experiential data. But a lot of the data produced in the world every day is warm data. (Nora Bateson coined that term, by the way; check The Bateson Institute website to know more, it is well worth a read.) It is fuzzy, changing, qualitative, not clearly defined, and certainly not reducible to discrete quantities. But computers can only deal with quantities, discrete data bits. So, in order to be read by computers, the data collected needs to be cleaned and turned into “structured data”. What does “structured” mean? It means that it has to be transformed into data that can be read by computers; it needs to be turned into bits; it needs to be quantified.

So this begs the question: how is unquantified data turned into quantified data? Essentially, through two processes.

The first one is called “proxying”. The logic is: “I can’t use X, so I will use a proxy for X, an equivalent”. While this sounds great in theory, it has two important implications. Firstly, a suitable proxy may or may not exist, so the relationship of similarity between X and its proxy may be thin. Secondly, someone has to decide which quantifiable equivalent will be used. I insist on the word “someone”, because it means that “someone” has to make that decision, a decision that is far from neutral, highly political and potentially carrying many unintended social consequences. In many instances, those decisions are made not by the stakeholders who have a lived understanding of the context where the algorithmic technology will be applied, but by the developers of the technology, who lack such understanding.

Some examples of proxied data: assessing teachers’ effectiveness through their students’ test results; ranking “education excellence” at universities using SAT scores, student-teacher ratios, and acceptance rates (that’s what the editors at US News did when they started their university ranking project); evaluating an influencer’s trustworthiness by the number of followers she has (thereby creating unintended consequences as described in this New York Times investigative piece “The Follower Factory”); using credit worthiness to screen potential new corporate hires. And more… Those examples come from a fantastic book by math-PhD-data-scientist turned activist Cathy O’Neil called “Weapons of Math Destruction”. If you don’t have time or the inclination to read the book, Cathy also distills the essence of her argument in a TED talk, “The era of blind faith in big data must end”.
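As a toy illustration of the first example in that list (with invented numbers), here is what proxying looks like once it reaches code: a qualitative question, “is this teacher effective?”, is replaced by a quantity that someone chose to stand in for it.

```python
# Invented test scores for four students, before and after a school year.
scores_before = [62, 71, 55, 80]
scores_after = [65, 70, 59, 82]

# Someone decided: "teacher effectiveness" := average change in test scores.
# That choice, and everything it ignores (class composition, home situations,
# what was actually learned), is a human judgement, not a neutral fact.
effectiveness_proxy = sum(a - b for a, b in zip(scores_after, scores_before)) / len(scores_before)
print(effectiveness_proxy)  # 2.0 -- a single number now stands in for the teacher
```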

While all of the above sounds like a lot of work, there is data that is just too fuzzy to be structured and too complex to be proxied. So the second way to treat unstructured data is quite simple: abandon it. Forget about it! It never existed. Job done, problem solved. While this is convenient, of course, it becomes clear that it leaves out A LOT of important information about the social, especially because a major part of the qualitative data produced in the social realm falls into this category. It also leaves out the delicate but essential qualitative relational data that weaves the fabric of living ecosystems. So in essence, after the proxying and the pruning of qualitative data, it is easy to see how the so-called “knowledge” that algorithms produce is a rather poor reflection of social reality.

But (and that’s a big but), algorithmic technology is very attractive, because it makes decision-making convenient. How so? By removing uncertainty (of course I should say: by giving the illusion of removing uncertainty). How so? Because it predicts the future (of course I should say: by giving the illusion of predicting the future). Algorithmic technology applied to the social is essentially a technology of prediction. Shoshana Zuboff describes this at length in her seminal book published in 2019, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power”. If you do not have the stomach to read through the 500+ pages, just search “Zuboff Surveillance Capitalism” and you will find a plethora of interviews, articles and seminars she has given since its publication. (Just do me a favour and don’t use Google and Chrome to search, but switch to cleaner browsers like Firefox and search engines like DuckDuckGo.) She clearly and masterfully elucidates how Google’s and Facebook’s money machines rely on packaging “prediction products” that are traded on “behavioural futures markets”, which aim to erase the uncertainty of human behaviour.

There is a lot more to say on this (and I may do so in a later post), but for now, suffice it to say that just as the regenerative processes of nature are being damaged by mechanistic human activity, life-enhancing tacit ways of knowing are being submerged by the datafied production of knowledge. While algorithmic knowledge creation has a place and a usefulness, its widespread use overshadows and overwhelms more tacit, warm, qualitative, embodied, experiential, human ways of knowing and being. The algorithmisation of human experience is creating a false knowledge of the world (see my 3-minute presentation at TEDx in 2021).

This increasing lopsidedness is problematic and dangerous. Problematic because while prediction seems to make decision-making more convenient and efficient, convenience and efficiency are not life-enhancing values. Furthermore, prediction is not understanding, and understanding (or meaning-giving) is an important part of how we orient ourselves in the world. It is also problematically unfair because it creates massive asymmetries of knowledge and therefore a massive imbalance of power.

It is dangerous because while the algorithmic medium is indeed revolutionary, the ideology underlying it is dated and hazardous. The global issues and the potential for planetary annihilation that we are facing today arose from a reductionist mindset that sees living beings as machines and a positivist ideology that fundamentally distrusts tacit aspects of the human mind.

We urgently need a pendulum shift to rebalance algorithmically-produced knowledge with warm ways of knowing in order to create an ecology of knowledge that is conducive to the thriving of life on our planet.
