Datafication, Phantasmagoria of the 21st Century


[Podcasts Series] Surveillance Report Podcast

In the Podcast Series, I am going to start posting links to interesting podcasts that cover topics we are interested in.

One of those is the Surveillance Report Podcast, described on their website as “weekly security and privacy news – Presented by Techlore & The New Oil”. Every week, you get about 50 minutes of news on topics around privacy and security, including data breaches, exploits, new research, etc. Each episode presents and analyses a highlight story, usually a piece of news that has gone viral in the privacy and security community. It is quite informative, although sometimes a bit technical. Each episode also lists the sources for what is discussed.

The Surveillance Report Podcast is available on YouTube, RSS, Apple Podcasts and Spotify.

Algorithmic Technology, Knowledge Production (And A Few Comments In Between)

So, digital technologies are going to save the world.

Or are they?

Let’s have a no-nonsense look at how things really work.

A few comments first.

I am not a Luddite.

[Just a side comment here: the Luddites were English textile workers in the 19th century who reacted strongly against the mechanisation of their trade, which put them out of work and left them unable to support their families. Today, they have become the poster child for anti-progress, anti-technology grumpy old bores, and “you’re a Luddite” is a common insult directed at techno-sceptics of all sorts. But the Luddites were actually behaving quite rationally. Many people in the world today react in a similar fashion in the face of the economic uncertainty brought about by technological change.]

That being said, I am not anti-technology. I am extremely grateful for the applications of digital technology that help make the world a better place in many ways. I am fascinated by the ingenuity and the creativity displayed in the development of technologies to solve puzzling problems. I also welcome the fact that major technological shifts have brought major changes in how we live in the world. This is unavoidable; it is part of the impermanent nature of our worlds. The emergence of the new is to be welcomed rather than fought against.

But I am also a strong believer in using discrimination to try to make sense of new technologies, and to critically assess their systemic impact, especially when they have become the object of such hype. The history of humanity is paved with examples of collective blindness. We can’t consider ourselves immune to it.

The focus of my research (and of this post) is Datafication, i.e., the algorithmic quantification of purely qualitative aspects of life. I mention this because many other domains lend themselves comfortably to quantification; my concern here is with those that do not.

I am using simple vocabulary in this post. This is on purpose, because words can be deceiving. Names such as Artificial Intelligence (AI) or Natural Language Processing (NLP) are highly evocative and misleading, suggesting human-like abilities. There is so much excitement and fanfare around them that it is worth going back to basics and calling a cat a cat (or a machine a machine). There is a lot of hype around whether AI is sentient or could become sentient, but as of today, there are many simple tasks that AI cannot perform satisfactorily (recognising a non-white-male face, for one), not to mention the deeper issues that plague it (bias in the data used to feed algorithms, the illusory belief that algorithms are neutral, the lack of accountability, the data surveillance architectures… just to name a few). It is just too easy to dismiss these technical, political and social issues in the belief that they will “soon” be overcome.

But hype time is not a time for deep reflection. If the incredible excitement around ChatGPT (despite repeated calls for caution from its own makers) is any indication, we are living through another round of collective euphoria. A few years ago, the object of this collective rapture was social media. Today, knowing what we know about the harms they create, it is becoming more difficult to feel deliciously aroused by Facebook and co., but AI has grabbed the intoxication baton. The most grandiose claims are claims of sentience, including from AI engineers who undoubtedly have the ability to build the machines, but whose expertise in assessing their sentience is highly debatable. But in the digital age, extravagant assertions sell newspapers, make stocks shoot up, or bring fame, so perhaps it is not so surprising.

But I digress…

How does algorithmic technology create “knowledge” about qualitative aspects of life?

First, it collects and processes existing data from the social realm to create “knowledge”. It is important to understand that the original data collected is frequently incomplete, and often reflects the existing biases of the social milieu from where it is extracted. The idea that algorithms are neutral is sexy but false. Algorithms are a set of instructions that control the processing of data. They are only as good as the data they work with. So, I put the word “knowledge” in quotation marks to show that we have to scrutinise its meaning in this context, and use discrimination to examine what type of knowledge is created, what function it carries out, and whose interests it serves.

Algorithmic technology relies on computer-ready, quantified data. Computers are not equipped to handle the fuzziness of qualitative, relational, embodied, experiential data. But a lot of the data produced in the world every day is warm data. (Nora Bateson coined that term, by the way; check The Bateson Institute website to learn more, it is well worth a read.) It is fuzzy, changing, qualitative, not clearly defined, and certainly not reducible to discrete quantities. But computers can only deal with quantities, discrete data bits. So, in order to be read by computers, the data collected needs to be cleaned and turned into “structured data”. What does “structured” mean? It means the data has to be transformed into a form that computers can read: it needs to be turned into bits; it needs to be quantified.

This raises the question: how is unquantified data turned into quantified data? Essentially, through two processes.

The first is called “proxying”. The logic is: “I can’t use X, so I will use a proxy for X, an equivalent”. While this sounds great in theory, it has two important implications. Firstly, a suitable proxy may or may not exist, so the relationship of similarity between X and its proxy may be thin. Secondly, someone has to decide which quantifiable equivalent will be used. I insist on the word “someone”, because it means that someone has to make that decision, a decision that is far from neutral, is highly political and potentially carries many unintended social consequences. In many instances, those decisions are made not by the stakeholders who have a lived understanding of the context where the algorithmic technology will be applied, but by the developers of the technology, who lack such understanding.

Some examples of proxied data: assessing teachers’ effectiveness through their students’ test results; ranking “education excellence” at universities using SAT scores, student-teacher ratios and acceptance rates (which is what the editors at US News did when they started their university ranking project); evaluating an influencer’s trustworthiness by the number of followers she has (thereby creating unintended consequences, as described in the New York Times investigative piece “The Follower Factory”); using creditworthiness to screen potential new corporate hires. And more… These examples come from a fantastic book by math-PhD-data-scientist turned activist Cathy O’Neil called “Weapons of Math Destruction”. If you don’t have the time or the inclination to read the book, Cathy also distils the essence of her argument in a TED talk, “The era of blind faith in big data must end”.
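To make the mechanics of proxying concrete, here is a deliberately simplistic, hypothetical sketch in TypeScript (the data structure, weights and thresholds are all invented for illustration, not taken from any real scoring system): a “teacher effectiveness” score computed solely from students’ test-score gains. Notice how every cut-off and weight is a decision that someone had to make.

// Hypothetical illustration of "proxying": a qualitative notion
// ("teacher effectiveness") is replaced by a quantifiable stand-in
// (students' year-on-year test-score gains). The weights and cut-offs
// below are arbitrary choices that someone had to make.

interface StudentRecord {
  scoreLastYear: number;  // standardised test score, 0-100
  scoreThisYear: number;  // standardised test score, 0-100
}

function teacherEffectivenessProxy(students: StudentRecord[]): number {
  if (students.length === 0) return 0; // another silent, consequential choice

  // The proxy: average score gain, rescaled to 0-100.
  const gains = students.map(s => s.scoreThisYear - s.scoreLastYear);
  const meanGain = gains.reduce((a, b) => a + b, 0) / gains.length;

  // Arbitrary design decision: a +10 point average gain counts as "excellent".
  return Math.max(0, Math.min(100, (meanGain / 10) * 100));
}

// Everything the score cannot see (class composition, support at home,
// curriculum changes, the quality of the teacher-student relationship...)
// simply drops out of the picture.
const demoClass: StudentRecord[] = [
  { scoreLastYear: 62, scoreThisYear: 68 },
  { scoreLastYear: 75, scoreThisYear: 73 },
];
console.log(teacherEffectivenessProxy(demoClass)); // a single number stands in for "effectiveness"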

While all of the above sounds like a lot of work, there is data that is just too fuzzy to be structured and too complex to be proxied. So the second way to treat unstructured data is quite simple: abandon it. Forget about it! It never existed. Job done, problem solved. While this is convenient, of course, it leaves out A LOT of important information about the social, especially because a major part of the qualitative data produced in the social realm falls into this category. It also leaves out the delicate but essential qualitative relational data that weaves the fabric of living ecosystems. So in essence, after the proxying and the pruning of qualitative data, it is easy to see how the so-called “knowledge” that algorithms produce is a rather poor reflection of social reality.

But (and that’s a big but), algorithmic technology is very attractive, because it makes decision-making convenient. How so? By removing uncertainty (of course, I should say by giving the illusion of removing uncertainty). How so? Because it predicts the future (of course, I should say by giving the illusion of predicting the future). Algorithmic technology applied to the social is essentially a technology of prediction. Shoshana Zuboff describes this at length in her seminal 2019 book “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power”. If you do not have the stomach to read through the 500+ pages, just search “Zuboff Surveillance Capitalism” and you will find a plethora of interviews, articles and seminars she has given since its publication. (Just do me a favour and don’t use Google and Chrome to search; switch to cleaner browsers like Firefox and search engines like DuckDuckGo.) She clearly and masterfully elucidates how Google’s and Facebook’s money machines rely on packaging “prediction products” that are traded on “behavioural futures markets”, which aim to erase the uncertainty of human behaviour.

There is a lot more to say on this (and I may do so in a later post), but for now, suffice it to say that just as the regenerative processes of nature are being damaged by mechanistic human activity, life-enhancing tacit ways of knowing are being submerged by the datafied production of knowledge. While algorithmic knowledge creation has its place and usefulness, its widespread use overshadows and overwhelms more tacit, warm, qualitative, embodied, experiential, human ways of knowing and being. The algorithmisation of human experience is creating a false knowledge of the world (see my 3-minute presentation at TEDx in 2021).

This increasing lopsidedness is problematic and dangerous. Problematic because while prediction seems to make decision-making more convenient and efficient, convenience and efficiency are not life-enhancing values. Furthermore, prediction is not understanding, and understanding (or meaning-giving) is an important part of how we orient ourselves in the world. It is also unfair, because it creates massive asymmetries of knowledge and therefore a massive imbalance of power.

It is dangerous because while the algorithmic medium is indeed revolutionary, the ideology underlying it is dated and hazardous. The global issues and the potential for planetary annihilation that we are facing today arose from a reductionist mindset that sees living beings as machines and a positivist ideology that fundamentally distrusts tacit aspects of the human mind.

We urgently need a pendulum shift to rebalance algorithmically-produced knowledge with warm ways of knowing in order to create an ecology of knowledge that is conducive to the thriving of life on our planet.

Beware of The Dark Patterns of Online Design

If you do not know what they are, watch this very informative talk by designer Sally Woellner at TEDx Sydney in 2022.

She takes us through four Dark Patterns (DP) commonly used in user experience (UX) design.

Dark Patterns are design tricks used when designing online interfaces to manipulate our behaviour in ways that serve the agenda of the website owner (buy what we do not need, spend more than we intended to, release more information about ourselves than is necessary, etc.) but not that of the user.

Some years ago, my research led me to look into how design manipulates behaviour. This is especially critical in online environments because they are completely mediated, i.e., we do not have direct access to “things” as we do in the physical environment. In the physical world, you can see, touch and sit on a chair, or hold a cup in your hand. That is not the case when you are online. I do not know if you have ever thought about it, but the only direct experience you have when online is through an interface. HOW that interface is designed is not neutral. And since most online environments we dwell in are, in one way or another, trying to get something from us (money, data, etc.), there is a very strong incentive to play on the biases of the human mind to direct users’ behaviours.

I will not go into the background theories of why Dark Patterns “work” here (that will be for another post). I want to present some useful content that will help you understand and detect them, because when you know about them, you can recognise them and are less likely to fall for them.
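To make this tangible, here is an entirely hypothetical TypeScript sketch of a signup dialog that combines two common Dark Patterns: pre-ticked consent boxes and guilt-laden decline wording (“confirmshaming”). The copy, class names and defaults are invented for illustration and do not come from any real site.

// Hypothetical sketch of two common dark patterns in a signup dialog.
// The copy and defaults are invented for illustration only.

interface ConsentOption {
  label: string;
  preChecked: boolean;   // "sneaking": the opt-in is already ticked for you
}

function renderSignupDialog(options: ConsentOption[]): string {
  const checkboxes = options
    .map(o => `<label><input type="checkbox" ${o.preChecked ? "checked" : ""}/> ${o.label}</label>`)
    .join("\n  ");

  return `
  <h2>Create your account</h2>
  ${checkboxes}
  <!-- Visual hierarchy as a nudge: the accepting option is loud, the declining one faint -->
  <button class="primary huge">Yes, sign me up!</button>
  <!-- "Confirmshaming": declining is framed as a character flaw -->
  <a class="tiny grey" href="#">No thanks, I don't care about saving money</a>`;
}

console.log(renderSignupDialog([
  { label: "Send me marketing emails", preChecked: true },            // default serves the site, not the user
  { label: "Share my data with trusted partners", preChecked: true },
]));

The point of the sketch is that none of this is accidental: each default and each line of copy is a deliberate design decision made to steer the user.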

Today, there is an abundance of online content explaining what they are and how to spot them. I first came across the name Dark Patterns in the mid-2010s, when I found a video by UK user experience practitioner Harry Brignull and his site Darkpatterns.org. It is worth checking this site (and the one below)! You will be amazed at the ingenuity of some of those patterns, which have been arranged into categories by type. Some have interesting names, such as “Confirmshaming”, “Roach Motel” or “Privacy Zuckering”, reflecting their unpleasant nature (and the elevation of Zuck to Greatest Villain of the early 21st Century?). He has collected 400 of them in his Hall of Shame. The biggest offenders? Google, Facebook, Amazon and LinkedIn (are you surprised?)

How Dark Patterns trick you online (2018) is a good introduction. Also read the comments; they make you realise how deep the problem runs because of a lack of awareness.

Harry Brignull also recommends this website: https://darkpatterns.uxp2.com. This group of UX researchers at Purdue University investigates how making Dark Patterns more recognisable can lead to a more ethical and socially responsible UX practice. They also compile real examples of Dark Patterns on websites that you probably use (check the “corpus search” tab). You can also report some Dark Patterns you have come across. Below is a SlideShare summarising their work and findings.


The Dark (Patterns) Side of UX Design, from Colin Gray (SlideShare).

Datafied. A Critical Exploration of Knowledge Production in The Digital Age (PhD)

This is a short abstract of my PhD research. I will post more details in the coming days and weeks.

I first look at the epistemological processes behind datafied knowledge and contrast them with the processes of tacit knowledge production. I extract five principles of tacit knowledge and contrast them with five principles of datafied knowledge, and I contend that datafied knowledge is founded on a reductionist ideology, a reductionist logic of knowledge production and reductionist data, and therefore produces a reductionist type of knowledge. Instead of helping us to understand the world we inhabit in more systemic, holistic and qualitative ways, it relies essentially on quantitative, disembodied, computationally structured, computer-ready data and algorithmically optimised processes.

Through the filter of Walter Benjamin’s work “The Arcades Project”, I argue that datafication (defined as the quantification of the qualitative aspects of human experience) is a Phantasmagoria, a dream image, a myth, a social experience anchored in a culture of commodification. The digital production of knowledge is driven by a need to reduce uncertainty and increase productivity and efficiency. It essentially serves a predictive purpose. It does not help us to understand the intricate, beautiful, fragile, qualitative, embodied experience of being alive in a deeply interconnected and interdependent world, an experience that, to a great extent, defines humaneness and life in general. In this sense, datafied knowledge is hostile to life.

Finally, I call for a rebalance between tacit and datafied ways of knowing, and a shift to a more regenerative ecology of knowledge based on the principles of living systems.

Helene Liu – PhD Thesis Visual Map

Identifiers. You Are Being Watched Online

When you browse the web, you probably focus on the content that you come across. However, built within the digital architectures of the web are invisible little scribes that record what you do, where you go, and how you behave as you browse. Those little digital scribes are called identifiers. What exactly can they know about you as you live your life online? And why is it important?

This website tells you what identifiers you leave behind when you browse. Click on the link below and it will show you a long list of all the identifiers that any website you visit can find out about you: your location, your device, and much more.

https://www.deviceinfo.me

Why should you care? Because all these different data points are then used to create a “fingerprint” of your web browser, allowing the rest of your web activity on that same browser/device to be trackable.

I checked, and the results show that it is possible to know my device type and model, operating system, browser, IP address (and whether I am using a VPN), country, ISP and server names, and connection type. I expected that. What I did not expect was for the system to detect “fingerprinting resistance”, as well as details about my hardware (number of webcams and microphones, which graphics card, RAM, battery status, number of fonts) and my browser (extensions, content filtering, whether cookies are enabled, etc.). It also checked for “live”, i.e., changing operations as I was using the computer: live device motion (checked at intervals of 500ms), rotation and acceleration including gravity, live page visibility changes, and live screen orientation and resolution changes. It could see the live current scroll position, the keys pressed and the mouse position on the screen, amongst others.
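To make the mechanism concrete, here is a minimal TypeScript sketch of how fingerprinting works, assuming it runs in a browser page. The handful of signals and the tiny hash below are simplified illustrations; real fingerprinting scripts combine far more signals (canvas and audio rendering among them), but the principle is the same: attributes any page can read without permission are combined and hashed into a single identifier.

// Minimal sketch of browser fingerprinting: read a few attributes any
// page can access without permission, concatenate them, and hash the
// result into a stable identifier. Real trackers use many more signals.

function collectSignals(): string[] {
  return [
    navigator.userAgent,
    navigator.language,
    String(navigator.hardwareConcurrency),
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(new Date().getTimezoneOffset()),
  ];
}

// Tiny non-cryptographic hash (FNV-1a), enough to illustrate the idea.
function fnv1a(input: string): string {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0).toString(16);
}

const fingerprint = fnv1a(collectSignals().join("||"));
console.log("This browser's fingerprint:", fingerprint);
// The same value reappears on every site that runs the same script,
// with no cookie stored and nothing for the user to clear.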

We can’t stop browsing, so what to do? Short of not being online, it is not possible to completely avoid surveillance. Use a good VPN, which means a paid-for VPN (the free ones most probably sell your data) that is well regarded in privacy circles. I use ProtonVPN, from the same company as ProtonMail, one of the most privacy-conscious email providers on the planet. Use browsers that are known for their security and privacy features, such as Firefox (do NOT use Microsoft Edge or Google Chrome). Regularly clear your cache and cookies (I do it several times a day). Install browser extensions that give you some control over what happens when you browse, such as NoScript, Privacy Badger, HTTPS Everywhere, CanvasBlocker, uBlock Origin and Facebook Container. Set your browser’s privacy protection (in settings) to strict.
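For readers comfortable with editing configuration files, much of the above can also be pinned down in a Firefox user.js file (Firefox’s own preferences format, read when the profile starts). This is only a partial, illustrative sketch: the preference names below are standard Firefox prefs to the best of my knowledge, but they change between versions, so check each one in about:config before relying on it.

// Partial sketch of a hardened Firefox profile (user.js).
// Place in your Firefox profile folder; verify each pref name in about:config first.
user_pref("browser.contentblocking.category", "strict");   // Enhanced Tracking Protection: strict
user_pref("privacy.trackingprotection.enabled", true);
user_pref("privacy.resistFingerprinting", true);            // blunt many fingerprinting signals
user_pref("network.cookie.cookieBehavior", 5);              // isolate cookies per site (Total Cookie Protection)
user_pref("privacy.sanitize.sanitizeOnShutdown", true);     // clear data when Firefox closes
user_pref("privacy.clearOnShutdown.cache", true);
user_pref("privacy.clearOnShutdown.cookies", true);
user_pref("privacy.donottrackheader.enabled", true);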

And spend a bit of time learning how to protect yourself online. There are now MANY good publications and articles in tech-oriented magazines such as WIRED, MIT Technology Review and others that describe how to set up some level of protection online. I know one can easily feel disempowered today in the face of the incongruous levels of unrestricted digital surveillance, but do not give in to despair. Technology is invented every day to help us, and it IS possible to avoid a reasonable number of the surveillance features that are built into our digital architectures. If you are really into privacy and security, check Michael Bazzell’s book “Extreme Privacy: What It Takes To Disappear”, 4th ed., 2022. There is an updated eBook on mobile devices. Check his website to learn more (I do not get any commission; I am recommending him because his book is phenomenal, and he is very knowledgeable).

[HOW TO] Protect From Data Theft? (Privacy)

Many people ask me how to protect themselves from data theft by Big Tech. This is a really important question, so I asked a digital security expert friend of mine. This is his (unedited) reply. Some of these points are more directly actionable than others. I will regularly add to the list.

Use a *trustworthy* VPN for all devices like Mullvad or ProtonVPN (or tor/I2P for truly sensitive things) with reliable DNS protection (but also aware VPN has own risks, strictly only to mask your true IP + mask your web activity from your ISP + provide more secure internet when connected to public or unsafe networks).

Use Linux on desktop (or any open source privacy friendly version, just avoid MacOS, Windows and ChromeOS).

Use de-googled Android (GrapheneOS) on mobile. Neither Androids nor iPhones are safe. A mobile phone is the most invasive and privacy-leaking device in our lives.

Delete all social media and big tech accounts.

Replace the services/apps one uses with open source/libre software alternatives. Email, contacts, calendar, cloud storage, apps on phone etc… Especially avoid any products or services by big tech (e.g. Google docs, Gmail, drive, youtube, search, Chrome, WhatsApp etc…).

Use privacy friendly web browser (recommend “brave” browser) with disabled telemetry and tracking blocking and fingerprinting resistance settings set to maximum.

Use privacy friendly search engine (duckduckgo is OK), do not use Google search, Microsoft Bing, etc.

Understand how internet and web infrastructure works (networking basics) as this is key to knowing how to manage own data trail and emissions. Key part is understanding that every single action taken in relation to internet or digital anything leaves a permanent record and digital trail of breadcrumbs. So to know how to get by using alias information when possible, and to be extremely judicious in providing any true personal data in any digital context. Doesn’t matter that one uses the most private and secure computer system if they just go and share their personal life story and details by posting such on the internet. Disclose as little as possible online, and if needed use false/alias data.

Use end to end encrypted and metadata minimising methods of online communication (e.g. Signal is not perfect but probably best balance between privacy/security and usability/widespread use).

Generally opt to use software and services that rely on well-implemented encryption technology and *end to end* and *zero knowledge* encryption wherever possible.

Do not use regular phone call or SMS (use secure WiFi call or message via secure apps instead).

Airports at Christmas: Why AI Cannot Rule The World

It is the week leading up to Christmas. I’m at the airport waiting for someone to arrive and, as I observe what’s happening here, I can’t help thinking about the place we have allowed digital technology to take in our lives. In 2022, AI pervades decision-making in all areas of human experience. What this means is that the deepest qualitative dimensions of being alive on this planet are being reduced to computer data; those data are then fed to algorithms designed by computer scientists, which have become the ultimate decision-makers in how life is lived on planet Earth.

My contention is that the blind faith that we, the “moderns”, have in algorithms and in what we call AI (often without really knowing what that means) is misplaced. There is a place for algorithmic decision-making, but we need to learn to value the qualitative, embodied, experiential dimension of being alive in a human body, with a human mind.

To understand why AI cannot rule the world, go to the arrivals level of an airport at Christmas time, and observe. See the reunions between people who love each other, who have missed each other, the smiles on their faces, the tears of joy at finding each other after weeks, months or even years of absence, the excitement, the laughter, the warm hugs… And you will realise why the cold logic of AI can’t capture the reality of the experience of being human, of life.

I have little patience for those who profess that the laws of pure logic rule the social and that we can sort everything out with cold data. What about the rich, warm, relational dimension of being human? Those people go around claiming that logic and science are all we need; the irony is that they fail to see that they are surrounded by networks of other people who provide love, care and warm attention.

Feminine & Masculine Ways of Knowing – A Deep Imbalance

The following post is inspired by Safron Rossi’s interview on her book about Carl Jung’s views and influence on modern astrology. In the interview, she says:

“One way to approach this point (Jung’s unique contribution) is why is Jung’s work significant in the field of psychology. And for me, I would say that it has to do with the way he attempted to meld together the wisdom of the past with modern psychological understanding and methods of treatment.

The Jung psychology is one that grows organically from traditional understandings, particularly in the realms of spirituality, religion, mythology, and comparative symbolism. And in an era where psychology was becoming increasingly behavioural and rationalistic, Jung insisted on the importance of a spiritual life because that has been the core of the human experience from time immemorial. Why all of a sudden would the spiritual life really not be so important? It’s a really big question.”

What she mentions is central to the argument of my PhD. Suddenly, in the 19th century, at the time of the Industrial Revolution, the tacit experience and understanding of living became not so important, or rather, not so reliable as a way of knowing. The belief that emotions cloud the (rational) mind and that the machine is more reliable than humans because it has no messy emotions became the mainstream ideology.

But tacit knowing (i.e., the qualitative knowing that results from embodied experience, which can also be called intuitive knowing) is a fundamentally feminine way of knowing. With the Industrial Revolution, it was replaced by faith in masculine ways of knowing, so-called scientific but in fact more “mechanistic” than “scientific”.

As Michael Polanyi argues in his books Personal Knowledge (1958) and The Tacit Dimension (1966), tacit knowing is fully part of science. What I call the statistical mindset is a reductionist, mechanistic way of knowing that puts its faith solely in mechanistic, explicit and, importantly, measurable knowledge.

Here, Rossi says that Carl Jung gave (feminine) tacit knowing a place in modern psychology at a time (the time of the Industrial Revolution) when disciplines such as psychology and sociology were overwhelmed by the statistical mindset that values measurability above all. Examples of this are the behavioural school in psychology and, in sociology, Auguste Comte and positivism.

In Europe, the 19th century was the century when women were believed to be too irrational to make important decisions (like voting, for example), and it was also the century when purely statistical, measurable pseudo-sciences (e.g., the dark science of eugenics) were born; it was the time when the factory line became the model for everything: mass production, but also the health system, the economy, psychology, education, etc.

It is important to realise that the rationalisation of the social sciences was not in and of itself a “bad” thing. In a way, it brought some degree of rigour to the field and, more importantly, allowed experimentation with what can and cannot be measured. Walter Benjamin talked about the Phantasmagoria of an age, i.e., the belief system that underlies the development of thought during that period. Measuring, fragmenting the whole into parts, analysis, and control over the environment were all part of the phantasmagoria of the Industrial Revolution and the Modern Age. All disciplines went through this prism (including Design; I may do a post on this later). Jung melded WISDOM into MODERN PSYCHOLOGY, which was very unusual at the time.

Statistical knowledge is predictive knowledge. We use statistics to know something about the future, like the likelihood of a weather event, market movements, or the usage of public transport. It is the best knowledge we have to OPTIMISE, when the values of EFFICIENCY and convenience are paramount (as in urban or business planning, for example). It is founded on the masculine-principle trait of linear logic (if A and B, then C), and on the equally masculine-principle trait of goal orientation (Jung’s definition of masculinity: know what you want and how to go and get it).

This is not in and of itself bad or good; there is no value judgement here. Again, it is not a matter of superiority (which is a masculine concept, i.e., fragmenting and analysing by setting up hierarchies), but of BALANCE. Today, we live in a world (more specifically, the geographies at the centre of power) where feminine ways of knowing, which emphasise regeneration, intuitive insights, collaboration, inter-dependencies and relationality, are not trusted and are suppressed, often in the name of science.

Living systems function on the principles of feminine ways of knowing. But it is not really science itself that smothers feminine ways of knowing, it’s the reductionist mechanistic mindset (and the values of efficiency and optimisation) which is applied to areas of life and of living experience where it has nothing to contribute.

As I argue in the PhD, while digital technologies are indeed revolutionary in terms of the MEDIUM they created (algorithmic social platforms), from the point of view of the belief system that underlies them, they in fact perpetuate an outdated mindset (described above) which serves the values of efficiency and optimisation with a disregard for life.

Datafied. A Critical Exploration of the Production of Knowledge in the Age of Datafication

This is the abstract of my PhD thesis, submitted in August 2022.

As qualitative aspects of life become increasingly subjected to the extractive processes of datafication, this theoretical research offers an in-depth analysis of how these technologies skew the relationship between tacit and datafied ways of knowing. Given the role tacit knowledge plays in the design process, this research seeks to illuminate how technologies of datafication are impacting designerly ways of knowing and what design can do to recalibrate this imbalance. In particular, this thesis is predicated on four interrelated objectives: (1) to understand how the shift toward the technologies of datafication has created an overreliance on datafied (i.e., explicit) knowledge; (2) to comprehend how tacit knowledge (i.e., designerly ways of knowing) is impacted by this increased reliance; (3) to critically explore technologies of datafication through the lens of Walter Benjamin’s work on the phantasmagoria of modernity; and (4) to discover what design can do to safeguard, protect and revive the production of tacit knowledge in a world increasingly dominated by datafication.

To bring greater awareness to what counts as valid knowledge today, this research begins by identifying the principles that define tacit knowledge and datafied ways of knowing. By differentiating these two processes of knowledge creation, this thesis offers a foundation for understanding how datafication not only augments how we know things, but also actively directs and dominates what we know. This research goes on to examine how this unchecked faith in datafication has led to a kind of 21st-century phantasmagoria, reinforcing the wholesale belief that technology can be used to solve some of the most perplexing problems we face today. As a result, more tacit processes of knowledge creation are increasingly being overlooked and side-lined. To conclude, the discussion offers insights into how the discipline of design is uniquely situated to create a more regenerative relationship with technology, one that supports and honours the unique contributions of designerly ways of knowing.

Fundamental principles framing Grounded Theory are used as a methodological guide for structuring this theoretical research. Given the unprecedented and rapid rate at which technology is being integrated into modern life, this methodological framework provided the flexibility needed to accommodate the evolving contours of this study while also providing the necessary systematic rigour to sustain the integrity of this PhD.

Keywords: datafication, tacit knowledge, phantasmagoria, regeneration, ecology of knowledge

Chris Jones – Designing Designing

A few words from John Thackara (who wrote the afterword to Chris Jones’s “Designing Designing”) on Chris Jones’s mission and philosophy (the full post can be found on Thackara’s blog).

As a kind of industrial gamekeeper turned poacher, Jones went on to warn about the potential dangers of the digital revolution unleashed by Claude Shannon.

Computers were so damned good at the manipulation of symbols, he cautioned, that there would be immense pressure on scientists to reduce all human knowledge and experience to abstract form.

Technology-driven innovation, Jones foresaw, would undervalue the knowledge and experience that human beings have by virtue of having bodies, interacting with the physical world, and being trained into a culture.

Jones coined the word ‘softecnica’ to describe ‘a coming of live objects, a new presence in the world’. He was among the first to anticipate that software, and so-called intelligent objects, were not just neutral tools. They would compel us to adapt continuously to fit new ways of living.

In time Jones turned away from the search for systematic design methods. He realized that academic attempts to systematize design led, in practice, to the separation of reason from intuition and failed to embody experience in the design process.

All of the above rings very true today: the reductionist approach to knowledge, the general disdain for the richness of human knowledge and experience, the widespread contempt for embodied knowledge, the radical separation of reason and intuition, the hidden shaping of a new belief system around the superiority of rational machines, the invisible but violent bending of human-friendly ways of living to fit machine-dominated new ways of living.
