
>trust, black boxes, and technological understanding

This blog is primarily going to be a place to jot down thoughts I want to build upon and think about further; getting words down on the "page", so to speak, is a useful practice that I (and, I think, many others) have forsaken in recent times. Which brings me to my first post here, on the growing dissociation between the end-user and the producers of modern technologies.

I read a fascinating quote from a book I really must get around to reading some time soon, Autonomous Technology by Langdon Winner, which jumpstarted my thoughts on the matter again. The quote is as follows:

"Society is composed of persons who cannot design, build, repair, or even operate most of the devices upon which their lives depend…In the complexity of this world people are confronted with extraordinary events and functions that are literally unintelligible to them. They are unable to give an adequate explanation of man-made phenomena in their immediate experience. They are unable to form a coherent, rational picture of the whole."

"Under the circumstances, all persons do, and indeed must, accept a great number of things on faith…Their way of understanding is basically religious, rather than scientific; only a small portion of one’s everyday experience in the technological society can be made scientific…" (Winner 1977)

>trust and viruses

At first read I could only think of the simple lack of understanding of, say, how WiFi access works; everyone with a bit of tech knowledge has had to assist *that* relative before. But this great disconnect is more insidious than that. Misinformation on the modern web, how simple it is for so many to get stuck in an echo chamber, manipulated by falsehoods, convinced to hand over vast amounts of private data to corporations just to use a given platform... the list goes on and on. Without a strong understanding of the fundamentals of how these technologies work, it's a simple matter from there to be deluded by anyone with a greater understanding.

A plainly visible example of this is COVID-19 misinformation and conspiracy. In my own anecdotal experience, I can give a full breakdown of how the vaccine works to someone with no familiarity with the field and only a basic background in biology, and they still place no trust in the answer given to them.

This isn't just a matter of simple misunderstanding or poor explanation (though maybe I'm worse at explaining it than I would like to believe): even with an appeal to authority as a researcher in the field, and apparent understanding from the individual, they simply refuse to believe it. I believe this is a result of the subject being a "black box" to them for so long; entrenched beliefs founded upon this dissociation from understanding have led them to take a faith-based approach to the matter.

Why, then, would someone take a faith-based approach to something that can in theory be explained, even if the concepts are challenging to understand? I would surmise that it's as simple as a lack of education in the matter. What happens, then, when those educational deficiencies compound as simple, intuitive systems are replaced by ever more complex ones, without anyone ever achieving an understanding of how and why these systems work, and what use they are to us?

What I would argue is that many modern technologies, from the internet and the internet-capable devices most people use daily, to vaccinations and virology, have become sufficiently complex, and education has fallen sufficiently behind, that these systems are no better than a black box to the majority of people. This all but requires the layperson to place their "trust" in an externality, in an appeal to authority, to understand what they cannot, and thus receive direction from that expert on how to proceed.

Why do people go to doctors to get a diagnosis for an illness? The medical field is dense, and diagnosing any number of ailments typically takes an expert who has studied the field and reached a solid understanding of how those ailments present and how to treat them.

Similarly, why have a representative democracy? To place in office experts with an understanding of law and politics beyond that of the layperson, to make better-educated decisions that (hopefully) serve the layperson better than they could manage themselves.

But what if that trust erodes, despite the "expert" being, by all other accounts, correct? Those who placed their trust in the expert then come to regard the facts that expert presents as untrustworthy as well.

At the same time, anyone with a better understanding than the individual who has lost their "faith" in an expert can swoop in, replace that expert, and spin any lie they want. This individual is essentially a blank slate, ready to accept another expert who can explain the unexplainable to them or, even worse, simply tell them what to believe.

Fear of the unknown and poor understanding of any given subject can easily convince someone to place their faith in any number of ideas that have no real credibility behind them.

This is only exacerbated by the "black box" of communication almost everyone uses today: social networking, and the modern interpretation of communication it represents.

Recommendation algorithms are not truth-seeking; they are not "experts" with good intent; they exist to capture a user's attention and drive more use of a product. What happens when everyone communicates primarily through a medium with an unknowable, invisible filter selecting what you see, a filter not aligned with bettering the user's life and wellbeing?
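
To make that misalignment concrete, here's a toy sketch of what an engagement-first ranker boils down to. Every field name and weight below is invented for illustration, not taken from any real platform, but the incentive structure is the point: nothing in the score ever asks whether a post is true, or good for the reader.

    # Hypothetical engagement-first feed ranking, for illustration only.
    # The weights and fields are made up; the shape of the incentive is not.

    def engagement_score(post: dict) -> float:
        # Comments (arguments included) keep people on the page longest,
        # so they get the biggest weight. "Accurate" appears nowhere.
        return (2.0 * post["comments"]
                + 1.5 * post["shares"]
                + 1.0 * post["likes"]
                + 0.1 * post["watch_seconds"])

    def build_feed(posts: list[dict], limit: int = 20) -> list[dict]:
        # The user sees the output, never this sort order or its weights.
        return sorted(posts, key=engagement_score, reverse=True)[:limit]

    posts = [
        {"comments": 450, "shares": 120, "likes": 300, "watch_seconds": 600},  # inflammatory thread
        {"comments": 3, "shares": 8, "likes": 140, "watch_seconds": 45},       # careful explainer
    ]
    print(build_feed(posts)[0])  # the inflammatory post wins every time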

Our individual who has lost trust in an expert posts to their account and is bombarded with beliefs contrary to that expert: beliefs they're likely to engage with, beliefs that grab their attention and funnel them into communities of people who all echo the same sentiment and rally behind figureheads who know they're selling lies, but can easily dupe those who have been funneled through the pipeline without ever realizing it.

While I didn't want this to become a piece simply about COVID misinformation and pipelines into extremism, the current social upheaval caused by these things serves as an illustration that I believe many can relate to.

>the wider picture

So we have an issue of education and understanding, of trust in experts, and of trust in systems we don't understand. It's these systems I would like to approach next. Before we can even tackle the issue of misinformation and education, we first have to tackle the issue of communicating through mediums outside our own control, through black boxes purposely kept unknowable.

We first have to determine which systems are purposefully kept hidden from us, and we have to *STOP USING THEM*. The debate on censorship of social media completely misses the point. Why is our primary means of communicating with one another in the modern age a collection of platforms where what we see and what we send is subject to the whims of a company, an intentionally obscured algorithmic black box, or both?

The understanding of these systems required to avoid being deceived or misled is impossible when the necessary knowledge is purposefully obscured, or simply unknowable by humans, period. The latter is the case with things like the recommendation algorithms Google and Twitter both use. How can we know such a system is helpful to people if no one knows how it works?

Furthermore, how can we trust an actor that has shown that its only interest is profit to create an obscured system that is beneficial to its users first and foremost? We can't; it's utterly contradictory to do so.

To ever achieve an environment conducive to education, so that everyone might better their own circumstances as well as others', the technologies we rely upon to interact with one another must be built upon principles of usefulness to the end-user. The user's understanding of the system and how it benefits them, be it in communication or in performing a certain task, is paramount.

If the system is designed with the user in mind, transparency can only benefit both the platform and the user. But if the intent is to manipulate the user toward ends against their wellbeing, that transparency is dangerous to the creator's motives, so it is hidden.

Until these intentionally obscured systems are replaced with methods that allow open and unbiased dialogue (federated platforms such as Mastodon are a good start), I fear they will simply continue to slide down the obfuscation slope.
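
As a small illustration of the difference, here's a sketch that pulls the public timeline from a Mastodon instance over its openly documented REST API (mastodon.social is just an example instance; this assumes it exposes the standard /api/v1/timelines/public endpoint, as stock Mastodon does). There is no API key and no hidden scoring sitting between you and the feed:

    import json
    import urllib.request

    # The public timeline is served in reverse-chronological order over a
    # documented, open endpoint; any stock instance answers the same way.
    url = "https://mastodon.social/api/v1/timelines/public?limit=5"
    with urllib.request.urlopen(url) as response:
        statuses = json.load(response)

    for status in statuses:
        # Each status is plain JSON anyone can inspect for themselves.
        print(status["created_at"], status["account"]["acct"])

The point isn't the five posts; it's that the ordering rule is knowable, so there is nothing for a platform to quietly tune against its users.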

The more these systems are understood by the people who use them, the more those users understand how they're being manipulated, and the less money the platforms can make off of them; so why would the platforms ever do anything BUT obscure understanding of these systems?

We have to work to become trustworthy experts in our own right, in everything we do, or we hand over the keys to our beliefs to whoever will take them; and whoever takes those keys will often have not our best interests in mind, but their own.

But before that can ever occur, exposure of systems built to abuse their users by never allowing them to develop this understanding must be a priority.

Oct 29, 2021
