This post is a bit of a mixed bag: mostly about technology and fragility, a bit about AI, and a tiny bit about politics. You've been warned.
Back in communist and then early-capitalist Poland, where I grew up, one could often get used Soviet equipment such as optics and power tools. Back in the day these things were relatively cheap and had a reputation for being very sturdy and essentially unbreakable (often described with the pseudo-Russian phrase "gniotsa nie łamiotsa", which essentially meant "you can bend it but it won't break"). There are multiple possible reasons why that equipment was so sturdy. One hypothesis is that Soviet factories could not control the quality of their steel very well, so the designers had to build additional margin into their designs; when the materials actually turned out to be of high quality, such over-engineered parts would then be extra strong. Another explanation is that some of that equipment was ex-military and therefore designed with an extra margin. Either way, these often heavy and over-engineered products stood in contrast, in the early 90's, to modern, optimized, Western-made things. Western stuff was obviously better designed and optimized, and lighter, but, as soon became apparent, a lot less sturdy. My dad still has several post-Soviet machines, while he has used up and replaced numerous modern Western-made power tools.
Optimized vs. fragile
Obviously one could argue that the Soviet tools were inferior: heavier than necessary, using more raw material than necessary, and (if it weren't for the strange economics of the collapsing Soviet empire of the late 80's) they would have had to be more expensive. But the undeniable fact is that many of these objects still work fine after 30+ years, while many more-optimized counterparts failed long ago and had to be replaced. There is a clear tradeoff: build more expensive and in some sense less optimal parts in order to make them more resistant to unknown and unanticipated perturbations (and the flow of time), or build things to a very well designed spec envelope, fitting that envelope precisely. The latter will be lighter, possibly cheaper to manufacture, in many ways better, but... will fail as soon as the usage conditions go outside of the tight design limits.
Information and predictability
All these considerations eventually boil down to predictability and information. If we know exactly (or can predict) what kind of stress a given part will experience in its life, and we know exactly the specs of the material it is made of, we can manufacture a very optimal piece that has exactly as much strength as it needs. As soon as uncertainty lurks in any of these areas, things get complicated. If we cannot characterize very well the conditions in which the part will be used, or cannot guarantee the parameters of the materials used, or cannot guarantee the precision of the manufacturing process, the part needs to be made with a margin of extra strength. This is why e.g. military equipment tends to be very sturdy and robust: the battlefield is a rather unpredictable environment, and things that operate in those conditions need to be stronger than usual, even if that means they will be more expensive, heavier, or less pretty (or worse by whatever other measure of optimality). The same goes for other equipment; a distinction is often made between military grade, commercial grade, consumer grade, etc. These groups typically vary in how much stress a given product can take before failure and how much a given customer group is willing to pay for that extra sturdiness.
Technology can be fragile
This leads us to recent times, in which technology (particularly computing) has led to enormous progress. We now enjoy many wonders, such as GPS, smartphones, cloud computing, digital photography, etc. All these advancements share one common feature: they are highly optimized solutions. What I'd like to focus on in this post is the question: how fragile are these new products/services? I'm not saying up front that all technology is necessarily fragile; the Internet itself is an example of a design in which robustness to communication problems was a primary consideration (for those who don't remember, its predecessor, the ARPANET, was built by ARPA and is often said to have been designed to survive a nuclear attack). In that sense the Internet is extremely robust. But today we are being introduced to many other instances of technology, many of which do not follow the decentralized principles that guided the early Internet, but are instead highly concentrated and centralized. Centralized solutions are almost by definition fragile, since they depend on the health of a single concentrated entity. No matter how well protected such a central entity is, there are always ways for it to be hacked or destroyed.
Testing for fragility
The simplest test to reveal whether something is fragile is to perturb it and see what happens. Obviously many of the things I'd like to talk about here cannot be tested directly, but it is often enough to run a thought experiment: question some key assumption about the environment a system operates in, and imagine what would then happen to that system. Such perturbations can be rare or even unheard of; nevertheless it is a worthwhile exercise.
Example: Self-driving cars can be very fragile
Imagine a society where self-driving cars have been widely adopted and hardly anybody can still drive. Let's assume that, as promised by the proponents of that technology, these cars are safe and generally everybody is happy in a utopian kind of way.
Now imagine a disaster happens; let's consider several example cases:
- Say the fleet of cars depends on localization via a satellite system such as GPS. One day a powerful solar storm comes and fries all the satellites (it may also be an enemy-state attack or whatever else). It may take years to rebuild the satellite fleet. What happens to that society? Well, the majority of its transportation infrastructure becomes useless, and without transportation the society quickly descends into chaos. Fragile.
- Say the cars depend on very precise maps of the position of every road and object. One day an earthquake strikes, displacing giant pieces of land by several meters (as earthquakes often do). Aside from the broken infrastructure, which may need to be circumvented in a novel way (at least until the primary infrastructure is rebuilt months or years later), the autonomous vehicle fleet suddenly cannot localize. Fragile.
- Say the fleet depends on connectivity with a central computer (a cloud server). One day that server gets hacked/burns down/whatever else. The fleet of cars dies. Fragile.
- Say the entire fleet runs the exact same version of the software. One day a hacker group discovers a flaw in that software that allows them to, say, take over a car remotely. The entire fleet is then prone to a massive synchronized attack, performed for example by an enemy state. Fragile.
There are recurring patterns here, which are in fact easy to spot in many of today's technological offerings. In general any solution that:
- has many identical copies
- depends on communication with a centralized entity
- depends on a single source of energy
- crucially needs a particular modality of data available (such as GPS)
- crucially depends on a single assumption about the infrastructure or the world in general
will be fragile. We have seen this a lot with botnet attacks on operating systems (since we now have millions of identical computers running the exact same software), hacker attacks on centralized databases (the Equifax breach being a prominent example at the time of writing, in late 2017), and bricked hardware manufactured and sold by a company that later failed and could not sustain the necessary central infrastructure (e.g. Revolv) - which is why I generally like to avoid subscription-based solutions.
A centralized entity will always be more fragile than a distributed one: a damaging disaster can always happen locally (such as somebody detonating a nuke), but it is a lot less probable that the same kind of disaster will strike in many places at the same time - everyone serious about their data backups knows this! A large number of identical copies will always be more prone to attacks than a diverse population, since a single attack vector opens up access to many millions of entities, and hence the incentive to construct such an attack is vastly stronger (even if the technical complexity of the attack is enormous, as is the case with many contemporary hacks).
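To put rough numbers on this intuition, here is a minimal back-of-the-envelope sketch (my own illustration; the failure probabilities are made up). If each of n independent sites fails with probability p over some period, a service that survives as long as any one replica survives only fails with probability p^n:

```python
# Back-of-the-envelope sketch: the numbers are made up for illustration;
# the point is the exponent, not the exact probabilities.

def p_total_failure(p_site: float, n_replicas: int) -> float:
    """Probability that all n replicas fail at once, assuming each
    fails independently with probability p_site."""
    return p_site ** n_replicas

# One central data center that fails with probability 1% per year...
central = p_total_failure(0.01, 1)
# ...versus three independent replicas: all must fail to cause an outage.
replicated = p_total_failure(0.01, 3)

print(f"central: {central}, replicated: {replicated:.0e}")
```

Three replicas take the yearly outage probability from 1% down to roughly one in a million - provided the failures really are independent, which is exactly what correlated disasters (or identical software on every replica) take away.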
Unfortunately, decentralized and diverse will be more expensive, since infrastructure needs to be replicated and various mutations of the code need to be supported simultaneously. It is easy to trade resistance to a hypothetical future disaster (which may or may not happen) for tangible and real profits today. This will generally pay off - until, obviously, the disaster happens and all the profits get erased.
Today's AI is very fragile
I would not be myself if I did not relate these thoughts to AI (artificial intelligence). The fact is, today's artificial intelligence is still extremely fragile, in the sense that it fails rather catastrophically as soon as the data falls outside of the domain the system was trained on. So if you think that a self-driving car will invent some clever way of handling a new situation (a road flooded with mud, soil liquefied by an earthquake, etc.) - anything the designer did not anticipate or deemed too improbable to care about - think again. That fragility is in striking contrast to the essence of human intelligence, which is exactly the opposite: the ability to survive in, and take advantage of, an ever-changing and unpredictable environment (this applies to many other animals too). If you have doubts about that, talk to the nearest roboticist, watch the 2015 DARPA Robotics Challenge videos, or read my other posts.
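This failure mode is easy to reproduce in miniature. The toy sketch below (my own illustration, not anything from a real self-driving stack) fits a flexible model to data from a narrow training domain; it is accurate inside that domain and wildly wrong just outside of it - the same qualitative behavior, writ small:

```python
import numpy as np

# Toy sketch: fit a flexible model only on a narrow "training domain".
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(2 * np.pi * x_train)

# A degree-9 polynomial fits the data on [0, 1] very well...
coeffs = np.polyfit(x_train, y_train, deg=9)
in_domain_error = abs(np.polyval(coeffs, 0.5) - np.sin(np.pi))

# ...but just outside the training domain it is wildly wrong
# (the true function value at x = 2 is sin(4*pi) = 0).
out_of_domain_error = abs(np.polyval(coeffs, 2.0) - 0.0)

print(f"in-domain error:     {in_domain_error:.2e}")
print(f"out-of-domain error: {out_of_domain_error:.2e}")
```

A polynomial is of course a caricature of a deep network, but the lesson carries over: interpolation within the training distribution can be excellent while extrapolation beyond it fails without warning.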
Distributed and decentralized is a lot less fragile
This is somewhat intuitive, and very much visible in the popularity of blockchain technology - a distributed and cryptographically self-proving database. The best-known usage of this technology is Bitcoin, a decentralized database of financial transactions, often called a digital currency or cryptocurrency. Although I'm not a big fan of Bitcoin in particular (in the sense that I don't invest in this particular asset), it is hard to ignore the primary principles of the endeavor: fully decentralized and distributed, with a strong emphasis on consistency and security. This is the exact opposite of highly centralized contraptions protected by flimsy security, such as the infamous Equifax.
The idea of questioning the basic assumptions under which a given system works is not new, and essentially boils down to Murphy's law: if things can go wrong, at some point they will. If there is a big centralized database of great value, it is safe to assume it will be hacked. If there are millions of copies of a particular entity of high potential value, it will get hacked. If a crucial piece of infrastructure depends on a single centralized unit/modality/data source, that entity will be the primary target of any enemy state or terrorist organization.
Another aspect of "fragilizing" is the way it composes in large systems. Whereas robustness tends to mute small perturbations in various parts of a system, fragility amplifies them into a cascading failure. In a large system of interacting parts, when one element of the chain fails in some way, it will typically perturb other parts. Similarly to the epidemic threshold, if those other parts are relatively resistant (robust), the perturbation will generally die out exponentially and the large system will remain healthy. However, if the "percolation threshold" is crossed (meaning effectively every part of the system is fragile and will break under a small perturbation), even a small event in the periphery will grow explosively and take down the entire system.
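This threshold behavior can be demonstrated with a tiny branching-process simulation (my own sketch; the neighbor count and failure probabilities are made up). Each failure perturbs a fixed number of neighboring parts, each of which fails with some probability. When the expected number of new failures per failure is below 1 the cascade dies out quickly; above 1, it tends to engulf the whole system:

```python
import random

def cascade_size(p_fail: float, neighbors: int, cap: int = 10_000) -> int:
    """Branching-process cascade: every failed part perturbs `neighbors`
    other parts, each of which fails independently with probability p_fail.
    Returns the total number of failures, capped so runaway cascades stop."""
    failures = frontier = 1
    while frontier and failures < cap:
        frontier = sum(1 for _ in range(frontier * neighbors)
                       if random.random() < p_fail)
        failures += frontier
    return failures

random.seed(42)
# Robust parts: 4 * 0.15 = 0.6 expected new failures per failure (< 1).
subcritical = [cascade_size(0.15, 4) for _ in range(200)]
# Fragile parts: 4 * 0.35 = 1.4 expected new failures per failure (> 1).
supercritical = [cascade_size(0.35, 4) for _ in range(200)]

print("largest subcritical cascade:", max(subcritical))
print("system-wide supercritical cascades:",
      sum(s >= 10_000 for s in supercritical), "out of 200")
```

Below the threshold, even 200 trials never produce more than a handful of failures; above it, a large fraction of the trials hit the cap, i.e. take down the entire system. The transition between the two regimes is sharp, which is exactly why a system drifting toward universal fragility can look fine right up until it doesn't.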
It is hard not to be under the impression that, with technology more and more widely adopted and everything in the economic supply chain more and more centralized and optimized, we are becoming more prone to such systemic cascading failures, whether economic or technological. And this, in my opinion, is perhaps the untold story behind the "anti-elite" or "anti-progress" sentiments in contemporary society, evident in recent election results and in the growing popularity of solutions such as Bitcoin or movements such as the "right to repair" initiative. I don't think people are explicitly against technology or progress, but people are intuitively against fragilizing - against being enclosed in a box that depends on the health of some giant entity they have zero control over. The so-called liberal, progressive elites are fascinated with progress and push forward solutions which are arguably more optimal (global markets, offshored services, subscription-based services, etc.), but which expose societies to dangers they have not faced before. I don't want this post to turn into a political rant, and in no way do I want to endorse any particular political movement here (especially since I don't think any political establishment has correctly identified "fragilizing" as a thing on its agenda). But at the same time I feel there might be something in that sentiment, and although I myself generally support progress, I think the issues discussed in this post are valid concerns and deserve more attention.