
“These related views are so good but it’s also spoiling that I start thinking less. I’m not sure if that’s really a good thing.”
– Agency plus automation: Designing artificial intelligence into interactive systems
Eve Layton believed that her mission in life was to be the vanguard–it did not matter of what. Her method had always been to take a careless leap and land triumphantly far ahead of all others. Her philosophy consisted of one sentence–“I can get away with anything.”
– The Fountainhead
Students in primary school may spell better than adult writers in the twenty-first century. The Editing component of the Language Use paper explicitly tests spelling: of twelve errors in a passage, six are spelling errors. Trained to correct spelling errors, students recognise incorrect arrangements of letters – ‘ei’ for ‘ie’, the inclusion of letters which do not belong and the omission of letters which do.
While there are often connections between how a word sounds and the letters which constitute its spelling, spelling a word is not always a straightforward and intuitive process. A synonym for ‘middle’ can be spelled ‘centre’ or ‘center’. Someone relying on the sound-letter connection because the correct spelling has not been memorised (not memorized) might be more inclined to choose ‘center’. This would be marked wrong because it is the American version of the word, whereas the British version, which is used here, takes the spelling ‘centre’.
Adults, though, do not necessarily have to be mindful of these nuances, thanks to the spell-check and auto-correct functions in word processors. Auto-correct is an example of Intelligence Augmentation (IA) – where machines “extend people’s ability to process information and reason about complex problems” – and is distinguishable from Artificial Intelligence (AI), which is “computational methods for perception, reasoning, and action” (Heer, 2019).
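To see the shape of such a function, here is a minimal sketch of how a spell-checker might suggest corrections – a toy edit-distance search over a tiny word list, not the implementation of any actual word processor:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: fewest insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete from a
                            curr[j - 1] + 1,            # insert into a
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

# Toy word list; a real checker would consult a full dictionary.
DICTIONARY = ["centre", "receive", "believe", "middle"]

def suggest(word: str) -> str:
    """Return the dictionary word closest to the (possibly misspelled) input."""
    return min(DICTIONARY, key=lambda w: edit_distance(word, w))
```

The machine augments the writer – proposing ‘centre’ for ‘centr’ – while the writer remains free to accept or reject the suggestion.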
AI can be conceptualised as, or has as its final aim, human-out-of-the-loop (HOTL) technology – automation with the human removed to the extent safely possible. IA, on the other hand, keeps a human in the loop (HITL) as the final decision maker. IA saves time and effort by minimising costly drawdowns from working memory which, useful as it may be, is finite.
An example of this is the predictive technology embedded in email software. As an email is composed, this technology augments human cognition by intuitively suggesting chunks of words, even complete sentences, depending on the word(s) typed by the human writer and where those words appear in the overall structure of the email.
If and when the technology rightly predicts the human’s intention, the writer merely has to accept its recommendations wholesale. Such recommendations, based on machine learning of patterns of use by the writer in particular and writers in general, are consequently mechanical. The machine does not recognise the unique context of an email, which includes cultural factors such as the relational dynamics between writers and recipients, and as such its recommendations may lack nuance. It will also not recognise shorthand or idiosyncratic formulations which have been developed within a community or between interlocutors – ‘hai’. In other words, the recommendations are based on form and not substance.
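The pattern-matching nature of such suggestions can be seen in a toy sketch. The bigram model below is an illustrative assumption, far simpler than any production system: it only knows which word most often followed which in past messages – pure form, with no grasp of substance:

```python
from collections import Counter, defaultdict

def train(corpus: list[str]) -> dict:
    """Count which word most often follows each word in past messages."""
    following = defaultdict(Counter)
    for message in corpus:
        words = message.lower().split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
    return following

def complete(following: dict, prompt: str, length: int = 3) -> str:
    """Greedily extend the prompt with the most frequent continuations."""
    words = prompt.lower().split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)
```

Trained on past emails that repeatedly open with “please find attached the …”, it will happily complete “please” the same way for a colleague, a client or a court – it has statistics, not context.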
A machine’s output is one of form, not substance. Processing substance takes time and drains working memory, and this is why human responses are more deliberate than those of machines, which do not shoulder the responsibility of dealing with the consequences of their recommendations. This is why AI researchers have been advocating for HITL mechanisms at the design stage.
Stories of the disastrous consequences of HOTL mechanisms abound.
The Viking Sky is a cruise ship and, like the Titanic, was touted as a “state-of-the-art seafaring ship that has the latest in capabilities and equipment” (Eliot, 2019). In 2019, in the midst of a raging sea and turbulent winds, its engines were shut down by an automated HOTL system. This left a good number of passengers and crew members stranded, swaying violently with the waves through the night, with no means of charting their own course. The system had been programmed to shut down the ship’s engines when fuel levels were low. While the fuel levels were indeed low in this case, they may not have been low enough to justify disabling the engines. If the technology had allowed for human override, the captain could have decided to steer the vessel to the nearest available docking space based on his assessment of the mileage obtainable from what fuel was left.
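The difference between the two designs can be sketched in a few lines. Everything here is hypothetical – the threshold, the function names and the control logic are illustrative, not the Viking Sky’s actual systems:

```python
LOW_FUEL_THRESHOLD = 0.10  # hypothetical cut-off: 10% of tank capacity

def hotl_engine_control(fuel_fraction: float) -> bool:
    """HOTL: the machine alone decides; low fuel means engines off."""
    return fuel_fraction > LOW_FUEL_THRESHOLD

def hitl_engine_control(fuel_fraction: float, captain_approves_shutdown) -> bool:
    """HITL: low fuel raises an alarm, but the captain makes the final call."""
    if fuel_fraction > LOW_FUEL_THRESHOLD:
        return True  # fuel is fine; engines stay on
    # Low fuel detected: defer the shutdown decision to the human.
    return not captain_approves_shutdown(fuel_fraction)
```

Under the HOTL design, a reading of 0.08 cuts the engines unconditionally; under the HITL design, the captain can decline the shutdown and nurse the vessel to the nearest dock.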
Quoine, a cryptocurrency exchange, was exposed to claims by users of its automated trading platform. A technical fault in its trading system resulted in the sale of Ethereum (a cryptocurrency) by one user to two other users at a wildly inflated price (about 250 times the market rate), denominated in Bitcoin. The entire sale happened automatically, without any human intervention. When Quoine realised what had happened, it reversed the trades, but the courts subsequently held that the trades were to be honoured. This meant further exposure to claims arising from trades executed at inflated prices without any human action.
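A common HITL safeguard on exchanges is a circuit breaker that holds anomalous orders for human review rather than executing them. The sketch below is a generic illustration – the threshold and reference price are assumptions, not Quoine’s actual design:

```python
MAX_DEVIATION = 0.10  # hypothetical: flag orders more than 10% off reference

def needs_human_review(order_price: float, reference_price: float) -> bool:
    """Hold the order for a human if its price strays too far from reference."""
    deviation = abs(order_price - reference_price) / reference_price
    return deviation > MAX_DEVIATION
```

An order at roughly 250 times the reference price would be parked for a human to inspect instead of completing automatically – a small concession of speed for a large gain in safety.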
There are also instances of unnecessary human intervention in automated systems which result in disaster. Michael Crichton’s nail-biting thriller, Airframe, shows how this can happen. Here, an over-zealous pilot overrode the auto-pilot function and overcorrected when TransPacific Airlines Flight 545 appeared to experience turbulence. This resulted in fatalities and an expensive, protracted investigation. The example underscores the importance of discretion and composure in human-computer interaction (HCI). A deft and light touch often outweighs a heavy hand. An example of this is Captain Chesley B. Sullenberger, who manually landed US Airways Flight 1549 on the Hudson River, saving all lives on board.
Researchers seem unanimous in their view that automation should complement and not replace human agency. In a complementary role, intelligent automation enriches the consumer experience. Netflix, Google and Spotify recommendations often help consumers discover new offerings in a genre of interest, or entirely new genres altogether. These are examples of what is termed Artificial Narrow Intelligence (ANI) – the ability to perform specific tasks within pre-determined rules and boundaries.
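A rule-bounded recommender of this narrow kind can be sketched in a few lines – a toy content-based filter over invented titles and genre tags, nothing like the scale of Netflix’s or Spotify’s actual systems:

```python
# Hypothetical catalogue: titles mapped to genre tags.
CATALOGUE = {
    "Nebula Drift": {"sci-fi", "thriller"},
    "Quiet Harbour": {"drama"},
    "Iron Orbit": {"sci-fi", "action"},
}

def recommend(liked_genres: set[str]) -> str:
    """Return the title whose genre tags overlap most with the user's likes."""
    return max(CATALOGUE, key=lambda title: len(CATALOGUE[title] & liked_genres))
```

The rules and boundaries are fixed in advance: the system can rank what it has been given, but it cannot step outside its catalogue or its notion of a genre.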
Intelligence akin to human intelligence – the ability to make decisions on the fly, almost intuitively, in response to changes in the environment – is termed Artificial General Intelligence (AGI). Fjelland (2020), in Why general artificial intelligence will not be realized, shows why humans cannot be replaced. He cites the example of AlphaGo, the first computer programme to defeat a skilful human Go player.
This was a big deal because Go, unlike chess, requires intuitive play, and players are often unable to articulate concretely why they made the moves they did. Indeed, they often “were only able to describe a board position as having ‘a good shape’” (Fjelland, 2020). It is difficult to concretise pathways for winning moves. Accordingly, a programme designed to play Go cannot be structured in a rigid, rule-based way. That AlphaGo managed to beat a skilful human meant that AI, like humans, was able to absorb the nuances of a situation in an intuitive way. However, in later tests, it “turned out to be vulnerable to tiny changes”, and deep reinforcement learning, the technology which underpins AlphaGo, struggles to find commercial application.
Depictions of AI in movies, and marketing efforts by the developers of such technology, might lead users to believe in an infallibility of processing which humans cannot rival. Users may feel impressed by the grand-sounding technical terminology associated with such systems, or spewed by them (think of any sci-fi movie), and might be tempted to delegate decision making to them. Their recommendations or actions may seem to humans to be based on fantastic leaps of logic that are humanly impossible. However, it should be borne in mind that automated systems are sometimes plain illogical.
While consumers are still fretting over the safety features of driverless cars, air taxis are already being tested by companies like SkyDrive. Preliminary research on the Urban Air Mobility (UAM) market suggests that users very much still want the presence of an experienced pilot in the cockpit.
When a human captain and an automated system interact to get passengers safely from point A to B, there must be harmony between the two. Two cannot walk together unless they agree.
In cases of conflict, given the limitations of ANI, the recommendations of the automated system must necessarily be subject to the captain’s feel for the way ahead.
The Brain Dojo
