‘tis nobler read about the Jeopardy Quiz Show challenge the other day that will pit human champions against a ‘thinking’ machine, similar to past contests between chess Grandmasters and their technological opponents. This will provide some insight into how nuanced Artificial Intelligence (AI) has become, and how far it still has to go. Is that the Singularity I see up ahead?
Our decision making is beset with nuance, opinion, hope and bias; these influences, and many more besides, often play a much greater role in our choices than any logical, systematic analysis of the available data. Synthesising a broad theme from the huge amount of work in this area leads to this ‘tis nobler adage:
It’s more important not to lose certain immediate inconsistencies.
If this sentence is unpacked, four things fall out: it’s more important not to lose; it’s more important to opt for certainty; it’s more important to favour the immediate; and these three together produce the many inconsistencies in our choices.
When faced with a decision or dilemma, the odds are (for, after all, we live in a probabilistic world) that you will favour not losing over the possibility of winning, even when the chances of each outcome are identical; that you will favour a small certainty over a much better but less certain outcome; and that you will take immediate issues into account at the expense of broader, potentially much more important criteria.
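These three tendencies can be made concrete with a short sketch using standard behavioural-economics models (a prospect-theory-style value and weighting function, and hyperbolic discounting). Every parameter value and dollar scenario below is an illustrative assumption, not a figure from the discussion above:

```python
# Illustrative sketch of the three tendencies: loss aversion, the
# certainty effect, and immediacy bias. LAMBDA, ALPHA, GAMMA, K and
# the dollar amounts are assumed parameters for demonstration only.

LAMBDA = 2.25   # loss aversion: losses weigh ~2.25x equivalent gains
ALPHA = 0.88    # diminishing sensitivity to outcome size
GAMMA = 0.61    # probability weighting: large probabilities feel smaller
K = 0.2         # hyperbolic discount rate per day

def value(x):
    """Subjective value: concave for gains, steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p):
    """Decision weight: overweights small odds, underweights large ones."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def prospect(outcomes):
    """Weighted subjective value of (probability, outcome) pairs."""
    return sum(weight(p) * value(x) for p, x in outcomes)

# 1. Not losing outweighs an identical chance of winning:
fair_bet = prospect([(0.5, 100), (0.5, -100)])  # expected money value is 0
print(fair_bet < 0)   # True: a fair coin flip still feels like a loss

# 2. A small certainty beats a much better but less certain outcome:
sure = prospect([(1.0, 3000)])
risky = prospect([(0.8, 4000), (0.2, 0)])       # expected value 3200 > 3000
print(sure > risky)   # True: certainty wins despite the lower average

# 3. The immediate is favoured, inconsistently, over the delayed:
def discounted(amount, days):
    """Hyperbolic discounting: value drops sharply for near-term delays."""
    return amount / (1 + K * days)

print(discounted(100, 0) > discounted(110, 1))    # True: $100 today, please
print(discounted(100, 30) < discounted(110, 31))  # True: yet a month out,
                                                  # waiting one more day is fine
```

The last pair of lines is the inconsistency the adage points to: the same one-day delay that is intolerable now becomes trivial when both options sit a month away, so the preference quietly reverses.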
And then there are all the other influences. Perhaps all decision making reduces to a comparative assessment of whether what we lose in the fire, we gain in the flood:
Every step of an experiential learning or behavioural change journey is accompanied by decisions and judgments. Some are trivial, some matter, some are crucial and a few could be life-changing. How will you discriminate between these types and then, within these types, how discriminating will you be?