A report has been produced in parallel with the Robodebt Royal Commission, proposing that a more “human-centred” approach will reduce administrative problems.
The Royal Commission has demonstrated all too well how disastrous a human-centred approach can be, with behaviour ranging from the noble – risking your job by saying “This is stealing!” – to the ignoble – smashing a phone to drive home that staff will do as they are told.
If we are going to make it human-centred, then we need to understand the limits:
•	The clients are poor and struggling, and may not be able to understand their responsibilities, and/or communicate well. It seems that the legislation has been written to allow for this – “best efforts”.
•	The staff may not understand the legislation well, or have even chosen not to read it (legalese fries some people’s brains – they refuse to even look at it).
•	The different specialties may each understand a limited part, but not the other parts. In particular, a lawyer may function poorly when needing to interact with a systems analyst/programmer, and vice versa. Take “best efforts” as a glaring example: it is not defined in the legislation (it is an open-ended instruction – how could it be?), so as a precise specification it does not exist.
What we are suggesting is taking the unconscious mind out of the process.
How does that work?
At the moment, people read the legislation and unconsciously decide what the words mean, even though some words have many meanings, and some words need to be clumped together to have the correct meaning (“an imaginary flat surface” – the flat surface is the thing that is imaginary).
The alternative is to have a machine read the legislation, and with some help from a small group of
humans, come up with an understanding of it which can function as an authoritative source. Rather
than being locked away in a human brain, it can show the meanings of words and clumps of words in
a particular context.
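To make the idea concrete, here is a minimal sketch (Python, purely illustrative – not the actual machinery) of such an authoritative source: every word or clump of words that has been resolved is recorded against the context it was read in, so the agreed meaning is visible rather than locked away in someone’s head. The entries and names below are invented for illustration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Sense:
        term: str       # the word or clump of words as it appears in the text
        context: str    # where it was read, e.g. the clause or section
        meaning: str    # the agreed reading, recorded once and visible to everyone

    # The "authoritative source": nothing here is hidden in a human brain.
    lexicon = {
        ("return", "tax clause"): Sense("return", "tax clause", "a lodged tax return"),
        ("return", "travel clause"): Sense("return", "travel clause", "a journey back"),
        ("imaginary flat surface", "geometry note"): Sense(
            "imaginary flat surface", "geometry note",
            "a flat surface that is imaginary, i.e. a plane"),
    }

    def resolve(term, context):
        """Look up what a term was agreed to mean in a given context."""
        return lexicon.get((term, context))

    print(resolve("return", "tax clause"))
    print(resolve("return", "travel clause"))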
Can’t the small group of humans helping it make mistakes?
Indeed they can, and are very likely to, on top of any mistakes already present in the original legislation. The difference is that people have a Four Pieces Limit – no more than four pieces of information that
they can keep “live” in their conscious mind. The machine doesn’t have this limit, so it can point out
all the inconsistencies. It can do that because it has turned the words into small pieces of machinery
which interact with each other – exactly the same as happens when a person reads text, just at a much larger scale. It isn’t troubled by a reference 50 pages back or 500 pages forward, because its first job is to stitch everything together.
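As a rough sketch of that stitching step (the clause labels and definitions below are invented, and this is nowhere near the scale of the real thing), the machine first gathers every definition and every use, however far apart they sit, and then reports whatever does not line up:

    # Invented clauses: each either defines terms or uses them.
    clauses = {
        "s.7": {"defines": {"income": "gross amount received in a fortnight"}},
        "s.42": {"uses": ["income", "best efforts"]},
        "s.193": {"defines": {"income": "taxable amount received in a year"}},
    }

    definitions = {}   # term -> list of (clause, meaning)
    uses = []          # (clause, term) pairs

    # First job: stitch everything together, regardless of distance in the text.
    for clause, body in clauses.items():
        for term, meaning in body.get("defines", {}).items():
            definitions.setdefault(term, []).append((clause, meaning))
        for term in body.get("uses", []):
            uses.append((clause, term))

    # Then point out every inconsistency, with no Four Pieces Limit in the way.
    for clause, term in uses:
        if term not in definitions:
            print(f"{clause}: '{term}' is used but never defined")
    for term, places in definitions.items():
        if len({meaning for _, meaning in places}) > 1:
            print(f"'{term}' has conflicting definitions: {places}")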
Once the machinery has been built, it can function as a working model – the wheels can be set in
motion. No, it can’t be an operational tool for millions of transactions, but its output can be
compared with the output of a production algorithm and the reason for differences investigated
(which may turn out to be an excruciatingly long journey).
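A sketch of that comparison loop might look like the following; model_outcome and production_outcome are hypothetical stand-ins for the working model and the operational system, and the sample cases are invented:

    # Hypothetical stand-ins for the two systems being compared.
    def model_outcome(case):
        return case["model_answer"]          # what the working model derives

    def production_outcome(case):
        return case["production_answer"]     # what the production algorithm produced

    cases = [
        {"id": "A", "model_answer": 0.0, "production_answer": 0.0},
        {"id": "B", "model_answer": 0.0, "production_answer": 1350.0},
    ]

    for case in cases:
        m, p = model_outcome(case), production_outcome(case)
        if m != p:
            # Each difference becomes something to investigate, however long that takes.
            print(f"case {case['id']}: working model says {m}, production says {p}")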

But it still won’t know what “best efforts” means!
True, but it runs on English, so “best efforts” can be fleshed out with text such as “Approach employer first for pay slips”, or whatever other process can be described in English. It can then show what should have been done.
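For instance (the step wording below is invented, not drawn from any legislation), “best efforts” could be spelled out as ordinary English steps that the machinery holds, and then replayed against what was actually done:

    # Invented wording for illustration only.
    best_efforts_steps = [
        "Approach employer first for pay slips",
        "Request bank statements for the relevant fortnights",
        "Record each attempt and its outcome",
    ]

    steps_actually_taken = {"Record each attempt and its outcome"}

    # Show what should have been done but was not.
    for step in best_efforts_steps:
        status = "done" if step in steps_actually_taken else "NOT done"
        print(f"{status}: {step}")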
Sounds OK – what are the drawbacks?
The introduction of this new means of organisational support – let’s call it Active Structure – will
have to wait for the resolution of many serious managerial inadequacies, as revealed by the
Robodebt Royal Commission.
It will require someone with concern for the Department’s clients, and for its staff. Even diplomatically introduced, the concept of the Four Pieces Limit is likely to cause great concern. The analogy of using a truck to carry a load too heavy for a person is close, except that here the load is cognitive.