Some psychological concepts are hard to communicate. Either
that, or psychologists are so prone to jargon that they fail to communicate
even very simple ideas. Whatever the reason, it is hard to explain
security/access trade-offs, particularly without drawing graphs.
Currently the Web is set up to provide as much security as
possible “to keep you and your data safe”. You already know the routine: you
are first asked to set up a username. This is not your name, which would be too
easy, but the name given to you by the service provider. They may sometimes
send you a username, which will not be a name but a string of digits and
letters. Sometimes they let you choose your own, and give you helpful hints,
such as combining your surname with your initials and your birth date. Other
websites allow you to use your email address. Incidentally, when you try to
log in again later on, none of these websites gives you a clear indication of which
of these conventions they have adopted. All this, for your security. If they had
the decency to remind you on the front page which sort of username they
wanted, it would be easier for you to remember it.
Then, they ask you to provide a password. At this stage, you
enter your favourite password, which you use with all websites, in the way that
you use your name with all your friends. This procedure is frowned upon by web
security experts, who tell you to vary your passwords from site to site. Anyway, you enter your
password (having written it down somewhere so that you can remember it when you
enter the website again, a practice also frowned upon by security experts) and
the website rejects your efforts. Only at that late stage do they tell
you, triumphantly, that, as far as they are concerned (for your safety), you must use upper and lower case letters,
and/or some numbers, and/or some punctuation marks, and/or a certain password
length. You end up with a password you have never used before, which you write
down somewhere. The passwords are assessed for strength, and not for
memorability. Memorability is suspect.
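The composition rules described above amount to a small checklist, which can be sketched as code. This is a hypothetical policy of my own invention, not any particular site's rules:

```python
import string

def meets_policy(password, min_length=8):
    """Check a hypothetical composition policy: a minimum length plus
    upper case, lower case, digit and punctuation character classes."""
    checks = {
        "upper": any(c.isupper() for c in password),
        "digit": any(c.isdigit() for c in password),
        "punct": any(c in string.punctuation for c in password),
        "length": len(password) >= min_length,
        "lower": any(c.islower() for c in password),
    }
    failed = [name for name, passed in checks.items() if not passed]
    return (not failed, failed)

# A contrived string satisfies every class...
ok, failed = meets_policy("tr0ub4dor&3X")
# ...while a memorable favourite fails several of them.
ok2, failed2 = meets_policy("sunflower")
```

Note that nothing in such a checklist measures memorability, which is exactly the complaint: the rules score character classes, not whether you will ever recall the result.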
Naturally, when you return to the website a month later, to
check a bill or to re-order something, you hit a blank wall. You recall neither
the username nor the password. You have lost access. You try some of your usual
combinations, all of which are rejected. Access is then permanently blocked. Now
you have to go through a tedious routine to try to regain access, possibly
providing answers to key questions you have been asked for security purposes,
answers you have already forgotten.
Of course, the problem lies in security experts not considering
the limits of human memory. Our own names are so dear to us that an EEG will
show a blip if our names are called quietly while we are soundly asleep. We even
mistakenly “recognise” our names in ambiguous sounds. We cherish our names.
Usernames and passwords are not so familiar. We do not over-learn them. Websites
provide us with very few contextual cues, of the sort we can use when we go on
a real journey, and start recognising familiar roads on the way to a rented holiday
cottage we visited three years before. So our memory for passwords is poor,
unless we can use a favourite.
Even then, some websites most definitely do not want you to
remember and re-use your well-remembered password. Bad security. So every 3
months they ask you to change it. UCL follows this routine. Even more bossily,
they reject any new password which is too like your last password, and they
have the ultimate say as to what counts as too similar. Distressingly, there
are no rewards for choosing a strong password in the first place.
Is there a way out of this trap, in which we are denied
access to our own filing cabinets and our own internet front doors?
Some argue for technical solutions: iris recognition, fingerprint
recognition, code-generating devices and so on. This somewhat misses the
point, because the basic problem remains: if you set security requirements too
high, access will be made too difficult. This would also apply to devices,
because they too have error rates. Notice how often iris recognition devices
fail. People have been spooked by worst-case scenarios into demanding overly
restrictive barriers to entry, and applying them too generally.
We need to balance the cost of a security breach against the
cost of losing access. Set security too high (as at present) and access is
frequently lost. Set access too high and security may be compromised. Ideally,
we would follow the simple equation: Risk = probability of an event × consequences.
We do not have good estimates of risk, and only partial and entirely personal
estimates of consequences. Money can be made by security providers from
frightening people about internet insecurity. Less money can be made arguing
for easier access.
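The equation above can be made concrete with toy numbers. The probabilities and costs here are invented purely for illustration, not real estimates:

```python
def expected_cost(p_breach, cost_breach, p_lockout, cost_lockout):
    # Risk = probability of an event x consequences, summed over the
    # two failure modes: a security breach and a lost login (lockout).
    return p_breach * cost_breach + p_lockout * cost_lockout

# Invented figures: a strict policy slightly cuts the breach probability
# but makes lockouts, and the tedious recovery routine, commonplace.
strict = expected_cost(p_breach=0.001, cost_breach=5000,
                       p_lockout=0.30, cost_lockout=100)   # 5 + 30 = 35
relaxed = expected_cost(p_breach=0.002, cost_breach=5000,
                        p_lockout=0.02, cost_lockout=100)  # 10 + 2 = 12
```

On these made-up numbers the relaxed policy wins overall, which is the point of the trade-off: the small extra breach risk is swamped by the cost of routinely locking people out.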
One approach would be to grade the security/access
trade-offs into three categories: high, medium and low.
In the High category would be bank accounts, credit cards,
and anything else you decided you personally wanted to put into that category.
In Medium would be entry to most websites offering things
for sale, because no payments could be made without later hitting High security
payment barriers (the secure https payment area).
In Low security would be most other sites, including all the
social media in which the purpose is to communicate to the world what you are
eating, drinking and, possibly, thinking. Why demand high security if your inter-personal
behaviour is low security?
Notice some of the advantages of this categorisation: the
consumer gets to choose. You could elect to have everything at High security
levels, and use a password generator for most of your sites. Some banks provide
these devices, which only generate a passcode when you enter your own PIN,
and the passcode must be entered within a minute. Three failures and you are
locked out of your own piggy bank.
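Such devices work roughly like the time-based one-time codes standardised as TOTP (RFC 6238): the code depends on a shared secret and the current minute, so it expires quickly. A minimal sketch, assuming a 60-second window and a made-up secret:

```python
import hashlib
import hmac
import struct
import time

def one_time_code(secret, t=None, step=60):
    """A TOTP-style 6-digit code. All times within the same `step`-second
    window yield the same code, so it must be typed in within about a
    minute of being generated."""
    now = time.time() if t is None else t
    counter = int(now // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Standard dynamic truncation: pick 4 bytes, mask the sign bit,
    # and keep the last six decimal digits.
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

# The physical device only releases the code after checking your PIN
# locally; the bank's verifier locks the account after three failures.
code = one_time_code(b"shared-secret", t=0)
```

The PIN check and the three-strikes lockout live outside this function, on the device and the bank's server respectively; the code itself is just arithmetic on the clock.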
Another approach would be for websites to reward you for
choosing stronger passwords. They could let you keep them forever (or for
precisely the thousands of years of computer time it would take to crack them).
Sites could be labelled clearly with their security requirements (not just the
padlock symbol and https later on) so that you knew what levels of security the
provider demanded before you provided a username to use the website. I
certainly don’t want to think up new usernames and passwords for a plumbing
website. If they won’t very clearly accept my own offers, I will shop
elsewhere.
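The "thousands of years" figure is just arithmetic on the size of the search space. A rough estimate, assuming an attacker who can test a fixed number of guesses per second (the rate here is a guess at a well-equipped attacker, not a measured figure):

```python
def brute_force_years(charset_size, length, guesses_per_second=1e10):
    """Worst-case time to exhaust every password of the given length
    drawn from a character set of the given size, at an assumed
    guessing rate (1e10 guesses/s is a generous GPU-rig attacker)."""
    combinations = charset_size ** length
    seconds = combinations / guesses_per_second
    return seconds / (3600 * 24 * 365)

# Eight lowercase letters fall in seconds; sixteen characters drawn
# from the ~95 printable ASCII symbols take ~1e14 years to exhaust.
short = brute_force_years(26, 8)
strong = brute_force_years(95, 16)
```

Which is the reward the essay asks for: a site that did this sum could honestly tell you a strong password need never be changed.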
The current system infantilises us. It assumes we have to be
protected, and cannot make any judgments about risk. It is also a blame game:
unwilling to ever be sued for poor security, providers place the onus on us to
manage entry, and regard our loss of access as our own fault. It is akin to airline
security requirements, in which no airline is allowed to openly state that it
will do spot checks on its passengers, using whatever risk estimates it likes,
in return for faster access to planes.
Edward Teller, the father of the Hydrogen bomb, understood
the security/access dilemma. He argued the extreme case, which is that nuclear
weapons scientists in the US should not be hampered by security restrictions
(which severely inhibit scientific cross-fertilisation) but should be able to
discuss whatever they like, thus creating such a ferment of innovation and
manufacturing prowess that the open society would always surge ahead of its
enemies.
At a more prosaic level, most of us are willing to protect
our houses with one, or at the most two front door keys. We know that
determined burglars can always use a JCB bulldozer if they want to break down a
door, or even a wall.
So: “No pretty-good security without pretty-good access”.