> The inquiry invites evidence on security issues affecting private
> individuals when using communicating computer-based devices, either
> connecting directly to the Internet, or employing other forms of
> inter-connectivity.
> .......
> What role do software and hardware design play in reducing the risk posed
> by security breaches? How much attention is paid to security in the design
> of new computer-based products?
As an amateur in these matters (since I do not do research on security), I have the impression that most of the faults lie in human brains: first the brains of unscrupulous and often very clever cheats, and second the brains of users of computer-based devices, who have all sorts of reasons, based on diverse personal histories, for regarding information as trustworthy or not. As a result, progress in this area must be based on a deep understanding of human mechanisms and processes, including how they vary from one culture to another and how they change over time in individuals as experience is gained -- whether as an unscrupulous cheat or as a potentially vulnerable user. (The latter consideration is easier and more urgent to take into account.)
Example 1:
Tedious security precautions currently in use, which require users to remember all sorts of different things for the different sites they use, often have the effect that people start writing information down on bits of paper near their computers, or in files they can easily access on the computer, because the memory demands are too great. Thus badly designed security measures produce new security gaps via human brains.
Finding security measures that depend more on recognition of prompts initially selected at registration time from a list (as some sites already do) may be a good move, but will require research on human psychology, cultural and national differences, age differences, etc. in order to provide good options for prompts to select.
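To make the idea concrete, here is a toy sketch of recognition-based checking (my own illustration, not from the original text: the PROMPT_POOL list, register and login_challenge functions are invented, and only Python's standard library is used). The user selects a prompt from a list at registration and later need only recognise it among decoys, rather than recall a secret. Which prompts and decoys actually work for different users is exactly the kind of question needing the psychological research mentioned above.

    import random

    # Invented pool of candidate prompts offered at registration time.
    PROMPT_POOL = [
        "A red bicycle leaning on a gate",
        "Two cups of coffee on a wooden table",
        "A lighthouse at dusk",
        "A cat asleep on a pile of books",
        "An open umbrella in the rain",
    ]

    def register(chosen_index: int) -> str:
        """Store the prompt the user selected at registration time."""
        return PROMPT_POOL[chosen_index]

    def login_challenge(stored_prompt: str, decoys: int = 2):
        """Show the stored prompt among decoys; the user must recognise theirs."""
        others = [p for p in PROMPT_POOL if p != stored_prompt]
        options = random.sample(others, decoys) + [stored_prompt]
        random.shuffle(options)
        return options

    stored = register(3)
    print(login_challenge(stored))   # the user picks the prompt they registered with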
One of the issues that I've not heard mentioned, though probably it is well known, is the need for people to have different security checks for different services, since otherwise a criminal in one company can acquire information that can be used to access customers' information in other companies.
Has anyone found a way to address that while satisfying the constraints of human memory?
E.g. it might mean different companies trying to ensure that they use different prompts: not an easy task, if the prompts have to work for all potential users. Would a national (or international) register of prompts already in use help or just give criminals more useful information?
Example 2:
Many people have got used to rejecting most of the email spam that gets through their ISP's spam filter, but I've noticed that every now and again clever criminals find a new form of words that leaves even intelligent people wondering whether they should be clicking on something. Some develop an 'if-there's-the-slightest-doubt-don't' strategy, but I suspect many people are more gullible.
One of the things I have never heard said about phishing email is that the widespread default of using html for email (e.g. even some professors of computer science do it, totally unnecessarily, and often unwittingly, because they, or their institutions, buy systems with bad defaults) is simply a gift to phishers and other spammers, because with html what you see is very often NOT what you get. So one of the requirements for software to increase security should be turning off the html email default in the widely used email tools[*], except for communications between individuals who know one another and where recipients have requested or agreed to receive html.
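To illustrate why with html "what you see is NOT what you get", here is a minimal sketch (my own illustration, using only Python's standard html.parser; the LinkAuditor class and the sample fragment are invented for the example) that extracts the visible link text and the real href from an html body and flags the kind of mismatch phishers rely on.

    from html.parser import HTMLParser

    class LinkAuditor(HTMLParser):
        """Collect (visible text, actual href) pairs from an html message body."""
        def __init__(self):
            super().__init__()
            self._current_href = None
            self._current_text = []
            self.links = []          # list of (visible_text, href) pairs

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._current_href = dict(attrs).get("href", "")
                self._current_text = []

        def handle_data(self, data):
            if self._current_href is not None:
                self._current_text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._current_href is not None:
                self.links.append(("".join(self._current_text).strip(),
                                   self._current_href))
                self._current_href = None

    # A phishing-style fragment: the displayed text looks like a bank's address,
    # but the underlying href points somewhere else entirely.
    fragment = ('<p>Please log in at '
                '<a href="http://evil.example.net/login">www.mybank.example.com</a></p>')

    auditor = LinkAuditor()
    auditor.feed(fragment)
    for visible, actual in auditor.links:
        if visible and visible not in actual:
            print(f"WARNING: link shows '{visible}' but goes to '{actual}'")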
Html is heavily used in email for marketing purposes. If instead marketing messages were required to be sent in plain text, with web or email addresses fully visible, then naive customers would be less likely to be trapped into accessing web addresses, or sending email to addresses, that do not belong to a recognised reputable source.
Could this constraint on marketing email be legally enforced?
(It would require international agreements, and/or email service providers being willing to translate html to plain text, and show only the plain text by default, with an option to look at the html either as text or interpreted.)
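As a rough illustration of the translation step just suggested (again my own sketch, using only Python's standard html.parser; the PlainTextRenderer class is invented, and a real mail provider would need something far more robust), the following converts an html body to plain text while making every link target fully visible:

    from html.parser import HTMLParser

    class PlainTextRenderer(HTMLParser):
        """Render an html message body as plain text with visible link targets."""
        def __init__(self):
            super().__init__()
            self.parts = []
            self._pending_href = None

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._pending_href = dict(attrs).get("href", "")
            elif tag in ("p", "br", "div"):
                self.parts.append("\n")

        def handle_data(self, data):
            self.parts.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._pending_href:
                # Make the real destination visible next to the link text.
                self.parts.append(f" [{self._pending_href}]")
                self._pending_href = None

        def text(self):
            return "".join(self.parts).strip()

    renderer = PlainTextRenderer()
    renderer.feed('<p>Big sale! <a href="http://shop.example.com/offer">Click here</a></p>')
    print(renderer.text())
    # -> Big sale! Click here [http://shop.example.com/offer]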
It would free up a lot of internet bandwidth!
Also, email browsers should be designed so that, by default, when they display html messages they always give a warning that the message has hidden complexities and should not be trusted, even if the From: line makes it look like a message from a known person.
Such a system could be trainable so that it uses relationships between the From: line and Received: lines to tell whether a message is likely to be a fake.
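A hedged sketch of that header check (my own illustration using Python's standard email module; the looks_suspicious function and the sample message are invented, and the test is only a crude heuristic that forwarders and mailing lists would defeat) might compare the domain in the From: line with the hosts named in the Received: chain:

    import email
    import re

    def from_domain(msg):
        """Extract the domain part of the From: header, if any."""
        match = re.search(r"@([\w.-]+)", msg.get("From", ""))
        return match.group(1).lower() if match else None

    def looks_suspicious(raw_message: str) -> bool:
        msg = email.message_from_string(raw_message)
        sender = from_domain(msg)
        if sender is None:
            return True
        received = " ".join(msg.get_all("Received", []) or []).lower()
        # Crude test: does the sender's base domain appear anywhere in the
        # Received: chain? A trainable system would weight many such clues.
        base = ".".join(sender.split(".")[-2:])      # e.g. 'example.com'
        return base not in received

    raw = (
        "Received: from mail.phish.example.net by mx.recipient.example ...\n"
        "From: Your Bank <security@mybank.example.com>\n"
        "Subject: Verify your account\n"
        "\n"
        "Dear customer, ..."
    )
    print(looks_suspicious(raw))   # -> True: the From: domain never appears in Received: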
Parents should be able to lock a system so that a child cannot change the default, though the problem there is the competence of parents -- another human issue.
(As a side effect, children might learn to communicate better using gimmick-free plain text?)
My impression is that most researchers and developers are more interested in providing clever utilities to make things 'easier' for sellers, buyers and other users than in providing checks and warnings related to human-related security issues, which requires far more tedious empirical research instead of just the fun of designing and implementing. Maybe things are changing now. Perhaps, just as psychologists, biologists and medics have to have their work approved by ethics committees, CS/SE people working on tools and interfaces that are potentially subject to abuse should also be audited, to check that they are taking the security issues into account.
Research councils could add a section on this to their proposal forms; at present, as far as I know, they have none.
Apologies if this is all too obvious and already fully discussed elsewhere.
After hearing news reports about the iSoft/NHS shambles I have drafted a letter to my MP (Lynne Jones) about how governments should think about large IT projects, available here along with comments from various people in and out of academe.
The above was circulated to an email list. One reader of the message suggested that there might be a process of privacy certification similar to certification requirements for aircraft safety. Since the problem concerns the interface between software and other entities, purely formal analyses cannot suffice (though they remain essential). Additional responses may be added here.
People who don't know whether they are sending email in html format should look here.
Maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham