Interview with David Harley, Senior Research Fellow at ESET

About David Harley

David Harley is an IT security researcher, author, and consultant living in the UK. He has worked in IT (initially in medical informatics) since the 1980s, increasingly focused on security and anti-malware research since 1989. Between 2001 and 2006 he managed the UK National Health Service’s Threat Assessment Centre, and since 2006 he has provided authoring and consultancy services to the anti-virus industry, notably for ESET, where he holds the title Senior Research Fellow. He is a former director of the Anti-Malware Testing Standards Organization (AMTSO). He was principal author and technical editor of The AVIEN Malware Defense Guide for the Enterprise. He co-authored Viruses Revealed with Robert Slade and Urs Gattiker, and has contributed to many other books, including OS X Exploits and Defense. His blogging for ESET can be found at

Interview Questions

[] Can you provide a rough outline of what cybersecurity has come to mean as a discipline and a career? How has it come to be incorporated into the larger fields of IT and computer science/programming?

[Mr. Harley] Here is the set of assumptions I tend to work from. IT security is the dimension of information technology that, in a rational world, would constantly be invoked to protect — as far as possible — the consumers and providers of that technology (and the data it contains and processes) from breaches of what we currently understand by (or assume to be covered by) the term security. This starts from the classic tripod model (confidentiality/privacy, integrity, availability), but extends to areas that aren’t always seen as fitting into that model — for example, accountability, compliance, audit, civic responsibility and good citizenship, politics and politicization, critical thinking, social science, anthropology, and other things I may think of as I go along. Within the IT industry, security continues to be seen as a specialism, often marginalized if not divorced from mainstream computer science. This has resulted in a world where the range of services online is far wider than the range of services where security and privacy are properly addressed.

In that rational world I mentioned, the need for third-party security research would still exist, especially in the area of tracking trends in malicious activity and contributing to the discovery of bad actors. However, more research would be directed towards the strengthening of devices, operating systems, applications, and services, and less towards the provision of technology intended to plug the gaps in poorly protected technical services. Major platform providers like Microsoft and Apple are already working hard in this space, of course, and tend not to be overly enthused about third-party security software (especially as regards mobile platforms). While that’s largely a PR issue, it probably also has something to do with a general naïve feeling that a good operating system should be totally secure.

[] With that in mind, what should we be teaching the next generation of IT and computer science specialists about cybersecurity?

[Mr. Harley] I’m not well-acquainted with current academic trends in teaching computer science. While I did sometimes talk to secondary schools about security a few years ago, I did so without particular reference to the UK’s National Curriculum and government measures for assessment of teacher competence and student achievement. In the UK, there is much more in the way of specialist security opportunities in undergraduate and postgraduate education than there was in the 1980s (i.e. there is some). Even the more general networking/IT degree that my daughter recently acquired included appreciably more security content than the computer science component of the degree I completed in 1990. (As I recall, the only somewhat related content was focused on database management with SQL.)

I’d hope that current teaching of programming would go further into issues like vulnerability and exploit management, as well as more conventional defensive programming techniques. I remember being disappointed to learn that defensive programming is about protecting the program, not the program user. I suspect that you’d have to look pretty hard to find computer science courses that seriously take into account the psychosocial dimensions of security/insecurity. Indeed, the sector of the security industry with which I find myself most associated continues to suffer from an inability to see beyond bits and bytes to the human factors.

[] On a practical level, what does the day-to-day work of cybersecurity look like, and what kind of person/personality is well suited to this kind of work?

[Mr. Harley] Pretty much everyone who works with a computer (and not many people don’t) is at the security front line, even if they’re simply using some sort of computing device at home. They should be aware of that fact, but all too often aren’t. We tend to regard security as something that should be taken care of by the ISP or OS/app provider at home, or by the IT department at work. Well, service providers, vendors, and IT departments should always have security in mind, but the number of ways in which naïve end users can sabotage the efforts of IT and IT security professionals is depressingly impressive. While it’s probably not common for security awareness to be a major element in the interview process for jobs that aren’t specified as security posts, it behoves all kinds and sizes of company to encourage a culture of security awareness in which everyone accepts responsibility for their own security and that of those around them.

It might seem that security as a specialism might require a specialized skillset and type of personality, but it’s not exactly so. Obviously, some traits are beneficial in many roles: caring about the safety of yourself and others; common sense and an analytical bent; a painstaking approach to problem solving; adaptability and coolness in a crisis. Other traits are clearly role-specific: security evangelists tend to be extroverted (but extroversion helps for anyone in the public eye, including researchers lumbered with conference presentations). A security administrator needs a broad range of technical skills, usually including a comprehensive grasp of programming, server and desktop operating systems, a range of security programs and how they work, and so on. Threat analysts need coding and analytical skills, ferocious concentration and attention to detail, and so on.

Those of us with an educational remit (whether it’s formal training, technical writing, or geek-to-English translation in the media) are constantly trying to strike a balance between geek-ish pedantry and the need to simplify without oversimplifying. There are many in the security community who think that security is all about research and hands-on administration of servers, firewalls, and so on. But that is a bit like expecting doctors and nurses to run all aspects of a healthcare service, or fighting troops to manage all their own catering, equipment, nursing, logistics, communications, budgeting, disciplinary issues, training, and intelligence. All specialists sometimes require support and management addressing issues outside their own competence. There are as many types of managers as there are types of unit to be managed, but clearly there is a range of necessary business skills that aren’t specific to security management.

It won’t surprise you that I also consider an understanding (not necessarily an in-depth knowledge) of security to be a necessary business skill. Security is certainly an area where the hybrid manager has an edge, but because being up-to-date technically and performing the usual managerial functions is terribly demanding, it’s also important for a security manager to know his or her limitations and be ready to seek and accept advice. It also helps to have the people skills to be able to assess whose advice to take.

In all areas of security, a certain amount of paranoia, cynicism, pessimism, and ability to spot the weak points in a proposition is helpful. These may not be altogether admirable qualities in a human being, but they definitely bolster the security skillset. In Howards End, one of E.M. Forster’s characters says, “the confidence trick is the work of man, but the want-of-confidence trick is the work of the devil.” In security, it helps to look for the devil in the detail. As even Forster’s Margaret Schlegel admits, “I’d rather mistrust people than lose my little Ricketts. There are limits.” And while the loss of a small painting is a calculable financial loss, in IT a small breach may have incalculable knock-on effects. However, the ability to think like an attacker when it comes to assessing the attack surface needs to be combined with a firm grasp of ethics, with personal honesty and integrity.

[] What kinds of coursework and practical training should students look for in an advanced degree in cybersecurity, and what kind of experience outside of the classroom are helpful in cultivating expertise in the field?

[Mr. Harley] Not sure this is my area. I only have a first degree, and even that is as much in social sciences as in computer science, and as mentioned before, the courses were largely security-content-free. I kind of fell into the whole thing. It does seem to me that a great deal of malware-related academic research is compromised by its distance from the nuts and bolts of security research as carried out by commercial vendors, so I see papers that take it for granted that anti-malware is restricted to detection by static signatures, which hasn’t been the case for decades. On the other hand, commercial companies could benefit from reading some of the research with a more psychological basis on phishing, for example. I suppose I’m saying that if students spent more time with pragmatic security researchers out in the marketplace, both sides of the commercial/academic divide would benefit over the long haul.

[] How did you get into the field, what drew you to it, and how have you seen it evolve over the last decade or so?

[Mr. Harley] I started off doing a degree in social sciences at the end of the ’60s. I didn’t get around to finishing it until the end of the ’80s. In fact, in the interim, until 1986, I was doing completely different things (music, nursing, the building trade). When the woodworking company I was working for — as a wood machinist — started to disintegrate, I edged into the IT industry by way of office administration, which spread into database admin, systems and user support, and a little coding. My boss at the time contributed to my returning to my degree via distance learning (the UK’s Open University). I focused on technology and computer science modules, which, given my background in social sciences and psychiatric nursing, made a nice combination for the security arena.

My initial primary interest was in coding rather than in security. However, I kept being drawn into security-related issues. Part of the attraction was the fact that it was a specialism that was being inadequately understood and addressed by most of the IT units with which I was working, and it became something of a mission to raise IT standards and help consumers/end-users to understand the problems and help themselves. Looking back, I guess I was always more of an educationalist than a programmer. I actually enjoyed writing documentation.

From 1986 to 2001, I was in medical informatics in some capacity or other (but only with a security bias from 1989). The story of my first encounter with malware — actually, the program we tend to think of as the first ransomware — can be found here. From 1995 to 2006, I was fairly conspicuously on the fringes of security (my first major conference presentation was delivered in 1997), but still occupied a customer/systems admin role rather than working directly in the security industry. Maybe that gave me a wider audience than if I’d been firmly identified with a vendor.

I’d been involved with writing and editing since the 1960s, so I was in a position in the mid-’90s to make a conscious decision to extend those writing activities beyond in-house work, creating a more “public” profile that combined my interest in writing with my fascination with the psychosocial aspects of computer crime. The first step was my involvement in/direction of communal projects, notably the alt.comp.virus FAQ. Then, I authored conference papers and specialist articles, and after that some books.

By then, I was acquiring vocational certs and qualifications, in areas like Unix, Novell, malware analysis, and various Microsoft bits and pieces. I went on to get CISSP, ITIL/service management, project management, HR-related stuff, BS7799/ISO 27002 lead auditor, BCS (British Computer Society, now the BCS Institute) membership, and others (FBCS, CITP). Eventually (fairly recently, in fact), I decided that I didn’t really want to go on paying subscriptions to (ISC)2 and the BCS just to keep the letters after my name, so I’m no longer entitled to use the acronyms CISSP, FBCS, or CITP. It’s not that I’m not in sympathy with their aims: it was just a simple cost/benefit analysis. Being the “authority” on AV for an organization with 1.4 million employees probably helped more than it hindered, and journals in the UK still ask me for NHS commentary based on that experience.

When the National Health Service had a highly politicized reorganization, I decided to become a consultant rather than move into a different work area and location. I’m not a deep-dive bits and bytes man, but I’m lucky enough to be able to transfer writing skills that are probably more typical of a much lighter style to a semi-technical context. Perhaps I couldn’t have done that before the age of the blog. And, fortunately, ESET were kind enough to commission some tech writing from me, which evolved to a point where I have no other regular clients.

The evolution of the field in general has been… interesting. The shift away from self-replicating malware written for kicks and peer approval was already well underway ten years ago, with the convergence between worms and bot malware. The trend has been towards a range of malware that uses various techniques and increased modularity in order to spread and deliver a wide range of payloads, primarily motivated by monetized criminality. The range of campaigns and payloads reflects the convergence between various types of attack involving phishing, bank fraud, spam dissemination, ransomware, fake security software, and so on. Even tech support scamming, which started out as social engineering backed up by a few tricks involving deliberate misinterpretation and misuse of Windows system utility output, has increased in technical sophistication and on several occasions has been associated with real malware.

[] What are employers looking for in cybersecurity hires and how should someone who’s aiming to enter the field prepare him or herself?

[Mr. Harley] There appear to be three main schools of thought. There’s the BOFH [Bastard Operator From Hell] view that the only useful security is based on active white-to-grey-hat hacking and purely hands-on configuration and logfile microscopy, preferably using open source system tools. There’s the school of thought that favors specialist courses such as those provided by SANS/GIAC. While I don’t think quite as highly of SANS as SANS does, these often represent good training in specific technologies and issues, so a prospective employer might well regard them as positive indicators of the right knowledge and mindset. There are vendor-specific courses. Those that lead to industry-recognized qualifications such as MCSE or CCNA are never wasted, of course. Many of those I’ve been sent on in the past have been less valuable, since the certificate essentially amounts to “Harley was in the room while the lecturer was talking and had his workstation switched on.”

Then there’s the more management-oriented type of course, such as the CISSP certification offered by (ISC)2. This is a wide-rather-than-deep test of knowledge, and is despised by many adherents to schools 1 and 2 as too generalist to be useful, too easy to game, purely exam-based, and so on. Actually, this sort of criticism is in many respects fallacious — I’m not here to promote (ISC)2, but CISSP certification requires extensive experience as well as passing a fairly exhausting exam, sponsorship, and so on. There are also alternative qualifications for the less experienced and the more specialized, but CISSP is kind of a gold standard for certain types of job. It’s probably used inappropriately some of the time simply because hirers are desperate for competence metrics. On the other hand, I have suggested from time to time that highly-talented analysts could usefully acquire such a qualification, not just in order to increase their employability, but so that they understand the needs of the business user better.

[] What should we be teaching the next generation, and even the current generation of information security specialists and technicians, both in terms of skills and ethics?

[Mr. Harley] It shouldn’t be about trying desperately to teach the latest technology without reference to the fundamentals. Mostly, it should be about teaching people to learn, and to evaluate rather than absorb other people’s views. And it depresses me that (in the UK at any rate) we teach younger people to use prefabricated ultra-high-level modules and call it programming, reminding me of the way that we often teach the use of the calculator and call it mathematics.

Ethics? I sometimes think that the AV research community is pretty much the only sector of the industry that cares, and that other sectors are all too ready to use that against us, and ask “How can you be any good if you don’t do the same things a hacker does?” I’d certainly like to see more awareness of marketing ethics, extending to ethical disclosure, alongside discussion of what constitutes appropriate researcher ethics.

[] How is the interplay between government policies, technological innovations, economic forces, and social dynamics impacting the evolution of cybersecurity, and what are the biggest factors shaping education and employment in the field?

[Mr. Harley] These are complex and often contradictory factors. Take, for instance, the conflict between the average government’s urge to monitor and accumulate data on anyone who might be a danger to the status quo, and its desire to encourage the population to conform to good security practice as an aid to personal and national prosperity. I often think that when government agencies offer advice on security and privacy — in this context, often polar opposites — they resemble the cynical definition of chivalry: a man’s urge to defend a woman from every other man except himself. This dichotomy is probably very much at the heart of the modern security scene: national security, law enforcement, legitimate prosperity, and personal privacy often pulling in different directions.

Then there’s the remorseless drive towards social media and shared data, not to mention the Internet of Everything. Information is often not just poorly secured, but easily accessed. No wonder people of all ages are so often confused. I regret the absence, in so much education and training, of critical thinking and of encouragement for students to see things from all perspectives, especially a psychological viewpoint.

[] From your perspective, what are the one or two biggest misconceptions that people seem to have — even people “in the know” — about cyber attacks, malware, and information security?

[Mr. Harley] There’s a misconception that there is no OS X malware (or that OS X malware isn’t dangerous because it isn’t viral), and that Windows malware is entirely due to the inherent insecurity and incompetence of the operating system. (Microsoft didn’t always work as hard at security as it does now, admittedly.) There’s the belief that “Don’t click on suspicious links or attachments” is good advice; most people don’t click on anything if they think it’s suspicious. And there’s the idea that encrypting your documents somehow protects them against ransomware, or that the answer to ransomware is backups. (The answer to ransomware is a backup strategy that doesn’t expose your backups to unauthorized encryption.)
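That last point — backups only beat ransomware if the backups themselves can’t be encrypted by the attacker — can be illustrated with a minimal sketch. This is a hypothetical Python example, not anything Mr. Harley describes: `snapshot_backup` and the directory layout are assumptions for illustration only. The idea is simply that each backup lands in a fresh, timestamped snapshot that is then made read-only, so malware that later compromises the live data can’t silently rewrite earlier copies.

```python
import os
import shutil
import stat
import time

def snapshot_backup(src_dir: str, backup_root: str) -> str:
    """Hypothetical sketch: copy src_dir into a new timestamped
    snapshot under backup_root, then strip write permission from
    every copied file so the snapshot can't be trivially
    overwritten or encrypted later from the same account."""
    snap_name = time.strftime("%Y%m%d-%H%M%S")
    dest = os.path.join(backup_root, snap_name)
    shutil.copytree(src_dir, dest)  # fails if the snapshot already exists

    # Make every copied file read-only (owner, group, and others).
    for root, _dirs, files in os.walk(dest):
        for name in files:
            path = os.path.join(root, name)
            mode = os.stat(path).st_mode
            os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
    return dest
```

Read-only flags alone are weak protection — ransomware running with sufficient privilege can change permissions back — so in practice the same principle is applied with separate media, offline rotation, or write-once (immutable) storage. The sketch just makes the distinction concrete: a backup that the attacker can reach and rewrite is not a ransomware defense; a versioned, write-protected one can be.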