What’s on your network? 

By: Robert Jorgensen, Assistant Professor and Cybersecurity Program Director
In the last couple of years, security vulnerabilities have gone from obscure bulletins and esoteric CVE numbers to a marketer’s dream with catchy names, clever logos, and extensive news coverage. While this level of cybersecurity awareness promotes a more secure society, it is applying greater pressure than ever to IT managers and their teams.

Executives and management are suddenly aware of vulnerabilities blissfully ignored in the past. This awareness brings questions to IT staff, the most common being “Does this impact us?”

(SEE ALSO: Logjam, Shellshock, Ghost)

Unfortunately, this question is often met with silence, a Magic 8 Ball “ask again later” response, or a non-committal “I don’t know”. Those are answers no one wants to give when asked about a widely reported vulnerability.

Even worse, those are sometimes the answers to the question not asked enough:

“What is actually on our network, and how is it configured?”


Network Inventory

Around the turn of the millennium, there was a widely reported story about a university server that lived a solitary life in a closet that had been walled in years earlier. While nostalgic administrators often tout this likely fictitious tale as an example of how dependable systems were in the good old days, imagine that scenario now. You have a server on your network. You can see it. You can talk to it. You might even think you control it. But you have no idea where it is.

As a security professional, the wistful daydream of the system administrator quickly turns into something that keeps you awake at night.

Confidentiality, integrity, availability
Security professionals are tasked with three goals for information systems: confidentiality, integrity, and availability. Using the extreme example mentioned above, it is pretty easy to see how each of these is compromised.

If the physical location is unknown, there is no way to know if someone is tapping or viewing the data (confidentiality), modifying the data or the system directly (integrity), or if the server has reliable power, fire suppression, or theft prevention (availability).

Fortunately, most organizations do not run into an example this extreme. But many organizations struggle with maintaining an up-to-date list of software and hardware throughout the network, especially when it comes to systems that aren’t in production.

It’s not uncommon for organizations to have a pretty good idea of what’s in production, thanks to asset tracking and licensing requirements. Many IT departments track production configurations and deploy servers from a baseline. In both cases, things get less clear when development and test environments are involved. Development and site licenses may reduce the need for granular license tracking, and the older, fully depreciated hardware used in these environments may appear not to be worth tracking, but the security of these systems should still be a concern.

While staging and QA servers often mirror the configuration of production devices, development and test servers often sport basic configurations. Default passwords and simplified setups abound, and hardening is typically reserved for “real” environments. Whatever the reason, these machines remain vulnerable. Naturally, no one expects them to be accessible to the outside world, but it happens.

Take the 2012 State of Utah Medicaid breach, for example. More than 700,000 records were exposed, and remediation costs ran into the millions of dollars. What happened?

"The server was a test server and when it was put into production there was a misconfiguration. Processes were not followed and the password was very weak," Stephanie Weiss, spokesperson for DTS, told InformationWeek Healthcare.

Yikes! If regular inventory scans of devices on the production network had been performed, someone could have noticed this machine and remediated the situation.

Configuration management

Organizations commonly monitor critical systems using a variety of software packages. Too often this falls into a pattern of, “Server X is critical for application Y, so we should monitor it” rather than, “We should monitor the network itself for new devices.” Most monitoring software has scan and discovery modes, but how often are they run? Likewise, software inventory and configuration management tools can pull or push information about installed software and configurations. How often does this happen?
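To make this concrete, here is a minimal sketch of a recurring discovery pass built on nmap’s ping scan. The subnet and state file are placeholder assumptions, not a recommendation of any particular product:

```python
#!/usr/bin/env python3
"""Minimal discovery sketch: flag hosts that were not seen on the last pass.

Assumptions: nmap is installed, 192.0.2.0/24 stands in for your real ranges,
and known_hosts.txt is a hypothetical state file (not SSH's known_hosts).
"""
import subprocess
from pathlib import Path

SUBNET = "192.0.2.0/24"                  # placeholder subnet
STATE = Path("known_hosts.txt")          # hypothetical state file

def discover(subnet: str) -> set[str]:
    """Run an nmap ping scan (-sn) and return the set of responding IPs."""
    out = subprocess.run(
        ["nmap", "-sn", "-oG", "-", subnet],
        capture_output=True, text=True, check=True,
    ).stdout
    return {
        line.split()[1]                  # "Host: <ip> (<name>)  Status: Up"
        for line in out.splitlines()
        if line.startswith("Host:") and "Status: Up" in line
    }

if __name__ == "__main__":
    live = discover(SUBNET)
    known = set(STATE.read_text().split()) if STATE.exists() else set()
    for ip in sorted(live - known):
        print(f"NEW device responding: {ip}")  # feed this into your alerting
    STATE.write_text("\n".join(sorted(live)))
```

Run on a schedule, a diff like this turns discovery from an occasional project into a routine control.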

So, what is actually on your network and how is it configured?

Having a complete and up-to-date inventory of the devices and software on your network makes answering this much easier. A master software list showing each software package and version installed on servers and workstations can be used to quickly identify potential problem areas. Being able to check configurations regularly will help identify problems sooner.
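As a rough illustration of building such a master list, the following sketch pulls installed-package lists from Debian-family hosts over SSH and merges them into a single CSV. The host names are hypothetical, and it assumes key-based SSH access:

```python
#!/usr/bin/env python3
"""Sketch: aggregate installed packages into one master CSV.

Assumptions: Debian-family hosts, key-based SSH, placeholder host names.
`dpkg-query -W` prints one tab-separated "package<TAB>version" line per package.
"""
import csv
import subprocess

HOSTS = ["web01.example.com", "db01.example.com"]   # placeholder inventory

def installed_packages(host: str) -> list[tuple[str, str, str]]:
    """Return (host, package, version) rows for every package on a host."""
    out = subprocess.run(
        ["ssh", host, "dpkg-query", "-W"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [(host, *line.split("\t", 1)) for line in out.splitlines() if line]

with open("software_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["host", "package", "version"])
    for host in HOSTS:
        writer.writerows(installed_packages(host))
```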

Some vulnerabilities make answering this question more complicated. For example, the Heartbleed vulnerability affected OpenSSL. Your systems administrators might not remember explicitly installing OpenSSL, but many software projects use it to provide TLS support for their applications. While having a complete list of software and versions at hand may not instantly identify all affected software, it will speed up the process as vendors and projects update their user base with new information.
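For instance, once it was published that Heartbleed (CVE-2014-0160) affected upstream OpenSSL 1.0.1 through 1.0.1f, an inventory like the one above could be queried in seconds. A hedged sketch, with the caveat that distribution backports often patch without bumping the upstream version, so a match is a lead to investigate rather than a verdict:

```python
#!/usr/bin/env python3
"""Sketch: search the master inventory for Heartbleed-era OpenSSL builds.

Upstream OpenSSL 1.0.1 through 1.0.1f were affected by CVE-2014-0160.
Caveat: distro backports may patch without changing the upstream version.
"""
import csv

AFFECTED = {"1.0.1"} | {f"1.0.1{c}" for c in "abcdef"}

def upstream(version: str) -> str:
    """Strip the distro revision, e.g. '1.0.1e-2+deb7u4' -> '1.0.1e'."""
    return version.split("-", 1)[0]

with open("software_inventory.csv", newline="") as f:
    for row in csv.DictReader(f):
        if "openssl" in row["package"].lower() and upstream(row["version"]) in AFFECTED:
            print(f"{row['host']}: {row['package']} {row['version']} may be affected")
```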

Where to start?
The first step is finding out if your records match reality. Sure, that spreadsheet shows 15 machines on that subnet with 22 total IP addresses, but what is actually there? How many switch ports are active? How many virtual machines are being hosted on that blade server? Identifying everything may seem an overwhelming task at first, but it gets easier in subsequent iterations.
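A simple reconciliation pass might look like the sketch below, which diffs an exported asset list against the live hosts found by the earlier discovery scan. The file name and column are assumptions:

```python
#!/usr/bin/env python3
"""Sketch: reconcile the asset spreadsheet against a live scan.

Assumptions: assets.csv (exported from your records) has an 'ip' column,
and discover() is the nmap helper from the earlier discovery sketch.
"""
import csv

def expected_ips(path: str = "assets.csv") -> set[str]:
    """Load the IPs your records claim exist."""
    with open(path, newline="") as f:
        return {row["ip"].strip() for row in csv.DictReader(f)}

def reconcile(live: set[str], recorded: set[str]) -> None:
    """Print devices the records missed, and records nothing answered for."""
    for ip in sorted(live - recorded):
        print(f"UNRECORDED: {ip} is live but absent from the asset list")
    for ip in sorted(recorded - live):
        print(f"STALE?: {ip} is on the books but did not respond")
```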

The same goes for installed software and configurations. Pull the information and check it against your baseline. It’s amazing how much a little tweak here and there can cause individual servers to diverge over time. Perhaps some debugging tools were left behind from a previous troubleshooting session. How about former system administrator Joe’s account? Was it disabled everywhere?
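Checking something like Joe’s account across the fleet can be scripted. The sketch below uses `passwd -S` (from shadow-utils, typically requiring root) to report whether a hypothetical account is locked on each placeholder host:

```python
#!/usr/bin/env python3
"""Sketch: confirm a departed admin's account is locked fleet-wide.

Assumptions: placeholder hosts, a hypothetical 'joe' account, and enough
privilege on each host to run `passwd -S` for another user.
"""
import subprocess

HOSTS = ["web01.example.com", "db01.example.com"]   # placeholder inventory
USER = "joe"                                        # hypothetical ex-admin

for host in HOSTS:
    result = subprocess.run(
        ["ssh", host, "passwd", "-S", USER],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"{host}: no '{USER}' account found (or the check failed)")
        continue
    status = result.stdout.split()[1]   # P = usable password, L = locked, NP = none
    print(f"{host}: {USER} is {'locked' if status == 'L' else 'NOT locked!'}")
```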

Once you have this information, it’s a good time to verify patch levels.
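On Debian-family systems, one rough way to gauge patch levels is to count pending upgrades per host, as in this sketch. Note that apt itself warns its CLI output is not a stable interface, so treat the parsing as best-effort:

```python
#!/usr/bin/env python3
"""Sketch: count pending package upgrades per host as a rough patch gauge.

Assumptions: Debian-family hosts with placeholder names, key-based SSH.
"""
import subprocess

HOSTS = ["web01.example.com", "db01.example.com"]   # placeholder inventory

for host in HOSTS:
    out = subprocess.run(
        ["ssh", host, "apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout
    pending = [line for line in out.splitlines() if "upgradable from" in line]
    print(f"{host}: {len(pending)} packages have updates waiting")
    for line in pending[:5]:                        # show a small sample
        print(f"  {line.split('/', 1)[0]}")
```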

The 2015 Verizon Data Breach Investigations Report found that “99.9% of the exploited vulnerabilities had been compromised more than a year after the associated CVE was published.” Let that one sink in. Does your organization have a vulnerability that is a year or more old? It simply is not possible to know without up-to-date information. Just because something is stable doesn’t mean it is secure.

Security

Once you have established that your records reflect reality, it is time to monitor them to ensure they stay accurate. How often you check will depend on your organization’s overall security posture, but frequent, regularly scheduled updates will go a long way toward ensuring you have the best view of your systems.

A quick network scan a couple of times a day will have little impact on performance, but it may reveal the development workstation that just inadvertently bridged the production and test networks. More intensive tools should wait until off-peak hours.

When scheduled changes are made, check to see they reflect what was planned. Some things are overlooked and, occasionally, someone slips in an extra change during that maintenance window. As they say, trust but verify.

Finally, remember this is an iterative process subject to constant improvement. As the concept of network, system, and software inventory and configuration management moves from asset tracking and compliance to part of your operational security plan, things will become more efficient.
The confidence of having regular, updated information about your environment will change the entire tone of that inevitable “are we vulnerable” meeting.
Instead of delaying and waffling, you can look everyone in the eye and speak with authority. It may not be the answer they want to hear, but it is the correct answer and your organization can then move forward with remediation as necessary.

Robert Jorgensen is a cybersecurity professional and educator with over 20 years of experience in various technology roles. He holds multiple information security certifications, including CISSP, CISA, GCIA, GCIH, GPEN, and GXPN, as well as networking and systems certifications from Microsoft, Novell, and Cisco. A Utah native, Robert received his Master of Science in Information Systems from the University of Utah. Robert is on the faculty of Utah Valley University as an Assistant Professor and the Cybersecurity Program Director. He is currently building a cybersecurity academic program at UVU under a $3 million federal grant.
