Is Your Business Prepared for the Physical Security Threat?

Prevent stolen tablets, smartphones, and laptops with these basic tips.

By: David Ellis, Director of Forensic Investigations
Practically every business has access to at least one laptop, tablet, or smartphone. For many organizations, such as local restaurants and clinics, these devices provide quicker access to information, transactions, and services than ever before.

A problem with physical data security begins when users fail to adequately protect these devices. We inadvertently abandon our technology devices in unlocked offices, forget them on subways, leave them in our cars, and let our kids play with them. According to Healthcare IT News, 9 of the 10 largest Health Insurance Portability and Accountability Act (HIPAA) data breaches were caused by physical security issues.
If you store, transmit, or process sensitive data on a device, you will be held liable if any of that sensitive data is lost.
If a compliance guideline governs that data, serious repercussions, including financial penalties, could arise. Standards like the Payment Card Industry Data Security Standard (PCI DSS) and HIPAA include such data protection requirements and consequences for mishandling sensitive information.

Consider adding these basic physical security guidelines to your business strategy to protect trade secrets, customer data, and your business.

Control physical access to your workplace

The best way to control the physical threat is through a physical security policy, which includes all the rules and processes involved in protecting the business. If you keep confidential information, products, or equipment in the workplace, keep these items secured in a locked area. If possible, limit outsider office/business access to one monitored entrance, and (if applicable) require non-employees to wear visitor badges at all times.

Don’t store sensitive information (like payment card data) in the open. Many hotels keep binders full of credit card numbers behind the front desk, or piled on the fax machine, for easy reservations access. Unfortunately, that also means the files are easily accessible to anyone within arm’s reach of the front desk or fax machine.

Keep inventory of all removable devices

Not allowing devices to go home with their users is an important step to keeping data out of the hands of criminals. Some healthcare offices require their employees to check out a tablet each morning and return it to a locked safe at day’s end. Each user has an assigned tablet slot, and it is obvious if the space is left empty. 

Another solution you might consider is attaching external GPS tracking technology on all laptops, tablets, external hard drives, flash drives, and mobile devices. 

SEE ALSO: Balancing Mobile Convenience and PHI Security

Document physical security processes

It’s crucial to document the who, what, when, where, and why of device use to determine the responsible party if data is lost. Items that should be documented include a list of authorized users, the locations where each device is (and is not) allowed, and which applications may be accessed on it.

Oddly enough, it’s also recommended to document what sensitive data your business is trying to protect. Obviously, that must also be protected, and strict controls must be placed to allow access only to authorized users.

Download a free physical security policy template.

Train your employees

While you care about customer card information, patient data, or your own proprietary data, your employees may not. That’s why regular security trainings are so important.

Social engineering is a serious threat to smaller businesses. A social engineer uses social interaction to gain access to private areas, steal information, or carry out other malicious behavior, and your employees fall for these tricks more often than you might think.

If a man walked into your storefront and told you he was there to work on your network and needed you to lead him to the server room, would your employees think twice, ask for identification, and verify the reason for his visit?

SEE ALSO: Social Engineering - It's OK To Be Paranoid

Train your employees to question everything! It’s better to be safe than sorry. Establish a communication and response policy in case of suspicious behavior. Train employees to stop and question anyone who does not work for the company, especially if the person tries to enter back office or network areas. 

David Ellis (GCIH, QSA, PFI, CISSP) is Director of Forensic Investigations at SecurityMetrics with over 25 years of law enforcement and investigative experience. Check out his other blog posts.
Hey, Healthcare. Your Usernames and Passwords Are Embarrassing.

The IT security failure spanning every healthcare organization.

By: Brand Barney, Security Analyst
October is National Cyber Security Awareness Month so I thought I’d close out the month with a security tip for our IT friends in healthcare. 

First, let me give you a big high five on EHR security; it’s gaining serious traction. Most of you have started to implement unique usernames and passwords at the EHR level.

Now let me break the bad news. You still have work to do. And I’m not just talking to small practices here. I’m talking to medium entities. Big hospitals. Even organizations with a large IT staff. Here’s why.

HIPAA mandates that each healthcare organization employ unique login IDs and passwords. IT professionals, doctors, and compliance managers assume that requirement is covered because their EHR has a unique username and password for every employee. But at the network level, it often isn’t.

What that means: 
  • You’re not HIPAA compliant
  • You’re leaving patient data unsecured
  • Your network’s vulnerabilities pose great risk to your EHR security
Let’s pretend you have a very secure password for your EHR, but your networked computers aren’t protected by secure and unique passwords. Let’s also pretend your organization left remote desktop protocol wide open, or some IT guy left the Telnet protocol wide open. Both scenarios are extremely common in healthcare networks of every size.

A hacker cracks your crappy network password and gets in. He installs keylogger malware that records everything you type on your keyboard. He starts watching your traffic. In a matter of hours (or minutes) he now has the password to your ‘super duper secure’ EHR system. 

Mark my words. If you are breached in the next few years, it will likely be because of one of these three reasons:
  1. Bad business associate practices
  2. Insecure remote access
  3. You didn’t use secure, unique IDs and passwords at the network level
You’ve got to make sure the network security on your systems is buttoned down. And it all starts with unique login IDs and passwords.

It’s not just about good passwords; it’s about unique ones too

Let me give you an example that applies to practically every healthcare environment. My example dentist office has 4 stations for patient cleaning, running a Dentrix EHR system. The computer login to station 1 is hyg01. The password is drbrown1. 

SEE ALSO: HIPAA Compliant Passwords

Any security professional (or hacker) could crack that username/password combo in a matter of moments. The dental office probably thinks the password is totally secure because it’s eight characters long and contains a number. Wrong.
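To put rough numbers on that claim, here is a back-of-the-envelope sketch in Python. The guess rate is an illustrative assumption (an offline attack against a fast, unsalted hash); in practice attackers use dictionary lists, which crack a name-plus-digit password like drbrown1 far faster than raw brute force:

```python
def crack_time_seconds(length: int, alphabet_size: int,
                       guesses_per_second: float = 1e10) -> float:
    """Worst-case seconds to exhaust the full keyspace at a given guess rate.

    The 1e10 guesses/sec default is an illustrative assumption for an
    offline attack against a fast, unsalted hash.
    """
    return alphabet_size ** length / guesses_per_second

# "drbrown1": 8 characters drawn from lowercase letters + digits (36 symbols)
weak = crack_time_seconds(8, 36)
# Compare: 14 characters drawn from all 95 printable ASCII characters
strong = crack_time_seconds(14, 95)

print(f"8-char lowercase+digit: ~{weak / 60:.0f} minutes, worst case")
print(f"14-char full ASCII:     ~{strong / (3600 * 24 * 365):.1e} years")
```

The eight-character password falls in minutes even under exhaustive search; a dictionary attack would do better still, since a surname plus a digit is a pattern cracking tools try first.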

But the most grievous part of this scenario is that the username and password are static. They’re not specific to the hygienist or doctor. Anyone can log on to that computer. The dentist’s EHR (Dentrix) may have unique user IDs and passwords, but each station doesn’t. 

Riddle me this. If your organization has a breach, how do you prove who got in if every single person at your organization has the same login as everyone else? How would you prove, as an employer, who stole or lost patient data? 

Consider this healthcare scenario. A 21-year-old former employee lost his job. He’s bitter about it. And guess what? He knows your usernames and passwords because no one has their own. He vindictively thinks, “I’m going to take some patient data. Besides, you can’t track it back to me anyway.”

In 2014, Intermedia found that at least 89% of employees retained access to at least one login and password from a former employer, and 45% retained access to confidential or highly confidential data.

Here’s another example.

Sometimes, computer stations aren’t even locked. I was recently consulting at a dental office and asked the office manager if I could walk around. As I walked past one of their computers, I flicked the mouse. The screen popped right up in Dentrix with an open patient record. Not only had the dental hygienist not closed out of the patient record, but the system hadn’t been configured to lock machines back to the login screen.

If an attacker had walked in and grabbed a machine, the entire system would have been available to him.

The problem? Laziness? Lack of direction?

Now, I used to work at Dentrix. I know Dentrix systems can require users to authenticate every time they log in to Dentrix, if configured appropriately. I also happen to know that most (if not every) computer system in the world has the ability to set up uniquely identifiable usernames and passwords for multiple users across a network.

So why is no one implementing screen locks? Why is no one implementing unique IDs and passwords? IT guys know better. They’re often lazy, or don’t have the stomach to tell the C-level their current password situation isn’t good enough. Or worse, the C-level restricts the IT staff from implementing these measures because they don’t think it’s necessary.

Setting up unique user IDs and passwords does require some work from IT (hours depend on organization size). It means enabling the Active Directory Domain Services (AD DS) role and setting up one or more domain controllers that push the policies for unique user IDs and passwords to the organization’s forest of computers.

Need Active Directory guidance for Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012?

How to implement strong password policies on computers running Windows 2000, Windows XP, and Windows Server 2003. 


I don’t mean to be too harsh here, but healthcare’s security is embarrassing. 

Please, for the sake of your organization and your patients’ data, make the simple change to require unique usernames and passwords at the network level for each of your staff members. Don’t let the myth that ‘our EHR security covers patient data’ convince you otherwise.

Remember, your security matters!

(Thanks to SingleHop for inspiring the Get Involved NCSAM campaign for cybersecurity, and this post!) 

Brand Barney (CISSP) is an Associate Security Analyst at SecurityMetrics and has over 10 years of compliance, data security, and database management experience. Follow him on Twitter and check out his other blog posts.
SSL 3.0: POODLE Vulnerability Update

Who it affects, how hackers could use it, and what you should do about it.

You’ve probably heard about the newest online security threat, POODLE. While not as menacing as Shellshock or Heartbleed, many are still concerned about its potential impact on their business or personal security. 

Here’s what we think: the chances of this vulnerability being used to compromise your sensitive information are relatively low. Successful exploitation requires so many preconditions that in-the-wild attacks are unlikely; POODLE would probably only be a concern if you are likely to be targeted by a state-sponsored organization.

Here are the facts

  • POODLE affects browsers with JavaScript enabled that support SSL 3.0 
  • The vulnerability could be used to retrieve authentication cookies that are encrypted via a man-in-the-middle attack
While this is a legitimate attack, the likelihood of being compromised via POODLE is very small.

Can I be compromised through POODLE?

Here’s an explanation of how an attack would have to take place in order for an attacker to exploit POODLE and assume the user’s identity on the target site.
  1. The victim must be logged into a site using HTTPS (and the session cookie must not be expired)
  2. The victim must browse to another website over HTTP before the session cookie expires
  3. The attacker must write custom JavaScript code to exploit POODLE. To date, no prepackaged tool has been published to exploit POODLE
  4. The attacker must inject ~5,000 requests in order to decrypt the session cookie 
Basically, this attack is extremely difficult because both the merchant and the user have to be vulnerable. 

Our recommendations?

  • If you are still using Internet Explorer 6, you are running on an obsolete, unsupported operating system and should upgrade to a newer one. If upgrading is not an option, switch to a newer browser that does not support SSL 3.0. 
  • If you are running a webserver and currently support SSL 3.0, you need to evaluate your business requirements to determine if SSL 3.0 is currently being used. If it is not, simply disable SSL 3.0. Otherwise, develop a plan to disable it as soon as possible. 
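To illustrate the server-side fix, here is a minimal sketch using Python’s standard ssl module. The exact change on your own stack will differ (for example, Apache’s SSLProtocol directive or nginx’s ssl_protocols setting), but the idea is the same: refuse SSLv3 handshakes outright.

```python
import ssl

def hardened_server_context() -> ssl.SSLContext:
    """Build a server-side TLS context that refuses SSL 3.0.

    Certificates would be loaded separately with ctx.load_cert_chain().
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.options |= ssl.OP_NO_SSLv3                # explicit opt-out of SSLv3
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # raise the floor (Python 3.7+)
    return ctx
```

Raising minimum_version is the modern approach; the OP_NO_SSLv3 flag is the belt-and-suspenders equivalent for older interpreters.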
SecurityMetrics plans to update our website after notifying customers of the change. We made this choice because POODLE is not a critical-risk vulnerability, and we do not believe it puts our customers at direct risk.

SecurityMetrics is ready to help our customers navigate through POODLE with the highest reliability and least business impact. If you have any questions, please contact SecurityMetrics support, 801.705.5700.

My OCR Audit, and How I Survived

An interview with Doreen Espinoza of UHIN.

Doreen Espinoza, Business Development and Privacy Officer of UHIN answered some tough questions about her audit with The Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR). UHIN (Utah Health Information Network) is a full-service clearinghouse and Health Information Exchange (HIE) that specializes in administrative and clinical exchanges. The organization was randomly chosen for a pilot audit in 2012, and was one of only two clearinghouse entities that passed their audit with “no findings”. 

Our hope is that this interview gives you better insight into what to expect from any future OCR audits. This is her experience, from start to finish.

How did your audit with the OCR begin?

ESPINOZA: We received the letter from Leon Rodriguez (former OCR director) in May of 2012. The letter asked us to put together our documents in two weeks. At the time, we were already going through an EHNAC (Electronic Healthcare Network Accreditation Commission) audit. 

I told the OCR, “Sorry, but two weeks isn’t going to happen. I can’t do two audits at once, not of this magnitude.” Luckily, they worked with me and we negotiated a new date. After we finished our EHNAC audit, (a month after I received the letter from the OCR) I was then able to focus on the OCR audit. 

The amount of time the OCR gives you to prepare for the audit is interesting. Whether you have a really solid program (which we do) or are new to the process, it takes a lot of time. I ultimately spent 180 hours on the audit, even working nights and weekends; 160 of those hours were spent merely gathering documentation. It took about a month to get all our documents ready to turn in.

Explain your feelings before the audit.

ESPINOZA: Because we were one of the first to be audited, I wasn’t afraid our documentation would be lacking. As I explained, this wasn’t our first audit. However, if I had been a provider with little to no understanding, I would have been scared.

I did have one concern: would the OCR auditors understand what they were auditing? The auditing firm, McKesson, is basically an accounting firm and new to HIPAA audits. Since I hold an accounting degree, I understand how they think and what they’re trained to do. The problem is, a privacy and security audit is not the same as a financial audit.

These were my thoughts before the audit: Do they have any healthcare knowledge? Do they know how to interpret HIPAA rules? Do they have sufficient knowledge to understand our documentation?

When I asked the auditors who had audited a clearinghouse, only one hand of four went up. I think they understood, generally. But I did have to push back on one of the audit points. 

Requirement 164.520 requires a notice of privacy practices, but because UHIN is a clearinghouse, it doesn’t make sense for us to have one. We are technically a covered entity, but we don’t have patients. After a fair amount of explaining, I was able to convince them we were compliant without one.

How intrusive was the onsite audit?

ESPINOZA: Most of the interaction was with me, though our security officer was a part of some conversations as well. 

Besides the thorough examination of our documentation, the auditors went through our office looking at basic facility security, checking to see if doors were locked and where workstations were located. 

I walked them through the building and explained our workflows. I also gave them an explanation of our data center. 

The first 70 documents I submitted to them, they reviewed as a part of their pre-audit evaluation. When the auditors came onsite, they asked for an additional 55 documents. The onsite visit is truly to ask you additional questions and get additional documentation. 

They were there for three days, and those three days were really intense. It felt like an interrogation. They asked a question, I answered it, then they moved to the next question. 

The main focus of the audit was privacy and documentation, which was a little disappointing to me. I thought the audit would also test security, like passwords and such. I am very proud of our data center and offered to take them through it, but they didn’t take me up on the offer.
They did a really good job of asking a million documentation questions. They just didn’t take it any further than that.
That’s why I think companies like SecurityMetrics are great. After our OCR audit, we used SecurityMetrics to look at our security and it was a great security review. Honestly, I wish I had SecurityMetrics at that time. If nothing else, just to prove our security to the OCR. I can write policy all day and night. But to show compliance? Security is the tangible way to support privacy. 

SEE ALSO: What to expect with an HHS audit

What was the impact of this audit on your organization?

ESPINOZA: Since privacy is my job, I was probably the most impacted by the audit. Another thing that made this audit so intense was, in 2012, HIPAA 5010 was rolled out. So I didn’t have a whole lot of help preparing for the audit. Everyone else was busy implementing HIPAA 5010. 

What are the differences between an EHNAC and OCR audit?

ESPINOZA: EHNAC is a non-profit organization that accredits large organizations like clearinghouses and clinical health exchanges. We've held our EHNAC accreditation since 2004. To be accredited, we have to undergo an audit. 

The difference between an OCR and EHNAC audit is that the OCR auditors wanted us to prove we were compliant with the rule, but didn’t provide examples of acceptable evidentiary documentation. I don’t know that the OCR auditors really knew what to look for, but remember, we were in the pilot audits. EHNAC specifically states which parts of HIPAA you must comply with, and gives examples of how to show that compliance.

What information and documentation did the OCR request? 

ESPINOZA: All in all, 127 documents. Here are some specific examples:
  • Work desk procedures (e.g., thou shalt have a password, thou shalt change that password every 90 days, etc.)
  • Risk analysis
  • Contracts. We don’t do business associate agreements (BAA), but we do enforceable consent agreements (ECA) which incorporate BAA language
  • Training logs
  • Incident management
  • Complaint processes and procedures
  • Password policies
  • Electronic commerce agreement
  • EHNAC Self-Assessment
  • Trading partner security requirements 
  • Lists of vendors 
  • Lists of employees and their access to the system
  • Diagram of what our office looks like and where the exits are
  • Disaster recovery book
  • Employee handbook
  • Breach processes
  • Policies and procedures for security and privacy
  • …and lots more.
In a nutshell, we gave them our policies and procedures, lists, diagrams, workflows, and organizational charts. 

How did you feel after the audit? 

ESPINOZA: I was ecstatic. It was a sigh of relief to know it was over. Remember, I had already gone through the stress of our EHNAC audit. I was so proud and excited to see that we had completed our audit with no findings. 

What do you wish you had known about your audit?

ESPINOZA: If you get a letter and don’t already have everything prepared, you won’t have time to prepare properly, and you can’t expect a good outcome. Your audit will fail.

In retrospect, I wish I had known there were companies in addition to EHNAC that could have prepared us for the audit. My advice to anyone out there preparing for an audit is: investigate other organizations that could help you pass your audit. Nobody should have to go through an audit alone. Reach out to organizations like SecurityMetrics and EHNAC now to help you with your data security! 

SEE ALSO: You may not be done with your HIPAA requirements 

How should organizations prepare for an audit now?

  1. Gather your documentation now. Organize it.  
  2. Conduct an annual risk analysis. Not only is it HIPAA required, it makes sense. 
  3. Do periodic mini-audits internally. One day, go through your facility. Are your doors and filing cabinets locked? It doesn’t seem like a big deal, but I promise you’ll find a lot. 
  4. Make sure your company is committed. Make it a priority in your organization. Had UHIN not been committed to privacy and security all along, we would have never passed our audit. It’s all about the commitment of the organization. It really does take the entire group to make this stuff work. 

So, how do you survive an OCR audit?

ESPINOZA: Diet Coke, a calm demeanor, and help from others. :)

Securing Keys and Certificates: A PCI Auditor’s Perspective

Gary Glover and Brandon Benson on keys, Heartbleed, and security.

Businesses must ensure their key servers, certificate authorities, open SSL libraries, and server updates are secure. Christine Drake of Venafi interviews Gary Glover, Director of Security Assessments at SecurityMetrics, and Brandon Benson, Security Analyst at SecurityMetrics.

PCI Requirement 2.4

Listen to the full interview

Christine Drake: Let’s talk about the Payment Card Industry Data Security Standard version 3, and how it applies to visibility in keys and certificates. I’m with Venafi, and we deliver next generation trust protection, securing keys and certificates.

Gary Glover: I’ve been a Qualified Security Assessor (QSA) in this industry for about 10 years, since before PCI DSS was even a standard. Over that period I’ve conducted many PCI DSS and PA-DSS assessments and a lot of consulting. Now I manage a group of QSAs and penetration test engineers at SecurityMetrics.

Brandon Benson: I’ve been a QSA at SecurityMetrics for approximately 4 years now. I work with point-to-point encryption and related algorithm standards, and have quite a bit of experience managing and dealing with keys. I help companies interpret the standards in their environment, see what controls are in place, and determine what they need to fix on a regular basis.

Christine: I want to focus on Requirement 2.4, which requires an inventory of all system components in scope for the PCI DSS, and that includes keys and certificates. Gary and Brandon, do you think businesses know where their in-scope keys and certificates are?

Brandon: It is definitely more difficult for newer merchants or companies just starting to undergo the assessment process for the first time. Those are areas they haven’t really focused on in the past.

Gary: Most QSAs will go through a discovery process as they prepare someone for their first audit. During that process, we help businesses identify where those keys are.

Early on, when PCI assessors start asking questions about keys, IT staff members say, “I don’t know where those are.” or “The person who did that key process left the company. Let me figure that out and get back to you.” Scenarios like that happen a lot.

Christine: Sounds like it’s a manual back-and-forth process. As they hear more about what they need to discover, they go out and find it. So scoping isn’t a one-time exercise, but a back-and-forth until you’ve covered everything?

Gary: Yes. When we get people prepared for a full PCI DSS audit, we are constantly educating on data flows and how the digital keys are being used inside their network to protect an SSL stream. Part of the process we go through is identifying which employee knows where the keys are. We have to get them on the phone, talk them through it, and identify the flows. So yes, the discovery process is a bit of a manual thing. After a customer has gone through this process a few times, they’ll know which keys need to be changed, and which ones we’re going to ask them about.

The most difficult part of a PCI audit is determining the scope, ensuring it’s correct, and helping the customer understand how it might be modified to minimize scope.

It’s important for people to realize that the keys QSAs talk about are the ones used to secure static credit card data. Ensure you have good key management procedures around those. Sometimes customers gloss over SSL key expiration dates and don’t care whether certificates are self-signed.
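Catching expired or soon-to-expire certificates is easy to automate. Here is a minimal sketch using Python’s standard ssl module; the 30-day threshold and the alerting step are placeholder assumptions, and in practice the notAfter string would come from getpeercert() on a live TLS connection:

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Days until a certificate expires, given the 'notAfter' string as
    returned by ssl.SSLSocket.getpeercert(), e.g. 'Jun  1 12:00:00 2025 GMT'.
    """
    expires = ssl.cert_time_to_seconds(not_after)
    return (expires - (now if now is not None else time.time())) / 86400

# Hypothetical usage on a live connection:
#   cert = tls_sock.getpeercert()
#   if days_until_expiry(cert["notAfter"]) < 30:
#       ...alert whoever manages the certificate...
```

Running a check like this on a schedule keeps expiration dates from being an afterthought the next time an assessor asks about them.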

Christine: Do you include SSL keys as part of the scope in PCI audits?

Brandon: Yes. PCI DSS Requirement 4 specifically addresses encryption of data transmitted over open, public networks. That’s where we focus more on certificates and the keys used to encrypt data on the fly.

PCI Requirement 5

Listen to the full interview

Christine: Let’s talk now about Requirement 5, which includes a new provision requiring companies to evaluate uncommon systems to see if they are susceptible to malware.

We recently saw that remediating the Heartbleed vulnerability required replacing all keys and certificates. Very recently, a breach at a health systems services company exposed 4.5 million records because the keys and certificates were not updated. Gartner predicts that 50% of all network threats will use SSL by 2017. So I’d like to get your take on how Requirement 5 applies to keys and certificates.

Brandon: The primary target for malicious software is keys. If I have access to keys, I have access to your encrypted data. That’s why it’s so important to protect the keys.

But what we’re seeing is malware attacking vulnerabilities in the applications that use those keys. Heartbleed, for example, didn’t attack the keys themselves; it exploited an OpenSSL vulnerability to obtain the keys.

I think that’s what Requirement 5 in PCI DSS 3.0 addresses. We need to make sure our key servers, certificate authorities, OpenSSL libraries, and update servers are secure, because that area has been neglected in the past.

The keys themselves are the target, but it’s not key strength that’s being attacked. It’s how the keys are managed.

Gary: I’d also like to add that when QSAs go through a system, they are supposed to look at all systems in scope and determine how those systems affect network security.

Technically, you can define something as inside a network zone, but it may be a server outside the network zone that’s critical to the security of the card network. It may be a shared key server. Even though it’s not technically inside the boundary of the cardholder data network, we’d want to make sure good controls were placed on that server. That might include anti-virus and anti-malware protection.

Christine: It really doesn’t matter if an organization sees keys or certificates as a common or uncommon system attacked by malicious software. Really, they’re supposed to be looking at their entire environment. Requirement 5 emphasizes that.

Brandon, you brought up the point that keys and certificates are a target, and compromising those assets helps in the delivery of malware. It sounds like protection would go beyond anti-virus.

Brandon: As you start looking at the security of an environment, the strength of the key is the key! No pun intended. We recommend that all our customers monitor their key locations. We don’t want keys to be swapped out or changed in any way.

If any keys are compromised, released, or disclosed, your environment becomes vulnerable to attack. Key misuse has a direct impact on the security of a cardholder environment.

I also wanted to mention that we’re seeing malware stream data out of environments over SSL. So it’s not like the bad guys don’t understand the importance of encryption; they’re now using it to hide the data they steal from these environments.

Christine: Sounds like it’s important to do anomaly detection and make sure these assets are being used as they are intended to be used. Do you think Heartbleed will put more focus on key and certificate security in audits going forward?

Brandon: The short answer to that question is yes. The long answer is, it’s not just the keys we have to focus on; it’s also the systems and components supporting those keys. A key could be 100% secure. You can store a key in a hardware security module or a key management server that’s 100% isolated from the system. But the moment I need to use a key, for example in an OpenSSL library receiving web traffic, I need to put protections around the key in all the locations where it’s used.

Heartbleed reemphasized the need for companies and assessors to understand where all keys are located and to ensure proper controls are in place to protect them.

Christine: Sounds like Heartbleed has a big ripple effect, but really what matters is remediation.

Brandon: Because Heartbleed attacked OpenSSL to steal secret keys, that’s what companies had to patch. Once they patched their OpenSSL libraries, they then had to issue new private keys for everyone. We saw some companies patch but not replace their keys; others replaced keys but didn’t patch. When you look at security, it’s really a multi-layer approach.

Christine: Whether it’s Heartbleed or another attack, you have to make sure remediation happens. I would assume you look at the latest attacks when you’re doing audits?

Brandon: Any QSA should consider that part of their process. As we learn about new vulnerabilities, we communicate them to our customers. It’s not uncommon for me to email my customers and say, “I know you’re using OpenSSL, have you seen this?” or “I know you’re using Oracle or MySQL, have you seen these vulnerabilities released today?”

Gary: The real point of Requirement 5 is that things change. Even if you think a system is out of scope for anti-virus or malware protection, you’ve got to stay aware of what’s going on in the industry and with new vulnerabilities. A system that was out of scope for anti-malware earlier in the year may not be now.

Christine: Excellent point, Gary. Businesses must continue to look at their scope and security over time and not assume it’s finished after one review.

Want more? Watch this Infosecurity Magazine webinar:
What’s new in PCI DSS v3 for cryptographic keys and digital certificates?
The new PCI DSS v3 mandates stronger security for the technology that creates trust between servers, devices, and cloud—cryptographic keys and digital certificates. With cybercriminals hungry to steal keys and remediation of Heartbleed still incomplete, there’s sure to be more attention to this in audits. Yet the PCI DSS v3 requirements demand more visibility and security over keys and certificates than most organizations can deliver.