Start the Holidays Right: 20% Discount on Vulnerability Scanning

Security is our present to you.

In an effort to lessen the rash of business compromises that coincide with the holiday shopping season, we are offering a discount on vulnerability scanning for new and returning customers this fall! 

Read the press release.

Vulnerability scans are automated tests that locate vulnerabilities, or holes, in business environments that could give a hacker access to a network to steal customer credit card data.

Not only are quarterly vulnerability scans one of the easiest and best things you can do to remain secure, they are also a Payment Card Industry Data Security Standard (PCI DSS) requirement for most merchants.

SEE ALSO: PCI Compliance Scanning Requirements

Holiday security

The period between Thanksgiving and Christmas sees more frequent credit card purchases than any other time of year. In fact, total holiday season sales for retail are expected to reach $863 billion this year. The bummer is, this increase in transactions may also mean increased liability if a business has been breached.

Businesses can and should protect themselves from hackers trying to take advantage of the uptick in holiday transactions by running vulnerability scans as often as possible and fixing the problems those scans find immediately.

Discover more about SecurityMetrics vulnerability scanning.

Remediating vulnerabilities can help you avoid common hacking tactics, like SQL injection and remote access exploitation. And if you need help, SecurityMetrics technical support agents are standing by 24 hours a day, 7 days a week to assist with scans and vulnerability remediation.
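One of those tactics, SQL injection, is easy to demonstrate. Below is a minimal sketch in Python using an in-memory SQLite table (the table name and data are made up for illustration); it shows how a classic injection payload defeats a query built by string concatenation, and how a parameterized query stops it:

```python
import sqlite3

# Hypothetical customer table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, card_last4 TEXT)")
conn.execute("INSERT INTO customers VALUES ('Alice', '4242')")
conn.execute("INSERT INTO customers VALUES ('Bob', '1881')")

user_input = "' OR '1'='1"  # a classic SQL injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query,
# so the WHERE clause matches every row in the table.
vulnerable = conn.execute(
    "SELECT * FROM customers WHERE name = '" + user_input + "'"
).fetchall()
print(len(vulnerable))  # 2 -- the payload returned every customer

# SAFE: a parameterized query treats the payload as a literal string,
# so it matches nothing.
safe = conn.execute(
    "SELECT * FROM customers WHERE name = ?", (user_input,)
).fetchall()
print(len(safe))  # 0 -- no customer is literally named "' OR '1'='1"
```

A vulnerability scan can flag injectable inputs like this, but remediation means changing the code itself to use parameterized queries everywhere.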

Get the 20% SecurityMetrics vulnerability scanning discount!

The #1 Way to Help Your HIPAA Audits Go Faster

How can you secure your organization without knowing how patient data travels?

By: Tod Ferran, Security Analyst
Every privacy/security/compliance official should understand the specific details of how patient data flows in their organization: the point of entry, where it flows within the organization, where and in what format it is stored, and its exit points.

That’s a lot of information to keep straight, especially for large providers and hospitals with dozens of departments. How does an official keep track of that? Data flow diagrams. 
Example of a patient data flow diagram
Data flow diagrams are graphical representations of PHI flow throughout your systems. They are a crucial part of every healthcare organization's HIPAA security efforts, especially while creating a complete and thorough risk analysis.

SEE ALSO: HIPAA Security Tip: Understand Your Data Flow

Unfortunately, lack of data flow diagrams is the #1 problem I see when auditing healthcare entities. Organizations simply don’t have them. How are you supposed to implement appropriate safeguards if you don’t know which areas to safeguard? Maintaining a current PHI flow diagram is absolutely foundational to your security program and HIPAA compliance. 

Besides being a great overview of your systems, here are a few specific reasons you should be creating data flow diagrams:

  • IT doesn’t always set up networks with security in mind. Tracking where PHI travels, enters, and exits will help you spot any strange processes and adjust for efficiency.
  • By recording every instance of PHI, you can determine which systems, computers, and users require extra (or less) security technology.
  • Data flow diagrams help IT when it comes time for upgrades, as the diagram shows every computer/role, database, and network that should be included in an upgrade.
  • If your organization undergoes a breach, you will be able to track the possible weaknesses that could have led to the compromise.
  • Your HIPAA audit will go significantly faster if a PHI data flow diagram is already created. I speak from experience here. Your auditor will absolutely love you for it.

What does HIPAA say about data flow diagrams?

Data flow diagrams can greatly enhance network security and can make your HIPAA compliance process easier. 

While HIPAA doesn’t specifically state providers must produce a data flow diagram to be HIPAA compliant, the OCR Audit Protocol does state that auditors must, “determine if the covered entity has identified all systems that contain, process, or transmit ePHI.” What better way to do that than to ask a healthcare provider to deliver a PHI flow diagram?
The healthcare security audits I conduct would go much faster if the entity simply had detailed PHI flow diagrams of their system.
The following is a step-by-step process to help you correctly create flows in your healthcare security environment.

Step 1: Scope definition

The first step is learning where your data resides. This is also the first part of a HIPAA Risk Analysis. (Need help with your risk analysis?) Scope is an inventory of all the places your organization accesses, creates, stores, transmits, or maintains PHI. The following may or may not be in scope (containing PHI), depending on your environment:

  • Security appliances
  • Patient admissions
  • Email system
  • Data warehouse
  • File shares
  • Ticketing systems
  • Telephone recordings
  • Tablets/smart phones/mobile devices

Take a few minutes and try to identify everything in scope.

Step 2: Interview workforce members

Oftentimes, it’s simply not possible to create a data flow diagram on your own. The only way to ensure accuracy is to interview every single workforce member who has access to PHI. Your employees might know about random processes or data exits that no one else knows about. Interview process owners, web developers, sales force, physicians, third parties, etc. 

SEE ALSO: 5 Things You Should Know About Minimum Necessary PHI

This step is the hardest of the bunch. Trying to track down every PHI location, its flow, and what process put it there is exhausting and extremely time consuming. That’s why keeping detailed documentation of your findings is crucial to your flows…and your sanity.

Step 3: Create flow diagrams

Building on your findings from steps 1 and 2, flow diagrams illustrate the location and flows of PHI. It often makes sense to have a separate diagram for each different in-flow and each different out-flow. Once a diagram is completed, you never have to create it again! All you have to do is update it when processes or vendors change.

Data flow diagrams will make your life easier. I promise.

It’s somewhat embarrassing when healthcare organizations lack something as important to their data security as flow diagrams. If your organization is actively working toward HIPAA compliance, your data flow diagram will play a crucial part in that development.

Let me know if you need help with your flow diagrams by commenting below, or schedule a consulting session with me by emailing or calling 801.705.5656.

Tod Ferran (CISSP, QSA) is a Security Analyst for SecurityMetrics with 25 years of IT security experience. He provides security consulting, risk analysis assistance, risk management plan support, and performs HIPAA and PCI compliance audits. Check out his other blog posts.

Coding Culture Will Ruin Your Audit…and Your Security

Developers do not follow secure coding guidelines, but it’s not entirely their fault.

By: Brand Barney, Security Analyst
According to OWASP, one in five companies experienced a data breach due to a web application security incident. Those are pretty bad odds. Unfortunately, coding flaws translate into bad security more often than you might think.

Now, I hate to bash on coders/developers. Some of my best friends are developers. But, I have to be honest about organizational problems in order to help businesses fix security vulnerabilities. And that means exposing the truth about development culture. 

Here are five reasons coders might ruin your audit…and your security.

1. Coders regularly face heavy deadlines that may lead to sloppy quality

From a business standpoint, it makes sense to push product as fast as possible in order to beat competition to market. Well, from a security (and coding) standpoint, that is an absolutely ridiculous idea. 

Coders don’t have the luxury of taking their time with secure code. They have bosses, product managers, directors, and VPs harping on them to push code as fast as they can. The faster code is pushed, the less time is spent making it secure, and the more mistakes are likely to be made.

2. Coders don’t always use proper documentation 

Each developer on your team probably comes from a different background and has a variety of skills and coding languages up his sleeve: PHP, Java, C, C++, Perl, Ruby, etc. Newbies, veterans, and job jumpers are all familiar with different code. The problem is, they are collaborating on the same projects.

Consider this very common scenario.

A developer is assigned to add a function to an existing product. He writes the function in an hour. Then he finds a problem with code written by one of his predecessors, and his function won’t work until he fixes it. So he goes hunting through broken code that he has no idea how to fix. No documentation exists that tells him who wrote the code, when it was written, or why it was created. Because his team never had a formal policy on code writing, he just wasted six hours to push that function (one hour writing it and five fixing the problem).

The majority of the time, developers write comments in the code, which I highly recommend. However, if comments are literally the only code change documentation, all it takes is one accidental line deletion and that “documentation” is gone forever. 

Check out these hilarious comment examples taken from real code.

// Dear maintainer:
//
// Once you are done trying to 'optimize' this routine,
// and have realized what a terrible mistake that was,
// please increment the following counter as a warning
// to the next guy:
//
// total_hours_wasted_here = 42
//

/*
 * You may think you know what the following code does.
 * But you dont. Trust me.
 * Fiddle with it, and youll spend many a sleepless
 * night cursing the moment you thought youd be clever
 * enough to "optimize" the code below.
 * Now close this file and go play with something else.
 */

// This code sucks, you know it and I know it.
// Move on and call me an idiot later.

// If this comment is removed the program will blow up

// I am not sure if we need this, but too scared to delete.

//I am not sure why this works but it fixes the problem. 

/* after hours of consulting the tome of google i have discovered
   that by the will of unknown forces, without the below line,
   IE7 believes that 6px = 12px */
font-size: 0px;

3. Coders aren’t magicians

It’s probably safe to say that no developer knows every library and language framework out there. If you ask a web developer to create a scrolling, randomizing function that works on mobile devices in C++, what does he do? He’ll likely Google it.

Developers are living, breathing Google machines. The problem is, half the solutions they find on the Internet (and then implement in product and web code) are 10 years old or have since been proven insecure!

4. Coders code on personal, unencrypted machines

Many companies hire temporary developers during a product launch or for a new website. Usually those developers code on their personal machines. All it takes is one stolen, unencrypted laptop for a hacker to have your entire web code to himself. 

5. Coders experience burnout

Coders truly are artists. They wish to create beautiful secure code that’s well-organized and designed. But good code rarely makes it into work projects. Good code is what your coders do in their free time. It’s what they look at when they’re feeling depressed at work. 

Why? Like I mentioned above, most companies don’t give them license to write good code. The stress of yesterday’s (and tomorrow’s) deadlines slowly beats them down. They start staying late to get work done. Then they come to work the next day to a new pile of requests.

Burnout is the eventual fate of many great developers. The more burned out a developer is, the less likely he is to care about his code being secure.

How do you break the cycle?

Yes, I just made some serious accusations. Many organizations deny these problems happen on their team. “We have too large of a budget for this to happen!” or “My third party does all my coding. No way they have a problem with this!”

Well, that’s where you’re wrong. I guarantee that at least one of the above problems happens in 99% of organizations, regardless of size, budget, or team. 

Before you begin to fix it, you must get the C-level to understand it’s happening. It’s not enough just to hire more developers or fire everyone and start over. Remember, this is a cultural problem that transcends your own organization. 
If management and development team members don’t understand the costs and damage insecure/sloppy coding can have on the business, it’s time to educate them on the benefits of secure coding practices.

Start secure coding practices 

Following coding standards makes it simple to focus on security rather than spending time fixing problems in code. If coding standard guidelines are shared across your organization, coders can easily pick up each other’s work while maintaining secure code.

If your coders don’t come from a security background, ask them to review the OWASP Top 10, and NIST guidelines.
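As a taste of what the OWASP Top 10 covers, here is a minimal sketch of one of its classic flaws, cross-site scripting, and the standard fix of escaping untrusted output. The `render_comment` helper is hypothetical, written in Python for illustration:

```python
from html import escape

def render_comment(comment: str) -> str:
    # VULNERABLE version: untrusted input dropped straight into HTML
    # would let a visitor's payload run as script in other browsers:
    #   return "<p>" + comment + "</p>"

    # SAFE: escape HTML special characters so the browser renders the
    # payload as inert text instead of executing it.
    return "<p>" + escape(comment) + "</p>"

payload = "<script>alert('xss')</script>"
print(render_comment(payload))
# <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```

The same principle — never trust input, always encode output for its destination — runs through most of the OWASP Top 10.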

Here are my recommendations to include in your secure coding practices methodology. 

Test outside of production

Making changes in a production environment is one of the most insecure actions a developer could ever perform. Unfortunately, it’s extremely common. Product managers need a quick product change so they push development to release it as fast as possible. In this case, fresh, untested code is allowed into the wild … along with any accidental vulnerability. Ensure your methodology requires that developers test code outside of production to avoid these situations. 

Code review

I hear this all the time: “Our guys are testing their own code, no worries!” I applaud those who test their own code, but that’s not enough. Code should also be reviewed by someone else, and checked through external and internal vulnerability assessment testing. For bigger releases, such as a product launch or a new website, it’s imperative to get a penetration test to ensure your organization is secure and safe from malicious entities.

Patch management

I attended a talk at DefCon 2014 where a white hat SQL researcher discussed how he had exploited Oracle’s SQL database for years. He reported the flaws to Oracle, which left the problems unfixed for years. This is a great example of bad patching practices. Security best practice is to patch code flaws immediately. If you are subject to compliance mandates, you may have deadlines to consider as well.

SEE ALSO: Cross-Site Scripting, Explained

Code documentation

Proper documentation not only helps ensure your success as a team, it also helps the team understand the prioritized needs of the entire project. As I suggested above, don’t let code comments become your only form of documentation.

I recommend that you follow and disseminate a properly outlined software development lifecycle. This will help your organization and developers create and maintain your applications in a secure manner. 

This lifecycle should include development methodology, training, vulnerability testing, change control forms, etc. For instance, anytime code is changed, developers should use change control forms to document how the change will impact customers, how it might change functionality, how it impacts security, if there are any back out procedures, etc.
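As a rough illustration, a change control record can be captured as structured data rather than a free-form comment. The field names and values below are hypothetical, chosen to mirror the items listed above, not a standard form:

```python
from dataclasses import dataclass
from datetime import date

# A minimal sketch of a change-control record; fields mirror the
# questions a change control form should answer.
@dataclass
class ChangeRecord:
    change_id: str
    author: str
    date_submitted: date
    description: str
    customer_impact: str
    functionality_impact: str
    security_impact: str
    backout_procedure: str
    approved_by: str = ""  # empty until sign-off

record = ChangeRecord(
    change_id="CHG-1042",
    author="jdoe",
    date_submitted=date(2015, 11, 2),
    description="Add TLS 1.2 support to the checkout API",
    customer_impact="Clients pinned to older protocols will fail to connect",
    functionality_impact="No change to request/response formats",
    security_impact="Removes a weak protocol from the attack surface",
    backout_procedure="Redeploy previous build; restore old config",
)
print(record.change_id, "-", record.description)
```

Whether the record lives in a database, a ticketing system, or a paper form matters less than that every code change produces one before it ships.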


The vast majority of developers do not follow secure coding practices. That could result from poorly managed development teams, or just poor product planning in general. Until your plan contains documented, formalized, secure coding practices, problems will continue to happen. Businesses will continue to lose data. You will fail your security audit. Start changing your coding culture today!

Have coding security questions? Ask away! 

Brand Barney (CISSP) is an Associate Security Analyst at SecurityMetrics and has over 10 years of compliance, data security, and database management experience. Follow him on Twitter and check out his other blog posts.