Online Health Data in Employers’ and Insurers’ Predictive Analytics
Posted Nov 24 2010 12:27am
Did you know that buying generics instead of brands could hurt your credit? Or that a subscription to Hang Gliding Monthly could scare off life insurers? Or that certain employers’ access to electronic health records could lead them to classify you as “high-risk” or “high-cost”?
In all these cases, firms use “predictive analytics” to maximize profits. Consumers are the guinea pigs for these new “sciences” of the human. As Scott Peppet argues, it becomes more difficult to opt out of analytics systems as more people use them. What type of world are they leading us to?
Credit Analytics: Should Frugality be Punished?
One credit analytics company determined that buyers of cheap automotive oil were “much more likely to miss a credit-card payment” than those who paid for a brand-name oil. Spending on therapy sessions may also be a red flag. Appearing too frugal, too anxious, too spendthrift: all might lead to higher interest rates or lower credit limits. One R&D head at a credit analytics firm bragged that his company considers over 300 characteristics to detect delinquency risk. He was not nearly as forthcoming about how the data are aggregated. Analyzing millions of transactions, the companies observe customers as a gardener might observe a rose garden: weeding out unpromising specimens, and giving a boost to incipient flourishers.
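Since the real models are trade secrets, one can only guess at their mechanics, but the basic idea of weighting hundreds of purchase characteristics into a risk score might look something like the following toy sketch. Every feature name and weight here is invented for illustration; nothing below reflects any actual firm's model.

```python
# Hypothetical sketch of purchase-based delinquency scoring.
# Feature names and weights are invented; real models reportedly
# weigh hundreds of characteristics and are closely guarded.
import math

# Invented weights: positive values push the score toward "risky."
WEIGHTS = {
    "buys_generic_oil": 0.8,
    "therapy_sessions": 0.5,
    "brand_name_purchases": -0.6,
}
BIAS = -2.0

def delinquency_score(features):
    """Return a probability-like risk score via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

frugal_shopper = {"buys_generic_oil": 1, "therapy_sessions": 0,
                  "brand_name_purchases": 0}
print(round(delinquency_score(frugal_shopper), 3))
```

The point of the sketch is only that each tracked behavior silently nudges the score, with no way for the consumer to see which purchase did the nudging.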
Many have complained about inaccuracy in these new forms of profiling, and consumers’ inability to review and correct digital dossiers collected about them. But let’s just assume that this profiling is correct, and choosing a generic really does correlate with increased credit risk. What’s the social value of this discovery? Maybe credit card companies can reduce rates infinitesimally (and increase profits) by burdening the generic buyers. But I’d be willing to bet that, for every few people whose generic purchases indicate financial trouble, there is another shopper who’s wisely frugal and increasing her chances of successfully repaying all her loans. It seems very odd to penalize the financially responsible merely because they happen to engage in an activity shared by the distressed.
The Dream of the Perfect Profile
Ahh, predictive analysts might reply, you just oversimplify our process. We would never reduce the credit line of someone who purchases generics if that person also, say, has a subscription to Travel and Leisure, or drives a Lexus, or gives over $1,000 a year to the Republican National Committee. Such people aren’t desperate; they’re just careful shoppers. The more information we have, the fairer and more accurate we can be. (I can only propose this response, since the industry is so careful about protecting its trade secrets. But it seems like a plausible counterargument.)
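The imagined reply can be illustrated with a toy example: the same generic purchase counts against a shopper in a thin profile, but offsetting signals in a richer profile flip the classification. All features and weights here are invented for illustration.

```python
# Toy illustration of the "more information is fairer" claim.
# Invented integer weights: positive means "riskier."
WEIGHTS = {
    "buys_generics": 2,
    "travel_magazine_subscriber": -1,
    "large_political_donor": -2,
}

def risk_points(profile):
    """Sum the weights of whichever features are present."""
    return sum(WEIGHTS[k] for k, present in profile.items() if present)

sparse = {"buys_generics": True}
rich = {"buys_generics": True, "travel_magazine_subscriber": True,
        "large_political_donor": True}
print(risk_points(sparse))  # 2: flagged on thin data
print(risk_points(rich))    # -1: same purchase, fuller picture
```

Of course, this cuts both ways: the same mechanism that rescues the careful shopper is what makes ever-deeper surveillance seem self-justifying.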
Just as free speech advocates often say that the answer to “bad speech” is more or “counter” speech, predictive analysts may argue that the cure for the mistreatment of any given individual is more information about the person’s true motives or opportunities. If privacy advocates are worried that certain surveillance practices will unfairly tarnish the reputation or profile of an individual, the answer is more, not less, information on that person. The more comprehensive a picture firms can develop of the individual, the better they can target their resources.
Whatever the merits of this approach, it appears to me that it only applies to one dimension of the credit analytics example above. Rewarding “brand buyers,” in general, is not that likely to alter behavior in ways that could seriously undermine someone’s quality of life. But effectively punishing those who seek therapy or marriage counseling creates a different set of concerns, showing once again the ways in which health care decisionmaking needs to be distinct from the Procrustean forces of market pressures.
Stressed by Sickness in the Risk Society
A recent article by Sharona Hoffman illuminates some problems with pervasive use of health data in predictive analytics.
Employers may obtain and process EHRs [electronic health records] for a variety of reasons. Many require applicants who have received employment offers to provide authorizations for release of medical records in order to verify the individuals’ fitness for duty. At times, employers require records for purposes of workers’ compensation claims, reasonable accommodation requests by individuals with disabilities, or Family Medical Leave Act (FMLA) requests. Employers who are self-insured also process employees’ medical data in order to pay insurance claims.
EHRs will likely provide employers with unprecedented amounts of data. . . . Employers or their hired experts may develop complex scoring algorithms based on EHRs to determine which individuals are likely to be high-risk and high-cost workers. . . . Employers with access to EHRs containing a wealth of medical information may be sorely tempted to exclude certain individuals from the workforce because of concerns about the employees’ future productivity, absenteeism, or medical costs. To disguise unlawful conduct, employers may not act immediately to withdraw a job offer or terminate an employee, but rather, decide not to promote an individual with a disability or to select her for a layoff at a later time.
In other words, predictive analytics in health can lead to more “death spirals” for the sick: lost employment, lost insurance due to that lost employment, and future inability to find work due to poor health. Hoffman’s concerns about employers sidestepping relevant regulations were reflected in today’s WSJ article on insurance profiling, too:
[G]iant data-collection firms . . . sort details of online and offline purchases to help categorize people as runners or hikers, dieters or couch potatoes. They scoop up public records such as hunting permits, boat registrations and property transfers. They run surveys designed to coax people to describe their lifestyles and health conditions. Increasingly, some gather online information, including from social-networking sites.
For insurers and data-sellers alike, the new techniques could open up a regulatory can of worms. The information sold by marketing-database firms is lightly regulated. But using it in the life-insurance application process would “raise questions” about whether the data would be subject to the federal Fair Credit Reporting Act, says Rebecca Kuehn of the Federal Trade Commission’s division of privacy and identity protection. The law’s provisions kick in when “adverse action” is taken against a person, such as a decision to deny insurance or increase rates. The law requires that people be notified of any adverse action and be allowed to dispute the accuracy or completeness of data, according to the FTC. Deloitte and the life insurers stress the databases wouldn’t be used to make final decisions about applicants. Rather, the process would simply speed up applications from people who look like good risks.
Many aspects of FCRA have been rendered irrelevant by the all-importance of credit scoring: it’s hard to care too much about one’s ability to “correct” one’s credit report if the only thing that really matters is a score whose calculation only contingently depends on any given piece of information in the report. But I had not heard before Deloitte’s assurance that information would “simply speed up” applications, and not “be used to make final decisions.” Quite the creative lawyering behind that distinction.
Relating the Real and the Digital Body
Dan Solove has written extensively on the “digital person,” and perhaps we can see predictive health analytics as an effort to create a “digital body.” As the WSJ reports, we are reaching a point where online “data can reveal nearly as much about a person as a lab analysis of their bodily fluids.” The least we can ask is for the purveyors of data-driven decisionmaking to be much clearer about how they profile individuals. Moreover, in the case of employment, we should seriously consider expanding disability discrimination laws to prevent employers from stratifying employees based on health data. Profits are important, but they shouldn’t come at the expense of sick people who already have enough problems to contend with. As HHS implements PPACA’s promotion of “wellness programs” at workplaces, they should also try to avoid the “Orwellness” of data-driven health profiling.