Your Health Data is Turned Into a Risk Score and Sold

Under a minute read

Yet another reason why we need privacy laws. Companies collect your health data and turn it into a “risk score” which gets sold to doctors, insurers, and hospitals.

Over the past year, powerful companies such as LexisNexis have begun hoovering up data from insurance claims, digital health records, housing records, and even information about a patient’s friends, family, and roommates, creating risk scores for health care providers and insurers without ever telling patients their information is being accessed.

There is no law against collecting and using this data.

Check It Out: Your Health Data is Turned Into a Risk Score and Sold

One Comment

  1. wab95

    Andrew:

Thanks for sharing such an important article. This is part of the churn beneath the surface that many, if not most, patients in the Western world are not aware of, but it is already affecting them, and will affect them even more in the future.

Without getting into the pros and cons of using patient data to create a score, or into patient surveillance writ large, consider the purely clinical perspective: a near-objective risk indicator for an individual patient’s chance of an adverse outcome, be it addiction, heart disease, a type of cancer, a disability, or some other acute or chronic condition, is a benefit to both patient and physician. It especially benefits the former, provided the score is reasonably accurate, reliable, and actionable within the current standard of care.

The problem is not the use of patient data to create scores or other outputs of predictive models; the problem is three things: consent, privacy, and trust.

First, for patients to feel comfortable with the practice, and to feel that this is not some vanguard of an Orwellian dystopia over which they have no say, we must provide precisely that: control by the patient. If patients consent to having their data used for such purposes, they will be more comfortable with the practice, because they control whether their data are used in this manner and with whom they are shared, which leads to the second point.

Privacy, and more importantly, confidentiality. Many of the discussions framed as being about ‘privacy’ (i.e., what data are collected or accessed) are, as argued, actually about ‘confidentiality’ (i.e., with whom those data are shared, and who has access). This gets to an important issue: the terms of use of personal data. This, too, should be under patient control and regulation, barring a demonstrable imperative to protect either the patient’s or society’s safety and security, with emphasis on the word ‘demonstrable’, as opposed to merely a third party’s subjective opinion.

If these two levers of patient control were provided, much of the debate and angst over this and other emerging surveillance-related tools and practices would be allayed. This is an often under-discussed but essential element of regulation – that exercised by the individual – and it is perhaps the most important when it comes to our personal information.

The final piece is trust, and in the spirit of one past US president, ‘trust but verify’. A trust deficit has been created by the surveillance tech giants, notably but not exclusively FB, with their repeated and publicised breaches of user trust, not to mention other commercial concerns (department stores, banks, hotel chains, etc) that request or require customer data. This deficit necessitates an enforceable regulatory infrastructure of law, oversight, monitoring, and reporting, backed by substantive punitive measures for violations and even non-compliance. Such ‘verification’ of company compliance with a set of industry standards for data protection and use would go far toward reducing the trust deficit, because demonstrable verification would be in place. Far from reducing the likelihood of such data modelling being used for patient good (and other use cases), a regulatory infrastructure would facilitate and fuel such use by creating a safe environment for the individual.

The bottom line: it is difficult to have a meaningful discussion about surveillance, modelling, and assessment of individual risk without the context of safety parameters for the individual, which include informed consent, privacy/confidentiality, and enforceable regulation.
