Quantified Self Data Leads to Even Bigger Data, and Big Privacy Concerns
The Quantified Self movement—the use of technology to acquire data on multiple aspects of a person’s daily life—has been gathering steam for quite some time. There has been an influx of affordable, non-intrusive wearable technology, and the latest gadget, the Apple Watch, tracks not only wearers’ heart rate and exercise time but also how many calories they burn and how often they take breaks from sitting.
Now that myriad vendors and consumer and pharmaceutical brands have invested heavily in the space, the technology is small and affordable enough to be accessible to the masses. ABI Research estimates that around 485 million wearable devices will be shipped worldwide by 2018. And beyond wearable technology, the number of “things” connected to the Internet will exceed 50 billion by 2020. These “things” can and will track relevant, health-related activities such as sleep and eating patterns, coffee consumption, movement within a household, toilet usage and so on.
People likely will be able to opt out of certain types of data tracking and collection. However, if the providers of these products/services can tie in the health and wellness benefits of the Quantified Self movement to get as many people as possible to opt in, then companies will soon be collecting troves of aggregated data that will make what is currently being collected seem minuscule in comparison. The term “Big Data” will hardly suffice.
Big Data is poised to deliver “big savings” and “transform healthcare,” and it is certainly full of promise for the future with its “big benefits.” However, Big Data comes with a big challenge: privacy.
Most data subjects—that is, you and me, ordinary people—are unaware of how our personal data is collected (not only when we provide our details while shopping online but also when we simply browse the web), stored (in which country?), transferred (where will it go?) and used (for what purpose?). Simply asking data subjects to click on and agree with lengthy pages of “terms and conditions” before allowing them to use a service is no longer sufficient from the Federal Trade Commission’s (FTC) perspective. Strong advocates and a growing number of fines for Big Data misuse show that the “Big Privacy” movement is already building momentum.
There are many questions about Big Data and healthcare. Some argue that the focus of the Big Data phenomenon has already moved from “should we adopt Big Data in our business?” to “how can we use Big Data to make our business grow?” We have moved on from the data-scarce era to an era in which we are flooded with more data than we can comprehend. Undoubtedly, Big Data is helping researchers beyond their dreams: comprehensive medical records covering a wider population, holistic healthcare evaluation from primary care to secondary care and multiple perspectives on a single case are all easily available.
The big question often ignored by Big Data proponents is, where are these data coming from, and do we have proper informed consent in place from all of these sources? One common reason this question is swept under the carpet is the answer: We don’t know. The data on your desk right now could already have passed through hundreds of different data brokers; perhaps they are part of an extraction from a larger database, so the original source has become untraceable. However, “unknown origin” does not mean the data are legal and ethical to use, or even “totally anonymous.” Especially with Big Data analytics, some argue that truly anonymized data no longer exists.
The next question is, what can or will Big Data analytics do? If these actions involve any possible disadvantage to the data subject, even in the future (an increase in insurance premiums or the potential for employment discrimination), or occur without proper consent (or against the data subject’s wishes), Big Privacy will likely win. Facebook and Google have made headlines over claims of data fraud or privacy violations. And privacy advocates and the FTC have called fitness-tracking apps a “nightmare” and “very disturbing.”
In the battle with Big Data, individuals do not need to be “identified” in order to be placed in a disadvantageous position. For example, if all de-identified medical records were openly available, would health insurance premiums increase simply because one lives in an area with a high prevalence of smoking and obesity? Even when medical records are available only for public-sector research, a sharp drop in women reporting postnatal depression has been observed because of the fear that their babies may be taken away. Would we refuse to be treated by an HIV-positive nurse, or avoid a hospital with a higher-than-average rate of hepatitis infections? Given enough money, resources and time, all de-identified data can be identified again.
Looking to the Future
Steps Healthcare Market Researchers Can Take
Big Data has certainly gained the attention of healthcare market researchers. It is important to engage practitioners, patients and regulatory bodies on the benefits of participating in Big Data research by conducting individual needs assessments. Stakeholders need to learn “what’s in it for them,” and an awareness campaign featuring positive stories from data subjects who have benefited from Big Data is a good start.
Security and accuracy are the two main concerns of Big Privacy advocates, especially in the healthcare space. If Big Data practitioners can gain the trust of all key stakeholders by safeguarding a transparent process for collecting accurate, accessible data, we might see a happy ending in which both Big Data and Big Privacy win.