
VALA Libraries, Technology and the Future invited my fabulous colleague from Melbourne University, Fiona Tweedie, and me to participate in a webinar discussion as part of Open Access Week. The webinar was hosted by VALA President Katie Haden. VALA is an independent, Australian-based not-for-profit organisation that aims to promote the use and understanding of information technology within libraries and the broader information sector.

Is “Open” always “Equitable”?

The theme for Open Access Week 2018 is ‘Designing equitable foundations for open knowledge.’ But open systems aren’t always set to a default of ‘inclusive’, and important questions need to be raised about the prioritisation of voices, about addressing persistent conscious and unconscious bias, and about who is excluded from discussions and decisions surrounding information and data access. There are also the sometimes-competing pressures to move toward both increased openness and greater privacy, the latter issue having much currency in the health domain (and more broadly) at present.

  • If we default to inclusive, what does that look like?
  • How do we address conscious and unconscious bias?
  • How do we prioritise voices, identify who is included and/or excluded from discussions?
  • How do we address the pressure to move toward both increased openness and greater privacy, particularly in the area of health data?

You can download the mp4 file of the webinar, or read a summary of what I had to say below.

Most of what I have learned about how to be a good nurse has come from the consumers I have worked with in my clinical practice. I think the people who live closest to a phenomenon have a unique microscopic vantage point, and that as researchers and clinicians, complementing this lived experience with a telescopic view allows us to see both the big picture and the lived experience. Similarly, my experience of innovations in health is that they have been consumer-driven: the initiation of text reminders in a health organisation I used to work for, because newly arrived Sudanese women asked for it; health promotion activities that included fun and community building, because Pasifika people in South Auckland wanted something more communal. So I am interested in the emergence of data and technology as democratising enablers for groups that experience marginalisation. Consumers with myalgic encephalomyelitis (ME)/chronic fatigue syndrome (CFS) who challenged the influential £5m publicly funded PACE trial, which shaped research, treatment pathways, and medical and public attitudes towards the illness, are an example of how making data open and transparent can be transformative.

The PACE trial found that cognitive behavioural therapy (CBT) and graded exercise therapy (GET) achieved 22 percent recovery rates (rather than just improvement rates), as opposed to only seven or eight percent in the control groups that did not engage in CBT and GET. The findings contradicted the experiences of consumers, who suffered debilitating exhaustion after activities of daily living. A five-year struggle by Australian Alem Matthees, supported by many scientists around the world who doubted the study’s conclusions, resulted in Queen Mary University of London releasing the original data under the UK Freedom of Information (FOI) Act, at a cost of £250,000. When challenged about the distress this had caused patients with ME/CFS, the researchers claimed to be concerned about the ethics of sharing the data. However, Geraghty (2017) points out:

did PACE trial participants really ask for scientific data not to be shared, or did participants simply ask that no personal identifiable information (PIIs) be disclosed?

Subsequent reanalysis showed that recovery rates had been inflated, and that recovery rates in the CBT and GET groups were not significantly higher than in the group that received specialist medical care alone. One strategy for addressing the lack of transparency in science is to make data open (particularly if another £5m is unavailable to reproduce the study): sharing data, protocols and findings in repositories that can be accessed and evaluated by other researchers and the public, in order to enhance the integrity of research. Funding bodies are increasingly making data sharing a requirement of research grant funding.
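This is exactly the kind of check that open data enables: once raw outcome counts are released, anyone can re-run the comparison. Below is a minimal sketch of a two-proportion test in Python; the counts are hypothetical placeholders, not the actual PACE figures.

```python
# A minimal sketch of the kind of two-proportion comparison that open trial
# data makes possible. The counts below are HYPOTHETICAL placeholders.
from statsmodels.stats.proportion import proportions_ztest

recovered = [35, 22]   # hypothetical: recovered in treatment arm vs control arm
enrolled = [161, 160]  # hypothetical arm sizes

stat, p_value = proportions_ztest(count=recovered, nobs=enrolled)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
# If p >= 0.05, the difference in recovery rates is not statistically
# significant at the conventional threshold.
```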

Leonid Schneider’s account of the saga is worth reading: https://forbetterscience.com/2016/02/08/pace-trial-and-other-clinical-data-sharing-patient-privacy-concerns-and-parasite-paranoia/

This story captures the value proposition of making health data open. It can:

  • hold healthcare organisations and providers accountable for treatment outcomes;
  • help patients make informed choices from the options available to them (shared decision making);
  • improve the efficiency and cost-effectiveness of healthcare delivery;
  • improve treatment outcomes by making the results of different treatments, healthcare organisations and providers’ work more transparent;
  • educate patients and their families, and make healthcare institutions more responsive;
  • fuel new healthcare companies and initiatives, and spur innovation in the broader economy (Verhulst et al., 2014).

The growing philosophy of open data, which is about democratising data and enabling the sharing of datasets, has been accompanied by other data-related trends in health, including: big data (large linked datasets from electronic patient records); streams of real-time, geo-located health data collected by personal wearable devices; and data from non-traditional sources, e.g. social and environmental data (Kostkova, 2016). All of these can be managed through computation and algorithmic analysis. Arguments for open data in health include that because taxpayers pay for its collection it should be available to them, and that the value of data comes from its use: interpreting, analysing and linking it (Verhulst et al., 2014).

According to the Open Data Handbook:

A piece of data or content is open if anyone is free to use, reuse, and redistribute it — subject only, at most, to the requirement to attribute and/or share-alike.

Usually it has three main features:

Availability and Access: the data must be available as a whole and at no more than a reasonable reproduction cost, preferably by downloading over the internet. The data must be available in a convenient and modifiable form.

Reuse and Redistribution: the data must be provided under terms that permit reuse and redistribution including the intermixing with other datasets.

Universal Participation: everyone must be able to use, reuse and redistribute – there should be no discrimination against fields of endeavour or against persons or groups.

Central to this is the idea of interoperability, whereby diverse systems and organisations can work together (inter-operate) or intermix different datasets.
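To make interoperability concrete: when two open datasets share a common key, intermixing them is a simple join. A minimal sketch with invented figures (the region codes and amounts are hypothetical):

```python
# Sketch of intermixing two open datasets on a shared key.
# The figures below are invented for illustration.
import pandas as pd

prescribing = pd.DataFrame({
    "region_code": ["E1", "E2", "E3"],
    "statin_spend": [1_200_000, 950_000, 400_000],  # hypothetical £
})
population = pd.DataFrame({
    "region_code": ["E1", "E2", "E3"],
    "population": [510_000, 230_000, 410_000],
})

# The shared region_code column is what lets the datasets inter-operate
combined = prescribing.merge(population, on="region_code", how="inner")
combined["spend_per_capita"] = combined["statin_spend"] / combined["population"]
print(combined.sort_values("spend_per_capita", ascending=False))
```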

Here are two useful examples of open data being used for the common good. The first concerns statins, which are widely prescribed and cost more than £400m out of a total drug budget of £12.7 billion in England. Mastodon C (data scientists and engineers), the Open Data Institute (ODI) and Ben Goldacre analysed how statins were prescribed across England and found widespread geographic variation. Some GPs were prescribing patented statins that cost more than 20 times as much as generic statins, even though the generics worked just as well. The team suggested that changing prescription patterns could result in savings of more than £1 billion per year. Another study showed how asthma hotspots could be tracked and used to help people with asthma avoid places that would trigger it. Participants were issued with a small cap that fits on a standard inhaler; when the inhaler was used, the cap recorded the time and location using GPS circuitry. The data, captured over long periods and aggregated with anonymised data from multiple patients, revealed times and places where breathing was difficult, which could help other patients improve their condition (Verhulst et al., 2014).
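The statin analysis is, at its core, simple arithmetic over open prescribing data. Here is a back-of-the-envelope sketch of that logic; the drug names, quantities and prices are invented for illustration, and this is not Mastodon C’s actual pipeline:

```python
# Back-of-the-envelope sketch of the statin-substitution logic.
# All figures and drug names below are hypothetical.
import pandas as pd

rx = pd.DataFrame({
    "practice_id": ["P1", "P1", "P2", "P2"],
    "drug": ["rosuvastatin", "simvastatin", "rosuvastatin", "simvastatin"],
    "items": [120, 800, 300, 650],
    "cost": [4_800, 1_200, 12_000, 975],  # hypothetical £ totals
})

PATENTED = {"rosuvastatin"}   # assumption: still on patent
GENERIC_UNIT_COST = 1.50      # hypothetical £ per generic item

# Savings if every patented item were switched to a generic equivalent
patented = rx[rx["drug"].isin(PATENTED)]
saving = patented["cost"].sum() - (patented["items"] * GENERIC_UNIT_COST).sum()
print(f"Estimated saving if switched to generics: £{saving:,.0f}")
```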

Linking and analysing datasets can occur across the spectrum of health care, from clinical decision support, to clinical care, across the health system, to population health and health research. However, while the benefits are clear, there are significant issues at the individual and population level. First, in this tech utopia there is an assumption of data literacy: that people who are given more information about their health will be able to act on it to better their health. Second, data collected for seemingly beneficial purposes can impact on individuals and communities in unexpected ways, for example when datasets are combined and adapted for highly invasive research (Zook et al., 2017). Biases against groups that experience poor health outcomes can also be reproduced, depending on what type of data is collected and for what purpose (Faife, 2018).

The concern with how data might be deployed, and whom it might serve, is echoed by Virginia Eubanks, Associate Professor of Political Science at the University at Albany, SUNY. Her book gives examples of how data have been misused in contexts including criminal justice, welfare and child services, exacerbating inequalities and causing harm. Frank Pasquale, in a critique of big data and automated judgement, has identified how corporations have compiled data and created portraits using decisions that are neither neutral nor technical. He and others call for transparency, accountability and the protection of citizens’ rights by ensuring algorithmic judgements are fair, non-discriminatory and open to criticism. However, it is difficult for people from marginalised groups to challenge or interrogate these systems, or to seek redress if harmed, for example through statistical aggregation, so public authorities need to foster dissent and collaboration. Groups with the worst health outcomes have limited access to interventions or the determinants of health to begin with, so it is important that policy and regulation drive structural change rather than embedding existing discrimination that exposes minority groups to increased surveillance and marginalisation (Redden, 2018).

The advent of Australia’s A$18.5 million national facial recognition system, the National Facial Biometric Matching Capability, will allow federal and state governments to access passport, visa, citizenship and driver licence images to rapidly match pictures of people captured on CCTV “to identify suspects or victims of terrorist or other criminal activity, and help to protect Australians from identity crime“. The Capability is made up of two parts. The first, the Face Verification Service (FVS), is already operational and allows a one-to-one, image-based match of a person’s photo against a government record. The second, expected to come online this year, is the Face Identification Service (FIS), a one-to-many image match of an unknown person against multiple government records to help establish their identity. Critics point to the false positives that similar technologies have produced elsewhere, such as in the US, and their failure to prevent mass shootings.
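The difference between the two services is easiest to see in code: face-matching systems typically reduce each photo to an embedding vector, and verification compares one probe against one record while identification searches one probe against many. A toy numpy sketch with random stand-in embeddings (the threshold is an assumed operating point, not the Capability’s):

```python
# Toy illustration of one-to-one verification vs one-to-many identification.
# Embeddings here are random stand-ins for a face-encoding model's output.
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 128))            # 1,000 enrolled records
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
probe = gallery[42] + rng.normal(scale=0.02, size=128)  # noisy photo of person 42
probe /= np.linalg.norm(probe)

THRESHOLD = 0.8  # hypothetical operating point

# FVS-style: one-to-one check against a single claimed identity
print("verified:", gallery[42] @ probe > THRESHOLD)

# FIS-style: one-to-many search across all enrolled records
scores = gallery @ probe
print("best match:", scores.argmax(), "score:", scores.max().round(3))
# At scale, the threshold trades false positives against false negatives,
# which is the crux of critics' concerns.
```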


Me checking out the Biometric Mirror, an artificial intelligence (AI) system that detects and displays people’s personality traits and physical attractiveness based solely on a photo of their face. The project is led by Dr Niels Wouters from the Centre for Social Natural User Interfaces (SocialNUI) and Science Gallery Melbourne at the University of Melbourne.

Context is also important when considering secondary use of data. Indigenous scholars such as Kukutai observe that openness is not only a cultural issue but a political one, with the potential to reinforce discourses of deficit. Privacy also has nuance here: public sharing does not indicate acceptance of subsequent use. Group privacy is also important for groups on the receiving end of discriminatory data-driven policies. Open data can be used to improve the health and wellbeing of individuals and communities, and to improve the efficiency and effectiveness of health services. It can also be used to challenge exclusionary policies and practices. However, consideration must be given to digital literacy, to privacy, and to how conditions of inequity might be exacerbated. Most importantly, structural changes must increase access for all people to the determinants of health.

References

  • Eubanks V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press, 2018.
  • Faife C. The government wants your medical data. The Outline, https://theoutline.com/post/4754/the-government-wants-your-medical-data (2018, accessed 16 October 2018).
  • Ferryman K, Pitcan M. Fairness in Precision Medicine. Data and Society, https://datasociety.net/wp-content/uploads/2018/02/Data.Society.Fairness.In_.Precision.Medicine.Feb2018.FINAL-2.26.18.pdf (2018).
  • Geraghty KJ. ‘PACE-Gate’: When clinical trial evidence meets open data access. J Health Psychol 2017; 22: 1106–1112.
  • González-Bailón S. Social science in the era of big data. POI 2013; 5: 147–160.
  • Kostkova P, Brewer H, de Lusignan S, et al. Who Owns the Data? Open Data for Healthcare. Front Public Health 2016; 4: 7.
  • Kowal E, Meyers T, Raikhel E, et al. The open question: medical anthropology and open access. Issues 2015; 5: 2.
  • Krumholz HM, Waldstreicher J. The Yale Open Data Access (YODA) project—a mechanism for data sharing. N Engl J Med 2016; 375: 403–405.
  • Kukutai T, Taylor J. Data sovereignty for indigenous peoples: current practice and future needs. In: Kukutai T, Taylor J (eds) Indigenous Data Sovereignty: Toward an Agenda. Acton, Australia: ANU Press, 2016, pp. 1–22.
  • Lubet S. How a study about Chronic Fatigue Syndrome was doctored, adding to pain and stigma. The Conversation, http://theconversation.com/how-a-study-about-chronic-fatigue-syndrome-was-doctored-adding-to-pain-and-stigma-74890 (2017, accessed 22 October 2018).
  • Pitcan M. Technology’s Impact on Infrastructure is a Health Concern. Data & Society: Points, https://points.datasociety.net/technologys-impact-on-infrastructure-is-a-health-concern-6f1ffdf46016 (2018, accessed 16 October 2018).
  • Redden J. The Harm That Data Do. Scientific American, https://www.scientificamerican.com/article/the-harm-that-data-do/ (2018, accessed 22 October 2018).
  • Tennant M, Dyson K, Kruger E. Calling for open access to Australia’s oral health data sets. Croakey, https://croakey.org/calling-for-open-access-to-australias-oral-health-data-sets/ (2014, accessed 15 October 2018).
  • Verhulst S, Noveck BS, Caplan R, et al. The Open Data Era in Health and Social Care: A blueprint for the National Health Service (NHS England). http://www.thegovlab.org/static/files/publications/nhs-full-report.pdf (2014).
  • Yurkiewicz I. Paper Trails: Living and Dying With Fragmented Medical Records. Undark, https://undark.org/article/medical-records-fragmentation-health-care/ (2018, accessed 23 October 2018).
  • Zook M, Barocas S, Crawford K, et al. Ten simple rules for responsible big data research. PLoS Comput Biol 2017; 13: e1005399.

I wrote a piece for the Australian College of Nursing’s (ACN) quarterly publication. Cite as: DeSouza, R. (Summer 2019/20 edition). The potential and pitfalls of AI. The Hive (Australian College of Nursing), 28(10-11).

Many thanks to Gemma Lea Saravanos for the photo.

The biggest opportunity that Artificial Intelligence (AI) presents is not the elimination of errors or the streamlining of workloads but, paradoxically, the return to caring in health. As machines become better at diagnosis and other aspects of care, eliminating the need for health professionals to be brilliant, the need for emotional intelligence will become more pressing.

In his book Deep Medicine, Eric Topol recounts how he grew up with a disabling chronic condition, osteochondritis dissecans. At 62, a knee replacement went badly wrong, followed by an intense physical therapy protocol that led to devastating pain and distress, leaving him screaming in agony. Topol tried everything to get relief, and his orthopaedic surgeon advised him to take antidepressants. Luckily, his wife found a book called Arthrofibrosis, which explained that he was suffering a rare inflammatory complication affecting 2-3% of people after a knee replacement. His surgeon could only offer him further surgery, but a physiotherapist with experience of working with people with osteochondritis dissecans offered a gentler approach that helped him recover. AI could have helped by creating a bespoke protocol that took into account his history, which the doctor did not. The problems of health care won’t be fixed by technology, but the paradox is that AI could help animate care, in contrast to the robotic health professionals he had to deal with in his quest for recovery.

The three D’s

Nursing practice is being radically transformed by new ways of knowing, including Artificial Intelligence (AI), algorithms, big data, genomics and more, bringing moral and clinical implications (Peirce et al., 2019). On one hand, these developments have massive benefits for people; on the other, they raise important ethical questions for nurses, whose remit is to care for patients (Peirce et al., 2019). In order for nurses to align themselves with their values and remain patient-centred, they need to understand the implications of what Topol calls the three D’s: digitisation, as human beings are digitally transformed through technological developments such as sensors and sequencing; democratisation, as patients’ knowledge of themselves becomes their own possession rather than the health system’s; and deep learning, which involves pattern recognition and machine learning.

Data is fundamental to AI

The massive amounts of data being collected (from apps, wearable devices, medical-grade devices, electronic health records, high-resolution images and whole-genome sequences), together with increased computing capability, enable the effective analysis and interpretation of such data and, therefore, the making of predictions.

Artificial Intelligence (AI) includes a range of technologies that can work on data to make predictions from patterns. Alan Turing, who is considered the founding father of AI, defined it as the science of making computers intelligent; in health, AI uses algorithms and software to help computers analyse data (Loh, 2018).
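In practice, “algorithms and software to help computers analyse data” usually means supervised machine learning: fit a model to labelled examples, then predict on unseen cases. A minimal sketch using scikit-learn on synthetic data (the features are stand-ins for clinical variables):

```python
# Minimal supervised-learning sketch: predict a binary outcome from
# clinical-style features. All data here are synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit on labelled examples, then evaluate predictions on held-out cases
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, probs):.2f}")
```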

Applications of AI
Data are transforming health in two key ways:

  • Enhancing patient care, from improving decision making and making diagnosis more effective and accurate, to recommending treatment.
  • Systemising onerous tasks to make systems work better for health care professionals and administrators.

Applications are emerging, including automated diagnosis from medical imaging (Liu et al., 2019); surgical robots (Hodson, 2019); predicting intensive care unit (ICU) mortality and 30-day psychiatric readmission from unstructured clinical and psychiatric notes (Chen, Szolovits, & Ghassemi, 2019); skin cancer diagnosis; detecting heart rhythm abnormalities; interpreting medical scans and pathology slides; diagnosing diseases; and predicting suicide, using pattern recognition trained on millions of examples.
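For the notes-based predictions above (Chen et al., 2019), a common baseline is to vectorise the free text and fit a classifier. A toy sketch, not the authors’ actual pipeline; the notes and labels are invented:

```python
# Toy baseline for predicting an outcome from unstructured clinical notes.
# The notes and labels are invented; real work uses de-identified corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient stable, tolerating oral intake, ambulating",
    "acute respiratory distress, escalating oxygen requirement",
    "discharged home with follow-up in two weeks",
    "readmitted with sepsis, hypotensive on arrival",
]
readmitted = [0, 1, 0, 1]  # invented labels

# Vectorise the free text, then fit a linear classifier on the labels
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(notes, readmitted)
print(clf.predict(["worsening shortness of breath, febrile"]))
```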

These systems overcome some disadvantages of being human, such as tiredness and distraction. And from a knowledge translation point of view, rather than waiting decades for knowledge to trickle down from research into practice, steps could be automated and more personalised (Chen et al., 2019).

AI can also be used to better serve populations who are marginalised. For example, we know that not everyone is included in the gold standard of evidence, randomised trials; because trials are not representative of entire populations, therapies and treatments may not be tailored to marginalised populations (Chen et al., 2019; Perez, 2019).

Potential for algorithmic bias in health
However, the large annotated datasets on which machine learning tasks are trained aren’t necessarily inclusive. For example, image classification through deep neural networks may be trained on ImageNet, which has 14 million labelled images. Natural language processing requires algorithms trained on datasets scraped from websites, usually annotated by graduate students or via crowdsourcing, which can unintentionally produce data that embeds gender, ethnic and cultural biases (Zou & Schiebinger, 2018).

This is partly because the workforce that designs, codes, engineers and programs AI may not be from diverse backgrounds, and the future workforce is also a concern, as gender and ethnic minorities are poorly represented in technology programs at schools and universities (Dillon & Collett, 2019).

Zou and Schiebinger (2018) cite three examples of AI applications that systematically discriminate against specific populations: gender biases in the way Google Translate converts Spanish-language items into English; software in Nikon cameras that alerts photographers when their subject is blinking, which identifies “Asians” as always blinking; and word embedding, an algorithm for processing and analysing natural-language data, which identifies European American names as “pleasant” and African American ones as “unpleasant”.
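The word-embedding finding can be illustrated in a few lines by comparing how close name vectors sit to “pleasant” versus “unpleasant” attribute words, in the style of the Word Embedding Association Test. A sketch assuming gensim’s downloadable GloVe vectors; the word lists are abbreviated stand-ins for the validated lists used in the actual studies:

```python
# WEAT-style sketch: is a name closer to pleasant or unpleasant words?
# Word lists here are abbreviated stand-ins for the validated study lists.
import gensim.downloader as api
import numpy as np

model = api.load("glove-wiki-gigaword-100")  # pretrained GloVe vectors

pleasant = ["joy", "love", "peace", "wonderful"]
unpleasant = ["agony", "terrible", "horrible", "evil"]

def association(name):
    # mean similarity to pleasant words minus mean similarity to unpleasant
    pos = np.mean([model.similarity(name, w) for w in pleasant])
    neg = np.mean([model.similarity(name, w) for w in unpleasant])
    return float(pos - neg)

for name in ["emily", "greg", "lakisha", "jamal"]:
    if name in model.key_to_index:  # skip out-of-vocabulary names
        print(name, round(association(name), 3))
# Systematic differences between name groups reveal embedded social bias.
```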

Other contexts where similar biases appear include crime and policing technologies and financial-sector technologies (Eubanks, 2018; Noble, 2018; O’Neil, 2016); see also Buolamwini and Gebru (2018). But how does one counter these biases? As Kate Crawford (2016) points out: “Regardless, algorithmic flaws aren’t easily discoverable: How would a woman know to apply for a job she never saw advertised? How might a black community learn that it were being overpoliced by software?”

Individual clinicians can also make systematically biased decisions, but they temper this with clinical judgement, reflection, past experience and evidence.

Digital literacies for an ageing workforce
We have a crisis in healthcare, and in nursing. Technocratic business models, with changes imposed from above, are contributing to “callous indifference” (Francis, 2013). Calls to reinstate empathy and compassion in health care, and to ensure care is patient-centred, reflect how often these features are absent from care.

In the meantime, we have had Royal Commissions into aged care, disability and mental health. For AI to be useful, it is important that nurses understand how technology is going to change practice. Nurses already experience high demands and complexity in their work, so technological innovations driven from the top down risk alienating them and burning them out further (Jedwab et al., 2019). We are also going to have to develop new models of care that are patient-centred, and co-designing these innovations with diverse populations is going to become increasingly important.

References
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 77–91). Retrieved from http://proceedings.mlr.press/v81/buolamwini18a.html
Chen, I. Y., Szolovits, P., & Ghassemi, M. (2019). Can AI Help Reduce Disparities in General Medical and Mental Health Care? AMA Journal of Ethics, 21(2), E167–E179. https://doi.org/10.1001/amajethics.2019.167
Crawford, K. (2016, June 25). Artificial intelligence’s white guy problem. The New York Times. Retrieved from https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html
Dillon, S., & Collett, C. (2019). AI and Gender: Four Proposals for Future Research. Retrieved from https://www.repository.cam.ac.uk/handle/1810/294360
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Retrieved from https://market.android.com/details?id=book-pn4pDwAAQBAJ
Francis, R. (2013). Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry. London: The Stationery Office.
Hodson, R. (2019). Digital health. Nature, 573(7775), S97. https://doi.org/10.1038/d41586-019-02869-x
Jedwab, R. M., Chalmers, C., Dobroff, N., & Redley, B. (2019). Measuring nursing benefits of an electronic medical record system: A scoping review. Collegian , 26(5), 562–582. https://doi.org/10.1016/j.colegn.2019.01.003
Liu, X., Faes, L., Kale, A. U., Wagner, S. K., Fu, D. J., Bruynseels, A., … Denniston, A. K. (2019). A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. The Lancet Digital Health, 1(6), e271–e297. https://doi.org/10.1016/S2589-7500(19)30123-2
Loh, E. (2018). Medicine and the rise of the robots: a qualitative review of recent advances of artificial intelligence in health. BMJ Leader, 2(2), 59–63. https://doi.org/10.1136/leader-2018-000071
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. Retrieved from https://market.android.com/details?id=book–ThDDwAAQBAJ
O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, NY: Crown Publishing Group.
Peirce, A. G., Elie, S., George, A., Gold, M., O’Hara, K., & Rose-Facey, W. (2019). Knowledge development, technology and questions of nursing ethics. Nursing Ethics, 969733019840752. https://doi.org/10.1177/0969733019840752
Perez, C. C. (2019). Invisible Women: Exposing Data Bias in a World Designed for Men. Retrieved from https://play.google.com/store/books/details?id=MKZYDwAAQBAJ
Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York, NY: Basic Books.
Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist — it’s time to make it fair. Nature, 559(7714), 324–326. https://doi.org/10.1038/d41586-018-05707-8
