
ENQUIRING MINDS: WHAT ARTISTS CAN BRING TO GOVERNMENT AND INDUSTRY RESEARCH

It’s conference season as we speed toward the end of the year. The Australian Academy of the Humanities hosted its 51st Symposium, At the Crossroad? Australia’s Cultural Future. The aim was to facilitate conversations about the transformations needed to secure Australia’s cultural and creative future. It brought researchers, practitioners, creators and policy makers together to consider how innovative cultural policy settings and creative practice could together underpin a path to recovery for people and communities.

Photo of my screen as we began our webinar

It was such a buzz to be on a panel as part of a satellite session, hosted by The Australia Council for the Arts, exploring the intersections between creative practice, research, industry and government. With me were Pat Grant (UTS), writer, illustrator and author of two graphic novels; Gabriel Clark (UTS), designer, photographer and producer of multimedia storytelling events; and Alon Ilsar, drummer, composer, instrument designer and researcher. Our brief was to reflect on the skills artists bring to a research project and to consider the ways in which artists’ predisposition to enquiry, their creative thinking and their ability to communicate ideas could be more intimately involved in research. The panel was beautifully organised and facilitated by Christen Cornell.

Christen asked us to consider questions including: What might be the outcomes of allowing artists to creatively analyse data? How might artists’ creative communication of findings open onto new audiences, such as those who are unlikely or unable to read traditional research reports?

These questions have relevance for access and inclusion, through alternative research outputs such as audio (see Alon’s work) or visual representation (Pat and Gabe’s work). They also raise further questions about opportunities for artists interested in working in cross-sectoral industry settings.

On October 7th 2020 I was invited to give a keynote at The Australian Citizen Science Association (ACSA) CitSciOzOnline Early-Mid Career Researcher (EMCR) half-day symposium. The aim of the symposium was to unite citizen science-aligned researchers in Australia to interrogate and explore research and practice in citizen science across the country. It featured keynotes, lightning talks, Q&A, interactive sessions and networking opportunities, to build a community of practice in citizen science research.

Photo of a frog on a banana, illustrating FrogID, a project that used citizen scientists to map the decline of the iconic Australian Green Tree Frog (Litoria caerulea) in Sydney.

My abstract
Research can change the world, but how it is undertaken is not always beneficial. First Nations critiques of Western science have suggested that many aspects of research resemble colonial processes and are extractive, taking raw, contextualised material from people and making it abstract and universal for the benefit of researchers or institutions. Building on participatory action research and community-based participatory research (CBPR) methods, where researchers collaborate with community partners to investigate issues, citizen science offers a new iteration of co-producing knowledge and participating in the scientific archive outside the university. However, there are also concerns that a participatory agenda is the outcome of reduced funding, and that underfunded research institutions are using unpaid labour to produce knowledge at no cost. This presentation covers principles for working with community partners in authentic, collaborative, sensitive and culturally safe ways.

I wrote a piece for the Australian College of Nursing’s (ACN) quarterly publication. Cite as: DeSouza, R. (Summer 2019/20). The potential and pitfalls of AI. The Hive (Australian College of Nursing), 28, 10-11.

Many thanks to Gemma Lea Saravanos for the photo.

The biggest opportunity that Artificial Intelligence (AI) presents is not the elimination of errors or the streamlining of workload but, paradoxically, a return to caring in health. As Eric Topol argues, once machines become better at diagnosis and other technical aspects of care, health professionals will no longer need to be brilliant, and the need for emotional intelligence will become more pressing.

In his book Deep Medicine, Topol recounts growing up with osteochondritis dissecans (OCD), a disabling chronic condition. At 62 he had a knee replacement that went badly wrong, followed by an intense physical therapy protocol that caused devastating pain and distress, leaving him screaming in agony. Topol tried everything to get relief, and his orthopaedic surgeon advised him to take antidepressants. Luckily his wife found a book called Arthrofibrosis, which explained that he was suffering from a rare inflammatory complication affecting 2-3% of people after a knee replacement. His surgeon could only offer further surgery, but a physiotherapist with experience of working with people with OCD offered a gentler approach that helped him recover. AI could have helped by creating a bespoke protocol that took his history into account, which his doctor did not. The problems of health care won’t be fixed by technology, but the paradox is that AI could help animate care, something conspicuously absent in the robotic health professionals he had to deal with in his quest for recovery.

The three D’s

Nursing practice is being radically transformed by new ways of knowing, including Artificial Intelligence (AI), algorithms, big data, genomics and more, bringing moral and clinical implications (Peirce et al., 2019). On one hand these developments have massive benefits for people, but they also raise important ethical questions for nurses, whose remit is to care for patients (Peirce et al., 2019). For nurses to stay aligned with their values and remain patient-centred, they need to understand the implications of what Topol calls the three D’s: digitisation, as human beings are rendered digital through technologies such as sensors and sequencing; democratisation, as patients’ knowledge of themselves becomes their own possession rather than the health system’s; and deep learning, which involves pattern recognition through machine learning.

Data is fundamental to AI

Massive amounts of data are being collected from apps, wearable devices, medical-grade devices, electronic health records, high-resolution images and whole-genome sequences. Growing computing capability makes it possible to analyse and interpret these data effectively, and therefore to make predictions from them.

Artificial Intelligence (AI) encompasses a range of technologies that find patterns in data and make predictions from those patterns. Alan Turing, regarded as a founding father of AI, defined it as the science of making computers intelligent; in health, AI uses algorithms and software to help computers analyse data (Loh, 2018).
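
As a minimal sketch of what “finding patterns in data to make predictions” looks like in practice (not taken from any system described in this article), the Python snippet below fits a simple classifier to invented patient records and scores it on held-out cases. The features, outcome and numbers are placeholders, not real clinical data.

    # A minimal illustrative sketch: fit a classifier to historical
    # examples, then predict outcomes for unseen cases. All features
    # and the outcome here are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 1000
    # Hypothetical features: age, systolic blood pressure, prior admissions.
    X = np.column_stack([
        rng.normal(60, 15, n),   # age (years)
        rng.normal(130, 20, n),  # systolic BP (mmHg)
        rng.poisson(1, n),       # prior admissions (count)
    ])
    # Hypothetical outcome (30-day readmission), loosely tied to the features.
    risk = 0.03 * X[:, 0] + 0.02 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 1, n)
    y = (risk > 5.5).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X_train, y_train)                        # learn the pattern
    print("held-out accuracy:", model.score(X_test, y_test))  # predict new cases

The same fit-then-predict pattern, scaled up to millions of examples and far richer features, underlies the clinical applications discussed below.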

Applications of AI
Data are transforming health in two key ways:

Enhancing patient care, from improving decision making and making diagnosis more effective and accurate to recommending treatment.
Systematising onerous tasks to make workflows more efficient for health care professionals and administrators.

Emerging applications include automated diagnosis from medical imaging (Liu et al., 2019); surgical robots (Hodson, 2019); predicting intensive care unit (ICU) mortality and 30-day psychiatric readmission from unstructured clinical and psychiatric notes (Chen, Szolovits, & Ghassemi, 2019); and, using pattern recognition trained on millions of examples, diagnosing skin cancer, detecting heart rhythm abnormalities, interpreting medical scans and pathology slides, and predicting suicide.

These systems overcome the disadvantages of being human, such as getting tired or distracted. And from a knowledge translation point of view, rather than waiting decades for knowledge to trickle down from research into practice, steps could be automated and care made more personalised (Chen et al., 2019).

AI can also be used to better serve populations who are marginalised. For example, we know that not everyone is included in randomised trials, the gold standard of evidence. Trial samples are therefore not representative of entire populations, so the resulting therapies and treatments may not be tailored to marginalised groups (Chen et al., 2019; Perez, 2019).

Potential for algorithmic bias in health
However, the large annotated data sets on which machine learning tasks are trained aren’t necessarily inclusive. For example, image classification with deep neural networks is often trained on ImageNet, which has 14 million labelled images. Natural language processing requires algorithms trained on data sets scraped from websites, usually annotated by graduate students or via crowdsourcing, which can unintentionally embed gender, ethnic and cultural biases in the data (Zou & Schiebinger, 2018).

This is partly because the workforce that designs, codes, engineers and programs AI may not be from diverse backgrounds, and the future workforce is also a concern, as gender and ethnic minorities are poorly represented in schools and universities (Dillon & Collett, 2019).

Zou and Schiebinger (2018) cite three examples of AI applications that systematically discriminate against specific populations: gender biases in the way Google Translate converts Spanish-language items into English; blink-detection software in Nikon cameras that identifies Asian subjects as always blinking; and word embedding, a technique for processing and analysing natural-language data, which identifies European American names as “pleasant” and African American ones as “unpleasant”.
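
To make the word-embedding example concrete, bias of this kind is typically measured as a difference in association scores between word vectors, in the spirit of the association tests Zou and Schiebinger describe. The sketch below uses tiny hand-written vectors as stand-ins; real analyses use embeddings such as word2vec or GloVe trained on large web corpora.

    # A hedged sketch of probing embedding bias: compare how strongly
    # a target word's vector associates with two attribute words.
    # The 3-dimensional vectors are invented for illustration only.
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    vec = {
        "doctor": np.array([0.9, 0.3, 0.1]),
        "nurse":  np.array([0.2, 0.9, 0.1]),
        "he":     np.array([1.0, 0.1, 0.0]),
        "she":    np.array([0.1, 1.0, 0.0]),
    }

    for word in ("doctor", "nurse"):
        bias = cosine(vec[word], vec["he"]) - cosine(vec[word], vec["she"])
        print(f"{word}: 'he' minus 'she' association = {bias:+.2f}")

A positive score means the word sits closer to “he” than “she” in the embedding space; trained on biased corpora, occupation words pick up exactly these skews.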

Similar biases appear in other contexts, including crime and policing technologies and financial sector technologies (Eubanks, 2018; Noble, 2018; O’Neil, 2016; see also Buolamwini & Gebru, 2018). But how does one counter these biases? As Kate Crawford (2016) points out: “Regardless, algorithmic flaws aren’t easily discoverable: How would a woman know to apply for a job she never saw advertised? How might a black community learn that it were being overpoliced by software?”

Individual clinicians may also make systematically biased decisions, but they can temper this with clinical judgement, reflection, past experience and evidence.

Digital literacies for an ageing workforce
We have a crisis in healthcare, and in nursing. Technocratic business models, with change imposed from above, are contributing to “callous indifference” (Francis, 2013). Calls to reinstate empathy and compassion in health care, and to ensure care is patient-centred, reflect that these qualities are absent from care.

In the meantime, we have had Royal Commissions into aged care, disability and mental health. For AI to be useful, it’s important that nurses understand how technology is going to change practice. Nurses already experience high demands and complexity in their work, so technological innovations driven from the top down risk alienating them and further burning them out (Jedwab et al., 2019). We will also have to develop new models of care that are patient-centred, and co-designing these innovations with diverse populations will become increasingly important.

References
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 77–91). Retrieved from http://proceedings.mlr.press/v81/buolamwini18a.html
Chen, I. Y., Szolovits, P., & Ghassemi, M. (2019). Can AI Help Reduce Disparities in General Medical and Mental Health Care? AMA Journal of Ethics, 21(2), E167–E179. https://doi.org/10.1001/amajethics.2019.167
Crawford, K. (2016, June 25). Artificial intelligence’s white guy problem. The New York Times. Retrieved from https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html
Dillon, S., & Collett, C. (2019). AI and Gender: Four Proposals for Future Research. Retrieved from https://www.repository.cam.ac.uk/handle/1810/294360
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Retrieved from https://market.android.com/details?id=book-pn4pDwAAQBAJ
Francis, R. (2013). Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry. London: The Stationery Office.
Hodson, R. (2019). Digital health. Nature, 573(7775), S97. https://doi.org/10.1038/d41586-019-02869-x
Jedwab, R. M., Chalmers, C., Dobroff, N., & Redley, B. (2019). Measuring nursing benefits of an electronic medical record system: A scoping review. Collegian, 26(5), 562–582. https://doi.org/10.1016/j.colegn.2019.01.003
Liu, X., Faes, L., Kale, A. U., Wagner, S. K., Fu, D. J., Bruynseels, A., … Denniston, A. K. (2019). A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. The Lancet Digital Health, 1(6), e271–e297. https://doi.org/10.1016/S2589-7500(19)30123-2
Loh, E. (2018). Medicine and the rise of the robots: a qualitative review of recent advances of artificial intelligence in health. BMJ Leader, 2(2), 59–63. https://doi.org/10.1136/leader-2018-000071
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. Retrieved from https://market.android.com/details?id=book--ThDDwAAQBAJ
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York, NY: Crown Publishing Group.
Peirce, A. G., Elie, S., George, A., Gold, M., O’Hara, K., & Rose-Facey, W. (2019). Knowledge development, technology and questions of nursing ethics. Nursing Ethics, 969733019840752. https://doi.org/10.1177/0969733019840752
Perez, C. C. (2019). Invisible Women: Exposing Data Bias in a World Designed for Men. Retrieved from https://play.google.com/store/books/details?id=MKZYDwAAQBAJ
Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist — it’s time to make it fair. Nature, 559(7714), 324–326. https://doi.org/10.1038/d41586-018-05707-8
