Context Is Everything When It Comes to Data Literacy


The crow doth sing as sweetly as the lark / When neither is attended; (William Shakespeare, The Merchant of Venice, Act 5, Scene 1)


What do the study and teaching of English Literature and data literacy have in common? For me, the answer is the conveyance of meaning through language. 


The way we organise and communicate data bears many similarities to the way we organise and communicate natural language. We tend to bring the two together, inserting data like foreign words into our verbal or written narrative. When we begin the communication process, we hope that the recipients of our message appreciate our intended meaning. But the more we shift away from the language they find familiar, the more we place barriers in the path of such understanding.


Our approach to data literacy at Capital & Coast District Health Board (CCDHB) uses Qlik Sense to make complex data contexts transparent, in an effort to narrow the gap between the meaning constructed by the writer and the meaning interpreted by the reader.

The Beginnings of Our Data Literacy Journey

But first, some context about me, the problem I initially faced, and how we are solving it. My background includes health and public sector consulting and project management; my academic background focused on literature, language, and economics. Today, I am a senior project manager at CCDHB, where I’ve been since 2016.


When I started, the Chief Executive identified a problem with 'line of sight' through the organisation, which I experienced personally when I began data-based reviews of clinical services. We have a lot of data, but accessing it can be difficult. For example, I always want to combine data about demand (e.g. patients) with data about supply (e.g. staff), and doing so was not an easy task.


We've worked to resolve the accessibility issue through the implementation of Qlik Sense. Key to our decision was Qlik's ability, through mashups, to combine high-level results from across Qlik applications. This is an important benefit because it enables our readers to access key results from different parts of the business quickly and with little fuss. Increasingly, they can come to one place. As a result, Qlik becomes a common communication platform on which we can build bridges to close the "meaning gap." Given that using a Qlik application forms part of a reader's work context, we integrated data literacy components into Qlik development.
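
For readers curious about what a mashup looks like under the hood, here is a minimal sketch using Qlik Sense's standard Capability API (the "js/qlik" module a Qlik Sense site serves to mashup pages). The host name, app IDs, and object IDs below are placeholders for illustration, not CCDHB's actual configuration.

```typescript
// Minimal Qlik Sense mashup sketch using the Capability API.
// All identifiers (host, app IDs, element/object IDs) are placeholders.

// The RequireJS loader and the "js/qlik" module are provided by the
// Qlik Sense server at runtime, so they are declared here rather than imported.
declare const require: {
  config(cfg: { baseUrl: string }): void;
  (deps: string[], callback: (...modules: any[]) => void): void;
};

const config = {
  host: "qlik.example.health.nz", // placeholder server
  prefix: "/",
  port: 443,
  isSecure: true,
};

require.config({
  baseUrl: `https://${config.host}:${config.port}${config.prefix}resources`,
});

require(["js/qlik"], (qlik: any) => {
  // Open two separate applications, for example one for demand and one for
  // supply, and surface a key visualisation from each on the same page.
  const demandApp = qlik.openApp("demand-app-id", config);     // placeholder ID
  const staffingApp = qlik.openApp("staffing-app-id", config); // placeholder ID

  // getObject renders an existing chart from an app into a page element,
  // which is what lets readers see results from different apps in one place.
  demandApp.getObject("demand-chart-div", "demand-object-id");       // placeholder IDs
  staffingApp.getObject("staffing-chart-div", "staffing-object-id"); // placeholder IDs
});
```

The point of the sketch is simply that a single web page can pull visualisations from several applications, which is what allows readers to come to one place.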

Building In Data Literacy From the Start

For us, standardisation in application and mashup development is crucial, as it reduces the complexity of working within and across applications. Such complexity creates unnecessary early barriers to the reader’s creation of meaning from the data. We used reader feedback and expert recommendations to standardise the structure of Qlik applications as well as the look and feel of sheets and visualisations. When it came to the design of the Qlik mashups, we went even further, using a single template and a minimal set of chart types.

Standardisation reduces the unnecessary barriers a reader must overcome to interpret the data.


It appears this approach is working. According to a benefits review of the Qlik implementation, 63% of users said it takes them less time to access their data than before the implementation, and a further 18% admitted it would have taken less time if they hadn't become distracted by the data they were seeing in the Qlik application!


Additional feedback, however, indicated that we needed to do more to support data literacy, as people were struggling to interpret or create meaning from what they read. Two examples that have influenced our approach to data literacy come to mind. First, a graph comparing two years' worth of results was presented to clinicians. It showed a major increase in demand, with an associated recommendation that we needed to somehow get on top of that need. Expanding the data over a longer time period, however, showed that the variation was normal, and we avoided an unnecessary intervention. Second, I attended a meeting where a doctor presented a series of box-and-whisker plots to people who nodded sagely and then, after the doctor left, admitted they had not understood them. The doctor had assumed a competency in the audience that did not exist.


In both cases, the intended meaning was lost because it was not oriented toward the reader’s context. We need to be able to overcome this. It’s my job to design and deliver training that helps learners engage with Qlik-accessed data so they can better understand what they see, write more effective data stories, and critique the ones put in front of them.

Our Design Approach

My academic background has taught me to be very cautious in my approach to developing data literacy programmes. I know how slippery language can be, and I know that meaning cannot be controlled because it arises from a complex interaction between the reader and what is read. My practical experience of working with data is no different. We cannot completely control assumptions and interpretations, and we rarely control the decisions made in the creation and consumption of data.


How does this background affect my design of data literacy? I seek to sit between the author (often a Qlik Sense developer) and their readership (including clinicians, operational managers, and management accountants), working to understand the respective contexts within which they create and consume the data. I aim to apply that contextual knowledge to programme design. In our environment, the most important contexts are the reader’s work processes, so we seek to tailor each programme around goals within a specific process. We also take into account the reality that, within the same work process, a data literacy learner often operates not only as a reader of data but also as an author of data narratives. For example, a series of patient incidents will require data interpretation but also the writing of a report.


We trialled this approach within a national nursing and midwifery safe staffing programme being implemented at CCDHB, the Care Capacity and Demand Management (CCDM) programme.

Trialling Qlik-based Data Literacy

Emma Williams, the manager of the CCDM programme, has previously written about our use of Qlik within the programme (see the link at the bottom). From a data literacy point of view, our first mashup and its associated Qlik applications were created to enable the clinical leaders of our inpatient wards to easily run new meetings called Local Data Councils. These councils use a core data set to review ward performance, identify areas for improvement, and monitor the effectiveness of any changes. The clinical leaders need to guide the data analysis during the meeting, so they have to understand the implications for their ward from what is being shown on the mashup. We were invited by the CCDM programme to pilot our approach to context-based data literacy.
 

The first thing we did was engage the key stakeholders: the professional leaders in midwifery and nursing, operational managers, the CCDM team, and union partners. They set specific parameters for the training design, including (but not limited to):


  • supporting the effective functioning of the Local Data Councils
  • ensuring the approach to data literacy support was competency-based, mirroring nursing and midwifery learning and development practice
  • ensuring clinical leaders knew how to read the I Chart and Pareto Chart (as these are used in the mashup) and could work collaboratively on a relatable problem
  • respecting the experience of participants and the fact that not all data comes from graphs and tables. 

The design splits the education into three units: the first focuses on using Qlik graphs and CCDHB data to explain how to read the two charts; the second focuses on analytical questioning; and the third consists of group work using a real-life scenario which creates a relationship between the data and clinical or non-clinical experience. Before beginning the process, learners were required to undertake Qlik navigation training.  
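
For readers unfamiliar with the two chart types, the sketch below shows the conventional mechanics behind them: an I Chart (individuals control chart) places the mean between control limits derived from the average moving range, and a Pareto chart sorts categories by frequency with a cumulative percentage. The numbers are invented for illustration, and this is not the exact calculation used in our Qlik applications.

```typescript
// Illustrative sketch of the mechanics behind the two chart types taught in
// the first unit. The sample numbers are invented and are not CCDHB data.

// I Chart (individuals control chart): the mean sits between control limits
// conventionally derived from the average moving range via the XmR constant 2.66.
function iChartLimits(values: number[]): { mean: number; ucl: number; lcl: number } {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const movingRanges = values.slice(1).map((v, i) => Math.abs(v - values[i]));
  const avgMovingRange =
    movingRanges.reduce((a, b) => a + b, 0) / movingRanges.length;
  return {
    mean,
    ucl: mean + 2.66 * avgMovingRange, // upper control limit
    lcl: mean - 2.66 * avgMovingRange, // lower control limit
  };
}

// Pareto chart: categories sorted by frequency with a running cumulative share,
// showing which few categories account for most of the events.
function paretoTable(
  counts: Record<string, number>
): { category: string; count: number; cumulativePct: number }[] {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  let running = 0;
  return Object.entries(counts)
    .sort(([, a], [, b]) => b - a)
    .map(([category, count]) => {
      running += count;
      return { category, count, cumulativePct: (100 * running) / total };
    });
}

// Points outside the I Chart limits (or non-random runs within them) suggest
// special-cause variation worth investigating; points inside reflect normal
// variation, as in the two-year demand example described earlier.
console.log(iChartLimits([42, 38, 45, 41, 39, 47, 44, 40, 43, 46]));
console.log(paretoTable({ falls: 12, medication: 7, pressureInjury: 4, other: 2 }));
```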


The more challenging part of the design was integrating data literacy competencies. We used the Databilities competency model developed by Data to the People. The stakeholders determined which competencies were relevant for the Local Data Councils and which level of competence they expected from the clinical leaders. We rewrote the generic competency statements into statements that fit directly into the Local Data Council context. We designed aspects of the training session to ensure the competencies were covered and that we could assess whether or not they were achieved. When they weren’t met, we recorded the additional support required and are currently working to put those supports in place. 
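
To make that process concrete, here is one hypothetical way to structure the record kept for each contextualised competency. The field names, the level scale, and the example content are invented for illustration; they are not the Databilities model's own wording or our actual assessment schema.

```typescript
// Hypothetical structure for tracking a competency once it has been rewritten
// into the Local Data Council context. Everything here is illustrative.

type CompetencyLevel = 1 | 2 | 3 | 4 | 5; // assumed scale, not the model's own

interface ContextualisedCompetency {
  genericStatement: string;       // the statement as it appears in the competency model
  contextStatement: string;       // the statement rewritten for the Local Data Council
  expectedLevel: CompetencyLevel; // level agreed by the stakeholders
  assessedInSession: boolean;     // whether the training session covers and assesses it
}

interface AssessmentRecord {
  competency: ContextualisedCompetency;
  achieved: boolean;
  additionalSupport?: string; // recorded when the expected level was not met
}

// Invented example only.
const example: AssessmentRecord = {
  competency: {
    genericStatement: "Can interpret variation shown in a chart.",
    contextStatement:
      "Can use the ward's I Chart to decide whether a change in demand needs action.",
    expectedLevel: 3,
    assessedInSession: true,
  },
  achieved: false,
  additionalSupport: "Follow-up coaching on reading control limits.",
};
```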

Building Competence and Confidence

We've trained more than 100 clinical leaders to date. A recent analysis of our assessment results showed that 72% had achieved their expected competencies within the context of the training. Of the 106 training participants, 84 people provided overwhelmingly positive feedback, largely because they felt the training increased their confidence in working with data. They also enjoyed the combination of teaching and group work; they liked that the scenario was applicable to their work context; and they liked that the scenario was based on real data.  


In every session, people also contributed examples of what might have happened within the scenario, drawn from experiences on their own wards. When combined with numeric data, these experiences create powerful narratives and, more importantly, powerful questions. At the conclusion of the training, participants are asked to make a recommendation and back it up with reference to the data. Invariably, their recommendation is based as much on their questioning of the data as on the data itself.

When combined with numeric data, staff experiences create powerful narratives and, more importantly, powerful questions.


Combining Qlik applications, Qlik mashups, and competency-based data literacy training within the specific Local Data Council context has led to significant quality improvement initiatives, which have improved the work environment for staff and the quality of care for our patients. The accessibility of data has supported business cases for required changes in staffing levels as part of the CCDM programme. The Local Data Councils will continue to apply data literacy to ensure that the increases in staffing levels and the quality improvement initiatives are having the desired effect.


However, the approach also highlights what we still need to do, such as focusing on higher competency levels. The data literacy journey with the nursing and midwifery leaders has just begun. 

Building the Wider Data Literacy Programme

We've taken the learning from the CCDM implementation and are now applying it to other contexts: management accountants and the managers they serve. Accountants need to understand more about what is happening within the hospital to determine the possible drivers of negative financial results. This context means they need more support in understanding a range of different Qlik applications covering many areas of the hospital business.


As a result, we have written app-specific 'key things to know' slides using Qlik's story functionality. We have developed app-specific training oriented to supporting new scenarios. We have also developed a Qlik stream of user resources with a burgeoning set of stories describing how to read some of the trickier graphs as well as how to understand some key data concepts specific to our wider context (who knew that there could be so many ways of measuring full-time equivalent employees!). 


We've combined all of this work into a data literacy competency course builder within CCDHB's learning and education software. Learners can build their own training and attend at their own pace, but at the end they are assessed against the competencies built into the scenarios.

"A Powerful Piece of Kit"

Our organisation is full of people who want to engage with data for the benefit of their work, which in turn benefits our patients and the communities we serve. The greatest benefit I get from this work is seeing people who didn’t think they were good with data become more confident with it. As we move up to higher competency levels, we will endeavour to grow their confidence not only in consuming data as readers but also in becoming authors and questioning critics. As this happens, they will get better at conveying meaning in ways that are understood by the people they are communicating with.


I've also been heartened by the desire of senior staff to engage with the design of specific data literacy programmes. When I’ve asked them if this is what they want to do, I always get an instant and enthusiastic “yes!” followed quickly by, "when can we start?"  


Emma Williams refers to the combination of the core data set, Qlik applications, mashups, and the data literacy programme as a “powerful piece of kit.” It appears that this is becoming increasingly recognised within Capital & Coast District Health Board.
 

Read more about CCDHB's CCDM programme here.