Navigating the Self-Service Analytics Journey at Swedbank

Qlik

Upgrading your company’s business intelligence operations requires more work than just giving everyone access to data. The self-service model sounds great on paper, but only the most technical-minded among us can decipher mountains of data from the start. Most people require a lot of help getting to that point.


A true self-service model means business units serving themselves in the absence of constant IT involvement. It also requires automating processes that provide access to data and ensure continuous testing and deployment. In an ideal world, users of all backgrounds and organizational levels should have the ability to find their own insights. However, making this model work requires providing everyone with the tools they need to fully explore the data.

A Long History of Analytics

Swedbank is a 200-year-old financial services company offering retail and corporate banking, mortgages, and transaction processing services. Our customers are primarily located in Sweden, Estonia, Latvia, and Lithuania, but we have offices in North America, China, and South Africa. We have more than 16,000 full-time employees and realize an annual total income of roughly SEK 46 billion (2020). The company handles the financial affairs of around 551,000 corporations and 7.7 million private accounts across all home markets.


Being a financial services company, we have a lengthy history of business intelligence within our operations. Finance is largely a numbers game, and data analysis has long been the best way to avoid costly guesses. Going back 15–20 years, our data foundation was a data mart we called KBA, abbreviated from a translation of customer base analysis. The tool allowed a select group of analysts to study our client data and share their findings back to the organization as pre-built data packages, which gave users insights into our customer base based on questions the analysts had prepared in advance.


However, people outside the analyst units wanted greater access to the data. They were not satisfied with waiting to receive insights from this separate group. Individually understanding the data and asking questions was the best way to develop true acumen. That was the inspiration behind the push for a new self-serve model.

Building a Qlik-Based Customer Analysis Tool

In our search to find a better way to allow units to ask their own data questions, we found Qlik. The platform gave us a user-friendly method of exploring our customer base without having to know all the questions in advance; instead, we could continuously ask and get answers as we explored. We replaced KBA with an interactive version called CAT (Customer Analysis Tool). Without having to wait for analysts to build pre-made data packages, our business users could learn to answer their own questions through the interactivity of Qlik, understanding the data intuitively from the associative experience (green, white, and grey). This allowed financial advisors, branch managers, and existing analysts to tap into the same pool of information to find answers relevant to their work.


Instead of our standard data packages, we built a bookmarking tool. Analysts still explore the data to develop their own insights, but they now organize the information into bookmarks that are sent to the relevant parties. As with the previous data packages, any user can open a bookmark, but now they can also explore the data and continue asking questions to deepen their understanding of the insights the analysts provided. That goes a long way toward decentralizing data, which is necessary for true business intelligence.

To establish a self-service data model, prioritize visualizations over raw data.


In the case of marketing campaigns or other focus activities, the bookmark system is also a way to encourage users to visit and explore data relevant to their needs. Rather than treating staff as spectators, we give them access to bookmarks highlighting key data points. Clicking a bookmark takes them to a new page with visualizations and selections from the data model. For staff who want a quick way to prioritize activities, these screens give them the tools to make smarter decisions even faster. Advanced users are still granted extra access to create and circulate their own bookmarks.

Moving from QlikView to Qlik Sense

Once a consistent group of employees was using QlikView, we began to truly understand the impact this access could have on our organizational decisions. It became important to move beyond our customer analysis tool and into other units within the organization. We also wanted to completely remove the bottleneck for self-service created by having a central analysis team. That is when we discovered Qlik Sense.

Building a self-service data model doesn’t mean you need to compromise on security.


Qlik Sense offered the opportunity to build a platform almost entirely around the concept of self-service. One of the biggest fears about the complete decentralization of data is the loss of security control. With Qlik Sense, we built a security framework with granular permissions, directly linking organizational roles to custom properties within the platform. The central team retained ownership of the roles themselves, while data access and ownership could be decentralized to each team that wanted to use the platform.


Overall, we developed three specific roles within Qlik Sense. Any user has the ability to consume published applications or create stories and bookmarks. Power users can also extend applications by creating new sheets or visualizations for public consumption. Developers, on the other hand, have the authority to create applications from scratch. They get access to their own data container/workspace where they can store QVD files and other data files they need.
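To make the division of responsibilities concrete, here is a minimal sketch of how such a role model could be expressed, with a department field standing in for a custom property. It is a conceptual illustration in Python, not Qlik Sense's actual security-rule syntax, and the capability names are invented for the example.

```python
from dataclasses import dataclass

# Conceptual model of the three roles described above: an illustration only,
# not Qlik Sense's actual security-rule engine or API.
CAPABILITIES = {
    "user": {"consume_apps", "create_stories", "create_bookmarks"},
    "power_user": {"consume_apps", "create_stories", "create_bookmarks",
                   "create_sheets", "create_visualizations"},
    "developer": {"consume_apps", "create_stories", "create_bookmarks",
                  "create_sheets", "create_visualizations",
                  "create_apps", "own_data_container"},
}

@dataclass
class User:
    name: str
    role: str         # mapped from an organizational role
    department: str   # mirrors a custom property on the platform

def may(user: User, action: str) -> bool:
    """Return True if the user's role grants the requested capability."""
    return action in CAPABILITIES.get(user.role, set())

# A power user can extend applications but not create new ones.
analyst = User("anna", "power_user", "Retail Banking")
assert may(analyst, "create_sheets")
assert not may(analyst, "create_apps")
```

The point of the model is that the central team only maintains the role definitions; which data each role can reach is decided by the teams through properties like the department field above.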


One of the other resources we used to help decentralize BI development was classroom training for Qlik developers. The idea was to give people with the interest and skills some basic training on developing Qlik applications. With COVID-19, we had to adapt, so we switched from the classroom concept to launching on-demand training, which filled the immediate need. People could now sign up for the training whenever it was convenient for them, which ultimately helped to facilitate buy-in.


We also created an information page to help with topics like accessing the platform and using data containers. It was also necessary to create a development process to ensure any application is fit to distribute. This review process applies to every developer and maintains a degree of quality control and consistency. New developers also gain confidence from knowing that their output will be reviewed before potential mistakes are distributed. As a bank, we need this kind of formal process to meet the legal and compliance standards enforced upon us; other types of companies might not need or implement every part of it.

If there is one thing you should take away from our ways of working, it is to look closer at a container concept (e.g., the Qlik Deployment Framework). Containers give you the freedom to structure your self-service environment and the flexibility to secure it in a way that fits your organization. Once implemented, you may be able to raise the concept from the Qlik platform level to the analytical level and gain its benefits throughout your analytical landscape.
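To show why we find the container concept so valuable, here is a minimal sketch of one hypothetical way to isolate each team's workspace. The folder layout, names, and path check are invented for the example; the Qlik Deployment Framework defines its own conventions.

```python
from pathlib import Path

# Hypothetical container layout: each team owns its own workspace, so data
# access can be decentralized while the overall structure stays uniform.
ROOT = Path("/qlik/containers")

def qvd_path(team: str, filename: str) -> Path:
    """Resolve a QVD file inside a team's own data container."""
    path = (ROOT / team / "data" / filename).resolve()
    # Guard against paths that escape the team's container.
    if ROOT / team not in path.parents:
        raise PermissionError(f"{filename} is outside the {team} container")
    return path

print(qvd_path("customer-analytics", "customers.qvd"))
# /qlik/containers/customer-analytics/data/customers.qvd
```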

Looking to the Future

Looking ahead, there are potential Qlik features that will help us continue our self-service journey. While Qlik Sense has allowed us to provide complete access to the data, there is still a data literacy gap for some users. One feature that could help is natural language processing (NLP). With NLP, users can ask questions in everyday language and get analysis back. This could lower the bar for data analysis and would allow users to access dashboards wherever they operate. As an example, staff in a conversation in Teams or Skype could reach Qlik dashboards within their normal workflow instead of browsing to a specific site.
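As a toy illustration of the idea, the sketch below maps an everyday-language question onto a measure and a dimension. It is not how Qlik's natural language features work internally; the vocabulary and field names are made up for the example.

```python
import re

# Toy example: pick a measure and a dimension out of a plain-language question.
# The vocabulary and field names are invented; real NLP does far more.
MEASURES = {"income": "Sum(TotalIncome)", "customers": "Count(DISTINCT CustomerID)"}
DIMENSIONS = {"country": "Country", "branch": "BranchName", "month": "Month"}

def parse_question(question: str) -> dict:
    """Extract a measure and a dimension from an everyday-language question."""
    words = re.findall(r"[a-z]+", question.lower())
    measure = next((MEASURES[w] for w in words if w in MEASURES), None)
    dimension = next((DIMENSIONS[w] for w in words if w in DIMENSIONS), None)
    return {"measure": measure, "dimension": dimension}

print(parse_question("How many customers do we have per country?"))
# {'measure': 'Count(DISTINCT CustomerID)', 'dimension': 'Country'}
```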


Adding the ability to deliver analysis through a natural language generation (NLG) interface would be another way to bridge the data literacy gap. Serving business users analysis highlights together with visualizations gives them a general understanding of what they are being shown and can inspire follow-up questions they would not arrive at just by looking at some graphs. NLG has the potential to be a great enabler of data-driven decisions, explaining data sets and analysis in terms that fit each business user's own language.
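Here is a similarly simplified sketch of the NLG idea: turning a computed comparison into a readable highlight that can sit next to a chart. The threshold and wording are invented for illustration.

```python
# Toy NLG example: turn a computed comparison into a readable highlight.
# The threshold and wording are invented for illustration only.
def describe_change(metric: str, current: float, previous: float) -> str:
    change = (current - previous) / previous * 100
    if abs(change) < 1:
        return f"{metric} is roughly unchanged compared with the previous period."
    direction = "up" if change > 0 else "down"
    return f"{metric} is {direction} {abs(change):.1f}% compared with the previous period."

print(describe_change("New mortgage volume", 1260.0, 1180.0))
# New mortgage volume is up 6.8% compared with the previous period.
```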


Going down this road will require considerable investment in creating a business glossary that connects business language with the data shown to users. However, I believe this is a natural part of the journey toward truly self-service BI. We need to focus on the people who are furthest away from data-driven decisions and find a way to bring them into the world of data.