Emergency Passports: Discovery

Investigating internal system processing for the UK Government’s Emergency Passport service

Client: Foreign, Commonwealth and Development Office (FCDO) | Agency: CYB

4 months | 5 people 


Overview

As part of its Enabling Emergency Travel group, the FCDO allows British citizens to apply for an Emergency Passport (EP), a large, high-volume service.

A British citizen can apply if they are overseas, need to travel urgently and cannot get a full British passport in time.

After successful feature builds in the previous sprint, I persuaded the client to let us conduct a discovery into the entire process, so we could look at the whole picture and identify the most inefficient parts of the processing.

A new and improved service would mean people get the help they need more quickly and easily, while also reducing the workload for government teams.

Objectives

To make the processing of EPs by internal staff more cost-effective and efficient by identifying data-driven, evidence-based changes.

The team + my role

Lead User Researcher (myself), Delivery Manager, Business Analyst, Data Analyst and Developer.

I planned and conducted the research abroad, analysed the insights, quantified time on task, prioritised the most inefficient parts of the process against their business criticality, and communicated the research results back to the client through research artefacts.

I collaborated closely with the Business Analyst and Data Analyst to calculate time on task and resource cost, and therefore the cost of the problems.


Planning the Research

Identifying the problem

The government was on a mission to save costs wherever possible, and had identified EP processing as a major contender for investigation. We were told the internal systems were inefficient (especially during crisis periods), but we did not know which parts, why, or to what extent. We also did not know how psychologically safe the staff processing the documents felt, what the operational aspects of the job were, or how other departments affected their work. What did the whole problem space look like?

Deciding on research methods

To understand the whole process, I flew to Madrid with my team for four days to observe staff using the systems. The trip allowed me to identify their online and offline actions, visit other departments and probe further through interviews. The methods I used were:

  • Quantitative research, measuring elapsed time between tasks

  • Contextual inquiry (observing the entire processing flow)

  • Interviews with staff, senior management and external consular staff

Identifying major problem areas through data triangulation

I used quantitative data from a performance funnel to pinpoint parts of the process with high elapsed time. This, combined with existing data from interviews, allowed me to create a data-driven process flow to show my team and stakeholders.

It made a complex journey easier to understand, got everyone on the same page and guided prioritisation, highlighting the time-consuming parts of the process and those most business-critical to improve.

High-level service map with preliminary pain points and elapsed time metrics
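
To give a sense of the quantitative side, the sketch below shows the kind of elapsed-time calculation that sits behind a performance funnel like this. It is illustrative only: the step names, field names and timestamps are hypothetical placeholders, not the real EP processing data.

```python
# Minimal sketch, assuming hypothetical event data: compute elapsed hours
# between consecutive processing steps for one application, the basic input
# for spotting slow parts of a journey.
from datetime import datetime

events = [
    {"application_id": "EP-001", "step": "application_received", "timestamp": "2023-05-02T09:15:00"},
    {"application_id": "EP-001", "step": "identity_check",       "timestamp": "2023-05-02T11:40:00"},
    {"application_id": "EP-001", "step": "senior_review",        "timestamp": "2023-05-03T10:05:00"},
    {"application_id": "EP-001", "step": "passport_issued",      "timestamp": "2023-05-03T16:30:00"},
]

def elapsed_between_steps(events):
    """Return elapsed hours between consecutive steps for one application."""
    ordered = sorted(events, key=lambda e: e["timestamp"])
    durations = {}
    for previous, current in zip(ordered, ordered[1:]):
        delta = (datetime.fromisoformat(current["timestamp"])
                 - datetime.fromisoformat(previous["timestamp"]))
        durations[f"{previous['step']} -> {current['step']}"] = round(delta.total_seconds() / 3600, 1)
    return durations

print(elapsed_between_steps(events))
# {'application_received -> identity_check': 2.4,
#  'identity_check -> senior_review': 22.4,
#  'senior_review -> passport_issued': 6.4}
```

Steps with unusually long gaps between consecutive events were the candidates for deeper investigation during the contextual inquiry.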

Preparing for contextual inquiry

With some understanding of the existing process from the previous scope of work, I created hypotheses to communicate our assumptions to the client, which gave everyone confidence that we understood the areas of focus. The hypotheses served as a springboard for the discussion guide. To minimise bias, I did not over-prepare, leaving room for exploration and probing.

As the client was joining us on the visit, one of the difficulties I foresaw was minimising discussion of solutions and the influence of their presence. I therefore made sure to explain the purpose of our visit and the research goals before starting, and organised independent observations and time alone with staff so they could speak in confidence.


Conducting the Research

Observing application processing

During the first two sessions, I observed without probing, so as not to influence staff and to capture their actions carefully. I also took note of their system setups and offline behaviours.

As the processing tasks were complicated and non-linear, I found it difficult to understand the logic and legalities behind them straight away. Staff also processed different user groups in different ways, adding to the complexity. By observing six sessions and exploring the whys during mini interviews after each one, I gradually built a clearer picture of the process.

Since I had the support of a business analyst, we took turns observing, taking notes and drawing a loose service map to better understand the real steps involved. Knowing the service to a basic level beforehand helped us ask better questions.

I also interviewed staff and senior management separately to understand deep-rooted problems and their success metrics. Additionally, I organised interviews with other departments that were indirectly part of the processing, to piece together how they influenced the main journey. This allowed us to paint a fuller picture and identify the interconnections.

At the end of each day, we had a team debrief to double check our understanding, compare notes and relay the information back to our team at home to speed up analysis afterwards.

Observational research in Madrid


Analysis + Prioritisation

What we discovered

  • Contrary to what we thought, staff didn't process EPs in the same way, even though the process was standardised. This was one big operational finding that underpinned the others.

  • Staff repeatedly lost time and forgot steps because of manual work across different systems and because of messages and calls unrelated to their cases. This led to staff frustration.

  • Due to safeguarding, each EP application was double-checked and sometimes triple-checked by senior management, leading to rework and wasted time.

“It just takes so long to do something so simple and sometimes you just forget.” - ETD Agent

Challenges with analysis

The research findings were difficult to analyse and categorise on three levels:

  1. There was a vast amount of information from many sources to unpack in just a few days, so I pulled the team together for chunky affinity-mapping workshops.

  2. There were conflicting views on what some things meant, so I put a list of clarifying questions together to ask the EP staff.

  3. There was pressure from stakeholders to see an initial set of possible solutions. I managed this by speaking to our team and categorising the findings into potential solution buckets, so we could prioritise our work and gauge the stakeholders' appetite for each one: technical, policy-focused, operational and behavioural.

Matrix and blueprint

I created a prioritisation matrix after the research analysis and discussion of solution buckets, to showcase the most crucial areas for improvement and the risks associated with them.

A snapshot of the prioritisation matrix

This was translated into a research report, which contained:

  • The what, why and where of the identified problems

  • Who the problems affected and what the implications were

  • Cost of the problems

  • Primary and secondary solutions + risks

This was combined with a service blueprint highlighting time-consuming areas, so stakeholders could marry up each finding with the specific point in the process where it occurred. The blueprint also contained the time taken for each problem and its potential cost to the government.

A snapshot of the service blueprint
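
As a rough illustration of how time on task translated into the cost figures in the blueprint, the sketch below shows the shape of that calculation. The numbers are hypothetical placeholders, not the actual FCDO figures.

```python
# Minimal sketch, with hypothetical figures: turn observed time lost per
# application into an estimated annual cost for one problem area.

def annual_cost_of_problem(minutes_lost_per_application: float,
                           applications_per_year: int,
                           staff_cost_per_hour: float) -> float:
    """Estimate the yearly cost of a single inefficiency in the process."""
    hours_lost_per_year = (minutes_lost_per_application / 60) * applications_per_year
    return hours_lost_per_year * staff_cost_per_hour

# Example: 12 minutes of manual re-keying per application,
# 30,000 applications a year, staff time costed at £25/hour.
print(f"£{annual_cost_of_problem(12, 30_000, 25):,.0f}")  # £150,000
```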


Business + Research Constraints

Our clients expected us to identify solutions by the end of the discovery work and to ideate while collecting data, due to pressing business needs. I managed this by drip-feeding information throughout the project, managing expectations by communicating the depth of the work, and presenting updates frequently to a large number of stakeholders.

Lastly, time on task during the visit could not be measured with complete accuracy: manual recording was prone to human error, and there were regular interruptions during a processor's day, including instant chats and phone calls. This was caveated in the research report.


Next Steps

Our team and stakeholders left the discovery with a prioritised backlog of work for the next project, with the time and cost savings quantified and details such as the risk and reward of implementing each solution thoroughly researched.