Emergency Passports: Build

Optimising online and internal systems for the UK Government’s Emergency Passport service

Client: Foreign, Commonwealth and Development Office (FCDO) | Agency: CYB

4 months | 9 people 


Overview

As part of its Enabling Emergency Travel group, the FCDO enables British citizens to apply for an Emergency Passport (EP) - a large, high-volume service.

A British citizen can apply for this if they are overseas, need to travel urgently and cannot get a full British passport in time.

The service had not been radically reviewed or changed, leaving legacy infrastructure and poor processes, with a lack of data creating inefficiencies.

A new and improved service would mean people could get the help they need more quickly and easily, while also reducing the workload for government teams.

Objectives

To identify areas for improvement and implement changes across the online public-facing EP service, as well as the systems staff use to process EP applications.

The team + my role

The team consisted of a Lead User Researcher (myself), Service Designer, Delivery Manager, Business Analyst, Technical Lead, two Software Developers, a Data Analyst and a Product Manager.

Key parts of my role involved identifying areas of improvement, developing the research strategy, conducting research sessions, helping with design iterations and managing global stakeholders.

I collaborated with our multidisciplinary team throughout the discovery, design and development process — in particular working closely with our Service Designer and Technical Lead.


Process + Tools

The team worked with an agile approach to design and development, in two-week sprints for each release. Pushing changes out in smaller batches helped us test early and often. We used Miro, Figma and Google Sheets for design, research and documentation, and Jira for delivery management.


Business + Research Constraints

This project was personally challenging for me, as no existing data analytics, research insights or service maps were available as a starting point for research. I therefore relied heavily on qualitative user interviews and extensive user testing to identify improvements; however, this came with its own challenges, as staff at the EP centres had limited availability for interviews. I asked the FCDO Product Manager to spearhead internal recruitment on our agency's behalf, which was successful: we secured more time with senior management.

The other challenge was a lack of service ownership, which made it difficult to manage multiple global stakeholder groups with differing requirements and drivers. The service was inherently policy-led, with more emphasis on efficiencies than on user-centred design. I continuously communicated my research insights to stakeholders, and involved them in observing user interviews with the public, to champion the importance of a user-centred approach.

Lastly, due to budget constraints and the quick pace of delivery, there was no time for contextual inquiry. This made it hard to identify deep-rooted problems within the service; however, by conducting upfront interviews and testing designs as frequently as possible, I was able to mitigate problems as best I could.


Research Methods

I conducted research with two distinct user groups: internal stakeholders, so we could optimise internal processes, and the general public, so we could improve the existing online public-facing journey.

Remote interviews with stakeholders

Due to the busy schedules of ground staff, I was unable to speak to them directly. However, by remotely interviewing senior management at the centres in Madrid and Singapore, I was able to drill down into their frustrations and top pain points through virtual demonstrations. I understood that this was no substitute for observational research: the way managers used the systems could differ wildly from how their staff did, and I could not capture offline actions.

Participatory workshop with stakeholders

I facilitated a participatory workshop across both centres so that senior management could bounce thoughts off each other, which gave me richer insights and helped me understand the operational nuances between the two centres. I particularly loved this session, as the teams would never normally have had that range of people in a room together identifying their pain points or coming up with ideas and solutions. I also invited the development team to observe and ask questions, so they were clued up when it came to eventually building the features.

Usability testing with the general public

To test the existing public-facing journey, I recruited members of the public who had previously been in emergency situations abroad, as they would be best placed to put themselves in the mindset of someone needing an EP. I conducted 12 usability tests at varying points in our project cycle, inviting our team and stakeholders to either take notes or observe so they could empathise with our users. The sessions were particularly eye-opening for stakeholders, who had not previously observed the difficulties users had with filling out government forms. The initial user tests enabled me to identify major service gaps and difficulties.

A participant conducting a remote usability test through our Figma prototype

User acceptance testing (UAT) for internal enhancements

After the enhancements were built within the internal systems, it was important they were tested and approved by internal staff to validate the new functionalities. I created a step-by-step test plan for a handful of employees to carry out the required steps to test the features in all instances within the prototype (e.g. for child and group applications). At the final stage of the development cycle, senior management tested the software in a real world environment and documented their feedback through a UAT test plan.


Research Analysis

Prior to the analysis, we created a data logger to capture notes from the usability testing. This allowed for quicker analysis, as it was easy to see which page each finding was associated with and whether the tester was successful. It required the note taker to familiarise themselves with the prototype beforehand, but we were able to move faster. It was also important that the note taker documented implicit behaviours and quotes, so there were sections for those too.

I worked with the Service Designer to prioritise changes using the MoSCoW method (must, should, could, won't have) and relayed these back to the development team.

Data logger used to record notes during usability testing


Top Research Findings

The major pain points for internal staff were:

  • Wasting time manually copying, pasting and duplicating data across systems that didn't speak to each other

  • Wasting time processing applications that were never paid for

  • Applicants arriving at interviews to collect their EPs with incomplete documentation, or without having paid for their application

For the general public, the pain points were:

  • Being told they were ineligible to apply near the end of the process, wasting their time

  • Confusing questions that led applicants to input irrelevant information into text fields

  • Inability to explicitly communicate vulnerability or disability information within their application


Ideation + Design

Internal systems

After discussions with the development team, I scoped out a new enhancement for the internal system with our service designer which would ensure staff only work on paid applications.

We played this enhancement back to stakeholders early on through live feedback sessions, changing the designs in front of them based on their input - something they hadn't experienced before.

Our initial idea was to move the payment request earlier in the online journey, so eligible applicants could not start their application before paying. This would mean centre staff would only see and work on paid applications, saving them time. However, when tested, this posed problems: applicants were frustrated that a non-refundable payment was requested before they had even started their application. After further design iterations and discussions, we created an unpaid queue in the internal system, so staff members could separate paid applications from unpaid ones and ultimately save themselves hours of time.

Online public facing journey

After usability testing, we iterated the service, creating 'as is' and 'to be' versions of the journey and using Figma to design the wireframes and prototype flows for testing and to support the development work.

The journey was optimised to save time by letting ineligible applicants exit sooner, allowing applicants to communicate vulnerability and/or impairments, and clearly communicating which documents were required - especially important as applicants would be in a highly stressed situation when completing their application. We showed the revised journey to our stakeholders so we were all aligned on the changes.

Example of a small chunk of the online journey flow in Figma

Helping applicants communicate vulnerability within the online public facing journey

One area that needed heavy investigation was applicants' ability to communicate their vulnerability effectively when applying for an EP. From conversations with management, we understood that identifying vulnerability wasn't easy: staff could not outright ask applicants whether they were vulnerable, as this would allow applicants to game the system and classify themselves as vulnerable to get a free EP. Instead, vulnerability was inferred from photos, current location and age, among other signals, but these methods were not always reliable.

We created a new page asking applicants to communicate any disability or accessibility requirements. Applicants felt catered for, and by getting this information upfront, staff would be better prepared to serve them. This was a topic of contention, as stakeholders argued that the new page would create more work in reviewing an additional data point and might be received badly by users. However, I was able to convince them of the changes by showing other services that had taken a similar approach, explaining the time it would save offline, and presenting the qualitative user data from usability testing in support of it.

Figma mock up of the final disability requirements page used in the journey


Measurement

During the project, our Data Analyst created a performance dashboard for the EP online service using Google Analytics, which was invaluable. It allowed us to strengthen the case for letting applicants exit the journey earlier. Using this data, I presented a benefits deck to stakeholders that communicated our hypotheses in terms of performance.

Note: we later found out that our design interventions decreased the volume of unsuitable applications by 26%, saving time and costs. 

Example of the Google Analytics Dashboard Funnel used to analyse the data


Next steps

I created a one-year research roadmap for the FCDO containing 4 projects, each building on the last in terms of discovery, build and identifying improvements for organisational change. I eventually got buy-in from the client to conduct a discovery for the next phase, which was successfully completed.