In the Harvard Business Review article “Why Design Thinking Works,” the author, Jeanne Liedtka, writes about how we occasionally witness the arrival of a new way of working that leads to large improvements. Over a seven-year study, she found compelling evidence that design thinking is one such method and can lead to better results in innovation-driven projects. Liedtka defines design thinking as a way of working based on “research, an emphasis on reframing problems and experimentation, the use of diverse teams, and so on.” Not only does it create more relevant products for end users faster, but it also lowers the risk of building the wrong thing while increasing internal stakeholder buy-in.
While design thinking has gained a strong foothold in the software industry, the literature is still sparse when it comes to applying it to machine learning and big data. There is an increasing trend to make software products smarter by making them data-centric. Unfortunately, the delivery of data-centric systems is too often treated as a purely technical challenge rather than a product challenge. Our key argument for applying design thinking to these systems is that they are products with human users, just like mobile apps or websites, and need to be created with the same degree of user empathy. Taking a small amount of time up front to anticipate user needs and product risks is exactly where design thinking excels, which is why we encourage any organization to train its data practitioners in it.
Having applied the design thinking methodology across many data-driven projects, we’ve learned what works and what needs adapting. We outline below some high-level points that should get data teams started. Much of what we suggest happens at the beginning, at the “design sprint” stage: a hands-on workshop over multiple days that focuses on cycles of creativity and convergence in order to create a prototype concept ready for validation (the GV design sprint library is a great resource for more information).
Working with data is exacting. Technical requirements, regulations, legacy systems, cost considerations, organizational silos, and other dependencies all need to be accounted for to ensure a product is viable. A team can waste valuable time and money by discovering roadblocks late, when those obstacles could have been uncovered in a rapid research phase: committing to a product that cannot be launched for regulatory reasons, for example. A design sprint, run properly, should uncover unknowns, establish a timeline and budget, and set expectations that match the capabilities of the team.
For data-centric projects, this requires data specialists to perform extensive research and tailor a flexible course of activities to answer a wide range of questions, adjusting as needed when new areas of exploration are uncovered. This fluidity is vital for keeping the Design Sprint on track and reduces the risk that an early assumption invalidates later work.
Any team should prepare for each client’s unique culture and stay mindful of organizational, technical, and data silos. Knowledge and resources may well be distributed across different departments that do not speak to each other or even know of each other’s existence. Legacy systems complicate things further. These are all factors to consider before the workshop to avoid unwelcome surprises.
- Give your team the time they need to do research. We suggest at least one week of prep time for your subject-matter experts before you kick off a Sprint. During that time, the team can research existing systems and pipelines, competing products, potential machine-learning algorithms, and relevant regulations.
- Practice, practice, practice! We do a mock run of certain activities before the Sprint begins. We also offer educational workshops to our clients if their teams have no experience with design thinking.
Any successful Sprint requires individuals capable of building the product, educating others on important issues, articulating a business case, and making decisions that fit the vision. Fulfilling these roles is a hefty task when talking about complex algorithms or machine-learning tools, but a team should end up with a good mix of business leadership, production, and subject-matter experts, while still being lean. In Table 1 below, we summarize some typical responsibilities and team roles.
| Responsibility | Typical Roles |
| --- | --- |
| Regulatory | Product Owner, Regulatory & Compliance Team |
| Data Access | DBAs, DevOps teams |
| Data Transformations (e.g. ETLs, feature engineering) and SLAs (throughput, latency, allowed downtimes, maintenance schedules) | Data Engineers, Machine Learning and Data Scientists |
| Data Serving (e.g. storing derived data to OLTP systems) | DBAs, DevOps, Backend Engineers |
| Data Quality (e.g. error rates, cost of error), Monitoring and Product Analytics | Data Analysts, Machine Learning Engineers |
| Product Integration (e.g. extending existing user interfaces) | Product Owners, UI and UX Designers, Front-end Developers |
Table 1: Responsibilities and roles for building a data product.
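To make the “Data Quality” and “Monitoring” responsibilities in Table 1 concrete, here is a minimal sketch of the kind of automated error-rate check a data analyst or ML engineer might own. The field names and threshold are hypothetical, not from any particular project:

```python
# Minimal sketch of a batch data-quality check (hypothetical schema and threshold).
# A team might run a check like this on each new batch before it is allowed
# into a training or serving pipeline.

def error_rate(records, required_fields=("user_id", "event_time", "value")):
    """Fraction of records missing a required field or holding a null value."""
    if not records:
        return 0.0
    bad = sum(
        1 for r in records
        if any(r.get(f) is None for f in required_fields)
    )
    return bad / len(records)

def check_batch(records, max_error_rate=0.05):
    """Return (ok, rate); callers can alert or quarantine the batch when not ok."""
    rate = error_rate(records)
    return rate <= max_error_rate, rate

batch = [
    {"user_id": 1, "event_time": "2024-01-01T00:00:00", "value": 3.2},
    {"user_id": 2, "event_time": None, "value": 1.1},  # bad record
]
ok, rate = check_batch(batch)
```

Agreeing on what “bad” means (which fields are required, what error rate is tolerable, what happens to a failing batch) is exactly the kind of question a sprint should settle before any pipeline is built.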
We are a consultancy, and clients look to us to lead the sprint and keep it running smoothly while acting as experts on both process and subject domain. Data products — unlike consumer-focused products — require facilitation from subject-matter experts trained in leading design thinking activities. These specialists structure the activities, deal with technical gotchas, and handle skeptics. For these reasons, we train our designers in data and our developers in design processes. When we lead a sprint, we minimize the participants we bring while still playing off each other’s skills.
On the client’s side, we ask them to bring participants representing a mix of business stakeholders, decision-makers, and technical staff. The decision-makers help get everything approved, and the technical staff give informed, practical perspectives. Design thinking activities get these typically separate groups to work together and are invaluable for overcoming their inherent hierarchies (e.g. the HiPPO effect). We always work with the client before the workshop to identify these participants.
- Use stakeholder surveys and pre-workshop calls to educate yourself on your client’s internal capabilities, who their team members are, and who should be present in the workshop.
- Ensure your team knows their responsibilities, and where necessary, give them the ability to lead activities and discussions. For example, your Machine Learning engineer can help facilitate sessions on data quality.
It’s appealing in the world of data and machine learning to explore exciting opportunities with cutting-edge tech. Clients sometimes request a survey of approaches or state-of-the-art machine learning techniques that they can then ideate from. This puts the cart before the horse. Design thinking succeeds because it identifies problems and then builds solutions. Inventing problems is a great way to miss the actual needs of users while spending time, money, and energy on the wrong thing. The classic design thinking example is transportation: approaching the task of getting people from point A to point B from a tech-first perspective, it is possible to end up inventing a new type of jet engine when what people actually need is a bike.
Design thinking works by considering humans — their needs and problems — before considering the solutions. The solution to users’ problems is the goal. Work with the client ahead of the workshop and identify the customers, their needs, and a product hypothesis to focus on together.
Even if the product goal is correct, it may not be achievable with current technology, your budget, your organization’s or team’s experience, or the currently available data sets. We want to emphasize the need for iteration in data products: use the first version to gather the data that makes the second one possible. For example, before starting on the lofty goal of building a self-driving car, you might consider shipping a collision-avoidance system. Not only is the cost of error much lower, but the large-scale deployment of such a system helps gather data about varied road conditions and driver behavior. More progress in machine-learning systems is made when the right data becomes available than by trying to deploy a more sophisticated algorithm. We therefore emphasize the need for both short-term achievable goals and ambitious long-term thinking.
- Identify the end users of the system early in your process, and highlight how they are affected by its outputs.
- Strive to understand the feasibility of tackling each system feature. Ask, “What are the problems we need to solve, and why?” We do this at the beginning of a Sprint as a whiteboard exercise. Try to assess the depth of the problems, since not all are created equal.
Use the Design Sprint to identify a range of users for the final system and align on their needs. Every type of user should be covered, according to the user-centric principles of design thinking. External end users, such as your customers, and internal (technical) users, such as data scientists, analysts, or developers, need to be considered throughout.
Understanding every type of user allows you to understand what data needs to be collected from them to enable the system to function. While business users may require a traditional UI, technical users might also require APIs, database access, or cloud storage. Sprint activities like user journeys can be adapted to represent dataflows and system architecture. Depicted this way, abstract systems become concrete for the whole team.
The team should also plan where it will find users to test the prototype and how that testing will run. You can use a tool like InVision to mock up web or mobile UIs. In the case of APIs, you can create a Swagger (OpenAPI) schema for technical end users.
- Use the research stage to identify and begin recruiting users.
- Try to make sure you have internal users participating in the Sprint, even just as subject-matter experts.
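For technical end users, the “prototype” to test can be as simple as a stub that serves canned responses against the schema the team agreed on, giving integrators something concrete to react to before any model exists. A minimal sketch using only the Python standard library; the endpoint path and payload fields here are hypothetical:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical canned response matching the schema sketched in the sprint;
# real predictions would come from the eventual model.
CANNED_PREDICTION = {"user_id": 42, "churn_risk": 0.17, "model_version": "prototype-0"}

class MockPredictionHandler(BaseHTTPRequestHandler):
    """Serves a fixed JSON payload so technical users can test integrations."""
    def do_GET(self):
        if self.path == "/predictions/42":
            body = json.dumps(CANNED_PREDICTION).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep request logging quiet during demos

# Start the stub on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), MockPredictionHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "user test" for a technical audience: call the API as an integrator would.
url = f"http://127.0.0.1:{server.server_port}/predictions/42"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
server.shutdown()
```

The point is not the code but the feedback loop: internal users can exercise the contract, object to field names or latency expectations, and surface integration problems while changes are still cheap.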
Data projects often take a long time to show results; design thinking helps focus the product on delivering for users quickly and keeps attention on short-term, impactful deliverables.
We recommend running design sprints often, at regular intervals, and whenever significant iterations are planned. Even for broadly scoped projects, we suggest starting small, applying the ideas, keeping track of how the team performs, and learning from the process. As the team’s confidence grows, it can move on to bigger projects or advanced, highly technical features, repeating the cycles of learning and improvement.
We hope that this information and guidance has been useful. Thank you for reading!