
DIBBs

Creating modular tools to streamline & strengthen public health data infrastructure 

Overview

DIBBs (Data Integration Building Blocks) is part of the Pandemic-Ready Interoperability Modernization Effort (PRIME), a multi-year partnership between CDC and USDS that began during the COVID-19 pandemic to address critical gaps in pandemic-ready public health technology. The pandemic revealed that the technology public health departments used to ingest data from health care providers could not handle pandemic-scale data volumes.

 

The DIBBs team was therefore formed to develop CDC-offered products that help public health departments strengthen and modernize their data ingestion infrastructure. Our team has created modular building blocks that string together to form a data pipeline, which state, tribal, local, and territorial (STLT) public health departments across the US can use to process data coming in from health care providers.

 

DIBBs has a large team with parallel efforts under the project umbrella (three work streams as of Jan 2023), and I have primarily been a part of the Streamline eCR team since June 2023. The Streamline eCR team focuses on using the DIBBs building blocks and pipeline to make eCR (electronic case reporting) more usable for STLTs.

👩🏼‍💻 My role: Product Designer, User Researcher

👥 Team: 1-2 Product Designers, 4-5 User Researchers, 1 Content Designer, 2 PMs, 10-12 Engineers/Data Scientists 

💼 Client/Partners: CDC, USDS

🕰️ Timeline: Nov 2022 to present (as of Jan 2024)

Project 1: Improved eCR Viewer

Context

In June 2023, the DIBBs Streamline eCR team was formed to help STLTs better use electronic case reporting (eCR). eCR is a new, automated way for health care providers to send case reports for reportable conditions directly from their electronic health records to public health departments in real time.

Challenge & goals

GearFit required a lot of extra work from acquisitions SMEs (subject matter experts), who had to respond to and resolve each individual submission. With the influx of submissions, SMEs were completely overwhelmed trying to process all of the feedback and help aircrew with their problems. Compounding the problem, our team had never built features to process feedback in bulk because traffic had been low for two years.

 

We were suddenly faced with the question: how could we ensure all of the feedback we received would make a tangible impact?

 

The goals of this effort were to:

  1. Better understand current processes for changing and improving gear in the Air Force so that we could see where GearFit fit into the bigger requirements and acquisitions picture

  2. Create a short-term solution to ensure that the feedback could be acted upon while we built out more functionality to handle higher volumes of feedback moving forward

[Screenshot: The feedback view SMEs see in GearFit when responding to a submission]

My role

I led the research and design for the first phase of this effort: I developed the research strategy and plan, ran the research, and was the only designer working on the product that came out of it. I collaborated with the team throughout the research process, particularly with the other designer and the PM. The other designer led the second phase of research while I continued designing.

Research

I planned a generative & evaluative research effort to better understand processes for gear change and improvement and to gather feedback on GearFit. I collaborated with the PM and the other designer to align on what we wanted to learn and what assumptions we had going into the research.

We conducted semi-structured interviews with 13 people across a variety of roles. I planned and led synthesis sessions with the PM and designer to group what we learned into themes to put together the final report. I also created an assumption tracker to validate our initial assumptions after each interview, document what we learned over time, and adjust who we recruited and the questions we asked to fill in knowledge gaps. 

[Screenshot: Our synthesis process, intentionally shown at low resolution to protect the data]

[Screenshot: Our assumption tracker matrix]

Prototype

Our research gave us a more holistic understanding of how feedback could fit into the existing processes for modifying current gear and acquiring new gear. The bad news: we learned that the majority of decisions were made at gatherings held only twice a year, which gave us a strict deadline to deliver some kind of solution.

Since we could not develop features to make a tangible improvement in understanding feedback in bulk within that timeline, I decided to experiment with designing a Quarterly GearFit Data Report to send to the people who make gear decisions.

I created a master spreadsheet to run analyses on the raw feedback and then organized the data into a report format. I deliberately kept the analyses simple enough that they would be feasible to implement as features on the website in the future.

[Screenshot: An example of the analysis conducted on the data. For any gear item with many submissions, I read through the feedback for common sentiments, then used keyword matching to automatically count how many submissions mentioned each issue. This method wasn't perfect, but it gave a decent sense of the most common feedback at a glance.]
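For the curious, the keyword-count approach works roughly like the hypothetical Python sketch below. The real analysis lived in a Google Sheet, and the submissions, themes, and keywords here are made up for illustration.

from collections import Counter

# Toy feedback submissions for a single gear item (illustrative only)
submissions = [
    "The helmet strap digs into my chin on long flights.",
    "Strap is uncomfortable, and the visor fogs up at night.",
    "Visor fogging makes night operations difficult.",
]

# Keyword groups per sentiment theme, chosen after reading the feedback manually
themes = {
    "strap discomfort": ["strap", "chin"],
    "visor fogging": ["visor", "fog"],
}

counts = Counter()
for text in submissions:
    lowered = text.lower()
    for theme, keywords in themes.items():
        # Count each submission at most once per theme
        if any(keyword in lowered for keyword in keywords):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: {n} of {len(submissions)} submissions")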

[Screenshots: Example pages from the Data Report, showing an overview of the gear items aircrew submitted the most feedback about that quarter and the sentiment analysis for a specific gear item]

Validate

While our team was really excited about the data report, we weren't sure it would be as impactful as we had hoped. We decided to send the report to decision makers across the Air Force to gather feedback on whether it was valuable and, if so, how to improve it.

Iterate

We received overwhelmingly positive feedback on the value of the GearFit Data Report, which gave us enough validation to create an improved report for the next quarter.

 

Our two main learnings from this first round of feedback were that (1) the report was overwhelmingly long, and (2) presenting the data in charts looked pretty but wasn't particularly useful; decision makers preferred simple tables of the data. With this feedback, I streamlined the report, removing the charts and reducing the overall page count.

[Screenshot: New section design with tables instead of charts to display the sentiment analysis]

Implement

We sent the second Quarterly GearFit Data Report to an even wider audience of decision makers and conducted a second phase of research, led by the other designer, to understand how they were using the data in the reports and what data was most important to them.

We learned that the report wasn't usually identifying new gear issues for decision makers; instead, it gave them a volume of data and evidence they hadn't had access to before to back up known problems, prioritize them, and advocate for change. And since the data was already nicely "packaged," they could easily drop it into a one-pager to brief up to their leadership.

Outcome

The Quarterly GearFit Data Reports enabled our team to ensure the feedback aircrew submitted was actually influencing gear change, while also helping us prioritize which data analytics features would be the most valuable to develop first. The reports served as a no-code functional prototype for our data analytics feature: a relatively low-effort, low-risk way to learn whether data analytics features would be valuable to users before sinking time into developing them.

 

One piece of feedback we received from a decision maker sums up the level of impact these reports had on our users: "Please keep these quarterly reports coming so our A3/A5’s can gauge needs and voice of our supported personnel. I can see where these inputs will influence future APEC priorities as it’s the best data backed product we’ve seen."

What I learned & what I would have done in a perfect world

I loved working on this project because it was a chance to show that we don't always need to develop a new feature to deliver value to real people and solve problems. I also enjoyed applying user experience best practices to a very long document, working through the report's formatting and information architecture (for example, making sure readers could get back to the table of contents from anywhere in the doc). Additionally, I really got to brush up on my spreadsheet skills (or really, bending Google Sheets to my will). It was also a great learning experience to lead and conduct a more complex generative research effort, trying to understand the complex space of acquisitions, something people can devote their whole careers to.

At the same time, the research for this project made me realize how broken the acquisitions and requirements process is in many ways. It made me question how much a feedback application could really contribute to making gear better for aircrew when it took over 10 years to go from a new requirement to gear being available for aircrew to use. So in an ideal world, I would have taken a tech-agnostic angle, rather than just throwing some tech at the problem and hoping it made a dent. I wish I could have taken a holistic approach to understanding and improving gear acquisitions in the Air Force, but unfortunately I was hired to design a feedback website, so that wasn't in the cards this time!
