McKesson / Ontada:
Task Management Audit & Reporting

The Challenge

Within a clinical office workflow, every incoming phone call or internal assignment generates a Task inside the EHR. Each task may include multiple events, often handled by different team members, and can be transferred between users as work progresses. While the task model effectively tracked activity, administrators lacked visibility into how work was actually being performed across teams, locations, and time.
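To ground the discussion, here is a minimal sketch of how such a task model might be represented. The names and fields are illustrative assumptions, not the actual EHR schema:

  from dataclasses import dataclass, field
  from datetime import datetime

  @dataclass
  class TaskEvent:
      # One unit of work inside a task: a call note, a triage step, a transfer, etc.
      event_id: str
      performed_by: str       # user who handled this event
      action: str             # e.g. "created", "transferred", "note_added", "closed"
      timestamp: datetime

  @dataclass
  class Task:
      # A task accumulates events, possibly handled by different users over time.
      task_id: str
      practice: str
      location: str
      task_type: str          # e.g. "phone_call", "internal_assignment"
      priority: str
      created_at: datetime
      closed_at: datetime | None = None
      events: list[TaskEvent] = field(default_factory=list)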

I was initially asked to design a simple audit report and dashboard that would allow administrators to see what tasks were completed, what remained open, who worked on them, and how long each task took to complete. As discovery progressed, it became clear that a single audit view would not be sufficient. What began as a lightweight MVP quickly expanded in both scope and complexity.

The Process

Discovery & Research

I conducted user and stakeholder interviews with administrators, operational leaders, and clinical staff. Early findings revealed that stakeholders were not just looking for accountability—they needed actionable insight. A flat audit log on individual tasks would not answer the broader operational questions they were trying to solve.

To meet these needs, we determined that multiple reports, powered by Datanyx, would be required to support different analytical perspectives.

Ideate & Prototype

What began as a straightforward audit dashboard evolved rapidly through ideation sessions with the development team. Early exploration revealed that the EHR itself had limited capability to support the level of data aggregation and analysis required. As a result, we transitioned reporting to the Datanyx platform, which allowed greater flexibility and performance at scale.

While the initial goal was a single, consolidated report, iterative prototyping explored as many as three separate reporting models. Through usability testing and technical evaluation, we ultimately found the right balance in a two-report solution: Task-level and Event-level reporting. This approach optimized usability while preserving analytical depth and ensured the system could efficiently handle the high volume of data required to support enterprise-scale practices.

Defining the Data

We began by collecting every question administrators wanted answered by the reporting system. From there, we defined the complete set of data points required to support those questions. During this process, we identified several gaps—key metrics that were not explicitly tracked but could be derived using existing reference data. These new data points were formalized to support more meaningful reporting.
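As an example of how a derived metric can come from existing reference data: if expected turnaround times already exist per task type, an "overdue" flag can be computed on demand rather than stored. The task types and turnaround values below are hypothetical:

  from datetime import datetime, timedelta

  # Hypothetical reference data: expected turnaround per task type.
  EXPECTED_TURNAROUND = {
      "phone_call": timedelta(hours=4),
      "refill_request": timedelta(hours=24),
      "internal_assignment": timedelta(hours=8),
  }

  def is_overdue(task_type: str, created_at: datetime, closed_at: datetime | None) -> bool:
      # Derive the flag from fields the system already tracks.
      deadline = created_at + EXPECTED_TURNAROUND[task_type]
      return (closed_at or datetime.now()) > deadline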

Performance quickly became a critical concern. Even a single office could generate hundreds of tasks and thousands of task events per day. Our largest stakeholder, Texas Oncology, operated across 36 offices, requiring data to be aggregated and filtered by practice, location, provider, task type, and individual events—all while maintaining acceptable speed and usability.
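One common way to keep reporting responsive at that scale is to pre-aggregate raw task data into daily rollups keyed by the filter dimensions, so dashboards read compact summary tables instead of scanning every row. The sketch below shows the general technique in Python with pandas; it is an assumption for illustration, not a description of the actual Datanyx implementation:

  import pandas as pd

  def build_daily_rollup(raw_tasks: pd.DataFrame) -> pd.DataFrame:
      # raw_tasks: one row per task; column names are illustrative.
      # "created_at" and "closed_at" are datetime columns; "hours_to_close" is numeric.
      return (
          raw_tasks
          .assign(day=raw_tasks["created_at"].dt.date)
          .groupby(["day", "practice", "location", "provider", "task_type"])
          .agg(
              tasks_created=("task_id", "count"),
              tasks_closed=("closed_at", "count"),      # count skips nulls, i.e. open tasks
              avg_hours_to_close=("hours_to_close", "mean"),
          )
          .reset_index()
      )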

Defining the Reports

We ultimately designed two distinct but integrated reports, each optimized for a different level of analysis.

Task-Level Report

This report provided a high-level view of task activity across practices and locations, enabling administrators to identify trends and operational bottlenecks. Key questions it answered included:

  • How many tasks were created within a specific location or practice?

  • What was the average time to completion?

  • Which task types consistently took longer to close?

  • Which tasks remained open?

  • What was the average close time by priority level?

  • Which tasks were worked on by a specific provider or user?

Each task appeared as a single entry, making this report ideal for trend analysis at the practice, location, or user level.
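To make those questions concrete, the sketch below shows how they might translate into aggregations over a one-row-per-task table. The column names and the use of pandas are illustrative assumptions:

  import pandas as pd

  def task_level_report(tasks: pd.DataFrame) -> dict:
      # Aggregations corresponding to the task-level questions above.
      closed = tasks.dropna(subset=["closed_at"])
      hours = (closed["closed_at"] - closed["created_at"]).dt.total_seconds() / 3600
      return {
          "tasks_per_location": tasks.groupby("location")["task_id"].count(),
          "avg_hours_to_close": hours.mean(),
          "slowest_task_types": hours.groupby(closed["task_type"]).mean()
                                     .sort_values(ascending=False),
          "open_tasks": tasks[tasks["closed_at"].isna()],
          "avg_close_by_priority": hours.groupby(closed["priority"]).mean(),
          "tasks_by_user": tasks.groupby("assigned_user")["task_id"].count(),
      }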

Event-Level Audit Report

The second report focused on the granular activity within tasks, capturing every individual event. This allowed for deeper auditing and cross-referencing, answering questions such as:

  • Who worked on a specific event?

  • How much time was spent on each event?

  • What actions were taken within a task, and in what sequence?

For example, while the Task Report could show how many high-priority tasks a location closed in a week, the Event Report could reveal how long a specific nurse spent completing assigned events across all of her tasks.
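Because each event carries a user and a timestamp, per-event time can be approximated as the gap since the previous event in the same task, then summed per user. The sketch below illustrates that derivation; column names are assumptions:

  import pandas as pd

  def hours_per_user(events: pd.DataFrame) -> pd.Series:
      # events: one row per task event, with task_id, performed_by, timestamp.
      events = events.sort_values(["task_id", "timestamp"])
      gaps = events.groupby("task_id")["timestamp"].diff()  # NaT for each task's first event
      hours = gaps.dt.total_seconds() / 3600
      # Attribute each gap to the user who performed the event that closed it.
      return hours.groupby(events["performed_by"]).sum()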

Impact

These reporting capabilities had an immediate and measurable impact. Administrators gained visibility into workflow inefficiencies, enabling them to identify bottlenecks, rebalance workloads, and reduce task closure times. The data also surfaced training gaps and role-specific challenges, allowing practices to refine processes and update training requirements.

Ultimately, the system helped practices operate more efficiently while maintaining a clear audit trail—supporting both operational improvement and patient care.

Key Takeaway

What began as a “simple” audit request quickly evolved into a robust reporting ecosystem. By pushing beyond the initial ask and focusing on the underlying questions stakeholders were trying to answer, we uncovered deeper needs within the data. Through close collaboration and user-centered discovery, we delivered a solution that not only tracked activity, but meaningfully improved how practices worked—and how they served their patients.
