
On the Frontlines: Push vs. Pull Automation

Blogs | Francisco Aristiguieta, CIA | Apr 13, 2022

Weren't we going away from "push" and toward "pull"? For many years, our friends in operations management have worked hard to move from "push" systems (where things are prepared in advance and stocked away based on predictions of consumption) to "pull" systems (where things are made available just when needed, with minimal downtime).

This shift has brought significant improvements in process adaptability and product customization, and it has reduced the waste tied up in raw materials and work-in-process inventory.

The push-to-pull transformation works very well with physical products, where the cost of raw materials, storage, and half-finished items is significant. But when dealing with information, the cost of not noticing something important far outweighs the cost of keeping that information in inventory until we are ready to look at it. Because of this difference in value, we have slowly but surely been moving from having decision makers request information and wait until it is generated, to having information ready for decision makers to use whenever they want it. Over time, some of us may even allow certain pre-approved decisions to be executed without any people involved (instantaneous response), but not every process needs to reach this level of responsiveness and automation.

If different processes need different degrees of automation, then developers and users need a shared language to describe (1) the target of each automation project and (2) how to measure progress toward that target. Without this shared language, our projects may underperform, overshoot, or simply take too many iterations to get right. To build this shared language, it is useful to define levels of automation.

Level 1: Pre-order (Maximal human pull).

  • Custom requests, where a human needs something that is not ready. Typically, these are "one-off" requests that take significant time to build. Examples include building a new report or developing a new audit test.
  • Trigger: The human requests the process to run.
  • Mechanics: The automation does not exist yet; it must be built, or the request may even be "solved by hand".
  • Responsiveness: Consumed when ready, which depends on backlog and availability of the team.
  • Content: Highly customized, built specifically to provide your answer.

Level 2: Just in time, JIT (Minimal human pull).

  • Common requests, ready or easily created in the moment. The machine is always ready to answer standard questions, but the human must be interested enough in the answer to go and check the tool, which makes it easy to miss important events. JIT examples include a human looking up a "live" business intelligence report online, re-running standard audit tests, or playing a movie or song on a streaming service.
  • Trigger: The human remembers to check for updates.
  • Mechanics: The automation is pre-made; it may be a "recipe" using "live" data.
  • Responsiveness: Can be consumed at the human's leisure, ready as soon as requested.
  • Content: Highly standardized, likely part of a large set of results that the human finds useful.
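At this level the "recipe" already exists and the human only pulls it. A minimal sketch of the idea in Python (the test name, data, and threshold are all hypothetical, and the "live" data is stood in for by an in-memory list):

```python
# Hypothetical Level 2 (JIT) sketch: a pre-made "recipe" the human runs on demand.
# The logic is fixed and standardized; only the live data changes between pulls.

live_transactions = [
    {"id": "T1", "amount": 120.0, "approved": True},
    {"id": "T2", "amount": 9500.0, "approved": False},
]

def unapproved_over(threshold, transactions):
    """Pre-made audit test: large transactions missing approval."""
    return [t["id"] for t in transactions
            if t["amount"] > threshold and not t["approved"]]

# Nothing happens until the human remembers to ask:
print(unapproved_over(1000.0, live_transactions))  # → ['T2']
```

The key property is that the answer is ready as soon as it is requested, but only if someone requests it.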

Level 3: Subscription (Minimal computer push).

  • The human is periodically reminded that the standardized information is available. This addresses the JIT risk of missing important events, but the reminders can accumulate unread and be perceived as spam or clutter. For instance, the machine issues an email with highlights and links to multiple business intelligence reports that may or may not prompt human action, or it produces an email report summarizing results from recurring audit tests.
  • Trigger: Machine reminds the human to check the JIT tool.
  • Mechanics: The pre-made routine runs and is delivered on a schedule. Typically, it is part of multiple tests batched together.
  • Responsiveness: Ready and delivered before it is needed; the human can consume at their leisure. Depending on human response, the information may be stale by the time of review.
  • Content: Highly standardized, likely part of a large set of results that the human typically finds useful.

Level 4: Alerts (Maximal computer push).

  • The machine reviews multiple pieces of information to identify something that may require human attention. Examples include a hurricane siren, an email that includes only exceptions found on periodic standard tests, or an email or text message with enough information to convey urgency and trigger actions.
  • Trigger: Machine detects a condition that merits attention and alerts the human.
  • Mechanics: Pre-made and run on a schedule, but only delivered when certain conditions are met. Conditions may be human-defined or based on computer predictions of prior human response or feedback.
  • Responsiveness: The machine combs through information before any request is made. Then it triggers the human's interest, prompting the human to follow up as soon as possible.
  • Content: Highly focused. Very specific information on things that you have specifically requested or previously found interesting.
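The difference between Levels 3 and 4 boils down to a delivery condition on the same scheduled routine. A minimal sketch, with hypothetical test names and a `build_digest` helper invented for illustration:

```python
# Hypothetical sketch: a scheduled routine running standard audit tests.
# Level 3 (subscription): results are always delivered.
# Level 4 (alert): delivery happens only when exceptions exist.

def run_tests(tests):
    """Run each pre-made test and collect its exceptions (items that failed)."""
    return {name: test() for name, test in tests.items()}

def build_digest(results, alerts_only=False):
    """Return the message body to deliver, or None if nothing warrants one."""
    lines = []
    for name, exceptions in results.items():
        if exceptions:
            lines.append(f"{name}: {len(exceptions)} exception(s)")
        elif not alerts_only:
            lines.append(f"{name}: no exceptions")
    if alerts_only and not lines:
        return None  # Level 4: stay silent when there is nothing to report
    return "\n".join(lines)

# Example: two standard tests, one with findings and one clean.
tests = {
    "duplicate_payments": lambda: [("INV-100", "INV-100a")],
    "missing_approvals": lambda: [],
}
results = run_tests(tests)
print(build_digest(results))                    # Level 3: full summary, every run
print(build_digest(results, alerts_only=True))  # Level 4: exceptions only
```

In this framing, moving a process from subscription to alert is not a rewrite; it is deciding which delivery condition the human wants.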

Computer Overlords or Human in Charge?

If, someday, all processes reach the "alert" level, do we simply drop everything and do what the machine tells us, whenever it tells us? Would we effectively have computer bosses? Not really. Automation is meant to work for us, and as end users we still get to decide how far along the automation spectrum each process should go, and to set limits. That said, most people do agree to obey traffic lights and alarm clocks.

Is this consistent with your experience? Please let me know.

Francisco Aristiguieta, CIA, is responsible for internal audit analytics at Citizens Property Insurance Corp. in Jacksonville, Fla.
