Looking at the big picture of public power’s safety track record, our utilities and employees appear to be relatively safe. Over the past few decades, the median incidence rate has dropped from nearly seven incidents (reported illnesses or injuries) per 100 workers to fewer than three incidents per 100.
Benchmarking a public power utility’s safety data against the industry’s incidence rate, which normalizes the total number of reportable incidents by worker-hours in a year, does not give an individual utility meaningful information. A small utility might have an incident only once every 10 person-years. In the year with an incident, the utility’s incidence rate would spike, with many zero years in between. This means that smaller utilities are more likely to have skewed data, whereas larger utilities will show less variance in the incidence rate, even when their incidents increase or decrease year over year. Over the long term, the probability of an accident can be similar for utilities with wildly different annual incidence rates.
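To see why annual rates mislead for small crews, consider the standard incidence-rate formula, which normalizes incidents to 200,000 worker-hours (100 full-time workers at 40 hours a week for 50 weeks). A minimal sketch, with illustrative crew sizes and incident counts that are our own assumptions, not figures from any utility:

```python
# Standard incidence-rate normalization: incidents per 200,000 worker-hours,
# i.e., per 100 full-time workers per year.
def incidence_rate(incidents, worker_hours):
    """Reportable incidents per 100 full-time workers per year."""
    return incidents * 200_000 / worker_hours

# A 5-person crew (~10,000 worker-hours/year) with one incident roughly
# every 10 years: the rate spikes in the bad year and is zero otherwise.
bad_year = incidence_rate(1, 10_000)    # 20.0
good_year = incidence_rate(0, 10_000)   # 0.0

# A 500-person utility (~1,000,000 hours/year) averaging 25 incidents
# lands at a far more stable annual figure.
large = incidence_rate(25, 1_000_000)   # 5.0

# Pooled over the full decade, the small crew's long-run rate is modest,
# despite the alarming single-year spike.
long_run = incidence_rate(1, 10 * 10_000)  # 2.0
```

The single-year numbers (20.0 versus 5.0) suggest the small crew is four times more dangerous; the pooled number (2.0) tells the opposite story, which is the skew the paragraph above describes.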
Measuring safety over the long term will allow us to see whether we really are getting better, where our efforts are most effective, and in which areas. By comparing incident counts across equivalent worker-hours of exposure for numerous public power utilities, we started to see a clearer reference case for how frequently incidents occur. While not yet well proven, we think this alternative way to benchmark public power’s safety shows a lot of promise.
By including as many years of data as possible and making incidents proportional to exposure hours over time (e.g., halving the number of incidents for a utility with 200 worker-hours when comparing it to a utility with 100 hours), better comparisons and benchmarking become possible across a broader group. With longer historical data from more utilities, the ability to do these peer comparisons will expand, which will tease out patterns of unsafe practice, or at least allow a utility to find a more direct long-term safety performance comparison.
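The pooling described above can be sketched in a few lines. This is an illustrative example with made-up yearly figures, assuming the same 200,000-hour normalization used in standard incidence rates:

```python
def pooled_rate(yearly_records):
    """Long-term incidents per 200,000 worker-hours.

    `yearly_records` is a list of (incidents, worker_hours) tuples,
    one per year. Summing before dividing makes each incident
    proportional to total exposure, so a utility with twice the
    hours needs twice the incidents to post the same rate.
    """
    total_incidents = sum(i for i, _ in yearly_records)
    total_hours = sum(h for _, h in yearly_records)
    return total_incidents * 200_000 / total_hours

# Utility A: 4 incidents over 400,000 hours, bunched into two years.
a = pooled_rate([(0, 100_000), (3, 100_000), (0, 100_000), (1, 100_000)])

# Utility B: 2 incidents over 200,000 hours.
b = pooled_rate([(2, 100_000), (0, 100_000)])

# Their annual rates swing between 0 and 6, but both pool to 2.0,
# making them direct long-term peers.
```

Year by year, A and B look nothing alike; pooled across equivalent exposure, they are the same risk, which is the kind of level playing field the benchmarking aims for.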
We tested this method of benchmarking for a medium-sized utility in the Pacific Northwest and found that the method showed significant promise in highlighting the utility’s strengths and weaknesses compared to its peers.
The hope is that making these comparisons will show a more accurate probability of an incident over time, highlight differences between peers on a more level playing field, and allow utilities to see how they might adjust policies or practices accordingly to further reduce that probability.
To help utilities better benchmark and analyze their safety, we developed the eSafety Tracker, a service that utilities could begin to sign up for in early 2019. Utilities that have subscribed to the tracker are adding their safety data and benchmarking against other subscribers and historical data submitted to the Association’s safety awards program over the years.
The picture we have from the available data to date can only take us so far. Looking ahead, utilities will need more granular insight into how their workers are exposed, the causes and severity of incidents, and the types of trainings and briefings being undertaken, in order to make better decisions and get ahead of potential issues. Our plan is for the tracker to help utilities standardize and classify injuries and incidents, and eventually capture near misses in a more standard manner. Once the data are standardized, they can also be analyzed across categories and severity levels, such as by grouping all cut rates or minor injury rates together. Even more exciting, the Association just received a patent for the tracker’s planned predictive analytics feature, which will help public power utilities see the weak links in our processes and policies.
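Once incidents carry standard category and severity labels, the per-category breakdown is straightforward. A minimal sketch with hypothetical categories and counts (none of these figures come from tracker data):

```python
from collections import defaultdict

def rates_by_category(incidents, worker_hours):
    """Incidents per 200,000 worker-hours, broken out by category.

    `incidents` is a list of (category, severity) tuples; a
    standardized taxonomy is assumed so that categories line up
    across utilities.
    """
    counts = defaultdict(int)
    for category, _severity in incidents:
        counts[category] += 1
    return {c: n * 200_000 / worker_hours for c, n in counts.items()}

# Hypothetical year of classified incidents over 400,000 worker-hours.
incidents = [
    ("cut", "minor"),
    ("cut", "minor"),
    ("fall", "serious"),
    ("strain", "minor"),
]
print(rates_by_category(incidents, 400_000))
# {'cut': 1.0, 'fall': 0.5, 'strain': 0.5}
```

Breaking one lumped rate into per-category rates is what lets a utility see that, say, cuts dominate its numbers even while its overall rate matches its peers.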
Better data leads to better decisions. When we can identify a trend, we can learn from it and address it. Our goal is not to be “safe enough,” but to make utilities a safer place to work and to have everyone go home safely at the end of their shift.