Weight, wait don’t tell me

Trust takes a lot to build and almost nothing to lose

In opaque systems, small errors may be used as a rationale to stop usage even if, on balance, the automation or model is still better than the alternatives (including humans).



I love voice-to-text transcripts of phone messages, meetings, and especially long talks like the annual presidential address. Increasingly these are becoming available in real time, and even as translations.

The quality of transcription is imperfect, but usable in many circumstances. For some legal, medical, and other high-stakes contexts, though, it's risky and inadequate without validation methods or a human in the loop.

I personally misspell a huge percentage of words on the first try, some so badly that spell check can't reasonably predict the word I was aiming for. Unlike voice transcription, though, my errors are simple misspellings far more often than malapropisms or homophones.

I’m still listening to Kahneman’s book “Noise,” about how we can minimize errors in human judgement. A big theme is not to overestimate experts’ consistency and reproducibility, and that simple models, consistently applied, often outperform the very humans on whose judgments the models were built.
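That claim is easy to see in a toy simulation. Here's a minimal, entirely made-up sketch: an outcome driven by two observable features plus irreducible randomness, an "expert" who implicitly uses the right weights but adds occasion noise (judging the same facts differently on different days), and a simple model that applies those same weights with perfect consistency. None of this comes from the book's data; the numbers and weights are illustrative assumptions.

```python
import random

random.seed(42)

# Made-up world: a "true" outcome driven by two observable features
# plus an irreducible random component no judge can predict.
N = 1000
cases = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N)]
truth = [2.0 * a + 1.0 * b + random.gauss(0, 3) for a, b in cases]

# The expert uses roughly the right weights, but adds occasion noise.
expert_preds = [2.0 * a + 1.0 * b + random.gauss(0, 4) for a, b in cases]

# A simple model of the expert's own policy, applied with
# perfect consistency (no occasion noise).
model_preds = [2.0 * a + 1.0 * b for a, b in cases]

def mse(preds):
    """Mean squared error against the true outcomes."""
    return sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(truth)

mse_expert = mse(expert_preds)
mse_model = mse(model_preds)
print(f"expert MSE: {mse_expert:.1f}, model MSE: {mse_model:.1f}")
```

The consistent model wins not because it knows more, but because the expert's occasion noise adds error on top of the same irreducible uncertainty both face.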

Distrust of models, whether from a lack of mechanistic understanding of what drives them or an inability to reproduce consistent results, can cause us to throw the baby out with the bathwater.

A huge area of research in ML is transparency and observability. This is partly pragmatic, to drive and justify model adoption in the real world, and increasingly a matter of legal compliance requirements.

Reading Kahneman or Annie Duke or any of the other great behavioral economists makes me want to go back to graduate school, but every day we have ways to incorporate and apply their research to improve our own decision making and that of those around us.
