Much has been written recently about the emergence of Artificial Intelligence and robotics technology, and about how these technologies will destroy jobs and wreck society.

Someday, maybe. But today and for the next few years—the planning horizon of CIOs rather than academics—the reality is far different and far brighter: jobs will be enriched and the workers doing them will do them in new ways, at least for firms whose CIOs understand what’s happening.

Artificial Intelligence (AI) has been studied and discussed since the days of Marvin Minsky and Seymour A. Papert, professors who pioneered the field over 60 years ago. Despite 60 years of research and billions of dollars invested, today’s AI is far from intelligent. Yes, AIs can win at Chess, Jeopardy, and Go: narrow problem domains with very specific rules. With the right class of problem—where data searching and correlation are paramount—AI can seem amazing. But very few classes of jobs will be lost to AIs near term. Think of the letters “AI” as standing for “Augmented Intelligence,” which is the use of computers with vast data stores and terrific pattern-matching skills to advise and assist humans who have to juggle complex problems.

Have you heard the expression, “When you hear hoofbeats, think horses, not zebras”? It means “seek obvious answers,” right? In many fields, limiting oneself to the obvious answer means missing an answer that could save a bundle of money…or doom a hospital patient. The expression originated in the 1940s with Dr. Theodore Woodward, professor emeritus of medicine at the University of Maryland School of Medicine, who gave this advice to the interns he was training. Back then a new doctor—or even a seasoned clinician—couldn’t possibly remember the huge number of symptoms and lab tests needed for the differential diagnosis of every disease. And even access to comprehensive medical libraries—paper books with card catalogs—was difficult, not to mention the research skills needed to ferret out lurking “zebras” when a patient presented with “funny symptoms.”

Fast forward 70 years, to 2012 or so. Medical knowledge had expanded by a factor of ten, or maybe a hundred, but students now had access to billions of pages of medical research via laptops and mobile devices. Access to data was easy, yet if anything the problem of finding the elusive zebra was even harder than it had been for the 1940s physician, because there was just so much to sort, select, digest, and weigh.

What if the doctor had an assistant who had read—and could unerringly recall—every diagnostic fact; who had read the case histories of millions of patients; who had memorized every single fact contained in the patient’s records; and who could sift through the vast troves of data to identify not only “horses” (things the doctor would think of), but also “zebras,” things that everyone but the narrowest sub-specialist would miss? That is, if you could figure out which sub-specialist you needed and had access when you needed it. And what if that assistant whispered suggestions and opinions into the doctor’s ear? Think of the benefit to patients with rare ailments, or those who might benefit from some not-very-well-known new treatment. And think of the benefit to society if young doctors quickly gained the diagnostic and treatment accuracy heretofore obtained through 25 years of experience.
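The “assistant” described above is, at heart, a ranking problem: score every known condition against the patient’s record and surface the matches a busy clinician might overlook. Here is a minimal sketch of that idea; the condition names, findings, and scoring rule are all illustrative assumptions, not clinical data or any vendor’s algorithm:

```python
# Toy differential-diagnosis ranker: score each condition by the
# fraction of its known findings present in the patient's record.
# Condition names and findings below are made up for illustration.
KNOWLEDGE = {
    "common_flu":    {"fever", "cough", "fatigue"},
    "rare_zebra_dx": {"fever", "night_sweats", "joint_pain", "rash"},
}

def rank(patient_findings: set[str]):
    """Return (condition, score) pairs, best match first."""
    scored = {dx: len(findings & patient_findings) / len(findings)
              for dx, findings in KNOWLEDGE.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# A patient with unusual findings: the "zebra" outranks the "horse."
results = rank({"fever", "night_sweats", "rash", "joint_pain"})
```

A real system would weigh prevalence, lab values, and millions of case histories, but even this sketch shows how exhaustive recall, rather than superior reasoning, lets software surface the zebras.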

So, when will we see this assistant? How about two years ago? IBM’s Watson does more than win at Jeopardy. Using research data from online libraries, clinical results from teaching hospitals (MD Anderson Cancer Center in Houston, for example), and actual patient results obtained from the families of IBM employees and others, Watson is getting better and better at advising clinicians. And it’s not just IBM: Stanford University and the International Skin Imaging Collaboration’s Melanoma Project are helping doctors identify subtle melanoma symptoms that even expert eyes may miss.

Let’s be clear: most of us are far from ready to leave our diagnosis and treatment to an AI. But who can object to a trained doctor using automated help to identify the best possible diagnosis and treatment?

The same holds in areas such as weather forecasting, securities and commodities trading, insurance underwriting, and pricing. (In 2003 my team built an expert system that identified and priced a set of optimal mortgage loans for borrowers, out of millions of choices, in under 20 seconds.) Wherever you’re dealing with complex data and decision rules, an Augmented Intelligence can help a skilled analyst ferret out the subtle patterns and obscure choices that make the difference between a “good” outcome and a “brilliant” one.
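The mortgage-pricing example boils down to searching a huge space of candidate offers against a borrower’s situation. The sketch below is a toy stand-in, not the 2003 system: the `Offer` fields and the simplified cost formula are assumptions made for illustration:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Offer:
    rate: float      # annual interest rate, e.g. 0.045
    points: float    # upfront fee as a fraction of principal
    term_years: int

def total_cost(offer: Offer, principal: float, horizon_years: float) -> float:
    """Approximate borrower cost over their expected horizon:
    upfront points plus a simple-interest approximation of interest paid."""
    upfront = offer.points * principal
    interest = offer.rate * principal * min(horizon_years, offer.term_years)
    return upfront + interest

# Enumerate a grid of candidate offers and pick the cheapest for one borrower.
candidates = [Offer(r, p, t)
              for r, p, t in product((0.040, 0.045, 0.050),
                                     (0.0, 0.01, 0.02),
                                     (15, 30))]
best = min(candidates,
           key=lambda o: total_cost(o, principal=300_000, horizon_years=7))
```

A production system would add amortization schedules, eligibility rules, and rate-lock logic, but the shape is the same: encode the decision rules, then let the machine grind through millions of combinations a human never could.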

Robotics

When you hear “robot” do you think “Bicentennial Man” or “Westworld”? If you do, you’ve been sold a bill of goods by Hollywood. Robots today are arms mounted to conveyor belts or machine tools that perform a limited set of repetitive, perhaps dirty or dangerous, tasks in factories. An emerging class of robots is gaining locomotion (movement) and discrimination (the ability to identify objects), but these machines are still far from operating autonomously. Far more often, robotic technology is used to make workers stronger, faster, and more accurate, and to give them greater endurance.

One class of robotic augmentation is the “exoskeleton.” At BMW’s Spartanburg assembly plant, workers are testing the EksoVest, a robotic backpack that supports their upper bodies during repetitive overhead work. And at Audi, workers wear the “chairless chair,” a wearable frame that lets them sit down wherever they happen to be, with the support appearing beneath them.

Remember the movie “Aliens” in which Ripley dons the “Power Loader” exoskeleton during the climactic fight scene? While you can’t quite buy one today, several companies have announced development projects for such devices.

Another class of robotic augmentation is the “carry-all”: robots that carry objects to unburden their human “masters.” Perhaps it’s a drug-delivery cart that brings the proper supplies to the nurse for administration, or a semi-autonomous cart that follows a person around, like “Gita,” recently announced by Piaggio (parent of Vespa). These robots save steps—and time—for skilled workers, or free them from the burden of carrying heavy loads.

Augmented Reality

Just as AI can make workers smarter and robotics can make workers faster and stronger, Augmented Reality (AR) can make workers more aware of their surroundings.

Until last year, few people paid any attention to AR. Maybe they’d heard of “Google Glass,” but how many had even seen one?

Then came “Pokémon Go,” which exposed 500 million people to the wonders of AR.

As silly as that application may seem, it demonstrates the utility of superimposing information atop a person’s visual field.

In my own industry, healthcare, it’s vital to be able to categorize people for many reasons: are they staff, patient, or visitor; are they where they’re supposed to be; are there any special circumstances about which we should know?

Imagine that as you walk down the halls wearing your glasses, everyone you see has a colored outline around them: blue for staff, green for patients, yellow for contractors, purple for visitors. This identification might come via RFID badges communicating with building sensors, or later via a facial-recognition camera (à la “Windows Hello”) built right into the glasses. Using “geofencing,” it becomes easy to identify an unauthorized person (a patient in a staff area, or a Senior Living “Memory Care” resident wandering outside their section). Combine AR with AI and you can imagine some very fine discrimination: a patient who has not taken their meds, or a Senior Living resident who just lost a spouse and may need extra compassion.

It’s not just people, but things: workers at firms that inspect, upgrade or repair systems in complex environments such as factories or oil platforms spend time finding components in need of inspection. AR can allow the workers to quickly look around a room and have critical components visually highlighted, along with a readout of recent and historical operating parameters and issues. Again, by adding AI to the AR system the worker can be alerted to “zebras”: infrequent situations that warrant special attention based on collective company or industry experience.
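Alerting the worker to equipment “zebras” is, at its simplest, anomaly detection on each component’s operating history. The sketch below flags a reading that strays far from the historical mean; the sensor values and the three-sigma threshold are illustrative assumptions, far cruder than what a real AI-backed system would learn from industry experience:

```python
from statistics import mean, stdev

def flag_zebra(history, latest, k=3.0):
    """Flag a reading more than k standard deviations from the
    component's historical mean. A deliberately simple stand-in for
    'alert on infrequent situations'; the threshold k is an assumption."""
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) > k * sigma

# Five routine pressure readings for one component, then two new ones.
pressures = [101.2, 100.8, 101.0, 101.4, 100.9]
anomaly = flag_zebra(pressures, 250.0)   # warrants special attention
routine = flag_zebra(pressures, 101.1)   # business as usual
```

In an AR headset, a component whose `flag_zebra` fires would be the one highlighted in the worker’s visual field with its recent readouts attached.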

When the inspection, upgrade or repair commences, instructions can be superimposed right in the worker’s visual field, eliminating the need for reference to instruction manuals. This creates a situation akin to that of the new physician: wearing an AR display that identifies components, suggests needed tests and procedures, and provides “tips” can enable a less experienced technician to function at the level of a seasoned pro—because the seasoned pro is “right there” giving suggestions.

Who can say what might happen in the long run? Will AI and robotics someday supplant workers rather than augment them? What we can say is that in the world of today, CIOs have access to tools that make their workers far more productive than ever before. Over the next three to five years, for sure, Augmented workers will create competitive advantage for firms that embrace the technologies.

Originally published in Direction IT Magazine.

Regular Planet Mainframe Blog Contributor
Wayne Sadin is a CIO/CTO, an outsourcing executive, a Board member, and a consultant to CEOs. Mr. Sadin has specialized in IT transformations – in improving IT Alignment, Architecture, Agility and Ability. He is an accomplished speaker and writer and has been recognized by Computerworld as both a “Premier 100 IT Leader” and an Honors Program “Laureate.”
