(Column) Understanding AI and its myriad uses

Published 6:30 am Wednesday, August 23, 2023

When you hear or read news about artificial intelligence — or AI as it is commonly called — it’s easy to assume it is some new-fangled technology that may impact you someday, but not very much right now.

It can seem sort of like something out of “The Jetsons” cartoon show, with flying cars and a robot maid named Rosie.


Or something more foreboding, like HAL from “2001: A Space Odyssey,” may come to mind.

Actually, the basics of AI date back to the computer science of the mid-1950s, well before “The Jetsons.”

Beginning this week and continuing over the next few months, CNHI reporters will be looking at the many ways AI is being used, the ways it helps us and the concerns many have about its current and future applications.

So, what exactly is AI? That’s not an easy question to answer, but here’s one pretty solid attempt.

In an article on the Popular Science website back in February, Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory at MIT, defined it this way:

“Artificial intelligence is about the science and engineering of making machines with human-like characteristics in how they see the world, how they move, how they play games, even how they learn.”

If you use Siri or Alexa, you are using a technology powered by AI.

When you are on Facebook or other social media sites, AI is being used to gauge your preferences and suggest pertinent material. This, as we’ll report, can be a matter of concern.

If you use Waze or some other GPS system — yup, you guessed it. You’re using AI.

Businesses use it for some forms of customer service. Personally, I find that application annoying. I still want to talk to a real person.

We’re beginning this report with a focus on education, where AI is being used in areas ranging from lesson planning to virtual tutoring and collaborative learning.

There are ethical concerns, including students using AI to complete assignments. Some software can help detect that, but it is not foolproof.

Health care, the military, finance and banking, emergency response and, yes, journalism will be other areas our reporters explore.

Our reporters have done and continue to do a good deal of research, but what you read won’t be written by an AI tool like ChatGPT, Google’s Bard or Microsoft’s Bing.

While those can be useful parts of initial research, we have made it clear to our reporters and editors that any information garnered by using AI tools ALWAYS needs to be confirmed, sourced and corrected by humans.

To be clear: We do not publish stories written by any AI technology.

Many other news producers have similar standards. They all should.

There’s a lot of hype and misinformation about this technology. We hope our reporting will help clarify and verify the good and the bad.

We begin already knowing that the same technology that can suggest a fun or newsworthy video to watch online or help you avoid a traffic jam can also be used to provide seriously flawed — and fake — information on important issues, like the COVID-19 vaccines and the 2020 presidential election.

AI has the potential to be of great use. It can also be hijacked and used to cause harm.

Our goal in this report is to present the most realistic picture of AI we can.

Dennis M. Lyons is Vice President, National Editor for CNHI.