Engler shares pros and cons of AI in our world


Joe Engler
By: 
Kim Brooks
Express Editor

   Is AI, also known as artificial intelligence, good or bad? Is it helpful or harmful? Should society be cautious or embrace AI? There are so many questions surrounding AI as more and more companies create their own AI systems and applications (apps), hoping to capitalize on a technology that is here to stay.

   There are dozens of AI apps and chatbots (computer programs that converse with users) out there from massive and well-known technology companies: Google (Gemini), Microsoft (Copilot), OpenAI (ChatGPT), and Elon Musk’s xAI (Grok). How do you know which ones to trust? How do you know which app is good or bad?

   Joe Engler, a Monticello resident, is no stranger to AI. He has been working at Collins Aerospace, on and off, since 2005. He is currently a principal technical fellow at Collins.

   “Which means I spend most of my time helping units throughout Collins and the greater RTX Corporation (formerly Raytheon Technologies Corporation) world,” he explained. “We’re owned by Raytheon Technologies; we’re a subsidiary of them. Throughout the greater RTX, I help people understand AI, help them implement AI properly, to set good safeguards internally for AI.”

   Throughout Engler’s career at Collins, he’s been working with AI.

   “When I started with Collins, AI wasn’t at all what it is today,” he said. “I started working on a program that was an agent-based methodology for creating computer programs. We build a lot of electronic devices. Those devices get built in our factory and they go on to what we call a test station. A lot of times we have to write special software for the device and the test station to talk to one another.”

   Engler is credited with “Allison,” an “evolutionary program” that creates programs and solutions for those working for Collins.

   “That was our original incarnation of AI. It was fairly rudimentary. It definitely got us in the door as to doing AI,” he said.

   “Allison” stands for “All in Unison.”

   “It’s a set of agents that work together to create something that’s far greater than just the sum of the agents.”

   Engler’s experience with AI goes back 25 years or so, as he has worked to figure out the algorithms that go into AI.

   You can hardly turn on the national news or read news articles from across the world without seeing a mention of AI.

   And it really is nothing new to society.

   Engler shared the history of AI, noting its roots go back to the 1800s. Historical figures who had a hand in what would become AI, though it was not known as “AI” at the time, include Charles Babbage and Alan Turing. Turing is credited as the father of AI due in part to his work as a code-breaker during WWII.

   “This was around the ‘30s, ‘40s, and ‘50s. He created the first AI test,” Engler said. “We call it the ‘Turing test.’ That was a test to see if a machine is actually intelligent enough to fool you. So it’s been around a while.”

   Engler takes the stance that AI is neither good nor bad (in the philosophical context); it’s here to stay, and it can be both useful and harmful.

   “All AI really is is a set of mathematical algorithms that do something with the data,” he said.

   To a person, two plus two equals four. To AI, two plus two should equal four, but sometimes it could equal three or five.

   “AI can screw up sometimes,” said Engler. “Part of what causes those screw-ups is the data set that we train it with. AI is only as good as the data you train it with.”
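   Engler’s point about training data can be sketched in a few lines of Python. This is an illustration of this reporter’s own, not a Collins program: a toy model is fit to noisy examples of addition, so its answer to “two plus two” lands close to four, but not reliably on it.

```python
# A toy model trained on noisy examples of "a + b". The noise in the
# training labels stands in for flawed training data; the model's answer
# to 2 + 2 inherits that flaw. All numbers here are made up for the demo.
import random

random.seed(0)

# Training data: pairs (a, b) with labels a + b plus some random noise.
data = [(a, b, a + b + random.uniform(-0.5, 0.5))
        for a in range(10) for b in range(10)]

# Fit y ~ w1*a + w2*b with simple stochastic gradient descent.
w1, w2 = 0.0, 0.0
lr = 0.001
for _ in range(2000):
    for a, b, y in data:
        err = (w1 * a + w2 * b) - y
        w1 -= lr * err * a
        w2 -= lr * err * b

prediction = w1 * 2 + w2 * 2
print(round(prediction, 2))  # a value near, but usually not exactly, 4.0
```

   With clean labels the fitted weights would both be 1 and the model would answer exactly 4; the noisier the labels, the further the answer can drift.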

   He admitted there has been and continues to be a lot of hype surrounding AI’s reputation in society.

   “AI cannot think for itself. It cannot reason. It doesn’t understand logic very well. Things we take as simple and for granted, AI couldn’t even begin to accomplish.”

   On the positive side, Collins uses AI as an internal tool.

   “At Collins we spend a lot of money using AI internally in the company to find the benefits, to find optimizations, to find efficiencies within the company,” said Engler.

   On the flip side, AI can be harmful.

   When Microsoft put out a chatbot called Tay in 2016, it was removed from the internet within 24 hours.

   “The algorithm kept trying to learn through conversations it was having with people on the internet,” Engler offered.

   Through those conversations and the information people were feeding that chatbot, it learned to be misogynistic and racist.

   “It’s all in what you do with it and what you give it,” Engler said, plain and simple. “As far as the good versus the bad, you give it good stuff, you’re going to get good results. You give it bad stuff, you’re going to get bad results.”

   AI has drawn criticism in the art, music, and education industries. AI has been used to recreate priceless works of art, to copy copyrighted music, and to plagiarize high school and college essays and papers.

   Engler uses ChatGPT quite often. He said such systems can be “flawed” based on the content the world feeds into it, which, unfortunately, can be copyrighted material. He said there are lots of lawsuits going on right now to fight against this misuse of AI.

   “Right now, the law is that anything AI generates cannot be owned,” he said. “The problem is the data that it used to generate (something), was owned.”

   In the education world, he said there are services out there, like “Grammarly,” that teachers and administrators can use to detect the use of AI and plagiarism within a student’s work. Engler said AI can be a tool or an abuse within education, depending on how it’s used.

   “If they’re just copying word for word what was generated, that’s an abuse.”

   However, if a student uses AI to develop an outline for a paper, that’s a tool.

   “There’s nothing wrong with that in my mind,” said Engler.

   It’s much the same in the art world. Again, lawsuits and legislation are taking form to protect copyrighted images, music, text, etc.

   Engler serves on a federal NIST (National Institute of Standards and Technology) committee, working to come up with legislation on the national level regarding AI safety. He has also been reviewing legislation at the state level.

   “While we’re all moving to use AI, we’re also moving to put guardrails on AI,” he said. “Let’s face it, there are bad actors out there. Bad actors can do things with AI that wouldn’t necessarily be beneficial to society.

   “People really are scared to death of AI taking over the world,” he continued. “It certainly could happen, don’t get me wrong. But in 50 years, maybe longer. The technology that we have right now and the means by which we go about building AI will not, cannot, make it human or give it human intelligence.”

   Can AI replace a human on the job?

   “If AI can replace you in your job, you probably don’t need to be doing that job anyway,” advised Engler. “AI should be used to facilitate your job, to enhance you doing your job. If AI can replace you, maybe you need to be doing something else, or thinking about doing something else.”

   He does believe AI will one day replace some jobs. On the plus side, though, that frees up those people to go on to do something different, something better.

   “People are going to have to up-skill themselves.”

   It comes down to being cautious, understanding the limitations of AI, not believing all of the hype that is out there about AI, and taking “everything with a grain of salt.”
