The advances in artificial intelligence are quite impressive. They are also scary and have wide-ranging sociological implications. We are all familiar with The Terminator movie series, in which artificial intelligence decides that humans are not worth keeping alive. Even some of the creators of artificial intelligence programs have expressed concern that their creations could reach the same conclusion. On the positive side, AI programs can help us with productivity and research in ways that will dramatically speed up finding solutions to problems. Just as with humans, AI has a good side and an evil side.

Many people are turning to chatbots for advice. The topic doesn't matter; the programs can search through a tremendous amount of data and assemble an answer that people trust. But when a computer program lacks the ability to analyze from an empathetic or sympathetic viewpoint, that advice is purely data driven. Just like the machines in the movie, the chatbot doesn't see the intrinsic value of a human. It sees a solution to a problem it has been presented with.

In April of last year, former Florida State University student Phoenix Ikner returned to the school and killed two people while wounding several others. During the investigation it became clear that Ikner had consulted ChatGPT, a creation of OpenAI. ChatGPT and Ikner had exchanged over 200 messages in which the software advised him on how to choose a weapon, how to modify it for rapid firing, which ammunition would achieve maximum lethality, and which location on campus would provide the most targets. From an analytical standpoint, ChatGPT was simply providing data that would achieve the goal Ikner had identified.

Florida Attorney General James Uthmeier announced an investigation into ChatGPT's role in the murders. He said that if a human had given this type of advice to the killer, that person would have been charged with first-degree murder under Florida statutes. There is the potential, therefore, to charge someone with murder because of the advice the AI program gave the killer. He has not yet identified who that would be, but it stands to reason that the project coordinators or supervisors responsible for creating the program that searches for information, plus those responsible for putting safeguards in place, might face prison time.

Tangentially, Congress is looking at putting safeguards on artificial intelligence programs because of another program created by OpenAI. That program is so good at detecting security weaknesses in computer systems that its creators will not release it. In their opinion, if bad actors had access to it, they could take down any system in the world. Defense systems, power generation systems, nuclear plants; you name it, it's vulnerable. But I don't think there's anything Congress can do to prevent what happened with the Florida State University shooter.

And should someone at the company be held responsible for what their creation did? Civilly, yes. Automobile manufacturers, for example, are held responsible if a flaw in their product results in an accident that injures people. The same principle could be applied in this case. But criminally? That would open Pandora's box. In my opinion, unless they were grossly irresponsible, criminal charges should not be on the table. That would be like charging the officers of Black & Decker as accessories to murder if one of their hammers were used to kill someone.

But maybe we can use that second program created by OpenAI to find a punishment for the ChatGPT bot. If it's capable of learning, then it's capable of being punished as a deterrent against future behavior.