
21st October 2025
04:19pm BST

One of the highest-ranking officials in the US Army has revealed that he makes use of AI chatbots when making crucial leadership decisions.
In an interview with the US outlet Business Insider, Maj. Gen. William Hank Taylor made clear how much he relies on tools like ChatGPT when making calls that could affect thousands of soldiers.
General Taylor, commander of the 8th Army, said: "As a commander, I want to make better decisions," adding: "I want to make sure that I make decisions at the right time to give me the advantage."
The general explained how "Chat and I" have become "really close lately."
Per the Business Insider report, the Major General isn't using ChatGPT the way a school pupil might to cheat on homework. Instead, his conversations with the chatbot help him develop insights that feed into a decision-making framework known as the "OODA Loop."
Proponents of the OODA (observe, orient, decide, and act) Loop aim to move decisively before the enemy does, gaining an advantage on the battlefield.
Military officers like Taylor believe tools such as artificial intelligence will be key to building a better understanding of enemy forces, allowing them to make better decisions at speed in battle scenarios.
The Major General isn't alone in attempting to harness the power of AI as a tool of war.
The US Secretary of the Air Force recently described AI as the key factor that is "going to determine who's the winner in the next battlefield," adding that "we're going to be in a world where decisions will not be made at human speed. They're going to be made at machine speed."
Others, however, have been keen to point out the serious risks of trusting AI with so much responsibility.
Speaking to Newsweek, the co-founder of the NGO advisory body World Digital Governance warned that "more involved questions" could put confidential information in the hands of chatbots.
He suggested that we need a clearer picture of where the data is being held, and by whom, before trusting chatbots with information that could put national security at risk, adding: "For these models to be effective and give you a meaningful response, they need a lot of context."