By John P. Desmond, AI Trends Editor
The AI stack defined by Carnegie Mellon University is fundamental to the approach being taken by the US Army for its AI development platform efforts, according to Isaac Faber, Chief Data Scientist at the US Army AI Integration Center, speaking at the AI World Government event held in-person and virtually from Alexandria, Va., last week.

"If we want to move the Army from legacy systems through digital modernization, one of the biggest issues I have found is the difficulty in abstracting away the differences in applications," he said. "The most important part of digital transformation is the middle layer, the platform that makes it easier to be on the cloud or on a local computer." The desire is to be able to move your software platform to another platform, with the same ease with which a new smartphone carries over the user's contacts and histories.
Ethics cuts across all layers of the AI application stack, which positions the planning stage at the top, followed by decision support, modeling, machine learning, massive data management and the device layer or platform at the bottom.
"I am advocating that we think of the stack as a core infrastructure and a way for applications to be deployed and not to be siloed in our approach," he said. "We need to create a development environment for a globally distributed workforce."
The Army has been working on a Common Operating Environment Software (Coes) platform, first announced in 2017, a design for DOD work that is scalable, agile, modular, portable and open. "It is suitable for a broad range of AI projects," Faber said. For executing the effort, "the devil is in the details," he said.
The Army is working with CMU and private companies on a prototype platform, including with Visimo of Coraopolis, Pa., which offers AI development services. Faber said he prefers to collaborate and coordinate with private industry rather than buying products off the shelf. "The problem with that is, you are stuck with the value you are being provided by that one vendor, which is usually not designed for the challenges of DOD networks," he said.
Army Trains a Range of Tech Teams in AI
The Army engages in AI workforce development efforts for several teams, including leadership, meaning professionals with graduate degrees; technical staff, who are put through training to get certified; and AI users.
Tech teams in the Army focus on different areas, including general-purpose software development; operational data science; deployment, which includes analytics; and machine learning operations, such as the large team required to build a computer vision system. "As folks come through the workforce, they need a place to collaborate, build and share," Faber said.
Types of projects include diagnostic, which might combine streams of historical data; predictive; and prescriptive, which recommends a course of action based on a prediction. "At the far end is AI; you don't start with that," said Faber. The developer has to solve three problems: data engineering, the AI development platform, which he called "the green bubble," and the deployment platform, which he called "the red bubble."
"These are mutually exclusive and all interconnected. Those teams of different people need to programmatically coordinate. Usually a good project team will have people from each of those bubble areas," he said. "If you have not done this yet, do not try to solve the green bubble problem. It makes no sense to pursue AI until you have an operational need."
Asked by a participant which group is the most difficult to reach and train, Faber said without hesitation, "The hardest to reach are the executives. They need to learn what the value is to be provided by the AI ecosystem. The biggest challenge is how to communicate that value," he said.
Panel Discusses AI Use Cases with the Most Potential
In a panel on Foundations of Emerging AI, moderator Curt Savoie, program director, Global Smart Cities Strategies for IDC, the market research firm, asked what emerging AI use case has the most potential.
Jean-Charles Lede, autonomy tech advisor for the US Air Force, Office of Scientific Research, said, "I would point to decision advantages at the edge, supporting pilots and operators, and decisions at the back, for mission and resource planning."

Krista Kinnard, Chief of Emerging Technology for the Department of Labor, said, "Natural language processing is an opportunity to open the doors to AI in the Department of Labor. Ultimately, we are dealing with data on people, programs, and organizations."
Savoie asked what big risks and dangers the panelists see when implementing AI.
Anil Chaudhry, Director of Federal AI Implementations for the General Services Administration (GSA), said in a typical IT organization using traditional software development, the impact of a decision by a developer only goes so far. With AI, "You have to consider the impact on a whole class of people, constituents, and stakeholders. With a simple change in algorithms, you could be delaying benefits to millions of people or making incorrect inferences at scale. That's the most important risk," he said.
He said he asks his contract partners to have "humans in the loop and humans on the loop."
Kinnard seconded this, saying, "We have no intention of removing humans from the loop. It's really about empowering people to make better decisions."
She emphasized the importance of monitoring the AI models after they are deployed. "Models can drift as the underlying data changes," she said. "So you need a level of critical thinking to not only do the task, but to assess whether what the AI model is doing is acceptable."
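Kinnard's point about drift can be made concrete. One common screening check compares the distribution a model was trained on against the data it is scoring now; a large divergence is a cue to re-examine the model. The sketch below is purely illustrative (the `psi` helper and the thresholds in its docstring are conventions from industry practice, not anything the Department of Labor described):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    A common rule of thumb: PSI < 0.1 suggests little drift,
    0.1-0.25 moderate drift, and > 0.25 significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against zero-width range

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / len(sample), 1e-6) for c in counts]

    e = bin_fractions(expected)
    a = bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Identical samples score near zero; a shifted sample scores much higher.
train = [i / 100 for i in range(100)]
shifted = [0.5 + i / 100 for i in range(100)]
```

Calling `psi(train, train)` returns roughly zero, while `psi(train, shifted)` lands well above the 0.25 "significant drift" threshold, which is the kind of signal that would trigger the human review Kinnard describes.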
She added, "We have built out use cases and partnerships across the government to make sure we're implementing responsible AI. We will never replace people with algorithms."
Lede of the Air Force said, "We often have use cases where the data does not exist. We cannot explore 50 years of war data, so we use simulation. The risk in teaching an algorithm that way is the 'simulation-to-real gap,' which is a real risk. You are not sure how the algorithms will map to the real world."
Chaudhry emphasized the importance of a testing strategy for AI systems. He warned of developers "who get enamored with a tool and forget the purpose of the exercise." He recommended the development manager design an independent verification and validation strategy. "Your testing, that is where you have to focus your energy as a leader. The leader needs an idea in mind, before committing resources, on how they will justify whether the investment was a success."
Lede of the Air Force talked about the importance of explainability. "I am a technologist. I don't do laws. The ability for the AI function to explain itself in a way a human can interact with is important. The AI is a partner that we have a dialogue with, instead of the AI coming up with a conclusion that we have no way of verifying," he said.
Learn more at AI World Government.