A recent Pew Research study concluded that nine in 10 Americans are aware of the use of artificial intelligence in their daily lives. As one might expect, 61% of Americans 65 and older are concerned about the growing use of AI.
What might not be as obvious is that 42% of Americans aged 18 to 29, the digital generation, have reservations about the proliferation of AI.
Across all ages, the percentage of Americans with concerns about the development and deployment of AI is growing.
However, very few people can accurately describe what artificial intelligence is and is not. Likewise, many do not have a firm grasp of AI's current capabilities and uses.
Perceptions of AI are heavily influenced by media, particularly the arts and social media, where AI is often anthropomorphized as human replicas that develop feelings and violently turn against their creators.
The other common portrayal is a super AI that does everything better than humans and eventually becomes a tyrannical overlord.
These false or ill-informed depictions are a source of preconceptions about the use of artificial intelligence within the military, particularly about what responsible use of AI looks like and where learning and advancement actually occur.
Airman Magazine recently interviewed Col. Tucker Hamilton, 96th Operations Group commander at Eglin Air Force Base and Air Force AI test and operations chief, to discuss the current state of AI within the Air Force; ongoing research, development, and testing; and AI’s place in building the force of the future.
Hamilton is an experimental fighter test pilot in the F-35 Lightning II program and was previously the director of the Department of the Air Force – MIT Artificial Intelligence Accelerator.