Older Americans Embrace AI, Yet Trust Issues Persist
Artificial intelligence has become a fixture of conversations in schools and workplaces, often leading to the assumption that its primary users are younger generations. However, a recent comprehensive survey reveals that older Americans are actively engaging with AI, raising questions about how they use the technology and what they think of it.
Conducted by researchers in partnership with the University of Michigan’s National Poll on Healthy Aging, the survey polled nearly 3,000 Americans over the age of 50. The findings indicate that a majority of older adults, 55%, have interacted with some form of AI, whether by speaking to voice assistants like Amazon’s Alexa or typing to chatbots such as OpenAI’s ChatGPT. Voice assistants proved considerably more popular: half of respondents reported using one within the past year, compared with one in four who used a chatbot.
Beyond common uses like entertainment and information retrieval, the survey uncovered more creative applications of AI among older adults, including generating text, creating images, and planning vacations. AI also plays a role in a key aspiration for many older Americans: independent living. The study found that older adults who use AI in their homes see it as helping them maintain autonomy and enhance safety. Nearly one in three reported using AI-powered home security devices, such as doorbells, outdoor cameras, and alarm systems, and a striking 96% of those users said they feel safer as a result. While concerns have been raised about the privacy implications of indoor monitoring cameras, externally focused cameras appear to offer a significant sense of security, particularly for those aging in place alone or without family nearby.
However, AI adoption among older adults is not uniform, and demographics play a notable role. The survey found that individuals in better health, with more education, and with higher incomes were more likely to have used AI-powered voice assistants and home security devices in the past year. This pattern mirrors historical adoption trends for other technologies, such as smartphones, suggesting a familiar digital divide.
A critical aspect explored by the survey was how much older Americans trust AI-generated content. The results show a divided landscape: 54% expressed trust in AI, while 46% did not. Notably, those who reported higher levels of trust were also more likely to have used the technology recently. A significant challenge is that AI-generated content can appear credible while being factually incorrect. Only half of those surveyed felt confident in their ability to spot inaccuracies in AI-produced information, and this confidence tracked with educational attainment: more educated users were more likely to believe they could catch errors. Meanwhile, older adults reporting lower levels of physical and mental health tended to trust AI-generated content less.
These findings reflect a recurring cycle in technology adoption, seen among younger demographics as well, in which early adopters tend to be those with more education and better health. They raise pressing questions about how to effectively educate all older adults about both the benefits and risks of AI, and they underscore the need for strategies that help non-users learn about the technology so they can make informed decisions about whether to use it. There is also a clear need for institutions to develop better training and awareness tools, so that older users do not place undue trust in AI when making important decisions without fully understanding the risks.
The survey offers potential starting points for developing AI literacy tools tailored to older adults. A compelling 90% of respondents said they want to know when information has been generated by AI, echoing the emerging trend of AI labels on search engine results and policies in some states requiring disclosure of AI-generated content in political advertisements. Expanding such clear and visible notices to other contexts, including non-political advertising and social media, could significantly improve transparency. Moreover, nearly 80% of older adults said they want to learn more about AI’s potential pitfalls and how to mitigate them. Policymakers have a crucial role to play in requiring notices that clearly signal AI-generated content, especially at a time when the U.S. is contemplating revisions to its AI policies that could remove language concerning risk, discrimination, and misinformation.
Ultimately, the survey underscores AI’s potential to support healthy aging. But it also shows that both excessive trust and a lack of trust in AI can be addressed through better training tools and policies that make the technology’s risks more transparent and understandable.