A statewide online survey of 500 Arizona adults on awareness of AI-generated content, trust in news, perceived personal risk, and support for regulating deepfakes.
Overview & key takeaways
Arizonans have already internalized AI and deepfakes as a real-world risk. Awareness is high, tolerance for personal misuse is low, and there is broad appetite for labeling and regulation.
What stands out at a glance
- AI is already mainstream: 93% say they’ve seen AI-generated content online.
- News credibility is fragile: 61% would lose trust in the news if they found out images or videos used were AI-generated; only 39% say their trust wouldn’t change.
- Personal boundaries are clear: 87% say they would not be okay with AI creating compromising fake images or videos of them.
- Labeling is a near-consensus: 91% believe realistic deepfakes should always be clearly labeled as artificial.
- Little patience for AI in campaigns: 66% say AI deepfakes in political campaigns should be banned; another 29% only support them if clearly labeled. Just 5% say they should be freely allowed.
- People worry about the whole ecosystem, not a single format: 77% say all forms of deepfakes worry them, not just images or video.
All statistics are percentages of Arizona adults, weighted by age and gender to match the state population profile.
Why this matters
For policymakers, platforms, and campaigns, this dataset shows extremely low tolerance for deceptive AI — particularly when it touches news or politics.
You no longer need to worry about AI fraud. TruthScan can help you:
- Detect AI-generated images, text, audio, and video.
- Avoid large-scale AI-driven scams.
- Protect your most sensitive corporate assets.
Use the navigation chips at the top of the page to jump into specific themes: news & trust, AI in campaigns, personal risk, and demographics.
News, AI visuals & trust
AI-generated content is already part of everyday online life in Arizona, and it has a direct impact on whether people trust what they see in the news.
AI is already everywhere
Nearly every respondent has already encountered AI-generated content online.
AI in the news undermines trust
Most Arizonans say discovering that news images or video were AI-generated would make them trust the news less — especially younger adults.
Among 18–29 year-olds, about 68% say they would lose trust in the news if they learned visuals were AI-generated, compared to 56–59% among older age groups.
AI & deepfakes in political campaigns
AI-generated attack ads and fake endorsements are deeply unpopular in Arizona, and voters want legal guardrails and labels for this kind of content.
Little tolerance for AI in campaigns
Two-thirds want AI deepfakes in campaigns banned outright, and nearly another third would accept them only with clear labeling. Gender differences are small but real.
Overall, 65.9% say AI deepfakes in campaigns should be banned, 28.7% would only allow them if clearly labeled, and just 5.4% think they should be allowed without restrictions.
Voters prefer laws + technology
When asked how harmful deepfakes should be addressed, Arizonans see this as both a legal and technological problem.
- 65% want both new laws and investments in detection technology.
- 24% focus primarily on legal regulation and criminalization.
- Only 11% prefer to leave the rules to private companies alone.
For policymakers, the clear signal is that voters expect public-sector action, not just platform policies.
Personal risk & consent
People are overwhelmingly uncomfortable with their own likeness being cloned or manipulated, and they see deepfakes as a broad ecosystem risk rather than a narrow technical issue.
Almost no one is okay with personal deepfakes
When asked whether they would be comfortable with AI creating fake compromising content of themselves, opposition is nearly unanimous.
87.2% say they would not be okay with that. Only 12.8% say they “wouldn’t mind” their likeness being cloned by AI.
People worry about deepfakes in every format
When asked which type of deepfake worries them most, most Arizonans don’t single out one format — they’re concerned about all of them.
- 76.5% say all forms of AI-generated deepfakes worry them.
- 10.6% are most worried about AI-generated voice/audio.
- 7.6% about AI-generated videos, and 5.3% about images.
This suggests communications and policy should treat deepfakes as a broad trust problem, not just an image or video issue.
Who we heard from
The survey is a weighted sample of 500 adults living in Arizona, with respondents drawn primarily from the Phoenix and Tucson media markets.
Sample profile (weighted)
Weights are applied so that the sample matches Arizona’s adult population on age and gender.
Geography & methodology
The survey was fielded online among respondents in Arizona on November 17–18, 2025.
- Media markets: Roughly 83% from the Phoenix DMA, 15% from Tucson, and the remainder from Yuma or overlapping markets.
- Mode: Online survey of adults living in Arizona, using quotas and weighting.
- Weights: Post-stratification weights align the sample to the Arizona adult population on age and gender.
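For readers curious how this kind of age-and-gender weighting is commonly computed, the sketch below derives post-stratification weights as the ratio of each cell's population share to its share of the sample. The age brackets, population shares, and mini-sample are hypothetical placeholders for illustration, not the survey's actual weighting targets or data.

```python
from collections import Counter

# Hypothetical respondent records: (age_group, gender) for each respondent.
# Cell definitions and this mini-sample are illustrative, not the survey's data.
respondents = [
    ("18-29", "F"), ("30-44", "M"), ("45-64", "F"),
    ("65+", "M"), ("18-29", "M"), ("45-64", "F"),
]

# Hypothetical Arizona adult population shares by age x gender cell
# (placeholder values, not census figures; they should sum to 1.0).
population_share = {
    ("18-29", "F"): 0.10, ("18-29", "M"): 0.11,
    ("30-44", "F"): 0.13, ("30-44", "M"): 0.13,
    ("45-64", "F"): 0.17, ("45-64", "M"): 0.16,
    ("65+",   "F"): 0.11, ("65+",   "M"): 0.09,
}

# Share of each age x gender cell within the sample.
counts = Counter(respondents)
n = len(respondents)
sample_share = {cell: count / n for cell, count in counts.items()}

# Post-stratification weight per respondent: population share / sample share,
# so under-represented cells get weights above 1 and over-represented cells below 1.
weights = [population_share[cell] / sample_share[cell] for cell in respondents]

# Weighted percentages then divide the weighted sum of answers by the total weight, e.g.
# weighted_pct = sum(w * answer for w, answer in zip(weights, answers)) / sum(weights)
for cell, w in zip(respondents, weights):
    print(cell, round(w, 2))
```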