Not all AI is created equal: Why do we trust some AI but reject the rest?
We trust AI when it feels like a helpful assistant, but not when it seems awkwardly human. The largest study to date on consumer responses to AI shows that how it is labeled, used, and experienced changes everything.
Do you open your phone with Face ID? Like it or not, you’re probably already using artificial intelligence (AI)—often without even realizing it. Yet despite this growing reliance, many people still hesitate when AI shifts from helping to deciding. A new study led by Meike Zehnle and Christian Hildebrand (both from the University of St.Gallen) and Ana Valenzuela (Full Professor of Marketing at Esade) sheds light on this conundrum—and suggests that the tide might be turning.
The research, published in the International Journal of Research in Marketing, analyzes the results of 440 experiments involving over 76,000 people, collected over the past two decades. It’s the largest study to date on how consumers respond to AI—and it reveals patterns that single studies alone might miss. Its central finding: consumer resistance to AI depends heavily on how AI is presented, where it is used, and how real the experience feels.
The power of the label
The study found that what we call AI shapes how people feel about it. “Labels emphasizing the physical embodiment of AI (as in robots) trigger substantially stronger negative responses compared to all other AI labels,” says Valenzuela.
When AI seems almost human, but with some odd non-human traits, it triggers discomfort and distrust
This can partly be explained by the ‘uncanny valley’ effect, discussed by Valenzuela in the Up Next Podcast. Consumers are at ease with AI when it feels helpful and non-threatening. But when AI seems almost human, with some odd non-human traits, it triggers discomfort and distrust.
Take Amazon’s Alexa. Friendly voice, clear purpose: people trust it to play music or set reminders. Compare that to AI in robot form—the research showed that embodied AI triggered the strongest negative response overall. People were most uncomfortable when AI looked or acted almost human but didn’t quite pull it off. The lesson? Don’t build AI as a human replica—people want an automated assistant, but they don’t want that assistant to stare vacantly at them!
How AI is used
It’s not just how AI looks or sounds that shapes public reaction—it’s also what it does. The studies highlight that consumer acceptance varies greatly by domain. People are generally comfortable with AI enhancing productivity or choosing movies and music, but they are much more cautious when it comes to more serious or risky decision-making.
“Responses are most negative in high-stakes domains, such as transportation, legal, or social welfare, and least negative in business environments such as operations and management,” notes Valenzuela.
Consumers happily accept AI helping with low-risk tasks like Spotify playlist suggestions
A recent news story underlines this unease. The UK government is developing an AI-based prediction tool to identify individuals most likely to commit violent crimes. The plan has raised immediate ethical concerns about bias, accountability, and human rights: critics fear the system could flag someone for a crime they would never commit. AI is only as good as the algorithms behind it—if those algorithms are developed by a group of people who don’t accurately reflect the diversity of the population, then bias and prejudice could inadvertently be built into the crime-prediction tool.
This cautious view isn’t universal. As explored in previous research by Valenzuela on how reactions to AI are shaped by cultural differences, Western societies—where personal autonomy is strongly valued—are generally more skeptical of AI than Eastern cultures, which may see it as a tool for social harmony.
However, consumers happily accept AI helping with low-risk tasks like Spotify playlist suggestions. The stakes—and the trust thresholds—are vastly different.
Trust grows with time
Despite the many concerns, people are increasingly accepting AI. Valenzuela’s research shows that consumer resistance to AI has steadily declined over time, especially since 2022, when tools like ChatGPT and DALL·E reached the mainstream.
The younger generation reports a far higher level of comfort when using AI
The public’s cognitive acceptance, or judgment of AI’s usefulness, is now almost neutral, and people are trusting AI more. It’s becoming more commonplace for people to ask AI for help with analytical or task-based jobs.
Research from Ipsos shows that the younger generation—Gen Z in particular—reports a far higher level of comfort when using AI. Growing up with AI translates into a higher level of trust in it. The older generation, however, remains more distrustful.
This trend suggests a continued move toward acceptance, particularly as AI becomes more integrated into daily tasks.
It’s all about the experience
Interestingly, the researchers discovered that faith in AI is influenced by how the interaction with it is designed. Three factors stand out: independence, personalization, and realism. In short, users prefer AI that serves as a partner rather than a controller. When consumers feel they still have agency—that they can override or opt out of AI decisions—trust in AI increases considerably.
Additionally, “Any AI that signals the ability to recognize people’s uniqueness will generate more positive consumer responses,” explains Valenzuela.
The author’s previous research on AI and human-centered design stresses a similar point: trust and transparency must be built into AI interactions. For people to forge sustainable relationships with AI, they need to understand what the AI does, why it does it, and how to interact with it.
Although AI bots that mimic human appearance can repel users, adding human touches, like responsive interfaces or the option for dialogue, can increase AI usage. The more intuitive and respectful of human needs the AI experience feels, the more likely people are to engage with it.
But there's a flip side. Highly personalized AI can also limit human growth by reinforcing past preferences and filtering out unexpected information. When algorithms only serve what users already like, they risk becoming echo chambers—reducing curiosity, learning, and even memory retention by outsourcing discovery to machines.
Designing AI with humans in mind
Resistance to AI hasn’t disappeared, but it is shrinking fast—and not by accident. Valenzuela’s research shows that consumer attitudes are shaped by choices companies and designers make every day: from how AI is labeled, to where it is used, to how much autonomy users retain.
Businesses and policymakers building the next chapter of AI would do well to understand these nuances. The goal shouldn’t be to push hard for AI adoption, but to seize the opportunity to construct clearer guardrails—deciding when AI should step forward, and when it should step back. Good design means building human-centered AI that earns users’ trust and acceptance. This is the most promising route to a future where technology truly serves people.