AI design is a choice — and people deserve better options

By prioritizing human-centric innovation, AI can empower users and foster trust, satisfaction and mutual enrichment between humans and machines.

Ana Valenzuela

In the first part, we look at how AI constrains the human experience. 


When Ana Valenzuela, professor in the Full Time MBA and Executive MBA programs at Esade, and a global team of researchers examined the impact of AI on human experience, they found that what’s good for business isn’t necessarily good for the consumer.

Their analysis, published in the Journal of the Association for Consumer Research, found that rather than enhancing the consumer experience, AI interactions reduce and constrain consumers' options. The impact goes further than limiting interactions: AI risks triggering a loss of skills akin to that last seen in the Industrial Revolution, only this time it may affect knowledge workers and our broader emotional and social skills.

But unlike the water wheels and steam engines now confined to history, this time it's personal. What is at stake is people's ability to interact naturally, since we are adapting our behavior to fall in line with machines.

With clear implications for the future of work and society, Valenzuela and her co-authors say it’s time for AI design to focus on the needs of people — and the benefits for business will follow. 

Trends in AI regulation

The future of AI isn’t totally dystopian — there are many benefits for users, not least advances in medical research and treatment. But at the same time, the risks of developing and deploying the technology without oversight cannot be ignored.  

Regulators are under pressure to act quickly to protect users, and there has been some progress. In August, the world's first piece of legislation aimed at regulating AI came into force in the form of the European Union's AI Act. The Act identifies uses that threaten fundamental individual rights, including behavioral manipulation, emotion recognition and social scoring.

In the US, the Biden administration set the ball rolling with the Blueprint for an AI Bill of Rights, an executive order on safe AI and a National Security Memorandum on ethical standards for AI in national security. But there is no federal legislation regulating the use of AI, and Donald Trump has pledged to favor innovation over regulation.

Whichever AI path the new US administration chooses to follow, the regulation now in place or under discussion around the world focuses on objective criteria such as privacy or bias in algorithmic decision-making. While undoubtedly relevant, these criteria overlook the impact of AI on human psychology.

Designing the future of AI

The research from Valenzuela and co-authors highlights three areas of concern they say are particularly relevant to policymakers:  

  • Agency transference: people handing over agency to AI, losing autonomy and skills 
  • Parametric reductionism: AI translating complex human behavior, identity and preferences into machine-readable data points 
  • Regulated expression: AI overriding authentic communication and constraining self-disclosure

Drawing inspiration from the psychological phenomena observed in their research, Valenzuela and co-authors have outlined four examples of design-based innovations that can tackle the issues they identified. 

  1. Diversify choices 

    Predictive AI offers options based on past behavior, limiting the choices available and herding consumers down a rabbit hole of increasingly niche material. Instead of offering more of the same, algorithms should be designed to help people discover new products and services (a sketch of one such approach follows this list).

  2. Expand exposure 

    Similarly, AI is often criticized for creating echo chambers that validate beliefs rather than challenge them. If the motivation behind design is to expand knowledge, rather than limit it, consumers will be exposed to new perspectives and a wider range of opinions. 

  3. Enhance functionality 

    On the surface, AI may appear to personalize the user experience — but consumers have little or no control over the system itself. TikTok, for example, may offer a stream of dance videos because a user paused on one — but was it because they liked the song, or wanted to learn the moves? Building in options for more consumer control can benefit both sides. If the user can specify they want to learn a new skill and the content reflects that, they’re more likely to return to the platform, and even be willing to pay for the service. 

  4. Don’t anthropomorphize AI  

    Research shows that people tend to anthropomorphize AI, interacting with it in the same way they interact with other humans. This makes them susceptible to the same cognitive and social biases they experience in human interactions, while also underestimating AI's ability to extract and exploit information. Consumers need to be educated on how to approach AI with a more deliberative mindset: instead of instinctively applying human-centered metaphors, they should adopt new ways of understanding AI as a very sophisticated machine.
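
By way of illustration, and not drawn from the researchers' work, the short Python sketch below shows one way points 1 and 3 could be combined in practice: a simple re-ranker that trades predicted relevance against category novelty, with an "explore" setting exposed directly to the user. The Item, rerank and explore_weight names are hypothetical, and the scoring rule is a minimal stand-in for a production recommender.

```python
# Illustrative only: a diversity-aware re-ranker with a user-controlled
# "explore" setting. The names Item, rerank and explore_weight are
# hypothetical and do not come from the research discussed above.
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    relevance: float   # predicted match to the user's past behavior (0..1)
    category: str      # coarse topic label, e.g. "dance" or "cooking"


def rerank(candidates: list[Item], k: int, explore_weight: float) -> list[Item]:
    """Greedily pick k items, trading relevance against category novelty.

    explore_weight is exposed to the user: 0.0 reproduces a pure
    "more of the same" ranking, 1.0 strongly favors unseen categories.
    """
    selected: list[Item] = []
    seen_categories: set[str] = set()
    pool = list(candidates)

    while pool and len(selected) < k:
        def score(it: Item) -> float:
            # Unseen categories earn a novelty bonus, weighted by the user's setting.
            novelty = 0.0 if it.category in seen_categories else 1.0
            return (1 - explore_weight) * it.relevance + explore_weight * novelty

        best = max(pool, key=score)
        selected.append(best)
        seen_categories.add(best.category)
        pool.remove(best)

    return selected


if __name__ == "__main__":
    feed = [
        Item("a", 0.95, "dance"), Item("b", 0.93, "dance"),
        Item("c", 0.70, "cooking"), Item("d", 0.60, "science"),
    ]
    # A user who opts into discovery sees a broader mix of categories.
    print([it.item_id for it in rerank(feed, k=3, explore_weight=0.6)])
```

Setting explore_weight to 0 reproduces a pure "more of the same" ranking, while higher values surface categories the user has not yet seen, putting the choice between familiarity and discovery in the consumer's hands rather than the platform's.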

There are many business benefits of AI, but there is a growing need to explore how it can also enhance the human experience. Design, say the research team, is a choice. If consumer preferences are considered when developing the technology, future relationships between human and machine can be based on trust, satisfaction and mutual enrichment — not the pursuit of profit at any cost.

All written content is licensed under a Creative Commons Attribution 4.0 International license.