Rethinking UX for AI:
When Friction Becomes a Feature

For years, the golden rule of UX design has been clear: reduce friction at all costs. Designers have strived to create fluid, intuitive interfaces that users can navigate effortlessly. With the emergence of AI, however, we are entering uncharted territory where this fundamental principle is being questioned. AI systems are by nature complex and adaptive, and their decision-making processes often appear opaque, leading to what is commonly called the "black box" problem.


Users are left asking questions such as:

• "Why did the AI make this decision?"
• "Why didn't it do something else?"
• "When does it succeed or fail?"
• "How can I correct an error?"

These crucial questions highlight users' confusion when interacting with traditional AI models that offer little transparency.

The complexity of AI systems introduces new challenges that cannot simply be solved by simplifying the user journey. Instead, we face two key imperatives:

Transparency: How can we reveal the inner workings of AI "black boxes" to build user trust?

Intentional friction: When should we deliberately slow down interactions to ensure user safety and engagement?

Finding the right balance between these two elements is essential. A new approach, called explainable AI, has emerged to address these challenges. Unlike "black box" models, it brings clarity by giving users an understanding of why decisions are made. Users move from confusion to comprehension: they know why an AI succeeds or fails, and how far they can trust its results.


In the following sections, we will explore in detail how companies like Google, Perplexity, Spotify, Tesla, and OpenAI implement transparency and intentional friction in their AI products, redefining how users interact with these advanced systems.


Google Vertex AI: Decision-making with Heatmaps

Google's Vertex AI platform is an example of how transparency can be integrated into AI products. Vertex AI uses feature attribution and example-based explanations to provide users with a deeper understanding of why the AI arrived at a particular conclusion.

For example, in image classification tasks, the system generates a heatmap that highlights the areas of an image that most influenced the AI's decision. In the case of classifying a husky, the heatmap might focus on features like ear shape or markings around the eyes, helping the user understand why the AI classified the image in that way.
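While Vertex AI has its own built-in attribution methods, the core idea behind such a heatmap can be illustrated with a simple, model-agnostic occlusion sketch: blank out one patch of the image at a time and measure how much the model's score drops. The toy scoring function and patch size below are illustrative assumptions, not Vertex AI's implementation.

```python
# Occlusion-based attribution: a minimal sketch of how a saliency
# heatmap can be built for any scoring function.

def occlusion_heatmap(image, score_fn, patch=2, baseline=0.0):
    """Return a grid of importance values: how much the model's score
    drops when each patch of the image is blanked out."""
    h, w = len(image), len(image[0])
    base_score = score_fn(image)
    heat = [[0.0] * w for _ in range(h)]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = [row[:] for row in image]  # copy, then blank a patch
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    occluded[yy][xx] = baseline
            drop = base_score - score_fn(occluded)
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    heat[yy][xx] = drop
    return heat

# Toy "classifier": responds only to the top-left corner, mimicking a
# model that keys on ear shape in a husky photo (hypothetical).
def toy_score(img):
    return img[0][0] + img[0][1] + img[1][0] + img[1][1]

image = [[1.0] * 4 for _ in range(4)]
heat = occlusion_heatmap(image, toy_score)
# The top-left patch shows the largest score drop, i.e. highest importance.
```

Regions where blanking causes no score drop come out as zero importance, which is exactly the signal a reviewer would use to spot a model keying on irrelevant features.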

Use Case: Identifying Bias in Medical Imaging

A key use case for this level of transparency is medical imaging, where AI systems are increasingly used to assist in diagnosing diseases. With traditional AI models, a doctor might receive a diagnostic recommendation without fully understanding how the AI reached that conclusion. This opacity can erode trust, particularly when the diagnosis could be a matter of life and death.

Using Google Vertex AI's heatmap functionality, healthcare professionals can see which areas of an image (such as an X-ray or MRI scan) the AI targeted to formulate its recommendation. If the AI incorrectly identifies a benign spot as a malignant tumor, the heatmap can reveal whether the system was influenced by irrelevant features or distortions in the image. Medical teams can then adjust the model or data, ensuring that the AI's decision-making process aligns more closely with clinical expertise.

This transparency not only helps doctors validate the AI's diagnosis, but it also encourages more responsible use of AI in the healthcare sector, where a single error in judgment can have serious consequences. Additionally, it allows practitioners to detect biases in AI models, ensuring that diagnostic tools work accurately across diverse populations.

Perplexity: Citing Sources in Real Time

Perplexity takes a different route to transparency: every answer it generates carries numbered citations to the sources it drew on, displayed in real time. This approach serves several purposes:

  • Building Trust: By showing users where the information comes from, Perplexity allows them to verify AI claims, thus increasing confidence in the system's results.

  • Encouraging Critical Thinking: Visible sources prompt users to consider the credibility of information, rather than blindly accepting AI-generated content.

  • Facilitating In-depth Research: With easily accessible sources, users can dive deeper into topics that interest them.

When Perplexity responds to a query like "What is conversational search?", it doesn't just provide an answer. It goes further by:

  • Numbering each statement in its response, correlating with source references.

  • Listing the sources used (in this case, 5 sources including crobox.com, algolia.com, and others).

  • Providing "Related Searches" to encourage deeper exploration of the topic.


Spotify Blend: A New Way to Share and Explore Music Together

Use Case: Strengthening Social Connections Through AI

Spotify's Blend feature goes beyond being a shared playlist generator. It demonstrates how AI can foster social engagement through transparency. Friends and family can compare their music tastes and observe how Spotify's algorithm combines their preferences.


By visualizing taste alignment, Spotify creates an interactive experience that strengthens relationships through shared interests. It aligns with broader trends of collaborative AI, extending personalization from individuals to group experiences, building trust and engagement through transparency.


Spotify Blend exemplifies how AI transparency can be engaging, fun, and socially interactive. Blend merges two users' musical preferences into a shared playlist. Through a clear Taste Match score and visual representations of overlapping preferences, Spotify demystifies AI-driven personalization.

When users create a Blend, Spotify provides a score (such as 95% or 68%) indicating how closely their musical tastes align. The app offers additional context by highlighting specific artists or genres that contribute to the shared experience. This transparency transforms the typically opaque recommendation algorithm into something relatable and enjoyable.
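To make the idea concrete, a Taste Match-style percentage could be computed as the cosine similarity between two listeners' genre-weight profiles. Spotify's actual Blend algorithm is proprietary, so the profile format and scoring below are illustrative assumptions only.

```python
# A hedged sketch of a "Taste Match" score: cosine similarity between
# two genre -> listening-weight dictionaries, scaled to 0-100.

import math

def taste_match(profile_a, profile_b):
    """Return a 0-100 alignment score for two listening profiles."""
    genres = set(profile_a) | set(profile_b)
    dot = sum(profile_a.get(g, 0.0) * profile_b.get(g, 0.0) for g in genres)
    norm_a = math.sqrt(sum(v * v for v in profile_a.values()))
    norm_b = math.sqrt(sum(v * v for v in profile_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0
    return round(100 * dot / (norm_a * norm_b))

# Hypothetical listeners with partially overlapping tastes.
alice = {"indie": 0.6, "electronic": 0.3, "jazz": 0.1}
bob = {"indie": 0.5, "electronic": 0.4, "hip-hop": 0.1}
score = taste_match(alice, bob)  # high, but below 100
```

Surfacing which genres drive the dot product is also what lets the app explain the score by "highlighting specific artists or genres", rather than showing a bare percentage.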



ChatGPT: Multiple Perspectives and Breakdowns

In the example we see, the system provides two distinct answers to the question "Can you list the common attributes in that?". This approach serves several purposes:

User Choice and Feedback

It actively asks users, "Which response do you prefer?" This simple question transforms the interaction from a one-way delivery of information into a two-way dialogue.

The interface clearly states, "Your choice will help make ChatGPT better." This transparency about how user input is used builds trust and encourages engagement. Users aren't just consumers of AI output; they're active participants in improving the system.
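Each "Which response do you prefer?" click can be captured as a pairwise preference record, the raw material for preference-based fine-tuning techniques such as RLHF. The record shape below is a hedged sketch; OpenAI's actual logging format is not public.

```python
# A minimal sketch of turning a side-by-side choice into training data.

from dataclasses import dataclass

@dataclass
class PreferenceRecord:
    prompt: str
    chosen: str    # the response the user picked
    rejected: str  # the response the user passed over

def record_choice(prompt, response_a, response_b, picked_a):
    """Convert a user's side-by-side choice into a preference record."""
    if picked_a:
        return PreferenceRecord(prompt, chosen=response_a, rejected=response_b)
    return PreferenceRecord(prompt, chosen=response_b, rejected=response_a)

rec = record_choice(
    "Can you list the common attributes in that?",
    "1. Shape 2. Color",
    "Attributes include shape and color.",
    picked_a=True,
)
```

The key design point is that the signal is relative (chosen vs. rejected), not an absolute rating, which is far easier for users to provide reliably.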

Detailed Explanations

In both responses, ChatGPT provides numbered lists of observations or attributes. This structured format makes the AI's reasoning process more transparent and easier to follow. It's not just giving answers, but showing its work, much like a human explaining their thought process.

User Control

The interface also includes a "Stop generating" option, giving users control over the AI's output. This feature adds an element of intentional friction, allowing users to pause and reflect on the information they've received rather than being overwhelmed by continuous generation.

This approach to transparency goes beyond simply explaining the black box. Instead, it invites users to peek inside the box themselves, fostering deeper understanding of, and trust in, the AI system.

Creating Intentional Friction—Slowing Down to Improve Engagement

While transparency is crucial for building trust in AI systems, there's another, perhaps counterintuitive, approach that can enhance user engagement and safety: intentional friction. In the world of UX design, friction has traditionally been seen as the enemy of smooth user experiences. However, when it comes to AI, strategically placed friction can play a vital role in ensuring users remain actively engaged, make informed decisions, and avoid over-reliance on AI systems.

In rapidly evolving AI systems, users can easily fall into a passive role, blindly accepting AI outputs without critical thought. This can lead to several issues:

  1. Over-reliance: Users may begin to trust AI systems too much, even in situations where human judgment is crucial.

  2. Reduced Understanding: Without active engagement, users may fail to understand the limitations and potential biases of AI systems.

  3. Decreased Agency: Users might feel less in control of their interactions with technology, potentially leading to feelings of disempowerment.

Intentional friction addresses these concerns by creating moments of pause, reflection, and active decision-making within the AI user experience.

Let's explore how some leading tech companies have integrated intentional friction into their AI-powered products:


Tesla Autopilot: Driver Attention Warnings

Tesla's Autopilot system is a prime example of how intentional friction can enhance safety and engagement in AI-driven experiences. By implementing various alerts and visual feedback mechanisms, Tesla ensures that drivers remain active participants in the driving process, even when Autopilot is engaged.

Visual Feedback: The dashboard display provides a real-time visualization of what the AI system is detecting on the road. This includes other vehicles, lane markings, and potential obstacles. By presenting this information alongside the actual road view, Tesla encourages drivers to actively verify the system's perceptions, creating a moment of engagement that keeps the driver involved in the driving process.

Contextual Warnings: The system provides clear, contextual warnings when certain conditions might affect Autopilot's performance. For instance, the image shows a warning that "Full Self-Driving may be degraded" due to poor weather conditions. This friction point prompts the driver to be more vigilant and ready to take control if necessary.

Driver Monitoring: Tesla's AI goes beyond just watching the road; it also monitors the driver. As seen in the image, the system can detect driver drowsiness and suggest taking a break. This proactive intervention creates a critical moment of friction, encouraging the driver to reassess their ability to operate the vehicle safely.

Hands-on-Wheel Requests: Tesla's Autopilot is known to require periodic steering wheel input from the driver. If these requests are ignored, the system will escalate its warnings and eventually slow the car down.
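That escalation pattern can be sketched as a small state machine: each ignored hands-on-wheel request moves the system to a stronger intervention, and a detected response resets it. The specific stages and their ordering below are assumptions for illustration; Tesla's real thresholds and stages are not public.

```python
# A hypothetical escalation ladder for ignored attention prompts.
ESCALATION_STAGES = [
    "visual_reminder",    # flash a message on the dashboard
    "audible_alert",      # add a chime to the visual prompt
    "slow_vehicle",       # gradually reduce speed
    "disable_autopilot",  # require manual driving for the rest of the trip
]

class AttentionMonitor:
    def __init__(self):
        self.stage = 0  # start at the mildest stage

    def wheel_request_ignored(self):
        """Escalate to the next stage, capped at the strongest one."""
        self.stage = min(self.stage + 1, len(ESCALATION_STAGES) - 1)
        return ESCALATION_STAGES[self.stage]

    def hands_detected(self):
        """Driver responded: reset to the mildest stage."""
        self.stage = 0
        return ESCALATION_STAGES[self.stage]

monitor = AttentionMonitor()
monitor.wheel_request_ignored()  # escalates to "audible_alert"
monitor.wheel_request_ignored()  # escalates to "slow_vehicle"
monitor.hands_detected()         # resets to "visual_reminder"
```

Making the friction graduated rather than binary is the point: mild nudges preserve flow, while the harsher stages only trigger once the mild ones have demonstrably failed.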

These various forms of intentional friction serve crucial purposes:

  • They remind drivers that despite its advanced capabilities, Autopilot is an assistive technology, not a replacement for human attention.

  • They keep drivers engaged with the driving task, reducing the risk of over-reliance on the AI system.

  • They provide transparent communication about the AI's current state and limitations, building trust through honesty about what the system can and cannot do.

By strategically implementing these friction points, Tesla has created an AI-human interaction model that leverages the strengths of both artificial and human intelligence. This approach not only enhances safety but also helps users develop a more nuanced understanding of the AI system's capabilities and limitations, leading to a more responsible and effective use of the technology.



ChatGPT: Response Delay to Mimic Thoughtfulness

OpenAI's ChatGPT introduces a subtle yet effective form of intentional friction through its response delay mechanism. This design choice slows down the interaction, creating a more thoughtful and human-like conversational experience.

  1. Typing Indicator: As shown in the image, ChatGPT displays a typing indicator (the moving dots) while generating a response. This visual cue serves several purposes:

     ◦ It signals to the user that the AI is "thinking" about their query.
     ◦ It creates anticipation for the upcoming response.
     ◦ It mimics the natural pauses in human conversation, making the interaction feel more authentic.

  2. Gradual Text Appearance: Although not visible in this static image, ChatGPT typically reveals its responses gradually, as if it's typing in real time. This approach:

     ◦ Allows users to begin processing the response as it appears, rather than being overwhelmed by a sudden block of text.
     ◦ Encourages users to stay engaged and focused on the conversation.
     ◦ Provides natural breaking points for users to interrupt or redirect the conversation if needed.

  3. Variable Response Times: ChatGPT's delay isn't uniform; it varies based on the complexity of the query and the length of the response. This variability:

     ◦ Reinforces the illusion of "thinking time" for more complex questions.
     ◦ Manages user expectations about response speed and quality.
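These pacing mechanics can be sketched as a generator that pauses before and between words, with the initial "thinking" pause scaled to response length. This is an illustrative sketch of the pattern, not OpenAI's implementation.

```python
# Word-by-word streaming with a length-scaled "thinking" pause.

import time

def stream_response(text, base_delay=0.02):
    """Yield the response word by word, pausing between words so the
    text appears gradually rather than all at once."""
    words = text.split()
    # Longer responses get a slightly longer initial pause (capped at 1s).
    thinking_pause = min(1.0, 0.1 + 0.01 * len(words))
    time.sleep(thinking_pause)
    for word in words:
        time.sleep(base_delay)
        yield word

# Consume the stream; a real UI would render each chunk as it arrives.
chunks = list(stream_response("Intentional friction can improve engagement.",
                              base_delay=0))
```

Because the output is a generator, the consuming UI can also stop iterating at any point, which is exactly the hook a "Stop generating" control needs.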

By introducing this subtle friction, ChatGPT transforms what could be an instantaneous, mechanical exchange into a more measured, conversational experience. This approach helps prevent users from treating AI responses as immediate, infallible answers, instead encouraging a more thoughtful, interactive dialogue.

The success of this design choice highlights an important principle in AI UX: sometimes, slowing down the interaction can actually enhance the user experience, particularly when dealing with complex, language-based AI systems. It demonstrates that in the quest for efficient AI interactions, we shouldn't overlook the value of pacing and the familiar rhythms of human communication.


Conclusion: Finding Balance Between Transparency & Friction

As AI continues to integrate into our daily lives, UX designers are faced with the challenge of creating interfaces that are not only efficient but also responsible and enriching. The examples we have discussed demonstrate that transparency and intentional friction are not obstacles to good UX, but rather essential components in the AI era.

Looking towards the future, we can anticipate:

  • More nuanced transparency: AI systems will likely offer even more granular insights into their decision-making processes, adapted to different levels of user expertise.

  • Adaptive friction: Future AI interfaces could dynamically adjust the level of friction based on user familiarity, task complexity, and potential consequences of AI decisions.

  • Collaborative AI design: As users become more AI-literate, we could see more collaborative design processes where users have greater control over how they interact with and shape AI systems.

  • Ethical considerations: The balance between transparency and friction will play a crucial role in addressing ethical concerns around AI, such as bias, privacy, and user autonomy.

In conclusion, the future of AI UX design lies not in creating frictionless black boxes, but in developing transparent and engaging systems that respect human autonomy and promote meaningful collaboration between humans and AI. By thoughtfully implementing both transparency and intentional friction, we can create AI experiences that users not only trust but also enjoy engaging with, paving the way for a more harmonious integration of AI into our daily lives.


References

1. Tesla Model 3 Warning Alert
Screenshot from a Reddit post in the Tesla Model 3 subreddit. Reddit Link

2. Tesla Autopilot Success
Image from an article on Carscoops. Carscoops Link

3. Perplexity AI Conversational Interface
Image from an article in the Synthedia newsletter. Synthedia Link

4. What Does Your Blend Score Really Mean?
Image from a public post on Spotify’s Facebook account. Facebook Link

5. Explanation of the SHAP Model in AI
Screenshot from the Advancing Analytics blog. Advancing Analytics Link




© Seyon Sounthararajah 2025