Apr 15, 2025

I think that's very well articulated. In a way, this is the model I think is essentially inevitable: Level 1 AI support handles the low-level tasks, and then it has to decide when something needs to be escalated to the more flexible human team. Allowing the caller to escalate themselves also seems like good design and good customer service - I imagine almost all competent systems would be built the way you're describing, at least as long as human service members are better than the AI at the edge cases.

You already stepped the conversation back from the details of the California rules specifically, so I don't really have anything to add to your thoughts on the original conflict I was discussing. But just for the sake of it, I'll say again that my questions are about how to decide when a company is at fault for service differences between AI and human systems, not about whether the customer should be able to request to talk to another person. So long as human service is better than the AI, it feels inherently like a good idea for a company to let customers ask for a human whenever they want.

Like you said, humans will be needed for the percentage of cases the AI can't handle. That percentage might be quite large now and will presumably shrink over time; until it reaches zero, that will remain true.

I'm speculating about the times when it's reversed - when the really hard problems are resolved faster and better by the AI than by a human.

Anyway, thank you for your insights. They were interesting.

Written by Aaron Stanton

Aaron is an author, founder & investor in AI & XR. His work has been covered by CNN, WSJ, NYT, Forbes, TechCrunch & more. Previous exits to Apple & Qualcomm.
