Aaron Stanton
1 min read · 6 days ago

Makes sense to me. I think being informed about what someone is interacting with is essential for them to be able to make an informed decision. Aside from some specifics about whether it should be a blanket policy for all places AI can be used, I don't see a lot of logical conflicts with labeling and informing.

I think the logical conflict in this case comes from making a company potentially liable for having an AI system that's substantially better than a human counterpart, in all circumstances. If it's not possible for a company to comply, the only solution is to reduce or eliminate the AI entirely, even if it could be hugely beneficial to a lot of people.

Unfortunately, I don't have any solutions. I believe consumer protections are needed, but I also believe that a law like this, without a lot of nuance and understanding, could effectively block the deployment of systems that perform better than any human could alone.

Written by Aaron Stanton

Aaron is an author, founder & investor in AI & XR. His work has been covered by CNN, WSJ, NYT, Forbes, TechCrunch & more. Previous exits to Apple & Qualcomm.
