As AI systems increasingly make decisions with significant real-world consequences, lawmakers are grappling with the question of liability. Who is responsible if an autonomous vehicle crashes, or if an AI-powered medical tool gives incorrect advice? Europe has proposed new rules, including a dedicated AI Liability Directive, that would assign liability to manufacturers and deployers of high-risk AI systems, while the United States is still exploring its options.
The debate is particularly contentious because AI systems often operate as “black boxes,” making it difficult to trace errors back to specific causes: when a model fails, the fault may lie in biased training data, the model’s design, or the way it was deployed, and each possibility points to a different responsible party. Industry stakeholders are calling for clear, balanced rules that ensure accountability without stifling innovation.
The outcome of these discussions will shape trust in AI adoption. Without clear rules, companies face legal uncertainty that could slow deployment in sensitive sectors such as healthcare and transportation.