The rapid adoption of AI, including generative AI, has exposed the risks associated with this technology and underscored the need for governance and regulation. Key principles under discussion in several forums center on frameworks that emphasize transparency, accountability, and oversight. Many questions remain open: Should we rely on national or international standards and regulations, or should companies develop their own? How can we craft ethical standards that allow AI participants to innovate while ensuring safety risks are managed?
S&P Global Market Intelligence
Principal Research Analyst
Dartmouth College
Professor of Engineering Innovation
AVEVA Software LLC
Senior Vice President, Americas
Argonne National Lab / University of Chicago
Associate Laboratory Director for the Computing, Environment and Life Sciences (CELS) Directorate and Argonne Distinguished Fellow