The rapid adoption of AI, including generative AI, has exposed us to the technology's risks and to the need for governance and regulation. The key principles under discussion in several forums center on frameworks that emphasize transparency, accountability, and oversight. Many questions remain open. Should we rely on national or international standards and regulations, or should companies develop their own? How can we craft ethical standards that allow AI practitioners to innovate while ensuring safety risks are managed?