How Microsoft scales up safety and testing for Gen AI

Sarah Bird, Chief Product Officer for Responsible AI at Microsoft, joins us today. We talk about the testing and evaluation methods Microsoft uses to ensure that generative AI, including large language models and image generation, is used safely. We discuss the specific risks and challenges posed by generative AI, the need to balance security concerns with fairness, applying adaptive and layered defense strategies to respond quickly to unexpected AI behaviors, automating AI safety testing and evaluation to complement human judgment, and putting red teaming and governance into practice. Sarah also shares the lessons learned from Microsoft's "Tay" and "Bing Chat" incidents, along with her perspective on the rapidly evolving field of GenAI.
