Responsible AI Panelist

Shelley McKinley

GitHub

U.S.

Shelley McKinley is the chief legal officer at GitHub, where she leads teams responsible for trust and safety, social impact, developer policy, product & regulatory legal, commercial legal, and legal operations. McKinley joined Microsoft in 2005, supporting parts of the Developer Division.

Prior to joining GitHub, McKinley was the head of Microsoft’s Technology and Corporate Responsibility organization, where she oversaw a team that drove the use of technology to benefit society through priorities such as accessibility, environmental sustainability, broadband access, responsible AI, and justice reform. She has also led legal, corporate, and external affairs teams across Europe and worked on products in Microsoft’s gaming division, including developer communities and Xbox Live. Prior to joining Microsoft, McKinley was legal counsel at Wizards of the Coast, where she supported the Dungeons & Dragons and Magic: The Gathering brands.

McKinley is an outspoken proponent of mental health awareness and uses her voice to help address the associated bias and stigma by advocating for a culture of openness and inclusion. Outside of work, you can find her enjoying outdoor concerts, tearing up the slopes on her snowboard, and relaxing with her family and friends.

Voting History

Statement: Responsible AI governance requires questioning the necessity of overly humanlike agentic AI systems.
Response: Agree
“Any governance framework should include a deliberate analysis of whether features, such as the level of apparent ‘humanness’ in a tool, serve a practical and beneficial purpose. System design should examine whether benefits outweigh risks, document the decision-making process, and ensure transparency for end users.”
Statement: Holding agentic AI accountable for its decisions and actions requires new management approaches.
Response: Agree
“Since AI isn’t a person or legal entity, accountability for decisions and actions demands a broad, shared responsibility from the start: Agentic AI creators must embed things like transparency and human oversight during development, while users must deploy them responsibly, and monitor and document impacts.

Today’s workflows were not built with the speed and scale of AI in mind, so addressing gaps will require new governance models, clearer decision pathways, and redesigned processes that make it possible to trace, audit, and intervene in AI-driven actions.”
Statement: Effective human oversight reduces the need for explainability in AI systems.
Response: Neither agree nor disagree
“Oversight and explainability should be considered complements rather than substitutes. Developers, researchers, and policy makers must ask how innovations in explainability can improve human oversight and, conversely, let the need for human oversight and trust drive research in explainability and other forms of machine-assisted transparency and control.”
Statement: General-purpose AI producers (e.g., companies like DeepSeek, OpenAI, Anthropic) can be held accountable for how their products are developed.
Response: Strongly agree
“AI-producing companies must be held responsible for how their products are developed. While there’s a separate discussion warranted around AI applications developed for specific use cases, the need for regulating general-purpose AI products is no different from the need for regulating other product-producing companies and industries. All that said, policy makers need to ensure that regulation is proportional and places responsibility with producers of commercial products and services, not the developers building open-source componentry across the tech stack that may be integrated into these systems. If we fail to protect developers and end users, we will almost certainly see a decline in overall innovation.”