Öykü Işık is a professor of digital strategy and cybersecurity at IMD Business School in Switzerland. Her earlier work focused on business intelligence, analytics, and technology management. Her current research, teaching, and advisory work focus on digital resilience and responsible AI.
In addition to designing and delivering custom programs for top management teams, Işık leads two open programs at IMD: the Cybersecurity Strategy and Risk program and the GenAI for Business sprint. Her research has appeared in outlets such as MIT Sloan Management Review, Harvard Business Review, and European Business Review. Işık cochairs the Global Future Council on Cybersecurity at the World Economic Forum and contributes to its Bridging the Cyber Skills Gap initiative. She is also the research director for the Swiss chapter of the Global Council for Responsible AI and was recognized as a digital shaper in Switzerland in 2021 and 2023. She lived and worked in higher education in Belgium, the United States, and Turkey before moving to Switzerland.
Voting History
| Statement | Response |
|---|---|
| Responsible AI governance requires questioning the necessity of overly humanlike agentic AI systems. | Strongly agree |
“Responsible AI is not only about adding safeguards once a system is developed; it also requires pausing to ask whether certain applications should be pursued at all, such as overly humanlike agentic AI. Research on anthropomorphism shows that when machines mimic humans too closely, people overtrust them, disclose too much, or defer to them as if they had authority. Such designs can blur accountability, create moral confusion, and even trigger discomfort through the ‘uncanny valley’ effect. If the essence of responsible AI is to identify and mitigate risks, then questioning the necessity of building these systems belongs squarely within its scope.
Yet organizations often focus on how to implement AI safely while avoiding the harder question of whether some forms of AI are worth pursuing. Regulations like the European Union AI Act underscore the need for proportional oversight. Responsible AI governance must therefore go beyond technical compliance to cultivate the capacity to challenge assumptions, slow down when needed, and ensure that innovation strengthens accountability rather than erodes it.”