Kirtan Padh is a machine learning researcher developing methods for ethical and reliable AI, with applications in studying potential bias in existing systems. He is pursuing a PhD in computer science at the Technical University of Munich and is also a board member of the NGO AI Transparency Institute in Switzerland. Padh is keen to contribute his technical expertise to AI policy development and has represented the institute in various European working groups to that end.
Voting History
Statement | Response | Comment
---|---|---
Effective human oversight reduces the need for explainability in AI systems. | Disagree | “Explainability and effective human oversight go hand in hand. In most cases, it would be hard to have one without the other, and having one does not exclude the other. For instance, doctors are well known to trust explainable support systems more than black-box systems. This is an example where both human oversight and explainability are apparently needed for a truly trustworthy system. There may be a few cases where effective human oversight reduces the need for explainability, but in general it is unhelpful to think of these as mutually exclusive requirements rather than as two important and related goals.”
General-purpose AI producers (e.g., companies like DeepSeek, OpenAI, Anthropic) can be held accountable for how their products are developed. | Strongly agree | “Accountability is an important consideration for a rapidly advancing technology such as GPAI. Beyond legal liability, accountability also entails the ability to hold companies responsible for their products. Developers should be answerable for how their products are created, a standard that applies to most consumer goods and should equally extend to GPAI products.”