Ben Dias is the director of data science and analytics at EasyJet. He has 20 years of industry experience, with previous roles at Royal Mail, Tesco, and Unilever. His current focus is building and leading data teams and applying the lean startup approach to AI within large organizations. Dias actively engages with the U.K. mathematics community to help inspire the next generation of mathematicians and AI professionals. He also strives to ensure that AI solutions drive more equity and inclusion rather than exacerbating existing biases and inequalities. Dias holds a doctorate in computer vision and a master’s degree in mathematics and astronomy, both from University College London.
Voting History
| Statement | Response |
| --- | --- |
| Companies should be required to make disclosures about the use of AI in their products and offerings to customers. | Strongly agree. "Given the hype and fear equally prevalent in the media and among the general public, it is critical for companies to disclose whether and how they use AI in their products and offerings. The biggest challenge companies will face is explaining, in customer-friendly language, what type of AI they are using and for what. The main concern most customers will have is how their personal data will be used by the AI and, in particular, whether the AI will use their personal data to train itself, so addressing this clearly and up front will be critical. Customers will also want to know how the AI is governed, so it will be important to let them know whether there is a human in the loop or how a human can be contacted to challenge any decisions or actions taken by the AI. If the AI used is both adaptive and autonomous, companies should be transparent about the fact that some outcomes may seem counterintuitive and may even sometimes be wrong. In these cases, the company should also aim to provide a customer-friendly explanation for each key customer-facing output while always providing an opportunity and a mechanism to challenge the output if required." |
| Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. | Neither agree nor disagree. "Some elements of the EU AI Act will come into effect at various stages, with bans on prohibited practices applying within six months. Therefore, some basic obligations will start to apply in 2024, which some organizations may not be prepared for. While most of the prohibitions outlined in the act may not be relevant to most organizations, the rules on AI used in recruitment, in monitoring employees' well-being, and in biometrics will apply to most of them. Most organizations will be considered 'deployers,' which generally incurs fewer obligations unless they deploy a high-risk AI system. However, if an organization modifies a purchased high-risk AI system, it may take on the role of a provider, which carries significantly more responsibilities. AI system suppliers may also amend their contracts, potentially shifting some of their compliance responsibility and liability to their customers. Therefore, all organizations will need to establish processes to ascertain whether any of their AI systems will be subject to this regulation. Despite this new regulation, organizations must also continue to comply with existing laws, such as GDPR, consumer legislation, and competition law, in their routine operations." |