How can we hold tech accountable? Insights from Political Risk Analyst Stephanie Hare 

Watch our interview with Stephanie Hare on the EU AI Act, the balance between innovation and regulation, transparency, and the role of engineers and policymakers in shaping the future of new technologies and AI.

During Stephanie Hare’s last visit to Copenhagen, we met up to delve into the complexities of regulating technology and the ethical responsibilities of engineers and STEM professionals. Watch the interview with Stephanie Hare here: 

The EU AI Act: innovation and regulation 

When asked about the resistance the EU AI Act has faced, Stephanie highlights a fundamental truth: companies will always resist government regulation, viewing it as an inhibitor of innovation. However, she emphasizes that regulations like the GDPR and the Digital Services Act bring clarity to the market, fostering better functionality and innovation. According to Stephanie, clear legislation delineates what is permissible, enabling companies to operate within defined boundaries.

Transparency, accountability, and the role of regulators 

When discussing the focus on regulating new technologies and AI, Stephanie emphasizes two essential pillars: transparency and accountability. To ensure responsible AI development and usage, we must understand how these technologies work and explain them to the public. Transparency is crucial for building trust. 

However, transparency alone is insufficient without accountability. Regulators must take meaningful action beyond issuing fines. Stephanie advocates for regulators enforcing restrictions that impact the business models of companies found engaging in unethical practices.

Risks of inadequate AI regulation 

While the “Terminator scenario” often captures the public imagination, Stephanie is more concerned about unintended consequences. Rushing into new technology without considering second-, third-, or fourth-order effects can lead to unforeseen problems. What worries her is the complex interplay of different forces and their ripple effects across society.

To mitigate these risks, Stephanie suggests engaging in ongoing conversations. Openness and transparency in the field of AI are increasing, with research papers, conferences, and interviews readily accessible. The more we engage in these discussions, the better equipped we are to foresee potential issues and take proactive measures.

Looking ahead 

In conclusion, the interview with Stephanie Hare underscores the importance of proactive regulation, transparency, accountability, and the active involvement of engineers and STEM professionals in shaping the future of new technologies and AI. Policymakers, often without engineering backgrounds, need the insights and expertise of engineers to address the challenges and opportunities presented by emerging technologies. 

What are your thoughts on the ethical principles that should guide technological innovation?  

Join the conversation on LinkedIn using the hashtag #ListenToEngineers