Getting AI Right: Onur Bakiner on Governance, Ethics, and the Public Good
Onur Bakiner explores what it means to get AI right, reflecting on governance, ethics, and leadership in service of the public good.
For Onur Bakiner, professor of political science and director of the Technology Ethics Initiative at Seattle University, conversations surrounding artificial intelligence begin with people.
A political scientist by training, Bakiner has spent much of his academic career studying the defense of fundamental rights and the conditions that make human dignity possible in practice. As AI systems have moved rapidly from the margins into everyday life, his focus has expanded to include a pressing question: How do we govern powerful technologies in ways that serve the public good rather than undermine it?
That question is at the heart of his upcoming book, Governing AI: A Primer, and of his work at Seattle University, where he teaches courses on technology governance.
From Human Rights to AI Governance
Bakiner’s path into the world of AI emerged organically from his long-standing interest in justice and accountability.
“My research has always been on the defense of fundamental rights, and to be more specific, how respect for human rights can be accomplished in today's world,” he shares. “Technology offers some optimism, but also a lot of risks and harms when it comes to fundamental rights.”
Those risks became more tangible when Bakiner decided, years after completing his PhD in political science at Yale University, to pursue a master’s degree in computer science at Seattle University. Immersing himself in technical training gave him a deeper appreciation for both the promise and the dangers of contemporary technologies.
“Computer science technology is amazing,” he says. “But it can also be dangerous.”
Creating Dialogue Across Sectors
As director of Seattle University’s Technology Ethics Initiative, Bakiner works to bring together groups that too often operate in isolation: academia, policymakers, and nonprofit organizations.
“Whatever solutions we find to technological problems will come through collaboration between key stakeholders,” he explains.
That emphasis on dialogue shapes both his teaching and his own learning. Rather than treating AI governance as a purely technical or legal challenge, Bakiner frames it as a shared civic responsibility.
What It Means to Get AI Right
Governing AI: A Primer is deliberately framed as a book about getting AI right. For Bakiner, that phrase has little to do with perfection.
“It means harnessing the power of artificial intelligence while eliminating or mitigating the risks and harms that may arise from it,” he shares.
Potential harms from unregulated AI use include:
- Bias and discrimination
- Unintentional misinformation and intentional disinformation
- Environmental harms
- Copyright violations
- Surveillance
- Labor concerns
Bakiner encourages others to acknowledge these risks while focusing on genuinely useful applications.
“Getting AI right means finding the right use cases—ones that are actually useful,” he emphasizes. “It means deepening our development and use of AI by paying attention to these use cases and not giving in to hype.”
Governance Beyond Self-Regulation
To address these risks, Bakiner argues that governance must operate on multiple levels: safe use of AI requires structures that hold up under real-world pressure. He outlines three broad approaches to AI governance:
- Technical solutions: engineering practices, such as testing and monitoring, focused on safety and harm mitigation.
- Business self-regulation: internal policies or voluntary organizational commitments to ethical AI use.
- Legal regulation: public laws and regulations that set enforceable, baseline requirements for AI use across industries.
While each approach plays an important role, he believes legal frameworks currently offer the strongest path forward.
“Legal regulation shows the most promise because the other two models rely on companies being willing and able to implement solutions at scale,” Bakiner notes. “Business self-regulation will be a lot more effective if businesses agree to incorporate safeguards in a systematic way, even at the expense of revenues at times.”
He points to global regulatory efforts as instructive, though far from flawless. There is no single model to replicate, he notes, but there are lessons worth stitching together.
Teaching AI With Reflection and Discernment
Bakiner’s approach to AI governance is profoundly shaped by Seattle University’s Jesuit mission, which emphasizes reflection and action in service of justice.
“Reflection and discernment are very much baked into the educational model offered at Seattle University,” he shares.
Across the university’s programs, those values show up in concrete ways: new AI literacy modules and academic offerings for students across disciplines, including a master’s program in artificial intelligence, integrate technical training with ethics and governance.
“Seattle University tries to bring its Ignatian pedagogical tradition to a new global reality around technology with these new course offerings and its approach to technology development and use,” Bakiner shares.
In the classroom, Bakiner also rethinks assessment and academic integrity in light of AI tools. He emphasizes dialogue with students and a renewed focus on critical thinking and synthesis.
“AI is forcing us to rethink what we want our students to get out of education,” he notes. “Being able to synthesize information has become a lot more central than actually memorizing that information.”
Public Participation and Healthy Skepticism
One of Bakiner’s core messages, both in his teaching and in Governing AI: A Primer, is that people should not see themselves as passive recipients of technology.
“Wherever they are, people should participate more in decisions about technologies that affect their lives,” he emphasizes. “Through nonprofits, unions, community organizations, or government.”
At the same time, he encourages what he calls healthy skepticism: a clear-eyed understanding that governance efforts matter, even when progress is unsteady.
What public participation and healthy skepticism can look like in practice:
- Engaging with new technologies through trusted institutions
- Asking informed questions
- Participating in public processes
- Resisting “hype” surrounding AI
By remaining engaged in these conversations, whether through public discussions or AI-related initiatives, individuals can develop a deeper awareness of both the useful applications of AI and its potential risks and harms.
The Future of AI Governance
Bakiner hopes his work will help Seattle University remain a place for engaging with difficult questions about technology—thoughtfully, and collectively.
“Nobody has the right answer,” he says. “We should be the living example of a community that keeps asking questions about technology and keeps adopting technology in effective, meaningful, mindful, ethical ways.”
For leaders navigating a world shaped by AI, that mindset may be the most important governance tool of all.
From AI Governance to Executive Leadership
Bakiner’s message is clear and hopeful: getting AI right starts with people. Responsible AI governance involves accountability and a commitment to collective needs as technologies continue to evolve.
With tools like AI becoming integral to organizational decision-making, leaders must be prepared with the ethical frameworks and awareness to navigate those changes responsibly. As Bakiner emphasizes, meaningful AI governance will rely on collaboration and diverse community perspectives to ensure the public has a say in decisions that affect their lives.
From pushing legal regulation forward and implementing technical safeguards to engaging in ethical reflection, organizations will face increasingly complex choices. Lasting progress will depend on communities and institutions willing to adapt and act with intention.
Seattle University’s Leadership Executive MBA helps leaders strengthen the analytical and integrative decision-making skills needed to evaluate emerging tools like AI. Our mission-driven approach develops strategic, values-driven leadership for real-world impact.
Friday, April 17, 2026