AI in the Open: Navigating Responsible Innovation

In the elegant setting of the Shangri La Museum in Honolulu, an expert panel convened to grapple with one of the most pressing issues of our time: how to responsibly harness the vast potential of artificial intelligence (AI) for the benefit of society. The discussion, titled “AI in the Open: Responsible Innovation for Access, Accountability, and Discovery,” brought together thought leaders from academia, industry, education, and policy.

Moderator Alondra Nelson, Harold F. Linder Professor at the Institute for Advanced Study, framed the critical questions at hand. “Any benefits that we want to have require us to focus on risks and to do that across sectors and across differences,” she said. “You don’t have to have a computer science PhD to be able to have an opinion about, ‘Hey, how AI is going to transform our society.’”

Opportunities and Challenges

The panelists discussed the immense opportunities and challenges posed by AI. Deep Ganguli, a research scientist at Anthropic, expressed optimism about AI’s potential to accelerate scientific research and help solve major challenges like disease and clean energy. However, he also voiced concerns about the technology’s impact on truth and jobs.

“Now we’re moving to a world where truth is getting harder to maybe define. And so this is a big concern of mine,” Ganguli said. “And another concern I have is kind of the impact on jobs and labor, writ large… There’s a certain dignity we find from work. It gives us a sense of purpose, a sense of meaning. And I’m, you know, I’m concerned about what happens if we take that away, and how we will react as a society.”

Balancing Access and Accountability

A central tension explored was how to balance open access to AI tools, crucial for spurring innovation, with necessary safeguards.

Ganguli explained Anthropic’s approach: “What we want is to sort of make it so that the model helps you do good things and make it so it’s very hard to use the model to do bad things. And we spend a lot of time on sort of both fronts of this… We’re doing this all in house, we’re doing it all prior to deployment, we’re doing it continuously throughout development.”

He emphasized the need for society to grapple with these questions together. “These are questions no one company can or should have to answer, this is kind of going back to governance structures, like this is something that we as a society have to have to work on together.”

Shaping AI Through Policy

Marietje Schaake, international policy director at Stanford’s Cyber Policy Center, underscored the need for governance frameworks to ensure AI accountability.

“If you think about AI through the lens of power, where do we need correction in the interest of democracy is a key question,” Schaake said. She advocated for adaptive regulation that can keep pace with AI’s rapid evolution.

“Ideally, you would in the public interest want to have scrutiny over these models, learn from them,” Schaake argued. “Right now, there’s really hardly any regulation of when it goes from a research phase to an availability phase where everybody can work with it.”

Education in the Age of AI

Educators on the panel highlighted the need to prepare students and ensure equitable access to AI’s benefits.

Gabriel Yanagihara, a teacher at ‘Iolani School, shared his experiences. “I’m really starting to see it,” he said. “A willingness to pick up and learn.”

Yanagihara noted the risk of a widening divide between “those who have access to the AI tools and those who don’t.” At the same time, he expressed hope that expanding access to smartphones and AI could help “the bottom 50% of students get caught up.”

Lance Askildson, Provost of Chaminade University of Honolulu, emphasized the importance of giving people “control over their data” and “control over the foundational technological tools.” He shared how Chaminade is working to provide AI education to underserved communities.

“That message of empowerment, saying ‘Listen, your opinion as a fisherman, your opinion as a person working in textiles matters.’ We want to teach you how you can use that technology to benefit you, your family, your communities,” Askildson said.

Global Implications

Helen Toner, Director of Strategy at Georgetown’s Center for Security and Emerging Technology, highlighted the global nature of AI development.

This “presents an opportunity for collaboration, but also underscores the need for a shared understanding of responsible development and deployment practices that transcend national borders,” Toner said.

“No one country can go it alone in shaping the future of AI,” she emphasized. “We need international forums to align on common principles, share best practices, and coordinate on issues like data sharing and privacy protection.”

Engaging the Public

The Q&A portion surfaced key issues around broadening access, slowing development, and participation in shaping AI’s future.

Kevin Lim, who runs a local nonprofit, posed a question about slowing down AI development. “Is it possible to slow down?” he asked. “From those who are maybe with more technological bent, like is there a mechanism by which we can actually like decelerate? And to those who want to push on that question, what would the benefits be of slowing down?”

Ganguli shared that Anthropic has a “responsible scaling plan” that defines risk levels at which they would pause development. “After we pass a certain risk threshold, we will just stop. We will just stop until we figure out what the heck is going on. We won’t deploy it,” he said.

Toner expressed uncertainty about the feasibility of slowing development in the short-term but suggested that with enough public pressure, governance measures could be put in place. “Do I expect any kind of dramatic slowing down in the next six or 12 months? No,” she said. “Do I think that we have more levers to pull, including as a democratic society that can make this a top issue and a top concern? If things do kind of get more radical? Yeah, I do think there’s more that we can do there.”

Schaake emphasized the importance of using any slowdown to strengthen oversight and public understanding. “The question is, not only can we slow it down, but what will we use more time for?” she said. “Ideally, you would in the public interest want to have scrutiny over these models, learn from them.”

Another attendee asked how to ensure communities of color are included. Ganguli shared how Anthropic opened up their AI ethics principles for public input. Askildson reiterated the importance of empowering disenfranchised communities.

“If you don’t understand the technology, you are easily manipulated by that technology,” Askildson warned. “Giving people control over their data, giving them control over the foundational technological tools is very important.”

Responding to an audience question about the possibility of eliminating bias from AI, the panelists acknowledged the challenges.

“We have to be real about the fact that people will disagree about what fairness is,” Schaake said. “If we could reflect some of those realities better in how we think about AI and what needs to happen in terms of governance, then we are more fair to what is and is not possible.”

Yanagihara suggested that surfacing biases could enable important conversations. “The fact that there is the biases built in, you know, we can’t really fix it, but we might as well use it as a tool to continue that conversation,” he said.

Panel Transcript (Created with Otter.AI):

Ben Weitz
Welcome to this very special event, “AI In the Open: Responsible Innovation for Access, Accountability and Discovery.” So, for those of you who are new to Shangri La, we are a center of the Doris Duke Foundation, where our mission is to build a more creative, equitable, and sustainable future. And I couldn’t think of a more timely and relevant topic than how we can use technology responsibly, and in a safe way. You know, these evolving AI models are coming quickly, and how they’re being made available to the public is what we’ll be talking about tonight. This evening’s discussion will be moderated by the Institute for Advanced Study’s Harold F. Linder Professor of Social Science, scholar Dr. Alondra Nelson, whose background includes being deputy assistant to President Biden and acting director of the White House Office of Science and Technology Policy. She is also a distinguished senior fellow at the Center for American Progress, and was previously president and CEO of the Social Science Research Council and the inaugural Dean of Social Science at Columbia University. Dr. Nelson’s essays, reviews, and commentary have appeared in The New York Times, The Washington Post, The Wall Street Journal, and Science, among other venues. She was named one of the 10 people who shaped science by Nature in 2022. And just last year, in 2023, she was named to the inaugural TIME 100 list of influential people in the field of artificial intelligence. Please join me in the great privilege of providing a big aloha and warm welcome to Dr. Alondra Nelson and our expert panel. Mahalo.

Alondra Nelson
So good afternoon, Mahalo, Ben, for that warm introduction. We’re so delighted to be here. Thank you to the Doris Duke Foundation, to its president, Sam Gill, to Ben, the incredible team here at Shangri La, who have taken such incredible great care of us over the last two days. Aloha to all of you, and thank you for being here. So this panel was part of the work of a group of us who work together, it’s called the AI Policy and Governance working group. And we represent like industry, folks from academia, people from civil society, we have literally myriad perspectives on AI, we’ve got a lot of different ideological perspectives, we come from technical backgrounds, and social science backgrounds and pedagogical backgrounds and everything in between. But we are all committed to working together across this diversity and across these differences to do what we must and what we can to enable any of the benefits that we’re hearing that AI might offer us to actually be possible. And to make that possible means obviously doing a lot of work around mitigating risks, some of which are present in life already, and some of which may be coming on the horizon. Any benefits that we want to have require us to focus on risks and to do that across sectors and across differences. And so that’s what we are attempting to do.

So Sam Gill is a friend of mine, the president of the Doris Duke Foundation. You know, he told me that one of his aspirations for Shangri La was to really make it a global center for convening on some of the most important issues of the day. And so we’ve been so delighted to be able to come as a working group, and we have colleagues in the working group from the UK, from the United States, and elsewhere, to really convene here to think about this issue of global significance. So we have different themes for our working group meetings; we’ve been talking about something very wonky and technical, open source and open model AI, over the last couple of days and what to do about that. But we want to sort of have that conversation.

Part of our commitment in the working group is to speak to the general public about these issues, because we are facing kind of profound transformations in society. And I personally, and I think many of us in the working group, are deeply committed to the fact that you don’t have to have a computer science PhD to be able to have an opinion about, hey, how AI is going to transform our society. And so I’m so glad to see all of you here from the community, and I really invite you to bring questions when we get to the Q&A part of the conversation, because this is going to impact all of our lives. It’s impacting all our lives, and you have absolutely a right to have an opinion about what’s happening, and to play a role in shaping it. So, to make the questions that we’ve been talking about, which are pretty wonky and technical, more broad: they’re questions about access to AI tools and systems, and who should have access to them. When, if ever, should access to these systems be limited? And how might broader access be a good thing in some instances, and maybe not, you know, a healthy or responsible thing for innovation or society in other instances? How might broader access to AI exacerbate risks and harms, including the circulation of abusive images, threats to election integrity as we go into this important election year, and bias?

So there are these questions, and many others, that remain for us to answer. And, you know, part of what we’re going to try to do tonight is to sort of have a conversation with you about some of these questions: you know, what do we need to do if we’re truly going to have responsible innovation? So this is a big assignment; it’s going to require the public, it’s going to require lots of contributions and lots of people weighing in. And so I’m delighted this afternoon to be in conversation with five really extraordinary thinkers and doers, including members of this local Honolulu community, about these issues. So I’m going to introduce them briefly, invite them to come up, and then we’ll have a conversation, and we’ll leave 15 minutes or so for questions at the end. And we will also be around to have conversation after the panel is over. I might also ask the people in the working group to just raise their hands so folks know who you are. So if you have questions about AI afterwards, ask any of these incredibly brilliant and committed people, who’ve worked in policy and who work at important companies, including Google DeepMind, Anthropic, and others.

Okay, so, Deep Ganguli is a research scientist at Anthropic, a leading AI company. Prior to joining Anthropic, he was the research director at the Stanford Institute for Human-Centered Artificial Intelligence. He has a PhD in computational neuroscience from New York University, and a BS in Electrical Engineering and Computer Science from Berkeley, Deep.

Born and raised in Hawaii, Gabriel Yanagihara is a lifelong educator with expertise in AI, computer science, and creative media. He currently teaches at the ‘Iolani School, where his focus includes AI. Beyond his teaching role, he’s a strong advocate for professional development and community outreach. He frequently delivers keynotes, conducts teacher training, and leads professional development workshops on various topics in the fields of emerging technology. Gabriel.

Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown’s Center for Security and Emerging Technology. She previously worked as a senior research analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy. Between working at Open Philanthropy and before joining CSET at Georgetown, she lived in Beijing, studying the Chinese AI ecosystem as a research affiliate of Oxford University’s Centre for the Governance of AI. Helen.

Lance Askildson is provost and senior vice president and a tenured professor of linguistics at Chaminade University of Honolulu. Previously, Dr. Askildson was an associate professor, founding director of the Center for the Study of Languages and Cultures, and assistant provost for internationalization at the University of Notre Dame. He’s a scholar of language acquisition, language learning technology, and linguistic processing, and the former secretary of the International Association for Language Learning Technology. He is also a United Nations fellow and currently serves as the chair of the United Nations Institute for Training and Research Regional Center in the Pacific.

Marietje Schaake is international policy director at Stanford University’s Cyber Policy Center and also an international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. Between 2009 and 2019, she served as a member of the European Parliament for the Dutch liberal democratic party, where she focused on trade, foreign affairs, and technology policy. She writes a monthly column for the Financial Times and serves on the United Nations AI advisory body. Marietje, thank you.

Thank you all so much. Okay. So I guess the first question to all of you, just a good opening question, would be around what you think the biggest challenges and opportunities are with regards to AI, for the present and in the future, and particularly as these tools and systems and models become kind of widely used in the world. So why don’t we start with you?

Deep Ganguli
Sure. I’ll start off with opportunities. I’m super bullish on AI enabling and accelerating basic science research, especially in the service of, like, curing, managing, or preventing a lot of diseases. I’m very impressed by, like, advances from Google DeepMind in solving things like the protein folding problem, which sounds esoteric, but you can translate that into really accelerating the design of drugs to, like, kind of target diseases and things like this. This will benefit all of us if it works. And similarly, there are problems in, like, designing new materials to get us to more of a clean energy future; this sort of seems within scope. And if you’d asked me this five years ago, I would have never predicted I would have said the words I’m saying, but the advances have just been so interesting and powerful. In terms of some of the challenges: with these new technologies, we’re also able to sort of generate realistic images and realistic videos. And so now we’re moving to a world where truth is getting harder to maybe define. And so this is a big concern of mine. And another concern I have is kind of the impact on jobs and labor, writ large. I think, you know, I’m not sure if AI will, like, replace jobs, but I am sure that there’s a certain dignity we find from work. It gives us a sense of purpose, a sense of meaning. And I’m, you know, I’m concerned about what happens if we take that away, and how we will react as a society. And I think, in terms of, like, a big opportunity here: there’s a big opportunity with this new technology to learn from the mistakes we made with previous disruptive technology in recent memory, like social media. I think this happened in an era where things were underregulated and where we weren’t super critical of the companies and the power structures at play. We’re now creating a new disruptive technology, and we’re all of a sudden a lot more critical.
We’re thinking about regulation, we’re thinking about policy. And this is like, very exciting, because I think this is the right way. And so there’s a huge opportunity here to learn from the mistakes of the past and apply them to the future of these new technologies.

Alondra Nelson
Thanks. Gabriel.

Gabriel Yanagihara
Yeah, so, in terms of, like, challenges and opportunities coming from the classroom, where every day has many challenges and opportunities: we’ve spent decades in the education world trying to deal with, you know, the digital divide, the students who have access to technology and those who don’t. But now we’re entering a field of those who have and are willing to use AI, and those who aren’t, right, those who have the wherewithal and have the access to it. So I’m seeing it in my classroom as a force multiplier. Take a student who’s getting B’s, and giving them the extra support and questions answered that they need can excel them as far as it can go. But even at a 10x multiplier, right? You take it to a student who isn’t willing to use those tools, or is scared to use them, or doesn’t have the support structure or technology to use it. Ten times zero is still zero. So we’re gonna see a, I’m really starting to see it, and I’m a little worried about how far it’s going to go between those who have access to the AI tools and those who don’t. Right, so the same challenge we’ve been having for decades, but continuing on. But on the benefit side, right, we have access to Android smartphones and the technology for less than $50. Right. So the barrier that we’ve fought for decades, it’s much lower now. So, the moment that kids can get to it, what happens when you have this post-information scarcity? What happens to our business models, as private schools, as private institutions, where every kid in the world, regardless of your socioeconomic background, with a $50 iPhone from, like, 10 years ago, can have access to all these AI language models to help them catch up to the top of the bell curve, right? So now the bell curve starts at that halfway mark? We have no idea what’s going to happen, right? But it’s really exciting to see that, right?
Like, I’m totally okay with whatever consequences come down the line if I can help the bottom 50% of students get caught up to that phase, that era. But then we’re gonna be entering the workforce, right? Like, my productivity as a teacher has, you know, my boss is in the room somewhere. So like, it’s gone up so much. Like, I have so much free time as a teacher, and you will never hear a teacher say this. But what I’m seeing now is, what is society’s, and this is a question I’m giving you guys, these questions, just so we can talk about it later one on one, what is society’s expectation of productivity for an individual worker, for an individual student, who are already overworked and studied to the max, when there are these tools that can 10x your productivity, right? Like, one of the good examples I use when I’m talking to my fellow teachers: like, you know, those of you who were around before email, if you needed to answer a parent email, you’d type a response and spend a day or two and get back to it. And now we get emailed outside of work hours all the time. What’s going to happen when your bosses and parents and society realize you can write a 200-page book every five minutes? Like, this is gonna be a really exciting time that we’re gonna have to deal with. So yeah, there’s a couple of questions I have mulling around in my head.

Alondra Nelson
Thank you, Helen. Challenges and Opportunities.

Helen Toner
I mean, so many. There are so many. Maybe I’ll talk about, you know, challenges and opportunities through two lenses. The first lens, I think, is some of the things that Deep was talking about, you know, ways we can use AI. And I mean, all of my thinking about AI starts with the fact that this is a general-purpose technology. This is not, you know, just a car that can drive or a light bulb that can illuminate a room; it’s much more like electricity, it can do all kinds of things across all kinds of sectors. And that just inherently comes with an enormous amount of different opportunities for different ways that you can use it to, you know, increase your productivity, to solve scientific challenges, to pursue your pet passion, you know, all kinds of things. And likewise, all kinds of challenges in terms of ways that AI systems could be used for ill, ways that, you know, they could destabilize democracy, destabilize, you know, I work in national security, so destabilize kind of military relations. So in terms of use cases, there are sort of endless potential benefits and challenges. And then I think there are also, you know, lots of different benefits and challenges to think about, or opportunities and challenges, in terms of who is just making decisions about these systems, who is benefiting from them. Something we’ve talked a lot about in the working group this week is, you know, on the one hand, if you have too much concentration of power, a really small number of companies building the most advanced systems and getting to make all the decisions about how those systems are used, that’s very undemocratic. It’s very disempowering for, you know, most people in the world. On the other hand, if you kind of throw the doors open and let everyone have total access to all of the technology, there are ways that could go wrong as well. You know, it could, again, be used by criminals, by terrorists, could be destabilizing in other ways.
So I think a really big challenge is figuring out how to navigate that balance of who is making the decisions here, what kind of governance structures do we have in place to try and steer things in a way that is, you know, as good as possible for as many people as possible, within the, you know, the constraints that we face?

Alondra Nelson

Lance Askildson
Well, some very good thoughts and observations from my fellow panelists. As an academic, I think I’ll start with a bit of a critique and focus on some of the challenges. And I think the first challenge with artificial intelligence is the nomenclature, the taxonomy. What do we mean by artificial intelligence? And we have to recognize that this concept occurs in a cultural, societal, and commercial context that drives interpretation. So many of us think of artificial intelligence, when we hear it off the cuff, as being something equivalent to HAL from 2001: A Space Odyssey. We equate it to a human intelligence. And certainly at present, we don’t have anything that rivals human intelligence; we don’t have an artificial general intelligence, which is the common nomenclature. I think the other context that we need to remind ourselves of is that there’s an incentive to embellish the capabilities of these technologies, because they’re driving corporate and even nonprofit profits or revenues. So there’s a rich history of new paradigm-shifting, disruptive technologies, like blockchain, like social media, that speak to the tremendous potential they offered but exaggerate those capabilities. So, when we evaluate the opportunities and challenges, I think we need to be clear-eyed about what they can do. And so my background, you may have heard, is as a linguist, and in linguistics, we’ve been working with the underlying technology of generative, large language models for many, many years, about two decades, some of the rudimentary models. Now they’re much more sophisticated today, with much more complex algorithmic functions. But at their core, these are not new technologies. And I think a very, very important distinguishing factor that we need to remember is that they are all derivative. That means there’s nothing new, there’s nothing creative,
there’s nothing innovative about the technology’s output; all of that comes from human innovation that it’s been trained upon millions and hundreds of millions of times. And so, what I like to highlight about the current zeitgeist of artificial intelligence is that the human element is what gives it any meaning and purpose. So I think one of the big challenges is recognizing how we leverage our humanity, our human ingenuity, our ethics, our moral compass, to give rise to outputs from this technology that can be used productively and responsibly. And then we need to recognize, as some of the other panelists have stated, that there are tremendous dangers inherent in being able to produce a 200-page targeted monograph in five minutes, to be able to provide real-time disinformation that very convincingly mimics human interaction. And I think those are dangers that we’ve yet to fully appreciate. The other challenge that I see is understanding: we don’t even know the materials upon which many of the major AI models have been trained. We have some insight, because of some good news reporting, but those have not been shared openly by many of the purveyors of the commercial AI platforms. And what we do know tells us that there are incredibly dangerous biases inherent in these training materials. And so I think the challenge is, how do we first understand what those biases are, and then overcome them? You may have seen that an image generator from one of the major AI purveyors was recently shut down because it wasn’t operating as intended; it was showing a certain bias towards certain imagery. That’s the tip of the iceberg. And I’m concerned about some of the cultural biases, the ethnocentrism, that is built into the training materials. I think these are all challenges that can be overcome. But they require a lot of careful thought, discussion, and regulation. And so I’m really pleased that we’re having this discussion today.

Alondra Nelson
Thank you for that. Marietje?

Marietje Schaake
Well, so much has been said. So let me try to sort of wrap it up a little bit in this first round. As has been said, this is a technology that can be used in so many different ways. And so when I think about technology in general, but specifically AI, it’s almost like a layer that touches everything, from the classroom to the hospital to national security to, you know, the way we consume media. And if you think about that through the lens of power, you know, where do we need correction in the interest of democracy, I think, is a key question. And Deep has mentioned, Helen has mentioned, that there are concerns about disinformation. And that’s the sort of use case of artificial intelligence. But I also worry about how we need to govern this new technology, which has a couple of qualities that make it different from many other things that we have to put checks and balances around. So it allows for enormous individualization. What you see when you put a prompt into a generative AI model is different from what the person in the back of this setting will see; what you see today can be different from what you see next week. It’s also ever-changing, where even engineers that are, you know, at the cutting edge of producing these innovations don’t quite dare to predict what outcomes might be. And so when you think about AI as a very fluid phenomenon, with varying outcomes for varying people in different contexts, that also presents unique problems for how to regulate. I mean, you can’t put it in a lab and look at it with a couple of people or put some tests to it the same way you could with a car or a piece of medication, food, a chemical. So I think one of the major challenges is what AI means for democracy and for sort of governability. And that leads to questions of, you know, who has power and agency now to do so. That’s mostly the companies: they are developing these models, they are deciding when to put them out into society for you to experiment with,
to learn from, with the mistakes that we see as well. And so I think the opportunity and the challenge at once are for democratic governments, because that’s my focus. I believe democracy is under enough pressure, and so we need to really strengthen democratic resilience, including in the face of artificial intelligence, which presents enormous questions about governance. And if we do this well, we will see institutional innovation; we will see new ways of looking at these new technologies from the perspective of regulation, oversight, accountability. If we do not do this well, we will see a market dictating the understanding we have of this technology, the agency we have as individual citizens, the geopolitical power relations in the world, being decided by a drive to make a profit instead of a drive towards protecting the public interest.

Alondra Nelson
Thank you for that. So I want to come back to you to say a little bit more about all of the opportunities and challenges that folks laid out here. They don’t just happen, right? They really require institutions, infrastructure, decision-making strategies to have the best outcomes; they’re not just going to happen. And without those, we might just be left with the worst possible outcomes. And so what are some of the things that you see in the kind of policy space, as a former elected politician and a former policymaker, that are promising? And what are the sort of, maybe, green shoots of policy innovation that you see that can handle all of the fluidity and the different stakeholders and the asymmetry of power among the stakeholders that you’ve just laid out?

Marietje Schaake
So I'm enormously encouraged by the fact that everywhere in the world—whether it's here in Honolulu or at the level of the United Nations, whether it is in Washington, DC, or Brussels, or a city council in, you name it, Rome, Italy—there is not a governance body where AI is not top of the agenda. And I've hardly ever, if ever, seen such a meeting of minds. I mean, maybe with the big challenge of climate change, but that has taken very long, and there's still a lot of controversy around it. Whereas with AI—and this touches on what Deep said as well—the lessons from being late to addressing risk, addressing unintended fallout, addressing problems to the health of our teenagers online, to democracy, to even fraud and misleading uses of technology—those lessons have been learned. And so governments are eager to get it right this time. So that's good: there's a will, there's a momentum. There's not necessarily a sense of where to take that will or momentum, and that's related to the challenge of the nature of the technology, which is so ever-changing that we have to anticipate use cases of AI that we don't know yet but that we still have to tackle risk for. And so what I think is interesting, and also necessary, is more open-ended policies. The European Union, which is the jurisdiction I know best, because I served there for 10 years, has just adopted a law, the EU AI Act—maybe some of you have heard about it—which foresees an ongoing process of assessing new models and use cases of AI, and has a body of experts that then assesses the risks these new models pose. So it is not a locked-in legislative process that is finished and set in stone at this moment in time; it actually anticipates the change that is still coming, and has expertise and a process in place to move the law along with the way the technology moves along. And I think this kind of thinking—being principled in what it is we want to protect,
but more creative, or adaptive, to the way the technology will challenge these principles—is key for moving forward with regulation.

Alondra Nelson
Thank you for that. So Lance, I wanted to come to you and extend this a little bit to talk about education. Chaminade is a Native Hawaiian-serving institution—an extraordinary institution—and your organization is doing a lot of work on AI research, but very much centering the perspectives and the participation of the local community. So I want to give you an opportunity to say some of what you're doing.

Lance Askildson
Yeah, absolutely. So Chaminade has approached artificial intelligence in the same way that we've approached the questions and the challenges of big data and analytics. We've started with the question of what are the ethical imperatives that are driving new inquiry, new scholarship, new research in these areas. And the immediate answer from our faculty, our students, our postgraduates is largely around issues of data sovereignty, and who is at the table when these technologies are being developed—when the underlying databases, or in this case the language models for many of these AI systems, are being developed. How are they being trained? What are the algorithms, and what are the epistemologies—the ways of knowing, and learning for that matter—that are at the heart of those architectures? And so what we've done is we've tried to start with our faculty, with our students, with our research collaborators: start with those questions and ask who should be at the table. I'll give you an example. We currently host a $10 million National Science Foundation grant, called the Alliance grant, which is focused on issues of sustainability in the Pacific and Pacific Island communities, in Indigenous communities in particular. We started with questions around Indigenous ways of knowing, and how we can build not just databases but analytical frameworks, and then queries for AI models, that will begin to give people access to their own data and ways to interpret it—in ways that will help them develop appropriate policy and regulation. As you might imagine, it's hard to come forward with a political or policy solution without compelling data behind it, and increasingly AI is going to be at the center of that. So I think a lot of our work, by individual faculty and institutionally, is focused on those priorities.
The other piece relates to what Gabe was saying: we're concerned about the potential acceleration of haves and have-nots, particularly here in Hawaii, where, because of our distance, because of our relative geographic isolation, we're at greater danger of some of the accelerated inequality we see in other parts of our society. We want to make sure that we're being attentive and proactive to mitigate those types of stratification. So we're starting with kindergarten: we have a K-through-12, but focused on early ages, introduction to AI, and we've reached 4,000 keiki—4,000 children—in Hawaii already. We have a dozen other initiatives—I won't go through all of them—but they're aimed at engaging the community and really starting that educational process, like Gabe is doing at his school, and making sure that even if people don't have the technical expertise, they have the innate curiosity that will help them navigate future educational opportunities. And so within the context of Hawaii, as one representative of multiple post-secondary institutions here, I think there's a collective concern and responsibility—a privilege, a kuleana—that comes with stewarding this new technology in ways that ensure its democratization, its accessibility. I think the biggest challenge there is really the underlying architecture. These are very Western, ethnocentric databases that have trained all of it, whether it's an imagery-based AI model or a large language model, and that limits the understanding, both for the input—what your queries are—and especially the output. And if you're reflecting on other ways of knowing, whether they're Indigenous or other cultural ways of knowing, you recognize immediately that there's going to be bias baked into the underlying output that you receive. That's something we're going to have to navigate going forward.

Alondra Nelson
Thank you for that. So Helen, I wanted to talk to you about Georgetown University and some of the work that you're doing there, which seems to sit at this kind of interesting nexus, or maybe tension. Georgetown is one of the finest places in the world for foundational research and discovery, but your particular work there is really around national security concerns. So I wanted to ask you to speak a little bit about that possible tension between discovery—wanting to open up and explore information—and those times we might have to constrain it, and how you think about that.

Helen Toner
Yeah, absolutely. In my work, we work a lot with national security policymakers in Washington, DC—folks in the Department of Defense, the Department of State, the intelligence community—who are really looking at everything in the world through a national security lens. And people coming from that perspective really like having control over information: they like locking things down if they might be dangerous, they like things being in the possession of the US government and not in the possession of anyone else. The work that I do is at the intersection of national security concerns and AI, and it's been interesting to encounter that basic posture, that basic set of assumptions, over and over again in the work that we do. I think sometimes it is somewhat warranted and can make sense. We've already talked a little about some of the risks from AI that are relevant for national security: if we're talking about the possibility that an AI that is very good at programming could enable more people to commit cyberattacks, for example, that's a concern for the national security community. Likewise, the possibility that an AI system that is very good at scientific R&D and medical R&D could enable a larger number of people to create bioweapons—that's something very concerning from the national security perspective. And then more broadly, everything in Washington right now in the national security world is oriented around US-China competition—I'm sure in Hawaii, being geographically where you are, this touches your lives as well. So there's also a concern: don't we have to lock down our American technology and make sure that it doesn't benefit any single person or institution in China?
And so the conversations that we have a lot are about how you take those objectives and those concerns seriously, while also taking seriously, on the one hand, the fact that this is a very general-purpose technology—it's not a nuclear weapon, it's not any kind of weapon system; it's, again, more like electricity—and also taking seriously the enormous benefits, even specifically from a national security perspective, of open innovation, open sharing, open communication, which have been so beneficial to the US over the past decades. An open international scientific ecosystem, where researchers publish their work online or in journals, share with each other, and collaborate across borders, has been a very, very powerful thing. So I don't have a neat answer, but it's definitely a set of tensions that we discuss a great deal. Depending on the particular problem at hand and the particular set of actors, sometimes we will be supporting some attempt to close off or control; more often, in conversations with national security actors, we will be defending the benefits of a more open approach. But it is definitely a difficult balance to walk.

Alondra Nelson
So Deep, I wanted to ask you about tensions and trade-offs also—you're right on topic. Maybe some folks here have heard of it, maybe others would be more familiar with Microsoft or some other companies, but Anthropic is one of the nation's—the world's—leading AI companies, AI laboratories. And last week you released a new, more powerful model—a set of models, Claude 3 Opus and all of that. So from your industry perspective, how do you think about these trade-offs: the relationship between wanting to have access for more people—and also wanting to expand your market as a for-profit—and the growing capabilities of the things that you produce, and whether or not people should have access to them?

Deep Ganguli
So, first of all, a softball question, right? Easy to answer. First of all, anyone right now can go use Claude—it's just widely available. You can just go to claude.ai. We frame it as an AI assistant that's trained to be helpful, honest, and harmless. And, as others have said, this is a general-purpose technology. Really, what we want is to make it so that the model helps you do good things and make it so it's very hard to use the model to do bad things, and we spend a lot of time on both fronts of this. In terms of having the model help you do good things, we train it to be helpful and harmless through a variety of algorithmic approaches. And, of course, you cannot manage what you cannot measure. So after we go through these algorithmic approaches, we then measure: Is this algorithm biased? Can it be used in a misuse context—to help you, for example, build a bioweapon or commit a cyber offense? This is an evolving science; these questions are not straightforward to answer. And we're doing this all in house, we're doing it all prior to deployment, we're doing it continuously throughout development, and we're also monitoring for these uses and misuses post-deployment. We're quite upfront and open about our acceptable use policy, which says that certain use cases are just off-limits—you can't use them. Just off the top of my head: child abuse—you cannot use Claude for this. And using Claude to help you in a political persuasion campaign is prohibited.

And now the question is, how do we monitor for this? We have systems in place, both using our own models and using people, that are trying to look for this, enforce it, and give us this feedback; when we find that these bad things are happening, we will revoke access. This isn't perfect, but it's just imperative, I think. And then I want to go back to some questions about, again, accessibility. In earlier versions of Claude, we were curious: does Claude have cultural competency, and how do we measure this? So we spent a lot of time figuring out how to measure it, which was not super straightforward—we had to make a lot of Faustian bargains to make such a fuzzy question quantifiable, but we did it. We leveraged tools from the social sciences, which are designed to figure out how large groups of people across the globe think on subjective questions. For example, one question is: what do you think is more important, a strong economy or a strong democracy? Americans will typically, on average, say strong democracy; other countries might say strong economy over strong democracy. So we administer these questions to our models, we see how they respond, and then we can correlate the responses to the responses of people across the world. And, surprise, surprise, Claude sort of adheres to this Western hegemonic worldview. Why is that? Well, the training data are all in English, and the people giving the feedback are North Americans. So then we ran these really simple experiments: what if we tell Claude, "Hey, answer as though you're from a different country"? And you would see that its responses were definitely correlated with people from that country. We were like, wow, great—these models are super steerable. And then we dug into it.
We asked: what are the models saying before they answer the questions? And they were relying on harmful cultural stereotypes. This is bad, right? Other experiments we did: what if we change the language? What happens if we ask Claude in Mandarin to opine on the CCP—what does it say, will it be for or against it? Against it, right? So even though linguistic cues are powerful signifiers of cultural norms, this sort of fell apart.

So that was a previous generation of the model. What have we done since then? Well, we've gotten more multilingual data in there, we've gotten more multicultural annotators in there. I do not make the claim that we have solved this problem, or that this is a problem that can be solved. But I am making the claim that it is very deeply important to us to measure these types of phenomena and take steps together to address them. And we don't want to go it alone: we are a public benefit corporation, and we actually put out all of our methods and our findings, even though they're in disservice of the company making a profit—I'm telling you harmful things about the models that we found. So we're trying to do the right thing and put this out there, and learn together as a society: first of all, what's going on with these things? How do we measure them? And what do we want—what's the right answer here? These are questions no one company can or should have to answer. This goes back to governance structures: this is something that we as a society have to work on together. It's a mistake to think that a company has it figured out—we don't have it right—but we are trying, and we want to do the right thing.

Alondra Nelson
Thank you for that. So we're going to do one question to Gabriel, and then we'll open up for Q&A—so please get your questions ready. I want to end where we began, at least in part. Ben invoked the mission of the Doris Duke Foundation, which is about creative, equitable, and sustainable futures. So much of your work as a schoolteacher is about futures, and about working with a generation of young people who increasingly will live in a world that has always had chatbots. I think we can safely say that the children you work with have always had AI in their lives, whether or not they knew it in a kind of material way. But this is, in some ways, a chatbot generation. So I wanted to hear more from you, whatever you want to share, about working with young people around issues of AI. How do you teach them about some of the ethical and governance and democracy issues—how do we think about AI pedagogy in a K-to-12 space, and issues of civics, for example, or issues of ethics and democracy? And what are their aspirations for these tools? Young people often use them differently from those of us who are, you know, less young. What are you seeing from them about their aspirations for what these tools might mean in their lives?

Gabriel Yanagihara
Yeah. So, to mirror the experiences you have at the geopolitical level: my classroom is equally chaotic. I'm teaching teachers all over the state how to use these chatbots, how to use these tools. And there are the military-industrial types who want to control the data—essays completely written with pen and paper—and then I have the students who are like rebels, just running off in every single direction. But one underlying theme I've seen with all the students and the teachers is a curiosity that I've never seen before. A willingness to pick up and learn—a teacher who's burnt out, tired, working, like, two jobs, showing up to a training that I'm running on a weekend, because "I have to figure this out; it's showing up in my classroom." We're on the front line. We're like the frontline soldiers dealing with these AI chatbots every day. And what we're seeing is they're just willing to take the risks. They're not really worrying about the government or the large-scale ideas; they're worrying about the today and the now. I mean, some of the stories that I hear from the kids.

So I recently did a circuit on Maui, teaching, and some of the students say, "Oh, you know, I don't go to Iolani, Punahou, Mid-Pacific—those kids can get private tutors. I don't need that. I have ChatGPT now, so now I'm on the same playing field." And these are the perspectives, whether correct or not, of the kids that are coming through. Of course, when I first heard that, I'm like, oh, really? So what value-add do I, or we as a private educational institution that charges for our services, offer, when every student anywhere in the world can have access to the perfect college application essay? Because they can go to an AI large language model, whether they have a $50 smartphone or a $20,000-a-year student tuition with a whole department dedicated to helping them. So for me, being born and raised on Maui, I'm like: go, kid, do it. Just use the tools; figure out the consequences later. That's kind of the phase that we're in right now.

So I do want to leave it with this: the kids are all gung-ho—they want to try it out. The teachers also really want to try it out, but for them there are those questions of, okay, how do I assign an essay anymore? Do we even need a five-paragraph compare-and-contrast essay anymore? How else can I do my lessons? So instead, there's been a push for place-based learning, for pushing more social-emotional learning to the kids so that they know when it's appropriate. Recently I was at NSA, in St. Louis, Missouri, with all the heads of the schools, trying to talk about what an AI policy could be—like the military-industrial or big-governance side: how do we lock down, which tools do we use, do we use these tools or those tools? But what we're seeing a lot of is: we already have academic integrity policies, right?

The tools to cheat and write your essay existed before large language models—there are Telegram channels where for $2 you can get all your essays written for you overnight. That's been there for 10 years now. But now we're seeing that the students themselves are able to just pick up and run with these. That said, on the flip side, there are students who are opting not to engage with these tools—not because they don't know how to use them, but because they're worried that the adults in the room, all of us, haven't figured out whether this is allowed or not. So they're saying: I'm not going to use it, I'm going to hold myself back, because I don't want to be seen as a kid who's cheating his way into Stanford. And then they're not going to get into Stanford, because everyone else is using these tools to get in. At least in the short term, there's a lot of…

Alondra Nelson
I’d like to think we cut through that at Stanford.

Gabriel Yanagihara
…so that even a kid like me, who grew up on Maui, can get that perfect essay. But those are just some of the examples I wanted to share.

Alondra Nelson
Thank you. Complicated indeed. Okay, so we’ve got about 15 minutes for questions. You’re going to take this microphone…Okay. Thank you, Tyler. There’s a gentleman here.

Kevin Lim
Hello, and thank you to all the panelists, and to the people who summoned you—it's been a dynamic discussion. My name is Kevin; I run a nonprofit called Hawaii Tech Mentors. I'll give you a choose-your-own-adventure question, and I'd love to hear from most or all of the panelists. It goes like this: Is it possible to slow down? For those of you with more of a technological bent—is there a mechanism by which we can actually decelerate? And for those who want to push on that question: what would the benefits of slowing down be?

Deep Ganguli
It's a great question. So at Anthropic we actually have something that we call a Responsible Scaling Policy. It asserts levels of risk, and we define what that risk is—and this is open, you can go read it, it's open to critique, we want feedback. And we have actually committed that after we pass a certain risk threshold, we will just stop. We will just stop until we figure out what the heck is going on; we won't deploy it. And I saw this happen between the release of Claude 2 and Claude 3: the first measurements we made were all about risk. They weren't necessarily about how much money we could make off of this thing; they were about, what have we done? And I want to be clear—there's nothing enforcing this. We as a public benefit corporation, the leadership, wrote this down; we're all aligned on it, and we are all committed. But there's nothing stopping anyone else from writing down a similar policy and holding themselves accountable to it. I can speak for myself: I believe in this policy, and I believe we will make that tough decision. We believe we can slow down.

Kevin Lim
So you see emergency brakes at Anthropic?

Deep Ganguli
Do I see that working? Yeah, I'm optimistic about it.

Helen Toner
I think it's the right question. I don't know that the answer is yes, but I think it can be. And I want to return to something Alondra said at the opening, which is that we all have a voice in what happens with AI, and we can exercise that voice. We should not feel that we need a computer science PhD to have an opinion about where this technology should go. So I think it's great that companies like Anthropic are voluntarily taking those kinds of things on. I don't think that's enough. What's very difficult with AI is that we're so uncertain about where the future will take us, and different world-class experts have very different perspectives: are things just going to radically accelerate from here and get crazy in a couple of years, or are things kind of petering out—will we have 10 or 20 years of excitement about language models the way we did about self-driving cars? But I think in the world where things do keep getting more dramatic, more confusing, the US government—the US Congress—has its own particular challenges in terms of enacting strong regulation or strong government-based checks. But there are state and local governments, there are international governments, and who knows—with enough pressure, in crisis situations, Congress has been known to act anyway.

So do I expect any kind of dramatic slowing down in the next six or 12 months? No. Do I think we have more levers to pull, including as a democratic society that can make this a top issue and a top concern if things do get more radical? Yeah, I do think there's more that we can do there.

Marietje Schaake
Maybe briefly, because I see a lot of questions, too. The question is not only can we slow it down, but what will we use more time for? And I think one of the big questions now is not so much how to stop the innovation, but how to be more deliberate about when it goes from a research phase to an availability or market phase where everybody can work with it. Right now there's really hardly any regulation of that, whereas ideally you would, in the public interest—as a government or a government agency—want to have scrutiny over these models, to learn from them the way that Deep can learn about the models at Anthropic but that I can't, or the government can't, at this moment in time, because there are just no mandated oversight mechanisms. And so with questions of risk for society, or violations of existing laws—such as non-discrimination laws or antitrust laws, which are very well established, not controversial, and which AI may undermine—the question is: how can those be upheld? What does oversight look like? And I do think that in that process there will be a slowing down of just racing to market, in the interest of gaining public understanding and getting better public policies.

Sandra von Doetinchem
Hello, everyone, and thank you so much for having us here today and sharing your expertise. I have a question that is somewhat related. My name is Dr. Sandra von Doetinchem; I work as a senior scientist on AI-enabled talent solutions for EduWorks Corporation. I once heard someone say that AI is the biggest technological revolution that we will see in our lifetime. What is your opinion on that?

Gabriel Yanagihara
Well, I'll say this: I grew up before dial-up. I may not like it, but I have been through so many life-changing, world-changing, once-in-a-lifetime experiences that I really don't think this is going to be the biggest, or the last. But I think this is the first one where we're having this conversation. As we discussed earlier, this is the first time that I've ever seen society hop on board right away—like, oh, there's something really cool happening, let's figure it out. So I think this is going to be a model for how we tackle all new technological engagement. And I think that's really cool. This conversation we're having—it's awesome.

Alondra Nelson
Lance, you have a thought on that.

Lance Askildson
Well, I'll say it's certainly a paradigm shift, and a dramatic one at that. I don't think it's unprecedented. But I think we are going to have to, particularly in the education space, think about how we can equip new adopters with not just the skills, but the knowledge and the dispositions—namely, the ethics—to navigate these technologies better than we have social media, and even the internet in general. I also grew up before dial-up, and those were, you know, shocking developments; I think a lot of people would critique the trajectory that we're on with some of those current technologies. I'll just riff off this to make a comment.

I was really impressed that Deep commented that there are human moderators for Claude, for this AI platform. I think increasingly we're going to have to be preparing the human intervention to monitor the technological innovation. Ethics cannot be an algorithmic function. Ethics have to be driven by human beings: we have bodies, we experience the world with all of our senses, with all of our humanity. And I think there's a tremendous danger in allowing—to go back to the previous question—the technology to accelerate for technological ends. Technology needs to serve humanity and individual human beings in ways that are profound and meaningful; it can't just be technology for technology's sake. And so I think that's part of the challenge, both in terms of adoption and education and in managing the paradigm shift.

Audience Member
Hi. I'm inspired by the space that we're in: a beautiful home built by a woman of privilege, who purchased and commissioned most of the art in it at a time when museums were filled with beautiful things that were stolen by colonizers. How do we feel good about using these AI tools—particularly the image generators that were trained on the backs of artists, authors, creators who were likely not asked if they wanted their art to contribute to this? How do we go about using these technologies to continue to create art and other creative work in a way that doesn't have an icky colonial feeling about it? Do you tear it down and start over with only art that you have permission to use, or language that you have permission to use? Or is there a way to continue building on what's already been made, while those who created the language and the art that trained these models feel they have been compensated?

Lance Askildson
So I'm going to jump in on that—that's a great question, and a really thought-provoking experiment to think about the ownership: not just of artists' work, but of all of the language, the hundreds of millions of essays, copyrighted books, and other materials that have been used to train AI. I don't have an answer to that question, of course, but one of the interesting ways to approach it is to recognize that language is not just—again, I keep using this phrase—an algorithmic function. AI models are processing language; they're not creating language. They're reconstituting the language upon which they've been trained. And so I think it's fascinating to think about the nature of language: language is a vehicle, a container for thoughts and intentions that come from our minds, from our hearts, from our spirits.

And there's, I think, a deep philosophical question that needs to be explored further regarding the output of an algorithmic function: whether that output, even if it's unique in some way because it has combined human material in new ways, lacks that intentionality, lacks that underlying meaning, whether it's an artistic expression or a linguistic expression. In my mind, there's something lost there; it's vapid. So in addition to the issue of ownership, and a bit of a post-colonial implication, I think there's that deeper question: what does that output represent, if it does not begin with human intention and human meaning?

Marietje Schaake
Sure, I can try to say something. The whole question of colonialism has many tentacles when we think about AI, including the very extractive models of labor that often impact the least empowered communities of our world, even if the story is that brilliant minds in Silicon Valley are coming up with these incredible technologies. So this speaks to the power imbalance that I briefly touched upon. But your question is an excellent example of, and I do believe there will be, so many more battles around who owns the information, anything from your creation to your face to your expression: who ought to have agency over it, who ought to profit from it. Look, for example, at the legal challenge that the New York Times has put before OpenAI over the training of its models. I think we're only seeing the beginning. And I'm sure that the stakeholders involved, whether artists or workers or disenfranchised communities, will shape the outcomes of where we end up, and hopefully will shape policies that lead to fairer participation and representation in relation to AI.

Teddy Reeves
Hi, Teddy Reeves. I'm a curator at the Smithsonian National Museum of African American History and Culture. My question relates to access, particularly when I'm thinking about communities of color and AI. When we think about new technologies, communities of color are often left out and often trying to play catch-up. So what is being done to ensure that communities of color are at the table, both in development and in access? We know communities of color often don't have broadband; they don't have access to the technology. So what's happening in your individual companies and institutions, and what's happening legislatively, to ensure that, again, we're not left behind the curve?

Deep Ganguli
Yeah, that's a fantastic question. In terms of accessibility, going back to the ethics behind these systems: we train Claude through a method called constitutional AI. What is this? Well, it starts with a constitution, which is a set of high-level ethical principles, normative principles, that we want these models to abide by. When we started this out, we did it for research purposes, and a few researchers, myself included, kind of wrote it down. And then we looked at each other, and we trained the models, and we found out: wow, this works. We can actually get our models to abide by these principles. This is amazing; we can operationalize ethics. And then I think I raised my hand and said, "Wait, why was it the three of us that got to write this down?" That seems like a mistake.

And so I somehow was able to do the following thing: what if we open this up to the public? What if we ran a deliberation process where people from across the world, or, sorry, we actually constrained it to North America, but where people from across the country could weigh in on what rules we want these chatbots to follow? And an even crazier thing: what if we trained a model against what the public wants? So we actually did this. And it worked. We found places where the public agreed with what the three of us wrote, and places where there was disagreement. We found that the model trained with the public's input was actually more fair and less biased, and we put input from the public into the newest model as a result. That model is fascinating. There's no reason we can't make that bigger. There's no reason we can't take it to special interest groups like Black in AI, who know about the technology and know about the ethics, and there's no reason we can't do this at historically Black colleges and universities either. We're taking steps toward it. These are complicated problems, but these are the right questions to ask.

Alondra Nelson
Lance, do you want to weigh in here? Chaminade has several NSF grants, including in the NSF INCLUDES program, that are specifically about issues of access. This is a great question. As we're talking about AI, there's obviously still quite a chasm of a digital divide that's not part of this conversation, as we leapfrog forward in the hype cycle and sort of forget that we've got these foundational problems of infrastructure. But Chaminade is really working in this space, so it would be lovely to hear from you on this.

Lance Askildson
Yes. So, again, building off of what Gabe said earlier, quite appropriately: we're trying to mitigate the already existing divides in our society, and hopefully repair them in some ways, through these transformational technologies. To answer your question in the simplest terms, and then relate it to our work at Chaminade: it begins with outreach. You can't just expect people to take advantage of even an open invitation. You have to be very proactive in going out and connecting with these communities, encouraging them to be a part of your community and your organization, and then giving them a priori opportunities to engage with the technologies they're going to need to be effective navigators of our collective future. And I think the disenfranchisement question is the scariest one to me, because of those who lack some of these fundamental skills, not necessarily to navigate the technology, but to interpret it, to understand what it can do. If you don't understand the technology, you are easily manipulated by that technology and by the people working with those technologies. So I think, again, giving people control over their data, and control over the foundational technological tools, is very important. At Chaminade, we're a federally designated Native Hawaiian-serving institution; we have one of the largest, if not the largest, proportions of Native Hawaiian and Pacific Islander students at our university. But more importantly, we've partnered with Kamehameha Schools and other Native Hawaiian organizations to recruit students to programs like our Hoʻoulu Scholars, our STEM scholars, and our NSF INCLUDES grants, making sure that we're providing opportunities that are really tailored to the communities and individuals we're serving. And I think that's part of the answer, not a complete answer, but part of the answer when it comes to educational institutions.
And again, I love the theme of this conversation, which several of my colleagues here have already repeated: you don't need to have a degree or particular expertise to have an opinion. I think that message of empowerment says: listen, your opinion as a fisherman, your opinion as a person working in textiles, matters. And we want to teach you how you can use this technology to benefit you and your family and your communities at home. So that's what we're trying to do.

Alondra Nelson
Let's get one more, over on this side; we've only got time for one more question.

Walter Bell
Hi, my name is Walter Bell. I just wanted to ask: a lot of the talk about AI makes it seem like it's a reflection of humanity itself. So, given that it's a reflection of humanity, is there a way to create an AI model or system that is free from the biases that various groups or cultures may have in the way they see the world? Who wants to take that?

Gabriel Yanagihara
Yeah, anytime you look in the mirror, you're going to see all the smudges on the screen and everything. But if you do figure it out, that's the trillion-dollar question. Call me, we'll all invest.

Helen Toner
I mean, we've come a long way over the past 10 years in understanding and measuring and mitigating those different biases, but we still have a long way to go. There's an assumption that a lot of people had, and maybe gradually fewer people have, that a computer is inherently objective, that something that comes out of a statistical analysis couldn't possibly be racist. How would that be? It doesn't have any feelings. I think there's an increasing appreciation that that is a very, very bad way to look at the situation. So I hope that we can get better at handling the fact that no, we cannot build a system that doesn't reflect any of those biases.

Gabriel Yanagihara
And just having the biases there has been one of the best introductions in a classroom to making the kids aware that they themselves have biases. When they ask the AI about something, especially for the students that I work with, and it doesn't answer in a way that matches who they are, their skin color or anything like that, it's painfully obvious to that student that they're not being reflected back by this tool. And that's a really good conversation starter with any of them. So given that the biases are built in, and we can't really fix that, we might as well use it as a tool to continue that conversation.

Marietje Schaake
In the whole AI discussion, there is increasing analysis of what a human should decide versus a system, and the same with questions of representation, visual reflection, or which voices feed into how models are governed or how they work. This is all very political. Take even your question: try to sit everybody who is here tonight down for about four hours and come to a conclusion about what unbiased really is. I wish you the best of luck. So we have to be real about the fact that people will disagree about what fairness is, and that a human, vis-à-vis a system, can never be representative of all of humanity. Even when we hear lofty promises of AI for good, or ethics principles, and so on: if you listen to politicians, hardly anyone is going to say, "Hey, I'm here with a bad agenda, some bad plans for you." Everybody will say, "I'm doing the right thing for your lives; I'm coming from a good place." And I think that's what people believe. But nevertheless, we have very different stakes and very different backgrounds. So I think it's really important to be real about trade-offs, to be real about who gets to speak on behalf of whom and why. I don't know that we will ever land in a place where everybody feels content about the representation or the lack of bias, because whatever shifts in the better direction for one person may shift away from the better direction for another. And if we could reflect some of those realities better in how we think about AI and what needs to happen in terms of governance, then we would be more "fair," quote unquote, about what is and is not possible, and also to who we are, each of us individually and collectively.

Alondra Nelson
This is the end of the panel, but not the end of the conversation. I hope that in this community, this conversation will continue. It's just too important for it not to. So thank you all for being here. And thank you, Marietje, Lance, Helen, Gabe, and Deep. Thank you very much.