Conversations 4 Citizenship

Episode 7: AI for Peace and Sustainability: A Conversation with Parishrut Jassal in India

Episode Summary

In this episode, Parishrut Jassal, a PhD candidate from Panjab University in India, discusses how AI governance can support peacebuilding efforts and contribute to achieving the UN's Sustainable Development Goal (SDG) 16 of promoting peaceful and inclusive societies. Jassal explains that AI governance frameworks, such as the European Union AI Act, aim to mitigate risks associated with AI misuse and promote responsible AI development aligned with principles of peace and human rights. He highlights the importance of international cooperation and the need for global AI governance that considers cultural differences across nations. Jassal also explores the potential of AI applications in peace education, such as interactive simulations and educational games that foster empathy, critical thinking, and conflict resolution skills. While acknowledging the challenges, Jassal expresses hope that by harnessing the power of AI responsibly and ethically, we can create a more peaceful and sustainable world.

Episode Notes

In this insightful episode of Conversations4Citizenship, we dive into the world of AI governance with Parishrut Jassal, a PhD candidate from Panjab University, India. Parishrut's research focuses on how AI can be a force for peace and sustainability.

We kick off by discussing the current state of AI governance, with Parishrut highlighting the EU AI Act (AIA) as a groundbreaking example. He explains how regulating AI based on risk levels can help prevent misuse and protect human rights. But he also emphasizes the challenges of applying these rules globally, given different cultural and societal contexts.

The conversation takes an interesting turn when we explore the link between AI and peace education. Parishrut shares his vision of AI as a tool to foster empathy and understanding through interactive learning experiences. He makes a strong case for including ethics in AI governance to support peace education initiatives.

Looking ahead, Parishrut paints a hopeful picture of AI as a partner in human-led peacebuilding efforts. He stresses the importance of responsible AI development and international cooperation to create a future where AI benefits everyone.

Overall, this episode offers valuable insights into the potential of AI to be a force for good. 

This episode is hosted by Dr. Stella Micheong Cheong. Please subscribe to the podcast through Apple, Google, Spotify, or Amazon Music. You may also follow @c4c_ed on Twitter. We look forward to hearing your feedback. If you would like to explore participating in our podcast or submit a blog post to C4C, do not hesitate to reach out through the online participation form or email us at conversations4citizenship@gmail.com.

 

Resources mentioned in this episode:

  1. UN Adoption of AI Governance Resolution
  2. UN Global Pulse Lab's Work (AI & Peace)
  3. The Global Partnership on AI (GPAI)
  4. African Union's Efforts for AI Policy

 

 

Episode Transcription

Stella Micheong Cheong  00:00

Hello, listeners! Welcome to the Conversations 4 Citizenship podcast. I'm your host, Stella Micheong Cheong. Today on the podcast, we are talking about the potential of artificial intelligence to transform peace and sustainability, and how it is shaking things up, for better or worse. We will be talking about how AI can be used ethically and sustainably, especially when it comes to building peace in a world facing conflict. Joining us is Parishrut Jassal, a PhD candidate from Panjab University whose research focuses on how AI governance can support peacebuilding efforts. Through Parishrut's work, we will see how artificial intelligence can be a tool for peace education and a more sustainable future. Let's get started. Hi, Parishrut, how are you doing? Hello, Stella, thank you for having me. I'm doing great. How are you? Wonderful. I'm always good, and thanks for joining us. Before we dive deep, could you introduce yourself briefly? Can you tell us about your doctoral research on AI governance and what inspired you to focus on AI governance for SDG 16 of the Sustainable Development Goals?

 

Parishrut Jassal  01:27

Sure, I'll start with a brief introduction for your listeners. Hello, everyone, I'm Parishrut Jassal; that's my full name. I'm a PhD candidate at Panjab University in India, and I'm also working as a facilitator for an AI governance fellowship programme hosted by the Equestrian Institute in Africa. My doctoral research focuses on AI governance. We all know that the landscape of AI policies and frameworks is developing at this very moment, so I analyse the policies being developed across the globe, and then I connect the dots: how those policies, and the potential uses of AI technology, how it is developed and deployed, can contribute to fostering sustainable peace. Within the sustainable peace arena, I focus mainly on Sustainable Development Goal 16: peace, justice and strong institutions. Yeah, that's it.

 

Stella Micheong Cheong  02:35

Alright, that's good. Well, Parishrut, I'm just wondering about the research that informed your thinking on the theoretical framework for AI governance and/or peacebuilding. Are there any specific scholars or books that you found particularly influential? It would be great to hear some titles our listeners could check out if they're interested in learning more.

 

Parishrut Jassal  02:57

So, let me answer your question by explaining how I got into the research field of AI governance and how we can leverage the current AI landscape for sustainable peace. For a long time, the AI technology being developed around the globe was viewed as the arena of tech-savvy people: only technology-driven people had a say in deciding what kind of technology got developed and what kind became available to us. But artificial intelligence has now been a buzzword, a pretty mainstream topic, for quite a while, and it's not going anywhere anytime soon. Once it garnered the attention of everyone aware of their surroundings, the question arose: these technologies around us are available to every citizen of every country, so how are they being used? Certain worrying trends came to light, and that triggered initiatives to govern the technology; not in a manner that hinders its progress, but to set out how far it can go. Let me state this with a concrete example. We all know that the European Union AI Act (AIA) came into the picture very recently, and apart from that, the United Nations General Assembly also recently adopted a landmark resolution on governing AI. What they are saying, in essence, is that this technology should be safe, secure and trustworthy.

They are not asking for more, and this is addressed to the people developing the technology, the scientists behind it. By drawing up such frameworks, they are trying to set out in one go how any technology should be good. They address the digital gap between developing and developed nations, and they mention fair access to this technology, which is of huge importance: any technology should be a level playing field for everyone. They specifically mention that AI should follow sustainable development, and they mention the human rights and freedoms people should enjoy both online and offline. They specifically mention SDG 16, and for that matter all 17 Sustainable Development Goals. Now, to come to your question about resources: this is a nascent field, and the source material and literature I have gone through hardly talk about a direct connection between AI and SDG 16, or the SDGs generally. The closest I can mention is the UN Global Pulse lab, whose base cases are in Africa; they have been researching how big data and AI can be utilised to foster peacebuilding and conflict prevention. Beyond that, in the research arena I am developing, I have focused on what governance basically means when AI comes into the picture.

Even the term "AI governance" is pretty new. You can spot the word everywhere now, but where did it originate, and what is the epistemology of the term in the research literature? So I address that first. For SDG 16 itself there is a huge literature, though with a few drawbacks; the metrics used to measure SDG 16 are not coherent. Still, I think my research in building this connection will be of huge importance, because as of now AI is discussed only in a general manner, for human security or human rights, or, in the public sector, with mentions that it can be used in healthcare, agriculture and maybe the service sector. No one is specifically addressing sustainable peace. I have specifically taken SDG 16 because by doing so we achieve two things: one, developing a culture of peace and tackling the issue of conflict and violence, and two, governing how AI is developed and deployed. Apart from that, my research also contributes to Agenda 2030. When you navigate the literature you find work assessing the SDGs, how SDG 16 is faltering or how no common metrics are available, but I am not addressing that issue. I am making the most of what has already been designed, instead of going back to the process of developing something new and reassessing what SDG 16 should contain or focus on. As we dive deeper into this conversation through the episode, I'll address those points.

 

Stella Micheong Cheong  09:08

Wow, Parishrut, that's really informative. Well, personally, I'm curious about AI governance research like yours back at Panjab University, or at other universities in India. I would love to hear what the cutting edge looks like there.

 

Parishrut Jassal  09:30

That's a very interesting question. I have come across people who are working in AI, but I have met them only virtually, because in India the situation at the ground level is, as of now, not that promising, although the government's work on this front has been promising. You know, it's not that any technology is introduced later in my country; when a technology comes, it soon becomes easily accessible to everyone. But as far as the research situation in India is concerned, there is work in the domain of computer science on how AI technology is to be developed and deployed, though not to the extent of examining how it affects the masses; and that work has been going on for ages, so it's not a new field in that sense. If I talk specifically about my research field, how AI should be governed and what policies and frameworks are being designed, I haven't come across any scholar working on it as of now, and I'm pretty active on various social media platforms and always look forward to meeting people. I have met people in India as well, but they are from computer science and engineering backgrounds, developing the technology itself, not studying its governance, to my knowledge. So research here is at a pretty nascent stage.

 

Stella Micheong Cheong  11:04

Wow. So it means you are a pioneer in this area, right?

 

Parishrut Jassal  11:10

We could say that once I've delivered concrete results through my research. Thank you.

 

Stella Micheong Cheong  11:17

That's great. Thank you, Parishrut. Let me move on to a more in-depth question. As we're witnessing, AI offers unprecedented opportunities. However, with great power comes great responsibility, right? When it comes to peacebuilding, it is crucial to navigate the ethical, societal and environmental impacts of AI. So does the AI governance you are envisioning take into account all these ethical, social and environmental impacts? Can you share some key insights on how AI governance and SDG 16 are interconnected?

 

Parishrut Jassal  12:03

Perfect, that's a good question. Thank you for it, because through this question I can give your listeners an insight into what really goes into how AI is being governed and what they can look forward to. So, coming to your question: if I have to tell you how AI is governed as of now, the best example I have is the European Union AI Act, approved in 2024. This regulation establishes a risk-based approach to regulating AI systems and the technology being developed or deployed: it categorises AI systems according to their potential risk levels, from unacceptable and high risk down to minimal risk. This initiative directly addresses the challenge of ensuring responsible development and deployment of AI technology. To give our listeners a clearer picture, let me take an example from SDG 16, which has very specific targets. Target 16.1 is about reducing violence. Now bring in how AI is governed through the lens of the EU AI Act in the context of that target: the Act imposes strict requirements on high-risk systems, such as facial recognition technologies, and in doing so it aims to mitigate the risk of AI being misused in ways that could escalate violence or violate human rights. Think of any device with facial recognition, like the iPhones we have today. Not so long ago we unlocked our phones with a thumbprint; now facial recognition is built in.

Now, I'm not saying the EU Act is the last word on how AI should be governed, but of what we have at hand it is the most concrete, and then comes the UN's recent resolution. SDG 16 also talks about strengthening institutions; target 16.6, for example, calls for effective, accountable and transparent institutions. We have various institutions, and for any human civilisation to survive, institutions play a huge role, which is explicitly addressed in SDG 16. Here the Act promotes transparency and accountability in AI development by requiring companies and AI developers to provide documentation on how their systems work and what kind of data is used to train them. That obviously fosters trust in AI, and it also strengthens the institutions that rely on this technology. So as of now, frankly, this is what we have: the EU AI Act as the most concrete instrument governing AI, then the UN's adopted resolution, and, apart from that, the Council of Europe has recently shared its draft convention on governing AI, which will go through due process; after that it will be ratified and considered one more document we can refer to for AI governance. I can give you one more example in this scenario. A core aim of SDG 16 is to foster a culture of peace, and the Act's focus on responsible AI development can contribute to a more peaceful society by minimising the potential for AI to deepen inequalities or to be used for malicious purposes; the AI Act addresses that as well. So I'm just giving examples.

And for our listeners, this is how I build the bridge. The bridge connects any AI governance document that any nation, or union of nations, comes up with to the specific targets of SDG 16: which act overlaps with which target, whether it addresses the target at all, and, if it does, to what extent it helps foster SDG 16 as a whole, through the kind of technology that is developed and the kind that is deployed in our society.

 

Stella Micheong Cheong  17:16

Yes, I'm 100% with you. In terms of peacebuilding and achieving peace, even though the European Union has established a new AI Act, there are several challenges in adapting it to another culture, another society, other institutions, in a country like India, you know, because there are big differences between countries. So what are the main challenges in aligning AI governance with the goals of peace, justice and human rights? And how can these challenges be addressed?

 

Parishrut Jassal  17:57

Let me start with the challenges we have. First, the potential misuse of AI in weapon systems: it could escalate violence to a level beyond our control, leave alone the international law it would violate. Then there is algorithmic bias, a pretty basic mistake in developing any AI technology, and its harms might not be as tangible as those of autonomous weapon systems. A technology that gives results and answers in patterns that are inherently biased, if ratified by a government agency and put into daily use, could amplify the inequalities already existing in our society and justice system and entrench discrimination, the social evils already present in society. Transparency is another challenge: if companies develop technology in questionable ways, cannot explain to anyone how the AI is making its decisions, and face no governance around that, we have a problem. That is why institutions have to play a major role here. And that brings us to your point: any governance act, any way of governing AI, is tied to the social and cultural situation in which it flourishes. If a policy framework around AI is developed in, let's suppose, country ABC, is it possible to use the same framework in country XYZ?

I'm not naming any countries, just to be politically correct at this moment, but you get me, right? To address this, I'll say that a certain amount of a framework can be reused in another country, but obviously not the same document wholesale, because AI is governed by establishing clear guidelines for responsible development and deployment so that risks can be mitigated, and those guidelines must fit their context. The other avenue is international cooperation: shared AI governance frameworks can ensure a more unified approach, one that is ethical, fosters best practices and, in the end, builds citizens' trust in the technology. That is what the government of any country has to ensure: that its citizens are not wary of the kind of technology their government authorises. In that sense, look at the kinds of laws and metrics being prepared through the EU AI Act, through the United Nations, or even in India. India recently came out with cybersecurity and privacy guidelines for AI in businesses. This came after an earlier advisory on how AI models should be prepared, which was rolled back because the consensus among the affected AI community, the stakeholders and shareholders, was not in its favour. The recent guidelines do address certain issues already covered in the EU AI Act, such as data privacy, but they also cover areas specific to India, its people and the kinds of businesses flourishing there.

So as of now we have only three, four, maybe five such instruments. The UN resolution was supported by more than a hundred countries, and the guidelines the UN has adopted were created not by a single entity or by UN personnel alone; the considerations of every country were taken into account. But it is so recent that we cannot call it final. We still have to wait and see how it is reflected upon and what the community and stakeholders have to say about this decision of the United Nations. Ultimately, the risks any technology entails, and how to minimise them, lie in the hands of the developer of that technology and of the sovereign state in which it is used, so both governments and developers are accountable for what kind of technology comes into play and what risks it entails.

 

Stella Micheong Cheong  23:24

Wow, that's fascinating. Thank you so much, Parishrut. My next question is related to education. I know your area is not education as such, but I think your research can also contribute to peace education. In that respect, how do you think AI governance, as you envisage it, can contribute to peace education? And what aspects of AI governance do you think need to be considered in order for it to contribute to peace education?

 

Parishrut Jassal  23:59

Thank you for this question. It is not entirely new to me, because I did my Master's in peace and conflict studies, and peace education, the ways of fostering a culture of peace, was part of our coursework. So, connecting peace education with my research on AI governance for achieving SDG 16: peace education, if I'm not wrong, is about educating people, fostering a scenario where whatever is taught amounts to a culture of peace. If I have to link AI and peace education, the most basic thing that clicks for me is AI applications in the domain of peace education. We can explore how AI-powered tools and apps could enhance such initiatives. For example, interactive simulations can help students understand conflict dynamics and explore peaceful ways to resolve those conflicts. Personalised learning platforms can cater to different learning styles and promote empathy and an understanding of different cultures. We can host certain scenarios so that, before making a decision or settling into an opinion that tends to be stereotypical, students can run the scenario and come to understand a cultural milieu that is new to them in a more informed manner. And if we notice, the young generation of today is hooked on their black screens, and education can be instilled through games.

We can use AI to develop educational games that create engaging scenarios where students practise critical thinking, which will help them develop more sensitised, more humane opinions and ways of addressing situations. We could even design the games strictly around conflict resolution scenarios: if two parties come into conflict, how can there be a level playing field for both of them to sort it out? Beyond that, through peace education we can also push for, or help assure, the responsible use of AI. Peace education can emphasise the importance of responsible AI development aligned with the principles of peace, and we can discuss a culture of peace and respect for human rights, which ultimately means respect for fellow human beings. AI can play a huge role here, and we can definitely look into how it can be of further concrete use.

 

Stella Micheong Cheong  27:31

Okay, thank you so much for sharing several examples. The educational game, for instance, is interesting; we call that gamification. But my question was about which aspects of AI governance you think need to be considered to contribute to peace education. Have you ever thought about that, about which parts of AI governance relate to peace education? Would you like to add something?

 

Parishrut Jassal  28:01

If we look at how AI is governed, we can see different policies and frameworks coming into the picture, and if we have to connect that with peace education per se, there are specific aspects of governance we could integrate into it. Whenever a law, a policy or a framework is designed, whenever any entity attempts to govern AI, it can address these ethical considerations. I can't say that every point of mine will resonate directly with peace education, and much of this is already included in how AI is being governed. But if we have to specifically foster peace education, I will say that AI governing bodies can address the aspect of peace education in their documents from the very beginning. I can't really comment on exactly what they would have to include while governing AI, but they could explicitly mention how AI can be utilised for peacebuilding efforts and for resolving conflicts; that would be a more direct approach. They could discuss the importance of integrating ethical considerations, and also the kinds of conflict scenarios prevailing in a given country. I hope I answered your question.

 

Stella Micheong Cheong  29:41

Thank you so much, that's really helpful to understand. Okay, this is my last question. Looking toward the future, what role do you envision AI playing in achieving global peace and meeting the SDG 16 targets? And how do you see the evolution of AI governance impacting the global pursuit of peace?

 

Parishrut Jassal  30:07

I will say that when any new technology is introduced into a society or a civilisation, some people are apprehensive towards it and some are on board with it. How AI has entered our lives, and how this technology is shaping the world around us, is of huge importance to every one of us; we cannot deny the role AI plays. And I would like to say this to your listeners very specifically: I vouch for AI as a tool for humanity, not as something that replaces human labour, human thinking or the qualities of a human being in any manner. In peace education and in peace and conflict studies, we talk about ways of resolving conflict and building peace. I am of the opinion that this technology, with its huge capacity to do wonders, can certainly be used in a way that enhances peace and resolves conflicts very effectively, but only when it is used as a tool, not when it is handed the purpose of resolving a conflict on its own. Human intervention has to be there. The policies and processes for resolving conflict that humans have developed, which have evolved through the ages, through insights from different conflict scenarios and different human efforts, are still to be used; AI is there to assist them and make them more efficient, not to wholly replace them. That, for me, is what AI is to peace. And to society in general I will say: be ready, be receptive to what is happening around us, because every human being has the same capacity for thinking and for articulating what is happening to them. No one is less or more for me; be it any race or any culture, everyone is human, and everyone has the ability to foresee the future.

I'm not saying everyone is clairvoyant, but if we are aware enough, we can envision how a technology is going to unfold down the lane of our journey through this life. This AI technology is going to be deeply integrated; it is going to play a huge role in our society. You know, when mobile phones were new, it was hard to get one, and we went to particular places to make a call to connect with someone. But once we got into the habit of picking up this device, it came to feel like a part of our body; now we even have wristwatches for it. I don't know what is coming next, but this much is common: if you leave your keys and your mobile phone at home while leaving for work, your first priority is always going to be your mobile phone, because it is that integrated into our lives. And that will be the case with AI. I have to mention to all our listeners that the role of AI might not be tangible; you may not see it, but it is still going to play a huge role, be it in uplifting society or in hindering our progress towards making this world sustainable and peaceful. The decision lies within each one of us, whatever role we play: the role of an educator, the role of a developer of a certain AI technology, or the role of a decision-maker choosing which technologies to allow and which not to allow. Each one of us has a role to play, so let's play that role keeping in mind that we are all human, that we are all in this world together, and let's hold on to the values that are valuable and loving to each of us and towards one another. Thank you.

 

Stella Micheong Cheong  34:31

Wow, Parishrut. Thank you so much for your time. Yes, I do agree with your opinion that technology should be developed as a tool for humanity. So, are there any points that we haven't covered that you would like to highlight? Or would you like to share your future research plans in the field of AI governance and peacebuilding? 

 

Parishrut Jassal  34:56

Yes, I can mention one thing. When I was addressing how AI governance is taking shape, I didn't mention the collaborative, regional efforts. One case is the Global Partnership on Artificial Intelligence, GPAI. This is where democracies are coming together and forming working groups on responsible AI, and on what kinds of sectors AI should address, going into the nitty-gritty of it. And India is the chair of GPAI in 2024, so we have a lot of expectations from India of playing an effective role in developing the frameworks and policies on which AI technologies will be built. Another must-mention is the African Union. The African Union is not drafting an AI act as such, but it is a regional collaboration exploring certain initiatives and developing certain frameworks for the development of AI in Africa. And that's a huge feat, because although the EU's AI Act, the UN, and everyone else say there should not be a digital divide between developing nations and developed nations, certain things, I think, exist only on paper and on screens. We all know the ground reality: you can't just expect a country to be in tune with the level of technology that a developed nation has, because it doesn't have the resources or the infrastructure. The semiconductors, for instance, which come into the picture when developing a certain AI technology, leave alone governing it, require a huge amount of resources. And I will mention that my PhD topic is about international cooperation: how we can govern AI internationally, as one entity. Although it is a bit of a distant dream of mine, we can try; it looks like we can develop a certain way of governing AI which overlaps across nations. 
And I'll also mention what will happen through this: everyone will consider the cultural differences of each society, and they will develop a way of governing the technology which is common to, let's take my politically correct example, country ABC and country XYZ. That's where we are bridging countries; we are bridging human beings. So I think that's the point I wanted to make about development in the so-called Global South and the Global North. Let's not get into these divisions of Global South and Global North; let's go globally. There should be global governance of AI. That's the act we should have. I know it's not practical as of now, but why not aim for it? If we even get anywhere near it, half of the work is done. 

 

Stella Micheong Cheong  38:16

Wow, Parishrut. I really appreciate this. This is absolutely fascinating. Your research on AI governance for peacebuilding is incredible. You know, it's clear that AI has huge potential for a more peaceful world, just as you said about SDG 16. The key, as you pointed out, is the Global Partnership on AI and international cooperation: everyone working together to make sure AI is used responsibly and ethically. Seriously, your work has given me so much hope for the future. I truly believe that we can harness the power of AI to create a more peaceful and sustainable world. By the way, Parishrut, best of luck finalising your PhD journey; I'm looking forward to reading it once you cross that finish line. Thanks again for joining us today. And thank you so much for having me. I'm closing the episode. I'm Stella. Thanks for listening to Conversations4Citizenship. We hope you enjoyed this episode. Be sure to subscribe to Conversations4Citizenship and look for us on Twitter @C4C_ed. A transcript of today's conversation with Parishrut Jassal can be found at www.conversations4citizenship.com. This episode of Conversations4Citizenship was produced by me, Stella Cheong, with Kamille Beye and Adam Lang. Thank you so much, and Dhanyawad!