Navigating the ethical minefield in the AI landscape – Intel’s Santhosh Viswanathan on what India should do


Tremendous advances in artificial intelligence (AI) have opened up unprecedented possibilities, impacting virtually every aspect of our lives. What was once the preserve of specialized experts has now become accessible to individuals around the world, who are harnessing the capabilities of AI at scale. This accessibility is revolutionizing the way we work, learn and play.

While the democratization of AI heralds limitless potential for innovation, it also poses significant risks. Growing concerns about misuse, safety, bias and misinformation highlight the importance of exercising responsible artificial intelligence practices now more than ever.

An ethical conundrum

Derived from the Greek word “ethos”, which can mean custom, habit, character or disposition, ethics is a system of moral principles. The ethics of AI refers both to the behavior of the humans who build and use AI systems and to the behavior of the systems themselves.

There have long been conversations – academic, business and regulatory – about the need for responsible AI practices to create ethical and fair AI. All stakeholders – from chipmakers to device manufacturers to software developers – should work together to design AI capabilities that minimize risk and curb the harmful use of AI.

Even Sam Altman, CEO of OpenAI, has commented that although AI will be “the greatest technology humanity has ever developed,” he is “a little scared” about its potential.

Addressing these challenges

Responsible development must form the foundation for innovation throughout the AI lifecycle to ensure AI is built, deployed and used safely, sustainably and ethically. A few years ago, the European Commission announced its ethical guidelines for trustworthy AI, setting out the essential requirements for ethical and trustworthy AI development. According to the guidelines, trustworthy AI should be lawful, ethical and robust.

While promoting transparency and accountability is one of the cornerstones of AI ethics principles, data integrity is also paramount, because data is the foundation of all machine learning algorithms and large language models (LLMs). In addition to protecting data privacy, there is a need to obtain explicit consent for the use of data and to source and process that data responsibly. Moreover, because our inherent biases and prejudices are embodied in our data, AI models trained on these data sets have the potential to amplify and scale these human biases. We must therefore proactively minimize bias in data while ensuring diversity and inclusivity in the development of AI systems.

Then there is the concern surrounding digitally manipulated synthetic media known as deepfakes. At the recent Munich Security Conference, some of the world’s largest technology companies came together to pledge to fight deceptive AI-generated content. The agreement comes amid growing concerns about the impact of misinformation caused by fake images, videos and audio on this year’s high-profile elections in the US, UK and India.

More such efforts can be promoted by social media platforms and organizations to prevent the amplification of harmful deepfake videos. For example, Intel has introduced a real-time deepfake detection platform, FakeCatcher, which can detect fake videos with a 96% accuracy rate and return results in milliseconds.

Finally, while science fiction fans indulge in conversations around the technological singularity, it is imperative to identify risks and put control measures in place to address the lack of human agency, and the consequent lack of clear accountability, so as to avoid any unforeseen consequences of AI going rogue.

Shaping ethical AI guidelines

Leading technology companies are increasingly defining AI ethics in an effort to create principles of trust and transparency while achieving their desired business goals. This proactive approach is mirrored by governments around the world. Last year, US President Joe Biden signed an executive order on AI, outlining “the most far-reaching actions ever taken to protect Americans from the potential risks of AI”. And now, the European Union has approved the AI Act, the world’s first regulatory framework focused on AI governance. The rules will ban certain AI technologies based on their potential risk and impact, introduce new transparency rules and require risk assessments for high-risk AI systems.

Like its global counterparts, the Indian government acknowledges the profound societal impact of AI, recognizing both its potential benefits and the risks of bias and privacy violations. In recent years, India has introduced initiatives and guidelines to ensure the responsible development and deployment of AI. In March, MeitY revised its earlier advisory to large social media companies, changing a clause to make it mandatory for intermediaries and platforms to seek government approval before rolling out AI models and tools that are “untested” or “untrusted” in the country.

The new advisory maintains MeitY’s emphasis on ensuring that all fake and misleading information can be easily identified, and advises intermediaries to label such content or embed it with “metadata or unique identifiers”.

In short, in a landscape where innovation is outpacing regulations, the importance of upholding responsible AI principles is undeniable. The potential for harm to society will be great when AI development is divorced from an ethical framework. We must therefore ensure that innovation comes with responsibility, protecting against the pitfalls of abuse, bias and misinformation. Only through collective vigilance and unwavering dedication to ethical practice can we harness the true potential of AI for the betterment of humanity.

– Written by Santhosh Viswanathan, VP and MD-India, Intel.
