
Singapore issues guidelines on securing AI systems and bans deepfakes in elections


Privacy shield surrounded by device icons

Alexsl/Getty Images

Singapore made a slew of cybersecurity announcements this week, including guidelines on securing artificial intelligence (AI), safety labels for medical devices, and new legislation banning deepfakes in election advertising content.

Its new guidelines and companion guide on securing AI systems aim to drive a secure-by-design approach, so organizations can mitigate potential risks in the development and deployment of AI systems.

Also: Can AI and automation properly manage the growing threats to the cybersecurity landscape?

“AI systems may be vulnerable to adversarial attacks, where malicious actors intentionally manipulate or deceive the AI system,” said the Cyber Security Agency of Singapore (CSA). “The adoption of AI can also exacerbate existing cybersecurity risks to enterprise systems, [which] can lead to risks such as data breaches or result in harmful, or otherwise undesired, model outcomes.”

“As such, AI must be secure by design and secure by default, as with all software systems,” the government agency said.

Also: 90% of consumers and businesses are worried about AI – see what worries them most

The guidelines identify potential threats, such as supply chain attacks, and risks such as adversarial machine learning, CSA noted. Developed with reference to established international standards, they include principles to help practitioners implement security controls and best practices to protect AI systems.

The guidelines cover five stages of the AI lifecycle, including development, operations and maintenance, and end of life, the last of which highlights how data and AI model artifacts should be disposed of.

Also: Cybersecurity experts are turning to AI as they increasingly lose control of detection tools

To develop the companion guide, CSA said it worked with AI and cybersecurity practitioners to provide a “community-driven resource” offering “practical” measures and controls. The guide will also be updated to keep pace with developments in the AI security market.

It includes case studies, such as patch attacks on image recognition surveillance systems.

However, because the controls primarily address cybersecurity risks to AI systems, the guide does not cover AI safety or other related components, such as transparency and fairness, though some of the recommended measures may overlap, CSA said. It added that the guide does not address the misuse of AI in cyberattacks, such as AI-powered malware or phishing scams that use deepfakes.

Also: Cybersecurity teams need new skills even as they struggle to manage legacy systems

Separately, Singapore has passed a new law prohibiting the use of deepfakes and other digitally generated or manipulated content in online election advertising.

Such content depicts candidates saying or doing something they did not say or do, but is “realistic enough” that members of the public would “reasonably believe” the manipulated content to be real.

Deepfakes banned during election campaigns

The Elections (Integrity of Online Advertising) (Amendment) Bill was passed following a second reading in parliament and covers AI-generated content, including generative AI (Gen AI), as well as non-AI techniques, such as splicing, said Minister for Digital Development and Information Josephine Teo.

“The bill aims to tackle the most harmful types of content in the election context, which is content that misleads or deceives the public about a candidate, through a misrepresentation of his or her speech or actions, that is realistic enough to be reasonably believed by some members of the public,” Teo said. “The condition of realism will be assessed objectively. There is no one-size-fits-all set of criteria, but some general points can be made.”

Also: Gartner says a third of generative AI projects will be abandoned

These include content that “closely match[es]” the known features, expressions, and mannerisms of the candidate. Such content may also use actual persons, events, and places, so it appears more believable, she explained.

She noted that most members of the public would likely find content of the Prime Minister dishing out investment advice on social media implausible, but some could still fall victim to such AI-assisted scams. “In this regard, the law will apply so long as some members of the public would have reasonable grounds to believe the candidate said or did what was depicted,” she said.

Also: All eyes are on cyber defense as elections enter the era of generative AI

Four components must be met for content to be prohibited under the new law: it is online election advertising; it is digitally generated or manipulated; it depicts candidates saying or doing something they did not say or do; and it is realistic enough to be reasonably believed by some members of the public.

The bill does not prohibit the “reasonable” use of AI or other technology in election campaigns, such as memes, animated or AI-generated characters, and cartoons, Teo said. It also won’t apply to “benign cosmetic changes” including the use of beautification filters and adjusting lighting in videos.

Also: Think AI can solve all your business problems? New research from Apple shows the opposite

The Minister also noted that the Bill would not cover private or domestic communications or content shared between individuals or in closed group chats.

“That said, we know misleading content can spread quickly over open WhatsApp or Telegram channels,” she said. “If there is information that prohibited content is being communicated in large group chats involving many users who are strangers to one another, and which are freely accessible to the public, such communication will be dealt with under the Bill, and we will assess whether to take action.”

Also: Google announced a $3 billion investment to tap AI demand in Malaysia and Thailand

The law also does not apply to news published by authorized news agencies, or to individuals who “carelessly” reshare messages and links without realizing the content has been manipulated, she added.

The Singapore government plans to use a variety of detection tools to assess whether content has been digitally generated or manipulated, Teo said. These include commercial tools, in-house tools, and tools developed with researchers, such as those at the Centre for Advanced Technologies in Online Safety.

Also: OpenAI sees a new office in Singapore supporting its rapid growth in the region

In Singapore, remedial instructions will be issued to relevant persons, including social media services, to remove or disable access to prohibited online election advertising content.

A fine of up to S$1 million may be imposed on a social media service provider that fails to comply with the remedial direction. All other parties, including individuals, who fail to comply with the remedial instructions may be fined up to S$1,000 or imprisoned up to one year, or both.

Also: Sony Research’s AI division helps develop large language models with AI Singapore

“There has been a notable increase in deepfake incidents in countries where elections have taken place or are scheduled,” Teo said, citing research from Sumsub that estimated the number of deepfakes grew three-fold in India and more than 16-fold in South Korea compared to a year ago.

“AI-generated disinformation could seriously threaten the foundations of our democracy and requires an equally serious response,” she said. She added that the new Bill will ensure “the integrity of candidate representation” and that the integrity of Singapore’s elections can be maintained.

Is this medical device adequately secured?

Singapore is also looking to help buyers of medical devices determine whether the products are adequately secured. On Wednesday, CSA launched a cybersecurity labeling scheme for medical devices, extending a program that currently covers consumer Internet of Things (IoT) products.

This new initiative is jointly developed by the Ministry of Health, the Health Sciences Authority and the national medical technology agency Synapxe.

Also: Singapore seeks ‘real-life’ medical breakthroughs with new AI research centre

CSA said the label is designed to indicate the level of security in medical devices and to enable healthcare users to make informed purchasing decisions. The scheme applies to devices that handle personally identifiable information and clinical data, with the ability to collect, store, process, and transmit such data. It also applies to medical devices that connect to other systems and services, and that can communicate via wired or wireless protocols.

Products will be assessed across four rating levels: Level 1 medical devices must meet baseline cybersecurity requirements, while Level 4 devices must meet enhanced cybersecurity requirements and also pass independent third-party software binary analysis and security assessment.

Also: These medical IoT devices present the greatest security risks

The launch follows a nine-month sandbox phase that ended in July 2024, during which 47 applications from 19 participating medical device manufacturers put their products, including in vitro diagnostic analyzers, through various tests. These included software binary analysis, penetration testing, and security assessments.

Feedback gathered from the sandbox period was used to refine the program’s requirements and operating procedures, including providing greater clarity on the application process and assessment methodology.

Also: Ask a medical question through MyChart? Your doctor may let the AI respond

The labeling scheme is voluntary, but CSA urged that “proactive measures” be taken to safeguard against growing cyber risks, particularly as medical devices increasingly connect to hospital and home networks.

Medical devices in Singapore currently must be registered with the HSA and are subject to regulatory requirements, including for cybersecurity, before they can be imported and supplied in the country.

Also: AI is helping therapists reduce burnout. Here’s how it changes mental health

In a separate announcement, CSA said its cybersecurity labeling scheme for consumer devices is now recognized in South Korea.

The bilateral agreements were signed with the Korea Internet and Security Agency (KISA) and Germany's Federal Office for Information Security (BSI) on the sidelines of this week's Singapore International Cyber Week 2024 conference.

Scheduled to take effect from January 1 next year, the Korean agreement will see KISA's IoT cybersecurity certification and Singapore's cybersecurity label mutually recognized in both countries. It marks the first time an Asia-Pacific market has been part of such an agreement, which Singapore has also inked with Finland and Germany.

Also: Connecting generative AI to medical data has improved its usefulness for doctors

Korea's certification scheme encompasses three levels — Lite, Basic, and Standard — all of which involve third-party lab testing. Devices awarded the Basic level will be deemed to have fulfilled Level 3 requirements of Singapore's labeling scheme, which comprises four rating levels. KISA will likewise recognize Singapore's Level 3 products as having met its Basic certification.

The label will apply to consumer smart devices, including home automation, alarm systems and IoT gateways.
