The Online Safety Act’s framework centres on platform governance and operational practices, rather than on imposing criminal liability on individual users for lawful expression. – AI-generated image
PETALING JAYA: The Online Safety Act 2025 (ONSA), which took effect on Jan 1, adds a new layer to the country’s regulatory framework for digital platforms.
Passed by Parliament in December 2024 and gazetted in May 2025, the law operates alongside existing legislation such as the Communications and Multimedia Act 1998.
The Act introduces specific obligations for platforms that host or distribute user-generated content, with child safety as a core pillar of the new framework.
It is also intended to address inconsistencies in platform action, where decisions on harmful content have so far been guided by each platform’s internal policies.
When flagged material is not removed promptly, a relatively small number of unresolved cases can still expose large numbers of users to online harm.
Rise in online harm
ONSA’s introduction follows sustained growth in reported online harm in Malaysia. Police recorded RM2.7 billion in reported losses from online scams between January and November 2025, Bernama reported.
On Oct 24, police announced that they had crippled a criminal network linked to child sexual abuse material (CSAM), with the arrest of 31 people and the seizure of more than 880,000 digital files.
Since 2022, 38,470 items of cyberbullying and online harassment content have been taken down following regulatory or enforcement action.
Despite ongoing moderation by platforms, harmful material continues to surface and, in some cases, remains accessible for extended periods.
Figures from the Malaysian Communications and Multimedia Commission show that major platforms removed about 92 per cent of the 697,061 posts flagged as harmful between January 2024 and November 2025, but 58,104 posts remained accessible online.
Framed another way, a shortfall of just one per cent would still translate into several thousand harmful posts remaining online.
The increasing use of automated tools and AI-generated content, including deepfakes, has added complexity to detection and enforcement efforts.
ONSA’s scope
ONSA establishes a framework for application service providers and content application service providers, including both local and foreign platforms that operate in or target the Malaysian market.
The Act covers a defined range of harmful content categories, including CSAM, online scams and financial fraud, obscene or pornographic material, harassment and abusive communications, content linked to violence or terrorism, material that encourages self-harm among children, content that promotes hostility or disrupts public order, and content associated with dangerous drugs.
Rather than addressing posts on a case-by-case basis, the law focuses on risk management systems, including content distribution and recommendation mechanisms. ONSA’s framework centres on platform governance and operational practices, rather than on imposing criminal liability on individual users for lawful expression.
Under ONSA, platforms are required to take steps to identify and manage risks arising from harmful content on their services, including reducing users’ exposure to “priority harms”, the most severe forms of online harm; ensuring certain categories of content are made inaccessible; and providing reporting and user support mechanisms.
Platforms must also prepare and submit an online safety plan outlining how they address these risks. Enforcement measures are available where platforms fail to meet their obligations, though the Act sets out procedural requirements governing how directions are issued and reviewed.
The Act also requires platforms to adopt measures specifically intended to limit children’s exposure to harmful content and interactions, including age-related safeguards and restrictions on access to certain material.
Overall, these provisions will affect how platforms design default settings, content discovery tools and interaction features for younger users.
Limits and safeguards
While ONSA introduces new obligations for platforms in managing online risks, it does not extend to private one-to-one messaging or authorise the general monitoring of users. The Act also does not create new offences relating to lawful speech or political expression.
Safeguards within the framework include notice requirements before enforcement action, opportunities for representations, public records of regulatory directions and access to appeal mechanisms and judicial review.
ONSA introduces a formal regulatory structure for online safety, but does not replace existing enforcement, education or prevention efforts. Issues such as digital literacy, parental involvement and online behaviour norms remain outside the scope of the legislation.
With the Act now in force, its practical impact will depend on how platforms adapt their systems and how regulatory oversight is applied over time.
*This article was first published by freemalaysiatoday.com
