Apple's recently unveiled plan to roll out a new automated system to fight child sexual abuse has drawn widespread criticism. Some groups claim the system is prone to abuse and could give Apple and other entities backdoor access to private user data.

Last week, Apple announced plans to embed its new child sexual abuse material (CSAM) detection system in upcoming updates to iOS, iPadOS, macOS, and watchOS. The system is meant to automatically detect images that match known child sexual abuse material.

Digital rights group the Electronic Frontier Foundation (EFF) said Apple's new system amounts to a backdoor into the company's data storage and messaging systems. The organization said that even if Apple promises to preserve privacy and security, the system is ultimately still a backdoor.

The EFF said Apple could change the system's parameters to flag other kinds of content for other purposes, and that the system itself could easily be abused by outside forces. Foreign governments, for example, could press Apple to use it to detect anti-government or LGBTQ content.

"This is an Apple-built and operated surveillance system that could very easily be used to scan private content for anything they or a government decides it wants to control. Countries, where iPhones are sold, will have different definitions of what is acceptable," Facebook's VP of product management, Will Cathcart, said.

The company's CSAM system relies on image hashes provided by child safety organizations such as the U.S. National Center for Missing & Exploited Children. Apple said the "on-device matching process" is powered by new cryptographic technology that detects matching images without the need for human intervention.
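Conceptually, hash matching works like the minimal Swift sketch below. It is a simplification with assumed names: it uses a plain SHA-256 set lookup in place of Apple's perceptual "NeuralHash" and its cryptographic matching protocol, so it only matches byte-identical copies of an image.

```swift
import Foundation
import CryptoKit

// Minimal sketch of hash-based image matching. The hash database and the
// hashing function are hypothetical stand-ins, not Apple's implementation.
struct CSAMHashMatcher {
    /// Hash values supplied by child-safety organizations (illustrative only).
    let knownHashes: Set<Data>

    /// Stand-in for a perceptual hash; SHA-256 only matches exact copies,
    /// whereas a real perceptual hash also matches resized or re-encoded images.
    func imageHash(of imageData: Data) -> Data {
        Data(SHA256.hash(data: imageData))
    }

    /// Returns true if the image's hash appears in the known-hash database.
    func matches(_ imageData: Data) -> Bool {
        knownHashes.contains(imageHash(of: imageData))
    }
}
```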

Once an image being uploaded to iCloud is flagged as CSAM, the company may disable the account and report it to the relevant authorities. Apple said users can file an appeal to have their accounts re-enabled, and claims the system has "less than a one in one trillion chance per year of incorrectly flagging a given account."
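The decision flow described above can be pictured as a simple policy check. The sketch below is purely illustrative and assumes a match threshold and a manual review step before any account action; the threshold value, type names, and review step are assumptions, not details confirmed by Apple.

```swift
import Foundation

/// Possible outcomes of reviewing an account (names are hypothetical).
enum AccountAction {
    case noAction
    case disableAndReport   // disable the account and notify the relevant authority
}

/// Illustrative review policy, assuming matches accumulate per account.
struct UploadReviewPolicy {
    /// Number of matched uploads required before escalation (assumed value).
    let matchThreshold: Int = 30

    func evaluate(matchedUploadCount: Int, confirmedByHumanReview: Bool) -> AccountAction {
        guard matchedUploadCount >= matchThreshold, confirmedByHumanReview else {
            return .noAction
        }
        return .disableAndReport
    }
}
```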

Apart from flagging images uploaded to iCloud, Apple announced a separate feature meant to keep explicit content from reaching children. The company said on-device machine learning in Messages will automatically warn children before they view or send sexually explicit images.
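A rough sketch of such a warning check is shown below. The classifier and threshold are entirely hypothetical; Apple has said only that on-device machine learning drives the warning, without publishing its model or scoring details.

```swift
import Foundation

// Illustrative Messages safety check built around an assumed on-device classifier.
struct MessageImageSafetyCheck {
    /// Hypothetical classifier returning a score in [0, 1] for explicit content.
    let classify: (Data) -> Double
    /// Assumed score above which a warning is shown before the image is displayed.
    let warnThreshold: Double = 0.9

    /// Returns true if a warning should be shown before the image is displayed.
    func shouldWarnBeforeShowing(_ imageData: Data) -> Bool {
        classify(imageData) >= warnThreshold
    }
}
```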