OpenAI is turning to advanced artificial intelligence (AI) tools to assist with content moderation. Observers note that the move could boost efficiency for businesses and add a valuable feature to AI tools that have yet to generate significant revenue for many enterprises.

OpenAI has been developing its content moderation system based on its latest AI model, GPT-4. The model can assist businesses with content review, helping them establish policies about what content is suitable and then enforce those policies by tagging or evaluating posts.
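The workflow described here, writing a moderation policy and then asking the model to tag posts against it, can be sketched roughly as follows. The policy text, label set, and the `llm` callable are illustrative assumptions for this sketch, not OpenAI's actual interface:

```python
# Sketch of policy-based content tagging with a language model.
# The policy wording, labels, and `llm` callable below are
# illustrative assumptions; OpenAI's production system may differ.

EXAMPLE_POLICY = (
    "Label the post with exactly one of: ALLOW, FLAG, REMOVE.\n"
    "REMOVE: direct threats or harassment.\n"
    "FLAG: borderline content that needs human review.\n"
    "ALLOW: everything else."
)

VALID_LABELS = {"ALLOW", "FLAG", "REMOVE"}

def build_prompt(policy: str, post: str) -> str:
    """Combine the moderation policy and the post into one prompt."""
    return f"{policy}\n\nPost:\n{post}\n\nLabel:"

def tag_post(llm, policy: str, post: str) -> str:
    """Ask the model for a label; fall back to FLAG on unexpected output."""
    raw = llm(build_prompt(policy, post)).strip().upper()
    return raw if raw in VALID_LABELS else "FLAG"
```

With a stub model that always answers "allow", `tag_post(lambda p: "allow", EXAMPLE_POLICY, "hello")` returns `"ALLOW"`; any answer outside the label set is conservatively routed to `FLAG`.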

Having previously tested this technology, OpenAI invited select clients to pilot the tool. The company found that its content moderation system performs better than moderately trained human moderators, though it still falls short of the most experienced human reviewers.

The company suggests its tool could help businesses complete tasks in a day that would normally take up to six months.

Lilian Weng, head of safety systems at OpenAI, believes there's no need to employ tens of thousands of moderators. Instead, humans can act as consultants, ensuring the AI system functions correctly and making decisions on cases that aren't straightforward.

Andrea Vallone, OpenAI's product policy manager, pointed out that GPT-4 can handle content review efficiently. Drafting content moderation policies and tagging posts usually takes a long time, and OpenAI's solution aims to shorten the cycle between writing a policy and enforcing it.

Major companies like Meta are already utilizing AI to aid their employees in moderation tasks. However, OpenAI emphasizes that the moderation process shouldn't be fully automated.

Ideally, Vallone says, AI will free employees to focus on evaluating potential extreme cases of content violations and on refining content policies. OpenAI will maintain human reviews to validate some of the AI model's decisions, and Vallone stressed the importance of keeping human involvement central to the process.
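The division of labor described above, where the model auto-handles clear-cut cases and humans validate the uncertain or extreme ones, can be sketched as a simple routing queue. The confidence threshold and the (label, confidence) output shape are assumptions made for illustration:

```python
# Sketch of a human-in-the-loop moderation split: confident model
# decisions are applied automatically, while flagged or low-confidence
# cases are queued for human reviewers. The threshold value and the
# (label, confidence) decision format are illustrative assumptions.

from dataclasses import dataclass, field

HUMAN_REVIEW_THRESHOLD = 0.9  # hypothetical confidence cutoff

@dataclass
class ModerationQueue:
    auto_decided: list = field(default_factory=list)
    needs_human: list = field(default_factory=list)

    def route(self, post: str, label: str, confidence: float) -> None:
        """Auto-apply confident, non-flagged decisions; escalate the rest."""
        if confidence >= HUMAN_REVIEW_THRESHOLD and label != "FLAG":
            self.auto_decided.append((post, label))
        else:
            self.needs_human.append((post, label, confidence))
```

For example, a post labeled `ALLOW` at confidence 0.99 is applied automatically, while a `FLAG` label or a low-confidence `REMOVE` lands in the human queue, matching the article's point that humans stay focused on the hard cases.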

Some observers highlight that content moderation was a significant challenge even before generative AI emerged. The advent of this new technology has increased the threat of misinformation, amplifying the challenges moderators face. Still, given AI's scalability, some tech experts believe AI might be the only feasible solution as the spread of misinformation intensifies.