AI and Viztech: A New Era of Safer Online Content
Online media companies struggle to police extremist material before it appears online. David Fulton, CEO of WeSee, argues that AI‑driven technology can spot dangerous content before it ever goes live.
Government officials and academics alike have been debating how best to curb the surge of online terrorist content. While platforms such as Facebook, Twitter and YouTube face mounting regulatory pressure—German lawmakers recently enacted a bill that requires major internet firms to remove “evidently illegal” material within 24 hours or risk fines of up to $57 million—confidence in their current moderation capabilities remains low.
In June, Harvard University hosted the conference Harmful Speech Online: At the Intersection of Algorithms and Human Behaviour, co‑organized by the Berkman Klein Center, the Shorenstein Center and the London‑based Institute for Strategic Dialogue. The event highlighted the critical gap between the growing problem of harmful speech and the tools available to tackle it.
The opening address underscored how extremist content shapes public opinion and political discourse, noting the severe resource and research deficits that hamper effective responses.
Automated Detection: The Urgent Call
In September, leaders from the UK, France and Italy met with major internet companies at the UN General Assembly in New York. UK Prime Minister Theresa May warned that failure to detect and remove terrorist content within two hours could trigger steep penalties. This deadline is significant—research shows that two‑thirds of propaganda is shared within that window, raising questions about whether the time limit is realistic.
Google and YouTube announced plans to scale up their AI‑powered content‑identification systems. Yet the problem persists. A recent Telegraph article reported that 54,000 sites offering bomb‑making instructions and other extremist propaganda were active between August last year and May this year, all linked to supporters of the Islamic State.
Cisco’s projections indicate that by 2020 the web will host 65 trillion images and 6 trillion videos, with image and video traffic expected to comprise over 80% of total internet traffic in less than three years. Monitoring such volumes for extremist content is a daunting task—one that cutting‑edge artificial intelligence may help solve.
Viztech: A Predictive Approach
Viztech—WeSee’s pioneering AI‑based visual content analysis technology—provides a sophisticated filter capable of detecting adult and violent material, alongside terrorist imagery, before it is published. By recognizing symbols such as the ISIS flag or the faces of known extremist figures, the system can flag content in real time, well ahead of any user interaction.
Built on deep‑learning neural networks, the technology operates at speeds up to 1,000 times faster than human cognition, offering both predictive filtering and precise categorization of video and still images. In essence, Viztech moves beyond reactive moderation to proactive prevention, providing governments, researchers and platforms with a powerful tool to safeguard the online ecosystem.
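The predictive workflow described above—classify each image or video frame, then gate publication on per‑category confidence—can be sketched as follows. The category labels, thresholds, and `moderate` function here are illustrative assumptions, not WeSee's actual API; the scores stand in for the output of a deep‑learning image classifier.

```python
# Illustrative sketch only: detection scores are hypothetical stand-ins for the
# output of a deep-learning image model like the one described above.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # e.g. "terrorist_symbol", "extremist_face"
    score: float  # classifier confidence in [0, 1]

# Hypothetical per-category thresholds: higher-risk categories are gated harder.
BLOCK_THRESHOLDS = {
    "terrorist_symbol": 0.80,
    "extremist_face": 0.85,
    "graphic_violence": 0.90,
    "adult": 0.95,
}

def moderate(detections):
    """Return 'block' if any detection crosses its category threshold,
    'review' if it comes close, else 'publish'."""
    decision = "publish"
    for d in detections:
        threshold = BLOCK_THRESHOLDS.get(d.label)
        if threshold is None:
            continue  # unknown category: no gating rule applies
        if d.score >= threshold:
            return "block"       # flagged before the content ever goes live
        if d.score >= threshold - 0.2:
            decision = "review"  # borderline: queue for a human moderator
    return decision

print(moderate([Detection("terrorist_symbol", 0.92)]))  # block
print(moderate([Detection("adult", 0.78)]))             # review
print(moderate([Detection("adult", 0.40)]))             # publish
```

The key design point is that the decision happens at upload time, before publication—the proactive step—rather than after users report the content.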
David Fulton, CEO of WeSee