Ausbiz
Details
- Date Published
- 30 Apr 2026
- Priority Score
- 1
- Australian
- Yes
- Created
- 1 May 2026, 08:00 pm
Authors (1)
Description
<p>Key Points:</p><ul><li><p>O’Hanlon sees rendered video as an outdated, “dumb” format in an AI‑driven media world</p></li><li><p>ION’s technology virtualises video into metadata‑rich blocks for instant AI‑driven assembly</p></li><li><p>Large AI and video platforms are, in his view, exploring ION’s IP to cut costs and personalise content at scale</p></li><li><p>O’Hanlon reports solid funding into 2027 and positions ION as an IP licensor rather than a platform builder</p></li></ul><p>Finbar O’Hanlon of ION Video sets out a radical shift in how video is created, stored and personalised, arguing that traditional rendered files are “dumb”, fixed bricks in a world demanding highly tailored content experiences. He maintains that billions of viewers now want one‑to‑one relationships with video yet spend up to 30% of their time simply searching, as platforms struggle under an explosion of content, with hundreds of thousands of hours added to YouTube daily.</p><p>O’Hanlon states that ION’s granted patents, first pursued in 2007, virtualise video into Lego‑like building blocks enriched with metadata, enabling instant dynamic assembly without re‑rendering. In his view, this creates a new medium that connects large AI foundation models – such as those from Meta, Alphabet and others – with trillions of hours of human‑rendered content, making “video superintelligence” possible. He contends that this would allow AI platforms to build personalised education, lifestyle and travel shows on demand, while collapsing storage and processing costs for major video platforms.</p><p>For O’Hanlon, the commercial opportunity lies in licensing ION’s intellectual property rather than building a consumer platform, with use cases spanning entertainment, CCTV, in‑home devices and new monetisation models for content owners. He reports a strong cash position and a funding runway into 2027 for ION (ASX:IOV).</p>
Summary
This interview highlights an Australian technology company's attempt to virtualize video into metadata-rich 'building blocks' to enable deep integration with AI foundation models. Although the term 'video superintelligence' is used, the discussion focuses primarily on commercial advances in personalization, content assembly, and operational efficiency for platforms such as YouTube and Meta. The ability to dynamically assemble video assets via AI metadata has implications for information integrity and the large-scale production of synthetic or hybrid media. However, the contribution to catastrophic risk discourse is minimal, as the focus remains on intellectual property licensing and commercial data processing.