Researchers from the University of Hong Kong and Kuaishou's Kling team have jointly proposed MemFlow, a novel technique designed to address the long-standing challenges of memory decay and narrative inconsistency in AI-generated long videos.
MemFlow introduces a dynamic, adaptive streaming long-term memory mechanism that significantly improves narrative coherence and visual consistency across extended video sequences. Conventional methods often rely on rigid memory strategies, which leads to identity drift or character confusion over time.
The approach features two core components: Narrative-Adaptive Memory (NAM), which retrieves the most relevant historical visual context based on the current prompt, and Sparse Memory Activation (SMA), which selectively activates key information to preserve computational efficiency.
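To make the two components concrete, here is a minimal NumPy sketch of the general idea: prompt-conditioned retrieval over a bank of past frame embeddings (the NAM role), followed by thresholded sparsification of the retrieved entries (the SMA role). All names (`retrieve_memory`, `sparse_activate`) and the cosine-similarity scoring are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def retrieve_memory(prompt_emb, memory_bank, top_k=4):
    """Narrative-adaptive retrieval (illustrative): score each stored
    frame embedding against the current prompt embedding by cosine
    similarity and keep the top_k most relevant entries."""
    mem = memory_bank / np.linalg.norm(memory_bank, axis=1, keepdims=True)
    q = prompt_emb / np.linalg.norm(prompt_emb)
    scores = mem @ q
    idx = np.argsort(scores)[::-1][:top_k]      # highest similarity first
    return memory_bank[idx], scores[idx]

def sparse_activate(retrieved, scores, threshold=0.0):
    """Sparse activation (illustrative): drop retrieved entries whose
    relevance falls below a threshold, so downstream computation only
    touches a small active subset of memory."""
    mask = scores >= threshold
    return retrieved[mask]

# Hypothetical usage: 100 past-frame embeddings of dimension 64.
rng = np.random.default_rng(0)
bank = rng.normal(size=(100, 64))
prompt = rng.normal(size=64)
active, s = retrieve_memory(prompt, bank, top_k=4)
kept = sparse_activate(active, s, threshold=s[-1])
```

In a real streaming generator, the memory bank would grow as frames are produced and the retrieval would feed the active subset into the model's attention, but the select-then-sparsify pattern above is the core intuition.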
In benchmark tests, MemFlow achieved a VBench-Long total quality score of 85.02 and an aesthetic score of 61.07 while maintaining stable long-range semantic consistency. Subject consistency reached 96.60, and real-time inference ran at 18.7 FPS on a single NVIDIA H100 GPU, demonstrating gains in both quality and efficiency.
Source: liangziwei

