Luma’s Amit Jain says fragmented AI tools are hampering creative workflows, launches Unified Intelligence

Luma AI, a Palo Alto-based startup that turns prompts into videos, says it’s time to fix the fragmented nature of today’s AI tools, which forget details and hinder creative work.

Luma says its Unified Intelligence architecture, launched on Thursday, fixes this with one multimodal AI brain that blends reasoning with content generation and can handle text, images, video, and more in a single system, without losing track between steps. Its first model, Uni-1, interleaves words and visuals so the AI can think through ideas and create them at the same time.

The startup launched Ray3 in 2025 as what it billed as the world’s first video reasoning model, followed by Ray3.14, which creates videos, animations, and visuals.

Last November, Luma raised $900 million in a round led by Humain, the Saudi AI company backed by the kingdom’s Public Investment Fund (PIF), with participation from existing investors Andreessen Horowitz, Amplify Partners, and Matrix Partners.

Most AI models can process only a fixed context window of prior conversation or data at a time, causing them to “forget” earlier details in long projects like coding marathons or storyboarding and forcing constant re-prompting. Big tech companies have repeatedly flagged AI’s “memory problem,” which leads to incoherent output, higher costs, and failed projects.
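To illustrate the mechanism described above, here is a minimal, hypothetical sketch in Python (not Luma’s code, and not any real model’s API) of how a fixed context window silently drops early details as a project grows; the names MAX_CONTEXT_TOKENS, count_tokens, and build_prompt are purely illustrative.

```python
# Hypothetical sketch: a chat assistant that can only keep the most recent
# tokens of history, so early details fall out of scope on long projects.

MAX_CONTEXT_TOKENS = 30  # illustrative limit; real models allow far more

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace-separated word.
    return len(text.split())

def build_prompt(history: list[str], new_message: str) -> list[str]:
    """Keep only as many recent messages as fit in the context window."""
    window, used = [], count_tokens(new_message)
    for message in reversed(history):      # walk from newest to oldest
        used += count_tokens(message)
        if used > MAX_CONTEXT_TOKENS:
            break                          # older messages are silently dropped
        window.append(message)
    return list(reversed(window)) + [new_message]

history = [
    "Scene 1: the hero wears a red scarf at dawn.",
    "Scene 2: she crosses the bridge in heavy rain.",
    "Scene 3: close-up on the scarf, now torn.",
    "Scene 4: flashback to the market where she bought it.",
]
prompt = build_prompt(history, "Render scene 5, keeping the scarf consistent.")
print(prompt)  # Scenes 1 and 2 are gone, so the scarf's color has been "forgotten".
```

In this toy example the model never sees that the scarf was red, which is the kind of detail loss that forces creators to re-prompt on long storyboarding or coding sessions.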

Luma also launched Luma Agents, a new class of AI tools built on the Unified Intelligence architecture, aimed at creative work for agencies, marketing teams, studios, and enterprises.