# Pollo AI API

## Docs

- [Check Your Credits](https://docs.pollo.ai/credits/balance.md): Use the endpoint below to view your current credit balance in the Pollo AI API. Need more credits? Head over to your API billing page at https://pollo.ai/api-platform/billing to add them easily.
- [Pollo AI API Introduction](https://docs.pollo.ai/index.md): Discover all you can do with our API! Pollo AI's API empowers developers to integrate advanced AI capabilities into their applications, enabling seamless video and image creation, and more.
- [Veo 2 API Documentation](https://docs.pollo.ai/m/google/veo2.md): Veo 2 is Google DeepMind’s advanced AI video generation model that creates high-quality, realistic video clips. It produces videos with smooth, natural motion and can simulate real-world physics. Learn how to integrate it into your applications below.
- [Veo 3 API Documentation](https://docs.pollo.ai/m/google/veo3.md): Google Veo 3 offers improved quality over Veo 2. With it, you can now add dialogue between characters, sound effects, and background noise for a richer viewing experience. Integrate the Veo 3 API into your projects seamlessly.
- [Veo 3.1 API Documentation](https://docs.pollo.ai/m/google/veo3-1.md): Google Veo 3.1 brings improved audio-visual synchronization, advanced frame-to-frame control, and the ability to guide video generation with up to three reference images. Integrate the Veo 3.1 API into your projects now.
- [Veo 3.1 Fast API Documentation](https://docs.pollo.ai/m/google/veo3-1-fast.md): Google Veo 3.1 Fast delivers rapid video generation with exceptional visual consistency. Integrate the Veo 3.1 Fast API for swift, reliable video creation at scale.
- [Veo 3 Fast API Documentation](https://docs.pollo.ai/m/google/veo3-fast.md): Veo 3 Fast is the accelerated mode of Google Veo 3. It can produce high-quality videos with synchronized audio in under 1 minute, about 30% faster than the standard Veo 3 model. Integrate it now.
- [Hailuo 02 Documentation](https://docs.pollo.ai/m/hailuo/hailuo-02.md): Minimax’s Hailuo 02 offers precise prompt execution, advanced physics handling, and crystal-clear 1080p resolution. Learn how to integrate and set up here.
- [Hailuo 2.3 API Documentation](https://docs.pollo.ai/m/hailuo/hailuo-2-3.md): Check out the Minimax Hailuo 2.3 API here and start creating today! Experience stunning stability and vivid visual expression across anime and distinctive art styles. Integrate Hailuo 2.3 now.
- [Hailuo 2.3 Fast API Documentation](https://docs.pollo.ai/m/hailuo/hailuo-2-3-fast.md): Explore the Minimax Hailuo 2.3 Fast API here! It brings every motion to life with unmatched speed, precision, and artistry. Integrate it today!
- [Hailuo 01 Documentation](https://docs.pollo.ai/m/hailuo/video-01.md): Minimax’s Hailuo 01 offers advanced video generation from text or images, boasting smooth motion and precise facial control. Learn how to integrate and set up here.
- [Hailuo Live2D API Documentation](https://docs.pollo.ai/m/hailuo/video-01-live2d.md): Hailuo Live2D is an advanced AI model from Minimax designed to turn still images into vivid animations with smooth, stable motion, supporting a wide range of artistic styles. Learn how to integrate it below.
- [Hunyuan API Documentation](https://docs.pollo.ai/m/hunyuan/hunyuan.md): Hunyuan is Tencent’s 13-billion-parameter video model specializing in natural motion and realistic video and image synthesis. Learn how to integrate it here.
- [Kling 1.0 API Documentation](https://docs.pollo.ai/m/kling-ai/kling-v1-0.md): Kling 1.0 is the original Kling AI model for text and image to video generation, featuring moderate prompt adherence and quality. Learn how to integrate it for effortless video creation.
- [Kling 1.5 API Documentation](https://docs.pollo.ai/m/kling-ai/kling-v1-5.md): Kling 1.5 is an upgrade over Kling 1.0, offering better video quality, more realistic motion, and improved prompt relevance. Learn how to integrate it here.
- [Kling 1.6 API Documentation](https://docs.pollo.ai/m/kling-ai/kling-v1-6.md): Kling 1.6 is Kling AI's powerful image-to-video model, known for lifelike movements, enhanced motion dynamics, and improved video quality. Learn how to integrate it below.
- [Kling 2.0 API Documentation](https://docs.pollo.ai/m/kling-ai/kling-v2-0.md): Kling 2.0 is a Kling AI model that delivers filmic-quality, realistic motion videos at high resolution with smooth camera work and enhanced prompt adherence. Learn how to integrate it here.
- [Kling 2.1 API Documentation](https://docs.pollo.ai/m/kling-ai/kling-v2-1.md): Kling 2.1 is an upgraded AI video model with smoother motion, better prompt accuracy, improved character realism, and enhanced visuals for more cinematic results. Learn how to integrate it here.
- [Kling 2.1 Master API Documentation](https://docs.pollo.ai/m/kling-ai/kling-v2-1-master.md): Kling 2.1 Master offers significantly enhanced visual realism, smoother video playback, and more dramatic, compelling action sequences compared to the standard version of Kling 2.1. Learn how to integrate it here.
- [Kling 2.5 Turbo API Documentation](https://docs.pollo.ai/m/kling-ai/kling-v2-5-turbo.md): The Kling 2.5 Turbo API offers director-level cinematics with strong emotional expression, camera control, and film aesthetics. It enhances contextual understanding and motion for pro results. Integrate it now.
- [Kling 2.6 API Documentation](https://docs.pollo.ai/m/kling-ai/kling-v2-6.md): Kling 2.6 features synchronized audio-visual generation, allowing precise control over dialogue, tone, and sound effects. Integrate the Kling 2.6 API now.
- [Kling 3.0 API Documentation](https://docs.pollo.ai/m/kling-ai/kling-v3-0.md): Kling 3.0 delivers hyper-realistic textures, complex motion modeling, and superior prompt adherence. It utilizes advanced physical simulation to ensure fluid, lifelike movements across intricate cinematic sequences. Integrate the Kling 3.0 API now.
- [Kling 3.0 Omni API Documentation](https://docs.pollo.ai/m/kling-ai/kling-v3-omni.md): Kling 3.0 Omni delivers true omni-modal generation from text, image, and audio inputs to create broadcast-ready 4K volumetric video. It provides unparalleled spatial understanding and dynamic environmental simulation for single-shot scenes. Integrate the Kling 3.0 Omni API now.
- [Kling Video O1 API Documentation](https://docs.pollo.ai/m/kling-ai/kling-video-o1.md): Kling Video O1 provides a unified multi-modal workflow, accepting text, image, and video inputs. It ensures superior frame consistency and allows multi-step prompting. Integrate it now.
- [Luma Ray 1.6 API Documentation](https://docs.pollo.ai/m/luma/luma-ray-1-6.md): Luma Ray 1.6 is an earlier-generation video model from Luma AI, focused on efficient video generation with realistic and detailed animations. Learn how to integrate it here.
- [Luma Ray 2.0 API Documentation](https://docs.pollo.ai/m/luma/luma-ray-2-0.md): Luma Ray 2.0 is the advanced video generation model from Luma AI, focusing on high-quality visual effects and detailed text and image to video transformations. Learn how to integrate it here.
- [Luma Ray 2 Flash API Documentation](https://docs.pollo.ai/m/luma/luma-ray-2-0-flash.md): Ray 2 Flash is an enhancement to Luma AI's Ray series, offering much faster processing and lower costs compared to Ray 2, making advanced AI media creation more accessible and efficient. Learn how to integrate it here.
- [Pika 2.1 API Documentation](https://docs.pollo.ai/m/pika/pika-v2-1.md): Pika 2.1 allows creators to generate captivating videos with better adherence to prompts and refined visual aesthetics. Learn how to integrate it here.
- [Pika 2.2 API Documentation](https://docs.pollo.ai/m/pika/pika-v2-2.md): Pika 2.2 is the latest iteration in the Pika series, designed to provide superior video generation capabilities. It incorporates advanced algorithms for higher resolution outputs and improved motion fluidity. Learn how to integrate it here.
- [Pixverse 3.5 API Documentation](https://docs.pollo.ai/m/pixverse/pixverse-v3-5.md): Pixverse 3.5 is the upgraded video model of Pixverse AI, offering improved video quality and more customization options. Learn how to integrate it here.
- [Pixverse 4.0 API Documentation](https://docs.pollo.ai/m/pixverse/pixverse-v4-0.md): PixVerse 4.0 is an AI model that turns your photos and text prompts into smooth, realistic videos with lifelike motion, detailed effects, and professional-quality output. Learn how to integrate it here.
- [Pixverse 4.5 API Documentation](https://docs.pollo.ai/m/pixverse/pixverse-v4-5.md): PixVerse 4.5 is an upgraded version of PixVerse 4.0 that turns your photos and text prompts into smooth, realistic videos with lifelike motion, detailed effects, and professional-quality output. Learn how to integrate it here.
- [Pixverse 5.0 API Documentation](https://docs.pollo.ai/m/pixverse/pixverse-v5-0.md): PixVerse V5 provides expansive camera control and intricate motion dynamics for cinematic-level results. It delivers precise prompt interpretation and lightning-fast rendering. Integrate the PixVerse V5 API now.
- [Pixverse 5.5 API Documentation](https://docs.pollo.ai/m/pixverse/pixverse-v5-5.md): PixVerse V5.5 enables multi-shot generation for creating complete cinematic sequences. It offers hyperrealistic visuals, integrated audio/SFX, and pixel-level control. Integrate the PixVerse V5.5 API now.
- [Pollodance 2.0 API Documentation](https://docs.pollo.ai/m/pollo/pollo-dance-2-0.md): Pollodance 2.0 offers advanced motion control, cinematic style transfer, and industrial-grade video generation. Experience professional-level fidelity with dynamic scene manipulation and fluid character consistency. Integrate the Pollodance 2.0 API now.
- [Pollodance 2.0 Fast API Documentation](https://docs.pollo.ai/m/pollo/pollo-dance-2-0-fast.md): Pollodance 2.0 Fast provides high-speed video generation and low-latency processing. Optimized for rapid production, it delivers high-efficiency visual output while maintaining core cinematic quality for high-volume needs. Integrate the Pollodance 2.0 Fast API now.
- [Pollodance 2.0 Fast Ref API Documentation](https://docs.pollo.ai/m/pollo/pollo-dance-2-0-fast-ref.md): Pollodance 2.0 Fast Ref combines high-speed generation with precise video-to-video reference. Efficiently handle style migration and character tracking at scale for rapid, professional-grade video production. Integrate the Pollodance 2.0 Fast Ref API now.
- [Pollodance 2.0 Ref API Documentation](https://docs.pollo.ai/m/pollo/pollo-dance-2-0-ref.md): Pollodance 2.0 Ref offers precise video-to-video reference, style migration, and subject tracking. Achieve pixel-level consistency with industrial-grade tools for advanced content manipulation and reference-based editing. Integrate the Pollodance 2.0 Ref API now.
- [Pollo 1.5 API Documentation](https://docs.pollo.ai/m/pollo/pollo-v1-5.md): Pollo 1.5 is our advanced and versatile model. With it, users can produce high-resolution, creative, and cinematic videos efficiently from text and image inputs. Learn how to integrate it here.
- [Pollo 1.6 API Documentation](https://docs.pollo.ai/m/pollo/pollo-v1-6.md): Pollo 1.6 is our better, cheaper, and faster video model for high-quality, super-realistic, creative, and cinematic video generation. Learn how to integrate it below.
- [Pollo 2.0 API Documentation](https://docs.pollo.ai/m/pollo/pollo-v2-0.md): Pollo 2.0 offers complete audio integration, character consistency, and flexible 1-10 second video generation. It's fast, affordable, and supports diverse styles, perfect for rapid creative workflows. Integrate its API now.
- [Runway Gen 3 API Documentation](https://docs.pollo.ai/m/runway/runway-gen-3.md): Runway Gen 3 is Runway ML’s professional AI video model for animating still images and applying diverse video effects, widely used in creative industries. Learn how to integrate it below.
- [Runway Gen 4 API Documentation](https://docs.pollo.ai/m/runway/runway-gen-4.md): Runway Gen-4 is Runway ML’s cutting-edge AI model for turning still images into dynamic animations and applying creative video effects. Ideal for professionals, it enhances video projects in creative industries. Learn how to integrate it below.
- [Seedance 1.0 Lite API Documentation](https://docs.pollo.ai/m/seedance/seedance.md): Seedance 1.0 Lite is an innovative streaming AI model that generates both images and videos in real time. Learn how to integrate it below.
- [Seedance 1.5 Pro API Documentation](https://docs.pollo.ai/m/seedance/seedance-1-5-pro.md): Seedance 1.5 Pro delivers unified audio-video synthesis with cinematic camera control, precise lip-sync, multilingual support, and immersive spatial audio. Integrate the Seedance 1.5 Pro API now.
- [Seedance 1.0 Pro API Documentation](https://docs.pollo.ai/m/seedance/seedance-pro.md): Seedance 1.0 Pro is ByteDance's advanced AI video model that excels in coherent multi-shot video generation. It delivers smooth, stable motion and accurately follows detailed prompts for complex video content.
- [Seedance Pro Fast API Documentation](https://docs.pollo.ai/m/seedance/seedance-pro-fast.md): Seedance Pro Fast is an upgraded video model by ByteDance. It achieves an excellent balance between video generation quality, speed, and cost. Access the Seedance Pro Fast API here and integrate it now.
- [Sora 2 API Documentation](https://docs.pollo.ai/m/sora/sora-2.md): Sora 2 is OpenAI’s most advanced AI video model, capable of generating high-definition videos with synchronized audio, realistic physics, and multi-shot consistency. Learn how to integrate the Sora 2 API into your applications.
- [Sora 2 Pro API Documentation](https://docs.pollo.ai/m/sora/sora-2-pro.md): Sora 2 Pro understands complex prompts, simulates real-world environments, and generates immersive scenes with unprecedented coherence. Learn how to integrate the Sora 2 Pro API into your applications.
- [Vidu Q1 API Documentation](https://docs.pollo.ai/m/vidu/vidu-q1.md): Vidu Q1 is a next-gen AI video generator released in 2025 that creates high-quality, realistic 1080p videos from text prompts or images. It features smooth motion, cinematic lighting, and detailed animations. Learn how to integrate it below.
- [Vidu 1.5 API Documentation](https://docs.pollo.ai/m/vidu/vidu-v1-5.md): Vidu 1.5 is a powerful AI video generation model with high prompt adherence, featuring smooth transitions and creative effects. Learn how to integrate it below.
- [Vidu 2.0 API Documentation](https://docs.pollo.ai/m/vidu/vidu-v2-0.md): Vidu 2.0 is an AI video generation API that creates videos by combining reference images with text prompts. It uses advanced technology to keep characters, objects, and environments consistent throughout the video, ensuring smooth and natural animations. Learn how to integrate it below.
- [Vidu Q2 Pro API Documentation](https://docs.pollo.ai/m/vidu/viduq2-pro.md): The Vidu Q2 Pro API generates cinematic videos with top-tier visual detail, though at a slower pace. It's suited for professional productions requiring intricate and polished outputs. Integrate it now.
- [Vidu Q2 Turbo API Documentation](https://docs.pollo.ai/m/vidu/viduq2-turbo.md): The Vidu Q2 Turbo API delivers fast motion-heavy videos with stable camera transitions. Ideal for quick short-form content, it balances speed and quality effectively. Integrate it now.
- [Vidu Q3 Pro API Documentation](https://docs.pollo.ai/m/vidu/viduq3-pro.md): Vidu Q3 Pro offers seamless audio-visual synthesis with advanced cinematic language, intelligent scene switching, and human-like character liveliness. Integrate the Vidu Q3 Pro API now.
- [Wan 2.2 Flash API Documentation](https://docs.pollo.ai/m/wanx/wan-v2-2-flash.md): The Wan 2.2 Flash AI model by Alibaba boasts ultra-fast processing, enhanced prompt understanding and reliability, and advanced camera control. Learn how to integrate it below.
- [Wan 2.2 Plus API Documentation](https://docs.pollo.ai/m/wanx/wan-v2-2-plus.md): The Wan 2.2 Plus AI model by Alibaba features cinematic-level aesthetic control, more stable and fluid motion synthesis, and improved realism in dynamic rendering. Learn how to integrate it below.
- [Wan 2.5 API Documentation](https://docs.pollo.ai/m/wanx/wan-v2-5-preview.md): The Wan 2.5 API supports native audio generation and custom audio uploads for dynamic videos. It improves motion physics, prompt awareness, and visual richness, perfect for creative projects. Integrate it now.
- [Wan 2.6 API Documentation](https://docs.pollo.ai/m/wanx/wan-v2-6.md): Wan 2.6 enables longer visual narratives and multi-shot sequences with video referencing for stable characters and style. It uses visual and audio cues to ensure precise creative control across cinematic scenes. Integrate the Wan 2.6 API now.
- [Wan 2.1 API Documentation](https://docs.pollo.ai/m/wanx/wanx-v2-1.md): Wan 2.1 is Alibaba Cloud’s flagship multimodal video foundation model. It excels at generating cinematic videos with realistic physics, complex motion handling, and bilingual text effects. Learn how to integrate it below.
- [Pollo MCP Server](https://docs.pollo.ai/mcp-server.md): Pollo AI API's Model Context Protocol (MCP) server enables interaction with powerful text/image-to-video generation APIs. Learn more details here.
- [Pollo Agent Skills](https://docs.pollo.ai/pollo-agent-skills.md): Official Agent Skills by Pollo AI give your AI coding agent the ability to generate videos, process media, and more. Learn more details here.
- [Pricing](https://docs.pollo.ai/pricing.md): Pollo AI's API solution offers competitive pricing that’s significantly more affordable than Fal AI and Replicate, making it the smart choice for developers and businesses integrating top-tier AI image and video generation models without overspending.
- [Quick Start](https://docs.pollo.ai/quick-start.md): Jump into using Pollo AI's API with our Quick Start guide. Learn how to get your API keys and more!
- [Get Task Status](https://docs.pollo.ai/task/get-task-status.md): Retrieve the status of a video or image generation task.
- [Webhooks](https://docs.pollo.ai/webhooks.md): Learn how to integrate Pollo AI API’s Webhook service to receive secure, authenticated notifications for video generation task completions.

## OpenAPI Specs

- [openapi](https://docs.pollo.ai/openapi.json)
- [openapi-filtered](https://docs.pollo.ai/openapi-filtered.json)
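The Get Task Status page above implies a poll-until-done workflow: submit a generation request, then query the task until it finishes. Below is a minimal sketch of that polling pattern. The status names (`succeed`, `failed`) and the idea of passing in a fetch callable are assumptions for illustration; consult the Get Task Status page and the OpenAPI specs for the real endpoint path and response schema.

```python
import time


def poll_task(fetch_status, interval_s=2.0, timeout_s=600.0):
    """Poll fetch_status() until the task reaches a terminal state.

    fetch_status is any zero-argument callable returning a status string,
    e.g. a function that GETs the task-status endpoint with your API key
    and extracts the status field. The terminal status names checked here
    are assumptions, not values taken from the Pollo API reference.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("succeed", "failed"):  # assumed terminal states
            return status
        time.sleep(interval_s)  # avoid hammering the endpoint
    raise TimeoutError("task did not finish within timeout")
```

For production use, the Webhooks page recommends push notifications over polling; a helper like this is mainly useful for quick scripts and testing.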
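The Webhooks page above mentions secure, authenticated notifications. A common convention for authenticating webhook deliveries is an HMAC signature computed over the raw request body, which the receiver recomputes and compares in constant time. The sketch below illustrates only that general pattern; the actual header name and signing scheme Pollo uses are documented on the Webhooks page, and `verify_signature` is a hypothetical helper, not part of any Pollo SDK.

```python
import hashlib
import hmac


def verify_signature(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Constant-time check of an HMAC-SHA256 hex signature over the raw body.

    Illustrative only: assumes an HMAC-SHA256 scheme, which may differ
    from what the Pollo Webhooks service actually uses.
    """
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when comparing signatures
    return hmac.compare_digest(expected, signature_hex)
```

Verifying against the raw, unparsed request bytes matters: re-serializing parsed JSON can reorder keys or change whitespace and break the signature.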