Use mode in the character object while generating images with a character; character.weight and enable_facelock are deprecated and will be removed in the future.
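As a minimal sketch (the payload shape and the name/mode values below are illustrative assumptions, not the exact schema), a character block that relies on mode rather than the deprecated fields might look like:

```python
# Illustrative only: exact field names and accepted mode values may differ.
character = {
    "name": "Aria",   # hypothetical character name; referenced in the positive prompt
    "mode": "face",   # use mode going forward
    # "weight": 0.8,            # deprecated, will be removed
    # "enable_facelock": True,  # deprecated, will be removed
}
```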
Video Anyone generation accepts media_id for video generation, eliminating the need to re-upload images as assets. For video_anyone generation, make sure the input image meets the following conditions to avoid generation failure (a request sketch follows the list):
- Input image aspect ratio should be either 3:5 or 5:3.
- Maximum characters permitted for prompt is 500.
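A rough sketch of a Video Anyone request reusing an uploaded image via media_id; the base URL, header name, endpoint path, and video_anyone payload shape are assumptions, while the 3:5/5:3 and 500-character constraints come from the notes above.

```python
import requests

BASE_URL = "https://api.rendernet.ai"    # assumed base URL
headers = {"X-API-KEY": "your-api-key"}  # assumed header name

payload = {
    "video_anyone": {
        "media_id": "media_123",                       # hypothetical id of a 3:5 or 5:3 image
        "prompt": "A character waving at the camera",  # must stay within 500 characters
    }
}

# Endpoint path is assumed; use the generation endpoint from this reference.
resp = requests.post(f"{BASE_URL}/pub/v1/generations", json=payload, headers=headers)
print(resp.status_code, resp.json())
```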
A generation can use segment, style, model, true_touch, narrator, or video_anyone. These properties have the following priority order:
1. segment
2. style
3. model
4. true_touch
5. narrator
6. video_anyone
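Assuming the priority order means the highest-priority property present drives the generation, a payload carrying both style and model would resolve in favour of style; the values below are only illustrative.

```python
payload = {
    "prompt": "A quiet street at dusk",
    "style": "Bokeh",         # higher priority than model in the order above
    "model": "JuggernautXL",  # lower priority; style governs this generation
}
```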
A narrator can be generated either by using a script paired with a voice (list all the available voices using the GET /pub/v1/voices endpoint) or by utilizing an existing audio asset. segment, true_touch, narrator, and video_anyone generations only acknowledge parameters that are passed within their own objects. For narrator generation, make sure the video asset meets the following conditions to avoid generation failure (a payload sketch follows the list):
- The maximum supported frame rate for a video asset is 30 FPS, calculated by dividing the total number of frames by the total duration.
- Maximum height and width of the video asset should be 1080 and 1920 respectively.
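A minimal narrator sketch; script and audio_asset_id are named later in this reference, while voice_id, video_asset_id, the base URL, header name, and endpoint path are assumptions.

```python
import requests

BASE_URL = "https://api.rendernet.ai"    # assumed base URL
headers = {"X-API-KEY": "your-api-key"}  # assumed header name

# List the available voices for script-based narration.
voices = requests.get(f"{BASE_URL}/pub/v1/voices", headers=headers).json()

payload = {
    "narrator": {
        # Either a script paired with a voice ...
        "script": "Welcome to the product tour.",
        "voice_id": "voice_123",        # hypothetical id picked from the voices list
        # ... or an existing audio asset instead:
        # "audio_asset_id": "asset_456",
        "video_asset_id": "asset_789",  # keep it at or below 30 FPS and 1080x1920
    }
}
resp = requests.post(f"{BASE_URL}/pub/v1/generations", json=payload, headers=headers)
print(resp.json())
```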
Headers
API key needed to access our public endpoints. You can find yours under the 'Account' section of the website
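A sketch of sending the key with a request; the header name (X-API-KEY) and base URL below are assumptions — use the exact header documented here.

```python
import requests

BASE_URL = "https://api.rendernet.ai"  # assumed base URL
API_KEY = "your-api-key"               # from the 'Account' section of the website

headers = {"X-API-KEY": API_KEY}  # assumed header name; use the one documented here

# Any public endpoint works the same way, e.g. listing models.
resp = requests.get(f"{BASE_URL}/pub/v1/models", headers=headers)
resp.raise_for_status()
print(resp.json())
```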
Body
The aspect ratio of the image for the generation
Available options: 1:1, 2:3, 3:2, 4:5, 16:9, 9:16
"1:1"
The batch size of the generation (Max permissible value depends on your subscription plan). Visit RenderNet pricing for more details
1
AI guidance for this generation. The higher the value, the closer the output will be to the prompt (but it may result in distorted images beyond a point)
Required range: 4 <= x <= 12
7
If you want to create images of a character, use the character’s name in the positive prompt. Example: {character_name} riding a bike
The control net (pose control) to use for the generation (optional)
The face you want to restore for the generation (optional)
Query the list resources endpoint GET /pub/v1/loras to view all available LoRAs. You can add multiple LoRAs to a single generation. Make sure your LoRAs have the same base model (SD 1.5 / SDXL) as your selected style/model.
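A sketch of attaching multiple LoRAs after browsing GET /pub/v1/loras; the per-LoRA field names (name, weight) and the request plumbing are assumptions, while the shared-base-model requirement comes from the note above.

```python
import requests

BASE_URL = "https://api.rendernet.ai"    # assumed base URL
headers = {"X-API-KEY": "your-api-key"}  # assumed header name

# Browse available LoRAs; keep only those matching your style/model base (SD 1.5 vs SDXL).
available_loras = requests.get(f"{BASE_URL}/pub/v1/loras", headers=headers).json()

payload = {
    "model": "JuggernautXL",  # an SDXL model, so the LoRAs below must be SDXL too
    "prompt": "Portrait photo, golden hour",
    "loras": [
        # Hypothetical entries and field names; pick real ones from the list above.
        {"name": "detail-tweaker-xl", "weight": 0.6},
        {"name": "film-grain-xl", "weight": 0.4},
    ],
}
resp = requests.post(f"{BASE_URL}/pub/v1/generations", json=payload, headers=headers)
```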
Query the list models endpoint GET /pub/v1/models to view all available models.
"JuggernautXL"
For Narrator generation, pass either script or audio_asset_id along with other required details in the narrator payload.
The prompt for the generation
The quality of the image for the generation (Case Sensitive)
Available options: Plus, Regular
"Plus"
The sampler you want to use for the generation (Case Sensitive)
Available options: DPM++ 2M Karras, DPM++ 2M SDE Karras, DPM++ 2S a Karras, DPM++ SDE, DPM++ SDE Karras, Euler a
"DPM++ 2M Karras"
The seed for the generation (randomized if you don't provide one)
1234
Changes the input asset image based on the find and replace prompts
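A hedged sketch of that find-and-replace edit; the segment sub-field names (asset_id, find_prompt, replace_prompt) are assumptions, as this reference only describes the behaviour.

```python
payload = {
    "segment": {
        # Hypothetical field names; consult the segment object schema for the real ones.
        "asset_id": "asset_123",           # input image to edit
        "find_prompt": "red car",          # what to locate in the image
        "replace_prompt": "blue bicycle",  # what to paint in its place
    }
}
```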
The number of steps you want AI to take for the generation
Required range: 10 <= x <= 30
20
Query the list styles endpoint GET /pub/v1/styles to view all available styles.
"Bokeh"
Query the list styles endpoint GET /pub/v1/styles to view all available styles.
Enhances and upscales the input image passed in the asset_id
Converts an image to video seamlessly.
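Putting the core image parameters together, a sketch of a single image generation request; the parameter names are inferred from the descriptions above, and the base URL, header name, endpoint path, and payload shape are assumptions.

```python
import requests

BASE_URL = "https://api.rendernet.ai"    # assumed base URL
headers = {"X-API-KEY": "your-api-key"}  # assumed header name

payload = {
    "prompt": "{character_name} riding a bike",  # character name goes in the positive prompt
    "aspect_ratio": "1:1",         # one of 1:1, 2:3, 3:2, 4:5, 16:9, 9:16
    "batch_size": 1,               # maximum depends on your subscription plan
    "cfg_scale": 7,                # guidance; higher sticks closer to the prompt
    "steps": 20,
    "quality": "Plus",             # "Plus" or "Regular", case sensitive
    "sampler": "DPM++ 2M Karras",  # case sensitive
    "seed": 1234,                  # omit for a random seed
    "model": "JuggernautXL",
}

# Endpoint path is assumed; use the generation endpoint from this reference.
resp = requests.post(f"{BASE_URL}/pub/v1/generations", json=payload, headers=headers)
print(resp.json())
```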