Generate Media
The generations endpoint lets users generate media based on various parameters such as aspect ratio, model, and prompt. The response returns a generation ID for each request, which can be used to track the status and view the generated media.
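As a rough sketch, a request to the generations endpoint could be assembled as below. The header name, endpoint path, and body field names here are illustrative assumptions, not the confirmed API schema; consult the parameter reference on this page for the actual fields.

```python
# Minimal sketch of a generation request body. Field names, the header name,
# and the endpoint path are assumptions for illustration only.
import json

API_KEY = "YOUR_API_KEY"  # found under the 'Account' section of the website

headers = {
    "X-API-KEY": API_KEY,        # header name is an assumption
    "Content-Type": "application/json",
}

# A list of generation objects; each should contain only ONE of the
# top-level generation properties (segment / style / model / true_touch / narrator).
body = [
    {
        "aspect_ratio": "1:1",
        "prompt": {"positive": "a lighthouse at dawn"},
        "quality": "Plus",
        "sampler": "Euler a",
    }
]

payload = json.dumps(body)
# An HTTP client such as `requests` would then POST `payload` with `headers`
# to the generations endpoint; the response carries one generation ID per
# request object, used to poll status and fetch results.
```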
When using the generation array, ensure that each object contains only one of the following properties: `segment`, `style`, `model`, `true_touch`, or `narrator`. These properties have the following priority order:

1. `segment`
2. `style`
3. `model`
4. `true_touch`
5. `narrator`

If multiple properties are provided, the one with the highest priority will be used.
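The priority rule above can be sketched as a small resolver: given a generation object, return the property the API would act on. This is a paraphrase of the documented behavior, not RenderNet's own code.

```python
# Documented priority order, highest first.
PRIORITY = ["segment", "style", "model", "true_touch", "narrator"]

def effective_property(generation_obj):
    """Return the property that wins when several are present, or None."""
    for name in PRIORITY:
        if name in generation_obj:
            return name
    return None

# If both `style` and `narrator` are supplied, `style` has higher priority:
print(effective_property({"style": {}, "narrator": {}}))  # style
```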
The `narrator` can be generated either by using a script paired with a voice (list all available voices using the `GET /pub/v1/voices` endpoint) or by using an existing audio asset.
`segment`, `true_touch`, and `narrator` generations only acknowledge parameters that are passed within their own objects.
For `narrator` generation, make sure the video asset meets the following conditions to avoid generation failure:
- The maximum supported frame rate for a video asset is 30 FPS, calculated by dividing the total number of frames by the total duration.
- The maximum height and width of the video asset are 1080 and 1920 respectively.
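The two asset constraints above can be checked up front before submitting a narrator generation. This is a local pre-flight sketch, not part of the API:

```python
# Pre-flight check for the documented narrator video-asset limits:
# frame rate = total frames / total duration, capped at 30 FPS,
# and dimensions capped at 1080 (height) x 1920 (width).
def video_asset_ok(total_frames, duration_s, height, width):
    fps = total_frames / duration_s
    return fps <= 30 and height <= 1080 and width <= 1920

print(video_asset_ok(300, 10, 1080, 1920))  # exactly 30 FPS at max size -> True
print(video_asset_ok(310, 10, 1080, 1920))  # 31 FPS -> False
```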
Headers
API key needed to access our public endpoints. You can find yours under the 'Account' section of the website.
Body
The aspect ratio of the image for the generation: `1:1`, `2:3`, `3:2`, `4:5`, `16:9`, or `9:16`
The batch size of the generation (the maximum permissible value depends on your subscription plan). Visit RenderNet pricing for more details
AI guidance for this generation. The higher the value, the closer the output will be to the prompt (though very high values may distort the image)
If you want to create images of a character, use the character’s name in the positive prompt. Example: {character_name} riding a bike
The ControlNet (pose control) for the generation, if you want to use one (optional)
The face you want to restore for the generation, if you want to use one (optional)
Query the list resources endpoint `GET /pub/v1/loras` to view all available LoRAs. You can add multiple LoRAs to a single generation. Make sure your LoRAs have the same base model (SD 1.5 / SDXL) as your selected style/model.
Query the list models endpoint `GET /pub/v1/models` to view all available models.
For narrator generation, pass either `script` or `audio_asset_id` along with other required details in the `narrator` payload.
The prompt for the generation
The quality of the image for the generation (case-sensitive): `Plus` or `Regular`
The sampler you want to use for the generation (case-sensitive): `DPM++ 2M Karras`, `DPM++ 2M SDE Karras`, `DPM++ 2S a Karras`, `DPM++ SDE`, `DPM++ SDE Karras`, or `Euler a`
The seed for the generation (randomized if you don't provide one)
Changes the input asset image based on the find and replace prompts
The number of steps you want the AI to take for the generation
Query the list styles endpoint `GET /pub/v1/styles` to view all available styles.
Enhances and upscales the input image passed in `asset_id`
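The narrator payload described earlier (either `script` paired with a voice, or an existing `audio_asset_id`, but not both) can be sketched as a small builder. Field names other than `script` and `audio_asset_id` are illustrative assumptions, not the confirmed schema:

```python
# Sketch of assembling a `narrator` payload: exactly one of `script`
# (paired with a voice from GET /pub/v1/voices) or `audio_asset_id`
# should be supplied. `video_asset_id` / `voice_id` are assumed names.
def narrator_payload(video_asset_id, script=None, voice_id=None, audio_asset_id=None):
    if (script is None) == (audio_asset_id is None):
        raise ValueError("pass exactly one of script or audio_asset_id")
    payload = {"video_asset_id": video_asset_id}
    if script is not None:
        payload["script"] = script
        payload["voice_id"] = voice_id  # a voice listed by GET /pub/v1/voices
    else:
        payload["audio_asset_id"] = audio_asset_id
    return payload
```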
Response
The error if the request was not successful