You can test and compare prompts for different models on the playground page.

Open logs in playground

You can open logs in the playground by clicking the Open in Playground button on the logs page, which is helpful for testing and comparing model responses.

A/B test LLM models

You can compare the responses of two models side by side. Click the Compare button in the top bar to start a comparison.

Test models with different settings

You can adjust the model's settings to see how they affect the response. Check out the OpenAI params page for more information about these parameters.

Variants: You can set the number of variants from 1 to 50 to generate multiple responses per request. This is helpful for checking how consistent the model is.
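As a rough sketch, the settings above correspond to fields in an OpenAI-style chat completion request (this is an assumption for illustration; the model name, prompt, and exact field names are hypothetical and may differ from what the playground sends):

```python
# Hypothetical request payload mirroring the playground settings.
# Assumption: the playground targets an OpenAI-compatible chat API,
# where the "variants" setting maps to the `n` parameter.
payload = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Summarize this article in one sentence."}
    ],
    "temperature": 0.7,  # higher values produce more varied responses
    "max_tokens": 256,   # cap on the length of each generated response
    "top_p": 1.0,        # nucleus sampling cutoff
    "n": 3,              # number of variants to generate (1-50)
}
```

Requesting several variants in one call (`n` above) is a quick way to eyeball how stable the model's answers are for a given prompt and temperature.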

Function calling and images

We provide the ability to test LLMs with function calling and images. Click the Add tool button to add a function, or the link icon to add an image.
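For reference, a tool definition and an image message in the OpenAI-style format look roughly like this (a minimal sketch; the `get_weather` function, its schema, and the image URL are hypothetical examples, not values the playground provides):

```python
# Hypothetical tool (function) definition in OpenAI tools format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # example function name
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# A user message that combines text with an image, so the model
# can be tested on vision input alongside function calling.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {
                "type": "image_url",
                "image_url": {"url": "https://example.com/photo.png"},
            },
        ],
    }
]
```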

Integrate models in one click

Click the View code button in the top bar to copy the code, including your prompt, and paste it into your code editor.

Save prompts to the library

After you have created a prompt, you can save it to the library for future use. Click the Save as new prompt button in the top bar to save it.
You can find the saved prompt on the Prompts page.