The ModelBench Playground is a powerful environment for testing and comparing LLMs. It lets you interact with multiple models simultaneously, add custom tools, and refine your prompts in real time.
1. Select the models you want to compare from the available options.
2. Add any necessary tools by pasting their JSON schema into the tool section (see the example after this list).
3. Write your prompt in the input area.
4. Run the prompt and observe how different models respond.
5. Refine your prompt based on the results and repeat the process.
6. Use the “Show Log” feature to view detailed information about each interaction.
7. Share your work using the “Share” button to generate a public link.
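If you're unsure what to paste into the tool section, the sketch below shows a tool definition in the widely used JSON Schema function-calling style. The `get_current_weather` name and its parameters are hypothetical, and the exact fields expected may differ slightly, so treat this as a starting point rather than a definitive template.

```json
{
  "name": "get_current_weather",
  "description": "Returns the current weather for a given city.",
  "parameters": {
    "type": "object",
    "properties": {
      "city": {
        "type": "string",
        "description": "City name, e.g. London"
      },
      "unit": {
        "type": "string",
        "enum": ["celsius", "fahrenheit"],
        "description": "Temperature unit for the response"
      }
    },
    "required": ["city"]
  }
}
```

Clear `description` fields on the tool and on each parameter help models decide when to call the tool and which arguments to supply.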
The Playground is your sandbox for quick experimentation and model comparison. For more structured testing and benchmarking, check out our Workbench feature.