Using Eggplant AI

Eggplant AI uses learning algorithms to automate test case creation. You build a model of the app or interface you want to test, and Eggplant AI uses that model to generate test cases and to ensure sufficient coverage in your testing.

Build a Model

Plan and build your model. Use States and Actions to represent screens, pages, dialog boxes, and menus as needed to replicate your app or interface. Define possible transitions between states and actions that describe potential paths through your interface.

You can also add variables to capture and provide values for states and actions, as needed. Your model can be simple or complex, depending on your testing needs.

Run the Model

When you run a model in Eggplant AI, learning algorithms choose the specific path the test follows through the model, which can be different with each run. Use the reports Eggplant AI generates to check model coverage so that you can see whether you need to adjust the weighting of any actions or states. Reports can also show the overall success and failure rates of your model. View the output from your model in the Console as it runs.

Check the Run report for basic information about test runs. This report has multiple filtering options to help you make use of your test run data. To report on whether specific parts of your model execute during a given test run, use the Test cases tab, where you can define specific test cases and report on them. The Coverage report shows how many times the states and actions in your model have been hit by your test executions. You can view this data as a heat map or table. Results from any of these report tabs can be exported in XML or CSV format.

Integrate with Eggplant Functional

Eggplant AI works with Eggplant Functional to automate the testing process. You connect Eggplant Functional to Eggplant AI by using the Eggplant AI Agent. Create SenseTalk snippets in Eggplant Functional, then associate them with states and actions in your Eggplant AI model. When you run a model that has associated SenseTalk snippets, those snippets execute on your system under test (SUT) as the model runs.
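
For example, a snippet associated with a log-in action in your model might look like the following minimal sketch. The image names ("UsernameField", "PasswordField", "LoginButton", "HomeScreen") and the typed credentials are hypothetical placeholders; they assume you have captured matching images in your Eggplant Functional suite.

    -- Hypothetical log-in snippet; all image names are placeholders
    Click "UsernameField"
    TypeText "testuser"
    Click "PasswordField"
    TypeText "testpassword"
    Click "LoginButton"
    WaitFor 10, "HomeScreen" -- wait up to 10 seconds for the home screen to appear

When a model run reaches the associated state or action, the Eggplant AI Agent executes the snippet against the SUT, and the result contributes to the success and failure data shown in your reports.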

Integrate with Eggplant Performance

You configure Eggplant AI with the Eggplant Performance Test Controller to automate performance tests. Eggplant AI uses Eggplant Performance and Eggplant's hosted platform to carry out the performance testing behind the scenes. The integration sets up the engines for parallel execution and the systems that run the tests against your SUTs, so you don't need to configure these systems manually. Running load tests in Eggplant AI requires no coding experience or other technical setup on your part. For more information about performance testing with Eggplant AI, see Running Performance Tests.

 

