
Working with Text in Eggplant Functional

When you need to find text, remember that you can capture an image of the text and let your script perform an image search. This works best in situations where the text is static and unchanging, such as button text. However, for scenarios where the text is dynamic or the text value to be searched isn't known ahead of time, Eggplant Functional provides an optical character recognition (OCR) engine to search the screen of the system under test (SUT) for the text string.

There are multiple ways of finding text on the SUT, as described in the sections below.

When to Use Images vs. OCR to Find Text

While you can use OCR almost any time you need to search for text on the SUT, it should be treated as a secondary option to captured images. Take a moment to evaluate the situation to see if using a captured image is feasible or if you need to use OCR.

In situations that fit the descriptions below, try using a captured image before turning to OCR:

  • The text is not dynamic.
  • You know what text you are looking for ahead of time (it is not being pulled in from an external source during the script run).
  • You do not need to read the text off of the SUT using ReadText, or the text can instead be copied using the RemoteClipboard function and returned to the script that way (see the sketch after this list).
  • Your test is against a single platform or system (not cross-platform or cross-browser).
  • The text you are working with is set in a very small or an abnormally large font size.
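
For instance, if the value is selectable on the SUT, you might copy it and bring it back through the clipboard instead of reading it with OCR. A minimal sketch, assuming a Windows-style SUT and a hypothetical image name for the field:

Click "AccountNumberField" -- Clicks a captured image of the field (hypothetical image name)
TypeText controlKey, "a" -- Selects the field contents (assumes Windows-style shortcuts on the SUT)
TypeText controlKey, "c" -- Copies the selection to the SUT clipboard
put RemoteClipboard() into accountNumber -- Returns the copied text to the script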

OCR is well-suited to the following situations:

  • You do not know the text you are looking for ahead of time, but the script will have access to that information by the time the search is conducted (a sketch of this pattern follows this list).
  • The font, size, or color of the text can vary.
  • Your testing includes a variety of browsers, which can render fonts differently.
  • You need to read a text value from the SUT and bring that information back into your script for further testing or to record in an external data file.
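
For instance, a value pulled from an external data file at run time can be passed to an OCR search through a variable. A minimal sketch; the file path and record layout are hypothetical:

put line 2 of file "/path/to/patients.csv" into patientRecord -- Reads one record from a data file (hypothetical path)
put item 1 of patientRecord into patientName -- The first comma-delimited item holds the name in this sketch
Click (text: patientName) -- OCR search for whatever value the variable holds at run time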

Image Searches

In many cases, you can find text on the SUT the same way you would find any other item on the SUT: by capturing an image of the text and searching for that image in your script. Captured images are easy to use, and very reliable.

For images that contain text, the search type setting Smoothed for Text accounts for anti-aliased text that might appear in the image. This setting allows for more successful matches of images that contain text. It can be set at the time of image capture, or in your script by using the searchType property with your image search.

Note: This method is only for finding text on the SUT; it can't be used to read text off the screen.

Examples:

Click "HelpMenu" -- Finds the image of the Help menu, then clicks it
Click (imageName:"HelpMenu", searchType:"smoothed for text") -- Finds the image of the Help menu using the smoothed for text search type, then clicks it

For more information about image searches, see Finding Images.

Optical Character Recognition (OCR)

When you can't capture an image of your text because the content is dynamic (the text content can't be predicted in advance, or the possible values are too numerous to make image capture practical), the primary solution is to use the Optical Character Recognition (OCR) functionality of Eggplant Functional.

OCR uses an interpretive algorithm to decipher pixel patterns on the screen and determine the textual content by comparing what it finds against a dictionary library.

OCR can be used to either read or find text. To read text off the screen of the SUT using OCR, use the ReadText function. All of the possible parameters for reading text with OCR can be found in the documentation for the ReadText function. To find text with OCR, pair a text parameter with any Eggplant Functional command.

Example:

Click (Text:"Help") -- Finds the word Help with any appearance, then clicks it.
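
The text parameter isn't limited to Click; it can be paired with other commands and functions that accept an image reference. A couple of hedged examples with placeholder strings:

WaitFor 10, (text:"Login successful") -- Waits up to 10 seconds for the text to appear on the SUT
DoubleClick (text:"Report.pdf") -- Finds the text string, then double-clicks it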

The sections below include the basics of testing with OCR. For more detailed information on how to use OCR, see Working with Optical Character Recognition (OCR).

Types of Searches

OCR can be used to conduct two different types of searches:

  • Searching for text on the SUT: When you need to find a specific string of text on the SUT to verify that it is there, or in order to interact with it. Often this is information pulled into the script from an external source, such as a data file. For more on using external data files, see Data Driven Testing.
  • Reading text on the SUT: When you need to read a string of text off of the SUT screen. This is typically a value you don't know ahead of time.

Both of these types of searches do "search" for text on the screen, but they use different commands and have different OCR properties available to them. For instance, while a language parameter can be defined for both reading and searching, the caseSensitive property is only used when searching for text. Note that while you can limit both reading and searching to a rectangle, the ReadText function expects the rectangle to be passed directly to it rather than through the searchRectangle property. For a full list of OCR properties, including information on which properties can be used for which type of search, see Text Properties.
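
For instance, a hedged sketch of a search-only property and a shared property (the strings below are placeholders):

Click (text:"OK", caseSensitive: Yes) -- caseSensitive applies only to text searches; this matches "OK" but not "ok"
Click (text:"Ayuda", language:"Spanish") -- language can be set for searching as well as for ReadText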

How to Search for Text

OCR text searches are similar to image searches, and you typically use the same commands or functions. Instead of specifying an image name, you specify the text string you want to find, in quotes, preceded by the text property.

Example:

Click (text:"Eggplant") -- Finds the first instance of the text string Eggplant (with any formatting), then clicks it.

Most OCR searches, however, require other properties to be set alongside the text parameter, the most frequently used being the searchRectangle property. To see all of the properties you can use when searching for text with OCR, see the OCR properties table in Text Properties.

Search rectangles are typically defined using images, though coordinates can also be passed to this property. The hot spot of the captured image defines the point used (see Using the Hot Spot for more information on moving this point). In the example below, a search rectangle is set using TLImage to define the upper-left corner of the area to be searched, and BRImage to define the bottom-right corner of the area.

Example:

Click (text:"Patient Name",searchRectangle:("TLImage","BRImage")) -- Searches for a patient's name in a search rectangle defined using images

If you want to find all of the locations of a text string, use the EveryImageLocation function:

Example:

log EveryImageLocation (text:"OCR") -- Searches for the text string OCR and logs the screen coordinates for every instance found
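
Because EveryImageLocation returns a list of points, you can also loop over the results; a hedged sketch:

repeat with each item foundLocation in EveryImageLocation(text:"OCR") -- Iterates over every match on the screen
  Click foundLocation -- Clicks each occurrence in turn
end repeat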

Tech Talk

How does it work on the back end? When you search for text using a generic text property list, the OCR engine first searches for each instance of text on the SUT, and then evaluates the text values to find the string you are looking for. You do not have to describe the appearance of your text, because text formatting is not considered.

How to Read Text

Reading text with OCR uses the ReadText function. As with searching for text, you can limit reading to a given rectangle. The difference is that ReadText does not require the searchRectangle property; instead, you pass the rectangle, set of images, or point directly to the function. The ReadText function, like all functions, is called within a standard Eggplant command, such as put or log. As with searching for text, a variety of properties can be set for the read, such as the language property in the example below. For a full list of OCR properties, including information on which properties can be used for reading text, see Text Properties.

Example:

Log ReadText (("TLImage","BRImage"), Language: "Spanish") -- Logs Spanish text read in a rectangle defined by the images "TLImage" (upper left corner), and "BRImage" (lower right corner)
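
Because ReadText returns an ordinary text value, you can store it in a variable and use it later in the script; a hedged sketch with hypothetical image names and comparison value:

put ReadText(("TLImage","BRImage")) into statusText -- Stores the text read from the rectangle
if statusText contains "Complete" then LogSuccess "Process finished" -- Uses the read value in a verification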

Text Considerations for SUTs and External Files

You might also need to type text on the SUT. For instance, part of your test might involve filling in fields in a form. For information about adding text with SenseTalk, see Typing on the SUT.
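
As a brief hedged example (the image name is a placeholder), typing typically follows a click that gives the field focus:

Click "FirstNameField" -- Clicks the field to give it focus (hypothetical image name)
TypeText "Jane" -- Types the text on the SUT
TypeText tab -- Presses the Tab key to move to the next field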

You can also read text from external data files. This ability allows you to set up data-driven testing, among other things. To learn about this feature, see Gathering and Using Data.