A group of researchers from the Chinese Academy of Sciences and Monash University has presented a novel approach to generating text inputs for mobile app testing based on a pre-trained large language model (LLM). Evaluated on 106 Android apps in combination with automated testing tools, the approach showed a significant performance improvement, InfoQ reports.

According to the researchers, one of the main obstacles to automating mobile app testing is the need to generate text input, which can be challenging even for human testers.

This is because apps may require many different categories of input, including geolocation, addresses, and health measurements, and because inputs entered on successive pages may be related to one another, which leads to validation constraints.

It has been shown that large language models (LLMs) such as BERT and GPT-3 can write essays, answer questions, and generate source code.
QTypist attempts to leverage the ability of LLMs to understand natural-language prompts in order to generate meaningful text that can be fed back to the application as input. To this end, it first extracts the context information surrounding each input widget, such as hint text, nearby labels, and the name of the current screen, and uses it to build a dataset of fill-in-the-blank style prompts.
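To give a flavor of what such prompt construction could look like, here is a minimal Python sketch that turns the context around an input widget into a fill-in-the-blank prompt. The `WidgetContext` structure, its field names, and the prompt template are hypothetical illustrations, not the exact ones used by QTypist:

```python
from dataclasses import dataclass, field

@dataclass
class WidgetContext:
    """Context extracted from a GUI input field (hypothetical structure)."""
    hint: str                        # e.g. the EditText hint attribute
    nearby_labels: list = field(default_factory=list)  # label text near the widget
    activity: str = ""               # name of the current screen

def build_prompt(ctx: WidgetContext) -> str:
    """Compose a fill-in-the-blank prompt from the widget's GUI context."""
    parts = [f"The app page is {ctx.activity}."]
    if ctx.nearby_labels:
        parts.append(f"The section is about {', '.join(ctx.nearby_labels)}.")
    parts.append(f'The input field asks: "{ctx.hint}". A valid value is:')
    return " ".join(parts)

prompt = build_prompt(WidgetContext(
    hint="Enter your city",
    nearby_labels=["Shipping address"],
    activity="CheckoutActivity",
))
print(prompt)
```

Running this produces a prompt such as: The app page is CheckoutActivity. The section is about Shipping address. The input field asks: "Enter your city". A valid value is: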

Finally, the prompt dataset is fed to GPT-3, whose output is used as the input content. The researchers evaluated the effectiveness of this approach by comparing it against a number of baseline approaches, including DroidBot and Humanoid, as well as through a human evaluation of the quality of the generated inputs. In addition, they performed a usefulness evaluation on 106 Android apps available on Google Play, integrating QTypist with existing automated testing tools. In all cases, they say, QTypist was able to improve the performance of the existing approaches.
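For the generation step itself, a minimal sketch of querying the model and using the completion as text input might look as follows. It assumes the OpenAI Python client (v1 API) with the `gpt-3.5-turbo-instruct` completions model standing in for the GPT-3 models used in the paper; the harness that types the result into the widget is hypothetical:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def generate_text_input(prompt: str) -> str:
    """Ask the model to fill in the blank and return a short, cleaned answer."""
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # stand-in for the GPT-3 models in the paper
        prompt=prompt,
        max_tokens=10,    # text inputs are short, e.g. a city name
        temperature=0.7,
    )
    return response.choices[0].text.strip().strip('"')

prompt = ('The app page is CheckoutActivity. '
          'The input field asks: "Enter your city". A valid value is:')
text = generate_text_input(prompt)
# A test driver would then type `text` into the widget, e.g. via a
# hypothetical call such as driver.send_keys(widget_id, text).
print(text)
```

Keeping `max_tokens` low and stripping surrounding quotes helps keep the completion short and clean enough to be typed directly into an input field.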

While the researchers' initial work on QTypist is promising, more work is needed to extend it to cases where the app does not provide enough context information, and to apply it to scenarios beyond GUI testing.
