How can manual testers take advantage of LLM adoption in testing?

Just because you're 'not technical' doesn't mean you can't benefit from LLMs


This is the first in a series of posts that follow up on questions I’ve been asked during AMA sessions for different companies on the subject of Generative AI and testing. If you want me to discuss Generative AI, testing and quality engineering, I offer free, one-hour, online ‘Ask me anything’ sessions for teams. Reach out to me on LinkedIn if you want to talk more.

The first question that has been frequently raised at AMAs is:

How can testers without a deep technical background in AI start to engage with and benefit from LLMs (Large Language Models)?

Having a ‘deep technical background’ is a tricky thing to define. But assuming the person asking the question sees technical as someone who can create and read code, let’s explore it from a few different perspectives.

Working with LLMs doesn’t require technical skills

One of the reasons Generative AI has become so popular is because it is accessible to anyone, regardless of technical ability. A simple prompt, written in natural language, can elicit a response. For example, to generate some test data, I can write:

You are a test data generator. Create two room bookings in an XML format for a hotel booking platform that includes name, address, check-in date, check-out date and room number. Check that each room booking is unique to the other before outputting all bookings.

And get the following response:

<HotelBookings>
    <Booking>
        <Name>John Doe</Name>
        <Address>123 Elm Street, Springfield</Address>
        <CheckInDate>2024-10-01</CheckInDate>
        <CheckOutDate>2024-10-05</CheckOutDate>
        <RoomNumber>101</RoomNumber>
    </Booking>
    <Booking>
        <Name>Jane Smith</Name>
        <Address>456 Maple Avenue, Rivertown</Address>
        <CheckInDate>2024-11-10</CheckInDate>
        <CheckOutDate>2024-11-15</CheckOutDate>
        <RoomNumber>202</RoomNumber>
    </Booking>
</HotelBookings>
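Asking the LLM to check uniqueness is useful, but we can also verify the claim ourselves rather than trusting the model. As a minimal sketch (the function name and choice of fields here are mine, not part of any prescribed approach), a few lines of Python can parse the generated XML and confirm no two bookings are identical:

```python
import xml.etree.ElementTree as ET

# The XML returned by the LLM, pasted in as a string.
xml_data = """<HotelBookings>
    <Booking>
        <Name>John Doe</Name>
        <Address>123 Elm Street, Springfield</Address>
        <CheckInDate>2024-10-01</CheckInDate>
        <CheckOutDate>2024-10-05</CheckOutDate>
        <RoomNumber>101</RoomNumber>
    </Booking>
    <Booking>
        <Name>Jane Smith</Name>
        <Address>456 Maple Avenue, Rivertown</Address>
        <CheckInDate>2024-11-10</CheckInDate>
        <CheckOutDate>2024-11-15</CheckOutDate>
        <RoomNumber>202</RoomNumber>
    </Booking>
</HotelBookings>"""

FIELDS = ("Name", "Address", "CheckInDate", "CheckOutDate", "RoomNumber")

def bookings_are_unique(xml_text: str) -> bool:
    """Return True if no two <Booking> elements share all the same field values."""
    root = ET.fromstring(xml_text)
    seen = set()
    for booking in root.findall("Booking"):
        key = tuple(booking.findtext(field) for field in FIELDS)
        if key in seen:
            return False
        seen.add(key)
    return True

print(bookings_are_unique(xml_data))  # prints True for the data above
```

This kind of lightweight verification is itself a testing habit worth keeping when working with generated data.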

This isn’t a novel discovery, but it’s important to remember what an LLM is replacing. In the past, creating an output like the one above may have required tools that demanded deeper technical knowledge, such as writing or operating code to generate test data. Just because we don’t have technical skills doesn’t mean we don’t have technical needs in our work, and an LLM offers an accessible way to translate those needs into results.

That’s not to say that getting valuable outputs from LLMs is always easy, though. Learning about prompt engineering and applying it in our prompting is vital. But fortunately, it is an equally accessible skill to learn, regardless of our technical ability.

In Software Testing with Generative AI, I demonstrate throughout each chapter different prompt engineering tactics and how to use them in service of our testing requirements. Each tactic relies on employing different patterns of natural language, something we are all capable of doing. Take, for example, the prompt mentioned earlier. In it, I added the following sentence:

Check that each room booking is unique to the other before outputting all bookings.

This uses a tactic called ‘Checking for assumptions’, which involves adding sentences to a prompt that ask the LLM to verify its own output. It doesn’t guarantee accuracy, but it increases the odds of getting a better response and reduces hallucinations.

As we can see, this tactic doesn’t involve any actions that are considered technical in the traditional sense. No code or other technical artefacts are required, just an appreciation of the patterns of words we can use in our prompts to maximise the value of the output. These techniques can be learnt by anyone, regardless of their technical experience and confidence.

Utilising LLMs in different testing contexts

Demonstrating how to use LLMs in testing is core to Software Testing with Generative AI because I believe they can be used across a wide range of testing activities. This means LLMs have the potential to add value in areas that are not deemed ‘technical’. A lot has been shared about using LLMs to support test case creation or automation, but I believe there is also value in the following areas:

Analysis / Risk identification

LLMs have the potential to act as recommendation tools for our analysis of features and consideration of risk. I find success comes from spending time making sense of a feature and identifying risks ourselves before we use an LLM, then providing that information to the LLM to help suggest new ideas. By doing this, we maintain control of the direction of our analysis and can use what the LLM feeds back as jumping-off points for other areas of exploration. This means an LLM can support us without taking over the analysis and potentially misleading us with hallucinations.

Exploratory testing support

During an exploratory testing session, we carry out a range of activities, from setting up test ideas to trying out new ones as the session unfolds. Just like with analysis and risk identification, we can utilise an LLM to aid our exploratory testing: we set the direction we want to go in, then use the LLM to generate the data we need, scripts we would like to use, or even suggestions that act as heuristics to trigger new ideas.

Reporting

One area I am very interested in is how LLMs can take existing material and transform it into other formats, for example taking raw markdown notes and converting them into a testing story. Multimodal LLMs, ones that accept more than one format of input, such as text and images, can be used to convert mind maps or written notes into more formal write-ups if required.
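To make the note-to-report idea concrete, here is a small sketch of how the conversion could be wrapped in a helper. The prompt wording, the function name and the commented-out client call (including the model name) are my own assumptions for illustration, not a prescribed approach:

```python
# Sketch: turning raw markdown test notes into a more formal testing story.

def build_report_messages(markdown_notes: str) -> list[dict]:
    """Assemble a chat-style prompt asking an LLM to rewrite notes as a report."""
    return [
        {"role": "system",
         "content": ("You are a test reporter. Convert the raw markdown testing "
                     "notes you are given into a clear testing story with sections "
                     "for scope, findings and risks. Do not invent findings that "
                     "are not present in the notes.")},
        {"role": "user", "content": markdown_notes},
    ]

notes = "# Login session\n- tested password reset\n- BUG: reset link expires instantly"
messages = build_report_messages(notes)

# With a real client, these messages would then be sent to a model, e.g. with
# the OpenAI Python SDK (model name is an assumption, substitute your own):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
# print(reply.choices[0].message.content)
```

Note that the system message includes a ‘do not invent findings’ instruction, the same checking-for-assumptions pattern discussed earlier, applied here to keep the report faithful to the notes.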

These are just a few examples, but notice how little they focus on the technical aspects of testing or software development. What is required for success with LLMs is the ability to break down complex tasks and then identify where and when LLMs could be used. At no point are any of these suggestions focused on replacing the whole testing activity, but by being mindful of the work we’re doing, we can see small points in which LLMs can be utilised in many different ways.

LLMs offer an opportunity to become more technical

One last thing to note is that becoming a bit more technical in your role isn’t inherently a bad thing, but it is hard to know where to start, what direction to take, and how to access the right type of training material for you. Whilst becoming more technical is a personal choice, LLMs can help us do so in a way that is tailored to our needs. In the past, I have used LLMs to educate me on technical topics such as machine learning, as well as to break down complex code for me.

What I like about using LLMs to help me better understand technical topics is that it feels like a safe space to ask questions. Sometimes, asking questions can make us feel vulnerable as if asking the question itself is a sign of failure. With an LLM, there isn’t another human on the other end of the line, so it can feel safer to ask those ‘silly’ questions.

It should be said, though, that a healthy dose of scepticism is needed when using LLMs to educate us. We should always be aware that LLMs are prone to hallucinations or, if they haven’t been updated for a while, liable to give us outdated information. That’s why it’s important to corroborate what we’ve learnt with other sources, as well as to test out our newfound knowledge!

Conclusion - LLMs lower the technical bar

Let’s return to the question posed at the start of the post:

How can testers without a deep technical background in AI start to engage with and benefit from LLMs (Large Language Models)?

As we can see, there are many ways in which we can leverage LLMs regardless of our technical background. There are certain skills we need to develop, such as prompt engineering to get the most out of an LLM, and a healthy scepticism of what is returned so we don’t get misled. But these skills don’t require a deep technical understanding to get started. In the end, the best way to learn is to experiment. So, regardless of your technical background, I encourage you to identify an opportunity in which an LLM might be of use and see how you get on. You’ll discover pretty quickly whether using LLMs is the right fit for you.

If you enjoyed this post and would like me to join your team to discuss Generative AI, testing and quality engineering, I offer free, one-hour, online ‘Ask me anything’ sessions for you and your team. Reach out to me on LinkedIn if you want to talk more.