Postman Flows: 1 Flow To Smoke Test Your API Estate

One of the things that I love most about airing a new talk is the new ideas it sparks. On Friday, I delivered my talk “Go with the Flow” for the first time, to the lovely internal community at Dunelm.

LinkedIn Post from Dunelm, who were the first to give feedback on my new talk around Postman Flows

Credit for inspiring this post goes to Adam Pike of Dunelm, who mused during the talk that Flows would make it possible to see the full suite of APIs interacting with each other in Postman.

And it got me thinking. We know Flows has this superpower of being able to call any other collections in a workspace. If you have different collections for all your different APIs, wouldn’t it be handy to be able to visually see the key connections and check they’re all talking to each other?

Here are a couple of ways I think it could work.

Simple Ping Smoke Test

Example smoke test Flow showing several requests being checked and errors logged out to Slack

In this example, I’m using three requests, which can belong to different APIs. I’m then checking the status indicator – however, you can just as easily check for status code != 200, or for a code outside a range of allowed values, or similar. If all is well, this outputs to the console log; if there is a problem, a Slack message is sent to the team with the Flow ID to alert them to the issue.
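As an aside, if you’d rather assert this inside the request itself rather than in the Flow, the equivalent check in a Postman test script is only a few lines. Here’s a minimal sketch, assuming a simple allow-list of acceptable status codes:

```javascript
// A minimal status check in a Postman test script.
// The allow-list is illustrative – swap in whatever your APIs should return.
const allowedStatuses = [200, 201, 204];

pm.test("Status code is in the allowed range", function () {
    pm.expect(allowedStatuses).to.include(pm.response.code);
});
```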

Advanced Integration Smoke Test

But what if what you want to keep an eye on is whether APIs are working when strung together in a real-world customer journey? Will the Auth API talk to the Orders API? Will the Location API talk to the third-party Google API? Perhaps you have mocks for third-party services, but want to check that they hold up their end of the bargain and give you what you expect, and that your internal APIs will respond in the right way?

Flows can allow you to create that visual map of your API estate.

I see it developing like a spiderweb, with a start button in the middle branching off into different workflows, all executing at the same time. πŸ•ΈοΈ

Here’s a very rough idea of what that could look like:-

Flow containing 2 integration tests running concurrently

The Yellow Flow shows API 1 authenticating, posting and deleting. The Blue Flow shows a different API flow entirely – you can of course use multiple APIs in a single flow if you wish. Using the colour feature, we can colour-code each flow to make it easy to see whether they are running as expected, tagging each API in a different colour (mocked APIs in grey, for example). If not, we evaluate the status code and, if we don’t like what comes back, we send a message to Slack.

Once you are confident you aren’t generating heaps of information that will just be ignored, and that there are no false negatives (remember to test the tests!), you might even want to get this up on a monitor and schedule a run every 5 minutes or so, just to give you the confidence that everything is up and running in the environments you need things to work in.

Thanks again to Adam for the inspiration – I think this might be a useful feature to have.

Hope this helps, til next time!

OpenAI: The-Tool-That-Must-Not-Be-Named ⚡

“No, I don’t want to hear what ChatGPT thinks, I want to hear what you think”

So said a software testing friend recently, when, in answer to a question, I responded that I’d researched that very topic using OpenAI’s ChatGPT for an upcoming interview and written down the salient bits in my notepad. It stung a bit, because I thought I’d been clever to get a second opinion, but it was a hard no on their part. The agency of thought had to come from me; what a computer had to say wasn’t what they were interested in.

Maybe in times to come we’ll look back on those kinds of exchanges wistfully, and Peter Kay-esque comedy sketches will be written (by computers of course) saying “remember when folk used to actually care what a human thought about the best way to do something!”.

It’s been fascinating to see both the level of interest and the level of distaste emerging around this new tool within the QA community. People are happy to take a look at it, and then very quickly seem to come to an irrevocable, Marmite-esque conclusion that either:-

“This is the future! It’s incredible what this thing can do – look, it helped me do X, Y and Z, and with far less pass agg than Stack Overflow. Love it”

Or

“This tool is dangerous. We should avoid it at all costs, and if we do use it, treat it with extreme care. Relying on the information it produces (which is often incorrect) without the ability to critically evaluate it will lead to some terrible results. Look – I asked it X,Y and Z and it came up with some absolute mansplaining tosh that sounded great but there was absolutely no factually correct substance to it. Hate it.”

The tool-that-must-not-be-named

For those of you unfamiliar with Harry Potter, the evil Lord Voldemort was considered so powerful, terrible and omnipresent that to even utter his name was something shocking. Everyone thought about him, and knew about him, but only those with incredible skills as wizards would dare to mention his name.

OpenAI and other AI tools such as Lensa may quickly become tools-that-must-not-be-named within professional tech circles. In other words, tools that a lot of people actually use, but don’t openly acknowledge for fear of retribution. A bit like a company advertising for a “manual tester”, or the quarter-final of the FIFA World Cup being the most-watched TV event of the year (in the UK). Just what the testing world needs, another thing to argue about interminably – hurrah!

As a tester, I love that people are using their noggins to evaluate a new tool. And it has genuinely educated me to learn about some of the more negative sides of the AI world – and there are plenty, so I am glad there are people out there talking about that stuff!

I also believe in confirmation bias: we look for information to justify our inherent beliefs, and place less importance on things that cause us cognitive dissonance by diminishing or trivialising them. We, at least, are still human after all.

So what can we acceptably use OpenAI for?

However, I believe in shades of grey. I believe that there is a middle ground, and a set of acceptable use cases for this suite of models in particular, which will evolve – many of which I am already forming the habit of using:-

AI as an explainer

Tech is full of acronyms, weird expressions with several meanings (hello, Lambda!) and differences of thought. So much so that it is baffling to outsiders, to people trying to enter the industry, or even to those of us with several years under our belts, in all honesty!

This smoke-and-mirrors language can feel gatekeepy, and it’s exclusionary to say that people have to learn everything through experience only, or somehow magically know all the same things you do. I recently met up with a group of recruiters new to software testing who were overwhelmed by the amount of buzzwords, dos and don’ts and terminology they had to get their heads around.

Example

Example ChatGPT extract asking to explain the difference between C# and .NET in a straightforward way

Beware

I think for basic definitions such as this one, the AI is probably good enough to be reasonably accurate – at least as accurate as a Google or Stack Overflow search. However, I’d be careful using it for more detailed information, or for information about recent events, as the output is only as good as the data set – which in OpenAI’s case currently ends in 2021. At the very least, when asking a tricky question you should try to cross-reference the key facts the engine gives you elsewhere – we’ve all had that icky moment when we’ve realised a Wikipedia entry has been modified after we’ve already relied on the definition!

AI for experimentation (and fun!)

I think as testers, we love learning new stuff. We are as magpies to the shiny free tech glimmery gold. And, it’s got to be said, there is joy to be had from asking a computer to write a poem about something tech-related in the style of a gruff Yorkshireman/robot/sarcastic salesperson. There just is.

Example

I used Postman Flows (anyone who reads this blog knows what a huge fan I am of that feature) to automate a workflow that ran an OpenAI auto-complete prompt and then output the results to a Slack channel using their APIs. In my case, the prompt was “give me the top 3 headlines this week on Postman” – this could easily be leveraged into a scheduled run each week using the new scheduler on the collection runner. The raw equivalent is sketched below the screenshot.

Somewhat fuzzy picture showing a Postman Flow sending a request to an OpenAI API and piping the response to Slack
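For the curious, the same round trip can be sketched outside Flows as two raw HTTP calls – here in plain JavaScript (Node 18+ for built-in fetch), with the API key, webhook URL and model name all placeholders or era-specific assumptions:

```javascript
// Sketch: ask OpenAI a question, then pipe the answer into Slack.
// OPENAI_API_KEY and SLACK_WEBHOOK_URL are placeholders for your own values.
const completionRes = await fetch("https://api.openai.com/v1/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "text-davinci-003", // the completions model current at the time of writing
    prompt: "Give me the top 3 headlines this week on Postman",
    max_tokens: 150,
  }),
});
const { choices } = await completionRes.json();

// Forward the generated text to a Slack channel via an incoming webhook.
await fetch(process.env.SLACK_WEBHOOK_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: choices[0].text }),
});
```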

Or even: computer-generated pictures of a cat eating a pizza, anyone?

Beware

There is so much to evolve and iron out here, with images in particular. For example, I’d caution against uploading any pictures of yourself, as the rights to what AI does with that data stop being yours (no one needs nude deepfakes). If you have a strong conviction against AI-generated art (or art which has been modified by AI without acknowledging or compensating its originator), then probably steer clear altogether. Moving forward, we will have to ask ourselves of pretty much everything we see: could this be fake?

Summary

I remain just one of a multitude of opinions on this subject – mine more uninformed than many. So read this blog and take its advice with the same critical evaluation that you apply to the rest of your testing life. I will remain interested, curious and open to hearing and learning about the complex and evolving opinions on this topic.

2023 Gifts For Software Testers

Wait – you’re leaving/you’ve been promoted/you’ve done something awesome for us, and we want to say thanks?

Need some thoughtful inspiration on what to buy a loved one or colleague who works in Quality Engineering (QE)/Quality Assurance (QA) that will put a smile on their face?

Here are my top tips:-

Books

There have been several stellar software testing books released recently, available digitally (hello last minute.com) or in good old-fashioned paper form.

images of front covers of the software testing books mentioned

Memberships

Often, we testers aren’t fortunate enough to work for an organisation with a huge personal training budget, so we miss out on anything that requires a subscription or sits behind a paywall, which can make finding decent content a bit more tricky. Want to help?

screenshot from MoT website with reasons to go Pro

Other bits

Of course, most testers just want regular nice things as presents. But if you want to get something more generic, or even give them the choice while still being thoughtful, maybe a voucher could do the trick?

  • Home Office Stuff – things to make that home environment a bit nicer – think plants, posh stationery, maybe a small whiteboard or a cool picture. Designworks do some great bits.
  • Gift Card – did you know you can get a Ministry of Testing gift card? Now you do! There’s lots of cool swag on there, including TestSphere cards, Would Heu-Risk It and all the hoodies and caps a tester could wish for.
sample images of Designworks stationery

Please note – I am not sponsored by any of these people or companies; they are just genuinely things I think are good.

Hope it helps put a smile on the face of a QA who deserves it.

T’ra for now!

OpenAI: Three ways it can help software testers

Opportunity knocks

I’ve been playing around with the suite of OpenAI tools that have recently appeared in public beta. I doubt they will be free to use forever, so now is a good time to have a knock about with them if you want to see what AI can do.

Hit the links to go straight to each area:

Dall-e for image generation using Postman API πŸ“·
ChatGPT for inspiration and content creation πŸ’‘
Codex for checking test script syntax and first drafts

I am coming at this from a very novice level of understanding of algorithms, AI, machine learning and all that jazz. The buzz on LinkedIn and Twitter is growing and it would appear that battle lines are starting to be drawn, but if you want to check out this technology for yourself, here are a few places to explore.

Where to start

First things first, sign up for an account

Go to https://beta.openai.com/signup/

You’ll need to provide an email address and a phone number, both of which must be verified. Call me paranoid, but I worry about my records being leaked by a service such as this, plus as a tester I like to be able to use non-personal records, so I used Mailinator, whose enterprise level has both unlimited private email addresses and an SMS service. It’s mine, but it doesn’t use my personal email and phone number. You might need to sign up for the separate tools (Dall-e, ChatGPT etc.) separately too.

What can I do?

Dall-e Image Generation for Test Data (or fun!)

There are lots of ways to do this, but for me the quickest way of knocking around any API is via Postman. If you want to use Dall-e via API calls to generate images on the fly, you can currently do that in OpenAI and Postman by following these steps:-

1. Generate an API key

2. Create an OpenAI environment in Postman. Add in the URL and the API key

3. Fork the OpenAI endpoints from the Postman Fun APIs workspace created by Postmanaut Ruby Sutter. Save as a new collection in your workspace of choice.

4. Ensure the authentication token points to the API key which was saved in your environment variable

5. Select the environment created earlier. Open the image generation endpoint, click Body, enter something funky into the prompt and click Send

6. Finally, select the Visualize option in the response body to view the images – awesome, huh!

Images can be used as test data, generated on the fly and fed into your tests.
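For reference, the raw call behind that image generation endpoint is small enough to sketch outside Postman too. This assumes the v1 images endpoint as it stood at the time of writing, plus your own API key:

```javascript
// Sketch: generate an image from a text prompt via the OpenAI images endpoint.
const res = await fetch("https://api.openai.com/v1/images/generations", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`, // your own key
  },
  body: JSON.stringify({
    prompt: "a cat eating a pizza",
    n: 1,            // how many images to generate
    size: "512x512", // 256x256, 512x512 and 1024x1024 were the options at the time
  }),
});
const { data } = await res.json();
console.log(data[0].url); // a temporary URL pointing at the generated image
```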

Use ChatGPT to research and even write content

YouTube Video showing Open AI’s ChatGPT tool answering the question “how openai can help software testers”

Once you have registered and verified your account, you are allowed to put in any text you wish and see what ChatGPT has to say about it. Here is the output in full, when I asked the question “Write a short post on how open ai can help software testers”:-

Open AI is a leading provider of artificial intelligence technologies and has the potential to help software testers in a number of ways. For example, Open AI’s GPT-3 language model can be used to automatically generate test cases, making it easier and faster for testers to cover a wide range of scenarios.

In addition, Open AI’s machine learning algorithms can be used to automate certain aspects of software testing, such as regression testing and performance testing. This can help testers save time and reduce the amount of manual effort required for these tasks.

Furthermore, Open AI’s technology can be used to analyze large amounts of data, such as test results and performance metrics, to identify trends and patterns that may not be immediately apparent. This can help testers better understand the behavior of the software they are testing and make more informed decisions about how to test it.

Overall, Open AI has the potential to significantly enhance the capabilities of software testers and help them to more effectively and efficiently test complex software systems.

OpenAI

But how can you use this as a tester now? Research! Ask it stuff, even create blog posts or draft other content you need, as well as generating, say, a fixed number of words without relying on lorem ipsum.

Use Codex to cross-reference or draft test automation scripts

Youtube video using Codex to generate test scripts (code extract not shown on mobile view)

Like the others, this still has a way to go, and I don’t see it ever replacing skilled test automators. But as a neat cross-referencing tool, one that can easily give some inspiration on correct formatting, syntax and so on, this is a good thing to know about. There was a very interesting LinkedIn post of someone generating a lot of automation code for tests.
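To give a flavour of what I mean, here’s the sort of first draft a prompt might yield – hypothetical output, worth cross-referencing rather than pasting in blind:

```javascript
// Hypothetical Codex output for the prompt:
// "Write Postman tests that check the response is JSON and returns within 500ms"
pm.test("Response is JSON", function () {
    pm.expect(pm.response.headers.get("Content-Type")).to.include("application/json");
});

pm.test("Response time is under 500ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(500);
});
```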

Other areas with promise are the content moderation and text completion endpoints, which seem very cool as well.

I’m still very much exploring this developing tool set, and look forward to learning from others how they are utilising it (if at all) to add another tool to their testing toolkit.

Bye for now!

Postman Flows Early Access: Feature Guide

I’ve been wrapping my head around the latest Postman Flows early access release, and this one is a biggie. Of course, it is still subject to further change, but as outlined by the team in the linked post, the major changes to the current beta version of the low code API workflow feature called Flows are:-

Start at the Start

  • Back by popular demand, the return of the Start button – don’t call it a comeback!

Simple Block List

A massively simplified list of blocks – probably only around 50% are left (those that didn’t make the cut include the Test Summary, Create Durables and Conditions blocks, to name a few).

Youtube video showing the simplified block list in Postman Flows

Hooking You In With Webhooks

Webhooks can now be added in order to trigger Flows from the cloud. This makes CI implementation of a Flow possible, as the webhook URL that is generated when a Flow is created can be saved and called like any other Postman request, as well as allowing a Flow to be triggered automatically by an event, say, a Slack or Discord message. I’ve tried calling this from the CLI and holy smokes, it is fast!
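Since the generated URL behaves like any other HTTP endpoint, a CI step could trigger a Flow with just a couple of lines – sketched here in JavaScript, with a placeholder URL:

```javascript
// Sketch: kick off a Flow from CI by calling its generated webhook URL.
// The URL below is a placeholder – use the one Postman generates for your Flow.
const res = await fetch("https://flows-webhook.example.com/your-flow-id", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ triggeredBy: "ci-pipeline" }), // optional payload
});
console.log(res.status); // expect a 2xx if the Flow was triggered
```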

Terminals Terminated

It’s the end of Terminals – Log blocks can now be added instead, which pump data to the console log.

YouTube video showing console log entries appearing for tests following execution of a Flow

Flows Query Language (FQL)

The arrival of Flows Query Language (FQL). FQL aims to low code-ify the data that is used in our API requests and responses, allowing that data to be easily queried, accessed, reused and changed in a much simpler way than by writing complex JavaScript pre-/post-request scripts against the APIs themselves.

An example Evaluate block, using FQL to add variable strings together to create a query for OpenAI

I suspect FQL will be subject to tweaks, but everyone acknowledges that the click and hope method used prior to this was pretty painful. To my mind it looks a lot more technical (and therefore a bit more scary for a new user to pick up) this way, so I’m hopeful that with plenty of feedback the team can continue to make the experience simpler.

But in order to use Flows for more advanced operations, there does need to be a trade-off between ease of use and functionality. FQL is trying to bridge that gap by allowing us to (see the script sketch after this list for a comparison):-

  • Generate standalone data (e.g. current date/timestamp) to use
  • Pull data from a Flow (e.g. a response body value) to use
  • Create our own variables (e.g. the number of times to iterate a test) to use
  • Manipulate data (trim it, combine it etc.) to turn it into something more useful
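For context, those four things map onto the kind of glue code you would otherwise write by hand in pre-request and test scripts. A rough sketch, with every name in it purely illustrative:

```javascript
// The sort of scripting FQL aims to replace (all names illustrative;
// note pm.response is only available in a test script, not pre-request).

// Generate standalone data – a current timestamp
const timestamp = new Date().toISOString();

// Pull data from a response – e.g. a body value
const bookingId = pm.response.json().bookingid;

// Create our own variables – e.g. the number of times to iterate a test
pm.environment.set("iterations", "5");

// Manipulate data – trim it, combine it
const label = `${pm.environment.get("prefix").trim()}-${bookingId}`;
```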

Here is Postman’s own FQL guide, which I’ve used extensively for reference.

Youtube video showing an early attempt to work out the Evaluate block using FQL

Summary

This feature is finally coming close to General Availability, and it feels like it too – Flows is much more polished and functioning well. It is worth getting the early access version just to play around with some of these features, especially for folks who are either completely new to Postman or regular users who want to keep up to date with what it can do.

Postman Flows How To: Generate Test Data

In this series of blog posts, I give short tutorials on how to accomplish something using Postman’s no/low code API feature Flows.

This time it is generating test data. If you need to run a set of steps repeatedly to create data before you can execute your tests, then here’s an easy way to do this, no code required (a scripted equivalent is sketched after the steps, for comparison).

Steps:
1. Create a Flow of the steps needed to generate data, e.g. Send Request then x, y, z.
2. Add a Loop N Times block, with an inbound connector of Number. Select how many iterations you wish to run in the Number block (e.g. I want to create 5 bookings, so I will enter 5).
3. After the Loop N Times block, add a Create Data block. Inside this block, create a list pointing to /data (in other words, the numeric input to the block that you created above).
4. Add a For Each block after the Create Data block. This connects to your steps created in step 1 above, and tells Flows that you want to iterate through the looped data. Ensure the For Each block is set to pick up /for.
5. Run the Flow and check the data has been correctly generated, hurrah!
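For comparison, here’s what the scripted equivalent of that Flow might look like – the loop the blocks save you from writing. The endpoint and payload are purely illustrative:

```javascript
// Sketch: create five bookings before a test run, the scripted way.
// The endpoint and payload are illustrative – swap in your own data-setup API.
const ENDPOINT = "https://api.example.com/bookings";

for (let i = 1; i <= 5; i++) {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: `test-booking-${i}` }),
  });
  console.log(`Booking ${i} created with status ${res.status}`);
}
```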

Here is a short video running through these in more detail.

Youtube video on how to generate test data using Postman Flows

Postman Flows How To: Override Order Of Execution

This is a quick post to explain the two ways to link blocks together using the current version (version 10) of Postman’s low/no code feature, Flows.

TL;DR: check out the video to see the flow in action – apologies for my somewhat noisy cat! 🐈‍⬛

What are connectors?

As discussed by Postman, Postman Flows has two different types of connector for linking one block to another:-

  • Connection – the solid line
  • Signal – the dotted line

We use connections as standard. But it is useful to know how and when to use signals. Signals change the default order of execution, so if you want to make sure that block B waits for block A to complete before kicking off, then use a signal to do this.

How To Add A Signal Connector

  • Click the grey dot in the bottom left-hand corner of a block, e.g. block A
  • Drag it to the “On” box of the block you wish to pause, e.g. block B
  • You should see a dotted line connect the two, and the On change to Off in block B

The Off indicator tells us that block B will be considered switched off until block A has executed. During execution of the Flow, as block B executes, its status indicator will change to On.

Tutorial Video

Youtube video demonstrating the two kinds of connectors

I hope you’ve enjoyed this tutorial, and weren’t too distracted by my pesky moggy. 😺

Til next time!

Postman Flows How To: Send A Message To Slack

Here’s a short tutorial post showing how you can fire off a message to your Slack channel when using Postman’s low/no code feature Flows. The good news is this is really easy, and you don’t need to know how to code to do this.

Completed Flow with steps

TL;DR: Check out my short video here

Setup

1. Go to https://api.slack.com/apps

2. Click Create New App

3. Select the workspace you want to post a message to

4. On the newly created app, select Incoming Webhooks in the left-hand menu

5. Toggle Activation to ON

6. Click Add New Webhook to Workspace

7. Select the channel to post messages to. Copy the cURL request.

8. Open Postman and click the Import button

9. In the Raw tab, paste the cURL request copied from Slack in step 7

10. Save the request in a new collection (e.g. “Slack API”)

11. Create a dynamic variable for the request body message text AND the webhook token (optional). Save the variables in a new Slack API environment.

12. Send the POST request. Confirm the Slack message is received (yay!). Save an example in Postman.

13. Open your Flow. You can now add a Send Request block for the POST Slack Message request, adding in a variable for the message you wish to send. Remember to connect it to the reject output of a conditional block if you only want to send the message when something has failed.

Et voilà! Your message will now be posted into Slack should the condition fail to be met.
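Under the hood, all the Flow is sending is a JSON POST to the webhook. For reference, the same call sketched in plain JavaScript, with a placeholder webhook URL:

```javascript
// Sketch: the JSON POST the Flow fires at the Slack webhook.
// The webhook URL is a placeholder – paste in your own from step 7.
await fetch("https://hooks.slack.com/services/T0000/B0000/XXXXXXXX", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: "⚠️ Smoke test failed – check the Flow run" }),
});
```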

Or check out my video to see this working:-

YouTube Video Entitled Postman Flows Send Message To Slack

Til next time!