Type of Tests: Taiko, Gauge UI and API level automation
Website Tested: Various including The-Internet, Gauge.org
This episode is a brief one, all about my continuing odyssey of creating a test automation portfolio. This time, I'm following the advice the incredible Steve Mellor gave me some time ago and diving into ThoughtWorks' open source library Taiko and framework Gauge (which I still inexplicably spell "Guage". every. single. time).
We’ve recently started using this where I work too, so having a sandbox to practice in has been cool.
The tests cover lots of basics to demonstrate the simplicity of Taiko and Gauge working together, such as interacting with various websites through smart selectors, form authentication, use of table-driven test data and mocks. It is very simple to pick up, and I found debugging nice and clear too.
My strategy was basic, namely:-
Install the cloned framework, and get it up and running.
Get the existing tests to run (this took longer than I thought, as a lot had changed in the couple of years since the original repo was created).
Add in a few new tests from scratch
Here is a video of the tests successfully passing when executed in VSCode.
I’ve published my repo as a template, which means you can use it as a basis to begin your automation framework. Check it out here:-
I was definitely reminded of the importance of having a jump off point – creating a lot of this stuff from scratch would have been extremely time consuming. Also, using an old repo is sometimes more trouble than it's worth (approx 50% of test cases in this did not pass first time round, but it was actually a good learning curve to try and solve the puzzle of why!). Onwards!
For a major upcoming talk in October, I've been thinking about how successful testers seem to carve out their own futures. In my twelve years of experience in this industry, I've worked with many people who on the face of it seem to be… jammy. They just never seem to struggle to find their next amazing role (or stay in their current one and get promoted, better terms and the like).
The thing that has interested me is that the testers who excel at this aren't necessarily the ones with the best CVs on paper. The brutal truth is that there is not always a positive correlation between your skills and your opportunities.
So what is the common theme amongst these successful folk? Perhaps it's right place, right time; perhaps it's just a good jobs market that keeps throwing great opportunities their way. Perhaps, a little. However, IMHO it's largely the number of people they are visible to. They make a point of investing time and effort into growing their reputation – whether that be at work with their immediate team and wider colleagues, or outside of work, e.g. with former colleagues, select recruiters or the testing community at large.
You can be an amazing tester, but if no-one knows you, they can't tell you about opportunities when they arise. And that could cost you dearly over the course of your career.
If you want to become more visible, my advice would be to invest your time on your reputation in the same ratio as you would on your technical skills.
One way of identifying your weak spots, or opportunities to improve even further, is to complete a Reputation Audit. I designed this myself, and it represents a high-level view of where you should look and the sorts of things you can do to turbocharge your reputation. It shouldn't take more than 5 minutes, and you can start implementing your own recommendations straight away.
Here is a link to a blank reputation audit to get you started:
Please reach out to me if you would like a blank reputation audit template for your own use and I will send you a spreadsheet – alternatively, if there is a way of adding an Excel file to this page without it costing me a fortune, then let me know how and I'll do that instead :o)
***warning*** any actions here must be sincere, and you must be willing to contribute for the benefit of others, not just what you can get out of the situation. Personally, I’ve grown a lot in confidence over the past few months just by doing things such as submitting questions for an AMA, posting a few bits in a chat alongside an online meetup, or reaching out to some of the incredible people in the software testing community and asking for their thoughts or help.
As a tester, I sometimes feel bombarded with the sheer amount of stuff I’m “supposed” to have mastered. In one article I read recently, at least 15 different technologies, toolsets, areas of tech etc. were mentioned that testers “need to know”. Whilst to some that probably seemed like a great challenge, to me it just felt completely overwhelming.
It's important to keep reminding yourself that you can't know it all; ultimately, what matters more is having a growth mindset.
Growth vs. Fixed Mindset
A few years back, I watched an amazing TED talk by Carol Dweck called "The power of believing that you can improve". Whilst it was aimed at children in education, I believe it translates well to those upskilling in the world of software testing. If you believe you can learn something, but you're just not there yet, you will be a lot more likely to practice it and master it than if you think you'll never get there.
I still have to battle the instinct that tells me "you're not a proper coder" / "you don't have a CS degree" / "you'll never be as good as that guy" – in reality, you're only competing with yourself. What is more important to recognise is that everyone is learning new stuff all the time, and if you keep chipping away at it, it does get easier.
Peter Simons did a great talk a few years back about learning automation, and this was his first slide:-
The message is clear – don’t be put off starting anything because you don’t know everything. Begin with small steps, accept it doesn’t have to be perfect, and learn one thing at a time. Refactor and iterate.
Thank you to the Bloggers Club at the Ministry of Testing for inspiring me to write this. I'd encourage anyone thinking about blogging to take a look at the Bloggers Club if you need a nudge.
Website Tested: Restful Booker API and Restful Booker UI
In this post, I want to continue to walk through my long-term goal of creating a test automation portfolio. This time it is a bonus section, because NUnit testing wasn't in my original set of tasks. However, I'm a big believer in not trying to reinvent the wheel, and after taking Brendan Connolly's excellent NUnit testing course on Test Automation University I was able to get a copy of his code working, which means there is a repository for me to dip into in future. I was also able to discuss my progress with Brendan, who was really helpful – so thank you, Brendan, for giving me your time.
My aim with this code was twofold:-
To get it working with the latest NuGet packages, Selenium version, etc. (parts of the original codebase were probably a few years old)
To make as many of the tests as possible pass and improve my learning by making slight modifications to the codebase
As I said before, this was pre-existing code from Brendan's TAU course on NUnit, which I highly recommend. It used the popular training site Restful Booker to show off lots of NUnit functionality, via tests which predominantly added rooms. In addition, some standalone introductory tests covered key features, e.g. equality assertions.
So, what does it do?
There are lots of test cases here (in fact I had to reduce the numbers to make it a little clearer). The tests are data-driven, reading external data sources (among others) to supply the test data. There are also a couple of API tests which create and retrieve a booking using the Restful Booker API.
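The data-driven idea itself is easy to illustrate. Here is a hedged Python sketch (NUnit does this with attributes such as [TestCaseSource], so this is the concept rather than the actual mechanics; the CSV content and the function under test are made up):

```python
import csv
import io

# Each row of an external data source becomes one test case, carrying
# both the inputs and the expected outcome. The data is illustrative.
DATA = """room_number,price,expected_ok
101,100,True
102,-5,False
103,abc,False
"""

def is_valid_price(price):
    """Toy validation rule standing in for the real system under test."""
    try:
        return float(price) > 0
    except ValueError:
        return False

cases = list(csv.DictReader(io.StringIO(DATA)))
for case in cases:
    result = is_valid_price(case["price"])
    # Compare the observed result against the expectation in the row.
    assert result == (case["expected_ok"] == "True"), case
print(f"{len(cases)} data-driven cases passed")
```

The nice property is that adding a test case is just adding a row – no new test code needed.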
Whilst the number of tests written into the code was small, the selection of test data and parameterisation meant the actual number of tests executed was over 100 – this comes in useful if you want to rerun a test with different inputs. Asserts were used to verify test results. One cool feature that I liked was [Pairwise] which, when inserted as part of the [Test] definition, allowed a sensible set of combinations of test data to be created automatically, avoiding an explosion of tests when, say, 3 or 4 different fields have been specified.
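To get a feel for why [Pairwise] keeps the numbers sensible, here is a rough Python sketch of the idea – a greedy illustration, not NUnit's actual algorithm, and the parameter names and values are invented:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """params: dict of parameter name -> list of values.
    Returns test cases (dicts) covering every PAIR of values at least
    once, rather than every full combination."""
    names = list(params)
    # Every (param, value) pairing that must appear together somewhere.
    uncovered = {
        frozenset([(a, va), (b, vb)])
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    candidates = [dict(zip(names, combo)) for combo in product(*params.values())]
    suite = []
    while uncovered:
        # Greedily take the candidate covering the most uncovered pairs.
        best = max(candidates, key=lambda c: sum(
            1 for p in combinations(c.items(), 2) if frozenset(p) in uncovered))
        suite.append(best)
        uncovered -= {frozenset(p) for p in combinations(best.items(), 2)}
    return suite

params = {"price": ["9.99", "20.00", "-1"],
          "accessible": [True, False],
          "room_type": ["Single", "Double", "Family"]}
suite = pairwise_suite(params)
print(len(list(product(*params.values()))), "full combinations")
print(len(suite), "pairwise test cases")
```

On the three parameters above this produces noticeably fewer tests than the full cross-product, while still pairing every value with every other value at least once.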
For accessibility purposes (thank you for the schooling @UndevelopedBruce!), I include an extract of the code alongside a screenshot of Visual Studio.
public void AddRoomWithValueSource(
    [ValueSource(typeof(TestData), nameof(TestData.CurrencyStrings))] string Price,
    [Values] bool accessible,
    [Values] RoomType roomType)
{
    var originalRoomsCount = adminPage.GetRooms().Count;

    var room = new Room()
    {
        Number = roomNumber,
        Type = RoomType.Family,
        Price = Price,
        Accessible = accessible,
        HasWifi = true,
        HasView = true
    };

    // ... the room is submitted via the admin page here (elided in this extract) ...

    var rooms = adminPage.GetRooms();
    var createdRoom = rooms.Last(r => r.Number == room.Number);
}
I don't want to give anyone false expectations of my supposed awesomeness here. To be frank, there is no way I would consider myself able to write this code myself (yet). What I was able to do was modify it, get it working, and (mostly) understand it.
In doing this I learnt a few very valuable lessons.
No one is above making mistakes
People really want to help you out (even super duper leaders in their fields who you assume are completely untouchable) – you just need to ask for it
Lesson #1: Everyone Makes Mistakes
So herein lies a story. I tried to get all of the tests in this repo which were supposed to pass (some were intentionally failing) to do so. One test in particular was just impossible for me to understand. I tried debugging, I tried adding console logs to check the values, I tried watching it run, and I had no idea why it failed. I assumed that, as the code was from a mighty instructor, it must be something I'd done to it that was the problem. I checked the code against the original .zip file and it was exactly the same.
So, completely stumped, I took to Twitter and asked for help.
Twitter is just magic. Within half an hour I'd received this – from the amazing Mark Winteringham – the very same guy who wrote the Restful Booker site I was trying to test.
It suddenly made perfect sense – it was the code which wasn’t right, not me! I checked his theory out by running the test and checking the first value in the list which appeared, and lo, it was that false value which was being read, not the value of the newly created row. I quickly updated the code to read “Last” instead of “First” and re-ran it. The relief at seeing that pass tick was palpable.
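The bug boils down to something easy to reproduce anywhere: if the list on the page already contains a row matching your search, "First" finds the stale row and "Last" finds the newly created one. A minimal Python sketch (the field names are illustrative, not the course's actual model):

```python
# The page already listed a room whose number clashed with the one the
# test created, so matching on the FIRST hit found the stale row.
rooms = [
    {"number": "101", "accessible": False},  # pre-existing row on the page
    {"number": "102", "accessible": False},
    {"number": "101", "accessible": True},   # the row the test just created
]

def first(seq, pred):
    """Return the first matching item (what the broken test did)."""
    return next(r for r in seq if pred(r))

def last(seq, pred):
    """Return the last matching item (the fix)."""
    return next(r for r in reversed(seq) if pred(r))

stale = first(rooms, lambda r: r["number"] == "101")
fresh = last(rooms, lambda r: r["number"] == "101")
print(stale["accessible"])  # False - the stale row, so the assertion fails
print(fresh["accessible"])  # True - the row the test actually created
```

Same data, same predicate – the only difference is which end of the list you read from.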
The course code was focussing on teaching other elements, and as this test wasn’t modified the issue was never exposed during the course itself. Mark’s reply to this is a lesson I will take with me for a long time.
Everyone makes mistakes.
Lesson #2: Ask for help
This all ties in with the above story and quote. It's easy to think of people as above you and feel like you are bringing them a silly question that is beneath them to answer. But the truth is, I think people actually love being able to answer questions, and if they know the answer straight away and it helps someone out, it actually makes them more likely to respond, not less. Also, people are generally helpful and lovely and that's just the way it is – so there, inner saboteur. :o)
I’m actually very happy with how this first part of the portfolio has worked out considering my starting point, but let me be clear that in no way do I want to give anyone the impression that this code was a few minutes in the making. This took days of effort, and one helluva lot of Googling. But we all have to start somewhere, and I’m sure the steep learning curve will mean things will be faster next time. Another bonus is I’ve ended up with a nice template that I can use as a jumping off point in future.
As a result of Coronavirus restrictions, my local gym introduced a pre-booking policy. Members could book either a gym or a pool session up to 7 days in advance. I thought it would be fun to write a robot to automate this booking process.
So, what does it do?
Here is a video of the booking process. Note I do not confirm a booking (as this could not be undone without a phone call to the gym, since they have not implemented an online cancellation feature), but the robot does the following:-
Login using secure credentials (triggering an email if this fails)
Navigate to a date – in this case default but could be specified by the user in a pop up box
Select a time – again using a default value but user can overwrite this
Select an activity of either pool or gym – defaulted to gym, however if the user types a value into the pop up this is then read from a SQL Server Express database (that I created from scratch 🤓) and the corresponding activity number is returned
Confirm the booking or alternatively remove the booking
Logout and close the browser
All key processes contain try/catch blocks for exception handling.
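The database lookup step can be sketched roughly like this – a Python sketch with an in-memory SQLite table standing in for my SQL Server Express database, and with made-up schema and function names:

```python
import sqlite3

# Stand-in for the SQL Server Express lookup: map a typed activity name
# to its activity number, with a default and try/except handling loosely
# mirroring the robot's try/catch blocks. Schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activities (name TEXT PRIMARY KEY, number INTEGER)")
conn.executemany("INSERT INTO activities VALUES (?, ?)",
                 [("gym", 1), ("pool", 2)])

def activity_number(user_input, default="gym"):
    """Return the activity number for the user's choice, defaulting to gym."""
    choice = (user_input or default).strip().lower()
    try:
        row = conn.execute(
            "SELECT number FROM activities WHERE name = ?", (choice,)).fetchone()
        if row is None:
            raise LookupError(f"unknown activity: {choice!r}")
        return row[0]
    except sqlite3.Error as exc:
        # In the real robot, a catch block like this would log and alert.
        print(f"database error: {exc}")
        raise

print(activity_number(""))      # empty input falls back to the gym default
print(activity_number("Pool"))  # case-insensitive lookup of the pool activity
```

In the real flow the typed value comes from the pop-up box and the connection points at SQL Server Express, but the lookup-with-default shape is the same.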
The code was written in UiPath Studio (a free tool) in its proprietary visual language, and some customised fields were written in VB.NET. Flows and tests are executed either manually via Studio or via a scheduled robot created in UiPath Orchestrator.
For those new to UiPath or Robotic Process Automation in general, an end to end flow is broken down into key workflows/sequences, which utilise customisable drag-and-drop functions to organise the flow. The process can then be embedded into a state machine framework which allows for reliable execution and error handling.
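The state-machine idea can be sketched in a few lines. This is a toy Python illustration of the Init / Get Transaction / Process / End flow, heavily simplified from UiPath's REFramework:

```python
# Toy state machine: Init -> GetTransaction -> Process -> (repeat) -> End,
# with per-transaction errors handled so the flow keeps running.
def run(transactions, process):
    state, results, queue = "Init", [], list(transactions)
    while state != "End":
        if state == "Init":
            # Real frameworks would open apps and load config here.
            state = "GetTransaction"
        elif state == "GetTransaction":
            state = "Process" if queue else "End"
        elif state == "Process":
            item = queue.pop(0)
            try:
                results.append(process(item))
            except Exception as exc:
                # Error is recorded, not fatal - the flow moves on.
                results.append(f"failed: {exc}")
            state = "GetTransaction"
    return results

print(run(["book gym", "book pool"], str.upper))
```

The real REFramework adds retries, screenshots and notifications on failure, but the shape – loop over transactions, route errors through a known state – is the same.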
High Level Overview
Here are the key features of the framework:-
The code is on my GitHub Repo, and I encourage feedback if you have time. Another new tool that I incorporated was GitHub Desktop, which meant I could commit code directly from UiPath into GitHub and add additional files from file explorer instead of using GitHub's web UI (which I've never really gotten on with), so GitHub Desktop will be my default for managing version control in future.
There's a piece of development work alongside the testing element (in fact, the tests were the easiest bit by far). An Excel spreadsheet, included as part of the REFramework standardised template, allowed each part of the framework to be tested in isolation. This takes seconds.
Although it was helpful to automate a process for a website I actually use in real life (even though I have no intention of using this automation for anything other than a learning exercise), automating this website was a mistake. As a portfolio entry this has less value than if I'd chosen, say, a practice site mentioned in Angie Jones' original blog post, such as OpenCart or Restful Booker, because someone wanting to see my code running won't be able to – the login credentials are stored locally in my Windows Credential Manager, and for obvious reasons I won't be sharing them on GitHub. Next time, I will use something generic so that my code will run for anyone.
An update to the above: I have now frozen my gym membership, so at present even I cannot run the code, make bookings or improve it – I'm kicking myself that I didn't use a generic site, as this sort of thing wouldn't have happened!
I'm also guilty of over-engineering this solution in the quest to prove to myself that I can write something "proper". Bloody imposter syndrome. When I looked through Angie's GitHub repo, her code had such a nice, clean, simple structure. It would have been much better to do less and keep it simple, especially as this is something I may need to explain in future job interviews.
Finally, I know full well that if an experienced RPA tester/developer were to review my code they would inevitably find space to improve it (and if anyone is up to the task, I’d genuinely be very grateful). I know, for example, that nested if statements aren’t cute. But I honestly could not think of a better solution. I think this is perhaps where pair programming/code review with an experienced person comes into its own. Two heads are definitely better than one.
That said, I'm still in a celebratory mood having completed the first of my five automation portfolio tasks. Time for a short break, then on to the next one!
If someone were to come to an interview for a testing related position armed with their own GitHub repository containing samples of their work I can only think this would be a positive thing, for the following reasons:-
Even if it isn't perfect, it gives conversations about automation a jumping off point – "can you tell me why you did X", "what would you like to do to improve Y", "when wouldn't you use automation for Z" etc.
It demonstrates an enthusiasm and passion for testing
It allows a nervous interviewee an opportunity to show their skills without just speaking about them
If they don’t yet have commercial experience in that area, it is a great way to demonstrate what they could do if given the right opportunity
There may be some useful code which (assuming it is the property of the person who wrote it and not copied and pasted from someone else’s repo) may be applicable to the work-related project and the ideas can be quickly lifted and shifted to add value.
Now, let's be clear: I have only been working in automation for a few years, so I am still learning and would still class myself as beginner/intermediate level in lots of areas. But I'm also keen to take advantage of the free online courses and resources aimed at people just like me, so I'm setting myself a long-term goal with this activity.
Goal – in 1 year's time I will attempt to have the following on GitHub:-
Millions and millions of years ago, back when I was studying for my Law degree, I was surprised to learn that far from the truth being objective and fixed, it was a malleable thing.
Mooting societies, where aspiring lawyers could pick a side to debate about a particular case, or topic, were popular. And they were popular because either side, with the right amount of well constructed arguments and material, could win.
Fast forward to my current life as a software tester, and this subjective fluidity also applies to risk. There are many different ways to prioritise bugs, issues or features – including whether to raise them at all if you think they won’t get fixed.
The one thing that seems to consistently hold true is:- most people assume everyone thinks the same about risk as they do.
The confirmation bias is strong in this one.
I've often been shocked, even when an agreed categorisation table is in place for both priority and severity, to triage bugs that I would deem super critical, only for the Product Owner or wider team to prioritise something I would consider far more trivial instead – sometimes I understand why, other times the rationale seems flawed. As a tester, I see risk everywhere, which can be a good thing, but it needs to be kept in check. Sometimes a pragmatic way forward needs to be accepted by everyone, where not all bugs can be fixed, and we "fail forward".

UPDATE: An interesting approach to bug raising that I'd like to investigate more was outlined at a recent Lean Coffee morning I attended. Stuart Day, Principal QA and Agile Coach at Dunelm, advocates not raising bugs at all (or hardly ever) – most bugs take more time to create, triage, prioritise and fix than they do to just have a chat with the developer and get them sorted there and then (or even fix them yourself if you can). Whilst I've informally done this from time to time, I've never seen it openly advocated by a senior manager in this way, and it definitely piqued my interest. I guess you can be the judge of whether that approach is more or less risky.
Of course, the risk appetite is greater or weaker depending on other factors too, such as:-
the impact if things go right – e.g. the government programme to use drones to fly medical equipment to the Isle of Wight was brought forward at the start of the Coronavirus pandemic in order to “accelerate the pace of development”
A great example of how we all assess risk differently is with Coronavirus (I realise this post is weirdly Covid-19 heavy, but bear with me!). I have lots of conversations with friends and family who consider "bending" the rules absolutely fine, whilst others stick rigidly to the government guidelines. Everyone feels their approach to risk is the right one. Often, people are quick to criticise others for exposing themselves to "unnecessary" risk while justifying the risks they themselves take: "I just don't understand how these big crowds are going out and mixing together. I had a party in my back garden yesterday, but that was OK because it was just my neighbours and some family." Risk is in the eye of the beholder.
I think in general the ubiquity of software means that most people have a reasonably high risk appetite when it comes to, say, an app, or even an early release of any software product. They accept that if they are “early adopters” or “beta testers” the development team are still working through issues. The important thing IMHO is to find the time and the space to fix the “cosmetic” or “usability” issues, often found in user testing or by testers themselves, which negatively impact their experience of the software. Workarounds shouldn’t have to be used forever, and companies who continue to treat their users in this second rate way may pay a heavy price. Perhaps this is where Stuart Day’s approach to fast bug fixing would pay off?
Something you can't believe not everyone knows because it's so. bloody. fantastic. It is the key that opens the door to financial independence, the opportunity to work with like-minded people from all over the globe, and a chance to be a part of delivering things which can affect millions of people. And we should cherish it.
Let me tell you something about me. I grew up on the East Marsh area of Grimsby, in the county of North East Lincolnshire, UK. At the time I was growing up, it was in the top 25 most deprived wards in the country. Twenty-fifth, out of 32,844.
So well known was it that the East Marsh was such a rubbish area to live in, the English Indices of Deprivation report was kind enough to embed an advert for a burglar alarm installer onto the page giving the scores for the East Marsh itself. The % scores of deprivation read like binary code – better than 0% of areas in England for education, employment, income deprivation, etc., etc. – statistically speaking, there was nowhere worse in England for me to have grown up.
When I was 18 someone told me I was more likely to have had a child by my age than make it to university. It stung (the truth always does I guess), but it also lit a fire under my bum.
I was obsessed with getting out. My only ambition was making it to University. Fortunately, I was blessed with a hard working professional for a mother, as well as determined and capable matriarchs as grandmothers, so I never doubted it was an option, and indeed my hard work got me to the University of Sheffield, where I studied Law. But I used to smile at the people I met at Uni for whom attending was a given, something that was nailed on. For me it was (and still is, when I come to think about it) my best ever achievement.
But what does this have to do with testing you ask?
Well, I guess those early experiences of growing up in such a deprived area, as well as working in a bunch of other tremendously overworked and underpaid "menial" jobs, taught me two things: don't ever go back, and appreciate the good things. My mantra in my early twenties was "follow the money" – I sought work on a single criterion, wherever paid the best for my skillset, so determined was I not to be resigned to my statistical fate.
When, like most testers, I fell into testing by saying yes to an opportunity without really knowing what I was getting in to, I could not have known how much amazing goodness would be coming my way over the coming decade.
When I hear people in our industry complaining about working conditions, or not having enough free coffee, or only getting a 2K pay rise, I only have to think of the life I left behind to realise how lucky folks are to have what many would consider such paltry concerns. I feel like crying when people from back home get in touch to tell me there are no jobs and (despite being perfectly capable) the geography of where they live has limited their chances. They've never heard of software testing either, but I wish they had.
I feel incredibly lucky to work in such an unbelievably amazing industry, at such a great time in history for tech – it's exciting, challenging, interesting and, yes, for what you are doing, very lucrative indeed. You don't need to talk to verbally abusive strangers, you don't need to clean toilets or ashtrays, or pack boxes with frozen fish for hours on end (and yes, I speak from experience on all these matters). Generally speaking, you work with well-educated and like-minded people, and you get to be a small part of delivering some brilliant, game-changing stuff.
If you work as a tester in 2020, you’ve lucked out my friend.