
DevOps Decrypted: Ep. 18 - AI in testing and the art of running an internal conference


Vanessa Whiteley
20 July 2023

Summary

Welcome back to another episode of DevOps Decrypted! In this episode, Laura, Jobin, Rasmus and Jon are joined by special guest Sandesh Kumar – an Engineering Manager here at Adaptavist Group. Sandesh tells us how his team uses AI in quality engineering and testing and why it’s not the existential threat to jobs that everyone’s making it out to be. But the big story on the agenda is Adaptacon 2023, our internal, global conference. Jon explains the challenges of organising an event like this and its benefits – even when the talks appear to have nothing to do with our business.

Transcript

Laura Larramore:

Hi everyone! Welcome to DevOps Decrypted – the show where DevOps is what we're here for.

I'm your host, Laura, and I'm here today with Jobin, Rasmus, and Jon – and a special guest, Sandesh Kumar, who works as an Engineering Manager at Adaptavist Group.

Welcome, everyone!

Jobin Kuruvilla:

Hello, hello!

Rasmus Praestholm:

Hello!

Laura Larramore:

Alright! We're going to start today by talking a little bit about an event that we had recently called Adaptacon – Jon, why don't you give us a little overview of what that is and what it entails?

Jon Mort:

Yeah – so Adaptacon was our internal conference, where we allowed people to talk about whatever they like – some of their interests, something that might inspire their colleagues.

And we did it over two weeks. We're a global organisation – you can't stick a pin in a calendar and get everybody in the same place at the same time – so we spread it over two weeks, and tried to create a space for as many people as possible to share their ideas and to listen to ideas from around the organisation. So yeah, it's an internal conference, but in the Adaptavist way!

Yeah. And it was a lot of fun.

Laura Larramore:

Yeah, it was. It was fun.

Jobin Kuruvilla:

The best thing I like about this initiative is that there were so many different types of talks. There was one from Simon, and there were several from technical folks and non-technical folks alike. It was so interesting. I couldn't join all of them, so I've set myself the task of going through some of the interesting ones – but obviously, work gets in the way! The ones I did attend, I was so, so happy that I did. I'll hopefully catch up on most of the rest.

Jon Mort:

Yeah, to give you some idea of the scale of it – there were 70 talks across the two weeks. So going to them all is pretty much impossible. But yeah, it was a lot of content!

Jobin Kuruvilla:

Yeah, I felt like looking at the AWS agenda – you have so many parallel tracks going on, and you're confused: where should I go, right? And this was pretty similar. But the good thing is, it's all recorded, so you can go back and watch some. So, a brilliant initiative.

And I know, Jon, you put a lot of effort in behind the scenes, setting this up and running it. So maybe you can talk us through what it took to set it up. I mean, how did you even come up with all this? 70 talks?! Was there any screening involved?

Jon Mort:

Well, it was a whole team that did this. It was very much... not just me! There's a team behind it. And we decided we wanted to do it after some of the conversations at our end-of-year celebration meet-up; a couple of people said, "Hey, we should showcase some of the talent we have!".

We did something very similar a few years ago called Springcon, but on a much smaller scale – and so, hey, we should do that again. That kicked off the idea.

So we put out a survey to gauge interest. We'd heard a few people being enthusiastic about it, but, you know, is there an appetite for it across the organisation? The answer came back: yes, there's appetite. Then we put together a call for papers to get speakers to submit, and essentially cobbled the rest together – the calendar and so on. Maybe we didn't get the schedule out in time, and some of it was a bit of a shambles organisation-wise, I would say!

But that mattered and didn't matter at the same time. One of the things we wanted to do was put on as many talks as we possibly could. So initially, we were thinking about just doing it for one week.

But with the number of talks that came through, selecting between them was just too hard. There was so much quality stuff. So we decided to spread it over two weeks and say yes to everyone! There was no one we rejected.

There were a couple of people who couldn't talk for various reasons, but yeah, we tried to accept as many as possible.

Rasmus Praestholm:

So maybe this is like picking out your favourite child, which is tricky. But what were your favourite talks, Jon?

Jon Mort:

So, Sandy did a talk on Dungeons and Dragons, and why you should give it a go. I don't play roleplaying games – it's not generally my sort of thing – but I came away thoroughly inspired to try it.

The way she talked about it with such passion, and the link she made to self-improvement – that was really, really cool.

But I also liked a really kind of fun one about hooking up a beer pump to a Jira board, which is just brilliant.

And yeah. So I think we should get that in all of the offices!

Laura Larramore:

Yeah, that one was fantastic, and I'm interested in doing something like that. One of my favourites was the one about AI bias – the speaker mentioned it was based on a 5-week course, and I thought, I need that 5-week course! That's interesting to me because my background is in the humanities, so bringing in the societal side along with the tech was cool. I thought it was superb.

Jobin Kuruvilla:

Again, as I was saying, there were multiple ones. It's definitely like picking out a favourite child. I was going through the talks, and there was this one from Simon, where he talked extensively about where Adaptavist is today as the Adaptavist Group, where we are going, and the direction we are setting.

Which I thought was very useful. It was happening at the same time as my team call, so I told the team: don't bother listening to me! Let's go and listen to Simon. So we all did that – a wise decision, I'd say. But then there were other talks, more technical ones, like the one we'll focus on today.

But at the same time, there was, for example, a Kubernetes talk from Gopal from my team about how to scale GitHub on Kubernetes and things like that.

So… there was a real variety of topics. There was one about leadership – I forget who gave it, and I didn't get to see it, but there was a call-out for that particular talk in our team channel saying everybody should listen to it. I guess that's what's so interesting about AdaptaCon.

How about you, Rasmus? You asked the question, did you have a favourite one?

Rasmus Praestholm:

Sure I did. There were several interesting ones – lots of new topics, lots of AI stuff, naturally – and I liked those talks.

But the one I'll call out as my favourite was actually on behaviour-driven development.

Partly because John Kern did it, and both of us also work on hobby projects, so we share some things there – including the thought of using those for dogfooding, internal projects, and as targets for the kind of neat technical topics that can be hard to justify, especially as consultants. Even as a regular developer, something like behaviour-driven development can take a lot of time and a certain mindset to get into.

And that's hard to do if you're trying to do consultancy on something – writing a plugin or the like. But on hobby projects, you can do that, and really enjoy it.

But it still doesn't fix the fact that, as neat as BDD is, it's a lot of work – testing in general is – and it often doesn't get much attention.

So towards the end of that talk, the question arose: hmm… can you have AI help you write these tests, but have them be meaningful – not just arbitrarily testing that the same 42,000 numbers plus the other 42,000 numbers result in the same number? That's like… [grumbles].

But could you feed a Given/When/Then to an AI that knows enough about the codebase to express it in code?

I think that would be neat.

So I hope to come out of that with more ideas and some pointers to other neat places.
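
As a rough sketch of what Rasmus is imagining – assuming the openai Python package, with an illustrative model name and a toy basket example, none of which come from the show – you could hand an LLM a Gherkin scenario plus the source under test and ask it to draft step definitions:

```python
# Sketch: ask an LLM to turn a Gherkin scenario into pytest-style step
# definitions. Assumes the `openai` package (reads OPENAI_API_KEY from
# the environment); the model name and toy code are illustrative.
from openai import OpenAI

client = OpenAI()

scenario = """\
Feature: Shopping basket
  Scenario: Adding an item updates the total
    Given an empty basket
    When I add a book priced at 12.50
    Then the basket total should be 12.50
"""

# The code the AI should "know enough about" - inlined for the sketch.
source_under_test = """\
class Basket:
    def __init__(self):
        self.items = []
    def add(self, name, price):
        self.items.append((name, price))
    @property
    def total(self):
        return sum(price for _, price in self.items)
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Write pytest step definitions for the Gherkin "
                    "scenario, using only APIs present in the source."},
        {"role": "user",
         "content": f"Source:\n{source_under_test}\nScenario:\n{scenario}"},
    ],
)
print(response.choices[0].message.content)  # draft steps, to be reviewed
```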

Jobin Kuruvilla:

Yeah, yeah.

Hey, Jon, I have one more question. I'm sorry to put you on the spot again.

But you have already organised a few conferences like this – you mentioned the Springcon and AdaptaCon. Are there any learnings for our listeners to take away from this? Would you have done anything differently?

Jon Mort:

Yeah, I mean, that's excellent timing – we've just come off the retrospective for the event, so there are a bunch of things. One thing we deliberately did was remove constraints on speakers and make it clear: you can talk about anything and everything.

So if you're tempted to do that for an internal conference, I would thoroughly recommend it, because you get such a variety of opinions and perspectives.

The other thing is to make sure the talks are short enough that you're not overcommitting people who come along and attend lots of them.

So we only put on half-hour slots, which some people weren't keen on – they would rather have had a longer time to talk. But actually, the focused time means you're respecting the people attending. And you have to work on your talk a bit more to get it down to that: 20 minutes plus 10 minutes of questions.

Those'd be the two things. But I would encourage any organisation thinking about doing it – it's such a great thing to give that platform to the talented people who make up your staff. Sure, we have a bunch of talented people within Adaptavist Group, but we don't hold a monopoly on gifted people.

And giving people a platform to share is, I think, excellent for development – not just for the people speaking, but for the whole organisation.

Jobin Kuruvilla:

Yeah, that makes total sense. And speaking of talent – I think we have one such talent in the podcast today. Sandesh, welcome!

Sandesh Kumar:

Yeah, thanks, thanks, Jobin. That's a massive introduction – I don't know if it's true, but thank you!

Jobin Kuruvilla:

I mean, we don't mince our words, do we?!

Rasmus Praestholm:

So, Sandesh, do you want to quickly describe what you do at Adaptavist?

Sandesh Kumar:

Hi Rasmus, I'm an Engineering Manager at Adaptavist – I manage the quality engineering function, mainly within the product engineering business units. And last year, I also started working with the Cloud Foundation team, which is responsible for building the cloud platform our product teams build on top of.

Jobin Kuruvilla:

Sandesh, would you like to speak about your AdaptaCon talk, and what the topic was?

Sandesh Kumar:

Yeah, sure. Before that, I want to say how good it was to be part of Adaptacon this year. It was remarkable, just because of all the new companies that are part of the group – the Adaptavist Group – now, and we have a big team in Malaysia taking part in such a conference for the first time.

And this was a bit more inclusive than the previous ones, because we had sessions in the morning and the evening, covering different time zones. So this one was special. I did a couple of talks, but the one I'm going to talk about today was about how AI will play a key role in the future of quality engineering.

Jobin Kuruvilla:

That seems like a loaded topic – AI is in the news for everything. Rasmus already mentioned behaviour-driven development, and AI probably has a role to play there as well. Speaking of your talk in particular, what aspects of testing did you cover regarding AI?

Sandesh Kumar:

So I was trying to be a bit more generic here – not talking about one specific field within quality engineering, but how AI can help us evolve quality engineering as a whole. I touched on how AI can be used for test data generation, how it can be used for identifying edge cases, and how, using natural language processing, it can help you understand user stories and explain them back to you, so that you can come up with scenarios to test.

As quality engineers, we depend on these things to generate useful, practical test cases and scenarios for testing the product.
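
For a flavour of the workflow Sandesh describes, here's a minimal sketch, assuming the openai Python client; the model name, user story, and prompt are illustrative placeholders rather than the team's actual tooling:

```python
# Sketch: prompt an LLM to propose test scenarios, edge cases, and test
# data for a user story. Package, model name, and prompt wording are
# illustrative assumptions, not what Sandesh's team actually runs.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

story = ("As a registered user, I want to reset my password via an "
         "emailed link so that I can regain access to my account.")

prompt = (
    f"User story:\n{story}\n\n"
    "1. Restate the story as testable behaviour.\n"
    "2. List test scenarios as Given/When/Then, including edge cases "
    "(expired links, reused links, invalid email formats).\n"
    "3. Suggest concrete test data for each scenario."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": prompt}],
)

# A quality engineer still reviews and curates whatever comes back.
print(response.choices[0].message.content)
```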

Jobin Kuruvilla:

So you have a big team. Do you use any of this right now in your work today?

Sandesh Kumar:

So, yes. Since earlier this year, when ChatGPT was made available to the entire company, I've been encouraging the quality engineering team to start using it to explore what role it can play in our workflow.

The team has been using it for things like identifying test scenarios, generating test strategies, and test planning.

Even using it as a pair programming tool, right? If you encounter issues while writing test scripts, you can ask it the question.

These are some of the use cases for which we have been using AI.

There was a talk at Adaptacon which – being a bit biased – was one of my favourites, where quality engineers [inaudible] talked about how they use AI in their day-to-day workflow, which was very interesting. There were a few things they discovered while using AI.

Jon Mort:

Yeah, I thought that was a great example of people picking up a tool and integrating it into their workflow so they can do more, and get down to the bits where they bring real value to the testing.

I agree; that was a good talk; I enjoyed that…

Sandesh Kumar:

And the cool thing about that presentation was that AI also generated the deck itself. For the script and the slides, they used ChatGPT to inspire them, and that was neat.

Jobin Kuruvilla:

Very interesting. So coming back to testing, it looks like one of the key points in that talk was predicting defect patterns, right? Again, it's an exciting topic. When you think about it, one of the problems we have today is… there are tons and tons of logs, and there's no way a human can go through all of them, come up with "hey, these are all the defects we have", and then predict a pattern from that.

Do you think AI can potentially do a better job of predicting those defect patterns in the future?

Sandesh Kumar:

I think so, but I don't think it will replace what we have. I believe it will enhance how we do things. The way I see AI – and the way I've been encouraging the team to use AI – is to think of it as a tool, another tool that we use, and we need to learn to use that tool to make our work more efficient and better.

So yes, it will make things easier and quicker for us to get things done. But you still need someone with domain expertise sitting there, analysing the results of the AI and making informed decisions.

So it's probably about using it as a sounding board, or a way to seek guidance, and then using your own intelligence to act.

Jobin Kuruvilla:

You mentioned AI as a tool – that's an interesting one. Do you see new tools coming into the mix to provide those AI features? Or do you see the tools you're using today incorporating more AI into their functionality, so that those tools do the hard work… for you?

Sandesh Kumar:

I think there will be a lot of integration with AI; we're already seeing it, just within quality engineering. There are entire companies building AI into their existing toolsets. There's a company called Testim, for example, whose AI predicts or generates scenarios based on existing test cases or application usage.

Then there are automation tools that can self-heal: if there's a change to the UI, they can automatically fix the failing test based on the changes. So such things are already happening, but I think we'll see more and more going forward, because this kind of integration makes it easier for us to do more.
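
To make "self-healing" concrete, here's an illustrative sketch of the general idea in Python and Selenium – not how Testim or any particular product implements it (real tools use far richer heuristics, often ML). The URL and locators are placeholders:

```python
# Sketch of a self-healing locator: try the primary selector, then fall
# back to alternatives and report which one "healed" the lookup.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, locators):
    """locators: ordered list of (By.<strategy>, value) pairs."""
    primary, *fallbacks = locators
    try:
        return driver.find_element(*primary)
    except NoSuchElementException:
        for locator in fallbacks:
            try:
                element = driver.find_element(*locator)
                print(f"healed: {primary} -> {locator}")  # log for review
                return element
            except NoSuchElementException:
                continue
    raise NoSuchElementException(f"no locator matched: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL

submit = find_with_healing(driver, [
    (By.ID, "submit-btn"),                       # primary, may have changed
    (By.CSS_SELECTOR, "button[type=submit]"),    # fallback 1
    (By.XPATH, "//button[contains(., 'Log in')]"),  # fallback 2
])
submit.click()
```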

Jobin Kuruvilla:

I'm just rewinding what you just said… So there are tools that will develop test cases, but they will also look at the test cases, see the failing ones, and then go and fix the test code. That… sounds like doing the job we're doing today!

Sandesh Kumar:

Yes… the answer is yes, but there are still limitations. It needs to understand the context, and there are chances it might get things wrong. But I'm sure it's going to evolve and get over that.

Rasmus Praestholm:

I'm also interested in specifically what sort of tools and integrations you're seeing out there that help with this. As much as I'd wish to just ask, "hey, what's the prompt I can feed to ChatGPT to generate perfect BDD for my entire hobby project?", I probably need to go in there and figure out that, okay, I can copy-paste things back and forth with ChatGPT.

But what kind of in-IDE or similar assistance that uses AI as a tool, as you say, should I be looking for?

Sandesh Kumar:

So if you're talking about BDD specifically, I'm unsure if there are any. I did a POC using the OpenAI APIs and created an app on Jira. The idea behind it was that it would read the story description and generate test cases, acceptance criteria, or test scenarios for you.

And all I was doing was prompting it to do that. So if I can think of it, I'm sure there are tools on the market doing the same – I just haven't encountered one. But to give you an example, GitHub Copilot can generate unit tests for you based on the code you write. I think that's an excellent example of where the market and the industry are going.
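
Sandesh's POC isn't shown on air, but its rough shape is easy to sketch. This assumes Jira Cloud's REST API (the issue endpoint is real) plus the openai client as before; the site URL, credentials, issue key, and prompt are placeholders, and real code would walk Jira's rich-text description format properly:

```python
# Sketch of the Jira-story-to-test-scenarios idea: fetch an issue's
# description over Jira Cloud's REST API, then ask an LLM for scenarios.
import requests
from openai import OpenAI

JIRA_BASE = "https://your-site.atlassian.net"  # placeholder site
AUTH = ("you@example.com", "api-token")        # basic auth with API token

issue = requests.get(
    f"{JIRA_BASE}/rest/api/3/issue/PROJ-123",  # placeholder issue key
    params={"fields": "summary,description"},
    auth=AUTH,
).json()

summary = issue["fields"]["summary"]
# Real code must walk Jira's Atlassian Document Format tree; we cheat here.
description = str(issue["fields"]["description"])

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{
        "role": "user",
        "content": f"Story: {summary}\n{description}\n\n"
                   "List acceptance criteria and Given/When/Then test "
                   "scenarios for this story.",
    }],
)
print(completion.choices[0].message.content)
```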

Rasmus Praestholm:

Nice.

Jobin Kuruvilla:

So, earlier Laura mentioned bias in AI – do you see that impacting quality engineering as well? Most of the things you mentioned are technical, so on the face of it, it makes sense, right? AI can do that; that's great; it's enhancing our capabilities. But do you see bias playing a part in any of this?

Sandesh Kumar:

I think there are a few things we need to take care of. AI is only as good as the data it is trained on. If data quality is not maintained – if you're not training the models with high-quality data – you'll end up with output that isn't useful. And if the data is already biased, you'll get biased responses. I think that's a big challenge at the current level of development: we cannot just trust what we get out of it. We need to make sure the data going into it is validated.

And I'm pretty sure there is bias, because of the data available for training. For example, let's say you're using a tool to generate scenarios, edge cases, or test data for a financial institution or a banking application – you need to train it with relevant data.

You cannot just train it with generic data and expect it to provide relevant information. So I think that is also important here.

Jobin Kuruvilla:

It sounds like it changes the way we work. But that would probably also mean upskilling your teams to work with the new tools and AI, right? What do you do there for your teams?

Sandesh Kumar:

So yes. Whenever I have this discussion about AI taking over jobs, I always refer back to the Industrial Revolution. I think I mentioned this in one of the blogs I wrote: yes, it did take away jobs, but it created so many new jobs, because now people had more time to do interesting stuff.

Also, when I started my career in quality engineering a while ago, automation testing was not as widespread as it is now. Many people were doing manual testing, and people were scared of automation testing, right? Because it would take over manual testing jobs. But it made everything so much better. People had to upskill; they became more technical; they started being valued more, because now you have a new skill set in the industry.

You're not just a tester anymore, right? I think AI will play the same part. You're not just going to use the tool – you'll start learning new skills, like prompt engineering, for example. It's not something you can copy/paste; you need to know how to prompt it. But that's something you'll pick up. You'll start learning how to train the models with data, and how to validate that.

That's something I don't know yet, right? So it's something I'd like to learn. It will make us better – it will make all of us who want to be part of this journey better.

Jon Mort:

One of the lenses, or mental models, I use to think about large language models is that they very quickly produce the average, or the boring, thought – the sorts of things you would expect to come out. You know, I've got this feature to test – it will produce the obvious test cases, scenarios, and things.

But the way I see it, because it can produce the obvious, you can spend more time on the things that bring actual value – you get more out of the same amount of time spent on a problem. I don't know if you've got any thoughts on that.

Sandesh Kumar:

It's true. It's like… it takes care of the happy path. And now you have more time to actually start breaking the system: considering edge cases, doing boundary analysis, finding data that can damage the system. Before, you didn't have much time for all of this – you'd focus on making sure the happy path works and the sanity checks are done, and suddenly the time is gone. But with AI, since you get that right away, you don't have to spend time on it. You can do more investigation, be curious about how the system behaves, and spend time analysing other user behaviours.

But at the same time, I think you can also work with AI to get that information. One example: when I started using ChatGPT to write automated test scripts for a Jira scenario on Jira Cloud, it didn't know it had to switch to an iframe to act on the controls or elements on the screen.

It was generating code, but it didn't know the application was within an iframe; when I prompted it, it corrected itself. So I think working with AI – prompting it to go deeper, or into a specific area – will make the entire testing process, and the scenarios it generates, better.

It can't do all of it on its own.
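
For anyone who hasn't hit the iframe gotcha Sandesh mentions, here's a minimal Selenium sketch of the fix; the URL and locators are placeholders, not the team's actual test scripts:

```python
# Sketch of the iframe gotcha: elements inside an iframe are invisible
# to find_element until you switch into the frame.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://your-site.atlassian.net/some-app-page")  # placeholder

wait = WebDriverWait(driver, 10)

# This is the step ChatGPT's first attempt missed: without switching
# into the iframe, the button lookup below raises NoSuchElementException.
wait.until(EC.frame_to_be_available_and_switch_to_it(
    (By.CSS_SELECTOR, "iframe#app-frame")))  # placeholder locator

button = wait.until(EC.element_to_be_clickable((By.ID, "save-button")))
button.click()

driver.switch_to.default_content()  # back out of the iframe afterwards
```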

Jobin Kuruvilla:

Yeah. And it's about preparing a team to do that right – which brings me back to my original question about upskilling. I also saw that you had another presentation at Adaptacon, about developing a team – it probably all comes together, right? How do you develop your team to work in this new era of AI and everything else?

Sandesh Kumar:

So, it wasn't actually a talk I did at AdaptaCon. This was more of an experiment we ran about two years ago, to see how we could give people some time for self-development. Before that, whenever I had one-to-ones with my team, they all wanted to focus on personal development. They wanted to learn new technologies, new tools.

But every single one said that time was the biggest factor, because they were always busy helping teams out and helping colleagues achieve their goals.

Which is fair, right? I've been in companies where your bonus or salary increments are tied to personal development objectives, and even there, it becomes tough to achieve anything because you are always busy doing something.

So, Jon was part of that discussion, along with Adam, who was head of product management at the time, and Yari. We got buy-in from the teams to let people spend half a day each week, within the sprint, on things they wanted to learn. It could be anything – it didn't have to be team-related.

A lot of them adopted it. We ran this for, I think, a couple of sprints, and we have the data – I wrote a blog piece about it somewhere. At the end, we did a survey to see how it went for the people who participated and for their teams. The teams didn't see any difference in output: velocity didn't go down, people were still productive, and they were still getting things done.

But there was a significant increase in satisfaction among the people who participated. They were able to focus on things they wanted to, and they were able to learn something.

And at the same time, they didn't feel like they were slacking off from work to do something else. Everybody was on the same page; nobody judged, and nobody felt ashamed to say in the stand-up, "I'm not working on this today because I'm learning something."

I think that made it easier for people to learn new things. This happened two years ago, and a lot of teams are still practising it – at least the quality engineering team is.

We have gone through different technologies and tools over that time, but now we're encouraging people to learn about AI and what part it will play. So people are using this time to take courses and apply what they learn.

Jon Mort:

Yeah. And it's something that's been picked up by teams other than yours – setting that time aside because they think it's essential. And it loops back round to AdaptaCon as part of that: giving time for, and encouraging, that development all hooks in together.

But the thing I liked about the way you did it was the data-driven approach with the surveys. It would have been easy to just hand people some time and go, "yeah, it's better", without knowing whether it had any effect – but you went to the metrics. I thought that was a perfect approach.

Sandesh Kumar:

I mean, if I remember correctly, I think it was your idea, Jon – to run it as an experiment and have the data to get the buy-in. So it was a good idea…

Jon Mort:

That's not my memory of it! My recollection is that we needed to do an experiment and test the hypothesis; it was your data-driven approach. So…

Jobin Kuruvilla:

I wouldn't mind taking credit for it, to be honest!

But being on the services side – I mean, this might be another experiment we could conduct, because it's probably easier on the product side; and again, this might come across as an excuse, right? On the product side, you're running sprints, and you can actually determine what needs to be produced at the end of the sprint. But working on the services side – Rasmus and Laura can probably attest to it – we work to customer schedules, and most of the time our biggest problem is that there are deliverables to meet, and suddenly something changes. The plan changes, and we have to act accordingly. So it's never that easy to say, okay, we'll set aside a day or half a day, because that's what we intend to do.

But then suddenly something changes, and everything falls over. Still, it's probably a good experiment to do, because – going back to how we use OpenAir for resource availability – we might say, hey, we block off specific time every week for this. That reduces our resource availability and our capacity.

And we can probably plan long-term based on that; it might be a good experiment to run.

Laura Larramore:

One of the things our team has been doing around that is learning sessions – the team I'm on has a lot of associates, so we'll set aside an hour where we all teach each other. We spend time learning, preparing for that, and teaching each other things, which helps a little. It's not as much as half a day a week, but it's one way to make it work on the services side, where time is a little more constrained.

Sandesh Kumar:

And another thing we did… when this personal development time was introduced, we didn't have associates on the team; we had experienced people.

Over the last year, we have introduced a lot of associates to the team, who probably need more support with development. So we ran onboarding sessions where we started building everything together – pair programming sessions. Everybody was part of it; everybody was learning together.

And what that meant was they all formed a bond as a group and started working together. They were all learning together, and they weren't afraid to ask questions they might otherwise have felt weren't the right or appropriate questions to ask. It made it easier for them to spin up quickly.

Laura Larramore:

Yeah, that's a great way to bring in new people, and people who are new to the industry. Well, I appreciate everyone's time today.

This has been great – I've enjoyed this conversation. I enjoy talking about these things, especially as AI ramps up; we all need to increase our skills and get a little more proficient at the more technical things that AI can't quite reach yet.

So thanks for joining us, everyone, to discuss AI and behaviour-driven development.

We hope you're enjoying the show. Let us know what you think on social media @Adaptavist – we look forward to continuing this conversation.

So, for Sandesh and Rasmus, Jon, Jobin and myself – this has been DevOps Decrypted, part of the Adaptavist Live Podcast Network.

Like what you hear?

Why not leave us a review on your podcast platform of choice? Let us know how we're doing or highlight topics you want us to discuss in our upcoming episodes.

We genuinely love to hear your feedback, and as a thank you, we will be giving out some free Adaptavist swag bags for your ongoing support!
