
DevOps Decrypted: Ep. 20 - The Magic of Metrics with Umano

In this episode, we speak to special guest Chris Boys from Umano. Umano is a platform dedicated to understanding the human side of team performance metrics, and Chris tells us all there is to know about it. Jobin also talks about DevOpsDays London, where Adaptavist will have a booth and a speaker taking the stage. Laura and Rasmus steer the chat around meaningful measurements and team behavioural change.

Vanessa Whiteley
6 October 2023

Summary

Welcome to episode 20 of DevOps Decrypted, a milestone achievement for our podcast! Milestones like this allow us to reflect on everything we’ve achieved, what we could improve, and how to shape our future – through the magic of metrics.

So it was probably good that we spent some time talking to our guest, Chris Boys from Umano, on this episode. Umano is a platform dedicated to understanding the human side of team performance metrics, and Chris tells us all there is to know about it.

Jobin tells us about DevOpsDays in London, where Adaptavist will have a booth and a speaker taking the stage as Laura and Rasmus steer the chat around meaningful measurements and behavioural change in teams.

Transcript

Laura Larramore:

Hello, everyone, and welcome to DevOps Decrypted! This is episode 20 – milestone 20 – of DevOps Decrypted, and I'm your host, Laura Larramore.

And joining me today are Jobin and Rasmus – and we have a guest speaker today; we're gonna interview Chris from Umano.

So welcome, everyone. To get us started, we'll talk a little about DevOpsDays in London, from 21 September through the 22nd – Jobin, do you want to give us some info about that?

Jobin Kuruvilla:

Yeah, definitely. This is probably the first time we are actually sponsoring DevOpsDays, so Adaptavist will have a booth at DevOpsDays in London.

We will have a representative from my team speaking there. So everybody who is in London and around, please do come visit us at the booth and listen to the talk by Jason, who’s going to be there. It's going to be a wonderful time.

Apparently, I'm also going to the DevOpsDays here in Washington, DC, next week.

For those of you who do not know about DevOpsDays, it's a worldwide series of technical conferences covering many different topics – you know, everything about DevOps. They started back in 2009 or so. It's going really strong, and at Adaptavist, we have been monitoring DevOpsDays for quite a while now. And this is the first time we get to sponsor one. So I'm very excited about this.

You should really check it out if you haven't been to one before.

Rasmus Praestholm:

Will there be anything in there about metrics?

Jobin Kuruvilla:

Well, not for this one! I mean, there could be? I haven't checked the agenda yet, but there could be something about metrics around all the DevOpsDays, right? But whether it is there in the DevOpsDays or not, it's going to be there in our podcast – isn't it?

Laura Larramore:

Yeah. So today, we have Chris Boys here to talk with us about Umano and their metrics system, how it works and everything.

So, Chris, if you want to give a little intro, we'd appreciate that.

Chris Boys:

So, Umano is a team analytics tool. We combine the power of metrics with automated insights to really help teams know where they're at and then align around action to improve.

Fundamentally, that is always the objective around metrics and measuring practices.

It's to help us perform at our highest level, and in a sustainable way, and in a consistent way, so that we can, you know, perform our best as a team.

And I think it's really interesting hearing you guys are about to launch into DevOpsDays, because so many companies today, engineering organisations, will enter the conversation of metrics through the lens of DORA.

So, the metrics that really help build and accelerate performance around continuous integration and delivery – for shipping value to customers.

For again, accelerating that feedback loop – another metric and another signal from which you can continuously learn and improve.

So, our kind of unique lens on the topic of metrics is: we love DORA, and we think we should go broader.

We think we need to look not just at the way we ship products from a CI/CD perspective and a pipeline perspective, but also look at the practices of team performance and understand ways of working as leading indicators – or inputs – that can lead to stronger outputs and ultimately outcomes in terms of that customer value.

So that's our mission. We're maniacs on a mission to really help teams know where they're at through metrics and learn faster so that they can build momentum faster.

Jobin Kuruvilla:

That's a very interesting topic that you brought up about team-level metrics.

Because when you talk about metrics, you know, there is always this conversation about service level metrics versus team level metrics.

And when you talk about DORA metrics, you mostly look at the service level, right? Things like deployment frequency, lead time, mean time to recover, change failure rate – all of that specifically focuses on the service you're operating, or the product, and things like that.
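
For reference, the four DORA metrics Jobin lists here can all be derived from basic deployment and incident records. Below is a minimal, illustrative sketch – the record fields are hypothetical stand-ins, not fields from Jira, GitHub, or any specific tool:

```python
# Minimal, illustrative sketch of the four DORA metrics listed above.
# The record shape (commit_at, deployed_at, caused_failure, restored_at)
# is a hypothetical stand-in, not a field set from any real tool's API.
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class Deployment:
    commit_at: datetime                  # when the change was committed
    deployed_at: datetime                # when it reached production
    caused_failure: bool = False         # did this deployment fail in production?
    restored_at: datetime | None = None  # when service was restored, if it failed

def dora_metrics(deploys: list[Deployment], window_days: int) -> dict:
    """Deployment frequency, lead time, change failure rate and MTTR.

    Assumes at least one deployment in the window.
    """
    failures = [d for d in deploys if d.caused_failure]
    restored = [d for d in failures if d.restored_at is not None]
    return {
        "deployment_frequency_per_day": len(deploys) / window_days,
        "lead_time_hours": mean(
            (d.deployed_at - d.commit_at).total_seconds() / 3600 for d in deploys
        ),
        "change_failure_rate": len(failures) / len(deploys),
        "mttr_hours": mean(
            (d.restored_at - d.deployed_at).total_seconds() / 3600 for d in restored
        ) if restored else 0.0,
    }
```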

And what you are mentioning is we should go broader. Of course we should. And apparently – I was actually looking at the agenda for the DevOpsDays in London as we were speaking – there is one talk which says, “Observability is too damn expensive”, which I found very interesting because it probably ties in with the topic we have at hand.

But anyway, what exactly do you mean by team-level metrics? And how does it actually differ from the DORA metrics or generally service metrics?

Chris Boys:

I think… So, when we talk about Umano team-based metrics, there are two aspects.

One is the quantitative side of measuring team practices. And so we look at workflow. We look at the way teams design and build, the way teams review each other's work from a quality perspective – the way they engage.

So, we look at collaboration, communication, and flow – information flow.

All of these are aspects of how teams team. And so if you get data into the hands of teams, they then are empowered and can be accountable for iterating on the way that they perform as a team.

The way they practise shipping products – the way they build, review, and engage in the way that they ship.

And so, for us, it's kind of those behavioural elements. So, looking at attributes such as speed, such as predictability, or progress.

All of these attributes in an agile way of working fundamentally have a set of signals. Those signals are metrics which help provide a view of how teams are tracking in performing in a way that matters most to them and in understanding a broader view of those practices.

It gives teams the observability to go: hey, this actually might be lagging. Let's double down on this and run an experiment to improve the way that we practise next sprint or next iteration – which is great.

But the other great thing we forget about is the power of metrics: it's to celebrate improvement. It's to really create the momentum and the feeling of progression in the way that we're teaming to do better.

And it's that celebratory element that, I think, is also so critical – that metrics can really inspire teams to perform better and to lean into their strengths and to really focus on what practices drive that team and help them be set up for success.

Jobin Kuruvilla:

Great, I mean, all of that sounds really great to me. But where do we get this information from? I mean, how do you actually monitor? What are the tools that you're collecting these metrics from?

Or is it something that we have to record within Umano? Or – how does it all work?

Chris Boys:

First of all, you don't need Umano. You have all of the data already at your fingertips. It exists in the tools that you use to create the products that you're creating for the customers that you serve.

And so you can use any means to extract that data. You can write queries within your tools to create the insights you want.

Some of the tools will already have a predefined set of metrics. So Jira, for example, will have lead time and cycle time, or velocity, or burndown – whatever those metrics may be.

What we do with Umano is plug into your tools, so we'll look across Jira or Azure on the issue-tracker side, and we'll look across GitHub, GitLab, and Bitbucket on the repo side.

We'll look at Slack or Mattermost for chat, and Confluence as the wiki.

And so we're looking really broadly across the toolset, which I think is critical to building that very single-pane view of practices across the workflow.

When we connect to those tools, we look for the signals through interactions and in the artefacts of what you're creating.

So, for example, we'll look across and scan the tickets. We'll look across commentary in the way you're reviewing each other's work in pull requests.

We look at elements of collaboration by the extent of interactions that are occurring, not just within your core team but across an extended team, to understand the complexity of your workflows.

So, all of these data points exist.

And depending on the level of maturity and capability of the company, you can default to the metrics that already exist in those tools. Or some very sophisticated companies – enterprises, no less – will have teams that are focused on creating data lakes in their Power BI or eazyBI tools, to merge not just their team-based data but other operational data, to create a view of whatever information they need to make better, more accurate decisions around performance.

So it's really dependent on where you're at and what questions you're looking to answer from your data set most critically.

I think once you start with the questions that you're seeking answers to, it will then be very quick to go looking for that data within your toolset to help you get the answers in a real-time view and ultimately make those decisions in an accelerated way.
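
As a concrete illustration of the "write your own queries" approach Chris describes, the sketch below pulls recently resolved issues from Jira's REST search endpoint and computes a simple cycle time. The domain, credentials, and JQL are placeholders; adapt them to the question you're actually trying to answer:

```python
# Rough sketch of querying your own tools for a metric: fetch issues resolved
# in the last two weeks from Jira's REST search endpoint and compute a simple
# cycle time (created -> resolved). Domain, auth, and project key are placeholders.
from datetime import datetime
from statistics import mean, median
import requests

JIRA = "https://your-domain.atlassian.net"   # placeholder site
AUTH = ("you@example.com", "api-token")      # placeholder email + API token

def parse(ts: str) -> datetime:
    # Jira timestamps look like 2023-10-06T12:34:56.000+0000
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")

def cycle_times(project: str, days: int = 14) -> list[float]:
    """Cycle time in days for issues resolved in the last `days` days."""
    jql = f"project = {project} AND resolved >= -{days}d"
    resp = requests.get(
        f"{JIRA}/rest/api/2/search",
        params={"jql": jql, "fields": "created,resolutiondate", "maxResults": 100},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    times = []
    for issue in resp.json()["issues"]:
        fields = issue["fields"]
        if fields.get("resolutiondate"):
            delta = parse(fields["resolutiondate"]) - parse(fields["created"])
            times.append(delta.total_seconds() / 86400)
    return times

if __name__ == "__main__":
    ct = cycle_times("ABC")  # hypothetical project key
    if ct:
        print(f"median cycle time: {median(ct):.1f} days, mean: {mean(ct):.1f} days")
```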

Rasmus Praestholm:

Speaking of decisions.

I like what you said earlier, which is, you know, we talked about metrics – metrics are kind of like a, I would call it a well-travelled topic these days – lots and lots of talk about metrics and all that.

But you mentioned something you call the myth of metrics.

Can you explain what you mean by that?

Chris Boys:

You bet.

I love speaking to prospective customers and indeed our existing customers around data-driven ways of working.

It's a really interesting dimension, a cultural dimension, to the way we work today.

And I think… There is a lot of hype around being data-driven.

To be frank, I think it's a ton of bull****, because we still so often see teams and leaders diverting and referring to bias or to intuition in the way that they make decisions.

When we ask companies what data-driven means to them, and how they are embedding data into their decision-making cycle, into their cadence of work, and into their practice of continuously improving?

It's typically pretty shallow.

We don't see a culture of learning. We don't see a practice of collating data to answer a predefined set of questions that helps teams to know where they're at and take action to improve.

And so I think, you know, with the advent of AI penetrating the way that we’re now working – it's just leapfrogging the conversation again, and I worry that we don't have the basics in play, the foundations in play, around a data-driven way of working.

And I, you know, it's never too late to start small – start with a very clear focus on what it is you want to have answers to and collect data to drive metrics and to embed metrics into the way that you work.

Rasmus Praestholm:

Yeah, I saw on the website that one of the metrics pointed out there is called Hidden Work, and it lists an example, you know – tickets assigned to somebody outside a sprint. I was like, that would be wonderful to know!

But how do you know that? Because most work outside sprints doesn't get assigned – unless you maybe had a culture where you have to have, like, tickets for everything.

Chris Boys:

Well, I think this is a really interesting observation, Rasmus. First of all, why we created that metric: to help bring a bigger picture, a more complete picture, to workload, so that we can provide some protection against overburdening teams by thinking that they have a finite capacity of stuff that they're working on – when in actual fact, they're working on a ton more, either through doing other teams a favour or cleaning off the stuff that was supposedly meant to be done and wasn't, whatever the reason might be.

But if we get a complete picture of what teams are working on – what's assigned and what's not – well, then those leaders that are assigning that work may think twice about whether that team has the extra capacity to take on work mid-iteration.

So that's why we created that metric. As for how we bring observability around that, this is the benefit of Umano: it's basically scanning through all of the tools and the data sets that you have to identify and make connections between those signals, to create meaning in those signals.

And so, by virtue of knowing team members, knowing what's assigned and identifying activity that's occurring by those team members outside of what's assigned, we can start to build a much bigger profile of what the team is actually working on so that you can protect them, be more ruthless in the way that you triage that workload and ensure you're building sustainability in the way that you're working.
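
The Hidden Work idea Chris outlines can be pictured with a simple heuristic: flag activity by team members on issues that sit outside the current sprint plan. The sketch below is a hypothetical illustration of that idea, not Umano's implementation:

```python
# Hypothetical heuristic in the spirit of the Hidden Work metric described above:
# flag activity by team members on issues that are not in the current sprint plan.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Activity:
    issue_key: str   # ticket the activity happened on (commit, comment, transition)
    actor: str       # who did it
    at: datetime     # when it happened

def hidden_work(activities: list[Activity],
                sprint_issues: set[str],
                team: set[str]) -> dict[str, list[str]]:
    """Map each team member to the issues they touched outside the sprint plan."""
    out: dict[str, list[str]] = {}
    for act in activities:
        if act.actor in team and act.issue_key not in sprint_issues:
            touched = out.setdefault(act.actor, [])
            if act.issue_key not in touched:
                touched.append(act.issue_key)
    return out

# A team member commenting on another team's ticket mid-sprint would show up here,
# giving the bigger picture of workload that the metric is meant to surface.
```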

Rasmus Praestholm:

I'm smiling, which won't show on the podcast.

But I just realised that I'm doing hidden work right now.

Chris Boys:

Well, there you go!

Rasmus Praestholm:

Yeah, like, my boss is, like, tangentially aware that I do this podcast thing once in a while, but it's not like I have a Jira ticket for it.

So, whenever you have an intelligent, like AI thing, or something that can scan somebody's calendar and, like, figure out, oh, these things cut into my main job.

Maybe we can get some metrics on that. That'd be nice.

Chris Boys:

Well, you know, again – this is all about culture and how far you want to go, and I totally get the jest with which you may suggest, you know, integrating calendars.

But there actually is a signal there around time, right? And time spent outside of priority work and some tools will help you do that.

And I think that that is another signal to understand what's actually going on. That's inhibiting teams from performing at their best.

I think the other signals, and I mentioned this at the beginning, that often go unnoticed or aren't explicitly included as a sign of performance or as a broader metric in itself are those qualitative metrics.

So you talk about, you know, the hidden work of the podcast. Well, there are also opportunities in Umano to capture the qualitative story of what's actually occurring and insert narrative and context in and around the team's workflow.

So not only through notes but also through things like what we have as a team vibe – a health check, a Spotify-style health check – so that you can track, sprint by sprint, the health of the team around the cultural or softer elements of work that go beyond the harder practice metrics that teams would be more familiar with.

So to your point, you know, you jest, but I think it's actually a really important point. How do you capture narrative and context for what's going around your work so that everybody actually can see that and have a better understanding of what's going on? And then make, you know, again, decisions around what's important and what's not?

Rasmus Praestholm:

Yeah. Don't take my podcast away from me, please!

Jobin Kuruvilla:

Well, it is an interesting metric, for sure. I mean, I was looking at the various metrics. You know, the hidden work – that's very interesting. We all do that!

But having said that, I mean – how easy is it to create new dashboards, new gadgets, or new metrics, in case you need them?

Chris Boys:

So, at the moment, we have about 25 metrics that cover workflow from design, build, review and engagement practices.

The whole positioning of Umano is that we are off the shelf. It's plug-and-play. You do not need to write queries. You do not need a data science team. You do not need analysts to extract data and create meaning from it. So that's the whole proposition.

At the moment, we don't enable customisation in the sense of creating a metric you want from a data lake. There are tools that already exist for that – for example, as I mentioned earlier, eazyBI or Power BI – or you can even write your own queries within Jira, as an example.

But the whole proposition for us is to simplify the end-to-end data collection process and the meaning-making process so that all you need to do is really log in and look at and observe across the range of metrics or the one or two that matter most to you – where you're at and what action you could take to improve.

Jobin Kuruvilla:

Okay, interesting. And in terms of the integrations – you already mentioned a lot of the tools there, most of the popular tools that are being used today.

Have you ever come across scenarios where someone says, hey, I have this separate tool, or a custom tool we developed, which needs to be hooked into Umano to get that complete picture?

If so, you know, is that possible to do?

Chris Boys:

Yeah, totally. So we have a list of, like, basically our own pipeline of tools that we're connecting through. That is one stream of work that our team is focused on.

So next, we’re continuing to work through the primary work tools like Monday.com and Asana, and even Trello from a lighter-touch Kanban approach.

But yeah, we basically will take on those requests for integrations, and we're gradually working through them.

I think it's really important – one of the value props, to our earlier point, is around simplifying and automating the data collection process, and then the analysis to create your dashboard, which is one of the more challenging tasks.

And actually, we know that one of the problems that so many engineering organisations have is time spent reporting. It's time spent collating these data points to run through custom metrics and then present that on a dashboard.

And we're, you know, helping teams get back to doing what they love most and do best, rather than spending time building and creating reports. So yeah, if you've got tools we're not currently integrating with, let us know.

Jobin Kuruvilla:

In terms of the team-level metrics, you already mentioned how teams can pull together data around the performance of a particular tool.

Are there portfolio-level metrics as well, where you can, you know, roll up all the different teams into maybe projects, or groups, up even to a portfolio level – is that something that you can do?

Chris Boys:

Yeah, look, we're building that right now, and we're calling that Groups – effectively, it's the team-to-teams view, and it's the ability for you to collate teams in whatever way makes sense for how you want to observe those practices – be that geography or feature or capability.

The reason we started with teams and started with team-level insights is because we believe that real traction and behaviour change for continuous improvement, digital transformation, and high-performing ways of working starts at the team level.

So often, we see companies start at the tower level, if you like – from the cockpit of leadership, creating reports at that very, very high level, and then from on high sharing their perspective of what needs to change across the system of the organisation or across workflow in general.

And that is one aspect of building a data-driven culture and a continuous improvement culture.

The other, which gets so often missed, is actually letting the teams fall in love with their own data, giving them their information so that they are empowered genuinely and, therefore, are accountable for the actions that they're taking to improve.

So that's why we started at the team level, with team-first metrics, you know. To that point, we don't create direct observability into individual behaviour – again, it's the team in the way that we collectively share and come together to create great outcomes.

So, that's what we call team-first metrics.

To your point, though, teams don't work in isolation. Autonomy must sit within the framework of accountability.

Accountability is the portfolio view. And so we are now rolling that up for leadership to really bridge accountability and autonomy, and to get a much more unified approach to the strategic and systemic ways of working in an organisation – one that can set guardrails for the teams to be autonomous within those guardrails with their own data.

Rasmus Praestholm:

So one of the things I noticed on the website that stuck out to me was this little thing called Ojo, described as “our little coach”, which made me really go: Aha! They have a chatbot.

Everybody's got a chatbot; it has to be a chatbot!

So, can you describe what that is?

Chris Boys:

You bet! Ojo is – it means eyes in Spanish, so we think it's this little pair of extra eyes for you, working with you in your corner to help you see the things you may not be seeing.

And so what Ojo does is take the insights from the catalogue of metrics that we have to provide suggestions for the team, which they may or may not act upon, based on the context that that team has.

And so, Ojo is presenting insights into the cycle of how we work.

So, there are planning-based insights. In other words, Ojo may be suggesting, as a team builds out a sprint plan, whether it's over-planned or under-planned based on that team's usual way of working – because Ojo is there to help that team build a more accurate, predictable, sustainable plan.

And so Ojo will provide a top-line observation, and then some suggestions around what they may do, or what is different about this particular sprint in the example of planning. That gives the team the levers to adjust their plan, so that they’re more confident in their ability to execute on that plan.

In the same way around the way teams then track, Ojo provides commentary on where improvements are being made based on the team's usual way of working.

Hey! Thumbs up. You guys are knocking it out of the ballpark. You're above your usual benchmark.

Or where things might be lagging.

In the context of lagging, again in the interest of building data literacy and a culture of data-driven ways of working, Ojo will also be suggesting other metrics for a team to go and look at if one aspect of their performance is lagging so that they can cross-reference that and get a much richer three-dimensional view on their practices to really observe any adverse impacts or things that – unintended consequences, if you like, of their ways of working.

So that's the power of Ojo. It's really to bring that observation and suggestion into how teams plan, track and review their work.

And in so doing, it's really to kind of help almost scale the scrum master, if you like. It's scaling that capability of understanding across the team in a shared way, in a common language, so that everybody together knows what matters and what they need to do about it.
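
The planning insights Chris describes amount to comparing the current plan against the team's own rolling benchmark. A toy illustration of that kind of check, with invented thresholds and wording, might look like this:

```python
# Toy illustration of a planning insight like the one described above:
# compare the points planned for the next sprint against the team's own rolling
# benchmark of completed points. Thresholds and messages are invented for the sketch.
from statistics import mean

def planning_insight(planned_points: float, completed_history: list[float],
                     tolerance: float = 0.15) -> str:
    """Flag a sprint plan as over- or under-planned relative to the team's usual pace."""
    benchmark = mean(completed_history[-6:])  # last ~6 sprints as the "usual way of working"
    ratio = planned_points / benchmark
    if ratio > 1 + tolerance:
        return (f"Over-planned: {planned_points} points vs a usual {benchmark:.0f}. "
                "Consider trimming scope or splitting work.")
    if ratio < 1 - tolerance:
        return (f"Under-planned: {planned_points} points vs a usual {benchmark:.0f}. "
                "There may be room for more, or hidden work crowding the sprint.")
    return f"Looks sustainable: {planned_points} points is close to your usual {benchmark:.0f}."

print(planning_insight(48, [30, 34, 31, 36, 33, 35]))  # -> over-planned warning
```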

Rasmus Praestholm:

I like the sound of that. It sounds to me like a lot of metrics tools and things out there, like, just give you metrics: here you go, here's a bunch of numbers.

And this almost sounds like asking ChatGPT, hey – analyse my recent sprints and advise.

Chris Boys:

Pretty much, Rasmus. And I think that you've hit the nail on the head. So many tools – reporting tools or metric-based tools – are static data.

You can get analysis paralysis. You can get totally confused by looking at a dashboard of charts, not knowing what the hell to do with it and what matters with regard to taking action.

We want to help teams move from viewing to doing.

It's only by taking action that you live and breathe the culture of continuous improvement and embed a way of evolving into high performance.

If you don't take action, your head is stuck in the sand, and you will perpetuate the status quo.

And teams that are stuck suck, right?

We've all been in those teams that go nowhere because it's like Groundhog Day. You get to the retro and are like, what the hell? We're doing the same thing, talking about the same things that are going wrong, and nothing is moving.

And so this is the whole point of Umano: to help teams really know where they're at, but through accelerated learning from metrics – but critically take action.

What precise action will matter most for improving your way of working? And that's the value of Ojo.

Rasmus Praestholm:

I feel like trying it now.

Chris Boys:

Good! I look forward to seeing you there!

Jobin Kuruvilla:

So this Ojo, the little coach that we're talking about. Is it currently available only in the planning stages, or do we also have it tracking some of the…

Chris Boys:

No, it's embedded – it's embedded into um…

So, again, the way Umano provides observability into a team's way of working is what we call this kind of data-guided performance loop. It's a cycle of work, right? We all work in a cycle of planning, tracking, reviewing, or retrospecting.

Either on a micro cycle, like our sprint or our Kanban iteration, or a macro cycle, like our quarters, or half-yearly or annual plans – whatever that might look like.

And so what Umano does is help, within a team space, create your active cycle hub for the way you sprint or the way you perform in your iterations.

And so, within your active cycle hub, you have your planner, which builds insights live as you build your sprint plan in your issue tracker. When you hit “start sprint” in the issue tracker, you're moving into tracking that work.

So our tracker, then, is the sprint summary or your interval summary that looks at performance on a daily basis, benchmarked against how you usually work to get the thumbs up or the flags on what you wanna focus on.

And then, it moves into your retrospecting tool, our reviewer. So Umano, through Ojo, helps teams retrospect in three ways.

One, there's an Ojo pane of all of the insights Ojo is sharing. Two, there's the team input pane, which is where teams are literally inserting their cards and commentary on what they did well, what they'd do differently, and what action to take.

And then there's the team vibe pane, which is the Spotify-style team health check, if you like, that tracks the qualitative things you will observe over time.

So, by mirroring how a team works in that cycle of planning, tracking, and retrospecting in a micro cycle and in an active cycle, again, that's how we're helping Ojo to help teams improve their way of working.

It's embedded insights to take action at planning, tracking, and reviewing, so it's not a static dashboard per se of what you may be used to through Jira or other tools.

It's quite different in the way that we mirror ways of working with embedded insights at each stage of that cycle to help improve either or any of those stages of the way that you work.

Jobin Kuruvilla:

To take that a little bit deeper. So does it also monitor… I know that you have integrations with tools like GitHub, GitLab, and the like, so there’s probably CI/CD information coming through from those tools.

How about the environment? Does it go to that level? I mean, do you also monitor environments and report back – yes, this particular environment is either down, not functional, or it's up and running; this particular version of the software is deployed into this environment – those kinds of details.

Does it bring that into the dashboard as well?

Chris Boys:

So, at the moment, Jobin, it does not.

The very clear focus… There are a lot of tools that exist for that use case and for that observability right now. And we don't think we've got a unique point of view on that.

It’s the observability of team practices, the inputs into high-performance cultures, the inputs into shipping quality and value to customers. And so that's our unique approach.

Your point's really interesting, though. We are increasingly getting asked about that single pane of view. And, like, we're exploring that – you never say never!

And there may come a point where, actually, our customers say this is a really, really critical element of team performance. Yes, you know, we're talking about environment performance – but through the lens of team practices: what impact are those practices having on our environments?

And are environments improving their performance as a result of our team interactions?

Jobin Kuruvilla:

Yeah, I was probably going to ask another question, but you've partly answered it. You know, whether it is Atlassian, or GitLab, or GitHub, they all come with their own dashboards as well, covering a lot of the other integrations they might have – take Atlassian Compass, for example; it brings insight from all the different applications in the Atlassian suite.

And probably the same with other CI/CD tools like, you know, GitLab and GitHub. Similarly, GitLab is a single DevOps platform that has its own dashboards covering the end-to-end DevOps lifecycle – so I was wondering where the Umano dashboards fit in with those dashboards that you see in, say, GitLab versus Atlassian Compass and things like that, you know.

And where do you get this additional insight from?

Chris Boys:

So, for us, there are gaps in the observability that those tools provide. The current gaps – the kind of blind spots in those tools – are in practices: how the team designs their work, the way the team builds the work, the skills that go around creating their work.

And then, obviously, there is observability through pipeline data and whatnot, and then environment observability at the tail end. But they're very opaque, and even, like, as I said, almost blind at the front end – at the input side of how we work.

So that's one aspect. The other aspect is they're not looking at like the softer side of collaboration and interaction and communication, which is such a critical element of team performance.

They're not looking at the complexity of interactions across a value stream or workflow.

So what Umano is doing, by plugging not only into those tools but more, and bringing all of that data together, is bringing a much, much richer picture together for those teams on their workflow end to end.

Jobin Kuruvilla:

This brings up another question: are there APIs for your product that we could use to pull this information into another product?

Chris Boys:

So, not yet. We are also being asked for that as a feature set, which we will definitely be putting on the agenda for early next year.

But we've got some very exciting new features that we're working on as a priority to help teams perform better. And so one of those we've talked about already is Groups, the way that leadership can get that roll-up view in their performance window.

But yes, there are some other interesting things around sentiment analysis – looking at the sentiment of the language teams use, correlations between sentiment and performance, and the impact of sentiment on the way teams are performing.

So, these are some of the fun things we're currently working on to provide a richer, deeper understanding of how teams work.

Jobin Kuruvilla:

Yeah, I mean, I'm just glad that I'm not the only one who is asking these questions or asking for these features!

I do have one more question – is it only SaaS-based at the moment? Or is there any way we can take it on-prem?

Chris Boys:

So, we are currently building the on-prem connectors. Specifically, we're building the Jira Data Centre connections so that teams can get that view of insights within a Data Centre environment, but also in a hybrid or blended environment.

So actually, we're also designing for the use case where companies may have teams on cloud and on-premises, which some of the…

Jobin Kuruvilla:

Which is most enterprises…

Chris Boys:

Yeah, yeah, exactly. And you guys, as well…

Jobin Kuruvilla:

Yeah, absolutely – but Umano itself is SaaS-based; it's only on the cloud?

Chris Boys:

Correct. Yup. Right now it's cloud only – but by the end of the year, we'll have those connections up for Data Centre customers.

Jobin Kuruvilla:

I will return to the session I was talking about from DevOpsDays London – the one about observability being a costly affair.

What do you say about that? Any final thoughts on that? I mean, obviously, there's a price associated with Umano and anything else that we're talking about.

But personally, I believe that it is worth the cost because, obviously, these metrics bring forth a lot of insights that you would otherwise not pay attention to.

Obviously, that, in turn, leads to team improvement and team performance improvement – any final thoughts on that?

Chris Boys:

I think that's a very cool topic title, because it's not – I think the literal cost element is a false economy. You can't afford not to have observability in the way we work today.

I think it's costly because it's behaviour change.

Observability demands a culture of behavioural change.

Otherwise, there is no point in rolling out metrics or pretending to create observable cultures if you're not going to do anything about it.

And so, for me, that's the costly angle on embedding and embarking on this journey of observability.

But, as I said, it's a false economy if you don't do it, and the costs are way higher if you don't do it – and my bleak assessment is obsolescence, quite frankly, if you don't embrace a culture of continuous improvement and evolution, and the only way to do that is metrics and knowing where you're at, so that you can continuously iterate to get better.

Rasmus Praestholm:

I hear that!

Jobin Kuruvilla:

I cannot agree more!

Laura Larramore:

I think a lot of people at the team level would appreciate having a clear understanding of what they need to do individually and as a team to be able to improve; I think people generally want to improve.

I think sometimes they just have no guidance on that, so I do think that it's definitely a worthwhile investment.

I appreciated your insights on qualitative and human interaction within metrics because that's something that often gets left out.

But thanks for joining us today to discuss this topic of metrics on the DevOps Decrypted podcast!

You can connect with us on social at Adaptavist, and let us know what you think of the show. From me, Chris Boys, Jobin, and Rasmus – you guys have a great day.

Thanks for listening; we'll see you next time on DevOps Decrypted, part of the Adaptavist Live Podcast Network.

Like what you hear?

Why not leave us a review on your podcast platform of choice? Let us know how we're doing or highlight topics you would like us to discuss in our upcoming episodes.

We truly love to hear your feedback, and we will be giving out some free Adaptavist swag bags as a thank you for your ongoing support!