
Transcript: DevOps Decrypted Ep. 4 - A New Hope

Ryan Spilken
28 October 21

Summary

In this month's episode of DevOps Decrypted, we take a look at the recent Facebook outage that affected the masses, and ask what caused it to happen. Then we take a deep dive into GitLab's IPO and what that means for the business moving forward. We also discuss a new internal technology called Product Pods: Adaptavist's user interface for a container platform. Finally, an insightful interview with Jason Spriggs, Senior DevOps Consultant at Adaptavist, around Atlassian Open DevOps.

Transcript

Romy Greenfield:

Hello everyone, and welcome to DevOps Decrypted. This is episode four, A New Hope. I'm your host, Romy Greenfield. And joining me today are Jobin Kuruvilla, say hello.

Jobin Kuruvilla:

Hello, hello.

Romy Greenfield:

And we've also got Matt Saunders, say hello.

Matt Saunders:

Hello, Hello, hello.

Romy Greenfield:

Cool. So today, I wanted to start off talking about something that I found very amusing that happened a little while ago, which was the Facebook outage. WhatsApp, Instagram and Facebook, all down for over six hours.

Matt Saunders:

It was a biggie, yeah.

Romy Greenfield:

It was a biggie.

Jobin Kuruvilla:

It was only six hours? I mean, it felt like a lifetime.

Romy Greenfield:

It did because we all had to go back to using text and actually ringing people.

Matt Saunders:

Yes, and all that time we got back through not constantly checking our feeds on Facebook and Instagram.

Jobin Kuruvilla:

People were suddenly active in Slack. I don't know why.

Romy Greenfield:

I was actually very upset because I was expecting a reply to one of my texts and didn't realize that it had gone down and it'd been an hour. I thought, "This is getting rude."

Matt Saunders:

Yeah, that person just doesn't like you, Romy, that's what it was. No. In seriousness, yeah, massive. And it's a sign of how massively dependent the entire world is on these platforms now that it caused so much news, so much inconvenience. Everybody's like, "Yeah, but it just means that people can't chat to each other." Yeah, but it's not just a toy. People are running businesses on these things and just can't really survive their everyday lives without these tools. It's a bit of a scary thing, I think.

Romy Greenfield:

Yeah, definitely.

Matt Saunders:

But let's talk about the DevOps angle of it, right?

Romy Greenfield:

Yeah. I think the thing that scared me the most about it was the fact that ... the reason that they ... Well, from what I've read anyway, the reason that they couldn't get it back online was the fact that it was so secure that even the developers that could help, couldn't access it.

Jobin Kuruvilla:

Yeah, I think the best thing I read about that is ... I think somebody tweeted about it, is that, "Oh, Facebook actually left their keys inside the car and locked themselves out of it. So they just couldn't get back inside." I wonder if it is because they didn't have a newer model of the car, which would have prevented them from doing it, or is it because they were so secure, as you just mentioned, Romy.

Romy Greenfield:

Yeah. I mean, I've locked myself out of the car before, and that was really embarrassing. And I had to get my dad to come and bring a spare key. So, Facebook's dad had to come down.

Jobin Kuruvilla:

Which is kind of what they did. They had to actually manually go to the data center and do a hard reset, because they just couldn't do a soft reset using software, right?

Romy Greenfield:

Yeah.

Matt Saunders:

Yeah. They should have used Mark Zuckerberg's dad, who I believe is actually a dentist. Drills and equipment to get him in.

Romy Greenfield:

I mean, it was as painful as pulling teeth, so ...

Matt Saunders:

Oh, very good, Romy. Very good. [crosstalk 00:03:05] I'm not looking at this with a whole load of schadenfreude, because ... I mean, Facebook are up there as a pinnacle of technical excellence and you're like, "Oh, you are supposed to be brilliant and yet you're down for six hours." Could it have been prevented with better DevOps? That's an interesting one to unpick. I'll give it a stab for a couple of minutes, if that's all right. [crosstalk 00:03:33].

Romy Greenfield:

Yeah, go for it.

Matt Saunders:

And you guys can weigh in on it. So it seems like the root of what caused this to happen was around BGP announcements of the routing for Facebook's IP addresses. So the IP addresses that run things like their name service just disappeared from the internet, and they locked themselves out from that. And I'm starting to think of some DevOpsy-type things that I'm sure they were doing.

Matt Saunders:

And it's easy for us to look on from the outside and say, "Oh, why didn't they do this? Why didn't they do that?" But I'm trying to approach it from a learning perspective of things like circuit breakers, peer review, that sort of thing. The thing about them locking themselves out of their own data centers and not being able to get in with their key cards. Good design suggests that you use some sort of circuit breaker, and similarly in deploying any sort of software. If you've got a process and it's all automated, which is what we should be doing, banging that DevOps drum, automate all the things, centralize all the things, but still have a way of manually going over there and flipping the button. It's like finding the key to your car under a paving slab around the back of the house.

Matt Saunders:

Right, Romy? That sort of thing. I think it's a good pattern that we can use and we can highlight the usage of. Similarly the peer review: apparently it was an automated check of some sort that was supposed to run and not actually change anything, but actually changed some things by mistake. And yeah, I'm sure they've got all the right processes around this. And it was some very weird thing that they hadn't really considered could have been a problem that caused this automated process to go off and basically lock everybody out. It's, again, another sign of how good testing, anticipating what might go wrong, expecting things to go wrong because they sure will, making sure you've got a good way out of it if they do, and all those sorts of DevOps principles kind of come to mind. And yes, I'll say it one more time. I'm sure that everything they do to prevent problems like that is way ahead of anything that we've done in our much, much smaller worlds. But those are the things I start to think about.
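To make that circuit-breaker-with-a-manual-override idea concrete, here is a minimal sketch of how an automated change pipeline might wrap each change in a breaker that trips after repeated failures and can always be stopped by a human. The apply_change function and the kill-switch file are hypothetical stand-ins for illustration only; this is not how Facebook's (or anyone else's) real tooling works.

```python
# A minimal circuit-breaker sketch for an automated change pipeline.
# apply_change() and the /tmp/KILL_SWITCH file are hypothetical placeholders.
import os
import time


class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=300):
        self.max_failures = max_failures   # trips after this many consecutive failures
        self.reset_after = reset_after     # seconds to wait before trying again
        self.failures = 0
        self.opened_at = None

    def allow(self):
        # Manual override: a human can always stop the automation by dropping a file.
        if os.path.exists("/tmp/KILL_SWITCH"):
            return False
        if self.opened_at is None:
            return True
        # Half-open: after the cool-down, let one attempt through.
        return (time.time() - self.opened_at) > self.reset_after

    def record(self, success):
        if success:
            self.failures, self.opened_at = 0, None
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # trip the breaker


def apply_change(change):
    """Hypothetical stand-in for pushing a config or routing change."""
    print(f"applying {change}")
    return True  # pretend it worked


breaker = CircuitBreaker()
for change in ["routing-policy-1", "routing-policy-2"]:
    if not breaker.allow():
        print("breaker open or kill switch set - paging a human instead")
        break
    breaker.record(apply_change(change))
```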

Jobin Kuruvilla:

Yeah. Yeah. I completely agree. And it's interesting that you talked about automated testing. They had a script that would go and check these kinds of scenarios, but unfortunately, they had a bug in that script. So basically, they never figured it out ... I mean, this scenario never occurred before, so nobody could actually foresee this. But that's where I wonder if chaos engineering would've done something different. I mean, if there are ways we could reproduce something like this. Obviously, you cannot exactly predict this one particular problem, but maybe think about all those scenarios that can happen and introduce some kind of chaos in the production environment. And then you see whether your automated script would have picked it up or not, right?

Romy Greenfield:

Yeah.

Jobin Kuruvilla:

But, hindsight is a good thing. You can obviously talk about it now that it has happened, but that's an interesting thought.

Romy Greenfield:

Yeah. I think it's a good idea to think, "What's the worst thing I could do to my system, to my infrastructure? What's the worst command that I could write?" And test those in a non-production environment. That's probably what Facebook has now started thinking: "Okay, this is the worst thing that we could possibly have done at that time." Now we've accidentally tested it in production, let's actually do some non-production testing to see what we can do to stop ourselves getting locked out next time.

Jobin Kuruvilla:

Yeah. And obviously the peer review is something that you brought up. It's great. I mean, I'm pretty sure Facebook has that too. So, I think we talked about it, probably in one of our earlier episodes. You can't actually put the blame on an individual for this one. Obviously, somebody made the change, but the system should have prevented it from going into production and bringing everything down. So you have to say, "We have to improve the process here." And there should be other ways to catch this before it gets into production and brings everything down.

Romy Greenfield:

Yeah. I think it's a team effort, isn't it? When you've got something that complicated, you could say that there are multiple people to blame because they made the security too tight almost. But, it's not any one person's fault.

Jobin Kuruvilla:

Yeah. One thing I thought was interesting was that all three products, Facebook, Instagram, WhatsApp, were relying on the same infrastructure. That was what really stood out for me. Could they have done things differently? Could they have designed it differently so that not all of them would've gone down? Or maybe only part of them would've gone down? Is there disaster recovery? I mean, what else could they have done differently? Obviously, without knowing the inside design of how they have done things, it's very difficult for us to comment. But probably because of the service that went down, it made things even more difficult. But I'm curious what happened there.

Romy Greenfield:

You'd also think that maybe it would be better to have a degree of separation between the products, because if there was something that majorly affected it again, you wouldn't want all three to go down at the same time. So although it's beneficial to have the sharing of infrastructure, I think from an outage perspective it'd be better to have that separation.

Matt Saunders:

The classic trade-off. It's like the all eggs in one basket thing versus like you say, the economies of scale that you get through centralizing things. And we have to see it through the lens of hindsight, as Jobin says. I'm sure that probably five minutes after everything was back, those folks at Facebook were writing tests that stopped that one ever happening again.

Romy Greenfield:

Yeah.

Matt Saunders:

You can do such great things with chaos engineering tools like Chaos Monkey, famously invented by Netflix more than a decade ago now, which would not only kill off random things but do them in production as well. And sufficiently advanced organizations will be doing a load of stuff like that. But, there's always just that one thing just out of reach that you can't quite predict that will take you down.
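For a flavour of what a Chaos Monkey-style experiment can look like in practice, here's a tiny sketch that deletes one random pod in a Kubernetes namespace using the official Python client. It isn't Netflix's actual tool (which targets cloud instances rather than pods); the namespace name and the protected-namespace list are assumptions for the example.

```python
# A tiny Chaos Monkey-style sketch: delete one random pod in a namespace.
# The "staging" namespace and the protected list are assumptions for the example.
import random

from kubernetes import client, config

PROTECTED_NAMESPACES = {"kube-system", "monitoring"}  # never touch these


def kill_random_pod(namespace="staging"):
    if namespace in PROTECTED_NAMESPACES:
        raise ValueError(f"refusing to run chaos in {namespace}")

    config.load_kube_config()  # or load_incluster_config() when running inside the cluster
    v1 = client.CoreV1Api()

    pods = v1.list_namespaced_pod(namespace).items
    if not pods:
        print("nothing to kill")
        return

    victim = random.choice(pods)
    print(f"deleting {victim.metadata.name}")
    v1.delete_namespaced_pod(victim.metadata.name, namespace)


if __name__ == "__main__":
    kill_random_pod()
```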

Matt Saunders:

But I think the thing to take away is how many outages have been prevented, or how many outages happen that nobody actually notices, because of all these good things happening within Facebook, and within our own organizations. We go and shine a lens on these things when they happen, because they're unusual and they're extraordinary. And we don't really see the benefits of the things that we've talked about in the last few minutes. It's things like Facebook engineers, I think, are encouraged to get something out and deployed on either their first day or their first week when they start. And they have many, many systems that protect the infrastructure from people making mistakes. It's all the good stuff that we talk about around CI and CD, and those things are helping people to move fast, get things out, get things delivered. And what we don't see is how many good things are happening, or how many bad things are happening that nobody notices, because that's the way things go.

Matt Saunders:

I think one of the tragedies of this event is that because it was so widespread and took such a long time, it does get you back to that kind of primal, "Who was to blame?" Oh, we need to put controls in place, or be more protective, which is a classic thing that happens in any organization after you have a big outage. Suddenly, all the people who have been accepting of things like, "Yes, we're going to deploy multiple times a day," and yes, we're going to let devs deploy straight out to production. With those things you can get a bit of a bounce-back effect in an organization where people are like, "Oh, let's just cool it a little bit." And at that point, the organization starts going slower again. But I'll be there saying, "No, we need to carry on." We have to keep on doing these things, otherwise we end up paralyzed.

Jobin Kuruvilla:

I completely agree with Matt. We focus on the bad things, but so many outages could have been prevented earlier. So I think it is cool that the VP of infrastructure at Facebook actually wrote a blog post about it after the outage. What he's talking about is that they have done so many storm drills. In a storm exercise, they simulate major system failures by taking a service, a data center, or an entire region offline, stress-testing all the infrastructure and the software. So all of these things they already do. The one thing they hadn't done before was obviously taking down their backbone, taking out the BGP protocol, things like that. And they got hit at that particular point. But it is a learning curve. They probably have prevented multiple, multiple outages by doing all the earlier storm drills that I was talking about.

Jobin Kuruvilla:

Now there's an opportunity to include a new one in the mix. And so they'll be doing that too. So there's a lot of learning there for us, obviously: how to find all the different failures that can occur, and try to prevent them from happening. But in this world, with technology evolving so fast, there's always something new coming up. And you have to prepare for it. If you can see it earlier, yeah, great. If not, learn and move on.
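As a rough, scaled-down analogue of that kind of drill, the sketch below cordons every Kubernetes node in one availability zone so nothing new can schedule there, letting you watch how the rest of the system copes. It is not Facebook's tooling; it simply uses the official Python client and the standard zone label, and the zone name is an assumption.

```python
# A rough "take a zone offline" drill at Kubernetes scale: cordon every node
# in one availability zone. Existing pods keep running; new pods land elsewhere.
from kubernetes import client, config


def cordon_zone(zone, dry_run=True):
    config.load_kube_config()
    v1 = client.CoreV1Api()

    selector = f"topology.kubernetes.io/zone={zone}"  # standard well-known label
    for node in v1.list_node(label_selector=selector).items:
        name = node.metadata.name
        if dry_run:
            print(f"would cordon {name}")
            continue
        # Mark the node unschedulable via a strategic merge patch.
        v1.patch_node(name, {"spec": {"unschedulable": True}})
        print(f"cordoned {name}")


if __name__ == "__main__":
    cordon_zone("eu-west-1a")  # zone name is an assumption for the example
```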

Matt Saunders:

Something new or something old. I found any conversation I've had about Chaos Monkeying ... I'm not sure if that's a word ... Well, it is now, has been like, "Hey, we want to do some Chaos Monkey engineering. We want to go and take down the servers." And if you're at a first level of evolution in an organization, people are horrified. They're like, "No, you can't take down the servers, that will cause an outage." And then you get to a second level where they're like, "Yes. Well, we've got some auto scaling groups here and resilient self-healing infrastructure. So yeah, if you take down that server, then another one will spring up over there," or another container, or whatever. And then you get to the third level, which is like, "Yes, you can do that. That's fine. Do that on all these products, but not that thing over there."

Matt Saunders:

No, don't try and kill that one, because we're not really sure about that one. Maybe the bit that broke in Facebook was of that ilk, perhaps. When you go off and redesign, or design your infrastructure, you look at standard things like taking out single points of failure, making sure that individual pieces of equipment are resilient and can fail. Things like AS numbers, the autonomous system stuff that drives basically the backbone of the internet, is actually quite fragile, in my opinion. And very, very difficult to make resilient. You can have multiple routers, but you don't necessarily have ... So, if a router fails ... A massive backbone router fails inside a Facebook data center, or in a peering point transit location, probably no problem. Traffic will go other ways.

Matt Saunders:

But then there's this one thing that you can't really get away from, the single point of failure, which is the AS number. Or you could, and they probably will, now having had this outage. But again, the learnings across organizations seem to be that there are a lot of these things and it gets harder and harder to engineer them out around the edges. Maybe that was a contributing factor.

Jobin Kuruvilla:

And of course with the pandemic going on, people were not actually inside [inaudible 00:15:34] that made things harder. They couldn't get back in now that they were locked out. So it was a perfect storm, unfortunately for Facebook and the users of Facebook. But I think at the end of the day, the oldest trick in the software world is: if nothing works, try restarting it. And that's exactly what they did. They got somebody into the data center. I don't know, they probably kicked the doors open and rebooted the system, and everything magically came back.

Romy Greenfield:

Yeah. Turning it off and on again. Should have tried that first.

Romy Greenfield:

So, let's move on to another discussion point. So GitLab's IPO.

Jobin Kuruvilla:

Oh, wow. Yeah, yeah.

Matt Saunders:

Yay. Go GitLab. Yeah. I have a lot of time for GitLab.

Romy Greenfield:

Me too.

Matt Saunders:

Great company, great products, unprecedented levels of transparency around how they do things. So yeah, I hope they make a big success of it. IPOing is a big step. Although GitLab's been around for a long time, it seems to have gone pretty thermonuclear in the last couple of years, which is brilliant.

Jobin Kuruvilla:

Yeah. I remember when Atlassian did their IPO. I was definitely interested at that time. I wasn't actually investing in stocks at that time, so I actually told my friend, "Hey, Atlassian is going public so you might want to invest." At that time, I think it was in the low twenties when it started, and now it's definitely more than 100, last time I checked. So I wasn't going to miss this one. As soon as I heard GitLab was going public, I knew I was going to put some money on it. Because I have actually used GitLab, it's an awesome product and I can already see the future is bright for GitLab. Yeah, I hope I'm right about this one.

Matt Saunders:

Disclaimer: some of our panelists may have invested in GitLab.

Romy Greenfield:

Yeah.

Matt Saunders:

So it's a great platform. As I'm sure many people know, Adaptavist are partnered with GitLab, and we're having a whole load of success with people who are interested in us helping them get the most out of it. A brilliant, all-in-one solution for software delivery, covering everything DevOps from start to finish. Right, Jobin? I know you and your team have been using it pretty much in depth.

Jobin Kuruvilla:

It is, absolutely. Yeah. And we have actually ... I don't know, we are still learning some aspects of it, because as you said it's a tool that can do almost everything, starting from planning all the way to monitoring. So, we had started using GitLab for what it was known for earlier, as the SCM system and the CI/CD tool. But it can of course do a lot more now, and we are still learning parts of it. But that's the beauty of it, technology keeps evolving. And the tools that you see today are not the same that you see tomorrow, because the same tool now has so many other functionalities in it. It's amazing how far GitLab has come in the last three, four years.

Romy Greenfield:

Yeah, it's great. Because actually, I think in my first software engineer role, I was introduced to GitLab and I loved it. It was really easy to use for CI/CD, especially as it was my first exposure to DevOps and my first experience with pipelines. So it's great that it's gone up leaps and bounds since then.
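For anyone who hasn't driven GitLab's pipelines programmatically, here's a small sketch using the python-gitlab client to kick off a pipeline on a branch and print the status of its jobs. The URL, token and project path are placeholders for the example, not real credentials or projects.

```python
# A small sketch of driving GitLab CI/CD from code with python-gitlab:
# trigger a pipeline on a branch and print the status of its jobs.
import gitlab

gl = gitlab.Gitlab("https://gitlab.com", private_token="YOUR_TOKEN")  # placeholder token
project = gl.projects.get("your-group/your-project")                  # placeholder path

# Kick off a pipeline against main - the same thing a git push would trigger.
pipeline = project.pipelines.create({"ref": "main"})
print(f"started pipeline {pipeline.id}, status: {pipeline.status}")

# List the jobs the pipeline created (build, test, deploy stages, etc.).
for job in pipeline.jobs.list():
    print(f"{job.stage}: {job.name} -> {job.status}")
```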

Jobin Kuruvilla:

Yeah. At this point in time, we are actually [inaudible 00:19:15] GitLab for planning purposes. Obviously, we are an Atlassian shop and we have actually used Jira for planning for so long. But we are learning ourselves by using GitLab for our planning internally, within my team, just so we can get a feel of how good it is. Obviously when we go to our customers, we're the experts on the tool, so we should also be using it ourselves to get a feel for it, which is what we are doing now.

Matt Saunders:

It's interesting talking about this single tool that consolidates lots of things together, and how that's a great thing. Especially as it's a bit dichotomous with what we were talking about with Facebook, that they should not be using the same things and should keep things separate. One of the reasons I like GitLab and will recommend it in many situations is because it is a one-stop shop. As we know, there's been an absolute proliferation of great tools in the DevOps sphere. And as an old-fashioned kind of Unix person, I kind of like having lots of tools. The whole Unix philosophy is you have one tool that does one job and does it really, really well. But actually, you're seeing this single tool that goes all the way from the source code management through your CI, your CD, deployment and monitoring, container registry, etc., etc. Infrastructure management: you can run Kubernetes clusters from within the thing.

Matt Saunders:

I was a bit cynical. We've got a request that often comes in at Adaptavist, for example, it's like, "Can we have DevOps in a box?" And actually, I spent quite a long time trying to make one of these things: DevOps in a box, with all the things you need to actually do all of this in one go, and it kind of missed for various reasons, not all of which were technical. GitLab seems to have found the sweet spot of just enough functionality in all those bits you need, and getting the integration between the bits working really, really nicely, to start knocking it out of the park. So, good luck to them.

Romy Greenfield:

That brings us on nicely to what we're doing within Adaptavist. We are doing product pods. You guys want to just discuss a bit about this?

Jobin Kuruvilla:

Yeah. I was thinking about the same thing. That's exactly what I was going to say. Matt just mentioned DevOps in a box, and that leads straight into the product pods talk. This is very interesting because I think Matt is one of the people who started doing this within our company. People who know the history of Adaptavist will know I came from Go2Group, and internally we had been doing something pretty similar, CWF, the continuous workspaces framework. So I introduced this idea after joining Adaptavist and somebody said, "You should talk to Matt." And why is that? Matt was already doing these things inside Adaptavist and I'm like, "Oh, that's great." Like minds think alike, or maybe he was far ahead of the game. So I talked to Matt, and he has product pods. That was really interesting. So Matt, tell us more about product pods.

Matt Saunders:

Thank you. Not sure about far ahead of the game. I always just like to be a little bit further ahead of the game, because otherwise I'd run out of breath. So coming on from what we were talking about, like DevOps in a box, it was accidental that I've managed to almost seamlessly segue into this part of the podcast. We often get asked, at Adaptavist, for infrastructure to go off and either try out new things, or do migrations for customers, or ... What's a good example? Here we go. So Adaptavist are fairly well known for the ScriptRunner group of products, and many others that we do, which are plugins for Jira and Confluence.

Matt Saunders:

And one of the things we found that we needed was to be able to test out new versions of ScriptRunner, or any of the other products on a Jira or a Confluence instance. Or, we've got Jira and Confluence instances, they're quite critical in running our business, or actually, maybe we don't want to use them because they're ... Once again, harking back to the Facebook thing, not because we think our developers are going to break them, but because ... Well, actually, maybe we want to be able to let them experiment.

Matt Saunders:

So, we invented something called product pods. Long story short, I named it ... came up with the name with our CIO, I think, Neil Reilly, to basically give you pods of applications like a Jira, or a Confluence, or a Bitbucket, or a GitLab, a Jenkins, Nexus, etc., etc., in a kind of ephemeral, temporary way, running inside of Kubernetes. Hence the pods thing, because Kubernetes runs things in pods. Put a self-service interface on the front of that using a tool called Rancher, which is a great orchestrator for containers, and developers can go off and spin up whatever they need. So yeah, a continuation of a number of ideas, like the DevOps in a box thing, which we tried to do a couple of years ago. That project didn't really go anywhere. That's fine, we tried, we learned and eventually we have the pods.

Matt Saunders:

So, ideas for this in the future: basically, we're going to have a look at moving anything that's temporary or ephemeral. So, things that are maybe just in existence while we do a professional services engagement with a customer perhaps, or whilst a piece of software that our products teams are developing is being released, and put them all inside of this thing. The great thing is the technology is coming towards us. The sort of applications that people want to deploy and use within the organization tend to run quite nicely inside of Kubernetes. They maybe even have Helm charts with them, which are basically the bits of YAML that define how the individual bits of an application get glued together into one cohesive whole.

Matt Saunders:

For example, GitLab isn't just one thing; it's a collection of lots of different components, and orchestrating them together is not the work of a moment. And so what that's meant is that a developer who needs a Jira installation to test the software against can have one within about 12 minutes, and most of that is because Jira takes a long time to start. So, that's product pods in a nutshell. And I'm really excited about that and where we can go with it. It gives us a lot of opportunities to unify the way that we run things, because it's all Kubernetes and that's all the same. And basically to help people within our organization get their jobs done quicker, without having to worry about "How do I deploy this? I need a server for that." Those sorts of issues. Sorry, I talked for way too long there. Hopefully [crosstalk 00:26:31]
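As a toy illustration of the idea behind product pods, the sketch below programmatically carves out a namespace and drops an application deployment into it on demand, using the official Kubernetes Python client. Adaptavist's real implementation sits behind Rancher and is considerably more involved; the image name, namespace and labels here are hypothetical placeholders.

```python
# A toy sketch of an on-demand "product pod": one namespace per ephemeral
# environment, with a single application deployment inside it.
from kubernetes import client, config


def spin_up_pod_environment(name, image="atlassian/jira-software:latest", port=8080):
    config.load_kube_config()
    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    # One namespace per environment keeps teardown trivial:
    # delete the namespace and everything inside it goes away.
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=name))
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name=name,
                            image=image,  # placeholder image for the example
                            ports=[client.V1ContainerPort(container_port=port)],
                        )
                    ]
                ),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace=name, body=deployment)
    print(f"environment '{name}' is starting up")


if __name__ == "__main__":
    spin_up_pod_environment("jira-demo-123")
```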

Jobin Kuruvilla:

Even before we started talking about it, right? I think internally [inaudible 00:26:37] we are using it for different things. Like for professional services, when we are doing demos to the customer, or when we want to pitch an idea to the customer, it's very, very easy to create a demo [inaudible 00:26:51] product pods. You need a Jira environment, you need a Bitbucket environment, or a Jenkins, GitLab, all the different tools that Matt was talking about earlier. We just spin it up. You can even do that with the help of existing data. So, that is another useful thing. You spin up a Jira environment with 100,000 issues, maybe with some rapid boards or agile boards on it. So it's very easy for us to show that to the customer pretty fast. That's one thing. But then we also have our products team, who are working on releasing new versions of the product.

Jobin Kuruvilla:

And of course, they want to test the product on different versions of Jira, or maybe do some performance testing with different datasets. You have different datasets and you can spin up three different instances with 100,000 issues, a million issues, 2 million issues, and then do your performance testing against them. So there are a lot of different use cases that I can think about. Now, one thing I picked up on, Matt, was you mentioned that this is mostly aimed at ephemeral environments, environments that don't last long.

Jobin Kuruvilla:

But one of the use cases that came up with one of our customers, a major airline customer that we had, was that they create different environments for different customers they onboard. And for each customer, they needed a Jira and a Jenkins and a Nexus. So what became the idea of the continuous workspaces framework, the CWF that I was talking about earlier, was that whenever a new customer comes onboard, we just need to spin up these three different tools. You need a Jira, you need a Jenkins, you need a Nexus, integrated together. So probably product pods is something that we can use for that scenario as well. It's not necessarily an ephemeral environment, it's going to last long. It's going to last for the duration of the customer engagement, maybe six months, maybe six years. Who knows? But that's another use case where you can put product pods to use.

Matt Saunders:

Yeah, absolutely. So to be totally fair, most of the things that let you do those sorts of enhancements are down to the beauty of Kubernetes. I know we haven't got seven hours for me to wax lyrical about Kubernetes, but the things that we've done have all been part of a learning process, and coming from a point of things like everything's running in a container. It's generally suggested that containers should be kind of short-lived. That's one of the design things around containers: they spin up, they do something and then they go away again. Applications like Jira and Confluence and Nexus are not really like that. So already you're like, "Hmm, this isn't quite Kubernetes' core thing." And you get things like the file system within a container going away once the container goes.

Matt Saunders:

So the first things that we've done over the last year or so, or a couple of years, have kind of tried to fit into that model of ephemerality. Again, I think I'm making up words here, but I'm sure you know what I mean? But that's just the start. So we're doing things like each Jira has a database. The database runs within the Kubernetes cluster. So if the container for the database goes away, then you're not sure where your data's going to go. That isn't any good for something that's going to be long-running. But Kubernetes lets you deal with things like that. You can go off and create external databases, maybe an RDS database within Amazon. You can get plugins like Crossplane, which do this for you in Kubernetes. And also for storage: if you want long-lasting storage, you can set up EBS volumes in Amazon, and all the equivalents in all the other cloud providers. Other cloud providers are available, ladies and gentlemen. And do all that sort of stuff.
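To show what asking Kubernetes for that kind of long-lived storage looks like, here is a small sketch that creates a PersistentVolumeClaim, which the cluster's storage class (for instance an EBS-backed one on AWS) satisfies with a real volume. The namespace, claim name, size and storage class are assumptions for the example.

```python
# A small sketch of requesting long-lived storage: a PersistentVolumeClaim
# that a cloud-backed storage class (e.g. EBS on AWS) fulfils with a volume.
from kubernetes import client, config


def request_storage(namespace, name="jira-home", size="20Gi", storage_class="gp2"):
    config.load_kube_config()
    v1 = client.CoreV1Api()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name=storage_class,  # assumed storage class name
            resources=client.V1ResourceRequirements(requests={"storage": size}),
        ),
    )
    v1.create_namespaced_persistent_volume_claim(namespace=namespace, body=pvc)
    print(f"claimed {size} as '{name}' in namespace '{namespace}'")


if __name__ == "__main__":
    request_storage("jira-demo-123")
```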

Matt Saunders:

That's the next level of stuff that we want to do. Firstly, for that airline, although Kubernetes is full of like boat-based metaphors and containers. So I'm not sure how that works with airlines, but sorry. Cheap gag. And also for our internal stuff. We have had discussions around maybe putting our core, the big Adaptavist, Jira and Confluence, inside of containers. It's a fairly scary prospect, but it is actually kind of feasible now. And I can see those use cases being very, very valuable to customers and prospects who want to do that sort of thing. "Hey, interested in running Kubernetes? Come and talk to us."

Jobin Kuruvilla:

Yeah. I don't think I have talked to you about this one before, Matt. But one of the key initiatives that I have in mind for this year is DevOps as a service. When we come around to that, I think product pods is obviously going to play a big role in it, because behind the scenes, when you're offering DevOps as a service, we need to spin up different tools, of course. Maybe running in Kubernetes containers, and that's a positive for customers. So how do you see it? Can we do it? I mean, we have product pods, right?

Matt Saunders:

Yeah. This sort of thing is getting easier. Putting groups of applications together to solve problems is basically an orchestration problem, or an orchestration task. Kubernetes with a layer on top to actually do all that for you in a consistent and robust way, is the way forward. Everything's getting more complicated.

Jobin Kuruvilla:

Yeah. Now we can finally focus on the customer problem and the solution to it, rather than worrying about installing the software and figuring out why it is working in my environment but not somewhere else, and so on and so forth.

Matt Saunders:

Yeah. That's the dream, isn't it, Romy? Yeah.

Romy Greenfield:

It really is the dream.

Romy Greenfield:

Finally, for this episode, some of us got together with Jason Spriggs from Adaptavist to talk about Atlassian Open DevOps. So let's listen to that conversation now...

INTERVIEW TRANSCRIPT COMING SOON