Leo Vasiliou
00:09 - 00:54
So, hello, everyone. First and foremost, thank you for giving us some of your precious time today.
Welcome to the overview of the 2024 DevOps Research and Assessment, aka DORA, report, where Ben and I will be speaking about the findings in that report as well as about Catchpoint's SRE report. My name is Leo Vasiliou, a former in-the-trenches practitioner, one of the authors of the Catchpoint SRE report, and currently evangelizing for Internet performance monitoring.
I have the honor and pleasure of being joined by Ben. Ben, maybe say hello to everyone.
Benjamin Good
00:54 - 01:17
Hi, everyone. My name is Ben Good.
I am a solutions architect at Google, specializing in DevOps and application modernization, all those fun things. And I had the privilege of contributing to this year's DORA report.
So looking forward to running through it with you, Leo, and let's dive in.
Leo Vasiliou
01:17 - 03:41
Sweet. So, before we continue, I'll cover a couple of quick operational items.
To everyone listening and watching: if you look over to the right side of your screen, there's a chat box where Peter just typed something in, so feel free to say hello from where you are joining. There is also a question-and-answer button; as we go through, if any questions come to mind, please type them there.
And then there's a docs button that has links to our respective reports and a couple of other resources for you. Having said that, if you do wanna get the report, give this a quick scan or simply navigate to dora.dev/dora-report-2024. Additionally, if you want to get a copy of the SRE report, simply query SRE report 2025.
But again, I would also like to remind you that these resources are on the right side of your screen. Excuse me.
Now, these types of research are not possible without our sponsors, contributors, interviewees, passionate practitioners, evangelists, antagonists, and, last but not least, our readers. So if you wanna take a moment to show your support, please consider finding their respective LinkedIn profiles and giving them a follow.
And, Ben, real quick, I'm glad y'all sorted alphabetically, so Catchpoint was first on the list.
Might have to make sure we change our name to AAA Catchpoint for the next one. Now, bad humor aside, let's get into it.
Here's how we'll break down the topics in this webinar. Besides the bullets on the screen, which you can read, I would also like for you to think about the outcomes that these sets of capabilities will help you achieve.
How fast can we push updates? How fast can we resolve disruptions or incidents? Or even, how can we reduce the number of incidents in the first place? Because those outcomes are what our research is really all about: getting better. And I think, Ben, I've talked long enough to get us framed and warmed up.
So maybe you could, take it away from, from here.
Benjamin Good
03:41 - 05:15
Cool. Thanks, Leo.
So one of the big parts of the DORA research is really looking at how we measure software delivery performance. And over the years, we have identified four primary metrics that we can use to get visibility into how we're performing from a software delivery perspective.
But before we get to those things, I wanna touch briefly on the model that we use in the research. When we anchor on a few core capabilities or areas, such as climate for learning, fast flow, and fast feedback, we can begin to predict and get a view into the software delivery metrics.
And then as we measure those things, they are predictive of different outcomes within an organization, from the performance to the well-being of the organization. So if we go one level deeper and look at those capabilities, what we're really able to identify is areas where we can influence change in how we operate and build software, such that we can then measure those things with those four key metrics and also see those outcomes in our organizations as we're building and delivering software.
Leo Vasiliou
05:15 - 06:02
And, Benjamin, if I could just comment: this is the first time I saw this model. I just wanna say that I fell in love with it, because I firmly believe that capabilities are the gateway to outcomes.
And if you were to sort of extend this model to the left, it would eventually get to, like, the nitty gritty nuts and bolts speeds and feeds. So, I've often said that capabilities are the gateway between those speeds and feeds on the left and the outcomes on the right.
And talking about capabilities is a fantastic middle ground. For example, when IT feels like the business doesn't understand what they're saying, and the business feels like IT doesn't understand what they're saying, you talk about the capabilities as a good middle ground.
So, absolutely, love that, the way the model is broken down.
Benjamin Good
06:02 - 08:32
Yeah. And I think it gives us a nice way to, like you're saying, frame the conversation and focus in on certain things where we can influence change.
Agreed. Agreed.
I think one of the key things that is a part of DORA is the community. Improving upon these capabilities is difficult, and we want you all to know that you're not alone in this journey.
And the community is really a large portion of that. Okay.
So let's actually look at the different metrics now. Broadly speaking, there are two categories: throughput and stability.
When we look at throughput, it's really the velocity at which we're able to build and deliver software, and we break that down with two different measures. The first is lead time for changes, so really measuring how long it takes us from code commit to getting code into production.
So what is that lead time for a change? The second one is deployment frequency: how often are we deploying code to production? The more often we do that, the stronger the indication of a reliable, repeatable process for getting code out there.
When we look at stability, it's really a measure of quality, or how quickly we can recover from an issue. And we measure that using change fail rate.
So what percentage of the time do we introduce a change that results in degraded service? And then, when we do introduce one of those changes, how long does it take us to restore, or roll back, and recover? That gives us an indication of stability. So how do we do this from a survey perspective? We ask survey respondents about these different measures for the primary application or service that they work on.
And then we take those data points and do cluster analysis, trying to find groupings of related responses while also understanding that there are differences in there. And what we ended up with this year is four clusters when it comes to software delivery performance.
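To make those four measures and the clustering step concrete, here is a minimal sketch, in Python, of how the metrics might be computed from hypothetical deployment records and how respondents might be grouped with k-means. The data, field layout, and use of scikit-learn are illustrative assumptions, not the DORA team's actual pipeline (the survey collects ordinal responses rather than raw logs).

```python
# Illustrative sketch only: made-up deployment data, not the DORA methodology.
from datetime import datetime
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical deployment log for one service: (commit time, deploy time, change failed?)
deploys = [
    (datetime(2024, 6, 1, 9), datetime(2024, 6, 1, 15), False),
    (datetime(2024, 6, 2, 10), datetime(2024, 6, 2, 11), True),
    (datetime(2024, 6, 3, 8), datetime(2024, 6, 3, 9), False),
]
restore_minutes = [45]   # time to restore for each failed change
observation_days = 30

# Throughput: lead time for changes and deployment frequency
lead_time_hours = np.median(
    [(dep - commit).total_seconds() / 3600 for commit, dep, _ in deploys]
)
deploys_per_day = len(deploys) / observation_days

# Stability: change fail rate and time to restore
change_fail_rate = sum(failed for _, _, failed in deploys) / len(deploys)
time_to_restore = np.median(restore_minutes)

print(lead_time_hours, deploys_per_day, change_fail_rate, time_to_restore)

# Cluster analysis sketch: each row is one respondent's four metrics
# (numbers are invented; in practice you would scale/normalize first).
respondents = np.array([
    [720.0, 0.03, 0.40, 2880.0],   # month-scale lead time, rare deploys
    [48.0,  0.50, 0.10, 240.0],
    [4.0,   3.00, 0.05, 30.0],     # hours-scale lead time, frequent deploys
    [96.0,  0.30, 0.15, 480.0],
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(respondents)
print(labels)  # groupings playing the role of the report's performance clusters
```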
Leo Vasiliou
08:32 - 08:49
And if I'm understanding you correctly, the clusters are like the data within the data? Because, otherwise, how would you do it year over year? Is that a correct, maybe oversimplified, way of understanding it? Okay. Perfect.
Benjamin Good
08:49 - 09:52
You know, that's exactly right. So we look at the data for that year, and we treat it independent of the previous years.
So the metrics stay the same, but the clusters will vary from year to year. In some years, we've had three clusters, but most years, we end up with four clusters.
And this is how the clusters broke out in the 2024 report. Each cluster is color coded here, and we have low at the top of the chart, moving down to elite, and you can see the different measures as they relate to the metrics.
So looking at lead time for change, low performers are getting a change from commit to production in that one-month-to-six-month time range, whereas elite performers are doing it in less than a day. And you can see how the different clusters vary across the different metrics.
Leo Vasiliou
09:52 - 10:23
Benjamin, Larry took the words right out of my mouth. So the first time I saw this, I had to do a double take.
And just to make sure we're all looking at this the same way, I'm thinking of that y-axis as a golf score: the lower, the quote, unquote, better.
Mhmm. Right? Okay.
So then what do you make of that? Like, is that something you speak to in the report, or is that something you'll get to in the webinar? Because you can't help but notice that.
Benjamin Good
10:23 - 11:10
Yeah. So we definitely dig into this in the report, and I'll talk about it just, you know, a little bit here.
This is the first year where we've seen this flip, where medium performers had a lower change fail rate than high performers. And what that indicates to us is that medium performers and high performers are making a decision around deployment frequency and how often changes fail.
So there's a trade-off happening in there, and that's really called out in the data; it pops right out. But this is the first year that we've seen this.
I think it's a really fascinating result.
Leo Vasiliou
11:10 - 11:13
Agreed. Agreed.
Yeah.
Benjamin Good
11:13 - 11:55
The last thing I'll mention on this slide is, lots of times people are curious about what percentage of respondents are low performers or elite performers, those kinds of things. So I'll just call out those percentages real quick.
We saw 25% of the respondents fall into the low cluster, 35% in the medium cluster, 22% in high, and 19% in the elite cluster. So a nice distribution.
Leo Vasiliou
11:55 - 12:22
Alright. Well, if somebody could type in the chat to oh, never mind.
It looks like Benjamin has completely dropped. So I guess, since we talk about reliability and resilience,
it's a good thing we have reliability and resilience in our practices here. So I will go ahead.
Oh.
Benjamin Good
12:22 - 12:25
I'm back. I'm really sorry about that.
Leo Vasiliou
12:25 - 12:40
Welcome back. I was just saying, Benjamin, that in addition to being reliable and resilient in our day-to-day practices, we had resilience in this webinar, and I was prepared to continue.
But thank you for making me sweat a little bit, just enough, before coming back.
Benjamin Good
12:40 - 13:44
My Internet let me down for a little bit. So hopefully, we're all good to go here.
So I think it's interesting to look at the performance difference of low versus elite performers, and we see these are really large numbers. Elite performers have 127 times faster lead times for changes, and so you can see the difference in here.
It's quite stark. What I don't necessarily want to come across as saying, though, is that you need to be an elite performer.
What we want people to take away from this is that it's a continuous improvement cycle, and the really high performing teams are elite at improving, not necessarily at hitting that elite cluster, if you will. So it's more about continuous improvement than it is striving to hit some sort of metric.
Leo Vasiliou
13:44 - 14:10
And then would you say the best... well, I don't wanna put words in your mouth, so I'll ask it as a question. Is the best way to think about those categories, because everybody wants to be, you know, quote, unquote, elite, as a yardstick or benchmark? The idea being that the only way you'll know if you're getting better or worse is to actually measure against some yardstick or ruler.
So okay.
Benjamin Good
14:10 - 14:47
Perfect. Exactly.
So the important thing is that you understand where you're at in that spectrum of software delivery performance across the metrics, and that you're working to improve those things. It might be completely acceptable for your software and your company to be at, say, the high level or the medium level in terms of lead time for change, just because of the nature of your industry.
So you don't necessarily need to be elite.
Leo Vasiliou
14:47 - 16:01
Thank you for verifying. And so, moving on, and actually, by the way, I'll just remind everyone that if we don't see the questions in the Q&A, it's because we're looking at the screen, so we will get to them.
I know I saw one pop up, but otherwise please do continue to type your questions in the Q&A. Having said that, moving on: Ben, maybe a couple of minutes on artificial intelligence, AI.
It's my understanding, Ben, that y'all began looking into AI during your previous research cycle, which would have been the 2023 research cycle. And that makes sense to me, because over the last couple of years, the use of AI in professional development has become ubiquitous.
And after reading your research, I believe this report in particular, I mean, I know it's only the second year in your cycle, but I believe this report in particular is an important opportunity to assess the adoption, use, and attitudes of those professionals, because in my opinion, we're at a critical inflection point. So maybe just kinda talk to us about what you found, what the data said.
Benjamin Good
16:01 - 16:43
Yeah. Awesome.
Yeah. You nailed that.
We started looking at it in 2023. In 2024, we went much deeper into the research there.
The first thing that we wanted to understand is where respondents, where those folks, are using AI in their day-to-day jobs. And I apologize for the bit of an eye chart there, but there was a wide spectrum of things here. The two things that stood out were writing code, so using AI to generate code, and then the next one was using AI to summarize information.
Leo Vasiliou
16:43 - 17:19
Ben, I'll mention that this was also the second year we asked a similar question in our research. And even though the choices in our list were not identical to the choices you had on your screen, on your list rather, writing code was also the top selected choice, two years in a row.
So I thought it was a nice validation across two completely different sets of research. So, yeah.
Very interesting.
Benjamin Good
17:19 - 18:53
Yeah. And I think it's nice to see that we're getting similar results.
And I also think that this speaks to where we're at in the industry, with where the tools really shine. I mean, there's been a lot of focus on getting them better at writing code and on understanding, or trying to consolidate, information.
So it comes together. The next thing that we were looking to understand is what the impact on productivity was because of our use of AI.
So we see here that 70% of respondents said that they had a productivity gain because of AI. And a third of folks said that it extremely or moderately increased their productivity, which I think is a good indication that the tools are actually helping a little bit.
The next thing, though, is that we wanted to understand how much people trusted AI. And, if we keep the previous picture in mind, almost 40% said that they had little or no trust in the quality of the code that's generated, which I think is a fascinating thing, from an "it's helping me be more productive, but yet I don't quite trust it" kind of thing.
Leo Vasiliou
18:53 - 19:24
Benjamin, please forgive me for chuckling there.
I didn't mean to. It's just the way you said it.
It made me think of one of our contributors this year. I don't know if I'm allowed to say names, but I'll tell you what they said. They said, we all know that a coworker you can't trust is a constant source of extra work, and AI is at best a coworker you can't trust.
And just when you said it, it made me think: so if that's the "at best," then what's the "at worst"? And I think we're seeing a little bit of that here.
Benjamin Good
19:24 - 19:33
Yeah. It's a really interesting juxtaposition that we're seeing in the data.
Don't quite know what to make of it yet, but it's pretty interesting.
Leo Vasiliou
19:33 - 19:52
Alright. So, we got a little bit about the attitudes, the sentiment toward AI.
But, Benjamin, do those attitudes translate to actual outcomes or results? Maybe now we can shift a little bit and hear your thoughts on what the data says there.
Benjamin Good
19:52 - 20:40
Yeah. So, yeah, that's the next thing.
What we did is really try to say what impact AI is having. And the way we looked at this is we said: if an individual increases their AI adoption by, say, 25%, what do we expect the change in outcomes is gonna be? So this is extrapolating a little bit on the data, and we see that if we increase AI adoption by 25%, flow, job satisfaction, and productivity show nice bumps as positive outcomes here.
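As a toy illustration of the kind of "what if adoption increased by 25%" extrapolation being described, and not the DORA team's actual statistical model, here is a small sketch that fits a simple linear relationship on fabricated data and reads off the predicted bump.

```python
# Toy illustration only: fabricated data and a simple OLS fit, not DORA's model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey-like data: AI adoption and a productivity outcome, both on 0-1 scales
adoption = rng.uniform(0, 1, 200)
productivity = 0.5 + 0.3 * adoption + rng.normal(0, 0.1, 200)

# Ordinary least squares fit: productivity ≈ intercept + slope * adoption
slope, intercept = np.polyfit(adoption, productivity, 1)

# Predicted change in the outcome for a 0.25 (i.e., 25-point) increase in adoption
bump = slope * 0.25
print(f"slope={slope:.3f}, predicted productivity bump for +25% adoption: {bump:+.3f}")
```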
Leo Vasiliou
20:40 - 20:53
I would ask our viewers to just make a note of that "time doing toilsome work" item, and we'll come back and speak to that in just a moment.
Benjamin Good
20:53 - 21:38
Yeah. I think that's a pretty astute observation here in the chart, which is the toilsome work portion of it.
And I think that using this diagram helps us understand where AI is helping. So AI is helping us get valuable work done, but it hasn't necessarily tackled the toilsome part of our day to day.
And I think a lot of that is in how we define what toilsome work is. So if we think of toilsome work as the meetings and the bureaucracy and those kinds of things, AI isn't necessarily helping us in those areas just yet.
Leo Vasiliou
21:38 - 21:50
I hear that. Was there anything else on the slide, Benjamin, before I offer my, you know, pithy comments?
Benjamin Good
21:50 - 22:22
Yeah. I have one thought that I'd like to offer up to the viewers here, which is: as we adopt more AI, we need to be cognizant and aware of what we're using it for, and of what we're filling the extra time with that AI is potentially freeing up, so we don't put more toil into that bucket.
Leo Vasiliou
22:22 - 24:00
And it's a perfect segue, by the way, into a piece of our research. It's almost like you and I practiced and rehearsed this.
So the reason what you just said fascinates me, and I'll go back to what we published this year, is: well, what is toilsome work? Right? But then, in addition to that, the stats.
So, one, two, three, four... for the last five years in our survey, we first put the definition from the Google SRE book of how they define toil.
And then the question, verbatim for the last five years, has been around what percent of your work on average is toil. Now, granted, there's a little volatility when you talk about doing these types of surveys.
But I think if you think about the potential margins of error, I agree, and it reinforces what you've been saying, because, for the first time in the five years we've been asking this verbatim identical question, we saw that number rise. So we had the same theory.
I don't know if it's a theory or a hypothesis: that, like you said, it's helping expedite the realization of value. I'll just go back and kinda repeat what you said: make sure you're prioritizing what you're using AI for, and don't do things like accrue...
I don't know. We talk about technical debt, like, don't accrue AI debt or something like that.
Yeah. But, yeah.
Fascinating. So back over to you.
Benjamin Good
24:00 - 24:51
I also have to wonder about the comments we were making around the coworker that you can't trust, and how that might play into what we're seeing here as well. So, the last point here on AI is: if we increase AI adoption, what other changes do we expect to see, on a slightly more granular level?
We expect that we'll see an increase in documentation quality, code quality, and code review speed. Some of those key things related to quality there, where AI could be helping us.
Leo Vasiliou
24:51 - 25:53
That's crazy. I mean, you know, crazy in a good way.
Right? So Mhmm. Mhmm.
Alright. So just to keep ourselves on time, so we can hopefully have enough time for a couple of questions and a little bit of discussion, let's talk about platform engineering, which, Benjamin, to quote you when we first met, you said is your jam.
So, in addition to it being your jam, platform engineering is also an emerging discipline that has been gaining interest and momentum across the industry.
A sociotechnical discipline where engineers focus on the intersection of the social interactions between different teams and the technical aspects: automation, self-service, repeatability of processes. So given this hotness and sexiness, talk to us about platform engineering.
What are we thinking about here? What's the way we as viewers should be thinking about it?
Benjamin Good
25:53 - 26:41
Yeah. And I think the blend of the social and the technical is the part that really fascinates me.
So I was excited to see that we were gonna do a little research on it as a part of this year's DORA report. This is how we defined a platform in this year's report, in the questions.
So: a set of capabilities that is shared across multiple applications and services, with the goal of improving efficiency and productivity. And we also called out that a dedicated platform engineering team is not required.
And with that definition, 89% of respondents said that they have a developer platform.
Leo Vasiliou
26:41 - 27:21
Benjamin, as a fellow author, sometimes I fall into the trap of assuming that everybody knows what I'm talking about when I write or even when I speak, and I don't wanna make that assumption here. So, I know you've got the concept of a platform up there, but maybe not so much platform engineering itself.
So take, like, platform engineering versus site reliability engineering. How should viewers consider their relationship with each other, even in addition to other DevOps constructs? What's the relationship between them? Is it love-love? Is it love-hate? What's the correct way to think about it?
Benjamin Good
27:21 - 28:53
I think it's definitely a love-love situation, in my opinion. So, I like to think of and talk about platform engineering as it relates to these other engineering disciplines through this kind of lens.
I'll start in the middle, where we have software engineering, and that's really writing software to deliver value for our end users, focused on the business and domain expertise. Then we have this concept of site reliability engineering, where we're applying software engineering practices but with a focus on reliability.
So how do we keep our systems and applications up and running, monitor them appropriately, and respond to incidents? We're taking some of the software engineering disciplines, but we're applying them to a different area of engineering. Similarly, for platform engineering, we're applying software engineering techniques, but with an effort to create systems that support the infrastructure and make it easier for developers to get their jobs done.
And the product, or the platform in this case, is really an internal-facing thing, not customer-facing. So software engineering spans across all three of these disciplines.
Leo Vasiliou
28:53 - 28:58
Mhmm. Gotcha.
Thank you.
Benjamin Good
28:58 - 29:32
So when we look at it, one of the things that we wanted to see from the research is how a platform impacts productivity. We can see here in the chart that when a platform is in play in the respondent's organization, there is a nice bump in productivity.
And it's about an 8% bump in that case.
Leo Vasiliou
29:32 - 29:35
I mean, that visual is crazy. Sorry.
Go ahead.
Benjamin Good
29:35 - 32:09
Yeah. No.
It is really pretty striking, I think, to say that when we have a platform, it really does have a positive impact on productivity. The next thing that we wanted to get a view into was, like you said, platform engineering as this new, emerging area, but it's been around for a lot longer.
We just haven't necessarily called it platform engineering. So we wanted to get a view into how the platform impacts performance over time.
You can see here that when you have a platform for less than a year, you get a bump. And in that one-to-two-year range, you see another bump in organizational performance.
But then you have this dip in the two-to-five-year range of platform age, and then it comes back. On the whole, though, we can see that the platform does have a positive impact on organizational performance.
Okay. But it's not all, you know, sunshine and flowers, from what we saw in the data.
So, dun dun dun, there's definitely some upside, but we did see some negatives in here.
And we're not exactly sure why that is, but we'll dig into it a little bit here. The first thing was we saw a 14% decrease in change stability.
What that means is that when respondents said that they have a platform, their changes were more likely to need rework. So they needed a rollback or additional changes or something like that, is what we saw.
Again, we don't know exactly why that is, but that was an unexpected downside there. These are some of the things that we think could be the reason for this, but we don't really know.
The survey doesn't quite give us that level of visibility. So it's one of those open areas where I hope to be able to do more research.
But it's one of those things where it's great for us as a community to be discussing these things and see where the challenges could be.
Leo Vasiliou
32:09 - 32:42
So: the ability to quickly remediate enables teams to make riskier changes; feedback or testing may be immature; and so on. Again, these are hypotheses.
Right? We're not saying that the platform is being built to counter preexisting challenges with change stability and burnout. What is your experience? So I would definitely say, I mean, hopefully y'all follow up in next year's research.
But if anyone else has any potential hypotheses, type them in the chat.
Back over to you, Benjamin.
Benjamin Good
32:42 - 33:27
Yeah. Well, I think that, as we look at these, the platforms could be enabling us to do more testing, do more experimentation.
Maybe the platform is there because we're trying to solve some of these other challenges, and maybe we just don't quite have everything built into the platform yet. It's pretty fascinating.
But I do think that this is a pretty accurate thing when it comes to platform engineering, which is: how we build the platform isn't necessarily exactly how it will get used. I think this is a great photo from John Allspaw.
Leo Vasiliou
33:27 - 34:47
I agree. I agree.
So, Benjamin, thank you for giving us some insights on platform engineering, especially for me, for the research that we do, and how people should think about it as it pertains to other types of engineering, right, site reliability. So moving on: developer experience, right, closely related.
I guess I'll go back to all this talk about AI just to say that software still doesn't build itself, even when assisted by AI. People build the software, and their experiences at work are a foundational component of organizational success.
Alignment between what the devs are building and what the users need allows employees and orgs to thrive. At least that's the general idea.
Developers are more productive, less prone to burnout, and more likely to build high quality products when they build them with user centricity in mind, because they know what it's like to be a user of a platform. All that to say that that, in turn, creates better experiences for the people they build for.
True or not true? I feel like you're about to tell us.
Benjamin Good
34:47 - 37:00
Well, let's dig into it a little bit and we'll see where we end up on it. I think this is one of the things that we've seen in previous years, and it's also true in this year's research, which is: when we put the user first, pretty much everything else falls into place.
It's not a guarantee that everything is gonna fall into place, but when you keep the user, that user-centric mindset, first and foremost, it helps with lots of things. So what exactly does that mean? When we talk about user centricity, we're talking about incorporating user feedback into the software that we're building.
We're reprioritizing features based on feedback. We're trying to understand what users want to accomplish, and we're really treating that as the top priority.
And I think it kinda goes back to that picture that we saw of the path. The same thing is true for our end user products as it is for, say, a platform.
When we keep the user at the center, then we have much better outcomes. So what does that look like from a data standpoint? This chart is a little bit heavy.
So I'll take a few seconds here and try to break it down a little bit. Each of the colored lines is a measure of user centricity.
So you can see that the yellow is 5.6 and the orange is 9.4. As we increase the level of user centricity, what we're seeing is that product performance, how well our software works, improves, and our delivery throughput also improves, just by changing the level of user centricity.
I think this is a really powerful chart, and it helps to center us around the idea that users are key.
Leo Vasiliou
37:00 - 39:01
I agree. I agree.
Like you said, maybe it's not gonna fix everything. Right? But put users first and other things kinda fall into place, which I think is a good segue to a piece that I had wanted to talk about as part of this.
And that is, this is how we opened our piece of research this year. We asked, do you believe... or, actually, speaking ostensibly about web performance and application performance, things like that, we said "slow is the new down."
It means that if something's really slow, it might as well be down, in essence. And as you can see, most people agreed with that expression, even though the number of people who said they agree was far higher than the number who said they'd even heard of it before.
But the reason I'm showing this slide is because the second most popular... sorry,
the second most popular answer choice was "performance should be tracked against an objective."
And that got me thinking about this next one here, speaking about the idea of an experience. Obviously, our survey and our report are called the SRE survey and the SRE report. So the fact that SRE was the number one answer for what your org should prioritize over the next twelve months should not surprise anybody, which means, in my opinion, that the second most selected answer, establishing experience level objectives and service level objectives, is really the number one answer, if you kinda take it in that little piece of context.
So that's why I thought it was fascinating and it helped reinforce what you were talking about with user centricity.
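To make the "performance should be tracked against an objective" idea concrete, here is a minimal sketch of how a latency-based service level objective might be evaluated. The 500 ms threshold, 99% target, and sample data are hypothetical illustrations, not figures from either report.

```python
# Minimal sketch of evaluating a latency-based service level objective (SLO).
# The threshold, target, and measurements are hypothetical, not report data.

THRESHOLD_MS = 500      # "slow is the new down": responses slower than this count as bad
SLO_TARGET = 0.99       # objective: 99% of responses should be under the threshold

def slo_compliance(latencies_ms: list[float]) -> float:
    """Fraction of responses that met the latency threshold."""
    good = sum(1 for ms in latencies_ms if ms <= THRESHOLD_MS)
    return good / len(latencies_ms)

# Hypothetical monitoring samples (milliseconds)
samples = [120, 340, 80, 900, 210, 450, 1500, 95, 300, 410]

compliance = slo_compliance(samples)
error_budget_delta = compliance - SLO_TARGET  # negative means the error budget is burned

print(f"compliance={compliance:.1%}, target={SLO_TARGET:.0%}, "
      f"budget delta={error_budget_delta:+.1%}")
```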
Benjamin Good
39:01 - 39:01
And.
Leo Vasiliou
39:01 - 39:15
Then we're getting close. So just a couple more slides.
But, Benjamin, maybe, in addition to this, you could just talk to us a little bit about stability, right, of the organization, not of the system.
Benjamin Good
39:15 - 41:00
Yeah. And before I switch to that slide: we were talking before the webinar about that "slow is the new down" idea.
That was one of the things that really resonated with me as I went through the SRE report. And I think it's spot on with what users expect today.
So building around that is definitely a user-centric thing. When we talk about stable priorities, I think it's important to keep in mind what exactly that means,
and the impacts of it. When we have unstable priorities and we're switching from one priority to the next, what happens? What we see is that it decreases productivity and increases burnout, which I think most everyone on the webinar here would agree with, since we've likely experienced this in our day to day.
So what do we do? We stabilize those things so that the people working on our software don't constantly have to change what they're working on or shift priorities. There's a bit of a nuance in there, in that being user centric helps to increase stability.
You're keeping the user first, and that's a stable priority for everyone to anchor on. And like I called out on one of the previous slides, the features may get reprioritized, but the core priority of serving the user stays the same.
Leo Vasiliou
41:00 - 44:08
So, Benjamin, you know what this makes me think of? Well, before I get to that, how long have you and I known each other? Not that long. Like, three, four months.
Right? We happened to meet and we started talking about, hey, maybe we can jam on some of our findings.
I bring that up to now say that it's absolutely fascinating to me that last year, with the respective teams, you know, the DORA team working on the DORA report, the SRE team working on the SRE report, it's absolutely fascinating that we chose some of the exact same topics even though we didn't know each other. AI excluded, because everybody's researching that.
That's like, you can't breathe without somebody sneezing the word AI. Now, having said that, that brings me back to this: let's take a peek, because we also researched organizational priorities regarding stability.
So let me give this a quick read. What this says is, when people said they never, seldom, or sometimes felt pressure to prioritize releases, their sentiment trended toward saying that they felt the priorities of their organization are stable.
When people said that they often or always felt pressure to prioritize releases, their sentiment trended toward saying they felt the priorities of their organization were unstable.
And two things. One, I felt this was so important to corroborate our research, which was completely independent, and it's pure coincidence that, again, we came to similar findings, so I wanted to reinforce what you were saying.
And for all those business leaders out there listening, please take special note of what we're trying to say here. The second is: priorities are stable until they're not.
Right? So my thought, in addition to what you said on the previous slide, is to go back and audit your internal engineering capabilities, or maybe practices, going back to a comment I saw at the beginning, and make sure that they are reusable and can be applied even though the priorities may change. And I think that is also an amazing use case for platform engineering.
So those are my thoughts on what I wanted to say. And I guess, Ben, before we work to close out the webinar and address some of the questions and comments: anything else on this topic or anything else we've discussed so far?
Benjamin Good
44:08 - 44:47
Yeah. No.
I think it's really cool how the research from both angles comes together, and it shows the aspects of what we're doing in SRE and software delivery, you know, the DORA research, and DevOps practices and platform engineering practices really coming together. And, you know, lots of times people are like, oh, they're competing interests or they're different.
No. Really, we're all trying to solve the same problem just in, maybe slightly different ways, and I think it's really cool to see it come together like this.
Leo Vasiliou
44:47 - 47:17
Agreed. Agreed.
Alrighty, team. So we're gonna go ahead and, work to wrap this up.
But first, not without your obligatory advertisement slide for the sponsor of this webinar, Catchpoint. So thank you, Catchpoint.
Obviously, that's where I work. But what we'll say here is: when your application creation and delivery is becoming more and more Internet centric, and let's face it, it probably is, and you're relying on Internet dependencies, Internet performance monitoring from Catchpoint gives you the visibility you need to reduce the frequency, duration, and impact of incidents caused by Internet stack disruptions.
And now I can switch to my sincere comments: the labor-of-love research that people like Ben and I do is meant to give you the data to help you make important decisions. And when I was thinking about the different use cases that we could potentially help with, I wanted to go ahead and open the poll.
And even though we really didn't talk about Catchpoint at all, I felt that these were some of the use cases that we could help people with. So feel free to take a peek at them and select any that may apply.
Otherwise, Ben, let's take a moment to go to open discussion if there are any comments or questions to address. And before we do that, I will show this last slide to give a summary of the topics that we wrote about in our respective reports.
Right? Delivery performance, AI, platform engineering, dev experience, web performance, toil, priorities, research, etcetera. Now what I'll ask people: if there were some questions in the chat that we didn't see or didn't get to, please type them again.
But, Ben, one that I did happen to catch: somebody asked, for the survey, what type of sample size, you know, number of respondents, were you working with to do these types of investigations and correlations, the cluster analysis, et cetera?
Benjamin Good
47:17 - 47:28
Yeah. I honestly don't recall the number off the top of my head.
If you allow me a couple of seconds, I can pull up the report and look at the demographics.
Leo Vasiliou
47:28 - 49:17
Sounds good. Alright.
So let's see here. Training and learning.
What specifically did you... okay. Right.
Okay. So the essence of the question is: what did you essentially talk about in the training and learning section? I had the opportunity to work with Salim, and he kinda opened my eyes a little bit to how to think about certain things.
We put together a series of questions, based on their feedback, about whether or not organizations invest in and give their workforce the tools to become better through training and learning. The one thing I want to say on that particular topic is we embedded some AI questions into the training and learning section.
And I know we focused on showing some slides where sentiment varied by your level of managerial responsibility, aka rank. But in a rare alignment, one of the questions was, "I desire to be trained on AI."
I forget the exact words, so don't quote me. That was a flat trend line, meaning all ranks had nearly the exact same sentiment.
So that's what I'd like for people to take away here: that desire to be trained and enabled on AI was universal across all ranks of the organization, which very rarely happens when we break down our data by levels of managerial responsibility. So, Benjamin, did you find what you were looking for? If not, no worries.
We can...
Benjamin Good
49:17 - 49:20
It was nearly 3,000 responses.
Leo Vasiliou
49:20 - 49:47
Okay. 3,000 responses for yours.
Ours, I believe, was just over 400. I'll make a comment there about the fewer responses.
If I unintentionally used the word causation, I didn't mean it. We'll just say it kind of generally trended, right, because it's a smaller sample size to be working with.
But still, good enough to start some discussions.
Benjamin Good
49:47 - 50:15
And in the DORA report, there's a whole section on the demographics, and we break down, of those 3,000, what industry, what organization size, what the respondents' job categories were, those kinds of things. Super detailed information, but I'll refer you to the demographics section of the report.
So, yeah.
Leo Vasiliou
50:15 - 52:00
Sorry, Benjamin. I'm not laughing at you.
Somebody asked if my voice was available to be rented out for other webinars. So, guys, we got some jokers on this one.
Okay. Incidents and stress.
Okay. So I think what's being asked is, there were a couple of topics we didn't really mention in this webinar, so I think that's what's going on here.
Yeah. There was a lot of stuff, so we tried to cover some of the major highlights and constructs for people to step upon.
But incidents and stress was the idea of, like, how many incidents have you worked on over the last thirty days, I think is the way the question was worded. And then we revisited some of the questions that we asked, I believe, in our 2019 survey, which was the second survey that we did, about levels of stress.
We expanded in this year's research to talk about whether people felt stress levels increased or decreased not only during incidents but also post incident. And I don't think I'll ever forget this number: 14%.
Most people said stress levels were higher during incidents than after incidents. But a subset of 14% of people said that their stress levels were higher after incidents, if you can believe that.
So I just thought that was an interesting little nugget there. Alrighty.
I don't see any other questions; it looked like there were just a couple. Benjamin, any closing comments or thoughts for... excuse me.
Any closing comments or thoughts for all of us?
Benjamin Good
52:00 - 52:27
Yeah. I mean, thank you so much to Catchpoint for sponsoring this year's survey.
Like I said, we can't do these things without, you know, folks like you supporting us. So really, really appreciate that.
Super interesting things going on in the industry as a whole, everything from how we build and deliver software to how we run it. So it's very interesting stuff.
Leo Vasiliou
52:27 - 52:55
Agreed. I'll wrap up by saying what I said at the very beginning.
Thank you so much to everyone for giving us some of your precious time today. Enjoy the rest of your day.
Find us on LinkedIn if you wanna message us, maybe talk about ideas, jam on next year's research. Otherwise, since I'm not seeing anything from our maestros behind the scenes, I believe that is a wrap.
Thank you so much, Benjamin. We'll see you backstage.
Thank you. Cheers, team.