Webcast 60 min

How & Why to Make a Financial Case for Your Next Storage Project

Tom Grave


WinFirst Marketing

Webcast Transcript

Marisa Cabral: Good morning and afternoon to everyone. My name is Marisa Cabral. I will be your moderator for today’s webinar. Thank you so much for joining us today.

This webinar will be focused on how and why to make a financial case for your next storage project. It will be presented by Tom Grave, president of WinFirst Marketing.

Before we get started, I would like to mention that this presentation is being recorded, and you will receive an email with the recording from BrightTALK at the conclusion of the webinar.

Lastly, feel free to type in any questions you may have during the webinar, and we’ll be able to address these at the conclusion of the webinar as well.

With that, I would like to introduce you to Tom Grave. Tom, take it away.

Tom Grave: Okay. Thanks very much, Marisa. Thanks for having me. Thank you, everybody, for attending. Thanks for sharing some of your time today. I know everybody is quite busy, but it is actually quite an exciting topic for me, believe it or not. So I’ve been working in the IT profession for over 20 years, sad to say, but believe it or not, doing financial analyses as part of a storage decision is a really critical component, and it can be exciting. And so I’m glad you’re here to share the time with us.

It’s counterintuitive to say that because as an IT professional, I’m sure you didn’t go into the field for your love of spreadsheets or your love of finance. But the reason it’s exciting goes back to something I learned early in my career. Actually, my first job, my very first boss, he was a partner at PricewaterhouseCoopers, and I was a junior analyst. And he destroyed my work in front of my peers, and it was one of those teaching moments. And then he pulled me aside and said something that really stuck with me, and sticks with me to this day, which is that everything we do has to be shown to have a positive ROI for our client. And that was kind of intuitive. I understood what he was saying.

But then here’s the really good part. He said, “So the magical thing is [unintelligible 00:02:12] correctly.” And he explained to me that when you can show a positive ROI on a big decision, the budget limitations can fall away, and it really becomes a conversation about making money. And a lot of the limitations that you might think you’re facing sort of disappear because, after all, if you can credibly explain that you’re going to deliver a return on investment, the question is no longer how little can I give you to get by. It becomes how much can I give you. If you can credibly show me you’re going to give me a 50 percent return on investment, I’m trying to find ways to give you more of my capital to see that return.

So now, naturally, nothing’s guaranteed, and there’s a lot of limitations, a lot of constraints. But when you put these things in financial terms, it really can have a dramatic impact. So, again, that’s what makes me excited, and I hope to share some of that with you today, and then make it a little bit more tangible.

It’s always good to start with sort of a universal truth, which is another thing I learned early. Not from the same partner, but from another [unintelligible 00:03:18]. So start with a universal truth, get the audience to align themselves. And nothing beats this one: “Nothing can be said to be more certain than death and taxes.” I think we’ve all heard that. I think we’d all agree. I’m not sure if you all know who that’s typically attributed to. A lot of people have said it over the years, but it goes back to one of the Founding Fathers, Mr. Ben Franklin, back around the time of the Constitution. So even though we broke away from England, trying to escape all those taxes, he knew you’re never going to escape.

And so we can modify that. There’s one more certainty that I think everybody can agree to, which is death, taxes, and storage growth. So let’s attribute that to somebody as well. Let’s call it “anonymous.” Anonymous told me this recently.

But – so we’re all familiar with this. I mean, this is the world we live in. And like I said, I’ve been working in and around the storage world for 20 years at this point. And we’ve heard some of the stats, we’ve seen some of the macro stats, right. Like you see the hockey sticks going up and to the right for infinity, like these hyperbolic curves over time of how much storage growth there is.

You hear stats. One good one is about all the data created in the course of humanity, from the dawn of time until now [unintelligible 00:04:40]. I don’t know how they do it, but it’s, again, kind of counterintuitive, kind of cool. But what’s it mean for us? So make it more tangible, and bring it down to sort of the IT environment, the datacenter level. And when we do that, we tend to find what amounts to a storage cost crisis today.

Now, for the folks on the call, you might not see your current situation as a true crisis, so I don’t mean to paint with a broad brush for everybody. You know, it might not be a fire situation, a crisis that bad, but when you survey the landscape, and you talk to a lot of IT professionals and analysts, and you look at the trade magazines and trade publications, we can see that we’ve reached a point that can be considered a crisis because of the amount of gear we’ve accumulated over time, the amount of capacity we have to handle, and all the things we’re doing in IT.

So let’s look at Gartner and go back – all the way back to 2016, way back before the last few years. And at that point, Gartner said the average enterprise had about 1.5 petabytes of total digital storage across the landscape. That’s in the datacenter. That’s at the remote sites. That’s archive versions of data. And again, for the folks on the phone, this is an average. In some cases, you might have much more than that. You might have only a tenth of it. But the story is pretty much the same: a few years ago you had a certain amount of data, and we’re seeing that grow tremendously.

So by the end of 2018 – and again, we’re approaching the end of 2019, so I’d imagine it’s a bit more than that now – it grew from 1.5 petabytes on average to 9.7. Even with all the dramatic growth we’ve all seen, that’s an amazing growth rate. In fact, that’s a 569 percent growth rate, so more than fivefold in just two years. And you can imagine if things stay the same, project a few more years out, if everything stays on pace, the average will be at 30 petabytes in a couple of years.
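As a quick sanity check on that arithmetic: the rounded figures (1.5 PB to 9.7 PB) give a growth rate just under 550 percent, so the quoted 569 percent suggests an unrounded 2016 baseline near 1.45 PB. That unrounded baseline is an assumption on my part, but it reconciles the numbers:

```python
# Sanity check on the storage-growth arithmetic quoted in the talk.
# The 1.45 PB baseline is an assumption: it is roughly 1.5 PB as rounded
# in the talk, and it reproduces the quoted 569% growth figure.
start_pb = 1.45   # average enterprise storage in 2016, petabytes (assumed unrounded figure)
end_pb = 9.7      # average at the end of 2018, petabytes

growth_rate_pct = (end_pb - start_pb) / start_pb * 100
print(f"{growth_rate_pct:.0f}% growth over two years")  # -> 569% growth over two years
```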

And so at a certain point we are going to reach a tipping point. Even though it’s been easy to sort of throw money at the problem – just add more capacity to the environment, add more types of storage devices – at a certain point we might be looking for a new solution, and I think that’s where we are.

So just to make it really tangible to paint the picture, which is the context for what we’re talking about today, we can start with a baseline. So let’s say on the left side this represents 2016. And again, it’s a representative picture, but you’ve got a set of different storage devices on the bottom – high-end arrays, midrange, you’ve got some Flash devices, which were quite popular then, and they can only become more popular; secondary storage; online storage was big then, getting better now. And we even put a little NVMe in there for performance.

So if that represents 1.5 petabytes, we’ve got the baseline on both sides, how do we grow more than fivefold in two years? Let’s think about it. Well, first there’s organic growth, which is normal. Companies are growing. If you have an application that’s popular, the dataset is going to grow for that application. You’re going to add new applications. So you’ve got some organic growth, which is just the natural growth of more data over time for a given application. And that’s going to drive more servers up top, more data down at the bottom, and increase the number of devices.

App performance drives a lot of consumption, so we saw over the last couple of years the rise of Flash storage. Not just the rise, but a real explosion of it. And NVMe is a market unto itself that’s really exploded. And then you’ve got new applications, so you put some more servers up top, and that has a ripple effect across everything. You need some more high-end storage, some more Flash, some more midrange. And everything you’re doing on primary storage is, of course, creating multiple copies on the secondary storage front.

And if we stop there – let’s call that organic growth – maybe we’ve doubled the amount of storage over time. But we know it doesn’t stop there, and we’ve seen something more significant, which is what we’re calling strategic initiatives. So these are going to be some buzzwords, some terms that we’ve all heard, and maybe we use them, and maybe we kind of roll our eyes when people say them because they’ve been perhaps overused. But they do create a real phenomenon inside your IT environment.

So everybody has a definition of digital transformation. I mean, there are books about it, so we’re not going to spend too much time defining it. But really, I think of it as sort of the integration of online technology into all areas of business – the way we’re interacting with customers, with our partners, with our employees. And for different organizations it goes further than that. We’re creating new products and new services that are 100 percent digital. Obviously we have organizations that are only digital, like Netflix. They don’t have any physical products whatsoever.

So that’s digital transformation in a nutshell. But when you think about what it means as organizations go through it, moving everything to an online basis and making that accessible, that drives storage up and down the stack. Cloud journey is another – again, it’s a buzzword, probably overused – but it’s a related term. It’s distinct in that it’s digital transformation involving the cloud; everything is leveraging cloud computing, which is a big enabler. And ironically, a few years ago, analysts were predicting that perhaps the rise of the cloud meant the decline of the datacenter. I think most people aren’t talking that way anymore. I mean, I think they’ve both grown dramatically. The cloud, of course, has exploded, but it hasn’t limited the amount of storage that people are deploying onsite.

And those things feed on each other. You have hybrid clouds. You have private clouds. You might start an application in the cloud in a test environment. Once it reaches a point of production, you’re bringing it back in-house. Anytime you’re doing that, now you’ve got a new app, you’ve got new storage requirements, you’ve got secondary copies, and so forth. So these things are synergistic, but one thing is certain: neither of them is leading to a decrease in the amount of capacity.

So you’ve got other big phenomena – and these are all big themes in the market. So IoT, the internet of things. We’ve got millions of devices, each one generating billions of bits of data over time. All of that is coming into the organization and needs to be stored and analyzed. New apps are created to deal with it. Even new professions and new organizations – whole teams of data scientists, in some cases – to capture, analyze, and sort of productize some of that data.

Consumerization of IT is a similar phenomenon in terms of things happening in the real world that are impacting the datacenter. That’s a phenomenon where business users have raised their expectations of what they expect from IT because of what they can get in the consumer world. We can all sign up for a Gmail account for free and get effectively unlimited email storage. You can sign up for Dropbox and get two gigabytes for free, share information with anyone you like, and access any of your information from any device at any time. And on and on. You have endless pictures in your pocket on your iPhone with iCloud.

And so the expectations set on that side are coming into the datacenter and driving a ton of storage as well. So each time you’re adding to that, you’re adding a ton more gear on the right side. And then you even have secondary copies – I put that down in the strategic initiatives as data sharing and copies – because we’ve seen the secondary storage market really explode as well. And that’s driving, again, new applications, business analytics, and additional things.

But overall, you step back and you look at the picture on the right versus the left, and you can ask yourself: are we approaching, or have we reached, a tipping point here where we want to look for some fundamental changes? And I think a lot of people are saying yes. I mean, maybe not every organization at this point, but more and more we’re seeing that we are at a tipping point where people are looking for fundamental architectural solutions, perhaps, to the challenge of this explosion of data growth that we know is not going to end.

I mean, we’re – and many people would say we’re on that digital transformation stage. We’re early in our cloud journey. Fast forward five, 10 years, and imagine what the world looks like then, and what the implications are for all of us.

So that’s the context. And then bringing it back to the exciting topic of financial analysis, this is the context for your next big storage decision, or any IT decision really: over the last couple of years we’ve all seen tremendous growth, and we haven’t necessarily seen budgets increase to match the speed and the rate at which the growth is happening. And now we have to frame whatever big decision we’re recommending, understanding that that’s the context. It’s also the context where, if you’re looking at something more strategic – if you’re recommending change that’s more significant than just incrementally adding capacity within an existing solution – the business case and the financial case become even more important.

So that’s all context. And that brings us to the sort of setting up the main topic of the day that we’ll cover in the next 30 minutes. And really what we’re talking about here is the intersection of the IT organization and the finance organization. And if you just search for those terms, it’s amazing. You know, my radar is sort of up for this because this is a topic I talk about and write about a lot. But if you just pay attention to the popular press, you’ll see all kinds of articles on this all the time.

So this is just from two weeks ago. I saw this just the other day. According to a recent Forbes Insights report – they did a survey, mostly of folks in finance working for enterprise companies – 96 percent say that CFO and CIO collaboration is critical. And then there’s a but, right? However, almost all of them, 89 percent, say there are significant barriers that prevent that collaboration, which can be – I don’t want to say tragic, but it can be a big challenge.

Now, this is specifically talking about two individuals, the head of finance and the head of IT – or the head of the information organization. Those are just two people, right? But really you can look at it at an organizational level. The IT organization and the finance organization need to find these touchpoints of collaboration because, after all, we’re all here to serve the high-level business goals, or the organization’s goals. It doesn’t necessarily have to be a for-profit enterprise. But whatever the organization’s high-level strategy and mission are, both finance and IT are there to serve that high-level mission.

And of course, the point of today’s topic is making sure that the recommendations you’re making and the projects you’re planning are fitting into that framework. And that’s actually a good poll question. So what’s cool about our webinar today is we can do live polling of the audience and get some statistics.

I’ll turn it back to Marisa to help us with our first poll question here.

Marisa: Yes. So our first poll question of the day will be when is the last time you had lunch with a colleague from the finance department? And it could be A, this week; B, within the last month; C, within the last year; D, never; or E, we don’t have a finance department. So go ahead and put your answers in. And we’ll give it about 30 more seconds to get some answers in here.

All right. Great. Thank you, everyone, for participating.

So it looks like we got about 50 percent of the people saying this week, 25 percent within the last year, and 25 percent never. So very interesting. I will give it back to you, Tom.

Tom: Hey, that’s great, 50 percent this week. Okay. So we have an advanced audience here. Maybe they responded to the topic of the day. I can tell you that’s an unusual response, at least from what I’ve seen when I’ve asked other folks about this. Establishing a peer relationship with a counterpart on the finance team, at whatever level you’re at within your organization, is crucial. And if you’re actually having lunch with these folks – and that’s the reason we asked it that way, because one question we could ask is, “Do you know somebody?” But that doesn’t give a great indication of whether you can collaborate. Lunch is a big step forward in a business relationship.

And so as you’ll see, when we’re putting our IT decisions into sort of a financial model, having those connections becomes critical, so that’s very good news. And you don’t have to be an accountant, right, but doing some small things well can go a long way, as you’ll see here. And so this sets us up for making the financial case and building a financial case for success. And we can think about what that means in terms of two sort of archetypes, I guess, of the folks who might be asking for money. At the end of the day, you’re asking an organization to spend some money and invest in your recommendation.

And would you rather be this gentleman, who is sort of pleading, just coming to the table saying you need this much money – “I need X to buy Y” – not necessarily a position of strength? Or do you want to be this team of gurus? I mean, just look in their eyes, right? First of all, it’s a good idea to bring some backup to any big recommendation. So the woman up front here, the young lady, knows to bring her financial gurus, and she’s making the case. It’s the same recommendation. “I need X to buy Y. It’s going to be less than half of what we paid for Z, and it’s going to deliver Q savings over the next three years.” [unintelligible 00:20:56]

So we’re not doing a poll question on this, but I would guess that more of us will see value in being on the right-hand side versus the left-hand side.

Was there a question come in? Okay. I thought I heard a question. Okay. I’m not sure if you’re on mute there, but all right.

So another fun way to get started here is to think about a very popular show probably most of you are familiar with, which is Shark Tank on CNBC. It’s been around for a few years. It’s a cool show. It’s another reality show, with some things played up for dramatic effect for TV, of course. But what I like about it is that these are true investors. I mean, now they’re TV personalities and stars unto themselves, but they’re putting their actual money into the entrepreneurs’ visions. The entrepreneurs roll in themselves, present their product pitch, their idea, their business, and get evaluated.

And of course it’s just a snippet of how it actually works in the real world. But it’s kind of cool. And of course, every show has sort of a villain, who is also the most interesting character. In the case of Shark Tank, it’s this guy, Kevin O’Leary, also known as Mr. Wonderful. He’s the best character on the show for me because he’s like the Simon Cowell of Shark Tank. He’s the one who’s the fiercest. He’s the most critical. He’s the one who’s the fastest to point out all of the flaws in a business plan. But he also seems to be the most successful. He gets the right deals, and he knows how to put things together.

And lucky for us, he’s published his rules for success, so we can leverage them. And these can be translated. I mean, these may be obvious tips, but they’re also good tips for anybody in business.

So it starts with being credible. And especially when we’re talking about building financial cases for an IT project or a storage project, if you lose credibility then everything’s out the window. Invest in the right markets. Focus on your cash flow. Make sacrifices. Know your weaknesses. Know how to execute. And focus on making money. If you’ve ever watched the show, “How do I get my money?” is always the question he has for the entrepreneurs, which is great because typically the entrepreneurs are all over the place, talking about how cool their products are. And he always thinks of the bottom line, which is a good metaphor for us in IT.

And so these can be – let’s translate these to your IT pitch because I think they line up nicely, but maybe we’ll restate them. So be credible doesn’t change. And you’ll see as we’re going through the slides that that is the most important thing.

You know, invest in the right markets. Align with strategic initiatives. You want to put your recommendations in the framework of not just the overall mission and the vision for the IT team, but to a company or an organization at large.

Focus on cash flow. Focus on OPEX, operating expenses. We’ll talk about that in the next few minutes.

Make sacrifices. We can think of a sacrifice as a tradeoff. In business we make tradeoffs. If we’re investing in A, we’re not investing in B. If we’re – if we have Sam spending his time over on project Q, he’s not working on project Z. So every decision involves tradeoff, and it’s good to articulate those, so that people understand context.

“Know your weaknesses” translates to understanding risk. And “know how to execute” – it’s not just knowing, but showing, demonstrating you can execute, which is laying out the timeline and the plan, and putting the overall recommendation within that kind of framework of execution.

And of course, how do I get my money, which is demonstrating ROI, back to my kind of first point that I learned early in my career. But going back to that point, it’s – what’s cool is when you do it well, it really does allow you to do kind of extraordinary things because you’re no longer limiting yourself – at least at the beginning you’re not limiting yourself to some constrained budget because if you’re showing an ROI, you can make the case to invest more.

Now, it’s not always perfect in the real world, but it doesn’t hurt, and it’s a good starting point. Even if you still face a lot of limitations and a constrained budget, when you put things in those terms, there’s still a tradeoff to be made, but your projects are going to stand out and, more often than not, get funded.

And so that brings us to poll question No. 2, which is – which has to do with your plans specifically. And so back to our moderator for poll question No. 2, and then we’ll take it from there.

Marisa: Okay. So for poll question No. 2, what is the timeframe for your next IT storage infrastructure project that will require a financial case? Let’s start voting. And I’ll go through the different options. So A is within the next three months; B, within the next six months; C, within the next year; D, by the end of 2020; or E, currently not planned.

And I’ll give it about 30 more seconds. Thank you for participating in this. That’s great.

Okay. So we have quite the mix here. We have 20 percent within the next three months; 20 percent within the next six months; 40 percent by the end of 2020; and 20 percent not currently planned.

Tom: Okay. Very cool. All right. Excellent. So 40 percent near term, and then a little bit longer term, and then some not planned. All right. That’s great. Thanks, everybody, for participating. We like to make it interactive. And don’t sit back and relax too much, but at the same time, that’s our last poll question for now, and we’ll move on from there. But again, it’s just helpful for us to know where you stand and understand if we can provide you some takeaways as we go into it. And I’m hoping we can.

But we’re going to cover the high-level points, obviously, in this brief webinar. We can’t go into a great amount of depth, but my hope is to cover the basics, give you some good resources and materials, and sort of point you in the right direction.

Going back to the lunch question, the fact that a lot of you already have contacts on the finance team is great. For those who don’t, one of the biggest pieces of advice is to build that relationship, because once you’re building these cases – and finance is, basically, the language of business – you want to make sure you get some feedback ahead of time from folks who are living and breathing it.

And in a lot of cases it’s sort of outside our comfort zone in IT. Again, it’s not the reason we got into IT in the first place. And it’s not a comment on our intellect. It’s not that anybody lacks the ability to do it, but it’s not where you spend the majority of your time. Just like anything else, where you spend more of your time is where you’re going to develop more of your core competencies, and typically this is outside that zone.

But let’s start with TCO, total cost of ownership. And this slide is – the next couple of slides get into some detail here. But we’re going to keep the seven rules for your IT pitch on the side there, make sure we’re always keeping those in mind.

And total cost of ownership is crucial because it touches so many of these costs, and it sets the foundation for any good case to be made. So in terms of the foundation, it sounds intuitive, but let’s define it. It’s the sum of all costs of purchasing, deploying, and operating a given system – you know, think IT system – over the course of a defined time range. And typically, we’re seeing that in a three-to-five-year time range, depending on the project or the system.

Now, what’s important is thinking of it as a decision-making tool. In a vacuum, it doesn’t make a lot of sense. If you tell me the TCO is $1.5 million over three years, that’s a number, but the question is: compared to what, right? So it’s really a comparative exercise, and it gives us a standard metric with which to weigh alternatives. It doesn’t give you the answer. You can make a recommendation, and in fact, in many cases you’ll want to make a recommendation for something that might have a higher TCO. But you’re able to explain the benefits of spending more money and seeing the payoff, the return on that investment.

So it’s not the answer, but it’s the foundation. And because we’re comparing – again, the question is compared to what – typically you want to start with the baseline, which is your status quo environment. You want to do a very thorough job focusing on the system or the operation that’s the target of the new recommendation, and quantify the status quo: what combination of solutions are you using today to get that particular job done? That’s the thing you’ll compare to the future state. So starting with a quantification of your status quo is crucial.

And when you do that, this gives us one means of apples-to-apples comparison. We all know everything is different, right? Any two storage systems, even if they’re doing similar things, have different features and functionality, all kinds of different parameters. And so it’s very difficult to get a true apples-to-apples comparison across all features and functions. But one thing you can achieve is the TCO. That is a standard metric that people tend to understand, and it creates a good baseline: you’re layering on top of that baseline, and then you’re getting some of the qualitative nuances above it.

And if you go back to the seven rules, it directly addresses three of the seven. You’ve got be credible, focus on OPEX, and demonstrate ROI. But credibility is the first one, and I’ve seen it too many times: if you put together a financial analysis that starts out without much credibility, then sort of everything else goes away, and that’s why it’s always No. 1. And I’ve got a funny slide on that next.

But now let’s show what it actually – yeah, how to put it together. And again, this is a detailed chart, but you’ll have access to it after the event today, and you can always get in touch with DataCore, or me as DataCore’s consultant to help you with it after the fact.

So TCO is CAPEX plus OPEX. CAPEX means capital expenses, which are all of the costs to buy, upgrade, or add to your fixed assets – in our context, that’s mainly hardware and software. And this is the easy stuff. How much do you cut the check for to bring in the new system? And then over the course of the next three to five years, when you’re adding software or adding more hardware to expand, what does the projection look like? So that’s your capital expense, shortened to CAPEX.

OPEX – and we’ve referred to this a couple of times already – is your operating cost, your ongoing cost for running the product or the system. And OPEX is a little bit more challenging, as you’ll see. It includes the maintenance. That one’s pretty easy: when you purchase a hardware or software solution, it comes with a defined maintenance cost that’s easy to project over time. But it also includes datacenter costs – the infrastructure, the racks, cables, power, cooling, bandwidth, and any other overhead. And the labor: how many human resources, people-hours, are going into this over the course of time. And then professional services are in there as well. If there are professional services for the deployment or ongoing operation, those are counted as OPEX.

By the way, we’re leaving aside a very important phenomenon that we’re not really touching on here, which is software as a service – sort of a utility model – which is very important, and I’m sure all of you are consuming some products that way. But for this case, we’re assuming the more traditional deployment within your environment. And so you can see why we’re calling OPEX a little bit more challenging: as you’ll see, a lot of these costs don’t jump out at you the way the size of the check you cut the vendor does. And don’t worry about all the detailed numbers here; once you understand the categories, it becomes pretty easy to set up.

So you – as I mentioned, this is a comparative exercise, so those columns in the middle are the most important here. We’ve got, you know, call it the legacy costs or the status quo in the gray column, and then the column to the right would be the new solution. And this is a generic example, so it’s got detailed numbers just to make the point, but don’t worry about these specific numbers for this presentation.

And so that creates the fundamental comparison. And obviously, you can have many columns, so you can be comparing the status quo to solutions A, B, and C, and in many cases you’ll be doing that. And then you go down the line. So you’ve got your CAPEX and OPEX broken out, and the bottom row is the sum. You start with CAPEX [unintelligible 00:35:55] and the OPEX and total it up. You’ve got your TCO.
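A spreadsheet like the one being described can be sketched in a few lines. The category names follow the CAPEX/OPEX breakdown above; the dollar figures are illustrative stand-ins chosen to land in the same ballpark as the totals mentioned in the talk, not the slide's actual numbers:

```python
# Illustrative three-year TCO comparison: status quo vs. a proposed new solution.
# All dollar figures are made-up placeholders; plug in your own estimates.

CAPEX_CATEGORIES = ["hardware", "software"]
OPEX_CATEGORIES = ["maintenance", "datacenter", "labor", "professional services"]

legacy = {
    "hardware": 900_000, "software": 300_000,            # CAPEX
    "maintenance": 450_000, "datacenter": 350_000,       # OPEX
    "labor": 700_000, "professional services": 100_000,
}
proposed = {
    "hardware": 400_000, "software": 200_000,
    "maintenance": 150_000, "datacenter": 150_000,
    "labor": 200_000, "professional services": 50_000,
}

def tco(costs):
    """TCO = CAPEX + OPEX over the defined time range (three years here)."""
    capex = sum(costs[c] for c in CAPEX_CATEGORIES)
    opex = sum(costs[c] for c in OPEX_CATEGORIES)
    return capex + opex

savings = tco(legacy) - tco(proposed)
print(f"Legacy TCO:   ${tco(legacy):,}")     # -> $2,800,000
print(f"Proposed TCO: ${tco(proposed):,}")   # -> $1,150,000
print(f"Savings:      ${savings:,}")         # -> $1,650,000
```

The bottom-row delta is what carries into the ROI discussion: the comparison only means something relative to the status quo column.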

And the last column is the comparison, where you can show the delta. So in this case, if we look at the bottom line, we’d spend $1.1 million and change on the new solution; whereas, if we stuck with the legacy, we’d spend $2.8 million over time. And so that gives us a cost savings of about $1.6 million.

And then – so that’s good. That’s a nice number. And what you can do is translate those savings into a return on investment, which is a simple equation. The all-in cost of deploying and running the new solution is the base, and over that is the money saved. So it’s the total cost savings enabled divided by the cost to get into it, which is the acquisition cost. And let me put these numbers in. So the $1.6 million is the savings, $1.14 million is the total cost of the solution, and that gives you, in this case, 152 percent ROI over the course of three years.
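The ROI arithmetic can be sketched in a few lines. The dollar figures below are the rounded ones quoted in the example, so the resulting percentage lands a bit below the 152 percent Tom cites, which presumably comes from the unrounded spreadsheet values:

```python
# Sketch of the TCO/ROI arithmetic from the example above.
# Dollar figures are the rounded ones quoted in the talk.

legacy_tco = 2.8e6   # three-year TCO of the status quo
new_tco = 1.14e6     # all-in cost to deploy and run the new solution

savings = legacy_tco - new_tco      # the delta column of the comparison
roi_pct = savings / new_tco * 100   # total savings over acquisition cost

print(f"Savings: ${savings / 1e6:.2f}M")        # Savings: $1.66M
print(f"ROI over three years: {roi_pct:.0f}%")
```

The same two lines of arithmetic work for any number of comparison columns; you just compute the delta and ratio against each candidate solution.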

You put those together, you show – you make this argument, and now you’re having a discussion about an investment. And not only that, but when you’re focused on these savings, and this is a really important point, you have an opportunity to explain how you’re going to deploy those savings. So that’s money saved that goes right into higher-level projects on your list. We all have a backlog of strategic initiatives that we don’t have enough time and resources for. And when you’re able to uncover significant savings with a deployment of a new solution, that becomes – think of it as dry powder that can fund the big projects that are on your list that are either unfunded or underfunded today.

So that’s how you build that out, and we just wanted to spend a little bit of time just kind of going through the details. Not because you’re going to memorize that, and a lot of you probably are familiar already, but just to sort of set that baseline.

Now, OPEX. I’ve referred to it a few times, and it’s hard. It’s hard because it doesn’t jump out at you. It doesn’t come with a price tag. And this is where a lot of people fail. But because you’re here, this is your opportunity to succeed where others fail. And it starts with an example of what not to do. Okay. So this comes from an actual presentation that I saw given. And I saved it a while ago because it’s a great example of what not to do.

And it starts with slide number one: “TCO analysis is more contentious than politics, so please understand this is one TCO framework. There are many, and the numbers can be debated, but feel free to insert your own numbers into the framework.”

So what is this presenter telling us? He’s telling us that we need not pay attention to anything else he’s going to say at this point because he has no confidence in his numbers. The numbers can be debated. You can use your own numbers. Okay. If I’m going to use my own numbers, then I guess there’s not much value in listening further.

But why? I’m not naming names or trying to bash the gentleman who presented the slide in the first place, but the reason he did that is because he’s been burned by it before. People can quickly start to question the numbers and the underlying assumptions, and it can be stressful. One way to get out of that, a bad way, is to say, hey, numbers are numbers, and they can be debated. These are just my numbers, and you can use yours. It’s not very effective.

This guy went so far as to put up this warning sign. “Dangerous cliffs. Watch your children. Pay attention.” So he really emphasized the point that he doesn’t have confidence in his numbers, and that’s not a good place to start. So what you want to do is start building out the credibility. But there are challenges, and we don’t want to mask over them.

So the biggest one is that labor and other datacenter resources, what you might call datacenter overhead, are shared resources. If you have a storage solution in the datacenter, it’s one of hundreds of other products, rack-mounted products, standalone products, all kinds of different devices, and we haven’t even touched on half the other gear that’s going to be in the datacenter.

So how do you determine how much labor goes into that one specific storage solution, not to mention the power and the cooling and the overall utilities and rent and bandwidth? Ultimately, you do have to make some assumptions. Assumptions have to be used to determine what portion of the shared resources are devoted to the operation of the storage solution we’re looking at: both the status quo today, so how much of the costs are going into running and maintaining the status quo, and then, as you project ahead to the new solution, what the consumption of these shared resources is going to look like.

And it is going to be based on assumptions. And you know what they say about assumptions. And if you don’t, I can share some quotes. So this is a cute one. Lemony Snicket, as everybody knows, is the pen name of author Daniel Handler, who wrote a bunch of cool children’s books, including The Austere Academy. A movie was made, A Series of Unfortunate Events, based on those stories.

So he wrote, “Assumptions are dangerous things to make. And like all dangerous things to make, bombs, for instance, or maybe strawberry shortcake, if you even make the tiniest mistake, you can find yourself in terrible trouble.”

It’s very funny. I’m not a good chef, but I did have that experience with strawberry shortcake once, where the smallest drop of egg yolk gets in when you’re separating the egg whites, and the thing’s not going to rise, and you’re left with something dense and rather unsavory. So that’s very true. I can’t speak for bombs, but for strawberry shortcake I’m aware.

I think we all know this gentleman, Albert Einstein. “Assumptions are made and most assumptions are wrong.” So it’s kind of funny. He’s kind of like the first presenter I was teasing, just starting right off the bat saying that these were going to be wrong.

Brian Tracy, a famous Canadian motivational speaker, “Incorrect assumptions lie at the root of every failure.” Okay. Good.

And then finally, this is my favorite of these, Marshall McLuhan, another Canadian. I have a penchant for Canadians and their funny quotes on assumptions. He’s a Canadian philosopher, famous for the saying you’ve probably heard, “The medium is the message.” He’s a pretty interesting guy actually. He predicted the World Wide Web about 30 years before anybody saw it come to fruition. But he said, “Most of our assumptions have outlived their uselessness.” It’s kind of a play on words. So they’ve always been useless, and they’ve outlived even that.

Okay. So now that we’ve explained how challenging these things are, it does rest on assumptions. You can’t avoid it. And assumptions can be challenging.

Augie Gonzalez: Hey, Tom?

Tom: Yes, sir?

Augie: We had a question for you. I thought it might be a good time to answer that before you go a little further. And that was what’s the level of detail? How do you determine the level of detail you have to get into in these analyses relative to the value of the overall project?

Tom: I mean, it’s a great question. And I mean, going back to something like these seven rules that I keep coming back to and sort of the need to go into details, I think the more the better. I mean, it’s a good point, right, because as with anything, there’s tradeoffs in life, and you don’t want to spend a month doing a detailed analysis on something that’s a minor piece of the infrastructure.

So it’s going to be a judgment call. But my advice would be, if you’re talking about a significant purchase decision, and that’s up to you to define, in terms of what percentage of the budget or how strategic it is within your overall IT mission, you want to make sure that you’re doing a thorough job, maybe even going beyond the point of pain, because a lot of this can be tedious. But it pays off in spades when you do it well because then it takes away the whole argument.

That whole warning sign, the “if you don’t trust these numbers, use your own numbers” disclaimer, you can take that away. The way to take that away is getting into the details and documenting not just the assumptions, but also the data that you used as input. The fewer assumptions you use, and the more actual tangible data you can point to, the better. And all of this stuff, imagine it as almost the appendix. The headlines are going to be, going back to our young guru, I’d like to spend this to deliver that, and it’s going to save us XYZ, and we’re going to put that money into the following projects. Those are your headlines.

The details are in the appendix. When people start to say, “Well, why did you assume that you’re going to save 20 percent of labor, and how can you possibly think that you’re going to reduce overhead by 40 percent?” that’s where you flip to the appendix and you go to the details. And if you’ve done that work, it’s not that it can’t be questioned, but your credibility goes up, and the believability, and the likelihood of getting to yes, increases dramatically.

So it might be a long-winded way of answering it, but I usually err on the side of more detail. But it’s a judgment call because you’ve always got competing priorities, of course. Good question.

All right. So there are other pitfalls that you’ve all seen. One thing I often see is people dismissing OPEX so much that they just don’t use it at all, which is a big mistake, and you’ll see why. That’s bullet number one, where people say, “Well, we can’t include soft costs here. Let’s just focus on the hard costs.” It’s almost like the terms are inverted. They’re saying soft costs for something that’s difficult to do. I’d call those hard costs. And let me raise an alarm about labor for a second here.

So when you show a labor cost reduction in a recommendation, it’s because you’re going to get some benefits of automation. Maybe you’re putting more systems or more capacity under the management of one system, and things require less manual effort. That’s the value of technology. That’s the value of IT over time.

What you want to make sure you do is frame that in a way that frees the resources, which are the most precious resource in your environment, to work on more advanced projects. So it’s not about cutting costs, and god forbid, cutting headcount. It’s about freeing those resources that today are spending too much time on a given solution. And they – once those are freed up, we can put those resources into the products that are going to deliver more value for the organization.

That’s really key because I’ve seen it many times, and you’ve probably seen it yourself, where very quickly it looks like a cost-cutting effort. And in certain organizations that might raise alarms. That’s the last thing we’re doing here. What we’re talking about is more efficient use of the funds, and freeing up budget and other resources to go after the things that are going to pay off even bigger. It’s two bangs for one buck, where you’re not only getting the value of the project you’re recommending, but you’re also freeing resources to advance other projects.

It pains me to say this because I am a vendor, and I’m here on behalf of a vendor, DataCore. A credible vendor, I’ll say. But vendors and their models can complicate things. Actually, the presentation I was showing before was derived from a vendor’s model. And vendors have kind of muddied the waters because too often they come to the table with unrealistic proposals, unrealistic visions of what they’re going to deliver in terms of TCO and ROI. And when those are used without question, that goes back to point number one on credibility and undermines everything.

Now, it’s very important, when you’re looking at a new solution, to get as much information and data from the vendors as you can: get the specs, talk to reference customers and get some details. But I must say, even though it pains me, in some cases vendors have given the overall TCO concept a bad name, because people say, “It’s TCO. Forget that. Those can’t be trusted.” And in fact, they can be if they’re done well. Not only can it be trusted, but it can be a fundamental part of your overall game plan.

Another problem is OPEX can become a catchall for things that aren’t truly OPEX. There are a lot of other benefits that can be quantified, that should be quantified, and they’re certainly part of your overall pitch. Things like, we’re going to avoid downtime when we do this, or we’re going to reduce downtime from X to Y. And downtime can very much be put in financial terms; there’s a lot of analysis online for that type of thing. That’s part of the payoff, part of the recommendation. It’s just not strictly OPEX. So you could call it a third category, like other, or maybe even a tradeoff cost or an opportunity cost.

But the problem that I’ve seen a lot of times is when people call that kind of thing OPEX, it sort of undermines the overall credibility of the presentation since the finance team won’t look at that stuff as OPEX. That’s kind of the point.

So why bother at all? Why not go back to this strategy, because it’s so difficult, right? Let’s punt, and let’s just say, hey, look, we must do this, and if we don’t, it’s going to be a disaster, and go back to the pleading argument. I would argue that you still want to be in the other camp, the unstoppable force of these three. And why? Because OPEX accounts for the majority of the TCO for these types of solutions, between 50 and 75 percent. I even saw a stat from Gartner that up to 80 percent of the overall cost, the overall TCO, falls into the operating expenses of the system over time.

And so if it’s ignored – imagine that, if you’re making a recommendation and you’re ignoring 75 percent of the costs, it’s not a credible recommendation. And more than that, it’s something I mentioned a minute ago, which is it’s a golden opportunity to discover funding for new projects, and make the case for new projects.

So I think I’ve made that point as much as possible. Now the question is, okay, what are the best practices for OPEX? And again, there are courses in college and business school, and there are libraries of books written on it, right? But given the time we have today, I would just recommend a few things, and some homework if you’re going to go into it.

You want to get good at calculating it. And it sounds silly, like it’s circular. What’s the best practice? Be good at it. But what I mean by that is really focusing on it and not dismissing it. One thing is to brush up on cost accounting. That’s the process of recording, classifying, analyzing, and, this is the keyword, allocating costs to a process or a solution.

So, you know, I took accounting in college, and then quickly forgot about it the day after the final. But when I came back to doing these things, I realized that cost accounting, which I memorized before the test, did okay on, and then forgot about, is really crucial. It’s the essence of so much of what we’re talking about, and it’s not just in IT. It’s across businesses, where we’ve got these macro cost categories that are shared amongst departments or systems or organizations, and there has to be some allocation method.

And in a lot of cases, the allocation method is terrible. If you have a tape library that’s huge and occupies hundreds of square feet, but it consumes almost no power, you don’t want to use square footage to allocate power, cooling, and bandwidth based on the footprint of a solution, right? It’s sort of an obvious point, but you have to really understand what you’re measuring and find the right proxies for cost allocation.
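As a toy illustration of picking the right proxy, here is a sketch that allocates a shared power bill by measured draw rather than floor space. The device names and every number are hypothetical, chosen only to show the mechanics:

```python
# Toy cost-allocation sketch. All devices and figures are hypothetical.
# The shared power/cooling bill is split by measured draw, not square
# footage -- a big tape library with a tiny draw gets a tiny share.

monthly_power_bill = 30_000.0  # shared datacenter power + cooling cost ($)

# measured average draw per system, in kW
draw_kw = {
    "storage_array": 12.0,
    "tape_library": 1.5,   # huge footprint, almost no power
    "compute_racks": 36.5,
}

total_kw = sum(draw_kw.values())
allocated = {name: monthly_power_bill * kw / total_kw
             for name, kw in draw_kw.items()}

for name, cost in allocated.items():
    print(f"{name}: ${cost:,.0f}/month")
```

The same shape works for any shared cost: pick a proxy that actually tracks consumption (kW for power, ports for bandwidth, tickets or hours for labor) and divide the pool proportionally.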

It’s a deep topic, but that sort of lies at the root of OPEX. Take the time, going back to the question that was asked, to create the baseline. It’s tedious work, especially if you haven’t done it yet, but it pays off. And if this isn’t your sweet spot, there are folks in your organization who have done this type of work. Whoever designed the datacenter figured out the overall datacenter power needs, figured out the UPS requirements, for example. There are whole methodologies for defining the power requirements and then padding them or planning for growth. Taking that stuff that’s done at the macro level, you can bring it down to the individual solution level and get a really solid sense of the power and cooling costs.

And then you’ve got to carefully document labor. One thing I’ve done before, which again is tedious, nobody really likes doing it, is just having people track their time over the course of one or two weeks, and really break it down like a lawyer does, tracking in 15-minute increments where they’re spending their time. It can be so insightful. It’s a hassle to do that type of thing, but once you step back and you have a dataset to analyze, you can really capture some insights on how much time certain things take that might be seen as an afterthought. And then you realize, oh, wow, 30 to 40 percent of somebody’s, Sally’s, time is going into maintaining the system.
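Once those 15-minute entries are collected, the analysis is a few lines. A minimal sketch, with made-up names and numbers, might look like this:

```python
# Hypothetical time-tracking data: 15-minute blocks logged over a
# two-week period, broken down by activity, as described above.
entries = [
    # (person, activity, number of 15-minute blocks)
    ("Sally", "legacy_storage", 46),
    ("Sally", "backup_jobs", 20),
    ("Sally", "other_projects", 62),
]

total_blocks = sum(blocks for _, _, blocks in entries)
storage_blocks = sum(blocks for _, activity, blocks in entries
                     if activity == "legacy_storage")

pct = storage_blocks / total_blocks * 100
print(f"{pct:.0f}% of tracked time went to the legacy storage system")
# -> 36% of tracked time went to the legacy storage system
```

Multiply that percentage by the fully loaded cost of the people involved and you have a defensible, data-backed labor line for the OPEX column, with the raw entries as your appendix material.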

So know that you’re going to get the scrutiny, be prepared, document your assumptions, document your allocation methods. And you know, don’t skip it. So that’s the bottom line with OPEX.

And so now we can tie it all together. So this is the final couple of slides, and then we’ll cover off on software-defined storage, which is one of those things that can help address a lot of this.

But so when you bring it all together, let’s bring back Mr. Wonderful, and his seven rules and make sure we can comply with these things. So the first step is preparing the data. And again, going back to that sort of template where you’re carefully documenting your status quo and all of your assumptions. And you’re going through your existing – your legacy costs, and then the new solutions on a sort of line item basis. And you’re getting to that point where you can highlight the delta, the cost benefits, and get it all the way down to an ROI. So that’s number one, right, and that sort of creates that baseline.

And then you want to frame it. The detail is the stuff that goes in the appendix. When you get to the full recommendation, you want to frame it and set the context, align it with the strategic initiatives of the organization. Paint the picture of how this is going to advance not just the high-level strategic goals of the company or the organization, but also of the IT team. Then show the impacts quantitatively. Pull the data from the analysis you’ve already done. And then be ready with all the details.

And when you’ve done all of this prep work and you put it all together, you can pretty much check it off. But that last step is key. That’s where you’re showing your plans, and you can highlight the risks and tradeoffs, the decisions that you’re implicitly not making by doing this, and then validate all of the underlying data to get to an ROI.

And so you come to the table with that and with your preps, and even a wise and scrupulous business guru like Mr. Wonderful will approve and say yes. And that’s kind of what you’re going for.

So given that, we took you through some of the complexity. And if you go back to the scale that we started with, you know, have we reached that tipping point? This is what’s giving rise to software-defined storage. So I’ll hit this in just a quick slide, and then I’ll turn it over to Augie, the director of project management for DataCore.

But the reason why, the incentive to look at software-defined, goes back to that tipping point. Software-defined storage can be a fundamental underlying architectural change, and it really can significantly impact that cost crisis equation. So start with the definition: software running on commodity hardware that can aggregate storage from lots of systems and create a single pool of shared storage. The diagram shows the traditional model, but you’re basically taking the controller out of the storage devices there in the middle, with the little blue icon, and that creates this shared storage pool.

And this does a lot of things. One is it allows you to easily add new storage into the pool and expand dynamically. It also unlocks silos of storage that might be locked away and gives access to them. So all of these things accumulate over time to give a real TCO advantage. And in fact, DataCore has done a number of surveys, which I’ll show in a second. Because it’s a good alternative, where you’re getting all the value of traditional enterprise storage but you’re able to dramatically impact the TCO, the last couple of years have seen massive growth.

So Gartner is projecting that five years from now, half the global storage capacity will be deployed behind SDS or as part of an SDS solution, versus 15 percent today. And 15 percent today is quite large. I mean, that’s a big part of the market. Over the last few years, it’s grown to 15 percent really rapidly. And one of the major reasons is people are looking for that alternative.

And so when DataCore polled their customers, and this is customer data, they did it with TechValidate, the chart on the left shows almost half, 47 percent, of the customers saw storage spending cut in half or more, compared to what they were doing before. And almost 100 percent, this is an amazing stat, 99 percent, saw a positive ROI within one year. You can measure ROI over time, and in a lot of cases it takes a lot longer to get to breakeven or to a positive ROI. But DataCore is very quick because there are a lot of ways they can impact the costs.

And so, again, thanks, Augie for inviting me in to share a little bit of the background. But let me turn it to you just real quick. We’re just almost over time, but if people can stick around for a minute, we can just cover off on DataCore Software.

Augie: Yeah. Thanks, Tom. I think we are very short on time, so what we’ll do is set up another BrightTALK session and invite the attendees to join. Then we can really dive into this a little bit more and flesh out the meaty parts of the cost savings that we’re talking about here.

So I’ll turn it over to Marisa.

Marisa: Thanks, Augie. Thanks, Tom, for presenting. And thank you all so much for taking the time out of your day to join us. Once again, this webinar will be on-demand with a link to the presentation in the BrightTALK follow-up email.

Lastly, we are looking for ways to improve our upcoming webinars, so please leave us your feedback at the end of the webinar.

And I hope everyone has a Mr. Wonderful day. So take care, and thank you for joining.
