
Hot vs Cold Data: Its Influence On Data Placement Decisions & What to Do About It

Webcast Transcript

Carlos Nieves: Good morning, good afternoon, and even good evening for some of you. Thank you for joining us today. My name is Carlos Nieves, and I’ll be your moderator for this event. I’d like to welcome all of you to today’s presentation, focused on the topic of hot vs. cold data, its influence on data placement decisions, and what to do about it. Also joining us today, our presenter, is Augie Gonzalez. Augie is director of product marketing and one of our technical experts on software-defined storage solutions.

Before we begin today’s presentation, I’d like to go over a few housekeeping points. First, the presentation will include some polling questions, so we encourage you to vote and participate as we put the questions out there. Also, we will host a Q&A session at the end of the presentation, so please enter questions at any time using the questions box. Then at the end of the Q&A we will select the winner of the $200 Amazon gift card raffle. We also included some resources in the attachment section, so feel free to check those out. Also, this presentation will be recorded. We will share it with all the attendees, and it will also be available on-demand. Then finally, don’t forget to rate the presentation and provide us any feedback, which is very valuable to us. With that, I leave you with our technical solutions expert, Augie Gonzalez. Augie?

Augie Gonzalez: Hey, thanks Carlos. So when I look at this picture, it makes me chuckle a bit. It reminds me of the big differences in temperature between spring season here in beautiful Florida and everyone else north of us, but it turns out that dramatic temperature contrast also applies to your data. So let's see how that plays out.

Part of the reason we're even bringing this up is because it affects how much we're spending, and how much of that goes to relatively dormant data that isn't worth the same investment as the hot data we rely on so much. The reason we know this is that we've spent quite a bit of time analyzing telemetry from thousands of sites, and this gives us a really good perspective on the distribution of hot versus cold data in environments like yours.

Take a look at these traces. They're very representative of what we see across the board. If you were to look at the norm across scenarios, this is what you would find on average, and let me walk you through it. The left axis is showing temperature, and you see that big spike: very high peaks on the left, where the green runs up to the very tip of red. That indicates the most active, most frequently accessed data. As you see, it slopes off as you go to the right and down, and it starts to drop off dramatically at about the 20 percent mark. So it's basically saying that the hot data represents about 20% of the overall pool capacity you have access to. That's very significant, because it means only a small fraction of all the data at your disposal is important enough for you to access frequently.

Now, the second chart that you see here, the squiggly line, the kind of orange, yellow, amber lines, those are indicative of age. This is how long the data has been around since its first arrival. There you see the same thing: the data is active, hot, when it first arrives, but within hours it becomes relatively stale and unused. It ages very quickly. I'll touch on that some more when we talk about the half-life of data, but this is just to set the stage for you. This is very, very typical. If you were to do some introspection on your own scenarios, you would see very similar behavior.

The temperature variations are generally described as hot data, meaning a high frequency of use: frequently accessed, on all the time. The second set is usually associated with warm data, where use is more moderate, not as frequent as hot. Then there's the seldom-accessed information, which is treated as cold data. There's actually one more category, called frozen data, and you'll see a bit on that. It doesn't matter whether you're in Fahrenheit or Celsius; it's all the same. The correlation is strong regardless of your orientation.

What's important, though, is that while we mostly concentrate on what's interesting and relevant to us at a certain point in time, if you were able to peek at the entire volume you're confronted with, you would see that hot data makes up a disproportionately small fraction of your overall volume. Warm data is a little chunk as well, but both of those are really just the tip of the iceberg.

Most of the information that you’re keeping around and that you’re spending good money on is cold data. That’s interesting, because we lose sight of that very quickly. We’re worried about running the business and sometimes disregard this at our expense.

To put more concrete numbers on this, you have to look at the average capacity managed and how much that's grown in even the last 2-plus years. In some cases there's been close to a six-fold increase in the number of petabytes that people are keeping. Now, your environment may be smaller than this, but the relative percentage of growth is probably similar, depending on what type of industry you're in. So think about that. In a case where somebody is holding close to 10 petabytes of data, if only maybe 20% of that is active and really important, and you're spending as much to safeguard, protect, and keep current the other 8 petabytes as you are on that active portion, wow, there's a lot of money being chewed up in the wrong place that you could better put somewhere else.

Now, we're going to talk about ways to address that, but first what I want to understand is how many different types of storage devices you currently use. I'm really interested in understanding how you approach this segregation between hot, warm, and cold data. So what I'd like you to do is choose one of these, only one. You may keep everything on the same storage devices regardless of temperature, so basically a single view with no distinction made today. Some of you may be splitting it halfway, one type of storage for active data and another for archive data; I'd like to know that. And those who have been a little more sophisticated and more conscientious about this may be using 3 or more types. Whether you keep that on premises or not will determine whether you choose C or D. Then there may be others doing something in between; if you could give some verbatim answers, we would enjoy that. I'll give you about 10 seconds to answer, pick one of these, and then we'll discuss what the votes look like.

Okay, so here's what the polls are starting to look like. I'll just give you a quick snapshot so we can get on with the show. Roughly 20% are using only 1 storage device type, regardless of temperature. That's interesting. 26% or so are using 2, one for active and one for archive data. Only about 4% are using 3 or more types, all on-premises. And then the balance, about 43%, is a mix of on-premises tiers and cloud storage. So, pretty good. Sounds like a lot of you are doing a reasonable job of splitting out the mix and attending to it properly. That's good; you're already on the right track.

Now let's talk about this half-life. This is something that I picked up from Nucleus Research. They looked across industries to find out how important data was after minutes, after hours, and after days, and they reached some important conclusions. They compared this to the half-life of radioactive materials, where some of those things stay around for a long time and can still hurt you. Well, the opposite tends to be true of the half-life of data. For tactical information, which is the blue curve you see here, within about 30 minutes on average of the data having arrived, it is no longer particularly useful in decision-making. That's curious in one sense, because it's basically saying that I need to make decisions within a few minutes of this information arriving; if I wait beyond that or look at the data sometime after that period, it has really no value to me. You might as well toss it, because it's no longer going to influence how you approach your choices.

This happens with suppliers, for example, when they're trying to do just-in-time inventory management and delivery. That's one place. Some of the scheduling that occurs in the airline world also operates on this order of magnitude. They treat these as very tactical decisions they have to make very spontaneously, so if they don't look at that data right now, or a few minutes from now, it's not going to be particularly helpful. In fact, one of the points they make is that if you waited to print reports on some of this information, it would be obsolete by the time it hit the ink.

Now, there's also operational data. Think of the tactical as that first part: you've got to act on it in real time. Operational is where decisions based on the data span maybe days or a few weeks. What happens there is that the value of the data fades more gradually. In fact, they're saying, and there's a very large standard deviation here, that after roughly 8 hours, only about 30% of the data that sits around for more than 8 hours is going to be valuable for anything other than future planning and some predictive analytics later. It doesn't really affect your operational choices after 8 hours.

Now, the green line, which looks almost like a straight line: again, this is a composite across a wide variation of industries, and part of their point is that it doesn't so much depend on the industry; it's more about which phase of your decision-making you're in. Here they're talking about the strategic side. Strategic decisions are made on perhaps a quarterly or annual basis, and there the half-life declines much more gradually. After something like 56 hours, as I recall, they still feel that 70% of that data is going to be important to help with long-range strategic choices. So that's going to weigh in as part of the criteria you use to decide how much to invest in data: not only the temperature of the data, but also its half-life. If its half-life is long, then obviously there's good reason to spend more on it. If it's of very short duration, then as soon as it's no longer of use you ought to consider putting it in much cheaper storage, even though you want to retain it for future reference and planning.
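To make that half-life framing concrete, here is a minimal sketch that back-solves the implied half-life from the rough figures quoted above, assuming a simple exponential (radioactive-style) decay of value; the exponential form is an assumption of this sketch, not something prescribed by the Nucleus Research work.

```python
import math

def implied_half_life(hours: float, fraction_remaining: float) -> float:
    """Solve 0.5 ** (hours / half_life) == fraction_remaining for half_life."""
    return hours * math.log(0.5) / math.log(fraction_remaining)

# Approximate figures quoted in the talk:
#   operational data: ~30% of its decision value left after ~8 hours
#   strategic data:   ~70% of its decision value left after ~56 hours
print(f"operational half-life ~ {implied_half_life(8, 0.30):.1f} hours")   # ~4.6 hours
print(f"strategic half-life   ~ {implied_half_life(56, 0.70):.1f} hours")  # ~109 hours
```

The takeaway matches the curves: tactical and operational data cool within hours, while strategic data keeps most of its value for days.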

We looked at the iceberg before. This is probably–I’ll click down on that so you can see the relative distribution of that capacity on average across a wide swath of customer environments. You see a small chunk in that order of 20% of the data being hot, warm data representing another maybe twice as much, and then cold data is the biggest part with some then falling into that kind of glacial, frozen data space.

What’s important to keep in mind here is the percent–not the percent, but the actual volume of hot data tends to be fairly constant. That’s based on the sensor inputs, real-time inputs that you’re receiving, transactional information that you’re receiving. Unless your company is growing a lot, that tends to be a pretty finite number. What tends to accumulate is the cold and frozen data. That tends to continue to accumulate over time, so that’s why that iceberg gets so deep and heavy underneath. That’s what we’ve got to figure out: how best to allocate appropriate storage for each of these purposes.

The problem of course is how you go about doing that. I sometimes look at these almost as buckets that are kept separate from each other. We've got some cute videos on this, but in this representation what I want to show you is the classical segmentation of storage to address hot versus warm versus cold and frozen data: think of them in terms of the type of storage you would map each into.

A lot of people you would ask would say, well, my hot data has to be super quick, low latency, so I need to put it in an all-flash array. That's the one on the extreme left. I may use a hybrid storage array, capable of holding some solid state disks and some spinning disks, as my warm repository. For secondary storage, I may use something cheap and deep, a JBOD, for example, or just big spinning drives. Then for everything else, I treat that as an elastic resource that I have in the cloud.

The issue then becomes, first of all, where do you begin? Where do you begin to look at and gauge what data is hot and what isn't? And once you actually determine that, will the conclusion you reached when you first observed the data still hold a few minutes from now? Given that, how often would you probe it, and how long would it take you to do that every time?

Basically what we mentioned in the abstract is that this is not something a human can do. There's no good way to do this with a person in the loop. For those who try, and I think I can see it in the earlier poll, these are very tough measures to take if you're doing any of it manually. You may in fact be causing slowdowns for your users while you're trying to move data from one place to another, and obviously it's a chore for yourselves. We're going to give you some tips as we approach this problem on how to automate that process.

One thing that I’ve heard when I’ve brought this up in the past is people go whoa, wait a minute. Why are you dealing with all these different types of storage and everything? Why not just pick one? Pick one. Maybe use a hybrid–a big, old hybrid array. It’s going to take care of all of that. Well, the reason is it’s super expensive to do that.

If you were to try to put enough capacity on the floor of hybrid arrays to address all of these, especially the cold and frozen data, most people couldn't afford to do it. Only the most affluent organizations are able to. So what we have to do is find more nuanced ways to address the problem, recognizing that it takes a variety of different classes of storage to best apportion capacity.

The reason I say it's so expensive is this, from a Gartner study. They were basically looking at the price differential per terabyte between an SSD array, that is an all-flash array, a hybrid array, and an all-hard-disk array. It comes out to about 6x more expensive per terabyte for an all-flash array. So if you decided, well, I'm going to put everything on all-flash, that's good if you can afford it, but wow, that's a precious premium you're spending on things that are going to be relatively stale. Probably not the best way to approach it financially. Some people might scrutinize it and say, wow, you've got to spend that money on something else. Financially responsible, I think, is how I've heard it described.
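To see what that 6x premium means at scale, here is a back-of-the-envelope sketch in the spirit of the 10-petabyte example earlier. The absolute dollar-per-terabyte figure, the 2x hybrid multiplier, and the 20/30/50 temperature split are illustrative assumptions only; the 6x all-flash ratio is the Gartner figure just mentioned.

```python
# Back-of-the-envelope: everything on all-flash vs. temperature-aware tiering.
# Assumptions (illustrative only): $25/TB baseline for disk, hybrid at ~2x,
# all-flash at ~6x (the Gartner ratio), and a 20% hot / 30% warm / 50% cold split.
TOTAL_TB = 10_000                        # roughly the 10 PB example from earlier
DISK_PER_TB = 25.0                       # hypothetical baseline $/TB
COST_PER_TB = {"flash": 6 * DISK_PER_TB, "hybrid": 2 * DISK_PER_TB, "disk": DISK_PER_TB}
SPLIT = {"flash": 0.20, "hybrid": 0.30, "disk": 0.50}

all_flash_cost = TOTAL_TB * COST_PER_TB["flash"]
tiered_cost = sum(TOTAL_TB * share * COST_PER_TB[tier] for tier, share in SPLIT.items())

print(f"all-flash: ${all_flash_cost:,.0f}")
print(f"tiered:    ${tiered_cost:,.0f}  (~{1 - tiered_cost / all_flash_cost:.0%} less)")
```

Under these assumptions the tiered layout costs roughly 60% less than putting all 10 PB on flash, which is the gap the rest of the talk is about closing automatically.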

The picture that we’d like to leave you with here is how DataCore approaches it from a virtualization standpoint. This is a very simple diagram. It gets across that at the top layer are the consumers of storage–the workloads, whether they’re bare metal, whether they’re virtualized servers, or containerized workloads. All of those can basically draw on pools of capacity that have different tiers that are commensurate with the temperature of the data and the value of that data.

The software in this case is taking that responsibility and using machine learning to understand what is in fact being frequently accessed vs. what’s not and, through artificial intelligence, is then deciding where best to place it in real time as the internal instrumentation and telemetry is telling us what’s going on. So it can respond directly to the patterns that you’re experiencing regardless of whether you’re there watching it or not.

We feel we're the only independent software vendor capable of doing that: that is, capable of not only pooling these diverse storage systems, each purpose-built for its purpose, but also auto-tiering across them. So you can do your competitive research, and this is, I think, where you'll land.

The automated storage tiering is essentially a way to make the best tradeoff, if you think of it that way, between best performance, lowest latency, and the amount you spend on it–the cost. We’ll walk through this more deliberately, but you’ll see how we’re able to dynamically migrate the blocks amongst the classes of storage, and that will be determined by access frequency and can be overridden by your own user preferences, where necessary.
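As a rough mental model of what "determined by access frequency" can look like, here is a minimal sketch of a per-block heat score that rises with each access and decays over time; the decay constant and the structure of the tracker are illustrative assumptions, not a description of DataCore's actual internal algorithm.

```python
import math
import time

class BlockHeat:
    """Toy heat tracker: each access bumps the score, and the score decays over time."""

    def __init__(self, half_life_seconds: float = 3600.0):
        self.decay = math.log(2) / half_life_seconds   # e.g. heat halves every hour
        self.score = 0.0
        self.last_update = time.monotonic()

    def record_access(self, weight: float = 1.0) -> None:
        now = time.monotonic()
        # Decay the existing score for the elapsed time, then add the new access.
        self.score = self.score * math.exp(-self.decay * (now - self.last_update)) + weight
        self.last_update = now

    def current(self) -> float:
        """Heat right now, decayed since the last recorded access."""
        return self.score * math.exp(-self.decay * (time.monotonic() - self.last_update))
```

A block that keeps getting hit holds a high score and stays on the fast tier; one that goes quiet cools off on its own and becomes a candidate for demotion.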

Think of this from a granularity standpoint; it's actually even more interesting. Think of an Oracle or SQL database. If you had control over this manually, you would say, okay, this is an Oracle database. This is all the financial stuff, all the heavy transactions occurring on it, so I'm going to place it on an all-flash array, because that's where we're making all our money. This has to be the most responsive.

Well, what you would find, if you had the instrumentation, is that only a small fraction of that database is actually being hammered. This is the picture I'm showing on the left: there are these red hot zones that are really frequently accessed, some moderately accessed portions of that same database, and then some that are hardly being accessed at all.

What the DataCore software does in this case is not to treat this as one big chunk and say, okay, it's a SQL database, so I've got to put it all on an all-flash array. It's much more intelligent than that, much more refined. It looks in and says, okay, I can carve this database, in effect the volume it sits on, into blocks, and I can make specific assignments and allocations of space for those blocks: the hot blocks I can put on flash, the subset that are only moderately accessed I can put on lower-cost storage, and everything else that's infrequently accessed I'll actually put on maybe a tier 3 array. That will change as the behavior of the users of those databases changes. So it will make these shifts in the allocation and assignment of those blocks accordingly while life goes on, without manual intervention.
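Continuing that toy model, a placement pass could then rank blocks by heat and fill the fastest tier first, spilling cooler blocks down as each tier fills; again, this is only a sketch of the idea, not DataCore's actual placement logic, and the block names and capacities are made up.

```python
def assign_tiers(block_heat: dict, tier_capacity: dict) -> dict:
    """Toy placement: hottest blocks land on tier 1, spilling down as tiers fill up."""
    placement = {}
    ranked = sorted(block_heat, key=block_heat.get, reverse=True)   # hottest first
    tiers = iter(sorted(tier_capacity))                             # 1, 2, 3, ...
    tier = next(tiers)
    free = tier_capacity[tier]
    for block in ranked:
        while free == 0:
            tier = next(tiers)              # this tier is full; spill to the next one
            free = tier_capacity[tier]
        placement[block] = tier
        free -= 1
    return placement

# Example: six blocks, tier 1 holds 2 blocks, tier 2 holds 2, tier 3 holds the rest.
heats = {"b1": 9.2, "b2": 0.1, "b3": 4.7, "b4": 0.0, "b5": 7.8, "b6": 1.3}
print(assign_tiers(heats, {1: 2, 2: 2, 3: 10}))
# {'b1': 1, 'b5': 1, 'b3': 2, 'b6': 2, 'b2': 3, 'b4': 3}
```

Re-running a pass like this as heats change is what block-level migration between tiers amounts to conceptually.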

One of the things you might think of is, well, now we’ve got all this going on here, but how else can we help? So in addition to the auto-tiering occurring by the AI and ML stuff that’s going on underneath the covers, we’re also caching the data. So any burst that we see, we will cache that in RAM close to the application to further accelerate the value of anything that’s coming in and needs a little more help–oomph. Obviously you don’t want to do that for something that is frozen, because you don’t want to tie up cache with that. So those things naturally gravitate to the far right. So, less active data is placed on lower-cost storage and more active is placed on your fastest. That’s the essence of this whole message, what we’re trying to get across here.
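The burst-caching idea can be pictured as a small bounded cache in RAM sitting in front of the tiers; this is a minimal LRU sketch under that assumption, and the capacity and eviction policy here are illustrative, not DataCore's actual cache design.

```python
from collections import OrderedDict

class ReadCache:
    """Tiny LRU read cache: bursts are served from RAM, cold blocks fall out naturally."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()                   # block_id -> data, oldest first

    def read(self, block_id, fetch_from_tier):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)         # hit: refresh recency
            return self.blocks[block_id]
        data = fetch_from_tier(block_id)              # miss: read from whichever tier holds it
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)           # evict the least recently used block
        return data
```

Frozen data never gets re-read, so it never displaces anything useful: exactly the "gravitates to the far right" behavior described above.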

From a visual standpoint, you can keep track of this within the software through a number of dynamic charts and traces that we provide. We do this both in real time and historically, so you can see it. What I'm showing you here is how the fine-tuning occurs automatically, along with the heat maps presented by the software in the console as you look at it, as well as in some of the analytics on the backend.

Each group of rows that you see: the first group is tier 1, showing how much space is being allocated to it and how much of it is being consumed. The next 4 or 5 rows are the secondary tier, and the last group of rows is the third tier in this particular scenario. As the mix of workloads changes, you would see that allocation change as well. It's just a way to keep an eye on it.

You may be wondering how it is that we set these tiers. That's very straightforward. Basically, when you introduce a new storage system into our virtualized pool, you designate what tier you want it to be. Sometimes, even though people have very fast storage, there are, I'll call them, political reasons: we just spent a lot of money on this array, so we treat it as tier 1, even though it may have very similar characteristics to some other tier we have, because we want to give it preferential treatment. So we can explicitly define a tier for a given resource in our pool. In this case, for our all-flash array, we designate it as tier 1. Our hybrid array would be designated as tier 2, and any secondary, bulk-storage type gear we might designate as 3 or 4.

As easy as it is to do this, you can also change it. Let's say that now, in addition to the all-flash array, I want to introduce some direct NVMe storage inside the DataCore nodes that are virtualizing the pool. Those would be extra fast and extra responsive. So I might tag those as tier 1 when they're introduced, and I would slide the all-flash array down to my tier 2. I can do this in the background without affecting, disturbing, or having any downtime for the users. They would simply see the benefit of having an extra-fast tier 1 come into the picture, and the software would automatically do its magic to migrate the blocks there as necessary. The same cascading works going down: if you had some really inexpensive tier 3 or tier 4 storage you wanted to introduce to the pool, you would do the same thing. Then we can basically say, okay, now that's the best place to put that data, and let the software take care of it.
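Conceptually, that re-designation is just an update to the device-to-tier map, after which the normal placement pass migrates blocks in the background; here is a minimal sketch under that assumption, with made-up device names.

```python
# Toy device-to-tier map; re-tiering new or existing hardware is just an update,
# and a placement pass like assign_tiers above then moves blocks accordingly.
device_tier = {"all_flash_array_A": 1, "hybrid_array_B": 2, "jbod_C": 3}

def introduce_device(name: str, tier: int, demote: dict = None) -> None:
    """Register a new device at a tier, optionally sliding existing devices down."""
    for device, new_tier in (demote or {}).items():
        device_tier[device] = new_tier
    device_tier[name] = tier

# New direct NVMe storage becomes tier 1 and the all-flash array slides to tier 2,
# all while the workloads keep running.
introduce_device("nvme_direct_D", 1, demote={"all_flash_array_A": 2})
print(device_tier)
# {'all_flash_array_A': 2, 'hybrid_array_B': 2, 'jbod_C': 3, 'nvme_direct_D': 1}
```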

There are also special circumstances that suggest, and may even dictate, that you override the intelligence going on here, because there may be a short-lived activity that you want to consciously pin, in effect; I think that's one of the terms people use, pin to a tier for a short duration of time. Say at the end of the quarter I need to do some special reporting, even though this workload and these data sets are generally accessed relatively infrequently. What I want to do is take advantage of the fastest possible storage to run this right now, so I can identify certain volumes and change their storage profile explicitly to place them on a given type of storage. We usually see our customers doing this at two extremes. One is they say this is super critical, so let's make sure it gets all the benefits of the fastest stuff; in this case you see on the left the storage profile, Critical. The performance class is set to the maximum, as are the replication priority and the recovery priority, and auto-tiering is disabled. During this period I explicitly call that out.

The second case is something like backups. Even though those blocks are being hit as the backup occurs, I want to explicitly place them on my tier 3 or secondary-type storage, which I know is lower cost. So for any volume that's being used for backups, I override the normal auto-tiering and say please place those on secondary storage, and the software will gladly do that for you. You can intervene as you feel necessary in those special cases.
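The override behavior just described can be thought of as a per-volume profile that takes precedence over whatever the heat-based logic would choose; the profile names below mirror the Critical and backup examples above, but the code is an illustrative sketch rather than DataCore's actual profile engine.

```python
# Toy profile override: an explicit volume profile beats the heat-based tier choice.
PROFILE_TIER = {
    "critical": 1,    # e.g. quarter-end reporting pinned to the fastest tier
    "archive": 3,     # e.g. backup volumes forced onto low-cost secondary storage
}

def effective_tier(volume_profile, heat_based_tier: int) -> int:
    """Explicit profiles win; otherwise the auto-tiering (heat-based) choice stands."""
    return PROFILE_TIER.get(volume_profile, heat_based_tier)

print(effective_tier("critical", 3))   # 1: pinned fast despite relatively cool blocks
print(effective_tier("archive", 1))    # 3: backup volume held on secondary storage
print(effective_tier(None, 2))         # 2: no override, auto-tiering decides
```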

This brings me to our second polling question, and I'm super curious to understand, given the variations you showed me a little while ago, and for those of you who have 2, 3, or more of these: what tools do you use to move data to different storage as it cools, as it goes from very hot to the other temperatures? There are again 4 choices here. The first: I can't do it today; I'm not doing it. I would expect those are the ones who also answered that they use only one storage device type. Some of you may be using host-based copying techniques, where you basically copy the data somewhere and then delete the original. And there may be a few who are using storage migration tools like Storage vMotion. So please give us your impression and see how it compares to everybody else.

Here's what I'm getting as of right now, the technical information coming in that we can use to make some decisions for the rest of the presentation. 33% as of right now are saying they cannot do it today, which is probably one reason you're attending. Another 32% are saying they copy data and then delete the original, and I feel for you; I know how difficult that is. There are about 13% doing some sort of storage migration using something like Storage vMotion, and the balance, about 23%, are using some other technique, which I'd love to hear about. So good, thanks for answering those.

This may be another way. With DataCore, you basically have cross-array auto-tiering. It is the ultimate flexibility. It’s basically saying you can hang whatever type of block storage you have to address the primary, secondary, and tertiary requirements, put it in the pool, and let the software decide where’s best to place it. It works with both existing storage and future storage. We’ve had customers running this for over 10 years. They have obviously brought in and decommissioned quite a bit of gear throughout that process, all of that done non-disruptively.

As I showed you, you can define specifically which of these make up each tier, so you can distinguish between the high-performance, the midrange, and the low-cost. This also turns out to have a benefit in terms of application performance. That is, if you are putting both relatively stale data and hot data on the same storage, what happens is that as the storage arrays get more full, their response time degrades. You're not only compromising the capacity; you're taxing the array with extra work, which reduces its effectiveness in responding in real time. So when you can take those secondary loads off it and put them somewhere else, there's more room and more resources to address your high-performance requirements. And again, in the situations where you might want to override that, you can do so.

There are other areas where we've found these techniques bring valuable ways to approach things. One is when you have different, I would call them, line-of-business preferences. A consumer may have called out to you, "I need a particular supplier to address this requirement, because I've had luck with them in the past," and encouraged the IT department to purchase that for the project. So you do that. What you're able to do here is explicitly assign that storage for the project and then also maybe share the resource with others that have light requirements. This gives you a degree of selection you didn't have before. Then at any point in time you can say, okay, now I'm going to turn it back into the pool and let everybody take advantage of that resource. It doesn't have to be so isolated; in fact, it can be shared where excess capacity exists.

It can also be used to deal with mergers and acquisitions. What we're increasingly running into is companies that, as they acquire other companies and amass the entire IT real estate there, are dealing with multiple variations, multiple suppliers, and multiple models of storage, and they really don't know how to rationalize that or make sense of it. Here's a simple way: basically, put those into common pools and then designate them. Okay, I know this might have been HP over here; I see this might have been some Dell EMC equipment that came out of the M&A. Let's identify them according to tiers, simply hang them in the virtualized pools, and let the software decide where best to place the workloads using these techniques.

It also helps, by the way, when new generations of equipment are put in. So as you have older and new equipment coinciding, those can all be treated the same way. You might find that the older ones are a little long in the tooth. They’re not performing quite as well, even though they’re the same model per se and same badge, but we can designate them as slightly different tiers.

I think from one standpoint, you can look at this based on what’s existing, but just as importantly—as is the case with this customer, Architectural Nexus—is the ability to inject next-generation storage at any tier you choose and can afford without downtime. I think they put it really well, which is with DataCore, it’s never a forklift operation. You can purchase the latest hardware and deploy it where it needs to be, gaining immediate benefits of the latest hardware without losing the investment in existing data infrastructure.

It basically says I can slot in new equipment, shiny new objects that appeal to me and that are budgeted for. I can still take advantage of existing equipment, possibly by downgrading what it’s providing. Then at some point, when its financial life is exhausted, I can take it out of the pool altogether. All of those steps can happen without a moment of downtime.

There are major economic benefits. The reason we're talking about this is because it affects your bottom line. Being able to match your storage spending to the time value of data has to be an essential responsibility we take on going forward, given the volume of data that exists out there and recognizing how much of it is either lukewarm or secondary and does not merit the same kind of spending.

A side benefit, and a very strong one in this case, is that by insulating the consumers from the type of storage you use in this way, through software-defined storage techniques, you also gain negotiating muscle. That is, storage becomes an interchangeable resource, so you can swap out suppliers if the current supplier is not treating you especially well, not doing right by you. Where in the past you were dependent on the special procedures associated with that model and that supplier, here you're once removed from those. You have a uniform set of software-defined storage services whose operations endure whatever you substitute underneath. The same processes you used to provision, to safeguard, and to maximize the use of that storage remain in place. That's going to give you the ability to get the best deal each time going forward, a very important element of the overall structure.

Have a look at a couple of resources. I think Carlos will point out some of these, as well, but there are a few descriptions of this and a video, as well as a white paper that’s one of the attachments that we offer to you. This’ll give you a little bit more narrative and also quantify some of the discussions we had today.

With that, let's take some of the questions from the audience. I'll walk through some of those right now. The first one is, "Can we have different profile sets for different types of data? This is a software engineering company, and they have terabytes of data with different requirements—summer project, growing media, etc." Yes, that's exactly what you're able to do. I can say a profile is critical, I can say it's normal or it's archive, and I can create custom profiles as well to designate the relative value of that data compared to everything else.

Here's another one. "What kind of scenario are you commonly finding for protecting data storage—hot, warm, or cold—and which kinds of companies are using it?" It's across the board. This is not a vertically-specific or industry-specific behavior. What you do see is that for some it's more natural. We work, for example, with a lot of life-critical healthcare organizations, and it's very, very clear to them what is hot, what proportion of that needs special, super-fast storage, which data is aging, and which has really long archive requirements. So there it's more visible; it's a more natural expression of their business. In other cases, not so much, because they haven't looked at their data in quite that way. Hopefully, by understanding what we discussed here today, you start to look at it through those eyes. That'll be important.

Looking through more of the questions here. Okay, "Outside of the GUI, how can I programmatically change tiering attributes?" Interesting. Yes, part of what's going on with many of these scenarios today is that we have to do the orchestration from outside. A person cannot be physically standing in front of a console doing these things and making these choices; that would not be a good use of your time. So there are programmatic ways. We offer a full complement: a REST API through which not only the tiering choices but any of the functions available through the DataCore software can be accessed. For those of you who have more of a PowerShell inclination, we offer that as well.
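As a sketch of what that kind of outside orchestration might look like from a script, here is a minimal REST call that changes a volume's storage profile. The base URL, route, payload field, and credentials are hypothetical placeholders invented for illustration; they are not the actual DataCore REST API schema, so consult the product's REST documentation for the real endpoints and payloads.

```python
import requests

# Hypothetical placeholders only; check the DataCore REST API documentation for
# the real base URL, authentication, routes, and payload fields.
BASE_URL = "https://sds-controller.example.com/api"
AUTH = ("automation-user", "secret")

def set_volume_profile(volume_id: str, profile: str) -> None:
    """Illustrative only: push a new storage profile to a volume over REST."""
    response = requests.put(
        f"{BASE_URL}/volumes/{volume_id}",       # hypothetical route
        json={"storageProfile": profile},        # hypothetical payload field
        auth=AUTH,
        timeout=30,
    )
    response.raise_for_status()

# e.g. pin a reporting volume to a critical profile ahead of quarter-end close
set_volume_profile("vol-finance-01", "critical")
```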

Another question is, "How frequently does the software choose to move blocks between tiers?" This is a dynamic choice that it makes. One of the things the software is basically doing is looking for opportunities when the system isn't so busy; if it is busy, it shouldn't be moving things right then, because that would be a poor use of the resources. So it looks for those openings: okay, I've got a little breathing room here, this is a good time. But it does this on a regular basis, so it's not something you have to schedule or worry that it only happens at certain hours. It is constantly occurring on the resources and volumes you've assigned to this capability.
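One way to picture that opportunistic behavior is a background loop that migrates a small batch only when the system has headroom; the busy threshold, batch size, and polling interval here are illustrative assumptions, not the product's actual scheduling policy.

```python
import time

BUSY_THRESHOLD = 0.70     # illustrative: skip migration above 70% utilization
BATCH_SIZE = 64           # illustrative: blocks moved per quiet interval
POLL_SECONDS = 5

def migration_loop(get_utilization, pending_moves, migrate_block):
    """Opportunistic relocation: move queued blocks only when there is breathing room."""
    while True:
        if pending_moves and get_utilization() < BUSY_THRESHOLD:
            for _ in range(min(BATCH_SIZE, len(pending_moves))):
                migrate_block(pending_moves.pop())   # move one block to its target tier
        time.sleep(POLL_SECONDS)                     # wait, then re-evaluate conditions
```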

"Is DataCore US-based?" It is. We are a worldwide company, however. Wherever you are, we can reach you. [Laughs] And I think that's all the questions that I have. Carlos, do you see anything else?

Carlos: No, I don't see any more questions, Augie. Thank you again for a very educational and informative presentation. I would like to go over a few reminders and encourage everyone again to check the attachment section; we have a white paper there related to this topic. I also encourage everyone to give us some feedback by rating the presenter and the presentation. And another reminder that this presentation has been recorded; we will be sharing it with all attendees, and it will also be available on-demand.

Finally, we are ready to provide the winner of the $200 Amazon gift card raffle. The winner is Mike Carter from Advance Auto Parts. Again, Mike Carter from Advance Auto Parts. We will be reaching out to you shortly with more information and your gift card. With that, Augie, thanks again for presenting. I’d like to thank the audience for attending. You know, keep in touch. We host webinars every month, so until the next one, thank you and have a great day.
