Cisco HyperFlex Hyperconverged Webinar On-Demand
Cisco HyperFlex Webinar
Rachel: Good afternoon everyone and welcome to today's Cisco HyperFlex presentation. My name is Rachel Cuomo and I am a marketing communications supervisor at Continental Resources. Just a few housekeeping items before we begin. If everybody could put themselves on mute at this time, that would be great. We have speakers in different locations, so if nonparticipant speakers could mute themselves we would greatly appreciate it. If you have any questions throughout today's presentation, please enter them into the chat window. I will be monitoring the chat window, and we will have a question and answer session at the end of today's presentation. Now I would like to introduce Allen Latch, Cisco product champion at Continental Resources.
Allen: Thanks Rachel, and thanks everybody for joining. I am going to make the Continental Resources introduction pretty brief so we can get into the meat of the presentation. I know you are here to learn more about HyperFlex than you are to listen to me, but obviously the point of the call, as Rachel mentioned, is to go over some of the things going on in the hyperconverged world, and specifically HyperFlex within Cisco. It is obviously an area that we are putting a big focus on, and I know it is an area that a lot of customers are looking to learn more about. So that is the goal for today. Very briefly, for those of you who don't know Continental Resources: we have been around for about 54 years, so obviously we have been through a lot of different changes in the technology world. We did about $490 million in revenue last year, so we are pretty stable. Over 100 reps, 70 technical people, a woman-owned business, and geographically we are primarily East Coast. We go out to Chicago, but we have national customers as well as some international capabilities, and we have a location in New Hampshire which is our configuration and testing center, where we do a lot of work for customers, everything from white labeling to staging. So there is an awful lot going on with us. If you look at the technical expertise within the company, specifically in the data center and hyperconverged world, we have some very senior people who have been with the company for 10, 20, even 30 years, including one of the gentlemen who is here with me. So a lot of experience across platforms and certainly within this space. With that brief introduction I am going to hand it over to Mike. Let me stop sharing here.
Mike: Alrighty. Thank you Allen, and thank you everyone for taking time out of your busy day and your busy week to come hear about Cisco HyperFlex. Joining me today is one of my talented systems engineers, Rob Bergin. He will be providing a demo near the end of the presentation. By way of introduction, I am a data center product specialist here in New England; I cover territory accounts in every New England state except Connecticut. The goal today is really to give you an introduction to HyperFlex, talk a little bit about the background of the hyperconverged market and how we got to where we are today, and touch on some of the unique aspects that Cisco brings to the table. So without further ado we will get right into it. First I just want to frame the conversation, because a lot of our competitors are, I don't know, one-trick ponies: they have one product and that is it, so every time you talk to them they are going to try to fit you into a hyperconverged solution. At Cisco we are not going to do that. We have a bigger view of data centers in general, and our view is something called ASAP, an abbreviation that stands for analyze, simplify, automate, and protect. Eight years ago Cisco started on a journey, our own digital transformation. We knew we had a very good footprint in the data center and everywhere around network switching, and our engineers started to wonder about the other end of that network cable. Where does it go? What does it connect to? What if we developed those systems as well? Would we be able to build a better system that drives out capital expenses as well as operational expenses? That is what we did with the Unified Computing System, which came out in 2009. Today we are going to spend a lot of time talking about simplify, and really how you simplify with HyperFlex, which is our hyperconverged solution available today.
So if you look at hyperconvergence more generically, it is kind of a buzzword, right? It came around a few years ago. People were doing converged infrastructure; Cisco is actually the number one converged infrastructure provider on the planet, with a lot of different partnerships with storage vendors. But at the end of the day that converged rack is still something you have to manage piece by piece. Hyperconvergence was the idea of making things very, very simple: how can I simplify things to the point where everything is delivered inside one chassis with one management interface, and it is very easy to manage and grow? The first generation of these hyperconverged platforms really thrived on simplicity. They also needed fast time to market to get themselves established. When you put those things together you create gaps, and some of the gaps are OK, but some of the gaps really aren't OK, and I want to talk about that for a second. The big one, and it is a big one for Cisco and should be a big one for everybody, is that a lot of these early hyperconverged offerings did not include the network. So when it comes time to connect the hyperconverged solution you just bought to your network and to other things in your environment, the networking is left up to you, and you have to have the knowledge and the expertise to get everything connected in the right way. So for a lot of these early, first-generation plays the networking complexity was very much amplified. And because of that fast time to market, there really wasn't enough time spent thinking about how you are going to scale your hyperconverged solution.
How do I add capacity? That capacity can be compute or storage or memory or any combination of the above, but the architecture is going to dictate what is allowed to scale and what isn't, and some of the earlier products don't allow you independent scaling of all those dimensions. And lastly, a lot of these solutions came out with their own management solution, meaning you had to learn a new set of tools to run the product. If you are willing to invest the time that is great, but if you already have enough on your plate you might not want to spend the cycles to learn a one-off management platform when you can just use what you are already using: vCenter, Microsoft System Center, etc. That is where we come in. Our next-generation hyperconverged solution, HyperFlex, offers a lot of better answers, and one of them is around agility. We coexist with your existing data center. We integrate with your existing tools, so your operation is very simplified and very efficient. Everything comes preintegrated. As far as capacity planning, we did the heavy math for you. When you buy HyperFlex you simply tell us the capacity and the amount of [inaudible] you need, and you get something preintegrated that matches that capacity. You don't have to figure out the right ratio of CPU to disk, or the right ratio of solid state disks, for fast writes, to traditional hard disk drives, which are less expensive. We do that for you. We automate the deployment. We give you the ability to adapt and scale any dimension independently of the others. So it is definitely a better answer, and this is our HyperFlex system. Very simply, what is it? It is complete hyperconvergence, which includes the network. It can be delivered as rack mount only, or rack mount plus blades for additional compute capacity, and we add to it something called the Cisco HX Data Platform, effectively the secret sauce of what makes HyperFlex work.
It does all the data replication, data deduplication, and data striping. It handles the data services and the optimization of the storage for you. So that, if you will, is the software secret sauce, and most importantly, as I mentioned earlier, it is not just a standalone island product. It is part of a broader strategy which includes analysis tools like Tetration, management tools such as UCS Director, HPI, and CloudCenter, and the whole Cisco security portfolio. So what does it look like? Very simply, it is rack mount servers. You start off with what we call hybrid nodes; we have also introduced an option for all-flash nodes, but they are essentially a 1U or 2U two-socket server with the HX software layered in, preinstalled, and you start with as few as 3 nodes and grow to 8 in the cluster. It is a turnkey appliance. It includes vSphere preinstalled; the license can either be purchased from us with the product, or if you already have an ELA or licenses we can ship it preinstalled and you just use one of your license keys. So you have a lot of flexible choices there. You buy the hardware once, and annually you pay for hardware support as well as your software subscription; the HX software is sold in one-year, three-year, and five-year terms. We deliver today the highest flash density of anyone in the industry, and when you pair that with our 40-gig UCS fabric technology, which is another part of HyperFlex, you end up with a cluster that gives you the highest performance, highest IOPS, and lowest latency of anything out there. This is an important point to stress: a lot of our competitors in this space either refuse to publish performance data or only publish the benchmarks that they like.
We recently conducted a study, and we were not only top of mind at every major corporation in America, we also had the highest performance of anybody out there, and that is an important thing to investigate when you are looking at hyperconvergence solutions. So again, it is nodes, 1U and 2U two-socket servers, plus UCS fabric. If you are already a UCS customer you may already have the fabric; if not, you get the fabric when you buy HyperFlex. So the network is integrated, it is preset up, and it uses policy-based management. The whole system is automated end to end, so it is very, very easy to set up, and you will see that in the demo today. So what does the Data Platform look like? This is the software that runs on top of these servers. It is a distributed, object-based file system, architected for rapid scale-out and distributed storage, with all of the advanced data services you would expect built into the file system architecture. This file system was written by the same team of engineers that wrote the original VMware distributed file system, so it has a nice heritage and a nice lineage there. It is also future ready: it is designed for next-generation apps. A lot of people are looking at deploying containers, or are deploying containers today; we are ready for that, and we are also looking at bare metal. As for flash, our hybrid nodes have a mixture of flash and spinning disks. The idea is you do all your writing to the SSDs and then that data gets migrated to the less expensive spinning disks, but the net effect is it feels like you are writing to SSD all the time. So it is blazing fast. Everything is pooled, which maximizes your IOPS and throughput and delivers very low latency. With our all-flash nodes we can now include database apps as part of the target for what we do. Some of the benefits: independent scaling, which we will get into in a second, and a scale-out architecture where you basically pay as you grow.
We give you a lot of enterprise storage features built in, things you would pay a lot of money for from other vendors: pointer-based snapshots, instantaneous cloning, and inline dedupe and compression that is always on; it happens as the data gets written. And we integrate into vCenter. I can't stress this enough: there is no learning curve here. If you are already a VMware shop and you are already using vCenter, you already know how to use HyperFlex. HyperFlex will just show up as a pool of resources, CPU resources in one window and disk resources in another. We give you reporting and analytics, and if you choose to use UCS Director you can orchestrate your HyperFlex along with everything else in your data center. So it is very well integrated. For data protection there is a highly available, self-healing architecture at the software layer. There is an easy single-button upgrade if you need to do an upgrade, and you have the ability to turn on proactive call home. We also offer onsite 24/7 support with 2-hour or 4-hour response; you get to pick that. So let's look inside the Data Platform nodes. What do you have? At the lowest level, on the hardware, you have a collection of solid state drives, as well as hard disk drives if you have a hybrid node. The data store layer is on top of that, and above that is the hypervisor with a controller VM. There is a virtual machine on every node that controls that node: it runs the IOVisor and it runs the VMware storage APIs, so that is integrated in and it is VMware aware. All your VMs sit on top. This makes it very easy to move data around. As you implement, you are going to have at least 3 nodes, so think of it as a distributed data platform. Each node has its own controller VM. These controller VMs speak to each other over the network, and as data is written it is written everywhere; it is striped automatically. All of your policy settings go along with it.
So if you need to add a server you simply plug it in, it gets discovered, and data starts to move. As time goes on you might add another server, and further on you might decide to retire an older server; when you do that, the data has already been replicated, so you simply unplug the server. It is very, very easy to scale out and remove old servers. Now, in a typical hyperconverged system you can get hot spots, because all the controllers have to speak privately over a separate communication bus. The way our dynamic data distribution is designed, when a write occurs, the write occurs simultaneously across SSDs on multiple nodes. So you get the fastest write time whether you are running a hybrid node or an all-SSD node, and you get automatic balancing of space utilization. If you were to move a VM from a node on the far left to the node on the far right, you wouldn't have to migrate the data, because the data is already distributed. So you get a lot of wins just in the way the system is architected. This is where we differ from some of our competitors, and it is a good question to ask them: how do you move the data when a VM moves? In our case nothing has to happen; it is already moved. This makes scaling simpler as well, because in our platform's architecture what the application sees is a pool, or a group of pools: a pool of CPU, a pool of memory, a pool of write cache, and a pool of SSDs, and effectively they just write to the pool or consume from the pool. So it is very, very easy to expand, contract, grow, etc. How do we do this independent scaling of compute in particular? When you add a whole node you are adding compute, you are adding memory, you are adding storage. But we also give you the ability to add compute-only blades or compute-only rack servers, where you may not have a lot of disks populating the servers but you do have a lot of CPU.
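To make the distributed-write idea concrete, here is a toy sketch of replication-factor placement. This is purely illustrative Python, not HyperFlex code; the node names, block IDs, and hash-based placement scheme are all assumptions made for the example.

```python
import hashlib

REPLICATION_FACTOR = 3  # each block is written to 3 distinct nodes

def placement(block_id: str, nodes: list) -> list:
    """Deterministically pick REPLICATION_FACTOR distinct nodes for a block."""
    start = int(hashlib.md5(block_id.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(REPLICATION_FACTOR)]

nodes = ["node1", "node2", "node3", "node4"]
replicas = placement("vm42-block-007", nodes)
print(replicas)
# Because every block already lives on several nodes, moving a VM between
# hosts needs no data migration, and retiring a node is safe once its
# blocks have replicas elsewhere.
```

The point of the sketch is the property Mike calls out: placement is a function of the data, not of which host happens to run the VM, so a VM move does not imply a data move.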
For example, you might have 4 nodes that are data heavy, meaning 24 disk drives in each, and then you might add blades that simply have boot drives but a lot of compute capacity, because they have multiple cores per blade times 8 blades. It is very easy to do it that way: independent scaling of compute from the rest of the storage. So what does this look like, really? Hybrid on the bottom: you can build it up out of 220s or 240s depending on your capacity needs, and to that you can add compute-only nodes, B200s or straight-up C220s and C240s. You maximize your TCO this way just by picking what you need, and you maximize your IOPS by picking all flash. You can minimize cost per VDI desktop by growing exactly the compute or storage or [inaudible]. We like to call this our adaptive infrastructure, and the idea is: I want to be able to deploy intelligently and flexibly as needed, I want to be able to scale independently, which we just showed you, and I want to be able to shift my resources around as my operational demands change. What is unique, and probably the most important thing to take away from today's discussion, is that HyperFlex integrates into your existing infrastructure. What that means, very simply, is that I can take whatever SAN or NAS device I have today and connect it directly to HyperFlex, and in a similar way, if I have compute, racks, or blades already on my SAN, I can simply add HyperFlex to that. That means you don't have to plan a data migration to go from yesterday's system to today's new HyperFlex. Instead you can go VM by VM and migrate the data as needed. This is huge, because it allows you to keep the old operating model in place while the new operating model is being deployed, giving you a bridge to the past if you need it and, of course, a way out to the future, which is what everybody wants.
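The converged-plus-compute-only mix described above is really just cluster arithmetic. Here is a hypothetical sizing model in Python; the core, RAM, and disk numbers are made up for illustration and are not actual HyperFlex node specs.

```python
from dataclasses import dataclass

@dataclass
class Node:
    cores: int
    ram_gb: int
    raw_tb: float  # 0 for compute-only blades (boot drives only)

def cluster_totals(nodes):
    """Sum each resource dimension independently across the cluster."""
    return (sum(n.cores for n in nodes),
            sum(n.ram_gb for n in nodes),
            sum(n.raw_tb for n in nodes))

# Four data-heavy converged nodes plus eight compute-only blades:
converged = [Node(cores=28, ram_gb=512, raw_tb=24.0) for _ in range(4)]
blades = [Node(cores=36, ram_gb=512, raw_tb=0.0) for _ in range(8)]

cores, ram_gb, raw_tb = cluster_totals(converged + blades)
print(cores, ram_gb, raw_tb)  # 400 6144 96.0
```

Adding the blades tripled the core count without adding a single disk, which is the "independent scaling" claim in miniature.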
Above that you have the common network fabric that connects HyperFlex to whatever you already have; that is the fabric interconnects. You can also share some of this fabric with other nodes and use converged infrastructure plays like Vblocks, FlexPods, VSPEX, etc. You can also use UCS Director to manage all of it, not just HyperFlex but anything else that you have in your data center, and if you are entertaining moving to the cloud, CloudCenter will actually let you move those workloads back and forth between HyperFlex and any cloud: any public cloud, a private cloud, or whatever you have on your infrastructure. So it is pretty powerful, because it is all encompassing. The other thing to keep in mind with intelligent automation is that our intelligent configuration approach includes networking. I mentioned this before, but here is where it is really important: when you go to deploy something based on a profile, that profile includes the compute needs, the storage needs, and more importantly the network connectivity, the VLAN or VSAN needs that might be present. You do it through a software user interface; we have a wizard, essentially, you install and away you go. So when it comes time to scale, how do you do the scaling? I am a huge Spinal Tap fan, so I am happy to see that all my knobs here go up to 11. If you start off with a certain configuration consisting of HyperFlex, plain UCS racks or UCS blades, GPU cards, SANs, and we have a C3260 high-capacity storage server as well, you can scale any of those up using UCS Director and what is in HyperFlex. You can scale it back down, but essentially you don't have to buy boxes when all you need is more disk or more CPU or more whatever it ends up being. So, back to the big picture: Cisco started our journey to servers with UCS. We are in our fourth generation; this would be our fifth generation.
For mainstream computing we have rack and blade, and we have UCS Mini for ROBO and small offices. We have a lot of converged infrastructure plays: VCE, EMC, Nimble, Hitachi, IBM, NetApp, and then in the middle here we have HyperFlex. So for the people that want hyperconverged, very simple and very small, we have that, and it can grow to hundreds of terabytes. We also do software-defined storage on top of our C3000 series and C240, with things like Veeam and Commvault and Ceph and all the big data plays. So that is all in the realm, and then above it all is UCS Manager. UCS Manager manages everything server related, CloudCenter lets you move workloads on or off of the cloud, and ACI basically lets you drive your application needs directly into the switch instead of configuring VLANs and ports manually. So, back to HyperFlex. What do I go out and buy when I decide I want hyperconverged and I want to go with Cisco? Well, we have a variety of clusters. We have hybrid clusters that are appropriate for remote office/branch office, based on the 220: you are looking at three 1U boxes and a pair of 1U switches, or in some cases one 1U switch, and that will get you from 4.5 terabytes to 9 terabytes. What is important to keep in mind with these capacities is that your raw capacity is actually much higher, but because of the way we are striping data, the usable capacity is smaller, because you are replicating the data. So at the low end, 4.5 to 9 terabytes, think 4 rack units, a straight-up HX220c cluster. At 5 rack units you can go to 16 terabytes. When you get into the 240s you are looking at heavier capacities, and again these go from 3 to 8 nodes, from 6 terabytes to 61 terabytes. Then we get into hybrid, and hybrid is basically a mix of disk and SSD and, wait, no, I am confused; here hybrid means adding compute blades to this. Yeah. Then you get into all flash: with all flash we replace the spinning disks with all SSDs.
So in the previous slide we were talking about the hybrid configuration, a mixture of SSD and hard drive, and the reason we do that is to drive cost down. You are getting the feel and the latency of writing to SSD all the time, but in fact some of the data is being stored on spinning disks. But some customers want the ultimate in read and write performance, and that is when we go to the all-flash configs. The all-flash configs are a tad more expensive, but the capacities go way up because we are using a 3.8-terabyte SSD. So now you are looking at 4 to 51 terabytes in a 220 footprint, and in a 240 footprint you are going from 4 terabytes to 85 terabytes, and in a hybrid you can add compute; as always those would be B200s. This next slide is kind of an eye chart, a slide worth saving. It compares our all-flash products on the bottom with our hybrid products on top, and it shows you, for a replication factor of 3, meaning your data is written once but actually written to drives on 3 different nodes, giving you the best redundancy and failure coverage, not the raw storage capacities but the usable storage capacities depending on how many nodes you have. So with 3 nodes you are looking at 6 terabytes, and with 8 nodes, 16, on the 220c. If you go to an all-flash 220 you can get up to 51 terabytes, which is pretty incredible. On the 240 you are looking at close to 86 terabytes usable. So when it comes time to make a decision and you want to work on a configuration, by all means reach out to Continental Resources. You can reach out to us as well. We can help navigate and guide you to the right configuration for your needs, and we have all sorts of tools online and available to our customers to help with the sizing. With that I was going to hand it over to Rob. Rob, do you want to do a little demo of some of the things we just talked about?
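The raw-versus-usable relationship in those capacity numbers follows directly from the replication factor. A rough back-of-the-envelope helper (it ignores the metadata and cache overhead that real sizing tools account for):

```python
def usable_tb(raw_per_node_tb: float, node_count: int,
              replication_factor: int = 3) -> float:
    """Every block is stored replication_factor times across the cluster,
    so usable capacity is roughly total raw divided by that factor."""
    return raw_per_node_tb * node_count / replication_factor

# Matches the ballpark quoted for the hybrid 220c at RF3:
print(usable_tb(6.0, 3))  # 6.0 TB usable on 3 nodes
print(usable_tb(6.0, 8))  # 16.0 TB usable on 8 nodes
```

The 6 TB raw-per-node figure here is an assumption chosen so the output lines up with the 3-node/6 TB and 8-node/16 TB numbers quoted in the talk.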
Rob: Yeah for sure. Can you guys hear me ok?
Mike: Yeah, we can hear you now.
Rob: Great. So the easiest thing, I think, when we look at how to administer HyperFlex: typically we do everything right from inside vCenter. The majority of folks today are used to vCenter; they are managing their hosts and clusters. What we have done is build a plugin that is native to vCenter, so you can go right in, simply grab the cluster, and provision your storage right from here. So we grab our cluster and take a look at what it is. This is going to be a 4-node cluster. I see the cluster is online, I get my stats on dedupe performance, and on the right-hand side I get the big three indicators, which are typically latency, throughput, and IOPS. If I go in and say I want to monitor, and take a look at the data this way, I can chart how my IOPS are performing. If I decide I want to create a datastore, it is as easy as going into my datastore section; we will just go ahead and create one here as a sample datastore. So, HX03, and I will make the datastore, say, 500 gigs. Once that is created, over here in the thick client you will see the datastore got mounted, and if I go and look at my host I will see that HX03 is now available storage that I can go ahead and use. What is great about this is, if I look at the storage from the perspective of, OK, it is 500 gigs, and I want to go back in and make it bigger or make it smaller: these aren't block allocations of disk. This is a smart distributed file system. So I can go back into HX03 and edit it; I can make it smaller, go to 250, or make it bigger, go to 2 terabytes, and once that happens there is no destruction of the datastore, nothing has to get recreated, and within a second or two that storage has been allocated.
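The instant resize Rob shows works because a datastore's size in a distributed file system like this is a logical quota over the shared pool, not a carved-out block allocation. Here is a toy model of that idea; it is illustrative only, and nothing in it is the actual HX implementation.

```python
class DatastorePool:
    """Toy model: datastores are thin quotas over one shared pool, so a
    resize is just a metadata update; no blocks move, nothing is recreated."""

    def __init__(self, pool_tb: float):
        self.pool_tb = pool_tb
        self.quotas = {}  # datastore name -> provisioned size in TB

    def create(self, name: str, size_tb: float):
        self.quotas[name] = size_tb

    def resize(self, name: str, new_size_tb: float):
        self.quotas[name] = new_size_tb  # effectively instant

pool = DatastorePool(pool_tb=16.0)
pool.create("HX03", 0.5)    # 500 GB, as in the demo
pool.resize("HX03", 2.0)    # grow to 2 TB in one metadata change
print(pool.quotas["HX03"])  # 2.0
```

Contrast this with a system that carves out fixed block ranges per datastore, where growing or shrinking would mean reallocating and copying data.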
So if I take a look at this from a how-does-a-VM-perform perspective, we talk a lot about the performance layer, so I have, this is our new UI, which will give you a little bit of a better view of the IOPS, the throughput, and the latency, but let's go ahead and do a quick little demo: we will clone some VMs. One of the big things that you do in a virtual environment is a lot of cloning; a lot of VDI environments and UAT-type environments are going to have you create 50-60 VMs. Inside vSphere you still have the traditional vSphere clone and the traditional vSphere snapshot manager, but down here, because we have our own plugin, we actually coded in a new snapshot that is HX-aware, and also a clone engine that is HX-aware. So here I can go ahead and create, we will create 20 VMs, we will call them VDI test, and start numbering them at 10. Once I am done and I fire off those VMs, you will see the VMs get created relatively quickly; if I go look at my thick client you will see that basically I am going to spin up those VMs relatively quickly. So that was 20 virtual machines created in maybe under a minute, and they are very space efficient: they use a sort of delta-based cloning, so you are not going to eat up a lot of space with these; as the users start to use these VMs they will start storing their data in that delta file. So if I go look at what it took to do the cloning, let's just change the view in here; you will see my IOPS and such go up. Let's go ahead and fire those VMs up; we will reboot them all. So I grab this view in the cluster, I am going to sort for VDI, and we will just power down a couple. If I power down all these VMs, that would normally be the equivalent of, say, a boot storm. You know, somebody had to go in and patch a bunch of virtual machines, and maybe they did it in the middle of the day, or they wanted to do it at a certain time of day.
Typically what will happen with these sorts of boot events is you will see the IOPS go up, and then you will see the latency kind of go with it. So here we are going to see that spike: there is that IO spike of about 2,000 IOPS for about 20 VMs, so roughly 100 IOPS per virtual machine. You see that bandwidth spike too; we are seeing some traffic on the throughput. We generated some pretty good throughput through the system, about 64 megs, but that latency didn't really get flustered, right? So here we saw a pretty decent amount of traffic, up to 18,000 IOPS booting those 20 VMs, and again that throughput went up to 462 megs per second on the read side, and we just saw latency creep a little bit there, but not too, too bad. Read latency is still under a millisecond, write latency is right around 6.84 milliseconds, and then it drops back down again. So this is a good example of when people talk about hyperconverged: a lot of times they are trying to do consolidation efforts, pack a lot of virtual machines onto that platform. They are on their generation-one virtualization, or their generation-two virtualization; maybe it was, heaven forbid, 32-64 gigs of RAM, then they went to 128 or 256, and now they are on their third incarnation, so they want to do 512 gigs of RAM per host, or 768 gigs of RAM per host. To do that you are going to have a lot of virtual machines, a lot of density in there, and as a result we try to tell people that you need a high-performing file system to make sure that your IO can handle it. So if you have more density on your workloads than you have had in a while, this is an example; we will go in and reboot a couple more.
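The boot-storm numbers above reduce to simple per-VM arithmetic; here is a quick check of the figures quoted in the demo (the per-VM read-throughput line is derived from those quoted totals, not stated in the talk):

```python
def per_vm(total: float, vm_count: int) -> float:
    """Average a cluster-wide metric over the VMs generating it."""
    return total / vm_count

# The initial spike: about 2,000 IOPS across 20 booting VMs
print(per_vm(2000, 20))  # 100.0 IOPS per VM, as quoted

# Read throughput at the peak: 462 MB/s across the same 20 VMs
print(round(per_vm(462, 20), 1))  # 23.1 MB/s per VM
```

These are averages over the whole spike; individual VMs will burst above and below them during a real boot storm.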
So if we reboot these guys again, just to generate some IO: that density of 20-30 VMs on a host really requires what I would call a high-performing hyperconverged architecture, and here, with those 10-gig backplanes, we are basically connecting to the fabric interconnects, right? If we look at it from a UCS perspective, I have these two very high-speed 10-gig fabric interconnects, these are 6296s, and then I am connecting my HyperFlex nodes to them.
Mike: So that is the UCS Manager view, right? And Rob, you will use UCS Manager occasionally, like during setup, and maybe when you want to probe a little deeper into the hardware?
Mike: So for the most part…
Rob: That is exactly it, Mike. UCS Manager is my hardware layer, and the vCenter plugin is my HyperFlex layer. So, helpful, guys? That is a good overview of how we basically manage a cluster using vCenter. And on my second set of reboots the throughput went up great; we have the data in cache now, so on that second round of rebooting my latency is still sub-2 milliseconds, and that is on a hybrid file system. If I look at my servers in the cluster, all of my reads and writes are being serviced by that front-end SSD. That is the hybrid architecture; that is what is able to run a bunch of VMs with really little latency, and if a customer wanted the all-flash experience, these same servers can go into an all-flash configuration. Mike, anything else you want me to show them?
Mike: No, that was really good. You got the main point across, which is: if you are already a VMware environment and you are already using vCenter, you already know how to use HyperFlex. It is just going to show up as a set of tools inside of vCenter, and the little bit of configuration that needs to occur in UCS Manager, either ConRes will do that for you or you can do it yourself by following a script. It should take an hour or less to do your initial setup. So it is very much a next-generation approach to hyperconverged. What I am going to do is go back a few slides to close out with a few thoughts, and then we will open it up to Q&A. How does that sound? Good? I guess everybody is muted, so I am going to assume it is good. OK, let's see, am I sharing my screen? Yes, I am. OK, so we talked about capacity. I wanted to go back to this complete data center strategy, because it is important to realize that we are not just about hyperconverged. I mean, it is a great product and it is selling like crazy; we have over 1,200 customers worldwide, and it is one of the fastest products to ramp up next to our own UCS, which has 60,000 customers worldwide. But the important thing is that it is not an exclusive strategy. If you go HyperFlex you don't have to get rid of everything else; instead you can plug it in. One of the most important things we keep hearing about from customers is the desire to take sporadic workloads and move them to the cloud, where it might be less costly to run them, and our view on cloud over the years is very simple: if you have a workload that is going to be used 24/7, 80% utilized or better, keep it on prem; it is going to be cheaper. But if you have a workload that you spin up once a month for a day, or every day for 5 minutes, that is the kind of stuff you put in the cloud. That is what CloudCenter does, and you can use it with HyperFlex.
CloudCenter allows you to model your application, and that application can be anything. CloudCenter actually comes with some of the more popular containerization packages and applications already built in, already pre-modeled, and if yours isn't there you can build the model yourself. You model it, you build a workflow for your application set, and then you look at deployment. When you look at deployment, CloudCenter will show you a chart of the predicted cost: what it will cost to deploy in your own private cloud, in your own data center or a colo, and then at the public cloud. So at the point of deployment you can decide where you want to deploy it. Now, you might deploy to the public cloud first and then watch whether your cost runs high or low. If it runs high, with CloudCenter you are quickly and easily able to move it back to your own data center. So for anybody who is straddling that fence, hearing all about cloud and how great it is, take a look at CloudCenter in addition to HyperFlex, because HyperFlex clearly will drive cost out of the data center, but CloudCenter will allow you to control that cost as you deploy and manage existing workloads. That is a huge win, and again it speaks to the fact that HyperFlex isn't just a point product living by itself; it is part of an overall data center suite. The other thing is UCS Director. I hate the name; I will start right there. It should not be called UCS Director, because it directs much more than UCS. It should be called data center director or data domain director, because it really manages everything. It will manage UCS, it will manage HyperFlex, it will manage EMC storage arrays like VMAX, it will manage NetApp FAS appliances and arrays, and it will manage FlexPods. That is actually why it came into being, specifically to manage FlexPods and Vblocks. UCS Director is part of our Enterprise Cloud Suite, ECS for short.
You can manage essentially HyperFlex and anything else in your environment, and do it in a way that is application-workload driven. So it drives out OpEx, drives out CapEx, makes things policy-based and secure, and you can see things that are instantly out of the norm. It is a great product, something worth looking at. The other area, back to the Cisco roots, is pure networking. We have had a lot of success with Catalyst network products and Nexus network products. Our newest generation of Nexus, the Nexus 9000 series, runs NX-OS, which is the traditional Nexus operating system. But it also runs something called ACI, the Application Centric Infrastructure. It is our version of software-defined networking, but it is built from the hardware up. One of the things we are quick to point out to people is: don't think for a second that buying a third-party software layer to manage all of your network devices, and laying it on top of some random collection of network devices, is going to be as good or as smooth as buying it all integrated as a stack from one vendor. We are open; we will support other switches moving forward, or other products like F5. But for the most efficiency, going with ACI on top of a Nexus 9K allows you to define your application workflow and push it out. That means we can control all the network flows on HyperFlex, into the data center, and into the cloud. We also instrument everything, which allows us to do deep packet inspection, run forensics, and make sure that you are compliant with whatever policies you need to be compliant with: HIPAA and so on, or PCI if you are in the commercial retail space. So it is a great story, and it works with HyperFlex. It actually works with NSX too. A lot of VMware customers have NSX on their ELA and don't realize it. Putting NSX together with ACI gives them a better level of security and control than what they would get with NSX alone. Then lastly, replication, backup, and data protection.
I mentioned the C3260; you can run Veeam on top of that and use one or two of our C3260 dense storage servers to do that backup for you. So it is very easy to use if you are already a Veeam customer. Around the world we have had a lot of success in all industries: financial services, pharmaceutical, healthcare, you name it. What we are seeing is that in places where they don't want to spend a lot of money on IT in terms of OpEx, HyperFlex is very well received, because it instantly drives out operational complexity and drives out the cases where human error could bring something down. You are automating, you are recording, you are scripting; it is all built in there. So, some of the cases where you might apply it in your environment: clearly VDI. If you are looking at any type of virtual desktop infrastructure project moving forward, this is one of the greatest platforms for VDI because of its predictability, small footprint, and scale. Straight-up server virtualization in the data center: we have all been there, we have all virtualized servers, but now if you hyperconverge it, it gets even better. We have a lot of software developers who are looking at how to spin up test and development environments quickly, then tear them down quickly and reuse those components; it is a great platform for that. Remote branch offices, where you have zero IT staff on site: this product is very easy to use because, being based on UCS, it has the Smart Call Home feature, it has predictive failure analysis, it has all of that built in. In one hour we don't have time to get into all the details, but these are just some of the things that HyperFlex brings to the table, and now with the introduction of the all-flash nodes it is a suitable platform for larger databases as well. So, just to summarize: we have the best TCO in the industry.
Don't take my word for that; challenge us. Ask Conres, they will bring us in, and we can show you how we calculate that TCO. It is by far the easiest platform out there for setup and ongoing management and administration. As Rob showed you, we use tools that you already know, and that greatly simplifies the management of compute and storage, but also networking, because of Cisco. You are going to lower your capital cost because you can buy just what you need; you don't have to over-buy, and you can leverage cloud economics. So you could build your own cloud. When people say, "Hey, I want to go cloud," say, "Right here. I have it on-prem." And with CloudCenter, as I briefly showed you, you can burst out to those other public clouds as well, while still maintaining IP addresses, security settings, control, compliance, all of that, by using components of the complete Cisco data center stack. So just to sum up: we are a premier next-generation hyperconverged platform. I can't talk about the TCO enough; we definitely get you there, and it is complete. It includes networking, it is engineered on top of the very successful UCS platform that has been out for 8 years, it is very adaptive, and it eliminates silos. So with that, thank you, and we will open it up to any questions that you may have.
Rachel: Alright. Thank you, Mike, and thank you, Rob. We have had a few questions come in. The first is: do you include backup with your solution, and if not, how do you integrate with other solutions and handle backups?
Mike: I will take that and then Rob can fill in. Currently we don't include backup. You can use the standard VMware backup product, and we integrate very well with Veeam and Commvault. On our roadmap there is a plan to include our own version of backup. Rob, did you want to add anything?
Rob: Yeah, I would say that is a great summary. A lot of hyperconverged architectures are going to have snapshot technology inside them: the ability to do local, in-place snapshots and have them be very space-efficient so they don't consume a lot of space. Then, for example, Veeam leverages our snapshots to make their backup architecture even better. So leveraging the existing backup ecosystem is typically what most customers want; what they tell us is, "I am already using a technology for backup and I don't want to change it." We have seen some hyperconverged architectures that do have some integration with backup and replication; sometimes it is just souped-up snapshot technology, sometimes it is more formal. We felt the infrastructure was what we were bringing to the market, and partnering with backup vendors, or supporting backup vendors, was where the interoperability would be for us.
Rachel: Thank you. The next question is: is there a timeframe for Hyper-V support?
Rob: Yeah, everyone asks about Hyper-V. We have been pushing to get it done in the next, I would say, 6 to 12 months, though I don't want to steal anybody's announcement if there is one on the roadmap. Through Conres you can definitely do a nondisclosure agreement with Cisco, and we can give you more formal roadmap dates for features and that type of thing. So talk to your Conres account executive for an NDA roadmap session. But I will tell you that support for hypervisors other than vSphere is definitely one of our top one or two requests. I can give you an idea that those are coming, but I can't speak to the ETA without the NDA in place.
Rachel: OK. The last question we got through the chat is: for the small cluster at 9 terabytes usable, what would be a rough estimate of effective capacity?
Rob: That is a great question. With a 9-terabyte cluster, we say the cluster has 9 terabytes raw. We do RF3 for the replication factor, or if it is non-prod they can do RF2; so divide by 3 or divide by 2. 9 terabytes raw would give you either 3 terabytes with RF3 or 4.5 terabytes with RF2, and that is before dedupe, compression, and things like space-efficient clones kick in. So we expect that the 3 terabytes at RF3 might actually look more like 4 or 5 terabytes once the data services are factored in.
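Rob's capacity math can be written out as a short sketch: raw capacity divided by the replication factor gives usable capacity, and an assumed data-reduction ratio from dedupe and compression gives effective capacity. The 1.5x reduction ratio below is an illustrative assumption, not a guaranteed figure:

```python
# Capacity math from the Q&A: usable = raw / replication factor (RF2 or RF3);
# effective = usable * data-reduction ratio. The 1.5x ratio is an illustrative
# assumption for dedupe + compression savings, which vary by workload.

def usable_tb(raw_tb: float, rf: int) -> float:
    """Usable capacity after applying the replication factor."""
    return raw_tb / rf

def effective_tb(raw_tb: float, rf: int, reduction_ratio: float = 1.5) -> float:
    """Effective capacity once assumed dedupe/compression savings apply."""
    return usable_tb(raw_tb, rf) * reduction_ratio

print(usable_tb(9, 3))      # 3.0 TB usable at RF3
print(usable_tb(9, 2))      # 4.5 TB usable at RF2
print(effective_tb(9, 3))   # 4.5 TB effective at RF3 with a 1.5x reduction
```

A 1.5x reduction lands inside the "4 or 5 terabytes" range Rob quotes for the 9 TB raw, RF3 case.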
Rachel: Ok great. Well that is all we have for questions. So I am going to turn it back to Allen for closing remarks.
Allen: Thanks, Rachel, and thanks, Mike and Rob. Just very briefly in closing, the thing I want to leave you with is, as they said, if you have any questions or if we can be of any help, feel free to reach out to us. That is why we are here, and we would be happy to help you out with either existing situations or anything you are considering in the future. So if there aren't any other questions, I guess we are all set.
Rachel: Thank you everyone.