VMware vSAN Hyperconverged Webinar On-Demand
Rachel: Good afternoon, everyone. Welcome to today's VMware vSAN Hyperconverged Webinar. My name is Rachel Cuomo and I am a marketing communications supervisor at Continental Resources. Just a few housekeeping items before we begin: if everybody could put themselves on mute, that would be greatly appreciated, and if you have questions throughout today's presentation, please enter them into the chat room and we will take them at the end of today's session. Now I would like to introduce Mark Boisvert, Storage and Virtualization Product Manager at Continental Resources.
Mark: Thank you, Rachel, and thank you, everybody, for joining today. Again, my name is Mark Boisvert and I am the Storage and Virtualization Product Manager here at Continental Resources. I manage our VMware product line, and I am joined today by Thomas Rentert, one of our solutions engineers. For a little bit of housekeeping and a quick commercial for Continental Resources, or ConRes: we are a value-added reseller, a VAR, and we specialize in the design, resale, and implementation of hardware and software solutions. We are woman-owned and have been in business for over 55 years. We are headquartered in Bedford, Mass. (if you don't know where that is, it is about 30 minutes outside of Boston), with regional offices in New York, New Jersey, Philly, Maryland, and Chicago, and offices worldwide in Hong Kong, the United Kingdom, and Canada, so if anyone needs help globally we can help you out with that as well. We are a Premier partner, VMware's highest partner level, and on the engineering side we can do any pre- and post-sales work around VMware. We also have a full complement of VMware labs on demand that we can show you. They cover vSAN, which we are talking about today; Horizon, which is VMware's VDI solution; and NSX, which is the network virtualization solution. I would like to turn it over to Tom, who is going to go through a little bit of an overview of vSAN today and a demo, followed by questions at the end. Again, thank you, everybody, for joining today.
Tom: Alright, thanks, Mark. On the agenda today we have an introduction to vSAN, a little bit about how vSAN integrates into the VMware Software-Defined Data Center, an overview of the vSAN architecture and some of its features, an outline of some use cases, and then we will open it up for questions at the end. Feel free to jot down anything you might be curious about and hopefully we can get you an answer in detail. Now, starting with a little bit about vSAN, we really want to approach the question: why is there a need for vSAN? I think we all know the answer here. Digital businesses, and really any enterprise business, are on an unsustainable path. Complexity is increasing; there are a number of products on the marketplace, but you get vendor lock-in, all of these complicated third-party management tools, and you have to be an expert in multiple technologies. What we really need is a simple, solid, and robust storage platform, something you can keep hands-off, put in place, and rely upon without constant maintenance, using the tools you are familiar with. vSAN is something you should definitely look at leveraging inside your data center. So before we get too far into the weeds, we want to talk about what vSAN is. vSAN is a software-based distributed storage solution; SDS stands for software-defined storage. There are a number of players in this marketplace, but this is the VMware product. It integrates directly into the ESXi hypervisor, and the main benefit is that it is radically simple: very easy to implement, with no physical or virtual appliances required. It is built directly into ESXi, and we will go into the architecture and a live demo as we get later into this presentation. So what does vSAN do? vSAN aggregates direct-attached storage devices on ESXi hosts and makes them operate as a single pool of storage. So any standard x86 server that you have, any local hard drive that has been installed, gets presented over a network as a single shared storage device.
So you no longer need a SAN or NAS in your environment to cluster your servers and provide high availability. You can do this in a hybrid configuration, with SSDs providing cache and regular magnetic spinning drives (cheap commodity server drives) providing the back-end capacity, or, if you are more concerned with performance and stretching your capacity and density, you can look into an all-flash configuration. vSAN is integrated with VMware's hypervisor, directly inside ESXi. It can provide extremely low latency: all the disks are local, and you don't need any protocols transmitting back and forth to your SAN or NAS device, adding extra overhead to your storage traffic. Why should you look at using vSAN? Well, the main reason is that vSAN is to storage what vSphere was to compute. When you look back to the early 2000s, you had storage sprawl and server sprawl, everything getting very compartmentalized into different racks, and lots of things that you really wanted to consolidate down to something simple, easy to manage and monitor on a day-to-day basis. That is what vSAN is doing for storage. Again, it runs on any x86 server, it pools your flash drives and your regular hard drives into a single datastore, and it is very scalable without compromising on performance. This leads into the VMware Software-Defined Data Center. We are just going to touch on this, but it is something you should definitely explore as a possibility for your business. At its core, it is the phrase used to define a data center where all infrastructure is virtualized and delivered as a service, so you can really think of the SDDC as an IaaS solution for your own network. Control of the data center is fully automated by software, meaning hardware configuration is made through intelligent software systems. Everything is abstracted into a pool of capacity and consumed as a virtual resource.
So while there is a hardware layer that you have to put in place and architect appropriately, day-to-day administration is strictly at the software level, and you really don't need to worry about your servers or your storage or SAN or anything like that in order to operate your business and run your applications. The VMware Software-Defined Data Center is comprised of three core products at this point in the portfolio: storage virtualization, which is the vSAN platform we are talking about today; server virtualization, ESXi, which people have come to know and love and which is the primary hypervisor you will see used by small and large businesses around the world; and network virtualization, which is NSX. Those three tools, combined with the vSphere Web Client and the vRealize suite, provide management simplicity using tools that people are already familiar with, and they make it very easy to scale and grow your environment. Some of the benefits of adopting the SDDC model include web-scale architecture: simply add another node when you need more space or more compute, or add more disks if you just need storage and no additional compute. It is very agile, with speedy workload deployment and centralized management using software you are familiar with, and it is really the foundation for the hybrid cloud. Once you move into a fully virtualized model, all of your workloads and applications become very mobile, and you can move them from a private cloud to a DR site to a public cloud using a number of tools out there. But getting into the specifics on vSAN, we are going to overview the architecture and some of the features available. With vSAN you have both hybrid and all-flash configurations available. A hybrid node fully stocked with disks is capable of producing 40,000 IOPS per host, and in an all-flash configuration you are capable of seeing 90,000 or even 100,000-plus IOPS per host.
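As a rough illustration of how those per-host figures translate to a cluster, here is a minimal sketch. The numbers are just the ballpark figures quoted above, and the function name is hypothetical; real throughput depends on workload mix, cache hit rates, and storage policy.

```python
# Rough cluster-level IOPS estimate from the per-host ballpark figures above.
# Real throughput depends on workload mix, cache hit rate, and storage policy.
PER_HOST_IOPS = {"hybrid": 40_000, "all_flash": 90_000}

def cluster_iops(config: str, hosts: int) -> int:
    """vSAN scales roughly linearly, so a cluster's ceiling is about
    per-host IOPS times the number of contributing hosts."""
    return PER_HOST_IOPS[config] * hosts

print(cluster_iops("hybrid", 4))     # 160000
print(cluster_iops("all_flash", 4))  # 360000
```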
The core difference between the models: in a hybrid configuration you require SSDs to be your caching tier, while in an all-flash configuration you can use read-intensive SSDs for your capacity tier as well. SSDs are used for read cache and write buffer in the vSAN architecture, and that applies to either hybrid or all-flash, while the capacity tier stores all VM files and provides 100% of the usable, consumable storage you see. All vSAN communication uses a standard IP network on a layer 2 network segment. There is no need to route any traffic; you can use a switch from any major vendor as long as it supports multicast. You configure a VMkernel port for it on each host, and that provides the IP-level communication for the storage. If you use the vSphere Distributed Switch, which comes with a vSAN license, it supports Network I/O Control, so you can share your 10-gig uplinks for all of your traffic, not just vSAN but virtual machines, vMotion, and things of that nature as well. So how does vSAN work? We will start at the lowest-level component, which is a disk group. A disk group combines hard drives and SSDs, and it is comprised of at least two tiers. You need a caching tier, which has to be flash, usually enterprise-grade and very fast; it can be mixed-use, write-intensive, or read-intensive. The capacity tier can be hard drives or flash drives, up to seven per disk group, and those are usually mixed-use or write-intensive drives. Disk groups cannot be created without a flash device, so there is no such thing as vSAN with all magnetic drives; you do need enterprise flash providing the caching layer. You can have a total of up to five disk groups per host, for a maximum of 35 capacity drives, so you can see some pretty significant density nowadays with capacity drives in the 2- and 4-terabyte range. At its simplest, vSAN works because it aggregates disk groups into a shared datastore.
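Those disk-group limits (one flash cache device plus up to seven capacity drives per group, up to five groups per host) set a hard ceiling on per-host density. A minimal sketch of that arithmetic, assuming hypothetical 4 TB capacity drives:

```python
# Disk-group limits described above: 5 groups per host, 7 capacity drives per group.
MAX_DISK_GROUPS_PER_HOST = 5       # each group needs exactly 1 flash cache device
MAX_CAPACITY_DRIVES_PER_GROUP = 7  # HDDs (hybrid) or SSDs (all-flash)

def max_raw_capacity_tb(drive_size_tb: float) -> float:
    """Raw (pre-redundancy) capacity of a fully populated host: 35 drives."""
    drives = MAX_DISK_GROUPS_PER_HOST * MAX_CAPACITY_DRIVES_PER_GROUP
    return drives * drive_size_tb

print(max_raw_capacity_tb(4.0))  # 140.0 -- 35 x 4 TB drives in one host
```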
That object store presents itself to vSphere as a file system, and then the volume is mounted across all the hosts in the cluster and presented as a single logical unit. You can have compute-only nodes in a vSAN environment as long as they are licensed for vSAN and are members of the VMware cluster. If you have a large VM, up to 64 terabytes, it can be striped across multiple disk groups if necessary, but obviously your footprint for redundancy will grow significantly. For resiliency, vSAN really incorporates the idea of RAID: it uses a distributed RAID architecture, both striping and mirroring, to separate your data. All hard drives present themselves as raw data to the vSAN cluster. A VM is striped across a certain number of hard drives, as defined in the storage policy, and that object, or those stripes, are then mirrored to another disk group on another node somewhere else in your vSAN cluster. That really covers the hybrid configuration; there are some more advanced and more efficient storage configurations available in all-flash deployments that mimic RAID 5 or RAID 6 level efficiency, versus RAID 1, which halves your raw storage to 50%. In an all-flash configuration you also benefit from compression and deduplication, just like on other major storage platforms. vSAN works because it takes advantage of a standard IP network. Across that IP network, which is a dedicated layer 2 network segment, vSAN replicates all data and intra-cluster communication. It supports standard vSwitches as well as the Distributed vSwitch, which is recommended and comes with VMware vSAN licensing. So even if you don't have Enterprise Plus, if you purchase vSAN at any licensing tier you can take advantage of the Distributed vSwitch, so you can set up Network I/O Control policies on your storage traffic on your shared adapters. Network adapters can be teamed in vSAN, but this is really for high availability, not for bandwidth aggregation. Another thing: layer 2 multicast is a requirement.
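To make the efficiency difference concrete, here is a small sketch comparing usable capacity under RAID 1 mirroring versus RAID 5/6 erasure coding. The overhead ratios are the standard ones for FTT=1 mirroring, 3+1 RAID 5, and 4+2 RAID 6; treat the capacity numbers as illustrative.

```python
def usable_tb(raw_tb: float, scheme: str) -> float:
    """Approximate usable capacity per protection scheme:
    RAID 1 (FTT=1) keeps 2 full copies -> 50% usable;
    RAID 5 is 3 data + 1 parity -> 75% usable;
    RAID 6 is 4 data + 2 parity -> ~67% usable."""
    raw_needed_per_usable = {"raid1": 2.0, "raid5": 4 / 3, "raid6": 6 / 4}
    return raw_tb / raw_needed_per_usable[scheme]

print(usable_tb(10.8, "raid1"))  # 5.4  -- mirroring halves raw storage
print(usable_tb(10.8, "raid5"))  # ~8.1 -- erasure coding stretches capacity
```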
Jumbo frames, on the other hand, are just recommended: you can support vSAN using 1500-MTU packet sizes, but if your switches do not support layer 2 multicast, you can't move forward with an implementation. If you deploy vSAN and have a need to grow it, it is very easy to get clean, linear scaling of capacity and performance. If you want to scale out, you simply add another node to your cluster; this is completely non-disruptive, with no downtime. If you need more storage and you have drive slots available, you can merely add more disk groups to your existing nodes. So it is very common to see a server node with 24 drive slots initially have only 8 drives, leaving you the capability of adding up to 16 more drives to that server at a later date. In vSAN, all storage policies happen at the VM level. So rather than a traditional SAN or NAS device, where you configure performance based on a LUN or volume, in vSAN you configure everything the same way you would with Virtual Volumes, which is with a storage policy. A storage policy provides intelligent placement within the vSAN cluster, and it can define your performance characteristics by letting you set the number of stripes, and thus the number of underlying physical disks that make up the data set for any VM object. Lastly, your redundancy is also defined as part of the storage policy. This is where you designate what is known as failures to tolerate, which can really be thought of as the number of individual storage components you are OK to lose without impacting your business or your application. So if you lose a disk group, or if you lose a host, those things should not affect your storage footprint, because there is another copy of your VM object somewhere else in the vSAN cluster. You can also set that to more than one copy on a per-VM basis.
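The failures-to-tolerate math is simple enough to sketch: a mirrored policy keeps FTT + 1 copies of each object, and because each copy (plus witness components for quorum) must land on a different host, FTT=1 implies a minimum of three hosts. A minimal illustration, with hypothetical function names:

```python
def replica_count(ftt: int) -> int:
    """RAID 1 mirroring keeps FTT + 1 full copies of each VM object."""
    return ftt + 1

def min_hosts_mirroring(ftt: int) -> int:
    """Copies plus witness components need 2*FTT + 1 hosts for quorum."""
    return 2 * ftt + 1

print(replica_count(1))        # 2 -- survives losing one host or disk group
print(min_hosts_mirroring(1))  # 3 -- the classic minimum vSAN cluster size
```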
So if you have mission-critical VMs where losing any data is absolutely not an option and you are still concerned about the RAID 1 mirroring, go ahead and put a third or fourth copy out there somewhere in the vSAN cluster. There is zero data loss in case of a disk or network failure, but if you do lose a host, just like anything else in a VMware environment, High Availability needs to bring that VM back up on another host. Like a lot of VMware products, vSAN comes in multiple editions: there is Standard, Advanced, Enterprise, and ROBO, which we will outline on the next slide, with various features across them. The key characteristic: if you are looking for an all-flash vSAN configuration, you need at least a vSAN Advanced license to accommodate that. That also comes with deduplication and erasure coding, to get further efficiency out of your vSAN deployment and really stretch your capacity and effective storage when you start putting VMs on that object. If you get vSAN Enterprise, that supports a stretched cluster configuration, so within a low-latency network you could actually lose an entire rack and your vSAN would still be up and running in another rack. Now, that does require the provisioning of extra capacity, but you get instantaneous failover capability in the event of a power outage or something of that nature that is isolated to just a rack or just a row. You can also set QoS policies to limit the number of IOPS that some VMs consume. ROBO is the remote office/branch office solution. For very small offices where it doesn't make sense to put in place a 3- or 4-node cluster, ROBO allows you to do a 2-node cluster with a 10-gig cross-connect cable, so you don't even need a switch. Licensing for ROBO is sold in 25-packs of virtual machines, so a single ROBO license pack will support multiple sites.
You could have 1 VM at 25 different sites, or you could have 5 VMs at 5 different sites; however you want to manage and distribute your VM services, as long as there are no more than 25 VMs in one site you will stay compliant, and it doesn't matter how many hosts you have at each site, just the count of the VMs. One of the great advantages of vSAN is that it is an all-software platform; there is no vendor lock-in. However, there are certain vendors that VMware has worked with to validate hardware components and make sure that any hardware they submit will be supported for vSAN. These include all your standard tier 1 server vendors: Cisco, Fujitsu, HP, Dell, IBM, Super Micro. All components have to come off a hardware compatibility list, which really just takes into account a couple of factors; primarily, the drive type and the storage controller have to be on the list to support vSAN. Ready Nodes can come in all different shapes and sizes. We will go over a couple of them on the next page; they are prevalidated configurations, a single SKU you can purchase from any of these vendors, designed to support a certain kind of workload. For server virtualization this is broken into a couple of different types. On the hybrid side you have low-, medium-, and high-profile nodes, which you may see abbreviated HY-4, HY-6, and HY-8. Those are the different capacities you can get out of a single node; keep in mind that your usable space is really half of what you see, along with the kinds of characteristics you have come to expect from each profile. In an all-flash configuration you do not have a low-profile server, just a medium and a high: higher density, higher capacity, higher performance all around. vSAN is also an ideal candidate for desktop virtualization, or any kind of VDI, whether that is VMware's Horizon View, which includes licensing for vSAN, or Citrix XenDesktop.
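Going back to the ROBO licensing model for a moment, the compliance rule is easy to express. A minimal sketch; `robo_packs_needed` is a hypothetical helper, just the arithmetic described above:

```python
import math

def robo_packs_needed(vms_per_site: list[int], pack_size: int = 25) -> int:
    """A ROBO pack licenses 25 VMs total, spread across any number of sites,
    provided no single site runs more than 25 VMs (host count is irrelevant)."""
    if any(site > pack_size for site in vms_per_site):
        raise ValueError("a single site may not exceed 25 VMs under ROBO")
    return math.ceil(sum(vms_per_site) / pack_size)

print(robo_packs_needed([1] * 25))         # 1 -- one VM at 25 different sites
print(robo_packs_needed([5, 5, 5, 5, 5]))  # 1 -- five VMs at five sites
```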
It is a perfect storage solution, as the storage is local to each server and very low latency, and the cost for all-flash is very reasonable when compared with traditional NAS and SAN arrays. You can do that in either a hybrid configuration or, as recommended for the highest efficiency and performance, an all-flash vSAN when you are talking VDI. In an appropriately large node you can hold up to 200 desktops at a time. So we will take a quick break from the slide deck here and move into a live demo of vSAN. This is in our ConRes Bedford lab. As you can see in the demo, this actually runs a number of core services for our lab; it is not something that we set up for just these presentations. It is something that we rely upon on a day-to-day basis, and it has been in place for over a year. This is the standard vSphere Web Client, with all the tools you are familiar with in vSphere; all vSAN management is done through this pane of glass. This here is vCenter 6.5, while we have vSAN running on 6.0 hosts. So within the Bedford cluster, these 3 hosts from Fujitsu all have a vSAN shared datastore. Because vSAN is strictly local disk, it doesn't prohibit you from connecting to other datastores that are still traditional or legacy within your environment. You know, vSAN is very complementary to your existing infrastructure; it is not a rip-and-replace like you may see with a SAN replacement. So out there on our network we also have a traditional NetApp array with [inaudible] and NFS disks, but our vSAN datastore across three 1U servers has a total capacity of 10.8 terabytes, and that is usable. If you look at this datastore, we have almost 50 VMs running on it, all on a day-to-day basis. These hosts also run Zerto, so you can have IP-based replication straight from vSAN to any other VMware or Hyper-V infrastructure. It is pretty neat that it doesn't limit you to any kind of replication: as long as it is IP-based, it doesn't need to be vSAN on the other side.
We can move it into any other hypervisor, private cloud, or public cloud as we see fit. If we go back to our cluster view, we can look at the configuration of vSAN and see that it is set to add disks to storage automatically. So if we were to cable up and connect the 4th node, as soon as we add it to the vSAN cluster, vSAN will automatically divvy up and consume the disk group to expand that vSAN datastore from 10.8 terabytes to 14 terabytes. Very easy to scale out. We could also go in there and add more disks and just bring that capacity up within the 3 nodes we already have. You can actually see the firmware version, or format version depending on your manufacturer, and in vSAN 6.6, the latest version, you can actually update firmware for your storage controllers and for your disks from within this console. So there is really a lot of tight integration between VMware and the tier 1 vendors, to make it so you no longer need to go to an iLO or an iDRAC or anything to perform your day-to-day maintenance tasks. You can actually take a look at the individual disk view for how groups are configured, all the way down to the serial number: there is a disk group view, with individual disks on the bottom. It is a little hard to see at this resolution, but we have identified our flash disks as well as our capacity tier. So the vSAN environment you see here is hybrid. If we had another rack and we wanted to demo fault domains, we could actually add another 3 hosts, and then we could lose 3 hosts all at once without losing any data. It looks like we have got a networking difficulty; give me one moment. Alright, while this demo is being restored, I think we can probably move on to the questions phase. So, opening the floor, does anybody have any questions they would like to ask?
Rachel: Thanks, Tom. We had a few questions come in. The first is: what server vendors support vSAN?
Tom: All major server vendors at this point have a vSAN configuration, whether that is something off the shelf or a Ready Node. So Dell, Fujitsu, Lenovo, HP, Super Micro, NEC, just to name a few, but odds are, if you are in the server manufacturing business, you have a vSAN option out there.
Rachel: Ok, thank you. Next question: does vSAN support Hyper-V/KVM?
Tom: No, it does not. vSAN is strictly a VMware technology at this point, and no other hypervisors are on the roadmap, although if you use a third-party tool like Zerto you can replicate between a VMware vSAN environment and a Hyper-V environment.
Rachel: Next question is: how many disks does vSAN support per server node?
Tom: Per node you can have up to 35 capacity disks with 5 cache disks, so if you have a single server that can support 40 drives, you can scale your density quite high. That also goes to a total of 64 nodes in a cluster, with up to 6,400 virtual machines working in a single vSAN datastore. So if you have a very large VDI deployment, this is an ideal space to set it up.
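Those limits multiply out quickly. A quick sketch of the maximums just quoted (illustrative arithmetic only):

```python
# Scale limits quoted above: 35 capacity + 5 cache disks per node,
# up to 64 nodes per cluster, and 6,400 VMs per vSAN datastore.
DISKS_PER_NODE = 35 + 5
MAX_NODES = 64
MAX_VMS = 6_400

max_disks_in_cluster = DISKS_PER_NODE * MAX_NODES
avg_vms_per_node_at_max = MAX_VMS / MAX_NODES

print(max_disks_in_cluster)     # 2560 drives in a fully built-out cluster
print(avg_vms_per_node_at_max)  # 100.0 VMs per node on average
```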
Rachel: Ok. The final question is: how is redundancy achieved using vSAN?
Tom: Redundancy is achieved with those VM storage policies we discussed earlier. You stripe your data across multiple disks, and wherever those disks reside, vSAN will actually mirror your data to another location, another disk group on a different host. Now, you can set that to multiple copies if you like; it doesn't have to be strictly a second copy. You can have a third copy out there. That is the gist of it.
Rachel: Ok, thank you. Well, that is all we have for questions, Tom. Are you done with your demo?
Tom: Let me see if we are back online. We can just go over a little bit more and keep it brief, but since we have seen some of the configuration options, you may want to look at what kind of native monitoring there is. You can run the vSAN health service as well as proactive tests in your environment to make sure that everything is healthy. As you can see, when there is a new version out it will alert you, and the HCL database will tell you when the last time you checked for a compatible component was, but you get an overall healthy check mark if everything is looking good. You can run tests on individual components or groups of components, retest at any point in time, and see exactly how capacity is being consumed in your environment. By default, all vSAN storage is thin provisioned, so it is important to look at this capacity view to realize how much space you actually have available compared to how much space you logically have available. You can also see the resyncing status: if for whatever reason you take a node off for maintenance, or you lose a server and it goes down for more than 2 hours, you can check how long it will take to copy all of the data back to that node when you restore it. I think that covers everything on the live demo that I wanted to touch on.
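That capacity view matters because of thin provisioning: the space promised to VMs can exceed the space physically written. A small sketch of the overcommit ratio, with made-up numbers and a hypothetical function name:

```python
def overcommit_ratio(logical_provisioned_tb: float, physically_used_tb: float) -> float:
    """vSAN objects are thin provisioned by default, so the logical capacity
    promised to VMs can exceed what is physically consumed on disk."""
    return logical_provisioned_tb / physically_used_tb

# e.g. 20 TB promised to VMs but only 8 TB actually written so far:
print(overcommit_ratio(20.0, 8.0))  # 2.5
```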
Rachel: Ok, great. Thank you, Tom, thank you, Mark, and thank you, everybody, for joining our presentation today. If you would like to learn more about our hyperconverged offerings, please reach out to your ConRes rep or engineer, or you can also visit our hyperconverged page on conres.com. Thank you, everyone.