HPE Hyperconverged Webinar OnDemand

HPE Hyperconverged Webinar


TRANSCRIPT:

Rachel: Good afternoon, everyone. Welcome to today's webinar, Hyper Convergence from HPE and SimpliVity. My name is Rachel Cuomo and I am a marketing communications supervisor here at Continental Resources. Just a few housekeeping rules before we get started. All attendees will be on mute. If you have any questions throughout the webinar, please enter them into the chat window; we will address questions at the end. Now I would like to introduce Rick Nagengast, Hyperconverged Practice Manager at Continental Resources. Rick, take it away.

Rick: Thank you, Rachel. As your sponsor, I would like to take a moment and give you a snapshot of who Continental Resources is. At ConRes we believe that IT data centers can be radically simplified with the adoption of emerging technologies such as hyperconvergence, software-defined networking, and cloud technologies. For those of you who don't know Continental Resources, we are a 50-year-old, half-a-billion-dollar-strong hybrid IT solutions partner serving over 1,500 customers located between Boston and Washington, DC, as well as greater Chicago. Our hyperconverged practice has endorsed SimpliVity as one of our preferred HCI vendors over the last three years, and now, with the acquisition by HPE, we are even more bullish given HPE's financial backing and their accelerated product roadmaps. I am pleased to introduce Tom Oertel, Hybrid IT Specialist from HPE.

Tom: Thanks, Rick. If I may, I am going to start by going through our strategy, which may help you understand why we acquired SimpliVity. Rachel? It is just three things, really. HPE's strategy is to make hybrid IT simple. We have been listening to customers in our briefing centers over the years, and a high proportion of them are moving toward the cloud or have already put workloads in the cloud. So we are gearing everything up, and have been for the last couple of years, to make this hybrid IT environment simple: private cloud, public cloud, [inaudible 00:02:38] all those things in that category, traditional workloads as well as software-defined. We are also empowering and powering the intelligent edge. That is not just branch and campus; it is things like the industrial Internet of Things, which is going to drive tremendous traffic through the internet soon. By 2020 the estimate is 20 billion connected devices, and those will all benefit from a platform like the one SimpliVity brings us. We also have the expertise, of course, because implementations are becoming more complex, and we now have different consumption models, which customers are telling us are very popular. As they move from CapEx to OpEx models, being able to pay as you go or pay as you grow is the way many of them are heading, and SimpliVity will be foundational to that as well. To be more specific, SimpliVity is going to dramatically help us take complexity out of the data center. Our customers just told us that is number one on their wish list, along with security, and we are actively moving on it. SimpliVity is going to help us there, especially with its data services, which can automate as well as take complexity out of the data center. It will also support our consumption models, and it is going to be foundational to our industrial Internet of Things play as that segment of the market grows and takes up as much as a third of internet traffic in the next few years. And lastly, this is not a one-trick pony: we are taking the intellectual property we gain from the SimpliVity acquisition and applying it to other platforms. We recently launched the next generation of blade system, called Synergy, which is a composable infrastructure platform, and it will benefit tremendously from having SimpliVity data services, deduplication, and compression in it later this year. So this will be exactly the fulfillment of the promise behind the acquisition. With those few things, let me pass it on to Dan Pearl to hear more about SimpliVity itself.

Dan: Great. Thank you, everybody, and thanks to the ConRes team and the HPE team. On behalf of SimpliVity, we couldn't be more excited to be part of the HPE family and to be part of this broad portfolio, which places hyperconverged in a strategic position alongside the other amazing products HPE has, from market-leading blades and rack-mount systems all the way up to the newest and brightest things like composable with Synergy. What we thought we would do in this webinar is go through a little bit of the market overview and the market dynamics, why hyperconverged and specifically why SimpliVity has been growing as fast as it has, to the point where HPE decided to gobble us up; move into a bit of a technical deep dive and some competitive intelligence; and then end with a short demo to whet the appetite for some potential follow-on or next steps if people are interested afterwards. So with that, let's go through a little bit of where the market has been. If you think about what the reports from IDC, Gartner, Forrester, etc. have been showing over the past couple of years, the market, as everyone already knows, is dynamically changing. Think about most of the traditional IT players, all of the big mergers that have happened in the market, the different spinouts and spin-mergers, the companies who maybe aren't here anymore, and the ways we have been used to buying over the past 20 years. Generally speaking, traditional IT is flat or down, and where all the money and all the spend are going is either to the public cloud or to hyperconverged, or to both. What HPE fundamentally believes, as Tom already mentioned, is that it is going to be both. It is going to be that hybrid environment; companies are going to need to find that right mix, and hyperconverged sits in a really strategic position, not only as the new way of managing virtual environments on premises but also as the growth factor that enables the move into that right mix and that hybrid cloud. If you look even more deeply at where some of the specific reports are going, and if you have seen some of the press, one thing I think is actually pretty cool is Gartner saying that hyperconverged is going to be mainstream over the next five years. If you hit next, you will see it in more detail: by 2019, 30% of all storage capacity worldwide in enterprise data centers is going to go hyperconverged, which I think is pretty impressive, and 451 said that 40% are already using it today. What this means is that it basically doesn't matter which analyst you look at; they all think it is growing and they all think it is already here, which is a cool place to be and something that is enabling our customers to take a look at a new way of doing things. In fact, SimpliVity is the fastest growing of those, and I won't belabor the point, but let me build this slide out.
The more important reason it is growing this fast is that hyperconverged fundamentally starts to solve challenges customers have been having for the past 20 years, give or take. Think about your career in IT, whether you are one of the IT administrators on the phone or you are running a team of IT administrators: I would be shocked if, behind the scenes right now, you are not nodding your head at one of these problems, problems that might have led you to say some of these things over your career. Things like complaining about performance, IOPS being a challenge, backup windows, RPOs, RTOs. Everyone knows that testing DR on your DR weekend, where you have to try and make everything work, is never a fun time. Spinning up and spinning down VMs faster, development complaining because there are never enough resources, managing interop lists, managing ACLs: I can almost guarantee you have said at least some of these things throughout your career. A lot of it has to do with the fact that the underlying technology really just hasn't kept pace as additional data services have become critical in managing virtual environments. Think about it: first, capacity is growing exponentially. It is no surprise to anyone; the data tsunami, the digital universe, the digital whatever, it is growing a lot. Everyone gets that. More important is the fact that performance isn't keeping pace, and if you think about the performance challenges people have today, especially when it comes to storage but also in terms of backup infrastructure, you really get one of three answers from the traditional vendors. If you need more performance, you either have to add more spindles, which is the traditional answer, or you have to add flash, or you have to do both and intelligently tier your arrays. You need different policies and different RAID sets; you have to have different policies for your logs versus your database files versus your files themselves versus objects; and you have to have someone who knows how to configure a LUN, configure a target or an aggregate, configure all of that through the Fibre Channel network, and mask or zone it back to the machine. That means you have to have a person, or groups of people, who actually know how to do all of those things, all in the same realm as managing your virtual environment, because of course most of the technologies we use in our data centers today were built before VMware and therefore were built with constructs that didn't exist when virtual environments came to be. So all the greatness we have in terms of managing applications and moving VMs around, vMotion, Storage vMotion, a lot of that falls apart when it goes down into that physical infrastructure. Ultimately, what ends up happening is that most of our data centers kind of sort of look like this. We call it the best-of-breed architecture, and I don't really care whether you are an SMB or a large enterprise. You might have one array or a thousand arrays, you might have one data center or many, but I would bet most of us look like this: a variety of servers, usually somewhere between 10 and thousands, each running a set of virtual machines.
Those VMs are connected over iSCSI or Fibre Channel or CIFS or NFS or SMB 2.0, or some combination of all of them, down to one SAN, or tier-two, tier-three, or tier-four SANs with multiple policies. Everyone knows you have to back it up, and you probably have different backup applications for VMs versus physical. Those backups then have to get sent somewhere, so you have a backup target, usually running some dedupe, and then you have to get it all safe for DR. So you have different pipes, you have WAN replication, you have different forms of replication at different levels, and if I count it all up, just thinking out loud, you have probably 10 different applications and 10 different appliances. Each one of those has its own management screen, probably multiple management screens. You have to have 10 different people who understand how these things all work and how they interoperate, 10 different support structures, 10 different vendors to call, 10 different upgrade procedures, 10 different patching procedures, and it gets, as we all know, really complicated. On top of that, none of these devices was built by the same company or built to work with the others. So you have different processing power and different points in time at which each application processes that information, all from different vendors, and it leads to a lot of inefficiency. None of this was built for the wrong reasons, I should add. It was all built because, at each point in time, the VMs and applications the business had to deliver required different services, and this was the best way to do it. Historically this was the way to do it: if you needed backup, you had to buy a backup application; if you needed to save that backup, you had to buy a backup appliance; if you needed DR, you had to buy WAN replication and other replication tools. But what it has ultimately led to is a time problem. We have all heard the expression that 80% of the time is spent keeping the lights on and only 20% of the time goes to innovation. Well, IDC actually took that a step further and broke it down, and not only is it 20% innovation at best, that 80% goes to things like monitoring and troubleshooting and provisioning and patching and service requests and dealing with vendors, and all of that is fundamentally infrastructure maintenance. It is designed to manage infrastructure. Instead, we thought, what if you could flip it around? What if, instead of 80/20, it could be 20/80, and you could spend 80% of your time on innovation instead of just keeping the utility running? This is where hyperconverged comes in. So we say: imagine your future. Imagine for a second, and yes, we have all heard stories that are too good to be true; we have all heard data-center-in-a-box stories before, but just imagine that this time it is real and this time we can actually do what we say we can do and give you a simplified view. So we think about a couple of things. We think about everything above the hypervisor; let's just say that stays the same for right now. We actually make that a little bit easier, but let's say it stays the same. Then we have everything below the hypervisor.
We have your servers, your storage switching, your storage, your backup, your backup appliances, your replication, all the devices I talked about in the best-of-breed architecture. What if, instead, you could do it all in a single box? What if you could do everything I show from a functionality perspective on the left, and you could do it with a single box running effectively the same HP servers that we have all known and loved for the past 25 years, industry leading, and we could put all of that functionality into a single 2U space, then add another one for HA, add a third for scale-out, add a fourth for DR, and we are off and running? If we could do this, think about a greenfield data center environment, which of course nobody has, it doesn't exist, but if you did and you could start with everything on the left versus the two boxes on the right, I think it sounds pretty cool. Immediately, the first question, and I know you are all on mute right now, but I assume your first question is going to be: how is that possible? How could that possibly be true, and why is this the time when it is finally going to work? Well, first I am going to talk about the underlying technologies at a high level, and then at the end we will show you a demo and show it to you in action. The first thing we have is what we call the OmniStack Data Virtualization Platform. Plain and simple, without any marketing involved, what this means is a file system, an object store, and a PCIe accelerator card, all built by SimpliVity, all built from scratch with our patents, then integrated into the HPE DL380 to deliver an integrated appliance with all of that functionality. At a high level, what we mean by data virtualization is that what SimpliVity and HPE are providing is fundamentally similar to what VMware provided to applications. Ten to twenty years ago, depending on when you started down that virtualization roadmap, and now most people are 70, 80, 100% virtualized, what VMware did, as we all know, is present an abstraction layer, a hypervisor, to the applications. The applications think they have full access to the underlying physical resources, but in reality they have been abstracted by VMware, and VMware gets in the middle to deliver that efficiency. So instead of physical memory and physical CPU like applications always used to have, now they have virtual memory and virtual CPU and they don't know any different. What SimpliVity provides is the same thing at the data layer. Instead of VMware writing its data down the way it would normally expect to and having that data sent out to a SAN, or out to a backup device, or over to replication, SimpliVity gets in the middle, virtualizes that information, and makes VMware think it is doing things the same way it always has. And instead of LUNs and RAID groups and storage protocol management and masking and zoning, SimpliVity manages virtual machines, and that is it. We do it with a couple of different underlying technologies, number one being that data accelerator card I mentioned before.
It is a PCIe accelerator card that plugs into the back of the system, and that card, in combination with SimpliVity software, deduplicates, compresses, and optimizes all data at inception, once and forever, across all stages of the data lifecycle. That is a mouthful that fundamentally means we deduplicate and compress all data inline, 100% of the time, always, and because it is offloaded to our offload engine, there is no performance hit to the CPU on the system. In fact, even better, it actually speeds up performance, because your acknowledgments go back to the host at RAM speed. So we think that is pretty cool. The second thing we do is manage everything from the same pane of glass you already know how to use, which is VMware, and for the HPE fans out there, eventually it is going to be OneView on the HPE side as well. We didn't build our own GUI; we don't want you to have to learn anything new. Third is built-in data protection and DR: built-in backup, whether that is backing up a full one-terabyte VM in under 60 seconds, which we guarantee, or restoring a full one-terabyte VM in under 60 seconds, which we also guarantee in your contract, or sending that data at 10-minute RPOs across to a different site without any bandwidth requirements. These are all built into the code without any add-ons, upgrades, or additional licensing, and as I mentioned, we guarantee it. We call it the SimpliVity HyperGuarantee. The top two of these, I think, are the ones that really matter; the bottom three are more marketing. The top one is 10:1 dedupe, and in fact you can go to Simplivity.com and see phone-home data every single day for what the actual averages are in the field; it is usually somewhere between 30:1 and 50:1 on average, and we guarantee 10:1. We also guarantee backup performance and restore performance, and as a challenge, go ask any of the other providers in the market whether they would ever guarantee backup performance. They wouldn't, because it would be crazy, except for SimpliVity, and we will tell you why in a second. We back it up on the business side as well, in terms of business outcomes. Forrester reported on the average ROI customers are seeing when you compare SimpliVity against legacy, and it is 224% with a six-and-a-half-month payback on average. The Evaluator Group compared us to the cloud as well, because everyone's executives are tasking them with looking at the cloud. In fact, we think that is a good idea; there are a lot of use cases that make a lot of sense, whether it is Salesforce.com, Workday, or Office 365. We compared more against the infrastructure-as-a-service space, the AWS and [inaudible 00:18:59] we are going after, and found that we are anywhere from 22% to 49% more cost effective on a price-per-VM basis. So there are business outcomes backing up the technology, and use cases too. Customers always ask us, "Hey, hyperconverged must only be for ROBO and for VDI, because that is what your competition says," and in fact we fit there pretty well, but as you will see at the top, most of our customers run mission-critical applications and have been doing so for some time, since we GA'd the product back in 2013, and 80% of our customers have been running SQL Server, which is something we are pretty proud of. When it comes to use cases, I did say ROBO is something we do as well, so not to do that a disservice.
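To make the data virtualization and inline dedupe-and-compression ideas a bit more concrete, here is a minimal conceptual sketch in Python. This is not SimpliVity's code or API; the class name, SHA-256 fingerprints, and zlib compression are assumptions purely for illustration of the general technique described above: every block a VM writes is fingerprinted and compressed once, a block whose fingerprint has already been seen is never processed or stored again, and a "full backup" of unchanged data amounts to new references rather than new data.

```python
# Conceptual sketch only: a data virtualization layer with inline dedupe + compression.
# Not SimpliVity's implementation; names, hashing, and compression choices are illustrative.
import hashlib
import zlib


class DedupeDataLayer:
    """Sits between the hypervisor's writes and physical storage, the way a
    hypervisor sits between applications and physical CPU/memory."""

    def __init__(self):
        self.blocks = {}  # fingerprint -> compressed unique block (physical data)
        self.vms = {}     # vm name -> list of fingerprints (logical view)

    def write(self, vm: str, data: bytes) -> None:
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:                 # unseen data: compress and store exactly once
            self.blocks[fp] = zlib.compress(data)
        self.vms.setdefault(vm, []).append(fp)    # duplicate writes cost only a reference

    def backup(self, vm: str, name: str) -> None:
        # A "full backup" is a new set of references to already-stored blocks,
        # which is why it can complete in seconds and move no data locally.
        self.vms[name] = list(self.vms[vm])


layer = DedupeDataLayer()
for _ in range(1000):                             # a VM writing the same 4 KB block repeatedly
    layer.write("win01_bam", b"A" * 4096)
layer.backup("win01_bam", "win01_bam-backup")

logical = 2 * 1000 * 4096                         # primary copy plus its full backup
physical = sum(len(b) for b in layer.blocks.values())
print(f"logical ~{logical} bytes, physical ~{physical} bytes")
```

The same reference-based idea is consistent with what the demo later shows for remote backups: only blocks the destination site has never seen need to cross the wire.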
Here is one example of a public customer reference we have, Merlin Entertainments. You may be familiar with them because they manage a lot of different theme parks around the world, including, for those in the US, Legoland, which my nephews love, and that is something we can do as well. We can do up to 32 SimpliVity systems in a single vCenter right now, which is far more than most of our customers ever need, and then we can stitch together multiple clusters of 32 with higher-level automation tools, all with deduplication, compression, and built-in backup across the entire infrastructure, which really helps with centralized management of these remote sites. And when it comes to consolidation and standardization, another great reference for us is a company called NewPage, which then merged and became Verso, in the paper manufacturing business out in the Midwest and other parts of the United States and Canada. They had a whole set of legacy tools, legacy storage systems, and legacy backup tools, over 166 rack units' worth of gear, and that now looks like 12 rack units' worth of gear on SimpliVity with one user interface, called vCenter, plus another 4U worth of space in a second site for DR, something they have been able to do for the first time ever, and they have been off to the races from there. Red Bull Racing, which I think you might have seen as a little preview on an earlier slide, is a large and happy customer; Red Bull Racing runs on our technology out in the UK, and we like customers who promote speed, because we do the same thing. Now I will touch quickly on the competition. I know I am sort of racing through this, so please bear with me on the line, but we will get to questions and the demo pretty soon. Everyone always asks us, "Hey, SimpliVity, how do you compare against Nutanix, how do you compare against VxRail, how do you compare against HyperFlex, how do you compare against all the other hyperconverged vendors out there?" I am going to get to that, but fundamentally we compete against much more than that, and historically the large share of our competition has been the status quo. It has been people just doing the same things they have always been doing. We sort of tongue-in-cheek call that the legacy stack, but it is that set of infrastructure we have all been operating for the past 20 years, that set of servers, storage switching, storage, backup app, backup appliance, and DR, and that is really the lion's share of what SimpliVity competes against in the field, because that is where most of us are coming from. Back in 2009 is actually when the market started to switch. 2009, as you all know from the history books, is when Vblock came out, when FlexPod started to be developed, and when the big vendors, the EMCs and Dells and NetApps of the world, started to realize that the existing set of systems and tools they were delivering to customers was too complicated to manage. The interop lists were too complicated, the matrices were too complicated, and so they built these integrated systems to start cobbling that gear together in the factory before it ships. Everything is pre-cabled and pre-assembled with the right firmware, and it is ready to be plugged in.
It still might take a couple of months, but that was significantly better than what it was, and we give them a lot of credit; they built something like a three-billion-dollar run-rate business just off taking the same stuff they already sold and putting it together. It was still the same thing, if I use Vblock as an example: EMC storage, Cisco servers, Cisco switching, and VMware, with that wrapper around it. We would say it was a great start, but it missed two fundamental things. It missed the fact that it was still the same old way of doing things in terms of management and operations, and it missed the data services. For data protection you still needed Avamar, a NetWorker, a Data Domain; for DR you still needed RecoverPoint; you still needed additional data services. Then along comes the next set, what we would call the other converged vendors, who are what most people would call our hyperconverged competitors, and what these guys did was fix, we think, one of those two problems. They started to develop new ways to manage storage. Instead of a SAN there would be no SAN; the storage would be loaded into the servers as the disks packaged into those servers and then stitched together with a distributed storage fabric that you could scale out almost infinitely. It was, again, a great step in the right direction; we give them a lot of credit, and we have nothing but respect for our competition. The one big thing we think they missed was data protection. If you ask any of the other vendors, and I don't need to name who they are, who they recommend for backup, they will pick another vendor they are partnered with; ask who they recommend for DR, and again they will pick another vendor. SimpliVity is the one that doesn't recommend anybody else. When it comes to backup, we recommend ourselves; when it comes to DR, we recommend ourselves. We are the only ones who have it built in, the only ones who have done it from scratch from the beginning, and in fact we guarantee the results of how it performs, and a lot of the reason we can do that comes back to that technical underpinning and the technical architecture I mentioned before, all around data virtualization. What you get to, and this is a bit of a corny slide, I admit, is really something like our own personal lives, where you have an iPhone, and the iPhone is what it is: you can't necessarily change every bit and byte or every piece of the underlying code, but it gives you all the functionality you need, and it is super easy to use and super simple. Most of us on a daily basis aren't carrying around a separate iPod or a separate calculator or a separate notepad or an email device or a Palm Pilot or a camera. Instead, we all just have our iPhones, and SimpliVity really views hyperconverged as a data center in the same vein. It is that all-in-one solution. So instead of a couple of different boxes for backup, a couple of different boxes for DR, and a couple of different applications on top of that, we are, we believe, the most complete hyperconverged platform out there on the market. I will leave you with this slide before turning it over to the demo, which is really three things to keep in mind. The first is backup and DR; again, we are the only ones who have that locked in and built in, and we back it up, which is where I think number one here is something different: number one, backed by the guarantee.
Really, that should be number three, because everything is underscored by that contract we give you in your user agreement: 10:1 dedupe, 60-second backup, 60-second restore. We have RAID on the system, which I haven't gone into in a lot of depth, but I would ask your other vendors how they deal with resiliency and data protection within the system. Last is TCO. I briefly mentioned that Forrester did a report, which is available for us to share, in terms of 224% ROI and a 3.7x TCO benefit, and the Evaluator Group compared us to the cloud. If you think about it logically, instead of spending different money for servers, different money for storage, different money for SAN switches, different money for backup, and different money for DR, you have a single solution. It should, and talk to your sales teams, but it should come in at a significant cost advantage over time, especially over a three-to-five-year period, and especially when you think about all that management that won't be there anymore. So I am going to stop there and ask Rachel to turn control over to me so I can share my screen and show you a quick demo, to whet the appetite for some next steps and some additional interaction. So that way, and here it comes, now I am the presenter; share my screen. There we go. I am going to show you a couple of quick things that I think are pretty exciting and pretty cool. I myself am a sales rep for the greater Boston territory, and many of my customers are happy to talk about this; these are really some of the results they are seeing. To orient you, what I have here should look really familiar to anyone who is a VMware administrator, because it is the same exact screen you are used to using. It is plain old regular vCenter, and I have a SimpliVity plugin, and that is it. In this example I have one federation made up of two data centers, one in Mumbai and one down here in Seattle. Seattle has a single system, because we can go as low as one system in a cluster, there are two systems up here in HA, and then a whole bunch of VMs. I am going to right-click one of these VMs; let's do win01_bam. I right-click it, I have all of the normal, regular VMware capabilities, and I am going to go down to where it says All SimpliVity Actions, where I have a couple of choices, including backup. I will hit the local backup first, call it "Conres local," and hit OK, keeping the local data center cluster there. While that is happening, I am also going to simultaneously try to back this up to the DR site, and for anyone who is paying really close attention, you will notice that I failed in that mission, because in fact the local one is already finished. But I will do my remote one now, and while that is happening I am going to take a full clone of a running virtual machine, and I am going to try to do this simultaneously. Now the remote one is already done, and with the clone we are actually going to take it one step further and register that VM into vCenter, which is happening right now. There is my clone right there. I bet you are thinking, OK, that is cool, so he has some prepopulated data, he took a backup, then a remote backup, then a clone, who cares? Well, first I am going to show you the summary of this and show you that the VM I was working on is about 500 gigs in size.
Which means, because these are full backups every time, I just did about a terabyte and a half worth of data in, I don't know, 30 seconds of actual time here. Again, you are probably thinking, OK, cool, so he backed it up; it is probably some sort of pointer-based system for backup. Well, in fact, the answer is no. It is a full backup every time, and I can show you by pulling up my backup report, which is as simple as right-clicking and searching the backups of that particular VM. I am going to scroll to the right systems; here are the ones I just did, "Conres local" and "Conres remote." You will see, first of all, that the local one was a full backup that sent no data, because it is fully deduplicated; we never process the same data twice. For the remote one, you will notice that for a 500-gig VM I actually only sent 18 meg of that VM over the wire to the other data center across the world. A lot of that comes down to our data virtualization. Most importantly, let me show you what a restore looks like. This one is over on the other side of the world; I am going to pick a datastore and a cluster over there and restore it, and while that is happening I am going to go back in time to an earlier one, out of order, on a different side of the world, and restore this one as well. While that is happening, you will notice the first restore is done, the second one is happening, and now that one is done as well. The last thing I will show you is how much dedupe we are getting in the system, because we mentioned that we guarantee 10:1, and here is a pretty representative environment. Now, it is a demo environment, so it is a little bit bigger than normal, but it is actually pretty close to what we see on a daily basis, and I would recommend you go check out Simplivity.com if you are interested. What I have here is around 15 terabytes worth of VM data, and I am storing that in around 6.1 terabytes of actual used space, but I have also got another 350 or so terabytes worth of backups, and I am storing all of that on the same 6.1 terabytes of space, giving me approximately a 60:1 data efficiency ratio, which is dedupe and compression combined. So that is just to give you a quick snapshot of backup performance, restore performance, the management of how we do this, and the deduplication and compression capabilities, and I think we will now open it up for questions.
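As a quick sanity check on the efficiency figure quoted in the demo, the ratio is simply total logical data (primary VM data plus all retained backups) divided by the physical space actually consumed. Plugging in the approximate numbers mentioned above:

```python
# Back-of-the-envelope check of the demo figures quoted above; the inputs are the
# approximate values from the demo, not measurements from a real system.
vm_data_tb = 15          # logical primary VM data
backup_data_tb = 350     # logical size of all retained full backups
physical_used_tb = 6.1   # physical space consumed after dedupe and compression

efficiency = (vm_data_tb + backup_data_tb) / physical_used_tb
print(f"data efficiency ~{efficiency:.0f}:1")   # ~60:1, matching the ratio shown in the demo
```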

Rachel: Great. We have a couple of questions. What server platforms is SimpliVity supported on today?

Dan: Historically, SimpliVity has supported a number of different platforms: Cisco and Lenovo and Dell and HPE, and moving forward we are going to continue to support those for all of our existing customers. So for all of our customers out there who have those platforms, we continue to support them through the lifecycle of those technologies, and moving forward, as part of the big happy HPE family, we are going to be moving forward on the DL380s and other HPE platforms as the roadmap continues to evolve.

Rachel: Another question is, what hypervisors are supported today?

Dan: Today, SimpliVity supports VMware; we have Hyper-V in beta and we expect that to come out later this year, and we are pretty happy with that considering VMware still has around 75% market share of the market today.

Rachel:  Great.

Rick: Thank you, Dan. Thank you, Rachel. In summary, if you are considering or evaluating hyperconvergence, or even looking at a cloud like AWS, by all means call us. At ConRes we specialize in helping customers sort through the noise, make sense of all the options, and then helping you evaluate, procure, and deploy. So if you are interested, if this is something you would like to know more about, or you would like to meet with our sales or technical teams, please reach out to me at the email address shown right there. Thank you for your time, and please join us for our next session next week, Thursday at 2 o'clock, where we will be spotlighting new mechanics. Thank you.

 

