It has been
interesting working with Tony B, the network optimization consultant brought on
board by Newton Ad Agency. Tony really knows optimization, and the information
technology (IT) management team has charged you with reporting on how to fit
Tony’s message about optimization to the specific business case of Newton. In
this discussion forum assignment, you will give an informal report.
Review Tony’s discussion of optimization, “Five Best Practices for Optimizing Your Network,” in the HelpSystems webinar.
Given Tony’s message, when and how will it be appropriate to optimize networks at Newton Ad Agency? If applicable, post any other optimization experiences you may have.
The report must follow APA format with a proper citation and reference.
Jake: Hello, everyone, and welcome to today’s Windows IT Pro Web Seminar, “Five Best Practices for Optimizing Your Network,” sponsored by Intermapper. I would like to remind the audience, if you have any technical difficulties during today’s session, please press the Help widget on your player console to receive assistance in solving common issues. And if at any time we are having audio difficulties or issues with the advancing of the slides, simply hit your F5 key to refresh your webcast console. Please also be aware, today’s web seminar is being recorded and will be available on-demand for 12 months starting tomorrow. You will receive an email when ready.
And now, I would like to introduce today’s presenters. Tony Bradley is a respected authority on technology, writes for a variety of online and print media outlets, and has authored or co-authored a number of books. He has been a CISSP for over 10 years, and he has been recognized by Microsoft as an MVP in Windows and Windows Security for nine consecutive years.
Kevin Jackson is a Technical Solutions Consultant for Intermapper. Prior to joining HelpSystems, Kevin was a pre-sales design engineer and business analyst for Richard Fleischman and Associates for seven years.
And with that, Tony, the floor is yours.
Tony: All right. Well, thank you very much. Today, we’re going to talk about the five best practices for optimizing your network, and along the way I’m going to walk through the five underlying problems as well as their solutions. You’ve already heard about me in the introduction, so let’s take a look at an overview.
So I’m going to talk about the importance of the network to business in general, and walk through why it’s important to optimize, to maintain an accurate inventory of what is on your network, to watch for suspicious activity, and to have processes and procedures in place to make sure that you’re escalating and resolving issues.
So let’s start off by talking about the network. The network is to business what breathing is to you and me. It’s something you have to do that you don’t really notice, you don’t really think about. It’s just there. But if you were to stop breathing long enough, it could cause significant damage, possibly… and the same thing is true of the way that networks impact the business.
So it’s there, especially from the end user perspective. So if you think about it from the end user perspective, as opposed to an IT perspective, they don’t really care that the network is there. All they want is for things to work the way they are supposed to work, as efficiently as they are supposed to work. The nuts and bolts that are going out underneath that, they don’t really care about. All they know is they need the network to get their business done.
In 2013, the Ponemon Institute did a study and found that data center downtime costs $7,900 per minute on average. Now, that study focused on data centers of 2,500 or more square feet, probably data center-specific businesses as opposed to businesses that merely have a data center. But there is no denying that network downtime has a serious impact on any organization’s bottom line, especially today, when more of what we’re doing is online.
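Tony’s point about the bottom line is easy to quantify. The sketch below uses the Ponemon average he cites to show how quickly an outage adds up; the function itself is illustrative, not part of the webinar:

```python
# Back-of-the-envelope downtime cost, using the Ponemon Institute's
# 2013 average of $7,900 per minute cited in the webinar.
COST_PER_MINUTE = 7_900  # USD

def downtime_cost(minutes: float, cost_per_minute: float = COST_PER_MINUTE) -> float:
    """Estimated cost of an outage lasting `minutes`."""
    return minutes * cost_per_minute

# A one-hour outage at the study's average rate:
print(downtime_cost(60))  # 474000
```

Even at a fraction of that rate, a 24-hour outage like the BlackBerry example discussed later runs into the millions.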
I, in a former life, worked in the trenches as a network admin at a dotcom, and there weren’t a lot of applications; we weren’t using server-based applications or cloud services or anything like that when I was working in the trenches, if that tells you how far back that goes.
But I do know there’s nothing more stressful than a server outage or having end users get frustrated about network performance because ultimately you end up getting it from the end users and management simultaneously while you’re struggling to figure out what’s going on with the network and try to get everything fixed, and you’ve got people yelling at you from both sides.
So that was a long time ago. And we were operating in, really, a purely reactive manner. We didn’t have a lot of tools or processes in place in terms of proactively monitoring the network or trying to identify issues. We just addressed issues as complaints came in. And we could get away with that. It probably wasn’t the best way to do it even then, but we can get away with it because we weren’t as dependent on the network as organizations are today.
So, like I say, when you have cloud applications and you’re running virtual applications across the network, and everything is much more network-centric today than it was. So the network is more crucial than ever for your business.
So let’s talk a little bit about optimizing network performance. The best way to avoid network problems is to proactively monitor and optimize network performance to make sure that those problems don’t appear in the first place. For organizations that are running server-based, network-based applications, and where productivity relies on the network, optimizing network performance also means maximizing employee productivity, maximizing employee performance, and getting the most from your bottom line ultimately.
So that all sounds great, but how do you optimize network performance? So we’re going to move on into some of those other areas that I talked about upfront in terms of knowing what’s on your network, and watching for suspicious activity, and having solid processes in place for resolving and escalating issues.
But just off the top of my head, there are a number of ways to optimize network performance. You could increase bandwidth across the network, increase the bandwidth of the pipeline coming into the network from outside, do server load balancing, browser tuning, or web page optimization, or take a look at the script sources and whether or not those are impacting performance.
So there are a variety of ways to potentially optimize network performance, and any one of those would probably yield some positive impact, whether or not it addresses the actual problem. But as we’ll get to, it’s better to understand where the bottlenecks are and to have an idea of what problem you’re trying to solve, so that you can prioritize the optimization techniques most likely to succeed, and not just try things for the sake of trying them.
So, accurate inventory. Again, repeating myself, when we were doing it, it was basically an Excel spreadsheet that we kept asset tag numbers in as new systems came on. As we issued new computers and provisioned new servers, we would just assign asset tags and track them in the spreadsheet. But you can’t really do that anymore, or at least, it would be very tedious and inefficient.
But first, let’s talk about why you need an accurate inventory. You can’t secure, protect, or optimize things that you don’t even know you have. That’s the reason in a nutshell. You can’t conduct an accurate vulnerability assessment if you don’t even know what hardware and software are on your network. You can’t ensure that patches and updates are in place, or that appropriate mitigation efforts are in place, if you don’t even know what you’re trying to protect.
In fact, conducting vulnerability assessments or applying patches and updates without an accurate inventory causes problems by creating a false sense of security, because you think you’ve assessed your network or patched the environment, but you still have rogue systems, or systems and applications out there that you haven’t accounted for, and those leave you exposed to risk.
So you need to have a good idea of what hardware is present on your network, servers, routers, switches, and what software is running on your network. And even when I say “what” or “where,” that gets a little tricky, because when you’re talking cloud and virtualization, a lot of your quote-unquote hardware is actually software. It’s virtualized hardware. But then that complicates asset tracking as well, which I will talk about in a minute.

But one of the other things that you need to know when it comes to your inventory isn’t just, okay, I’ve got XYZ server, but where is that server, what is that server, and who owns it, what team or what individual? Because when we get to the part about escalating and resolving issues, you have to know who owns the asset in order to be able to escalate appropriately. So, as I led with on this, what you don’t know can hurt you, because you do have to know what’s on your network in order to protect it.
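An inventory record of the kind Tony describes, capturing what an asset is, where it is, and who owns it, can be sketched in Python. The fields and sample hostnames here are hypothetical, not from the webinar:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """One inventory record: what it is, where it lives, and who owns it."""
    hostname: str
    kind: str      # e.g. "server", "router", "switch"
    location: str  # rack/site, or "virtual" for cloud instances
    owner: str     # team or individual to contact during escalation

inventory = {
    "web-01": Asset("web-01", "server", "rack A3", "web team"),
    "core-sw": Asset("core-sw", "switch", "rack A1", "network team"),
}

def escalation_contact(hostname: str) -> str:
    """Who to notify when this asset has a problem."""
    asset = inventory.get(hostname)
    return asset.owner if asset else "unknown (rogue device?)"
```

Keeping the owner on the record is what makes the escalation step later in the talk workable: an alert can be routed to a responsible team instead of a generic queue, and a lookup miss is itself a signal of a rogue device.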
So let’s talk about watching for suspicious activity. Anti-malware tools are sort of table stakes now. Every organization has some form of antivirus software, probably at the server level and at the endpoint level; you’ve got network firewalls, you’ve got endpoint firewalls. And all of those things are generally good at identifying and blocking known threats. As new threats come out, if something comes out right now, the vendors are relatively quick at reverse-engineering it, figuring out a signature for it, and rolling out an update so their products can detect that threat.
So it’s still a reactive solution, because nine times out of ten, the virus, the exploit, the whatever, has to exist first and has to be discovered, so there is the potential that somebody gets hit with it before the security vendors can respond. But the reality is, the major attacks, the data breaches that make the headlines every day, are not your standard viruses; they’re not traditional attacks from that perspective. And they could have, and should have, been detected by monitoring for suspicious activity, because detecting suspicious activity helps you determine whether or not there’s something weird going on.
If you look at Edward Snowden, at Target, at Sony, any of the big breaches, most of the attacks that have happened in the past few years look like insider activity at the point of infiltration or exfiltration, at the point where there’s an actual attack going on.
And the reason I say that is because whether it’s an inside attack like an Edward Snowden, or an outside attack like Target or Sony, the attack is using authorized or compromised credentials. So from the perspective of the company, from the perspective of you as the network admin, from the perspective of the network, it is authorized access. They’re using active credentials. So your anti-malware tools can’t see it, because there isn’t an “exploit,” there isn’t a virus. It’s someone logging in with valid credentials, whether those are compromised or not.
So you have to watch for the overtly malicious activity, but you also have to monitor for suspicious activity, meaning seemingly authorized activity that’s out of the ordinary. So, for example, John is an authorized user for the data, that’s great. But why is he downloading two terabytes of archived company financials? That might be weird. David is an authorized user and logs in Monday through Friday like normal. Why, or how, is he logged in from two geographically separate locations at one time? Or if David is in Seattle and he just left work three hours ago, how is it that he’s logged in from Thailand at 11 p.m. on a Friday night?
Those are the kind of suspicious activities, anomalous activities that you should be watching for that would tell you, hey, there’s something weird going on here. And it might not be a virus, and it might not be an exploit, and it’s going to circumvent the standard anti-malware and the standard security tools, but you have to be aware of those things. Because if you see that kind of activity, if you see somebody who is authorized to access information and accesses that information on a daily basis, but for some reason, today, they’re actively downloading the entire database, that should trigger some red flags.
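The two scenarios Tony gives, the oversized download and the geographically impossible login, are straightforward to express as rules. A minimal sketch follows; the thresholds and record formats are assumptions for illustration, not anything from a specific product:

```python
from datetime import datetime, timedelta

def impossible_travel(login_a, login_b, min_travel_hours=8):
    """Flag two logins by the same user from different places that are closer
    together in time than any plausible travel between them would allow.
    Each login is a (user, place, timestamp) tuple."""
    user_a, place_a, time_a = login_a
    user_b, place_b, time_b = login_b
    if user_a != user_b or place_a == place_b:
        return False
    return abs(time_b - time_a) < timedelta(hours=min_travel_hours)

def excessive_download(bytes_today, baseline_bytes, factor=10):
    """Flag a daily download volume far above the user's normal baseline."""
    return bytes_today > factor * baseline_bytes

# David leaves Seattle at 5 p.m. and "logs in" from Thailand six hours later:
seattle = ("david", "Seattle", datetime(2015, 6, 5, 17, 0))
bangkok = ("david", "Thailand", datetime(2015, 6, 5, 23, 0))
print(impossible_travel(seattle, bangkok))  # True
```

Neither check involves a virus signature; both work purely on what "normal" looks like for an account, which is exactly the gap Tony says signature-based tools leave open.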
So now, let’s tie a couple of those things together and talk about escalating and resolving issues. Going back to optimizing: the best way to fix a problem is not to have it in the first place. The more proactive you are in understanding where the bottlenecks are in your network, and what’s normal versus anomalous network activity, the better job you can do at proactively addressing issues, hopefully resolving them before management and the end users even realize they’re there.
But at the very least, when an end user contacts you, it won’t be the first you’ve heard of it, and you’ll be able to say, “Yes, we’re aware of that. We’re working on it,” or something to that effect. So you want to be as proactive as you can be, but you have to have processes and procedures in place to escalate and resolve issues. Because without a defined escalation path, it’s very easy to say, “Okay, well, I’ve identified this issue. I assigned it to John. It’s off my plate. I’m done.” And then you don’t really know: what did John do with it? Did it get fixed? What’s the current status?
A lot of organizations, you’ll have some kind of a ticketing system, help desk ticketing system, or something in place for tracking those, and hopefully tracking them through to completion. You have a dashboard that a network admin can look at and say, “Okay, well, I know that we have these open tickets. They’re assigned to these people. This is the status of where they’re at right now, and what do we need to do to get these resolved.”
But regardless of that, regardless of what sort of ticketing system you have in place, the main thing is that you have a clearly defined process for resolving issues, and make sure that that process is followed. So who is going to be responsible? How are issues going to be communicated? How are troubleshooting and resolutions going to be checked? How is the issue going to be escalated if it can’t be resolved?
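The process questions above, who is responsible and how an issue gets escalated when it can’t be resolved, imply an escalation chain that a ticketing system can enforce. A toy sketch, with a hypothetical four-tier path:

```python
from dataclasses import dataclass, field

# Hypothetical escalation chain; a real one comes from your org chart.
ESCALATION_PATH = ["help desk", "network admin", "asset owner", "IT management"]

@dataclass
class Ticket:
    summary: str
    assignee: str = ESCALATION_PATH[0]
    status: str = "open"
    history: list = field(default_factory=list)

    def escalate(self):
        """Move the ticket up the chain if the current tier can't resolve it."""
        idx = ESCALATION_PATH.index(self.assignee)
        if idx + 1 < len(ESCALATION_PATH):
            self.history.append(self.assignee)
            self.assignee = ESCALATION_PATH[idx + 1]

    def resolve(self, note: str):
        self.status = "resolved"
        self.history.append(note)
```

The `history` list is the point: it answers Tony’s “what did John do with it?” question, because every hand-off and the final resolution note are recorded on the ticket itself.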
And that goes back to what we talked about with maintaining an accurate inventory and knowing who the asset owner is. Because when you have issues, it may mean taking a server offline, it may mean that you need to patch something, and you need to know what role that asset plays in the business and who owns it, so that you can make sure everyone’s aware, “Hey, I’m going to be taking this offline,” and so that you can weigh the impact of taking it offline now against postponing the resolution until a later time outside of business hours.
So now, let’s talk a little bit about proactive monitoring. This is the glue that ties the whole thing together. As I talked about up front, cloud and virtual infrastructure definitely complicates things. When I was in the trenches and working in a data center, we just had a physical inventory. It was relatively static, and it was relatively easy to track. When we got new servers, UPS showed up with 200 boxes from Dell. We stuck them in the rack. We knew where they were. We knew what they did.
When you’re dealing with the cloud, virtualization, DevOps, Docker containers, microservices, and all the other things that are driving development and IT infrastructure, it’s a lot harder to maintain accurate inventory and monitoring, because that entire infrastructure could be erased, changed, or expanded exponentially at the push of a button or through automated processes. It could be programmatically scaled to meet demand. Maybe this morning you had 50 servers, and this afternoon you have 200. That rate of change is definitely challenging to keep track of proactively.
And as an IT admin, you’ve got enough going on. Network admins are managing enough stuff; there are already enough plates spinning, and there’s too much that can change too quickly for old-fashioned asset tracking, security log management, or any of those approaches that organizations have relied on for years. So you need a solution that can automate proactive, comprehensive monitoring, alert you to what matters, and free you up to focus on other things. So with that, I’m going to hand things over to Kevin and let him explain more about that.
Kevin: Sounds good. Thank you, Tony. That was pretty awesome. So in essence, just to follow up on what you’re talking about in terms of proactively monitoring your infrastructure, as an IT professional, really our job is to ensure that we have the right technology in place. And we’re responsible for all aspects of technology, we’re accountable for all aspects of technology. So we want to make sure that technology is working the way we intend it to.
But network outages are sometimes inevitable. What we’re trying to do is preempt them. We want to step in front of those possible network outages before they become huge issues. So again, you want to be able to monitor your devices against certain thresholds. You want software that provides you with the time you need to mitigate those potential risks before they become critical issues.
So again, the premise is you want to have some understanding, some visibility, some inclination of how your network is behaving and performing. So if there is an issue on the horizon, you can take those necessary steps to mitigate.
So what we’re going to do is talk about Intermapper, the product. We monitor, we alert, and we provide you with visibility into your infrastructure. We’ll take a look at some slides and go through examples of a couple of companies, brands you may recognize, that may not have had the best solution in place at that particular time.
But next, what I want to do is I want to ask a couple of questions. I ask this one question just to get a feel for the attendees. I want to know what kind of pain points you’ve experienced recently, or some of the pain points that you’ve experienced in the past? So those who are joining, the question is, what was the most recent cause of an outage on your network?
And the options are: was it human error, someone doing something they weren’t supposed to? Was it an environmental issue? A configuration issue? Or a lack of network visibility? So we’ll give you guys a minute or two to enter your responses, and then we’ll show the polling results.
But again, the most important thing is having that overall understanding of how your network is performing. You want to have that visibility; you want to always be in the know. And, effectively, you don’t want users to be the ones who tell you when things are going wrong.
So let’s go check out the polling results. All right, so this is pretty informative. As you can see, 57% have experienced an outage based on environmental issues. And just to throw it out there, one of the most important things I’ve seen as an IT professional is that you’d be surprised how few people are actively monitoring their environmental devices; they’re not monitoring their UPS, they’re not monitoring their HVAC, and if they have environmental sensors, they’re not monitoring those either.
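Environmental monitoring of the kind Kevin describes mostly comes down to range checks on sensor readings. A minimal sketch; the threshold values below are illustrative, not vendor recommendations:

```python
# Hypothetical acceptable ranges -- tune to your hardware vendor's specs.
THRESHOLDS = {
    "temperature_c": (10.0, 27.0),    # server-room temperature range
    "ups_battery_pct": (40.0, 100.0), # alert well before a UPS runs flat
}

def check_environment(readings: dict) -> list:
    """Return an alert string for every reading outside its allowed range."""
    alerts = []
    for metric, value in readings.items():
        low, high = THRESHOLDS[metric]
        if not (low <= value <= high):
            alerts.append(f"{metric}={value} outside [{low}, {high}]")
    return alerts
```

The value of a range check over a simple up/down probe is the early warning: a UPS battery at 35% or a room creeping past 27 °C is flagged while there is still time to act, before power or HVAC actually fails.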
And we all know that the most important aspect of your data center or IT closet is the environmentals, because without power, you have no service. Without HVAC, your service will effectively go down. So a very important aspect of monitoring is making sure that your environmental components are monitored as well. Excellent, really good information there.
So our agenda today: we’ll talk about some of the things that cause network outages, as we previously discussed. Again, it’s very important that we identify, verify, and look to remediate those potential impacts. We’ll take a look, as I mentioned, at a couple of examples of large brands, brands that we’re very familiar with, that had network outages, and at the financial impact of those outages.
We’ll talk about Intermapper as well. We’ll also discuss what we find are some really key monitoring attributes that are effective in preventing some outages. We’ll talk about the software and how you can improve your network efficiency by being proactive, and also having a good monitoring solution in place to help you with that.
So let’s understand what causes network outages. As we mentioned, human error is actually at the top of the list in terms of the biggest contributors to network outages. We’ve seen that a simple change, unplugging a device or plugging a patch cable into the wrong port, can bring down an entire network. So human error has always been at the top of the list of what can cause a network outage.
And as we talked about, and as you guys have clearly pointed out, environmental factors have been the one pain point that you’ve experienced in terms of outages on your infrastructure. So again, keep those environmental conditions under control, and monitor them.
It’s awesome now because a lot of these environmental devices are smarter than they used to be. They have SNMP capabilities, which allows us to do a lot of different things; we can monitor just about anything using SNMP. That lets us effectively reach a lot of those environmental devices that we wouldn’t normally have monitored in the past. So again, it’s very important that we add those to the list of things we’re monitoring.
Configuration issues, of course. That goes back to what Tony was mentioning in terms of optimizing your network. Configuration is a key issue, a key component of optimization. So you want to make sure that your device is properly configured. Again, a lot of network performance issues can be attributed to misconfiguration of your devices.
Case in point: you have a router or switch that’s running half duplex on a full-duplex infrastructure, or you have a 10/100 device running on a gigabit infrastructure. Obviously, there are going to be some issues there. So again, this is part of optimizing, but you have to have that understanding, that discovery, that visibility before you can take the necessary steps.
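A duplex or speed mismatch like the one Kevin describes can be caught by comparing the negotiated settings on both ends of a link. A sketch, where the dictionary format for interface settings is an assumption for illustration:

```python
def link_mismatches(side_a: dict, side_b: dict) -> list:
    """Compare the negotiated settings on each end of a link and report
    mismatches (e.g. half vs full duplex, 100 Mb/s on a gigabit port)."""
    problems = []
    for setting in ("speed_mbps", "duplex"):
        if side_a.get(setting) != side_b.get(setting):
            problems.append(
                f"{setting}: {side_a.get(setting)} vs {side_b.get(setting)}"
            )
    return problems

# A 10/100 half-duplex device plugged into a gigabit, full-duplex switch port:
print(link_mismatches(
    {"speed_mbps": 100, "duplex": "half"},
    {"speed_mbps": 1000, "duplex": "full"},
))
```

In practice the per-interface speed and duplex values would come from SNMP polls of each device, which is exactly the kind of data a monitoring tool collects anyway.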
And that goes into my next point: lack of visibility. This causes a lot of issues. If you don’t know what’s running on your infrastructure, then you can’t take the necessary steps to optimize, to mitigate, and to make sure that your infrastructure is running the way you want it. So again, these are some really basic but very common causes that we’ve seen lead to widespread network outages in the industry.
So the next couple of slides, I’m just going to highlight a couple of the large players and the type of outages that they have experienced, some recently, some a little longer in tenure.
Amazon, we all know Amazon. Awesome, awesome, awesome company. They are known for their Amazon Cloud, their web services. So again, what we’ve seen, this was a few years ago where Amazon had this issue. They had a network configuration change that affected their cloud data store. So the stores where all their clients stored their data, someone went in and misconfigured that store.
What happened was there was an outage, and the cost was a 10-day credit to all the affected users. You can imagine how many customers Amazon has; providing them a 10-day credit based on a simple misconfiguration, one that could have been caught, should have been caught, or should never have happened, cost them a lot of money.
So just to jump back: Tony mentioned earlier a few brands with security issues, security breaches that were quite frankly very embarrassing for them. These next brands, including Amazon, had issues more on the network side, the physical devices, the network infrastructure, where whatever monitoring or change management solution they had in place just didn’t get the job done. So again, similar results: loss of revenue, and a very big hit to their standing in their respective markets.
BlackBerry. BlackBerry had a core switch outage. The major core switch went down, which affected their service and cut off millions of users worldwide. They were down for 24 hours. Twenty-four hours without service. Now, for a company that’s battling with the likes of Apple and Android devices in that market space, you cannot afford a 24-hour outage based on a core switch failure.
As a customer, I’m banking on you having your stuff together. I’m banking on you having the proper infrastructure, the proper applications in place, so you can continue to provide me with the service that I’m paying for. So again, this was a big outage, big black eye for BlackBerry. This pushed them back in the race a little bit.
Navitaire is Virgin Blue’s reservation management company. If you’ve ever heard of Virgin Airlines, they use these guys to do their ticketing and reservation bookings. They had an outage; their online booking system went down.
They actually suffered a server hardware failure, and the cost was $20 million in compensation to Virgin Blue. They had to cough up $20 million because their server went down and they didn’t have anything in place to mitigate that risk, no monitoring solution that could preemptively tell them they were approaching some kind of server failure. Again, a very big compensation bill there.
Bloomberg. Again, we all know Bloomberg, large company, well-known in the financial industry. Their issue, they had a terminal outage. So again this was a combination of hardware and software failures within their network. And for those who don’t know, Bloomberg terminals are widely used in the financial industry and these things go for probably $2,000 a pop.
So think about the fact that there are about 325,000 Bloomberg users worldwide. You can do the math: at $2,000 a user, that’s roughly $650 million flowing to Bloomberg. So for them to have that kind of outage, that kind of failure in their product and services, is also a big impact to their business.
So again, the point is, network outages can happen to anyone. It doesn’t have to be a large company or a mid-size, little-known company. It can happen to the mom-and-pop shop, to the large corporate establishment, even at home. The bottom line is you need to take the necessary steps to mitigate those risks and lower the impact of those outages. Get to the bottom of them before they become really critical, before they cost you millions and millions of dollars, as happened to Navitaire and Bloomberg.
So, in terms of network monitoring attributes that I think are really effective in preventing outages: first and foremost, device discovery, which is going out and finding what you have running on your infrastructure. A lot of the time we’re watching our network, monitoring our network, responsible for our network, and yet things are constantly being added. People are bringing in devices. You might have someone else on your team throwing on a switch or doing an upgrade.
You want to have a full understanding of when these devices hit the network, because you want to be able to monitor each device the way it needs to be monitored, to understand what that device is and what it’s doing on your infrastructure, and to make sure that it’s not causing any issues.
Displaying and visualizing the status of your devices. Intermapper can do your device discovery as an autodiscovery, and it can also display and visualize the status of all your devices. We can capture unique information from each device and display it to you, showing you exactly what’s going on. And we use exception-based alerting and notification, so we’re always keeping you informed.
One nice thing about what we do is that, beyond the visual representation, we have these status badges. We show you a gradual progression as your device goes through stages of failure, from a warning, to an alarm, to a critical state, visually on the map. So it’s nice to have that visual, as well as being alerted in different ways when devices are in specific states.
So, how does network monitoring software improve your network efficiency and performance? Providing the visibility, I think, is first and foremost. Having that visual representation allows you to prevent some really costly issues. We have the ability to do real-time monitoring; it’s very important that you’re getting up-to-date information. Our software polls every 30 seconds, so every 30 seconds Intermapper is talking to your devices, bringing back information, and providing you with that output. Real-time monitoring is key: not five-minute intervals, not ten-minute intervals. You want as close to real time as possible, so you have an accurate understanding of your devices.
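The 30-second polling cycle Kevin describes boils down to a simple loop: probe every device, record the snapshot, wait, repeat. A generic sketch, where the `probe` callable stands in for whatever check you use (ping, SNMP get, HTTP health check); this is an illustration, not Intermapper’s actual mechanism:

```python
import time

def poll_devices(devices, probe, interval_s=30, rounds=1, sleep=time.sleep):
    """Poll every device each `interval_s` seconds for `rounds` cycles,
    returning one {host: status} snapshot per cycle."""
    results = []
    for i in range(rounds):
        snapshot = {host: probe(host) for host in devices}
        results.append(snapshot)
        if i + 1 < rounds:  # only wait between cycles, not after the last one
            sleep(interval_s)
    return results
```

Making `sleep` injectable keeps the loop testable without real 30-second waits, and comparing successive snapshots is where the warning/alarm/critical progression mentioned above would be computed.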
And then you want to be able to leverage the right technology: the live map, and flow-based technology if you’re looking to monitor bandwidth. If you’re looking to monitor traffic at the edge of specific networks, to see who the top talkers or top listeners are, or how bandwidth is being utilized, you want a flow-based technology product. We have that; we can provide you with that solution.
Layer 2. If you want to see how your backbone is interconnected, we provide you with Layer 2 mapping. And if you want to do log analysis, taking logs and analyzing them, looking for anomalies and for different ways to troubleshoot devices, we can analyze logs and ship them to a syslog server or a Splunk Enterprise server for those who are using Splunk. Splunk is a big player in the network management space, so there’s a lot of opportunity there; we play nicely with Splunk and can send our information to Splunk for data analysis. Log analysis in general is a nice way to manage your overall infrastructure.
And again, optimizing the devices. Tony talked extensively about network optimization. So it's really key that you first and foremost discover your devices and figure out what's running and what's going on. Then find out how those devices are performing, and how you want them to perform. And then you take the steps to make sure you optimize those devices effectively, the way you want them.
And then once you've optimized the devices, and you are happy with their performance, you look to monitor them with a solution like Intermapper. You put your devices on the map, and then you can see what's going on; if there are issues, you can mitigate them before they become critical.
So why do we feel Intermapper is the right solution; why would you want to use Intermapper? Intermapper is very easy software to use. We don't have a lot of built-in modules within the product, so it's a straightforward installation. We also support our software on pretty much all of the major operating systems out there. We know you may have an affinity for a specific operating system that you use to manage your network or run your management applications on, and we support Windows, Linux, and Mac as well.
So whatever operating system you run your management utility on, Intermapper can be installed and supported on it. And we play nicely with other products. We understand that folks like to have a suite of products that do different things, such as inventory management or data analysis, and we can integrate with those third-party applications using our web services. So we play nicely with others.
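Integrating with a monitoring tool's web services typically means pulling a machine-readable export and acting on it. The JSON shape below is purely illustrative, not Intermapper's real API schema; adapt the field names to whatever your product actually returns.

```python
import json

# Hypothetical JSON export from a monitoring web service; the field
# names ("devices", "status", "response_ms") are assumptions for
# illustration, not a documented schema.
SAMPLE_EXPORT = """
{"devices": [
  {"name": "core-switch", "status": "up", "response_ms": 4},
  {"name": "edge-router", "status": "down", "response_ms": null}
]}
"""

def down_devices(payload):
    """Return the names of devices the export reports as down."""
    data = json.loads(payload)
    return [d["name"] for d in data["devices"] if d["status"] == "down"]
```

A third-party inventory or ticketing tool could consume the same export, which is the "suite of products" integration Kevin is describing.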
And if you have remote sites, or business units, or effectively want to separate your infrastructure into various groups, we have the ability to monitor groups, remote sites, business units. So you can clearly create maps for whatever type of scenario you have.
And it reduces the costs associated with downtime. What we want to do is take away those lingering costs that come with extended downtime. If you can preempt network outages by seeing issues before a device reaches a critical state, that can save you a lot of money. As you saw earlier, the companies that suffered those big outages probably didn't have the right solutions in place.
But again, downtime equals money. And we all know that technology is not a profit center; it's a cost center. Everyone sees us as costing them money. So what we want to do is make sure we don't cost them any additional money, by providing an infrastructure that isn't shaky, one that we can contain and control. So again, it's very important to have the right solution in place to mitigate those outages.
And that leads me into my next point: it helps with forecasting IT budgets. What we want to do is take you away from worrying about the mundane aspects of day-to-day operations and have you focus on creating those budgets and looking at big-picture items, coming to your manager with recommendations that can effectively improve your infrastructure and take your company forward. And again, a proactive approach to ensuring a healthy network. Proactive, proactive, proactive. That's the keyword, that's what we are harping on. You have to be proactive when monitoring the health of your network.
You have seen the results of reactiveness: those large brands were very reactive about what happened, and it cost them millions of dollars. Some companies don't have millions of dollars to lose. So again, proactively monitoring the network is important. And we're a low-cost, high-reward product. We are priced very moderately and competitively, and for the functions and features we provide, I think we offer a pretty good solution if you are looking for that kind of monitoring capability.
So these are just some of our customers that use Intermapper. We have close to 5,000 customers worldwide, tons of network managers and IT professionals using our product for anything from mapping to monitoring, any aspect of their day-to-day.
We currently support a lot of universities. We find that universities are constantly trying to understand who is using their bandwidth, because when you think about a university setup, it's very difficult to get a really good view of all the nodes, all the endpoints, all the students and what they're doing. So you want to make sure those students aren't chewing up the bandwidth that's needed for business purposes. That's why we have a lot of universities using our product.
So again, the key is providing that proactive monitoring capability: having a monitoring solution in place, having the visibility so you can see what's going on, and being able to autodiscover and get a good understanding of all the devices that are running on your network. And as devices are added and removed, you see that reflected in the map as well. So that leads us into our next portion of the webinar. And Jake, I think this is the best time to come in and do some Q&A.
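The autodiscovery Kevin mentions starts from enumerating the candidate addresses in a subnet and then probing each one. The sketch below covers only the enumeration step using Python's standard ipaddress module; the probing itself (ping/SNMP) is deliberately omitted, and the CIDR block is an example value.

```python
import ipaddress

def subnet_hosts(cidr):
    """Enumerate the usable host addresses in a subnet -- the candidate
    list an autodiscovery sweep would probe one by one."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()]
```

For example, `subnet_hosts("192.168.1.0/30")` yields the two usable addresses in that block; a discovery tool would repeat a probe against each address and add responders to the map.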
Jake: Fantastic. Thanks a lot Kevin and before we do begin the Q&A session, I would like to ask the audience to please take our survey by clicking on the red survey widget on your audience console.
As Kevin mentioned, we do have a Q&A session, with about 10 to 12 minutes for it. At present, we do not have any questions from the audience, so if you do have questions for Tony and/or Kevin, please put those in, and we'll get to as many as we can. In the meantime, I would like to have both of our presenters take a minute or two to cover their main points again and give everybody an idea of what they should really take away from this session today. So Tony, would you mind starting off?
Tony: Sure. Yeah, the main part of my presentation was to try to address what the pain points are for a network admin, speaking in large part from my own experience in the network admin trenches, but also from what I see now as an analyst and tech writer. And I think it was important to stress, because I see a lot today with DevOps, with containers, with microservices, and that fundamentally changes the game of trying to keep track of your environment and trying to proactively monitor it.
It can also simplify some things in terms of troubleshooting and resolution, because when you have a virtual server that you can just create another instance of, that can actually make troubleshooting more efficient. But the issue is trying to keep tabs on what exactly is even on your network, because if you don't know that, then you can't protect it, you can't patch it, and you can't monitor it. So the primary takeaway for me is proactive asset tracking and proactive monitoring of the network.
Jake: Fantastic. Thank you. Kevin, did you want to take a minute or two?
Kevin: Sure. So in essence, I think my takeaway would be just understanding what causes these outages. The way I see it, if you truly understand how your network is running, how it evolves, and all the components that can possibly cause you issues, then you can take the necessary steps. Then you can figure out, "Okay, what do I need to make sure happens so these devices run the way I want them to? Am I monitoring all the necessary hardware that could affect my network in a critical way?" So as I mentioned, you have to identify, you have to verify, and you have to remediate.
So again, identify all the devices on your network, and all the devices that can potentially be an issue. Then take that information and verify that those devices are running the way you want them to: making sure the configuration is proper, and making sure the thresholds are set where you want them if you are using a monitoring solution. So you want to verify the configuration and the performance of those devices.
And then you want to have steps in place to remediate if there’s issues. How do you address those issues if they come up, if they pop up? If the server goes down, how long will it take to get that server back up? If a device is down, how long will it take you to restore configuration? How long would it take you to figure out where the issue is?
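Kevin's identify-verify-remediate loop can be sketched as below. The threshold values and the escalation action are illustrative assumptions, not part of any specific product; a real system would tie remediation to runbooks, restarts, or paging.

```python
# Illustrative thresholds -- the "verify" step: are devices performing
# the way you want them to?
THRESHOLDS = {"cpu_pct": 90, "response_ms": 500}

def verify(metrics, thresholds=THRESHOLDS):
    """Compare observed metrics to thresholds; return breached metric names."""
    return [k for k, limit in thresholds.items() if metrics.get(k, 0) > limit]

def remediate(device, breaches):
    """Placeholder remediation -- a real system might restart a service
    or page the on-call engineer; here we just record an escalation."""
    return [f"{device}: escalate {b}" for b in breaches]
```

For instance, a device reporting 95% CPU but a fast response time breaches only the CPU threshold, and only that breach is escalated, which is the "mitigate before it becomes critical" step of the loop.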
So these are all situations that can potentially impact your company with dollar signs. So again, the takeaway is: as long as you understand all the things that can cause pain points and issues on your infrastructure, you can get the necessary tools to assist you in your day-to-day. And a monitoring solution is, I think, the first step toward that overall understanding, and toward being able to sleep at night. I mean, that's what you want to do: sleep at night. So that's the takeaway. Just understand what can cause the outages, and what kind of tools you can utilize to ensure that those outages do not become a critical event.
Jake: Great. Thanks a lot Kevin, and we did not get any questions. So we’re actually going to end the web seminar. But before we sign off, I wanted to let everybody know on your screen is some information for Kevin, if you do want to reach out to him one-on-one, email and a telephone number.
Also, if you want more information on Intermapper, you can visit their website or call their support line which is also on the screen. I would like to thank Intermapper for making today’s event possible. And of course, I would like to thank Tony and Kevin for their great presentations.
Just a reminder, this web seminar will be available on-demand, starting tomorrow. So feel free to come back and review. Have a great day everyone, and thank you so much for attending.
Kevin: Thank you.
The assignment is about network optimization. Review the scenario and provide a technical report based on the attached file and the instructions provided in the file.