Tag Archives: Big Data Analytics

End of the Beginning for Machine Learning

Well, that's a wrap for my initial journey into machine learning and artificial intelligence. But it's only the end of the beginning for machine learning. Before I finish up this series, some people have asked that I provide some context. This technology is changing the industry, and some players have already adopted it – my company and competitors alike. So to placate the masses, let's list out what I have heard is currently available in the marketplace. I am not omniscient, so if you see anything missing or wrong, please let me know.

Artificial Intelligence is Old Hat

 
Let's first talk legacy. Artificial intelligence has been part of software since my start in the industry. The most common form is "rules", where humans define the model. That model could be a list of "if" statements; a decision tree stored in a database is also a model. The difference is not on the AI side, but on the machine learning side: automated model building is what is different. Other legacy concepts are algorithm-based. Two examples for you: linear regression trending and Holt-Winters smoothing. Both are available in open-source tools like MRTG as well as many commercial applications today. The commonality is that the algorithm provides the model. Let's be clear, the algorithm doesn't build the model, it IS the model. These are robust and well-regarded solutions in the marketplace today.
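To make the "algorithm is the model" point concrete, here is a minimal sketch of linear regression trending on a utilization series. It is illustrative only and not from any particular product; the sample data and the 90% capacity threshold are assumptions.

```python
# Illustrative sketch: linear regression trending, where the algorithm itself is the model.
# The sample data and the 90% capacity threshold are assumptions for demonstration.
import numpy as np

hours = np.arange(24)                                   # last 24 hourly samples
util = 40 + 1.5 * hours + np.random.normal(0, 2, 24)    # link utilization (%)

slope, intercept = np.polyfit(hours, util, 1)           # fit a straight-line trend

if slope > 0:
    # Project forward to estimate when utilization crosses 90%.
    hours_to_threshold = (90 - (slope * hours[-1] + intercept)) / slope
    print(f"Trend: +{slope:.2f}%/hour; ~{hours_to_threshold:.1f} hours until 90% utilization")
else:
    print("Utilization is flat or declining; no capacity alert")
```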

Anomaly Detection vs Chronic Detection

 
 
Now let's move to machine learning. Anomaly detection, with varying degrees of accuracy, is becoming common in the marketplace. Many offerings are black boxes that strain credibility, and others are an open-ended abyss of customization. The mature solutions try to provide a balance between out-of-the-box value and flexibility. There are plenty of options for anomaly detection. Chronic detection and mitigation is much rarer. I have not seen many who offer that functionality, especially accomplished with machine learning. Again, on dealing with chronics, your mileage may vary, but it's out there.
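To illustrate the difference, here is a minimal sketch of a naive chronic detector: rather than flagging a one-off spike, it flags a fault signature that keeps recurring across distinct days. The signature fields, lookback window, and five-day threshold are assumptions for demonstration only.

```python
# Illustrative sketch: a naive chronic detector. A fault signature that recurs on several
# distinct days in the lookback window is flagged as chronic rather than a one-off anomaly.
# The signature fields and thresholds are assumptions for demonstration.
from collections import defaultdict
from datetime import datetime, timedelta

def find_chronics(faults, lookback_days=30, min_distinct_days=5):
    cutoff = datetime.utcnow() - timedelta(days=lookback_days)
    days_seen = defaultdict(set)
    for fault in faults:
        if fault["time"] >= cutoff:
            signature = (fault["device"], fault["alarm_type"])
            days_seen[signature].add(fault["time"].date())
    return [sig for sig, days in days_seen.items() if len(days) >= min_distinct_days]
```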

Takeaways

Many of the products that use this technology do not specifically reference it. Usually when you hear "analytics" nowadays, you can expect machine learning to be part of it. Most performance alerting (threshold crossing) leverages it in the realm of big data analytics. Most historical performance tools leverage machine learning to reduce the footprint of reporting. These three areas commonly have machine learning technology baked in.
 
What this means is that machine learning is NOT revolutionary technology that solves all our problems. At least not yet. It's revolutionary technology that lowers the bar. Because of this technology, problems can be solved more easily with far fewer resources than ever before. The price you pay is simple: machine learning will not catch everything. You will have to be fine with 80% quality for 0% effort.
 
Thanks again for all the great input, keep commenting and I will keep posting.
Service Assurance's Future with Machine Learning

About the Author

Serial entrepreneur and operations subject matter expert who likes to help customers and partners achieve solutions that solve critical problems. Experience in traditional telecom, ITIL enterprise, global managed service providers, and datacenter hosting providers. Expertise in optical DWDM, MPLS networks, MEF Ethernet, COTS applications, custom applications, SDDC virtualized, and SDN/NFV virtualized infrastructure. Based out of the Dallas, Texas (US) area and currently working for one of the companies he founded – Monolith Software.

Understanding Service Assurance Correlation & Analytics

Data is good, right? The more data the better. In fact, there is a whole segment of IT related to data called big data analytics. Operations have data, tons of it. Every technology device spits out gigabytes of data a day. The question is figuring out how to filter that data. It's all about reducing that real-time stream of data into actionable information. Understanding service assurance correlation & analytics is all about focusing operations, and that focus can produce better business results. This blog details common concepts and what's available in the marketplace. I want to show the value of driving data analytics into actionable information that operations can execute on successfully.

Maturity Curve for Service Assurance Correlation & Analytics

Let's talk about terminology first. Correlation versus analytics is an interesting subject. Most people I talk to consider correlation to be only within fault management, while analytics includes time-series data like performance and logs. Now, I know some would disagree with that simplification, but we can use it here to avoid confusion. Whichever term you use, what we look for is reduction and simplification. The more actionable your information is, the quicker you can resolve problems.


Visual Correlation & Analytics

The first step on the road to service assurance correlation and analytics is enabling a visual way to correlate data. Correlation is not possible if the data is missing, so unified collection comes first. Once you have the data co-located, you can drive operations activities to resolution. Technicians can leverage the tool to find the cause of the fault, drill-down tools can help uncover enough information, and the NOC techs can then perform manual parent/child correlation.

Once that is in place, users of the assurance tool can also suppress, or hide, faults. Faults that are not impacting, or that are known false errors, get sorted out as "noise". Assurance systems then leverage third-party data to enrich faults. Enrichment allows faults to include more actionable data, which makes them easier to troubleshoot. All these concepts should be second nature. Operations should have all these visual features as part of their assurance platform; otherwise they are hamstrung.
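As an illustration of enrichment, here is a minimal sketch that joins a raw fault with inventory data so the NOC sees customer and site context. The field names and the inventory source are assumptions, not any particular product's schema.

```python
# Illustrative sketch: enriching a raw fault with inventory/CMDB context so it is more
# actionable for the NOC. Field names and the inventory source are assumptions.
inventory = {
    "edge-router-17": {"site": "DFW-03", "customer": "Acme Corp", "tier": "gold"},
}

def enrich(fault, inventory):
    extra = inventory.get(fault["device"], {})
    return {**fault, **extra}   # enriched copy: original fields plus inventory context

fault = {"device": "edge-router-17", "alarm_type": "LINK_DOWN", "severity": "major"}
print(enrich(fault, inventory))
```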


Basic Correlation & Analytics

Once you have a tool that holds all your data, you will be swimming in the quantity of that data. You must reduce that data stream. If not, you will overload the NOC looking at the haystack instead of the needle. There are many basic-level strategies that allow that reduction.

First, look at de-duplication. This feature allows you to match up repeat faults and data points, which can eliminate 80% of duplicate data. Matching "up" to "down" messages allows you to eliminate another 50% of your data stream. Reaping jobs can close out data that is no longer deemed "current", as well as limited-value log data. Another common feature is suppressing faults by time window, such as during scheduled maintenance or outside business hours. Threshold policies can listen to "noise" data and, after X occurrences in Y minutes, create an alert. These features should be available on any assurance platform; if yours lacks them, look to augment. A minimal sketch of that threshold policy follows.
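Here is a minimal sketch of the "X times in Y minutes" policy using a sliding window. The counts and window are assumptions; real platforms expose these as policy settings.

```python
# Illustrative sketch of an "X times in Y minutes" threshold policy.
# The defaults are assumptions; real platforms expose them as policy settings.
from collections import defaultdict, deque

class ThresholdPolicy:
    def __init__(self, x_times=5, y_minutes=10):
        self.x_times = x_times
        self.window = y_minutes * 60
        self.history = defaultdict(deque)     # fault signature -> recent timestamps

    def ingest(self, signature, ts):
        hits = self.history[signature]
        hits.append(ts)
        while hits and ts - hits[0] > self.window:   # drop events outside the window
            hits.popleft()
        return len(hits) >= self.x_times             # True -> raise the alert
```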


Root Cause Analysis Correlation & Analytics

If you have a NOC with thousands of devices or tens of domains, you need cross-domain correlation. Root cause analysis is key to reducing the complexity of large access networks. Performing RCA across many technology domains is a core strategy for consolidated network operations: instead of playing the blame game, you know which layer is at fault. Leveraging topology to sift through faults is a common approach, though unfortunately it is not typical in operations. Topology data can sometimes be difficult to collect or of poor quality, so operations needs a strong discovery strategy to prevent this. A sketch of the idea follows.
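Here is a minimal sketch of topology-driven parent/child suppression: a fault on a device whose upstream parent is also down is treated as a symptom rather than a root cause. The topology map and fault list are assumptions for demonstration.

```python
# Illustrative sketch of topology-driven RCA: a fault on a device whose upstream parent
# is also down is treated as a symptom, not a root cause. Topology and faults are assumed.
topology = {            # child -> upstream parent
    "cpe-101": "agg-switch-7",
    "cpe-102": "agg-switch-7",
    "agg-switch-7": "core-router-1",
}
down_devices = {"agg-switch-7", "cpe-101", "cpe-102"}

def classify(device):
    parent = topology.get(device)
    return "symptom" if parent in down_devices else "root cause"

for device in sorted(down_devices):
    print(device, "->", classify(device))
# agg-switch-7 is the root cause; the CPE faults roll up underneath it.
```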

Cluster-based Correlation

Cluster-based correlation is another RCA strategy, one that does not rely upon topology data. The concept here is using trained or machine-learned patterns. A written profile aligns data when a certain pattern is matched; the tools create these patterns during the troubleshooting process. Others have algorithms that align faults by time and alert type. Once the pattern matches, the alert fires, rolling up the symptoms to reduce the event stream. This correlation method is popular, but hasn't delivered many results yet. Algorithms are the key here, and many challenge the ROI of a model that requires machine training. A simple time-based sketch follows.
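As a simplified illustration of the time-alignment variant, here is a minimal sketch that rolls faults arriving close together into one cluster. The 60-second gap is an assumption; real products learn richer patterns from troubleshooting history.

```python
# Illustrative sketch of cluster-based correlation without topology: faults that arrive
# within a short gap of each other are rolled into one cluster. The gap is an assumption.
def cluster_by_time(faults, max_gap_seconds=60):
    clusters = []
    for fault in sorted(faults, key=lambda f: f["ts"]):
        if clusters and fault["ts"] - clusters[-1][-1]["ts"] <= max_gap_seconds:
            clusters[-1].append(fault)      # joins the current cluster
        else:
            clusters.append([fault])        # starts a new cluster
    return clusters

faults = [{"ts": 0, "id": "A"}, {"ts": 30, "id": "B"}, {"ts": 300, "id": "C"}]
print(cluster_by_time(faults))              # A and B cluster together; C stands alone
```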

Customer Experience Assurance

Next, RCA enables operations to become more customer-centric. Service-oriented correlation allows operations to see the quality of their network through their customers' eyes. Some call this functionality "service impact analysis"; I like the term "customer experience assurance". Understanding which faults are impacting customers and their services enables more efficient operations. The holy grail of operations is focusing only on root causes, then prioritizing action by customer value.

Service Quality Management

Lastly, you can track customer temperature by moving beyond outages and into quality. It's important to understand the KPIs of the service; this gives clarity on how well the service is performing, and grouping them together simplifies the picture. While operations may ignore bumps and blips, you still need to track them. It's important to understand that those blips are cumulative in the customer's eyes. If the quality threshold is violated, customer patience will be limited, and operations needs to know the temperature of the customer. Having service- and customer-level insights is important to providing high-quality service, and a feature like this drives better customer outcomes.


Cognitive Correlation & Analytics

The nirvana of correlation and analytics includes a cognitive approach. It's a simple concept: the platform listens, learns, and applies filtering and alerting. The practice is very hard. The available algorithms are diverse: they are either domain-specific (website log tracking) or generic in nature (Holt-Winters). Solutions need to be engineered to apply the algorithms only where they make sense.

Holt-Winters Use Case

One key use case is IPSLA WAN link monitoring. Latency across links must be consistent; if you see a jump, that anomaly may matter. The Holt-Winters algorithm tracks abnormal behavior through seasonal smoothing. Applied to this use case, an alert is raised when the latency breaks from its normal behavior. This allows operations to avoid setting arbitrary threshold levels, and applying smart threshold alerting can reduce operational workload. Holt-Winters shows how cognitive analytics can drive better business results.
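Here is a minimal sketch of that idea using the Holt-Winters implementation in statsmodels, applied to 15-minute IPSLA latency samples. The daily seasonality (96 samples), the 3-sigma band, the file name, and the sample value are assumptions, not a vendor implementation.

```python
# Illustrative sketch: Holt-Winters (triple exponential smoothing) applied to IPSLA
# latency samples. Seasonal period, alert band, file name, and sample value are assumptions.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

history = np.loadtxt("latency_ms.csv")       # hypothetical file of past 15-minute samples

model = ExponentialSmoothing(history, trend="add",
                             seasonal="add", seasonal_periods=96).fit()

expected = model.forecast(1)[0]              # next expected latency value
band = 3 * np.std(model.resid)               # tolerance learned from the residuals

latest = 47.2                                # hypothetical newest measurement (ms)
if abs(latest - expected) > band:
    print(f"Anomaly: latency {latest:.1f} ms vs expected {expected:.1f} ms")
```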

Adaptive Filtering Use Case

Under the basic correlation area I listed dynamic filtering: a fault can happen "X times in Y minutes", and if so, create alert Z. This generic policy is helpful, but the more you use it, the more you realize you need something smarter. Adaptive filtering using cognitive algorithms allows for a more comprehensive solution. While the X-Y-Z example depends upon two variables, the adaptive algorithm leverages hundreds. How about understanding whether the device is in a lab or is a core router? Does the fault occur every day at the same time? Does it precede a hard failure?

You can leverage all these variables to create an adaptive score. This score acts as an operational temperature gauge or noise level. NOC techs can cut noise during outages, increase it during quiet times, or sort by it to understand "what's hot". Adaptive filtering gives operations the ability to slice and dice their real-time fault feeds. This feature is a true force multiplier, as the sketch below suggests.
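Here is a minimal sketch of an adaptive score combining a few of the variables mentioned above. The attributes and weights are assumptions; a real implementation would learn them from historical outcomes.

```python
# Illustrative sketch: combining fault attributes into an adaptive "noise" score the NOC
# can sort or filter by. Attributes and weights are assumptions, not learned values.
def adaptive_score(fault):
    score = 0.3                                                    # baseline
    score += 0.4 if fault.get("device_role") == "core" else 0.0   # core gear matters more
    score -= 0.3 if fault.get("environment") == "lab" else 0.0    # lab noise scores low
    score -= 0.2 if fault.get("recurs_daily") else 0.0            # routine daily blip
    score += 0.5 if fault.get("precedes_hard_failure") else 0.0   # known failure precursor
    return max(0.0, min(1.0, score))                              # clamp to 0..1

faults = [
    {"device_role": "core", "precedes_hard_failure": True},
    {"device_role": "access", "environment": "lab", "recurs_daily": True},
]
for fault in sorted(faults, key=adaptive_score, reverse=True):
    print(round(adaptive_score(fault), 2), fault)
```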


Understand the Value in Service Assurance Correlation & Analytics

The important part of correlation & analytics within service assurance is its value. You must understand what is freely available and its value to operations. This varies greatly from customer to customer and environment to environment. You have to decide how far down the rabbit hole you want to go. Always ask the question "How does that help us?" If you are not moving the needle, put it on the back burner.

If a policy is not saving 4-8 hours of effort a week, it's just not worth the development effort. Find your quick wins first. Keep a list in a system like Jira and track your backlog. You may want to leverage an agile methodology, or DevOps practices, if you want to get serious. Correlation and analytics are force multipliers; they allow operations to be smarter and act more efficiently. These are worthwhile pursuits, but make sure to practice restraint. Focus on the achievable; you don't need to re-invent the wheel. Tools are out there that provide all these features. The question to focus on is "Is it worth my time?"

Shawn Ennis

About the Author

Serial entrepreneur and operations subject matter expert who likes to help customers and partners achieve solutions that solve critical problems. Experience in traditional telecom, ITIL enterprise, global managed service providers, and datacenter hosting providers. Expertise in optical DWDM, MPLS networks, MEF Ethernet, COTS applications, custom applications, SDDC virtualized, and SDN/NFV virtualized infrastructure. Based out of the Dallas, Texas (US) area and currently working for one of the companies he founded – Monolith Software.

IoT Service Assurance Key Concepts

The IoT/IoE generation has been born, and countless things are about to be inter-connected. We all see that the hype is non-stop, but many things are becoming a reality. AT&T and Maersk closed a deal back in 2015, which recently became a reality for asset tracking of cold shipping containers. Now Uber is providing driverless trucks to deliver beer, while GPS trackers are being used to track the elderly. These services are becoming ubiquitous and common. The use cases have variety and are growing in depth, but we also see that IoT is a very pioneering field. If IoT managed services are to exist, operations will need to manage them. The goal here is to start asking key questions, and the hope is that through analysis we can provide some answers. Let's discuss the key concepts driving the new field of IoT Service Assurance.

Key Perspectives for IoT Service Assurance

For any IoT service, you must understand who uses it and who provides it. As I explain it, there are three key perspectives for IoT services. First, you have the network provider. They provide the network access for the "thing"; the "network" could mean LTE, WiFi, or any other technology. Network providers see network quality as the focus, similar to typical mobile providers. Compare that to IoT services monitored with an application focus: it's about monitoring the availability and performance of the "things" to make sure they are working. Lastly, you may not care about the "things" at all. Perhaps you only care about the data from them; performing correlation and understanding the "sum of all parts" would be the key focus. These perspectives drive your requirements and the value proposition. Through them, you can define quality and success criteria for your IoT services.

Key Requirements of IoT Service Assurance

Before we get too far along, let's first talk about terminology. In the world of IoT, what is a device? We have to ask, is this "thing" a device? In the world of mobility, the handset is not a device, it's an endpoint. So is the pallet being monitored in the cold shipping container a device or an endpoint? Like the perspectives that drive your requirements, we should agree on terminology. Let's talk through some use cases to better understand typical requirements.


Smart Cold Storage

In the Maersk use case, let's say the initial roll-out was listed as 250k sensors on pallets. These sensors, at regular intervals, report data in via wireless burst communications. The data includes KPIs that drive visibility and business intelligence. Some common examples I have found are temperature, battery life, and vibration rate. Other environmental KPIs may be required: light levels, humidity, and weight. As we have discussed, location information with signal strength could be useful. We can track in real time to provide trending and prediction; one would think it would be best to know about a failure before putting the container on the boat.

The bottom line is we would have around 25 KPIs per poll interval. Let's do some math for performance data: 250k sensors * 25 KPIs * 4 (15-minute polls, 4/hour) * 24 (hours/day) = 600 million data points per day. If you were to use standard database storage (say MySQL), you would require about 200GB per day. Is keeping the sensor data worth $300 per month of data on AWS EC2? Storage is so inexpensive that real-time monitoring of sensor data becomes realistic.
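For reference, here is a minimal sketch of that back-of-envelope math, including the rough per-second ingest rate it implies. The sensor count, KPI count, and poll rate come straight from the example above; the storage and cost figures are not recomputed here.

```python
# Illustrative back-of-envelope math for the Maersk example above. The sensor count,
# KPI count, and poll rate are the assumptions stated in the post.
sensors, kpis, polls_per_hour = 250_000, 25, 4

points_per_day = sensors * kpis * polls_per_hour * 24
print(f"{points_per_day / 1e6:.0f}M data points per day")     # 600M data points per day
print(f"~{points_per_day / 86_400:,.0f} ingests per second")  # ~6,944 ingests per second
```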

Now, faults are different. Some could include failed reconnects and emergency-button-pushed scenarios. These faults could provide opportunities: shipping personnel can fix the container before the temperature gets too warm, saving valuable merchandise from spoilage. Together, this information combines to provide detailed, real-time IoT Service Assurance views.


Driverless Trucks Use Case

Let's look at another use case: Uber with driverless trucks. The Wired article does not say how many vehicles, so let's look at UPS, which has more than 100k delivery trucks. Imagine if these logistics were 100% automated. This would create a ton of "things" on the network. The network, controller, and data would work together to provide a quality IoT service.

First, let's look at performance data. The KPIs should be similar to the Maersk example. Speed, direction, location, and range would be valuable real-time data, and service KQIs like ETA and number of stops remaining would drive efficiencies. Let's do the same math as the Maersk example: 100k trucks * 50 KPIs * 4 (15-minute polls, 4/hour) * 24 (hours/day) = 480 million data points per day, so roughly $240 per day on AWS. This shows that storage and requirements are practical for driverless logistics.

Some faults would include vital real-time activity, perhaps an "out-of-gas" event or network errors. Getting real-time alerts on a crash would definitely be useful, so fault management would be a necessity in this use case. Again, there are plenty of reasons to create and leverage real-time alerts.

Smart Home for IoT Service Assurance

Another use case would be smart home monitoring, like Google Nest or Ecobee. These OTT IoT providers track and monitor things like temperature and humidity; there is no fault data and no analytics. The number of homes monitored by Nest or Ecobee is not readily available on the internet. According to Dallas News, there are 8 million thermostats sold yearly, and according to Fast Company, Ecobee has 24% market share, so about 2 million homes per year. Ecobee has been in business for more than 5 years, so assume they have 10 million active thermostats. Doing some math: 10M homes * 10 KPIs * 4 (15-minute polls, 4/hour) * 24 (hours/day) = roughly 10 billion data points per day, which would be around $4,800 per day on AWS.

IoT Service Assurance is Practical

What is interesting about these use cases is their practicality. Scalability is not a problem with modern solutions; all three cases show that, from any perspective, real-time IoT service assurance is achievable. I am amazed how achievable monitoring can be for complex IoT services. Now you must ask the questions "why" and "how". To answer them, you must understand how flexible your tools are and what value you can get from them.

Understanding Flexibility of IoT Service Assurance

Let's discuss flexibility. First, how difficult is collecting this data? Let's focus on the world of open APIs, where the expectation is that these messages come through a load-balanced REST application server. 600 million hits per day works out to roughly 7k hits/sec, which is well within Apache and load balancer tolerances. As long as the messaging follows open API concepts, collection should be practical. So from a flexibility standpoint, assuming you embrace open APIs, this is practical as well.

Understanding the Value of IoT Service Assurance

It comes down to whether real time is a key need for your IoT Service Assurance. If whatever you want to track can wait 24-48 hours before you need to know it, you can achieve it with a reporting tool. If all you need is to store the data and slap a dashboard/reporting engine on top, then this becomes easy. Start with an open-source database like MariaDB, which is low cost and widely available, then add COTS dashboard and reporting tools like Tableau for a cost-effective solution.

In contrast, real time means you need to know immediately that a cold storage container has failed, automating dispatch to find the closest human and texting that operator to fix the problem. Real time means you have a delivery truck on the side of the road and need to dispatch a tow truck. Real-time IoT Service Assurance means massive collection, intelligent correlation, and automated remediation. Now look at the OTT smart home use case: the Nest thermostat is not going to call the firehouse when it reaches 150F. Everything is use-case dependent, so you must let your requirements dictate the tool used.

Lessons Learned for IoT Service Assurance

  • IoT-based managed services are currently available and growing
  • Assuring them properly will require new concepts around scalability and flexibility
  • With IoT, you must always ask how far down it is worth monitoring
  • Almost all requirements include some sort of geospatial tracking or correlation
My advice on IoT Service Assurance:
  • As always, follow your researched requirements. Get what you need first, then worry about your wants.
  • Make sure you have tools with a focus on flexibility, scale, and automation. This vertical has many fringe use cases, and they are growing.
  • IoT unifies network, application, and data management more than any other technology. Having a holistic approach can provide a multiplying and accelerating effect.

About the Author

Serial entrepreneur and operations subject matter expert who likes to help customers and partners achieve solutions that solve critical problems. Experience in traditional telecom, ITIL enterprise, global managed service providers, and datacenter hosting providers. Expertise in optical DWDM, MPLS networks, MEF Ethernet, COTS applications, custom applications, SDDC virtualized, and SDN/NFV virtualized infrastructure. Based out of the Dallas, Texas (US) area and currently working for one of the companies he founded – Monolith Software.