The Beginning of the Enterprise AI Arms Race

Most days, digital engineers and architects feel like cloud-centric wrench turners. Thought leaders have long talked about new ways to solve common problems, and the world is changing as previously unsolvable problems are being addressed by AI. I do not want to harp on using AI to solve age-old mathematical problems, or on Tesla using computer vision to automate driving. These are noble goals, but they sit at the margins: net-new additions to enterprise and business. Fun and lovable, AI has been seen as a cheat code for engineering. The perfect shortcut. But now things are different. Now generative AI is showing us that it can disrupt business services.
 
Technology services like ChatGPT are a different sort of AI. First, it is contextual: you can treat it like a human, tell it things to remember, then ask it to do something. It is historic, in that it has decades of content to leverage when answering your request. Lastly, it is real-time: you can interact with it as if it were a person, without HAVING to understand how to interface with it. You can create 10-page white papers explaining your rationale for how you solve business problems. I know this because I use it that way.
 
The disruptive quality of the technology is fascinating. Content creators as far apart as software coders and legal copywriters will all need to learn a new way to do their jobs. Technology rarely changes the level of effort by two orders of magnitude, but here we are. Another area where I am seeing disruption is with artists. Great experiences are created via iteration: explore, editorialize, repeat. Generative AI offers a way to go on any flight of fantasy, generate an outcome, and see the results in minutes. Tons of other industries will be disrupted. The impact should be bigger and more widespread than the ATM's impact on banking.
 
Many challenges lie ahead. The technology is already being politicized and taken out of context. Ethical use of generative AI requires that it be used in a sidecar fashion first: the answers you get from your queries sound confident, but the accuracy does not align with that confidence. My top recommendation is to always remember the value you get out of the technology. That is what matters: what force multiplier are you looking at? If a radical change cannot save you 50x the time, keep looking for the value.
 
That all being said, OpenAI got my $20/month. This technology helps me efficiently convey my value, and as a business thought leader, technologist, and executive, it increases my value. I hope to get my money's worth. Leave a comment or reach out to me on my Twitter DMs; I would love to hear about your generative AI journeys.

High ROI IoT and Edge Solutions for Sustainability

Earlier this year, a Gartner® report mentioned: “Previously, most enterprises viewed sustainability as a reputational overlay, which affected mostly marketing and communications. Nowadays, a growing number of companies see sustainability as a strategic parameter that directly affects how they do business and run their operations.”[1] Why? Because it’s a money maker. A well-crafted corporate environmental sustainability practice increases sales, investment, brand equity, and profitability.

Here are some of the ways that the IoT and Edge team at Concentrix Catalyst helps companies meet their sustainability and financial goals, across agriculture, hospitality, and real estate. We find that many are surprised to learn that edge solutions for sustainability are also affordable and easy to implement. Here's additional detail on each:

Agriculture

In a traditional farm setting, the farmer needs to be physically present in the field each day to monitor the equipment, assess its condition, and take readings. With equipment manufactured by a variety of vendors, farmers must learn multiple software tools to configure each piece of equipment. In addition, there is an industry-wide underutilization of scientific data for effective farming; for example, many irrigation systems continue to sprinkle water even when it's raining. An increasing number of farmers are turning to precision control and monitoring capabilities that allow them to manage their business more efficiently, freeing up time and money for other undertakings in an industry with notoriously small profit margins. Concentrix Catalyst developed a central, mobile-enabled application to monitor, control, and manage operations across the farm, freeing farmers from the need to physically assess conditions and control equipment. The solution enables farmers to monitor soil moisture and manage ancillary tools, sensors, and devices, regardless of manufacturer. A farmer can access the system, which applies scientific data to help utilize critical resources effectively, via desktop or a mobile app.

Streaming audio and video with a command-based interface allows the farmer to view field conditions in real time and respond accordingly.

Hospitality

A well-known worldwide fast-food franchise needed to reduce waste. Kitchen equipment was prematurely ending up in landfills due to accidental, but irreparable, damage by untrained operators. Food, like fryer oil, was being wasted due to the inefficiency of kitchen technology and employee practices. The restaurant chain hired Concentrix Catalyst to analyze kitchen operations using edge and Internet of Things (IoT) technology. We used edge and IoT to collect and analyze data on kitchen equipment and kitchen staff protocols, resulting in optimization recommendations for over 45,000 restaurants globally.

Google Coral is a leader in edge computing solutions like this, as Coral devices are affordable and the platform processes data very quickly.

Real Estate

Heating, ventilation, and air-conditioning (HVAC) systems in medium and large industrial and office buildings ensure safe, healthy, and comfortable conditions for occupants. A large office park or university campus can consist of hundreds, or thousands, of HVAC units. Industry estimates suggest that these systems account for between 35 and 40 percent of commercial building energy consumption worldwide. Accordingly, maximizing the efficiency of these systems can result in considerable savings for an organization.

To leverage IoT and edge solutions for sustainability, Concentrix helped a specialty manufacturer of industrial control valves design precision control of commercial heating and air-conditioning systems, with the goal of helping organizations better manage the costs of those systems. Statistical analysis of data collected via climate-control systems and existing building automation systems affords the opportunity to control the climate of a building, or group of buildings, more precisely. Central control also reduces the resources needed to troubleshoot and address HVAC problems, thanks to the ability to better pinpoint issues.

We helped create a streamlined data pipeline that allowed the client to collect the information required, display the correct information to the clients who owned the data, and give engineers access to devices from a central location. A web app allowed for access to the precision-control system, and Catalyst designed the embedded system’s software for control and optimization of sensors and control valves—the software layer, or interface, with which the client’s customers and technicians interacted.

The end solution included an extensive feature set that allowed multiple consumers to benefit. C-level executives could view information about the building at the campus or building level to ascertain where money was being spent on energy use. Because the solution worked with existing building-operation systems, building operators could log into the app to better control building climates, choosing when to override the existing automated controls and responding to occupant comfort. A field engineer suite allowed technicians to monitor, analyze, and update the control system, and enabled remote management and firmware updates for the system hardware—reducing the time and resources needed to send engineers out to examine individual connections for problems.

Good for business. Good for the planet.

Sustainability and profitability are not mutually exclusive, and both are necessary to compete in the modern market. Companies that carefully plan their sustainability strategy and leverage technologies like IoT and edge realize meaningful improvements in sustainability while also improving margins.

For more information about edge solutions for sustainability, and how our Sustainable Experience Engineering (SEE) practice can help your organization with its corporate environmental sustainability efforts, please contact us.


[1] Gartner, Competitive Landscape: Sustainability Consulting, January 18, 2022

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Applying Web3 Concepts

Web3, the buzzy term for the next internet era based on blockchain technology, can foster a better customer experience in many use cases. For enterprises, a turnkey hosted decentralized application framework with the trust and privacy of blockchain built-in can enable better B2C applications, since the very design of blockchain allows companies to be everywhere all the time. When done right, Web3 concepts can be used to create personalized and frictionless experiences for consumers, while still guaranteeing privacy and security.

Below, we’ll break down a few examples of practical uses for Web3 concepts.

Web3 concepts in action

Let's say you want to create a certificate of authenticity for a physical product you market, manufacture, and sell (like Nike does), or a "badge" indicating attendance at an event (looking at you, AMC). To tie customers to your brand, you need a record of ownership that customers find value in now and over time. In a Web3 world, that's called an NFT—non-fungible token. The art and card trading worlds are currently reinventing themselves with this technology, but it can also be used by enterprises.

This digital artifact can be given to a customer to keep in their digital wallet (that is, their blockchain user account). When they want to resell that product or prove ownership, it’s available 24/7/365 at no cost to you (the manufacturer) or them (the owner).

What do you need?

The math in a real-life example can be useful. Let’s look at what you need for execution:

  • Omnichannel registration portal
  • Marketing data reporting solution, including revenue
  • Track and trace engine, to zoom in/zoom out of the data

This would cost you around $250k a year in a hosted software-as-a-service (SaaS) framework. But what would that buy you? Say you want five limited-edition runs monthly of around 25k NFT "badges" each. Quantity changes barely affect the cost, and the result would be 1.5M NFTs a year, handed out to your customers digitally. You can charge the end customer for these items as you would for certificates of authenticity, or embed the cost. You can also add a "tail" to the NFT, so that when a transfer happens, you get a portion of the third-party sale as compensation for its creation. This feature could enable the solution to effectively pay for itself. A quick sketch of the math appears below.
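Here is a minimal back-of-the-envelope sketch of those numbers. The run counts and SaaS cost come straight from the scenario above; the per-badge cost is simply derived from them.

```python
# Back-of-the-envelope math for the NFT badge program described above.
runs_per_month = 5
badges_per_run = 25_000
annual_badges = runs_per_month * badges_per_run * 12  # 1,500,000 NFTs/year

annual_saas_cost = 250_000                            # hosted SaaS framework
cost_per_badge = annual_saas_cost / annual_badges     # ~$0.17 per badge
print(f"{annual_badges:,} badges/year at ~${cost_per_badge:.2f} each")
```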


Let's look at another use case, where you provide a valuable, 100% digital product, like a media file: audio, visual, or audio-visual. We've built a use case around an immersive mobile application like the one Nike built, but this one is for watches. Consumers get exclusive, limited-edition designs from their favorite designers. For the brand, the use case is testing new designs: discovering which are popular before investing millions of dollars and months of production. You would also need an omnichannel marketplace alongside the portal and reporting solutions from before. The total cost of ownership is $750k plus $175k/year. It's a steep price for a campaign, but a cheap price for a factory that creates unlimited digital products for pennies in minutes. The revenue potential here is telling:

  • MSRP: $5 with a 10% tail
  • 1.5 times a year the watch is re-sold
  • Selling the same 1.5M “watch” NFTs a year

This leaves you with an amazing $8.4M revenue stream, $900k of it recurring. Yes, that means recurring revenue on products you already manufactured and sold. Our calculations show a 22-day ROI with a 91% margin. Eliminating physical manufacturing and distribution shows its upside here. A sketch of the arithmetic follows.
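Here is a hedged sketch of that arithmetic. The average resale price is not stated above, so the $4 used below is a hypothetical value chosen to reproduce the ~$900k recurring figure; the computed margin lands near, though not exactly on, the quoted 91%.

```python
# Rough economics of the watch-NFT use case. AVG_RESALE_PRICE is an
# assumption (not stated in the post), picked so the 10% tail yields ~$900k.
UNITS_PER_YEAR = 1_500_000
MSRP = 5.00
TAIL = 0.10                # creator's cut of each third-party resale
RESALES_PER_NFT = 1.5      # each NFT resold ~1.5 times a year
AVG_RESALE_PRICE = 4.00    # hypothetical assumption

primary = UNITS_PER_YEAR * MSRP                                         # $7.5M
recurring = UNITS_PER_YEAR * RESALES_PER_NFT * AVG_RESALE_PRICE * TAIL  # $0.9M
total = primary + recurring                                             # $8.4M

first_year_cost = 750_000 + 175_000         # build cost + first year run cost
margin = (total - first_year_cost) / total  # ~89%, near the quoted 91%
print(f"primary ${primary:,.0f}, recurring ${recurring:,.0f}, "
      f"total ${total:,.0f}, margin {margin:.0%}")
```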


What if you create media content instead of apparel? Imagine seats for an exclusive event or a concert. Web3 can enable secure digital content distribution without a middleman. The content would use an immersive customer experience, but the delivery method would be the same as before. The key point of this use case is that you are not tied down by categorization or classification: if you can devise, market, and sell the experience, you can engage the customer using a Web3 lever.


Advocates of Web3 see the ability to reduce latency and intermediaries between the businesses making products and the customers consuming them. They see a fully hosted network connecting us all, free to use to view and track, and one that costs us fractions of pennies for each product produced or sold. Using this technology allows enterprises to connect, serve, and learn from customers directly without the need for physical presence, third-party transaction go-betweens, or heavily regulated privacy limitations.

So, what’s the catch? The technology is new and the skills to do this work are limited. You need smart, experienced, and capable people to help you turn Web3 from a pipe dream into real-world products. We offer that expertise. With our turnkey technology, we facilitate test runs of publicly tradeable NFTs at a fixed price. Find out how we can turn your Web3 project into reality.

Why Web3 can improve enterprise CX


By now, you’ve probably heard the tech buzzword “Web3” in the context of Bitcoin or NFTs of viral memes. Web3, considered to be the next iteration of the internet, is based on blockchain technology, allowing users to read, write, and own their data. However, what you might not know is that Web3 can be leveraged by enterprise organizations to guarantee consumer privacy and security while still allowing for better CX as an outcome.

Here are a few reasons why Web3 can be the future of CX.

Web3 vs. blockchain: A primer

Blockchain is a distributed, decentralized ledger that records transactions securely, permanently, and efficiently. It is not controlled by any one entity, so it has no single point of failure, and no one person or entity "owns" the information. The model is highly secure and can be applied to anything of value: currency, personal information, and data of any kind. Think of blockchain as a database arrayed in multiple redundant nodes in many locations. Web3 is the overarching trend that blockchain is a part of—it is the practice of using blockchain to accomplish business solutions beyond simply storing data. Web3 gets its name from its agreed-upon place in the evolution of the internet: the original internet of static HTML pages is known as "Web 1.0," and the transition to dynamic social media as "Web 2.0." Thus, we've arrived at Web 3.0, or Web3.

Blockchain can be leveraged by smart developers to create what are called “smart contracts.” The name is a misnomer, as famous Web3 developers have said that they are neither “smart” nor “contracts.” If you are lost in the minutiae of this terminology, you are not alone. Elon Musk has famously said he is “too dumb to understand smart contracts.”

People get confused thinking that Web3 is the same as cryptocurrency. Though Web3 does include cryptocurrency, the key takeaway of this technology is that it can serve as a building block for enterprise solutions. It’s available to you turnkey and hosted in a public, private, and hybrid way—just like virtualized computers and databases—and adding blockchain to many enterprise solutions generates concrete business value.

But just because a technology is “cool” and hyped does not mean it’s relevant to your business. There is a lot of froth in the world of blockchain, so not all solutions branded as Web3 will yield much, if any, use value. The tenets of Web3, however, do allow for a more decentralized approach to applications and business services.

Why Web3 fits into CX strategy

Companies engage with customers where they are. Nobody lives in the cloud. From an organizational perspective, however, the traditional challenge of being "everywhere all the time" for customers is expensive: hosting, managing, and dealing with complexity. One such challenge of being everywhere all the time is enabling point-of-sale and currency transactions. This is the origin story that runs from blockchain to cryptocurrency to decentralized applications (dApps).

Why does Web3 matter to the enterprise? A turnkey hosted decentralized application framework with the trust and privacy of blockchain built in means fertile ground for B2C applications, by allowing for companies to be everywhere all the time. Web3 makes blockchain technology actionable.

Let’s say we want to create a certificate of authenticity for a physical product we market, manufacture, and sell (like Nike does). Remember the goal: we want to tie our customers to our brand. We need a record of ownership that our customers find value in now and over time. In a Web3 world, that is called an NFT—non-fungible token (a fancy way to say record of ownership). The art and card trading world is in the process of reinventing itself with this technology, but the same technology can be leveraged by enterprises.

This digital artifact can be given to a customer. That customer keeps it in their digital wallet (a fancy way to say blockchain user account). When the customer wants to resell that product or prove ownership, it is available 24/7/365 at no cost to you (the manufacturer) or them (the owner).

Get up and running with Web3

Advocates of Web3 see the ability to reduce latency and intermediaries between businesses making products and customers consuming them. They see a fully hosted network connecting us all, free to use to view and track, and one that costs us fractions of pennies for each product produced or sold. Leveraging this technology allows enterprises to connect, serve, and learn from customers directly without the need for physical presence, third-party transaction go-betweens, or heavily regulated privacy limitations—resulting in a faster, more convenient, and more enjoyable experience for customers.

Implementing IoT for Your Sustainability Journey

You can’t manage what you can’t measure, and this is especially true for implementing IoT sustainability initiatives to reduce operational costs and increase efficiency.

Currently, investors and stakeholders are demanding measurable outcomes and economic benefits from ESG initiatives. Environmental benefits, in the form of energy savings, can return both carbon reduction and energy cost reduction. But achieving large savings, and doing so quickly, is hard to do. In addition, not a lot of people are aware of IoT’s role in accelerating sustainability initiatives, and that implementation is actually economically achievable.

In my latest webinar, I moderated a panel including representatives from Optio3 and Carbon Lighthouse. We discussed how businesses can transition toward measurable sustainability quickly and easily using IoT Edge technology.

If you missed the webinar, or would like to experience it again, watch the video below.

https://pkglobal.com/blog/2021/07/iot-sustainability/

End of the Beginning for Machine Learning

Well, that's a wrap for my initial journey into machine learning and artificial intelligence. But it's only the end of the beginning for machine learning. Before I finish up this series, some people have asked that I provide some context. This technology is changing the industry, and some players, my company and competitors among them, have already adopted it. So, to placate the masses, let's list out what I have heard is currently available in the marketplace. I am not omniscient, so if you see anything missing or wrong, please let me know.

Artificial Intelligence is Old Hat

 
Let's first talk legacy. Artificial intelligence has been part of software since I started in the industry, the most common form being "rules." A human defines the model, though. The model could be a list of "if" statements, or a tree stored in a database. The difference is not on the AI side but on the machine learning side: automated model building is what is new. Other legacy concepts are algorithm-based. Two examples: linear regression trending and Holt-Winters smoothing. Both are available in open source, such as MRTG, as well as in many commercial applications today. The commonality is that the algorithm provides the model. Let's be clear: the algorithm doesn't build the model, it IS the model. These are robust and well-regarded solutions in the marketplace today.
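As a concrete illustration of "the algorithm IS the model," here is a minimal Holt-Winters sketch using statsmodels on synthetic data; the series stands in for an hourly utilization KPI and is invented purely for illustration.

```python
# Holt-Winters smoothing: the algorithm itself carries the model.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(42)
hours = np.arange(24 * 14)  # two weeks of hourly samples
# daily cycle + slow trend + noise, standing in for interface utilization
series = (50 + 10 * np.sin(2 * np.pi * hours / 24)
          + 0.05 * hours + rng.normal(0, 2, hours.size))

fit = ExponentialSmoothing(series, trend="add",
                           seasonal="add", seasonal_periods=24).fit()
forecast = fit.forecast(24)  # the next day's expected behavior
print(forecast.round(1))
```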

Anomaly Detection vs Chronic Detection

 
 
Now let's move to machine learning. Anomaly detection, with varying degrees of accuracy, is getting to be common in the marketplace. Many offerings are black boxes that strain credibility; others are an open-ended abyss of customization. The mature solutions try to provide a balance between out-of-the-box value and flexibility. There are plenty of options for anomaly detection. Chronic detection and mitigation is much rarer; I have not seen many offer that functionality, especially accomplished with machine learning. On chronics, your mileage may vary, but it's out there.
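For intuition, here is a toy anomaly detector of the learned-baseline kind discussed above: flag samples that drift too far from a rolling baseline. Real products are far more sophisticated; this only sketches the idea, on made-up data.

```python
# Rolling z-score anomaly detection (toy illustration).
import numpy as np

def anomalies(series, window=60, z=4.0):
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = baseline.mean(), baseline.std() or 1e-9
        if abs(series[i] - mu) / sigma > z:  # too far from learned behavior
            flagged.append(i)
    return flagged

rng = np.random.default_rng(7)
data = rng.normal(100, 5, 500)  # a healthy KPI
data[400] = 160                 # injected spike
print(anomalies(data))          # flags the injected spike at index 400
```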

Takeaways

Many of the products that use this technology do not specifically reference it. Usually, when you hear "analytics" nowadays, you can expect machine learning to be part of it. Most performance alerting (threshold crossing) leverages it in the realm of big data analytics, and most historical performance tools leverage machine learning to reduce the footprint of reporting. These three areas commonly have machine learning technology baked in.
 
What this means is that machine learning is NOT revolutionary technology that solves all our problems. At least not yet. It's revolutionary technology that lowers the bar: problems can be solved more easily, with far fewer resources, than ever before. The price you pay is simple: machine learning will not catch everything. You will have to be fine with 80% quality at 0% effort.
 
Thanks again for all the great input, keep commenting and I will keep posting.


Service Assurance’s Future with Machine Learning

Thanks again for all the great feedback on this blog series. I want to continue the ongoing discussion by speculating on service assurance's future with machine learning. There are infinite operational problems out there for providers and IT, and machine learning offers an inexpensive yet expansively flexible way to solve them. Here are some of the more extreme ideas I have had for common industry problems. If you have heard of anyone tackling these with machine learning, I would love to hear more about it.

“We Hate Rules” — Says Everyone

One common complaint I have heard from customers and partners for as long as I have been in business is around rules. "We hate rules!" I don't like rules either, but the problem this technology solves is a big one: how do I decode vital fault details, arriving in a variety of different formats, into operationally actionable events? Right now, people use compilers to take SNMP MIBs and export them into rules of some sort. From HPOV to IBM Netcool to open-source MRTG, it's the same solution. What if we could apply machine learning instead? What if automation enriched faults and decided which KPIs are important? Google is a great source of truth: deconstruct the MIB into OIDs and google each one. Based on parsing the search results, you can decide whether an OID is worth collecting or not. Then apply some of the solutions we have already discussed, fault-storm reduction among them, and bubble up anomalies and chronics with zero human touch. How accurate could it be? The answer might surprise you. You could always add an organic gamification engine to curate the results. Think about the possible outcome: no rules, no human touch, no integration costs, only ramp time. An interesting idea.
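A loose sketch of that loop follows. The `search_hit_count` function is a hypothetical stand-in for querying a search engine; no real search API is assumed, and the threshold is arbitrary. The point is the shape of the scoring pass, not a production implementation.

```python
# Score MIB OIDs by how much the wider world talks about them (sketch).
def search_hit_count(term: str) -> int:
    """Hypothetical: return the number of search results for this term."""
    raise NotImplementedError("wire up a search API of your choice here")

def worth_collecting(mib_objects, threshold=1_000):
    """Keep only OIDs popular enough to be operationally interesting."""
    keep = []
    for name, oid in mib_objects:
        hits = search_hit_count(f"{name} {oid}")
        if hits >= threshold:
            keep.append((name, oid, hits))
    return sorted(keep, key=lambda t: -t[2])  # most-discussed first

candidates = [("ifInErrors", "1.3.6.1.2.1.2.2.1.14"),
              ("ifHCInOctets", "1.3.6.1.2.1.31.1.1.1.6")]
# worth_collecting(candidates)  # would rank the OIDs once search is wired up
```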

Are We Really Impacting the Customer?

I know we have all heard this one before: service impact. How do you know if a fault is service-impacting or not? If you notify a customer that they are down when they are not, they lower their opinion of you. Flip it around, and they hate you. Understanding impact is a common problem. Common industry practice is to leverage a common event-type category (think trap OID name). The problem is that this oversimplifies things, and there is a lot of guesswork in those rules (see above). What if the fault is in a lab environment? What if there is no traffic on that interface? What if its redundancy is active, or has failed? Too much complexity. This is machine learning's sweet spot. Imagine a backfill from ticketing showing whether the customer confirmed an impact, then linking that data pool to the model of faults. You can compare that model to a current situation to score the likelihood of impact. That way you are using a solid source of truth, the customer, to define the model. — UPDATE — It's true you could use network probes to scan the data and confirm the service is being used. Pretty expensive solution IMHO, buying two probes for every network service. It would be cheaper to use Cisco IP SLA, Juniper RPM, or the Rping MIB.
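Here is a hedged sketch of that ticket-backfill idea using scikit-learn: train a classifier on historical faults labeled by customer-confirmed impact, then score a new fault. The feature names and tiny dataset are illustrative, not drawn from any real product.

```python
# Score likelihood of customer impact from ticket-confirmed history (sketch).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# (fault features, customer confirmed impact?) -- invented examples
history = [
    ({"event": "linkDown", "env": "prod", "redundancy_ok": 0}, 1),
    ({"event": "linkDown", "env": "lab",  "redundancy_ok": 0}, 0),
    ({"event": "linkDown", "env": "prod", "redundancy_ok": 1}, 0),
    ({"event": "highTemp", "env": "prod", "redundancy_ok": 1}, 0),
]
X_raw, y = zip(*history)

vec = DictVectorizer()              # one-hot encodes the string features
X = vec.fit_transform(X_raw)
clf = LogisticRegression().fit(X, list(y))

new_fault = {"event": "linkDown", "env": "prod", "redundancy_ok": 0}
prob = clf.predict_proba(vec.transform([new_fault]))[0, 1]
print(f"likelihood of customer impact: {prob:.0%}")
```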

Which KPI is Important?

The last idea I have seen is around service quality management. In service management, customers complain that templates and models need to be predefined. Typical SLAs do not have the detail required to support a technology model for tracking them, and the research required to determine which performance metrics drive service quality takes too much time and effort. With machine learning and public algorithms like Granger causality, a new possibility emerges: the service manager can identify and maintain the model, whatever the product offered. How could it work? My thought is simple: use root-level metrics (availability, latency, bandwidth) to provide a baseline. All other vendor OIDs or custom KPIs available can be collected and stored. With machine learning, you can develop models for each root metric and each custom metric, and with artificial intelligence, you can identify which custom metrics predict the degradation of a root one. Those are the metrics you want to poll more frequently, give higher priority, and use to power service quality metrics. The result would be less high-frequency polling and more meaningful prediction for service quality management.
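For the curious, here is a minimal Granger-causality check with statsmodels on synthetic data: does a vendor KPI help predict latency, a root metric? In the scheme above, a KPI that passes this test would earn higher-frequency polling. The data and the lag are invented for illustration.

```python
# Does `kpi` Granger-cause `latency`? (synthetic illustration)
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
kpi = rng.normal(size=n)                                   # e.g., queue depth
latency = np.roll(kpi, 2) + rng.normal(scale=0.1, size=n)  # lags kpi by 2

# column order matters: tests whether column 2 predicts column 1
df = pd.DataFrame({"latency": latency, "kpi": kpi})
results = grangercausalitytests(df[["latency", "kpi"]], maxlag=4)
p_value = results[2][0]["ssr_ftest"][1]   # lag-2 F-test p-value
print(f"lag-2 p-value: {p_value:.4f}")    # tiny p => kpi predicts latency
```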
 
 
Let me know your thoughts. These are some of the crazier ideas I have seen/heard, but I am sure you have heard of others.


Automated Detection and Mitigation of Chronic Issues

Let's continue our discussion of machine learning and AI, focusing on chronic issues. The key is the automated detection and mitigation of chronic issues. As we have discussed throughout this series, anomalies are unusual behaviors, while chronic situations occur all the time. One customer put it best: "I deal with more of the same than different every day — help me there." Chronics are not noise. The example I give is the scenario where, every night, a managed router goes down at a WAN site. The identified root cause is a router plugged into a light-switch-controlled power receptacle. After the janitorial staff finishes, they flip the switch and DOWN IT GOES. The customer doesn't care, because they turn the light back on in the morning and never notice. Operations has no way to control this problem, but they need to track it. The RCA worked and it's service-impacting, but the customer does not care. Do you leverage a business-hours suppression engine? No, because if someone is working late and it goes down, you have lost the customer. As you can see, chronics are common and frustrating for operations. Too often they waste effort and breed complacency, and giving humans within operations the power to ignore an outage is always a bad idea.

What is an Outage?

The correct solution is to look for the typical behavior of the outage: if the outage follows that pattern, suppress it; otherwise, treat it as normal. Machine learning can detect the scenario in an automated fashion. Who compares the current pattern to the learned behavior model? Artificial intelligence. The chronic detector fires off a message that suppresses the outage during the learned window, and the anomaly detector can override it. This covers chronic conditions while still providing exits from the model that revert the chronic suppression. Together, humans in operations can focus on what they do best — ACT — instead of what can be difficult: remembering and tracking.
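A tiny sketch of the suppression logic, under the assumption that the learned behavior reduces to a nightly time window; a real model would be richer (duration, recovery pattern, day of week), but the shape is the same.

```python
# Chronic-window suppression (toy): suppress outages inside the learned
# window; anything outside it is treated as a normal, actionable outage.
from datetime import datetime

class ChronicWindow:
    def __init__(self, history):
        hours = [t.hour for t in history]
        self.start, self.end = min(hours), max(hours) + 1  # learned window

    def suppress(self, event_time: datetime) -> bool:
        return self.start <= event_time.hour < self.end

# learned from weeks of nightly "router down" events around 22:00
model = ChronicWindow([datetime(2017, 5, d, 22, 15) for d in range(1, 15)])
print(model.suppress(datetime(2017, 5, 20, 22, 30)))  # True: expected chronic
print(model.suppress(datetime(2017, 5, 20, 14, 0)))   # False: act on it
```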
 

Example – Firmware Bugs

We have discussed a customer-driven behavior model, but what about a technology-driven one? One of my customers is doing heavy amounts of work in SDN/NFV. They have a VNF whose vendor "firmware" had a nasty reboot bug. The trouble? The reboot completed in under one second and recurred every three days for every VNF, depending on its boot cycle. Their system caught it, but the issue was chronic: their network services dropped traffic and sessions entirely every three days. It took weeks to understand the bug, but with chronic detection it becomes a snap. Machine learning would include the firmware version in the model, and hundreds of VNFs misbehaving on the same version would identify the problem. Machine learning with chronic detection would prevent a new ticket from opening every time it occurred; instead, it would correlate to a root cause — bad vendor firmware. Once identified, operations can escalate to the vendor while keeping their screens clear of all the random reboots.
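A minimal sketch of that correlation, with invented data: group the suspicious sub-second reboots by firmware version, and the fleet-wide pattern points at the firmware rather than the individual boxes.

```python
# Correlate sub-second reboots by firmware version (illustrative data).
from collections import Counter

reboots = [
    {"vnf": "vFW-017", "firmware": "3.2.1", "duration_s": 0.8},
    {"vnf": "vFW-044", "firmware": "3.2.1", "duration_s": 0.7},
    {"vnf": "vFW-102", "firmware": "3.1.9", "duration_s": 45.0},
    {"vnf": "vFW-213", "firmware": "3.2.1", "duration_s": 0.9},
]

# count only the suspicious sub-second reboots, per firmware version
suspects = Counter(r["firmware"] for r in reboots if r["duration_s"] < 1.0)
version, count = suspects.most_common(1)[0]
print(f"likely root cause: firmware {version} ({count} matching reboots)")
```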

Takeaways

With proper chronic detection and mitigation, operations is free to do what it does best. No longer are their screens cluttered with non-actionable events. No longer do operational learning curves start at six months and stretch past eighteen. Operations needs the freedom to assimilate new technology; handling change with ease is the direction the business is demanding. How do you get there? By simplifying operations so they can do what they do best — ACT.


Learning Your Operational Performance

In business intelligence reporting, a common area is learning your operational performance: tracking operations' workload and results. While this can be a sticky subject for operations, it's also a great opportunity to improve. It's a fact that, when overloaded, operations suffers in the quality of its response, so it's only common sense to track the NOC like you track the network. If operations is overloaded and quality is slipping, operations needs to know so that remediation can occur. That could mean staff augmentation or improved training regimes to drive better results. The trouble is how. Many focus on ticketing solutions, where ITIL compliance allows management to set operational performance specifications. But those levels are not real-time. How does it help to know you needed help last Wednesday?

Where Machine Learning Comes Into Play

Again, ML/AI technology helps. Fault managers (most call these "event managers") can track user and automation interactions with faults. Machine learning can be applied to this audit trail to create a standard operational model: a discovered model. Say a common fault usually takes 10 actions and 15 minutes to fix during business hours. When the NOC deviates from that baseline, good or bad, the AI can alert the group: either GREAT JOB, here is the new bar, or let's RALLY, we are falling behind.
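A toy version of that discovered model is below: baseline how long a fault type usually takes to resolve, then flag deviations in either direction. The numbers are invented, and a real model would segment by shift, severity, and more.

```python
# Flag deviations from the learned resolution-time baseline (toy sketch).
from statistics import mean, stdev

# audit-trail history: minutes to resolve each past "linkDown" fault
resolution_minutes = {"linkDown": [14, 15, 16, 15, 13, 17, 15]}

def check(fault_type: str, minutes: float, z: float = 2.0) -> str:
    history = resolution_minutes[fault_type]
    mu, sigma = mean(history), stdev(history) or 1.0
    if minutes < mu - z * sigma:
        return "GREAT JOB: new bar set"
    if minutes > mu + z * sigma:
        return "let's RALLY: we are falling behind"
    return "on pace"

print(check("linkDown", 9))    # well under the ~15-minute norm
print(check("linkDown", 25))   # well over it
```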

Proactive Workload Management

Let's get into the details. Say machine learning exposes that, at a certain time of day and day of week, you normally see 4 level-1 tickets, 5 level-2 tickets, and 15 level-3 tickets. Then the system shows a systemic increase: 2x, then 5x, then 10x. AI agents can see this risk and alert, showing that an abnormal number of tickets are open. Operations managers can call in resources, and the system can send an advisory email to the ticketing administrators asking for a health check. Without ML/AI technology, running and interpreting reports takes so much time that most organizations will not even try; for those that do, the latency between needing a change and recognizing that need can be weeks.
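Here is a small sketch of that proactive check: compare current open tickets against the learned norm for the hour/day slot and alert when a systemic multiple appears. The norms and threshold are invented for illustration.

```python
# Alert when ticket volume hits a systemic multiple of the learned norm.
LEARNED_NORM = {("wed", 14): {"L1": 4, "L2": 5, "L3": 15}}  # from ML baselining

def surge_alerts(slot, open_tickets, factor=2.0):
    alerts = []
    for level, norm in LEARNED_NORM[slot].items():
        count = open_tickets.get(level, 0)
        if count >= factor * norm:
            alerts.append(f"{level}: {count} open vs. normal {norm}")
    return alerts

# 2x, 5x, and 10x the norm respectively -> call in resources
print(surge_alerts(("wed", 14), {"L1": 8, "L2": 25, "L3": 150}))
```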

Positive Impact to Operations

The result of operational performance monitoring should be a smoother-running operations team. Fewer errors and happier customers are what every NOC should strive to provide the organization. Accomplishing this with zero human touch, at a latency of less than 15 minutes, has been unimaginable functionality up to this point. The difference is the emergence of ML/AI technologies.
 
Let me know what you think in the comments below. This can be a cringeworthy conversation to have with operations, but I do believe that near real-time operations performance management has value to NOCs today.


About the Author

Serial entrepreneur and operations subject matter expert who likes to help customers and partners achieve solutions that solve critical problems. Experience in traditional telecom, ITIL enterprise, global managed service providers, and datacenter hosting providers. Expertise in optical DWDM, MPLS networks, MEF Ethernet, COTS applications, custom applications, SDDC virtualized, and SDN/NFV virtualized infrastructure. Based in the Dallas, Texas area and currently working for one of his founded companies, Monolith Software.