Monday, September 19, 2016




Need for a re-evaluation of the security model with IoT
-              Is a singular model for enterprise IT + OT possible?


















The Internet of Things (IoT), the next wave of technology disruption, first came into the limelight at the 2009 Intel Developer Forum as the “continuum of computing” from PCs, tablets and smartphones to other devices or “things” connected to the internet. Indeed, IoT is now well past the peak of inflated expectations it reached in 2011 (Gartner Hype Cycle), and many claim this wave will be as profound as the internet itself.











Gartner forecasts that 6.4 billion connected things will be in use worldwide in 2016, up 30 percent from 2015, reaching 20.8 billion by 2020. In addition, nearly $6 trillion will be spent on IoT solutions over the next five years!

However, we have challenges ahead of us. It is acknowledged by most technology pundits and CIOs that security is by far the #1 impediment to mass-scale adoption of IoT solutions and will gate the lofty projections cited by Gartner and others.

To address this challenge, the enterprise security model needs a re-evaluation with a holistic view of both IT + OT as the total protection boundary.

Given this requirement, is there a singular and unified architectural approach to deploying a next-generation security system? Or should we treat IT and OT as fundamentally different landscapes with their own architectures and build “secure integration bridges” between them?

To answer this question, we need to address multiple dimensions of the IoT security challenge. Most significant is that a number of the foundational security architectures that have matured in enterprise IT over the past two decades simply do not fit the OT landscape. In addition, the lines of decision making between the operations groups managing OT (e.g., industrial controls) and the IT/CIO function have historically been independent of each other. The result is the lack of an end-to-end blueprint for security, potentially opening a backdoor for breaches from the OT environment into the IT infrastructure.

So how different is the IoT environment from IT? Five factors stand out:

1.     Lack of visibility of IoT devices on the network - 50% of the security operations personnel surveyed (Forescout 2016 survey) expressed “little to no confidence” that they were aware of ALL the IoT devices on their network. This is a serious problem: a breach from a compromised device leaves no trace-path because the device is invisible to the security administrator. It is reminiscent of the first extensive enterprise-wide security attacks over the internet in 2001, when the Code Red and Nimda worms hit in rapid succession. It took us two weeks on a brutal 24x7 schedule to physically account for ALL servers (we had no central asset management system back then) across a large multi-site IT infrastructure. That nerve-racking intervention required us to take the servers offline and patch them individually. A scenario like this in the OT environment has the potential to be 10x worse: given the sheer scale, heterogeneity and lack of visibility, it could take months to root-cause the breach and remediate.
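At its core, this visibility gap is an inventory-reconciliation problem. A minimal sketch of diffing the known asset inventory against what is actually observed on the wire (the device IDs here are invented for illustration):

```python
# Diff the asset inventory against devices actually seen on the network.
# Device IDs are made up for this example.

inventory = {"plc-01", "plc-02", "hvac-01"}           # what IT thinks it has
observed = {"plc-01", "plc-02", "hvac-01", "cam-77"}  # from passive network monitoring

unknown = observed - inventory   # devices the security admin cannot trace
missing = inventory - observed   # assets that have gone dark
```

In practice the “observed” set would come from passive traffic monitoring rather than active scanning, since active scans can disrupt fragile OT devices.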

2.         Machine-to-machine (M2M) connected networks – IoT devices in general hold very long persistent sessions once they are authenticated onto the network. This is in contrast to traditional IT infrastructure components, which involve a high degree of human-machine interaction and short-lived sessions. The persistent nature of M2M sessions at scale (thousands of devices) and heterogeneity (many different models, some with no built-in security) offer multiple attack surfaces and a perfect source for an exploit that can go unnoticed for a long time.


3.    Traditional enterprise security models do not scale and will not fit – Identity “measurement” (for example, x86-based platform trusted boot) and authentication mechanisms using a centralized public key infrastructure (PKI) and certificate authority (CA) for binding identities are ill-suited for deployment in OT environments. Many of these sensors run on 8/16/32-bit CPUs with limited RAM and battery power and cannot support a trusted-boot model (at least in current times). Moreover, given the distributed, at-scale nature of the OT network, we need a distributed “PKI” equivalent enabled as a network service (versus a central instance).
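One commonly discussed alternative for such constrained devices is symmetric, HMAC-based challenge-response, which avoids certificate handling entirely. A minimal sketch; key provisioning and rotation are out of scope, and the key and nonce values are placeholders:

```python
import hmac
import hashlib

# Illustrative only: a symmetric challenge-response that even a small
# CPU can compute, avoiding full PKI. The device proves possession of
# its per-device key without ever sending the key itself.

def device_response(device_key: bytes, challenge: bytes) -> bytes:
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = device_response(device_key, challenge)
    return hmac.compare_digest(expected, response)  # constant-time compare

key = b"per-device-secret"   # provisioned at manufacture (assumption)
challenge = b"nonce-1234"    # fresh random nonce from the verifier
resp = device_response(key, challenge)
ok = verify(key, challenge, resp)
```

The trade-off is key distribution: a symmetric scheme pushes the scaling problem into secure key provisioning, which is why a distributed network-service equivalent of PKI remains attractive.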

4.         Lack of communication standards – The enterprise has moved to an IP world, which enables a scalable and rich fabric of communication and data services on that protocol stack. The OT world, by comparison, is the “wild west”: at last count it had 15+ communication protocols (and growing). This poses a significant challenge for the interoperability and integration services needed for a singular SIEM system that can bring IT + OT under the same management and orchestration environment.
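The integration-service idea reduces to per-protocol adapters that normalize heterogeneous OT events into one schema a SIEM can ingest. A sketch, with hypothetical field layouts for two such protocols:

```python
# Per-protocol adapters normalize raw OT events into a common schema.
# The raw field layouts below are hypothetical, not real protocol specs.

def normalize_modbus(raw):
    return {"source": raw["unit_id"], "event": raw["function"],
            "protocol": "modbus", "value": raw["value"]}

def normalize_bacnet(raw):
    return {"source": raw["device"], "event": raw["service"],
            "protocol": "bacnet", "value": raw["payload"]}

ADAPTERS = {"modbus": normalize_modbus, "bacnet": normalize_bacnet}

def to_siem(protocol, raw):
    """Route a raw event through the right adapter for SIEM ingestion."""
    return ADAPTERS[protocol](raw)

event = to_siem("modbus", {"unit_id": 7, "function": "write_coil", "value": 1})
```

Adding support for a new protocol then means writing one adapter rather than touching the SIEM itself.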

5.         Legacy – Yes, there are a ton of IoT devices (from before the word IoT was coined!), especially in industrial control systems (manufacturing, oil & gas, utilities/power generation, etc.). A majority of these devices were installed in the 70’s and 80’s, have no intrusion detection or prevention systems, and are expected to be around for a long time. Their security has been managed through “moats” and proprietary management environments. These customers do not want any agents or software installed on these systems for fear of introducing risk and new variability. Any new IoT system therefore has to encapsulate the legacy infrastructure to create one unified framework, while ensuring the flexibility of security zones needed to protect individual environments from each other.

Given these key challenges, we will need to temper our assumptions on how fast the B2B environment will adopt IoT. The progression will certainly start with greenfield environments (smart homes and connected cars, for example), but the uphill task for the majority of the industry running today’s infrastructures will be to overcome the constraints discussed above. More on how to overcome these constraints and build viable solutions in my next blog.

Prasad


 




Thursday, September 15, 2016

Enterprise Storage disruption – need for an end-user deployment recipe

The disruption of storage with the rapid innovation of Flash technology and software architectures has fundamentally changed the face of the industry.

Consolidation and mergers have been rampant and will no doubt establish a new set of leaders over the next 3-5 years. This was best punctuated by Dell closing its $60 billion merger with EMC last week.

This rapid innovation cycle with Flash technologies has also led to an explosion of storage products and architectures (more on this later) and poses a significant challenge for the IT professional in rationalizing “what’s the best recipe” for rolling out next-generation infrastructure. To compound the problem, a sizable portion of enterprise workloads is also migrating to the public cloud (expected to be 50% by 2021 – VMW 2016 keynote), posing yet another set of choices based on the cloud service provider selected by the firm.

Contrast this with my first experience deploying enterprise storage (circa 1997) as part of the infrastructure roll-out and transformation to a 100% eBusiness corporation. As you may well imagine, the scope of this deployment at this leading semiconductor firm was colossal (web front end, ERP, EAI, ETL, B2B, content management, data warehouse, security, etc.). Central to the enterprise blueprint was the company’s data management strategy and the deployment of a scalable and secure storage infrastructure. However, it was not hard to make an architectural and product decision given the mandate to move away from the mainframe (IBM/DB2). We rolled out a Fibre Channel SAN infrastructure consisting of “7 global SAN islands” reflecting our workload segmentation characteristics based on RPO/RTO, performance and security objectives.

Fast-forward to 2016: what does it take for a CIO to deliver a next-generation infrastructure for a fast-growing multi-billion dollar enterprise? The complexity of the task, along with the array (no pun intended!) of choices, is daunting. Here is a growing list of architectures/product segments, all vying for a disproportionate share of the $37 billion (2015 IDC) enterprise storage systems market and the CIO’s mindshare.

1.         Traditional and/or Hybrid storage arrays

2.         All Flash Arrays

3.         Hyper-converged (HCI) and traditional converged infrastructure

4.         PCIe/NVMe extreme performance appliances (4-6u and blade form factors)

5.         Emerging NVM fabric (RDMA/IWARP) Rack scale solutions

6.         SDS (software defined storage)

7.      Intelligent IO offload products coupled with # 1 - #6.

8.        …

The IT professional in 2016 has to grapple with cost-effective choices that can be presented to the C-suite as part of a hybrid cloud strategy, not just be “sold” on the merits of the next “killer” product. To address this, there will be a need for a workload <=> product/architecture mapping model that best fits the performance, agility and TCO needs of the customer. Most large IT shops have their own benchmarking environments to wring this out on a case-by-case basis and define their course. However, this exercise is destined to get much more complex with m products x n cloud service providers in the decision matrix. Perhaps there is an easier way!
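Such a workload <=> architecture mapping model could start as a simple weighted scoring matrix over the product segments listed above. A toy sketch; the weights and scores are entirely illustrative placeholders, not benchmark data:

```python
# Rank candidate storage architectures for a workload by weighting
# performance, agility and cost. All numbers are illustrative only.

ARCH_SCORES = {  # (performance, agility, cost-effectiveness), 1-5 scale
    "all_flash_array": (5, 3, 2),
    "hci":             (3, 5, 4),
    "sds":             (3, 4, 5),
}

def rank(weights):
    """Return architectures ordered best-first for the given
    (performance, agility, cost) weighting of the workload."""
    perf_w, agil_w, cost_w = weights
    scored = {arch: perf_w * p + agil_w * g + cost_w * c
              for arch, (p, g, c) in ARCH_SCORES.items()}
    return sorted(scored, key=scored.get, reverse=True)

best = rank((1, 1, 3))[0]  # a cost-sensitive workload
```

Real weights and scores would come out of the benchmarking exercises mentioned above; the point is that once encoded, the same matrix can be re-run as the m x n product/provider space grows.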

To mediate this, the ecosystem needs to offer a much simpler and unified menu for IT, so that we do not re-create the very complexity we are trying to escape in the migration journey to a cross-cloud landscape as next-generation IT.

Complexity evolution in IT - looking back and now


Prasad Rampalli
9/5/2016

The notion of simplifying the IT environment for optimized TCO and Agility is not new. For many of us involved in designing or running IT, complexity has come in many avatars over the past three decades.

My first experience with this came between 1987 and the early 90’s, while deploying Manufacturing Execution Systems (MES) in wafer fabrication facilities at a leading semiconductor company.

Our big challenge was dealing with the complexity of configuring, and ensuring timely change control of, the manufacturing process. What made this daunting was the combinatorial effect of a given process: 10+ routes x 100+ process steps per route x 100+ pieces of equipment x 1000’s of statistical process control and engineering parameters x 100,000’s of lots... I am sure you get the picture.

We had an army of shop floor analysts dedicated to keeping up with these changes and ensuring the overall system reflected what was really going on on the shop floor at all times. However, we found ourselves slow (and costly) in responding to needed changes, and as a result the quality of the data from the shop floor system was never 100%.

To solve this problem, we felt we needed to eliminate the shop floor systems analyst as the “middle-person” and have the process engineers or planners in the fab implement these changes directly into the system. This was not an easy task: making these changes through the UI needed a skilled analyst who understood the intricacies of the shop floor modules and their data relationships, to ensure changes were validated for any downstream effects on logic and accuracy.

After some brainstorming with my team and the peer industry network (AMD, Harris, TI, National...), we felt the ideal solution was to create a standard “declarative format” in a business-friendly language to front-end all our shop floor configuration. We called it the “rules or spec driven enterprise”. The idea was to make the fab process spec the “master” and embed an active declarative language directly into the online spec. We would then “push-button” the changes into the real system once the process changes in the online spec were approved by the engineering or manufacturing lead in the fab. In essence, we would “mask” the inherent complexity of the system from the end user (the process engineer or planner in this case) by codifying the shop-floor rules as an integral part of the process spec.
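In today’s terms, the idea amounts to expressing shop-floor rules as declarative data that the system interprets, rather than logic an analyst hand-codes. A toy sketch; the rule fields and lot record are invented for illustration:

```python
# Shop-floor rules as declarative data: each rule says "if a lot matches
# these conditions, enforce these limits". The engineer edits the rules,
# not the system code. Fields and values are invented for illustration.

RULES = [
    {"if": {"step": "etch"},  "require": {"max_temp_c": 180}},
    {"if": {"step": "litho"}, "require": {"max_temp_c": 120}},
]

def violations(lot, rules):
    """Return (lot_id, parameter) pairs for every rule limit the lot exceeds."""
    out = []
    for rule in rules:
        if all(lot.get(k) == v for k, v in rule["if"].items()):
            for param, limit in rule["require"].items():
                if lot.get("temp_c", 0) > limit:
                    out.append((lot["lot_id"], param))
    return out

v = violations({"lot_id": "L42", "step": "litho", "temp_c": 150}, RULES)
```

The payoff is that a process change becomes a data edit, approved and “push-buttoned” into production, instead of a code change routed through an analyst.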

Think of it as the “dev-ops” for running the shop floor back in the 1990’s!

Did it work? Not really. We didn’t have the technology maturity in our modeling tools and deployment architecture to make this a reality (not to mention organization/business transformation issues – a whole different topic). The notion of a UML (Unified Modeling Language) approach addressing the interoperability challenges, along with standard semantics for the shop floor in semiconductor manufacturing, just had too many hurdles to overcome in 1990.

Since then, I have seen this need to tame complexity come up many times, at different levels of the solution stack, in my 30+ years in the industry. Here is a smattering (not meant as an all-inclusive list):
  1.         The rollout of packaged ERP systems in the late 90’s brought about the need for enterprise integration as a key architectural focus. EAI (enterprise application integration) technologies with message-bus and hub-and-spoke architectures became the panacea for solving the “spaghetti point-to-point” complexity of applications.
  2.         We invested millions of dollars in corralling data management and one version of “truth” for master data with enterprise data management tools, ETLs and scalable (albeit proprietary) data warehouses.
  3.         Virtualization led to VM sprawl, and the need for operations automation became paramount. “Infrastructure as code” was born, with SOI (service oriented infrastructure) and IDEs (integrated development environments) to address IT agility.


As a glass half full view – I think we made progress in each phase.

Fast-forward to the cloud era: complexity continues to be the #1 challenge (besides security, of course!) in the transition currently underway. In the 90’s we were dealing with a spaghetti “mess” inside the four walls of the enterprise; now the same problem has oozed into hybrid or cross-cloud deployment architectures, as it is a given that an F1500 IT shop will typically have multiple public clouds plus its own private cloud in the coming decade (VMW 2016 key theme).

The good news (unlike circa 1990): the entire infrastructure, OS and application stack can now truly be represented in software, thanks to virtualization of compute, storage and now the network, plus containerization. The ability to create a “model driven cross-cloud architecture” and implement it as a real-time DevOps model in rich, user-friendly declarative formats is sorely needed to accelerate this transition. There are many players jumping into this space, each from their position of strength, and it will be interesting to see the emerging winners.
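As a sketch of what such a declarative, model-driven format might look like: one model of a service, rendered into per-provider provisioning shapes. The provider names and size tables below are illustrative inventions, not real cloud APIs:

```python
# One declarative model of a service, rendered into per-provider shapes.
# Provider names, size tables and the model itself are all hypothetical.

MODEL = {"service": "web-tier", "vcpus": 4, "memory_gb": 16, "replicas": 3}

SIZES = {  # each provider names the same (vcpus, memory) shape differently
    "cloud_a": {(4, 16): "std-4x16"},
    "cloud_b": {(4, 16): "general.m4"},
}

def render(model, provider):
    """Translate the cloud-neutral model into provider-specific instances."""
    size = SIZES[provider][(model["vcpus"], model["memory_gb"])]
    return [{"name": f"{model['service']}-{i}", "size": size}
            for i in range(model["replicas"])]

plan = render(MODEL, "cloud_a")
```

The same model renders against any provider in the table, which is the essence of keeping the cross-cloud “spaghetti” out of the model itself.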

More on this in my next blog.

Prasad
