Any vendor saying they can do it all for the Internet of Things (IoT) is either lying or delirious… this could not have been clearer in our Dell EMC World IoT booth earlier this fall. One of the biggest reasons that a single vendor cannot do it all is that the IoT is really a grouping of many (maybe hundreds of) different use cases, each requiring connecting to different things, deploying different architectures, and engaging different partners.

We displayed this at Dell EMC World with 15 use case demonstrations, all wrapped around the story of how the IoT helps ice cream get made and delivered more efficiently. Across these 15 use cases we actually had 22 different partners from the Dell IoT Solutions Partner Program in our booth with us, showing connectivity, analytics, security, and even augmented reality software solutions built on top of our edge computing product offering.

To help our customers address the diverse use cases of the IoT, we develop what we call Blueprints. To build these Blueprints, we select the right infrastructure products from our broad Dell Technologies portfolio, identify the best software partners who bring best practices and deep use case expertise, and finally validate the solution in our IoT labs. We do all of this work so customers don’t have to build a solution from scratch, but can instead customize a tested offering and get to ROI faster.

So, I am sure you are dying to know at this point what the 15 different use cases were that we showcased around ice cream. The storyline actually started with all of the energy production needed to power the ice cream factory and supply chain. SAP showed how real-time analytics on an oil rig can improve safety and efficiency for Oil and Gas operations; Emerson, OSIsoft, and Microsoft demoed a solution for remote critical equipment monitoring focused on valves in a refinery; and ThingWorx and Vuforia (PTC companies) showed how augmented reality-enabled field maintenance improves operator precision.
Continuing with energy, FogHorn Systems showed that wind energy asset management can improve energy production predictions, and finally ELM FieldSight presented its microgrid energy management to automate and optimize the use of solar, battery, and grid power.

Along with energy, another key part of ice cream manufacturing is data (or IT) infrastructure. Tridium Niagara (a Honeywell company) and Controlco demoed their solution for IT Critical Infrastructure Management, which ties facility data together with IT equipment data to improve asset uptime.

The next ice cream use case was around agriculture efficiency and quality, where Bosch showed how farms are the next smart factories. From here the energy and the raw ingredients each flow into the smart factory, where we showcased four different manufacturing use cases. Kepware and Software AG demonstrated how they could consume streaming data from an industrial mixer and perform Predictive Maintenance to identify potential maintenance events and plan downtime accordingly. To measure overall equipment effectiveness and improve return on assets (RoA), IBM showcased its factory optimization offering. To ensure the quality of the ice cream packaging rolling off the factory floor, Eigen Innovations exhibited its real-time quality control solution. The last use case in the smart factory portion of our booth was Wurldtech (a GE company) showcasing Operational Network Security for the IIoT, ensuring that all the data from the industrial equipment was secure.

Ice cream is only good if the refrigerated warehouses and trucks keep it frozen. IMS Evolve demonstrated its Cold Chain Logistics offering to ensure that the supply chain is using energy for refrigeration as efficiently as possible.
To make sure the ice cream gets there quickly and the fleet of delivery vehicles is as efficient as possible, Nokia presented its Fleet Management solution. The final stage in the supply chain is the environment of the smart city, where Riptide provided Smart Retail Facilities Management, helping the retailer reduce operating costs while maintaining a high quality product offering. We all want to be safe when we are buying ice cream, so V5 Systems demoed its Video Surveillance and Gunshot Detection solution, which is solar powered and installs in an hour or so.

There you go: 15 diverse use cases requiring connectivity to different things, deployment of different architectures, and engagement with 22 different partners. For those thinking the IoT was mostly theoretical, the showcase demonstrated real-world solutions. We did not actually have a partridge in a pear tree in our booth, because we ran out of space with all our use cases, but we did give away 2,000 ice cream treats, which I think is better anyway.
Meeting with customers and partners, especially those who are on the edge of innovation, is always thought-provoking. Recently, in meetings with Communications Service Providers, the discussion has often turned to the current limitations of Network Function Virtualization in addressing some of their long-term architectural concerns, and I’ve often gotten into a debate that starts with a simple question: “What about microservices architectures?” It might be framed as “Cloud Native” or “Containers,” but it is the same core question regardless of terminology. The intrinsic question I think is being asked is this: how do we simplify and isolate network functions, and reduce/reuse/centralize the valuable flow and state information contained within each network function?

What Are Microservices?

Before providing my perspective here, let’s level set on microservices architectures. In the interest of keeping the blog short, I will direct you to a 2015 blog by EMC entitled “Five Things You Need to Know About Microservices”. The context in that blog is focused on an e-commerce application, but the same logic applies to any software function.

The blog calls out a few things:

This is not a new conversation. Microservices have been on the hype curve since 2015, when the world started to discover how organizations such as Apple, Google, Facebook, Netflix and others managed to iterate and innovate with speed and agility.

The foundational principle of microservices is disaggregation. Rather than referring to the disaggregation of hardware and software (NFV), or the disaggregation of control and data plane (Software-Defined Networking), this disaggregation specifically targets the software stacks themselves.

The goal in unlocking value in a large application, like a network function, is to disaggregate it into a set of composable services. Those services can expose a set of APIs to applications or other services.

Disaggregating software stacks has an innate impact on operations.
Companies that have embraced and leveraged microservices architectures have had to change operational processes and technical skill sets, most notably building an increased ability to develop (or script) applications to take advantage of these new APIs. DevOps and Continuous Integration / Continuous Delivery (CI/CD) principles are intrinsically linked to microservices conversations, with processes ensuring stable development, test, and delivery. The net result of this effort is accelerated innovation, improved manageability, increased resiliency, and higher scalability.

Applying Microservices to Network Functions

Applying the same design logic of microservices for enterprise applications to network functions requires us to first define some of those microservices. Today, a Virtual Network Function (VNF) is a tightly integrated software application consisting of a data plane for bit processing, a control plane that programs the data plane based on events/information received, control logic, a database for storing state information, and a message bus for communicating between services. The APIs between these components are either closed (vendor-proprietary) or non-existent (the boundaries between software functions are not exposed externally).
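To make the decomposition concrete, here is a minimal, purely illustrative sketch (all class and method names are hypothetical, not any vendor's API) of the VNF components just described as separate services that communicate only through explicit interfaces, rather than closed internal ones:

```python
class StateStore:
    """Stands in for the integrated database (a candidate for Database as a Service)."""
    def __init__(self):
        self._state = {}

    def put(self, key, value):
        self._state[key] = value

    def get(self, key):
        return self._state.get(key)


class MessageBus:
    """Stands in for the integrated message bus (a candidate for MBaaS)."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers.get(topic, []):
            handler(event)


class DataPlane:
    """Stands in for the bit-processing data plane (a candidate for a shared data plane)."""
    def __init__(self):
        self.table = {}

    def install(self, prefix, next_hop):
        self.table[prefix] = next_hop

    def forward(self, prefix):
        return self.table.get(prefix, "drop")


class ControlPlane:
    """Control logic: programs the data plane in response to events on the bus."""
    def __init__(self, bus, store, data_plane):
        self.store = store
        self.data_plane = data_plane
        bus.subscribe("route-update", self.on_route_update)

    def on_route_update(self, event):
        # Persist state externally, then program the data plane.
        self.store.put(event["prefix"], event["next_hop"])
        self.data_plane.install(event["prefix"], event["next_hop"])


bus, store, dp = MessageBus(), StateStore(), DataPlane()
cp = ControlPlane(bus, store, dp)
bus.publish("route-update", {"prefix": "10.0.0.0/8", "next_hop": "eth1"})
print(dp.forward("10.0.0.0/8"))  # the installed next hop: eth1
```

Because each component sits behind its own API here, any one of them could in principle be swapped for a shared infrastructure service without touching the others, which is exactly the disaggregation argument being made.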
This architecture has allowed networking vendors to continue to differentiate their product offerings across a number of parameters:

- Performance
- Scale
- Latency
- Feature differentiation
- Ease of operations

And further differentiate their companies across another two:

- Velocity of feature development
- Services and support

Such a model has persisted for the entire history of networking, both in the enterprise Local Area Network (LAN) and in the Wide Area Network (WAN), and has stretched across fixed and mobile networks from DSL and cable to 3G, 4G, and even the pending 5G architectures.

Going forward, with a microservices approach, we see that many of the core functions of a VNF may get disaggregated, with the potential that many of these disaggregated functions are commoditized in open source, turning what were once differentiators in a composed network architecture into infrastructure services in a composable network architecture model.

What microservices can we expect to become infrastructure services in the long run?

- Integrated databases give way to Database as a Service
- Integrated data planes become shared data planes
- Integrated message buses become Message Bus as a Service (MBaaS)

Not only does this microservices framework allow for disaggregation, it also allows for re-aggregation in new and exciting ways. Much like service function chaining allows network functions to be arbitrarily arranged in series, such superservices allow services to be functionally chained together in parallel. As such, a packet can be taken off the wire one time, replicated across a set of services (firewall, IPS, DPI, etc.), and analyzed against their individual rules (or control logic). If the packet does not meet one or more of the rules associated with the individual services, a packet handling decision (i.e., drop) can be made.

I won’t go so far as to say that this architectural model is either imminent or ubiquitous. In fact, there is still considerable maturing of container networking that needs to happen.
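The parallel superservice idea can be sketched in a few lines. This is a hypothetical illustration, not a real product: the packet is pulled off the wire once, fanned out to each service's rules concurrently, and dropped if any service vetoes it. The service names and rules below are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def firewall(packet):
    # Illustrative rule: block a known-bad source address.
    return packet["src"] != "203.0.113.7"

def ips(packet):
    # Illustrative rule: reject an obvious signature in the payload.
    return b"exploit" not in packet["payload"]

def dpi(packet):
    # Illustrative rule: allow only recognized application protocols.
    return packet["app"] in {"http", "dns", "tls"}

SERVICES = [firewall, ips, dpi]

def handle(packet):
    """Replicate one packet across all services in parallel; drop on any veto."""
    with ThreadPoolExecutor(max_workers=len(SERVICES)) as pool:
        verdicts = list(pool.map(lambda svc: svc(packet), SERVICES))
    return "forward" if all(verdicts) else "drop"

ok = {"src": "198.51.100.2", "payload": b"GET /", "app": "http"}
bad = {"src": "203.0.113.7", "payload": b"GET /", "app": "http"}
print(handle(ok), handle(bad))  # forward drop
```

The point of the sketch is the fan-out: the packet is copied once and every service evaluates its own control logic independently, rather than each function in a serial chain re-parsing the packet.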
Further, not all network functions will fit neatly into the prescribed architecture, and we will find network functions that leverage pieces of this infrastructure services framework while keeping other, more differentiated componentry integrated.

The implications for the network function vendor community are as important as those facing the CSPs themselves (discussed in the next section). As a means of competing in an increasingly disaggregated market, the levers of differentiation get smaller:

- Performance differentiation goes away (since software functions leverage a shared data plane)
- Scale differentiation goes away (since the infrastructure offers database services)
- Latency differentiation may go away (MBaaS)

The net result is that product and corporate differentiation in network functions is largely limited to:

- Differentiated feature sets in the control plane[i]
- Feature velocity
- Ease of operationalization

Historically, features get commoditized as the industry adopts them, so this is not sustainable. Maybe the only sustainable differentiator is feature velocity, meaning that the pace of innovation, and the ability to operationalize that innovation, is the only differentiator left?

The Industry Quandary

If you are a vendor who embraces disaggregation in all facets, the challenge is daunting: can we build an architecture for communications service providers that is truly open, scalable, and able to adapt as this future world unfolds? Can those communications service providers take the building blocks of the solution and swap them out as needed, rather than getting a completely and tightly locked-in vertical solution? If you are a CSP who embraces disaggregation as a means to optimize around next-generation service delivery models, the challenge is even more daunting: do I jump to a virtualized version of my network function (i.e., router, firewall, etc.), or do I wait a little longer until those functions are disaggregated into the infrastructure?

How Should the Industry React?
If we have learned one thing in the constantly changing world of network communications, it’s that foundational paradigm shifts such as the one above seldom happen in real time. Instead, they go through iterations, all objectively targeting the ideal end architectural state. Perhaps the answer is to begin this journey with an eye on the end goal. I think there are three things we, as an industry, and especially the CSPs, can do now to prepare:

- Truly understand this changing world. Even if they don’t do the integration themselves, CSPs need to really understand it so that they get the architecture right.
- Be able to swap new, highly pluggable blocks into the solution easily. Even if they get a software integrator to help pull a new part into the solution, their operational processes need to be able to quickly take this new block and put it into production.
- Start adapting operational processes toward DevOps, most specifically incorporating a Continuous Integration / Continuous Delivery (CI/CD) process for network operations, with an end goal of eliminating the periodic, lengthy maintenance windows in favor of an ongoing rollout of new functionality.

[i] Historically, features get commoditized as the industry adopts them and as standards organizations create specifications, so this is not sustainable.
One of the greatest things about product releases and the excitement they draw is the opportunity to sit down and talk with customers. Yes, customers want to talk to us about what’s new, but more importantly, they want to share incredible stories about their journeys. As you listen to these stories, it becomes crystal clear that the “technology knife fights” vendors get into pale in comparison to the economics and business realities these businesses face. Let’s unpack what this means.

Digital Transformation – it’s not just a buzzword, it’s the new reality. Every organization in every industry is taking big strides to apply technology to its business. The truth is, it’s not even a choice anymore; it’s a key vehicle for gaining competitive advantage and, in many cases, a mode of survival. I remember a few years ago when Walmart, on a financial analyst call, stated, “We are a technology company.” Yes, they know retail better than most anyone, but their future success depended on transitioning to this digital age. Data is now one of the most critical and valuable assets organizations can own, but only if it is leveraged to drive operations and deliver insights.

Digital transformation presents an interesting dilemma for businesses: they have substantial technology investment decisions to make, and concerns about their ability to consume and onboard that investment in technology in a manner that isn’t overly disruptive. Cloud becomes one of the most obvious solutions organizations look to – consume only what you need, scale up to meet growing demands, and put the power in the hands of employees – what an incredible promise. But there are organizational realities that must be accounted for, especially when injecting technology like cloud.

Skillsets and Process Mismatches

Most organizations have already made substantial technology investments, and as a result, resources have been hired with the skillsets tied to those investments.
New technology and sweeping changes require new investments in reskilling and can be bad for employee satisfaction, morale, and retention. Additionally, existing processes likely add tremendous value and are often the product of extensive research and applied learnings over the years. Will you miss these processes if they are removed? What should they be replaced with? If these questions are not answered for both your skillsets and your processes, the result is often the injection of a lot of risk into the organization, or costly remediation efforts.

Talent Scarcity

There is vast competition for technical talent. These resources are being bombarded with tantalizing offers to join big-brand tech organizations and up-and-coming startups. Your business could face an uphill battle securing the best and brightest, and it could necessitate paying a premium for some tech roles. This can result in not having the right skillsets in house to make use of all these new technologies.

Putting Innovation on Hold

The whole point of digitally transforming is to apply technology to the business and leverage data as a competitive advantage. The longer it takes to digest the infusion of technology into the business and apply it to business activities, the longer it takes to see the fruits of your digital transformation. And the larger the scale of the change, the longer it takes: IT staff can get bogged down in a protracted migration or re-platforming effort that has a material impact on a quarter or fiscal year.

Elevating Your Cloud Strategy

It is essential for your organization to land on a pragmatic cloud strategy, one that applies a filter of your people, processes, and objectives while selecting the technologies to onboard and to what degree. It’s not so much a question of what “operational nirvana” looks like.
It’s a matter of what can feasibly be accomplished.

Take a look at your existing processes and see what can be leveraged and ported to the cloud to lower the friction of adoption and reduce the complexity of the shift to a new operating environment.

Don’t get sucked into bleeding-edge technologies and the promises they might hold if there isn’t a corresponding way to incorporate them into your business. Search for solutions that align with your organization’s resources, or for widely adopted technologies that offer a wide selection of resources.

Cloud offers a lot of great things, but you shouldn’t let cloud exuberance get in the way of an orderly transition to this new environment. Applications should be vetted, strategies should be clear, and early wins should be established before attempting massive overhauls.

It is important to keep your options open by deploying in hybrid environments that offer portability and reduce the management overhead of maintaining two or more clouds. Many CIOs we’ve talked to indicate that even when they make a substantial investment in the cloud, they maintain facilities to mitigate the risk should a cloud exodus be required later.

By applying a methodology that accounts for business imperatives and sustainability in your technology selections and investments, you can avoid the pitfall of over-pivoting in your desire to put a compelling technology in place. At the end of the day, technology needs to serve the business need, even if you’re now a technology company.

To learn more about hybrid cloud solutions and how they can help your organization, visit: Dell Technologies Cloud
Hanoi was a perfect setting for our APJ Partner Advisory Board (PAB) meeting earlier this month. Hanoi is bustling and always full of action—just like our Dell Technologies Partner Program.

Partner Advisory Board members have been a great sounding board, providing constant feedback and a continuous reality check on our performance, on areas of improvement, and on all the things we continue to enhance to make this program the best in the industry. Our regular and frequent PAB meetings ensure we keep our eyes and ears close to the ground, and every meeting brings forth interesting insights and perspectives that further strengthen our commitment to partners and customers.

The APJ Partner Advisory Board meeting in Hanoi saw high levels of interaction and engagement, had many firsts, and was once again scored highly by all participants. Partners showed particular interest in Dell Technologies’ working capital and consumption-based offerings.

For the first time we hosted a dedicated Distribution PAB session to better understand how our distributors are advancing their digital future as well as extending the reach of our entire partner ecosystem. Our Distribution partners also participated in a workshop on emerging technologies such as 5G, AI & ML, future technology trends, and modern workloads at the new frontier of edge automation and management.

MARC struck a chord

Taking our PAB members through the Many Advocating Real Change (MARC) training program was another first for our partners in APJ—and they simply loved it. Dell Technologies is the first company in the IT industry to implement the MARC training program. MARC challenges us to question the hard-wired prejudices we carry around subconsciously. It is important to be aware of this conditioning and to consider it in our daily decision-making. The training covered inclusive leadership strategies, sharpening awareness of inequalities, unconscious biases and privilege, and honing skills for lasting impact.
We believe that women and men need to work together to change bias in the workplace. Through the MARC training, we heard insights on diverse leadership styles and approaches to management. The overall message was a reminder that people are different, and real diversity comes from creating an open environment where everyone feels included and valued, where we can express our views, and where we are all empowered to do our best work. It was timely that this gathering included our first female member of the APJ PAB.

Extending this discussion to our partners is priceless, as our partner ecosystem is an extension of our Dell Technologies brand. As a sign of our commitment to this initiative, I urge all our APJ PAB & Distribution partners to share this message on diversity and inclusion in their organizations.

Thank you to all our APJ PAB & Distribution members for joining us on this purposeful journey together, in the true spirit of partnership.

To learn more about the momentum Dell Technologies is driving with partners, watch my Mid-Year Partner Update for APJ. Also, follow us @DellTechPartner and @Tian_Beng_Ng, and tell us your #MARCtraining takeaway!
LOS ANGELES (AP) — Most California Roman Catholic bishops are asking a judge to throw out a 2019 law that allowed accusers of clergy sexual abuse to sue even if they were molested decades ago. Motions filed this month in southern and northern superior courts ask judges to rule Assembly Bill 218 unconstitutional. California is among at least 15 states that have extended the window for people to sue institutions over long-ago abuse. Attorney John Manley, who’s handled some of them, calls the California church challenge “morally reprehensible and hypocritical.” The California church already has paid more than $1 billion to settle previous claims.