

Software-defined everything
Breaking virtualization’s final frontier
Amid the fervor surrounding digital, analytics, and cloud, it is easy to overlook advances currently being made in infrastructure and operations. The entire operating environment—server, storage, and network—can now be virtualized and automated. The data center of the future represents the potential for not only lowering costs, but also dramatically improving speeds and reducing the complexity of provisioning, deploying, and maintaining technology footprints. Software-defined everything can elevate infrastructure investments, from costly plumbing to competitive differentiators.

VIRTUALIZATION has been an important background trend, enabling many emerging technologies over the past decade. In fact, we highlighted it in our very first Technology Trends report six years ago.1 While the overall category is mature, many adoption efforts have focused primarily on the compute layer. Servers have been abstracted from dedicated physical environments to virtual machines, allowing automated provisioning, load balancing, and management processes. Hypervisors—the software, firmware, or hardware that controls virtual resources—have advanced to a point where they can individually manage a wide range of virtual components and coordinate among themselves to create breakthroughs in performance and scalability.
Meanwhile, other critical data center components have not advanced. Network and storage assets have remained relatively static, becoming bottlenecks limiting the potential of infrastructure automation and dynamic scale.  Enter software-defined everything. Technology advances now allow virtualization of the entire technology stack—compute, network, storage, and security layers. The potential? Beyond cost savings and improved productivity, software-defined everything can create a foundation for building agility into the way companies deliver IT services.  
Network building blocks
Software-defined networking (SDN) is one of the most important building blocks of software-defined everything. Like the move from physical to virtual machines for compute, SDN adds a level of abstraction to the hardware-defined interconnections of routers, switches, firewalls, and security gear. Though communication gear still exists to drive the physical movement of bits, software drives the data plane (the path along which bits move) and, more importantly, the control plane, which routes traffic and manages the required network configuration to optimize the path. The physical connectivity layer becomes programmable, allowing network managers— or, if appropriate, even applications—to provision, deploy, and tune network resources dynamically, based on system needs.
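A minimal sketch of the programmability described above, assuming a hypothetical SDN controller that exposes a REST API (the endpoints, payload fields, and token are illustrative, not any specific vendor's interface): the control plane is driven by software calls rather than device-by-device configuration.

```python
# Sketch only: programming the SDN control plane through a hypothetical REST controller,
# so network segments and forwarding rules are provisioned by software, not by hand.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}              # placeholder credential

def provision_segment(name: str, vlan_id: int, bandwidth_mbps: int) -> str:
    """Ask the controller to create an isolated network segment (micro-segment)."""
    payload = {
        "name": name,
        "vlanId": vlan_id,
        "qos": {"guaranteedBandwidthMbps": bandwidth_mbps},
        "policy": "deny-all-except-registered-flows",
    }
    resp = requests.post(f"{CONTROLLER}/segments", json=payload, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["segmentId"]          # illustrative response field

def steer_flow(segment_id: str, src: str, dst: str, port: int) -> None:
    """Install a forwarding rule: the control plane picks the path, the data plane moves the bits."""
    rule = {"segmentId": segment_id, "source": src, "destination": dst, "dstPort": port}
    requests.post(f"{CONTROLLER}/flows", json=rule, headers=HEADERS, timeout=10).raise_for_status()

if __name__ == "__main__":
    seg = provision_segment("analytics-tier", vlan_id=210, bandwidth_mbps=500)
    steer_flow(seg, src="10.20.0.0/24", dst="10.30.0.10", port=5432)
```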
SDN also helps manage changing connectivity needs for an increasingly complex collection of applications and end-user devices. Traditional network design is optimized for fixed patterns, often in a hierarchical scheme that assumes predictable volume between well-defined end points operating on finite bandwidth. That was acceptable in the early days of distributed computing and the Web. Today, however, many organizations must support real-time integration across multiple servers, services, clouds, and data stores, enable mobile devices initiating requests from anywhere in the world, and process huge and expanding volumes of internal and external data, which can cause traffic spikes. SDN helps manage that complexity by using micro-segmenting, workload monitoring, programmable forwarding, and automated switch configuration for dynamic optimization and scaling.
Software-defined everything
The network is not the only thing being reimagined. Software-defined storage (SDS) represents logical storage arrays that can be dynamically defined, provisioned, managed, optimized, and shared. Coupled with compute and network virtualization, entire operating environments can be abstracted and automated. The software-defined data center (SDDC) is also becoming a reality. A Forrester report estimates that “static virtual servers, private clouds, and hosted private clouds will together support 58 percent of all workloads in 2017, more than double the number installed directly on physical servers.”2 This is where companies should focus their software-defined everything efforts.
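To make the idea concrete, here is a minimal sketch of a declarative environment definition spanning compute, network, and storage. The field names and the plan() helper are assumptions for illustration, not any particular SDDC product's format, but they show how an entire operating environment can be expressed, and then provisioned, as software.

```python
# Sketch only: a declarative, software-defined environment spec; a real platform would
# translate a spec like this into API calls against its virtualization, SDN, and SDS layers.
ENVIRONMENT = {
    "name": "claims-processing-prod",
    "compute": [{"role": "app", "vcpus": 4, "memory_gb": 16, "count": 6},
                {"role": "db",  "vcpus": 8, "memory_gb": 64, "count": 2}],
    "network": {"segments": ["web", "app", "data"],
                "policies": ["web->app:443", "app->data:5432"]},
    "storage": [{"tier": "ssd", "size_gb": 500, "replicas": 3, "attach_to": "db"}],
}

def plan(spec: dict) -> list[str]:
    """Turn the declarative spec into an ordered list of provisioning actions."""
    actions = [f"create segment {s}" for s in spec["network"]["segments"]]
    actions += [f"allow flow {p}" for p in spec["network"]["policies"]]
    actions += [f"carve {v['size_gb']} GB {v['tier']} volume (x{v['replicas']} replicas)"
                for v in spec["storage"]]
    actions += [f"boot {c['count']} x {c['role']} VMs ({c['vcpus']} vCPU / {c['memory_gb']} GB)"
                for c in spec["compute"]]
    return actions

if __name__ == "__main__":
    for step in plan(ENVIRONMENT):
        print(step)
```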
Yet, as you determine scope, it is important to recognize that SDDC cannot and should not be extended to all IT assets. Applications may have deep dependencies on legacy hardware. Likewise, platforms may have hooks into third-party services that will complicate migrations, or complexities across the stack may turn remediation efforts into value eroders. Be deliberate about what is and is not in scope. Try to link underlying infrastructure activities to a broader strategy on application and delivery model modernization.
Show me the value
A recent Computer Economics study found that data center operations and infrastructure consume 18 percent of IT spending, on average.3 Lowering total cost of ownership by reducing hardware and redeploying supporting labor is the primary goal for many SDDC efforts. Savings come from the retirement of gear (servers, racks, disk and tape, routers and switches), the shrinking of data center footprints (lowering power consumption, cooling, and, potentially, facility costs), and the subsequent lowering of ongoing recurring maintenance costs.
Moving beyond pure operational concerns and cost outlays can deliver additional benefits. The new solution stack should become the strategic backbone for new initiatives around cloud, digital, and analytics. Even without systemic changes to the way systems are built and run, projects should see gains through faster environment readiness, the ability to engineer advanced scalability, and the elimination of power/connectivity constraints that may have traditionally lowered team ambitions.

Sidebar: Software-defined savings
A Deloitte analysis of normalized data from software-defined data center (SDDC) business cases for Fortune 50 clients revealed that moving eligible systems to an SDDC can reduce spending on those systems by approximately 20 percent. These savings can be realized with current technology offerings and may increase over time as new products emerge and tools mature. Not every system is suited for migration; ideal candidates are those without tight integration to legacy infrastructure or bespoke platforms.
Company profile: revenue $25+ billion; technology spend $5+ billion; employees 150,000+.
Illustrative figures (values in $ millions): total technology spend $5,000; spend in scope for SDDC $3,350; reduced spend due to SDDC $2,700. Accompanying charts break SDDC savings down by type (hard versus soft) and by lever. Hard savings are those that result in direct bottom-line savings; soft savings are those that result in improved productivity and efficiency and the redirection or redeployment of labor and resources.
Lever 1 (optimize infrastructure): taking advantage of economies of scale, better aligning demand and supply to reduce underutilized assets, and simplifying the environment by moving to standard platforms.
Lever 2 (orchestrate infrastructure labor): automating labor tasks through automation and orchestration to reduce manual work, hand-offs, errors, and process bottlenecks.
Lever 3 (automate operations): automating development operations, particularly production support-type activities, to increase productivity and reduce support costs.
Source: Deloitte Consulting LLP proprietary research.
Leading IT departments are reimagining themselves by adopting agile methodologies to fuel experimentation and prototyping, creating disciplines around architecture and design, and embracing DevOps.4 These efforts, when paired with platform-as-a-service solutions, provide a strong foundation for reusing shared services and resources. They can also help make the overall operating environment more responsive and dynamic, which is critical as organizations launch digital and other innovative plays and pursue opportunities related to the API economy, dimensional marketing, ambient computing, and the other trends featured in this report.
From IT to profit
Business executives should not dismiss software-defined everything as a tactical technical concern. Infrastructure is the supply chain and logistics network of IT. It can be a costly, complex bottleneck—or, if done well, a strategic weapon. SDDC offers ways to remove recurring costs. Organizations should consider modernizing their data centers and operating footprints, if for no other reason than to optimize their total cost of ownership. They should also pursue opportunities to build a foundation for tomorrow's business by reimagining how technology is developed and maintained and by providing the tools for disruptive digital, cloud, analytics, and other offerings. It's not just about the cloud; it's about removing constraints and becoming a platform for growth. Initially, first movers will likely benefit from greater efficiencies. Yet, soon thereafter, they should be able to use their virtualized, elastic tools to reshape the ways their companies work (within IT, and more importantly, in the field), engage customers, and perhaps even design core products and offerings.
Lessons from the front lines
Driving tomorrow
Cisco Systems Inc. is currently developing a suite of products dubbed Application Centric Infrastructure (ACI), featuring tight integration between physical and virtual elements. The goal is to move software-defined everything beyond hardware and infrastructure into applications and business operations. What's more, rather than only abstracting and automating network components, ACI provides hooks into compute and storage components, tools, service level agreements, and related services like load balancing, firewalls, and security policies. According to Ishmael Limkakeng, Cisco's vice president of sales, "ACI will enable cost efficiency, flexibility, and the rapid development and implementation of business apps that drive the bottom line."
As an alpha customer for its own technology, Cisco is currently in the midst of a three-year internal ACI roll-out. To date, IT teams have constructed ACI infrastructure in a number of data centers and have begun a multi-year journey of migrating the company’s portfolio of 4,067 production applications.5 In deploying ACI internally, Cisco CIO Rebecca Jacoby is looking to achieve productivity gains by simplifying the provisioning, management, and movement of resources. At the heart of this strategy lies an innovative policy model in which approaches for configuring, using, reusing, and deploying the company’s network become standardized.
Cisco’s ACI deployment teams are working toward reducing overall IT operating expenses by 41 percent. This goal includes 58 percent cost savings in network provisioning and 21 percent cost savings in network operations and management. Moreover, they expect a fourfold increase in bandwidth, which, in turn, could lead to a 25 percent savings in capital expenditures, a 45 percent reduction in power usage, and a physical footprint 19 percent smaller than it was before ACI deployment.
Finally, Cisco aims to improve the flexibility of the use of those resources—expanding from just productivity in IT, to productivity for the business. Cisco has already seen performance gains via its CITEIS private cloud—an implementation of the VCE Vblock architecture stack. And the ACI business case includes a potential 12 percent optimization of compute resources and a 20 percent improvement in storage capacity.6
Divvying up expertise with a PaaS7
AmerisourceBergen’s IntrinsiQ unit is a leading provider of oncology-focused software. IntrinsiQ’s applications automate nearly every element of oncology—from treatment options to drug prescriptions—and the software is onsite at more than 700 clinics across the United States and Canada. These installations are important for the company’s physician clients, but installing and maintaining software across hundreds of locations comes with significant overhead.
To ease the overhead burden, IntrinsiQ opted to deploy a single-instance, multitenant application to reduce software development and long-term maintenance costs. The 80-person company, however, had no experience in multi-tenancy or in developing the underlying layers of a cloudbased application. Moreover, in moving to the cloud, IntrinsiQ had to demonstrate to its customers that the security of patient data would continue to be compliant with strict industry regulations.
To reduce costs, bolster innovation, and accelerate time to market, IntrinsiQ partnered with PaaS provider Apprenda. Apprenda’s platform provided back-end functions including multi-tenancy, provisioning, deployment, and, perhaps most importantly, security. IntrinsiQ continued to focus on developing oncology applications, as well as incorporating the new private cloud offering into its IT delivery and support models.
The division of duties achieved the needed results. The company’s oncology software specialists were able to work at high productivity levels by focusing on the application functionality and leveraging the PaaS software’s out-of-the-box management tools. IntrinsiQ was able to collapse its development schedule and reduce its costs: The new customer-facing, cloud-based application hit the market 18 months earlier than planned. Additionally, the cloud-based solution has made IntrinsiQ more affordable for smaller oncology clinics—expanding the company’s potential market.
The backbone of global connected commerce
Each day, eBay Inc. enables millions of commerce and payments transactions. eBay Marketplaces connects more than 155 million users around the world who, in 2014 alone, transacted $83 billion in gross merchandise volume. And PayPal’s 162 million users transacted more than $228 billion in total payment volume just last year.8 For eBay Inc., scale is not optional: It is the foundation of all its operations and a critical component of the company’s future plans.
Over the last decade, eBay Inc.’s platform and infrastructure group recognized that as the company grew, its infrastructure needs were also growing. The company’s IT footprint now included hundreds of thousands of assets across multiple data centers. Product development teams wanted to get from idea to release in days, but environments often took months to procure and “rack and stack” via traditional methods. Moreover, the scope of eBay Inc.’s business had grown far beyond online auctions to include offline commerce, payment solutions, and mobile commerce—all domains in which reliability and performance are essential.
The company decided to make infrastructure a competitive differentiator by investing in an SDDC to drive agility for innovation and efficiency throughout its operations while simultaneously creating a foundation for future growth. The company kicked off its SDDC efforts by tackling agility through the construction of a private cloud.
The first step in this process was standardization. Standardizing network design, hardware SKUs, and procedures created homogeneity in the infrastructure, setting the foundation for automation and efficiency.
Historically, the company had procured servers, storage, and network equipment on demand when each team or project requested infrastructure. Cloud solutions, on the other hand, made it possible to decouple the acquisition of compute, storage, and network resources from the provisioning cycles. This allowed for better partnership with the vendor ecosystem through disciplined supply-chain practices, and enabled on-demand provisioning for teams and projects that required infrastructure. Today, engineers are able to provision a virtual host in less than a minute and register, provision, and deploy an application in less than 10 minutes via an easy-to-use portal.
The infrastructure team took an end-to-end approach to automation, all the way from hardware arriving at the data center dock to a developer deploying an application on a cluster of virtual machines. Automation of on-boarding, bootstrapping, infrastructure lifecycle, imaging, resource allocation, repair, metering, and chargeback was core to building on-demand infrastructure with economic efficiency.
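As an illustration of the metering and chargeback piece, the sketch below prices each team's metered consumption against an assumed rate card; the teams, volumes, and rates are invented for the example.

```python
# Sketch only: metering and chargeback, one of the automated functions described above.
# Usage is metered per consuming team and priced against illustrative unit rates so
# infrastructure cost is visible on demand.
RATES = {"vcpu_hours": 0.03, "memory_gb_hours": 0.004, "storage_gb_months": 0.05}  # assumed rates

usage = [  # metered consumption, e.g. pulled from the provisioning system
    {"team": "search",   "vcpu_hours": 120_000, "memory_gb_hours": 480_000, "storage_gb_months": 9_000},
    {"team": "payments", "vcpu_hours":  45_000, "memory_gb_hours": 180_000, "storage_gb_months": 2_500},
]

def chargeback(record: dict) -> float:
    """Price one team's metered usage against the rate card."""
    return sum(record[metric] * rate for metric, rate in RATES.items())

for rec in usage:
    print(f"{rec['team']:10s} ${chargeback(rec):,.2f}")
```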
Compute on demand was only the beginning of eBay Inc.'s SDDC strategy. Next, the company tackled higher-order functions like load balancing, object storage, databases-as-a-service, configuration management, and application management. By creating a portfolio of internal cloud computing services, the company was able to add software-defined capabilities and automate bigger pieces of its software development infrastructure. The infrastructure team now provides product teams with the software-enabled tools they need to work more like artists and less like mechanics.
Beyond creating agility, the software-defined initiative has also helped drive enforceable standards. From hardware engineering to OS images to control plane configuration, standardization has become an essential part of eBay Inc.'s strategy to scale. The development culture has shifted away from one in which people ask for special kinds of infrastructure. Now, they are learning how to build products for standard infrastructure that comes bundled with all the tools needed to provision, develop, deploy, and monitor applications through all stages of development. IT still receives (and supports) occasional special requests, but the use of container technologies helps it manage these outliers. These technologies provide engineers with a "developer class of service" where they are able to innovate freely as they would in a start-up—but within components that will easily fit into the broader environment. Moreover, knowing beforehand which commodity will be provided has helped both development and operations become more efficient.
eBay Inc. is working to bring software-defined automation to every aspect of its infrastructure and operations. In the company's network operating centers (NOCs), for example, advanced analytics are now applied to the 2,000,000 metrics gathered every second from infrastructure, telemetry, and application platforms. Traditionally, the company's "mission control" was surrounded by 157 charts displayed on half a dozen large screens that provided real-time visibility into that carefully curated subset of metrics and indicators engineers deemed most critical to system stability. These same seven screens can now cover more than 5,000 potential scenarios by only displaying those signals that deviate from the norm, thus making the detection of potential issues much easier than before. And, with software-defined options to segment, provision, and deploy new instances, engineers can take action in less time than it would have previously taken them to determine which of the screens to look at. Through its SDDC initiative, the company is now able to direct its energy toward prevention rather than reaction, making it possible for developers to focus on eBay Inc.'s core disciplines rather than on operational plumbing.
Next-generation infrastructure
Since joining Little Rock-based Acxiom, a global enterprise data, analytics, and SaaS company, in 2013, Dennis Self, senior vice president and CIO, has made it his mission to lead the organization into the new world of software-defined everything. “Historically, we’ve propagated physical, dedicated infrastructure to support the marketing databases we provide customers,” he says. “With next-gen technologies, there are opportunities to improve our time to deliver, increase utilization levels, and reduce the overall investment required for implementation and operational activities.”
In early 2014, Self's team began working to virtualize that infrastructure—including the network and compute environments. The goal is to build next-generation infrastructure-as-a-service and network-as-a-service capabilities that help create economies of scale, improve speed of delivery, and increase overall efficiency through automation. This effort is helping to drive innovation at the application level as well. Software engineers are able to quickly design new solutions to help customers cleanse and integrate the data required to enable their marketing strategies and related activities.
Though there is considerable work still to be completed, Acxiom is already seeing benefits. Provisioning, which once took days or weeks, now takes minutes or hours—allowing teams to quickly jumpstart new product offerings and internal projects.
Self sees Acxiom's current efforts as part of a disruptive trend that may transform the way businesses approach the infrastructure, databases, and software that they need to succeed. "I can foresee a point in the future when infrastructure will be commoditized and offered at market rates by a handful of utility companies," he says. "That will allow IT teams to focus on other, more strategic IT services and solutions that will help enable business strategies and operations."

Software-defined everything
My take
Greg Lavender, managing director, Cloud Architecture and Infrastructure Engineering, Office of the CTO, Citi
At Citi, the IT services we provide to our customers are built on top of thousands of physical and virtual systems deployed globally. Citi infrastructure supports a highly regulated, highly secure, highly demanding transactional workload. Because of these demands, performance, scale, and reliability—delivered as efficiently as possible—are essential to our global business operations. The business organizations also task IT with supporting innovation by providing the IT vision, engineering, and operations for new services, solutions, and offerings. Our investments in software-defined data centers are helping on both fronts.
With 21 global data centers and system architectures ranging from scale-up mainframes and storage frames to scale-out commodity servers and storage, dealing effectively with large-scale IT complexity is mission-critical to our business partners. We became early adopters of server virtualization by introducing automation to provisioning several years ago, and we manage thousands of virtual machines across our data centers. The next step was to virtualize the network, which we accomplished by moving to a new two-tier spine-and-leaf IP network fabric similar to what public cloud providers have deployed. That new physical network architecture has enabled our software-defined virtual networking overlays and our next-generation software-defined commodity storage fabrics. We still maintain a large traditional fiber channel storage environment, but many new services are being deployed on the new architecture, such as big data, NoSQL and NewSQL data services, grid computing, virtual desktop infrastructure, and our private cloud services.
Currently, we are engaged in three key objectives to create a secure global private cloud. The first objective focuses on achieving “cloud scale” services. As we move beyond IT as separate compute, network, and storage silos to a scale-out cloud service model, we are building capabilities for end-to-end systems scaled horizontally and elastically within our data centers— and potentially in the not-too-distant future, hybrid cloud services. The second objective is about achieving cloud speed of delivery by accelerating environment provisioning, speeding up the deployment of updates and new capabilities, and delivering productivity gains to applications teams through streamlined, highly automated release and lifecycle management processes. The results so far are measurable in terms of both client satisfaction and simplifying maintenance and operations scope. The final objective is to achieve ongoing cloud economics with respect to the cost of IT services to our businesses. More aggressive standardization, re-architecting, and re-platforming to lower-cost infrastructure and services is helping reduce technical debt, and will also help lower IT labor costs. At the same time, the consumption of IT services is increasing year over year, so keeping costs under control by adopting more agile services allows our businesses to grow while keeping IT costs manageable. Our new CitiCloud platform-as-a-service capabilities—which feature new technologies such as NoSQL/NewSQL and big data solutions along with other rapid delivery technology stacks—help accelerate delivery and time to market advantages. Packaging higher-level components and providing them to application teams accelerates the adoption of new technologies as well. Moreover, because the new technology components have strict compliance, security, and DevOps standards to meet, offering more tech stacks as part of platform-as-a-service provides stronger reliability and security guarantees.
By introducing commodity infrastructure underneath our software-defined architectures, we have been able to incrementally reduce unit costs without compromising reliability, availability, and scale. Resiliency standards continue to be met through tighter controls and automation, and our responsiveness—measured by how quickly we realize new opportunities and deliver new capabilities to the business—is increasing.
Focusing IT on these three objectives—cloud scale, cloud speed, and cloud economics—has enabled Citi to meet our biggest challenge thus far: fostering organizational behavior and cultural changes that go along with advances in technology. We are confident that our software-defined data center infrastructure investments will continue to be a key market differentiator—for IT, our businesses, our employees, our institutional business clients, and our consumer banking customers.

Cyber implications
RISK should be a foundational consideration as servers, storage, networks, and data centers are replatformed. The new infrastructure stack is becoming software-defined, deeply integrated across components, and potentially provisioned through the cloud. Traditional security controls, preventive measures, and compliance initiatives have been challenged from the outset because the technology stack they sit on top of was inherently designed as an open, insecure platform. To have an effective software-defined technology stack, key concepts around access, logging and monitoring, encryption, and asset management need to be reassessed and, if necessary, enhanced if they are to remain relevant. There are new layers of complexity, new degrees of volatility, and a growing dependence on assets that may not be fully within your control. The risk profile expands as critical infrastructure and sensitive information is distributed to new and different players. Though software-defined infrastructure introduces risks, it also creates opportunities to address some of the more mundane but significant challenges in day-to-day security operations.
Security components that integrate into the software-defined stack may be different from what you own today—especially considering federated ownership, access, and oversight of pieces of the highly integrated stack. Existing tools may need to be updated or new tools procured that are built specifically for highly virtual or cloud-based assets. Governance and policy controls will likely need to be modernized. Trust zones should be considered: envelopes that can manage groups of virtual components for policy definition and updates across virtual blocks, virtual machines, and hypervisors. Changes outside of controls can be automatically denied and the extended stack can be continuously monitored for incident detection and policy enforcement.
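A minimal sketch of how a trust zone check might work, with invented zone names and policy fields: proposed changes to virtual components are evaluated against the zone's policy envelope and denied automatically when they fall outside it.

```python
# Sketch only: "trust zone" policy enforcement. Groups of virtual components share a
# policy envelope; configuration changes outside that policy are automatically denied.
TRUST_ZONES = {
    "pci-cardholder": {"encryption_required": True, "allowed_images": {"hardened-rhel-9"},
                       "open_ports": {443}},
    "general-apps":   {"encryption_required": False, "allowed_images": {"rhel-9", "ubuntu-22.04"},
                       "open_ports": {80, 443}},
}

def evaluate_change(zone: str, change: dict) -> tuple[bool, str]:
    """Approve or deny a proposed VM/network change against its zone's policy."""
    policy = TRUST_ZONES[zone]
    if policy["encryption_required"] and not change.get("disk_encrypted", False):
        return False, "denied: unencrypted disk in an encryption-required zone"
    if change.get("image") not in policy["allowed_images"]:
        return False, f"denied: image {change.get('image')} not in zone baseline"
    if not set(change.get("open_ports", [])) <= policy["open_ports"]:
        return False, "denied: requested ports exceed zone policy"
    return True, "approved"

print(evaluate_change("pci-cardholder",
                      {"image": "hardened-rhel-9", "disk_encrypted": True, "open_ports": [443]}))
print(evaluate_change("pci-cardholder",
                      {"image": "ubuntu-22.04", "disk_encrypted": False, "open_ports": [443, 22]}))
```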
Just as importantly, revamped cyber security components should be designed to be consistent with the broader adoption of real-time DevOps. Moves to software-defined infrastructure are often not just about cost reduction and efficiency gains; they can set the stage for more streamlined, responsive IT capabilities, and help address some of today’s more mundane but persistent challenges in security operations. Governance, policy engines, and control points should be designed accordingly—preferably baked into new delivery models at the point of inception. Security and controls considerations can be built into automated approaches to building management, configuration management, asset awareness, deployment, and system automation—allowing risk management to become muscle memory. Requirements and testing automation can also include security and privacy coverage, creating a core discipline aligned with cyber security strategies.
Similarly, standard policies, security elements, and control points can be embedded into new environment templates as they are defined. Leading organizations co-opt infrastructure modernization with a push for highly standardized physical and logical configurations. Standards that are clearly defined, consistently rolled out, and tightly enforced can be a boon for cyber security. Vulnerabilities abound in unpatched, noncompliant operating systems, applications, and services. Eliminating variances and proactively securing newly defined templates can reduce potential threats and provide a more accurate view of your risk profile.

Where do you start?
THE potential scope of a software-defined everything initiative can be daunting—every data center, server, network device, and desktop could be affected. What's more, the potential risk is high, given that the entire business depends on the backbone being overhauled. To complicate matters, an initiative may deliver real long-term business benefits yet have only vague immediate impacts on line-of-business bottom lines (depending on IT cost models and charge-back policies). Given the magnitude of such an effort and its associated cost, is it worth campaigning for prioritization and budget? If so, where would you start? The following are some considerations based on the experiences of early adopters:
Creative financing. When working within traditional budgeting channels, many organizations source efforts around SDDC as net-new, one-off investments. With this approach, allocations do not affect operating unit budgets or individual lines of business. Increasingly, organizations are looking at more creative ways to financially engineer their SDDC/SDI investments. For example, some vendors are willing to cover the up-front costs, achieving ROI from the savings realized over time. Others pursue more long-term returns by looking for ways to monetize pieces of the platform build-out.
Patterns. Software-defined everything’s flexibility makes it possible for each development team to potentially configure its own stack of virtual components tailored to its individual needs and circumstances, which can undermine efficiency gains. For this reason, companies should make standardization a design mandate from day one and utilize template-based patterns.
Setting a cadence of commonality from the beginning will help ease maintenance complexity, allow for better terms in supplier negotiations for underlying components (assuming the templates are geared towards non-differentiated services), and support the creation of standard policies around security, controls, and monitoring that can be automatically deployed and enforced.
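One way to picture a template-based pattern, using invented field names: the standard template carries the mandated security, logging, and monitoring defaults, and teams may override only a small, sanctioned set of parameters.

```python
# Sketch only: a standard environment template with embedded policy defaults.
# Teams instantiate the pattern and can change only the sanctioned parameters.
from copy import deepcopy

STANDARD_TEMPLATE = {
    "os_image": "corp-hardened-linux",
    "monitoring_agent": True,
    "log_forwarding": "central-siem",
    "encryption_at_rest": True,
    "size": "medium",            # overridable
    "instance_count": 2,         # overridable
}
OVERRIDABLE = {"size", "instance_count"}

def instantiate(overrides: dict) -> dict:
    """Create an environment definition from the template, rejecting non-sanctioned overrides."""
    illegal = set(overrides) - OVERRIDABLE
    if illegal:
        raise ValueError(f"overrides not permitted by the standard pattern: {sorted(illegal)}")
    env = deepcopy(STANDARD_TEMPLATE)
    env.update(overrides)
    return env

print(instantiate({"size": "large", "instance_count": 6}))
```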
Meeting in the middle. Drive the buildout of SDDC from the infrastructure organization, with suitably aggressive goals. In the meantime, engage with application teams to jointly determine how best to architect for new platforms and infrastructure services. New standards, patterns, and approaches will be required; by accelerating awareness, new applications can be compliant as soon as the environments are ready.
Not as easy as “lift and shift.” Architecture and development matter. Beyond the complexity of standing up and migrating the operating environment, the assets that run across the network, storage, and servers will likely require remediation. Direct references to network addresses, data structures, or server components should be redirected to the backplane. Virtualization management tools cannot dynamically scale or fail over applications that single-thread, block, or use primitive resource control constructs. Existing assets should be analyzed application by application and workload by workload to determine the technical considerations needed to support migration. The business needs should then be layered on—both the potential benefits from the new environment and the long-term viability of the solution.
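As one small example of that application-by-application analysis, the sketch below scans configuration and source files for hard-coded IPv4 addresses, the kind of direct references that would need to be redirected to a service lookup before migration. The file extensions and directory are illustrative.

```python
# Sketch only: flag literal IP addresses in application config and source so they can be
# replaced with service discovery or backplane lookups before workloads are migrated.
import re
from pathlib import Path

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
SCAN_SUFFIXES = {".properties", ".yaml", ".yml", ".xml", ".conf", ".py", ".java"}

def find_hardcoded_addresses(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, address) for every literal IPv4 address found."""
    hits = []
    base = Path(root)
    if not base.is_dir():
        return hits
    for path in base.rglob("*"):
        if path.suffix not in SCAN_SUFFIXES or not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            for addr in IP_PATTERN.findall(line):
                hits.append((str(path), lineno, addr))
    return hits

if __name__ == "__main__":
    for file, lineno, addr in find_hardcoded_addresses("./app-config"):
        print(f"{file}:{lineno}: hard-coded address {addr} -> replace with a service lookup")
```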
Commoditization and open stacks. Intelligent controls and management capabilities in the software layer can also enable organizations to transition from large, expensive, feature-rich hardware components to low-end, standardized servers deployed in massively parallel configurations. Independent nodes at risk of failing can be automatically detected, decommissioned, and replaced by another instance from the pool of available resources. This ability has led to explosive growth in the number of relatively new players in the server market—such as Quanta, which sold one out of every seven servers in 2013—as well as products from traditional large hardware manufacturers tailored to the low-end market. Various standards and implementation patterns have emerged in support of the movement, including the Open Compute Project, OpenStack, Open Rack, and OpenFlow.
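A toy sketch of the failure-handling pattern just described, with simulated health checks and invented node names: failed commodity nodes are detected, decommissioned, and replaced from a pool of standardized spares.

```python
# Sketch only: detect failed commodity nodes and replace them automatically from a spare pool.
import random

pool = ["spare-01", "spare-02", "spare-03"]          # standardized, interchangeable spares
active = {"node-a": "healthy", "node-b": "healthy", "node-c": "healthy"}

def health_check(node: str) -> bool:
    """Stand-in for a real probe (ping, service check, telemetry threshold)."""
    return random.random() > 0.2   # ~20% simulated failure rate

def reconcile() -> None:
    """Detect failed nodes, decommission them, and promote spares automatically."""
    for node in list(active):
        if not health_check(node):
            print(f"{node} failed -> decommissioning")
            del active[node]
            if pool:
                replacement = pool.pop(0)
                active[replacement] = "healthy"
                print(f"promoted {replacement} from spare pool")

if __name__ == "__main__":
    reconcile()
    print("active fleet:", sorted(active))
```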
Beyond the data center. Companies may realize numerous benefits by coupling SDDC initiatives with a broader transformation of the IT department. DevOps is a good place to start: By introducing automation and integration across environment management, and enhancing requirements management, continuous build, configuration, and release management approaches, among other tasks, development and operations teams can meet business needs more consistently and drive toward rapid ideation and deployment. Software-defined everything doesn't entirely hinge on a robust DevOps function. But together, they form a powerful bedrock for reimagining the “business of IT.”

Bottom line
IN mature IT organizations, moving eligible systems to an SDDC can reduce spending on those systems by approximately 20 percent, which frees up budget needed to pursue higher-order endeavors.9 These demonstrated returns can help spur the initial investment required to fulfill virtualization's potential by jump-starting shifts from physical to logical assets and lowering total cost of ownership. With operational costs diminishing and efficiencies increasing, companies will be able to create more scalable, responsive IT organizations that can launch innovative new endeavors quickly and remove performance barriers from existing business approaches. In doing so, they can fundamentally reshape the underlying backbone of IT and business.


Dimensional marketing
New rules for the digital age
Marketing has evolved significantly in the last half-decade. The evolution of digitally connected customers lies at the core, reflecting the dramatic change in the dynamic between relationships and transactions. A new vision for marketing is being formed as CMOs and CIOs invest in technology for marketing automation, next-generation omnichannel approaches, content development, customer analytics, and commerce initiatives. This modern era for marketing is likely to bring new challenges in the dimensions of customer engagement, connectivity, data, and insight.

ACCORDING to MBA textbooks, marketing is the “art and science of choosing target markets and getting, keeping, and growing customers through creating, delivering, and communicating superior customer value.”1 This core mission hasn't changed. However, marketing has evolved significantly in the last five years, driven by the rapid convergence of customer, digital, and marketing technologies. Marketers have access to an unprecedented amount of data to inform targeted marketing campaigns. Channel access is ubiquitous, as are touchpoints of all kinds—offline and on. Consumer messaging has morphed into social engagement, allowing companies to view their brands from the outside in.
The result is a magnification of customer expectations in terms of relevancy, intimacy, delight, privacy, and personal connections. Increasingly, organizations no longer market to masses. They are marketing to individuals and their social networks. Indeed, marketing itself has shifted from the broadcast of messages to engagement in conversations, and now to the ability to predict and rapidly respond to individual requests. Organizations are increasingly able to engage audiences on their terms and through their interests, wherever and whatever they are. And customers are learning to expect nothing less, from both B2C and B2B enterprises.
What does all of this mean for the CMO? And the CIO? To begin with, CIOs and CMOs should embrace the reality that the marketing levers of the past no longer work the same way, if at all. The front office of marketing has been recast around connectivity and engagement—seamless contextual outreach tailored to specific individuals based on their preferences, behaviors, and purchase histories. At the same time, marketing’s back office has been transformed by new technologies for accelerating and automating campaigns, content, and positioning—fueled by data and analytics. Together, these new dimensions are ushering in a new breed of marketing: dimensional marketing.
The four dimensions
In simpler times, linear constructs such as the four Ps (product, price, promotion and place) served us well as the foundational ingredients of marketing strategies. In the era of dimensional marketing, however, many companies are adding four new dimensions to the original marketing mix: engagement, connectivity, data, and technology. The concept of dimension is important. It reflects how the levers are now integrated and interrelated.
Experience is all: The engagement revolution
Over 86 percent of Americans have Internet access.2 Fifty-eight percent have smartphones, and 42 percent have tablets.3 Consumers are now using new technologies to research products and shop through a variety of channels. These connected consumers can buy from retailers regardless of geography or store opening hours. The consumer experience now demands a balance of form and function. Experiences should be personalized, contextual, and real-time to “me” in the environment and with the method that makes the most sense in the moment. This is a dramatic shift from the days of catering to broad demographics and customer segments. Organizations are armed with deep, granular knowledge of individuals; just as importantly, they have access to multiple channels through which to conduct personalized outreach. Gartner’s 2014 Hype Cycle for Web Computing found that “Many big data use cases are focused on customer experience, and organizations are leveraging a broad range of information about an individual to hyper-personalize the user experience, creating greater customer intimacy and generating significant revenue lift.”4 Every experience reflects the brand, transcending campaigns, products, sales, service, and support across channels. User experience and great design should be cornerstones of every solution, which requires new skill sets, delivery models, and interactions between the business and IT. Behind the scenes, content and digital access management are critical to a seamless integration of campaigns, sales, services, supply chains, and CRM systems.
Relationships are interactions: The connectivity revolution
One-way communication with consumers is a thing of the past. Marketers should build sustained relationships through a deep and meaningful understanding of individual customers. After all, effective relationships drive loyalty, build communities, and cultivate influencers. Meaningful relationships also require dialogue. The shift from omnichannel to omni-directional communication across channels is giving communities and individuals the opportunity to create new levels of engagement. A recent Deloitte study commissioned by eBay found that being broadly present across channels, and enabling each channel to serve the customer at any point through the purchase journey, raised brand awareness and drove loyalty.5 The study also found that leading retailers with a presence across store and non-store channels succeeded in capturing additional sales from non-store channels due to increased awareness of their products, expanded market share and/or a greater share of sales captured from competitors, and access to fast-growth channels. Social (both social technology and real-world social behavior) plays an important role by activating audiences and sustaining (or heightening) their interest through tailored, relevant content delivered on their own terms and in their own words.6
Intelligence is targeted: The information revolution
Deriving meaningful customer, sales, and product insights requires an appetite for enormous amounts of data and analytics. Gartner’s Hype Cycle for Digital Marketing found that “The hype around data-driven marketing is largely justified, and data-driven marketing will help make marketing better, faster, and more cost-effective while better aligning marketers with the marketplace, not to mention enterprise objectives, through richer, more reliable metrics.”7 And a recent Teradata survey found that 78 percent of marketers feel pressure to become more data-driven, with 45 percent agreeing that data is the most underutilized asset in the marketing organization.8 Real-time analysis can drive adjustments and improvements to marketing campaigns and promotions. Intelligence gives us the technical capability to close the loop and measure real business results by providing multiple ways to interpret and make use of data. Better targeting and visibility across the full customer life cycle enhances the use of standalone tools in areas such as campaign automation and bid management systems—indicative of the trend to understand individuals versus broad segments.
Figure: The evolution of marketing. The traditional model: marketing began as an isolated step occurring at the end of a linear business process focused on brand and awareness; core technology functions such as ERP, data, and analytics were bolted on to marketing as needed. The new model: today's marketing is a multifaceted entity with hooks into all steps of the business and product cycle; with the customer as the main actor, the business aims to integrate engagement, connectivity, information, and technology.
Channel orchestration is multidimensional: The technology revolution
Channels and customer touchpoints are constantly multiplying. Marketers now own or manage the marketing platforms, architecture, and integration required to provide a consistent experience across channels. Although marketing has evolved from broadcast to interactivity and now finally to digital, many organizational capabilities still remain in silos. With dimensional marketing, traditional, digital, customer, and enabling business systems are converging into one integrated offering that operates simultaneously in harmony. This harmony demands platforms that are deliberately designed to accommodate multiple devices and touchpoints. Contextual architecture should provide data, images, video, and transactions dynamically—and be based not just on who the customers are, but where they are, what they’ve done, and what they’re likely to want next.
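A minimal sketch of that contextual delivery, with invented rules and fields: the same request is answered with different layout and offers depending on who the customer is, where they are, and what they have done recently.

```python
# Sketch only: select a content variant and offer from simple contextual rules.
def select_content(customer: dict) -> dict:
    """Pick a content variant and offer based on device, location, behavior, and value."""
    if (customer["device"] == "mobile" and customer["near_store_km"] is not None
            and customer["near_store_km"] < 1.0):
        return {"layout": "mobile-light", "offer": "in-store pickup discount"}
    if "abandoned_cart" in customer["recent_events"]:
        return {"layout": customer["device"], "offer": "free shipping on cart items"}
    if customer["lifetime_value"] > 5_000:
        return {"layout": customer["device"], "offer": "early access to new line"}
    return {"layout": customer["device"], "offer": "seasonal promotion"}

print(select_content({"device": "mobile", "near_store_km": 0.4,
                      "recent_events": [], "lifetime_value": 1200}))
print(select_content({"device": "desktop", "near_store_km": None,
                      "recent_events": ["abandoned_cart"], "lifetime_value": 800}))
```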
A digital platform divided
The stage is set for technology and analytics to play a more impactful role in this new world—delivering seamless, contextual, and hyper-targeted customer and prospect experiences, and helping marketing departments repatriate duties from agencies through their own capabilities for automation, precision, and efficiency. CMOs, working in partnership with CIOs, should command a richer, data-driven, targeted repertoire of campaigns, promotions, and properties across multiple channels for varied customer types and objectives. Customer awareness, acquisition, conversion, and retention are top priorities and require attention and investment.9
Organization-wide platforms to target, provision, deploy, and measure digital assets are needed and should be integrated across:
Channels: offline and online and across paid, earned, and owned media
Context: based on the individual’s behavior, preferences, location, and other cues
Campaigns: pricing, promotions, and offers tailored to an individual in a specific point in time
Content: internally and externally sourced, with increasing focus on social media and video, and optimized for mobile
CIOs should be prepared for a sizeable increase in marketing technology initiatives—akin to the wave of automation in the worlds of finance and supply chain. Marketing’s expanded scope will likely require changes far beyond traditional marketing systems, with integration into CRM and ERP systems in areas such as pricing, inventory, order management, and product R&D. And, as analytics, mobile, social, and the Web become marketing’s digital battleground, CIOs should expect aggressive pushes in these areas. These forays could affect the organization’s enterprise strategy in each domain. CIOs should not settle for being responsive, informed parties as the revolution unfolds; they should be seen as strategists and act as catalysts.
Lessons from the front lines
Consumerized insurance
Amid growing competition in the insurance industry, some providers in the B2B space are taking steps to differentiate their brands and increase market share by adopting a more consumer-centric approach to marketing. In contrast to traditional product-centric strategies, this approach—which some industry trend watchers refer to as “the consumerization of B2B marketing”10— integrates different aspects of dimensional marketing such as customer experience, relationships, analytics, and technology to deliver seamless, personalized interactions across a variety of platforms.
One insurer, faced with increasing brand parity within retirement and insurance services, determined that it would need to improve its digital positioning and overall retention of assets under management to better differentiate its brand in the marketplace. The company developed a solution that featured a redesigned Web experience, a financial wellness scoring tool for customers, and a new CRM system. It also stopped trying to focus solely on educating people about product offerings and, instead, began emphasizing real testimonials from other customers. This new foundation of customer-centric marketing tools is expected to deliver a 40 percent increase in retention, as well as improved brand recall and purchase intent.
Another provider was looking to sell direct insurance to small businesses, an area traditionally underserved by large insurance providers due to the complexity involved (providing real-time, online quotes for these businesses requires considerable knowledge of unique risks and regulations that vary by geography and industry, as well as advanced analytics and predictive models to advise significant underwriting requirements). With this challenge in mind, the company set about designing a website with a front end that would be sufficiently user-friendly to prevent potential customers from getting turned off by a complicated, lengthy quote process. The end result was a responsive, intuitive site with predictive models as the DNA of the process; the site also incorporates clean UX design principles on top of a REST service layer. Customers are now able to easily and independently navigate the quote process in addition to customizing, purchasing, and managing their policies through this site.
The impact of the trend toward the “consumerization of B2B marketing” is rippling beyond messaging and rebranding. As B2C companies expand into the enterprise market, enterprise customers are increasingly expecting the same highly engaging, intuitive approach across all interactions. For the insurance industry in particular, this means simplifying, streamlining, and humanizing their messaging and technology platforms in ways that reduce the frustration customers can feel when dealing with complicated financial instruments.
Dimensional platform
Traditionally, marketers focused on demographics, organizing channels into silos, and optimizing traditional metrics such as above- vs. below-the-line spend or working vs. non-working dollars. Media buying evolved into a process in which marketers perform audience analysis, establish segments, and target each segment with banner ads, offers, and other tailored content requiring considerable human involvement and expertise. With each new channel or segment, the process complexity and content permutations increase.
Enter Rocket Fuel Inc., which has developed a marketing platform featuring an artificial intelligence (AI) engine for automated, programmatic media buying and placement. Instead of relying on static, predefined customer segments, algorithms  make decisions on media buying and placement based on real-time information— blending audience analysis, campaign management, pricing optimization, and dynamic budget allocation.
Rocket Fuel also provides ways to link channels across a full customer lifecycle. A telecommunications customer using the platform can drive placement of banner ads timed to coincide with delivery of direct mail offers, or send a text-based offer to speak to a live customer service representative if a high-value customer visits the company’s website multiple times in a day.
John Nardone, Rocket Fuel executive vice president and general manager, says, “The goal needs to be relevance, not personalization.” Consumers may not respond to something simply addressed to them, but they will likely respond to something relevant to their lives, tastes, and desires. In a time of generic junk mail, spam, and ubiquitous banner ads, understanding who each consumer is, what motivates them, and what their unique needs are matters more than ever. Rocket Fuel’s platform helps to drive contextual interactions across channels—online and off.
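For illustration only (this is not Rocket Fuel's engine), the sketch below shows the shape of a programmatic buying decision: a toy response model scores the impression, and the bid is the expected value throttled by budget pacing. Weights, values, and the pacing rule are invented.

```python
# Sketch only: score an ad impression with a toy logistic model, then bid its expected
# value, paced against the remaining campaign budget.
import math

WEIGHTS = {"site_affinity": 1.2, "recent_brand_visit": 0.9, "hour_match": 0.4, "bias": -3.0}

def predict_response(features: dict) -> float:
    """Toy logistic model estimating the probability the user responds to the ad."""
    z = WEIGHTS["bias"] + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS if k != "bias")
    return 1.0 / (1.0 + math.exp(-z))

def bid(features: dict, value_per_response: float, budget_left: float, hours_left: float) -> float:
    """Bid the expected value of the impression, throttled by a simple pacing factor."""
    expected_value = predict_response(features) * value_per_response
    pacing = min(1.0, budget_left / max(1e-9, hours_left * 50.0))   # illustrative pacing target
    return round(expected_value * pacing, 4)

print(bid({"site_affinity": 1.0, "recent_brand_visit": 1.0, "hour_match": 1.0},
          value_per_response=2.50, budget_left=400.0, hours_left=6.0))
```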
Digital first
Six years ago, Telstra, Australia’s largest telecommunications and information services provider, needed to find a new strategy to remain competitive. In 2010, Telstra was facing declining revenues and narrowing profit margins. The overall market was changing, with customers dropping fixed-line services. The internal and external environment was shifting: the company had completed a multiyear privatization, competition was rising, and non-traditional competitors in the digital space were emerging.
“The company decided that focusing on customers should be our number-one priority, and it has been ever since,” says Gerd Schenkel, Executive Director, Telstra Digital. One of the other changes facing the industry was the increase of digital channels and service options for customers. “Our customers’ digital choices continue to increase, so we needed to make sure we were offering digital solutions our customers valued.”
Telstra’s multifaceted approach for creating a high-quality online experience for customers leverages data, digital tools, and dimensional marketing techniques to transform customer engagement, service, and the traditional vendor-customer relationship. The first step—one that is ongoing—was to learn more about what customers wanted in a digital experience: specifically, how operational data can be turned into insights in ways customers find not just acceptable, but valuable. An example comes from smartphone users: As customers continue to consume more and more data, it is important to be transparent with customers about their usage to help prevent bill shock. Through digital channels, Telstra is in a position to proactively approach customers with early warnings of potential billing implications, as well as to trigger offers tailored to their individual needs. Doing this yielded a valuable insight: With dimensional marketing, traditional boundaries between marketing, sales, and service are disappearing. Almost every customer touchpoint presents opportunities to market, sell, and provide service. When service improves, customer satisfaction typically rises. Sales will likely follow.
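A minimal sketch of that kind of proactive usage alert, with invented thresholds: consumption is projected over the billing cycle, and an early warning or a tailored offer is triggered before overage charges appear.

```python
# Sketch only: project data usage over the billing cycle and decide whether to warn the
# customer or trigger a tailored offer before bill shock occurs.
def usage_action(used_gb: float, allowance_gb: float, days_elapsed: int, days_in_cycle: int) -> str:
    projected = used_gb / max(days_elapsed, 1) * days_in_cycle
    if used_gb >= allowance_gb:
        return "notify: overage already reached; offer one-off data top-up"
    if projected > allowance_gb * 1.1:
        return "notify: on track to exceed allowance; offer plan upgrade"
    if projected > allowance_gb * 0.9:
        return "notify: approaching allowance; suggest Wi-Fi offload tips"
    return "no action"

print(usage_action(used_gb=18.0, allowance_gb=20.0, days_elapsed=12, days_in_cycle=30))
```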
On the service front, Telstra is taking a similar proactive, data-centric approach. By tracking and analyzing customer support data, Telstra discovered that customers often require additional support with billing and similar inquiries that depend on the company’s legacy back-end systems. Telstra is now routinely measuring current and expected customer experiences, resolving issues, and proactively contacting the customer.
In sales, Telstra has launched the ability to push tailored offers to customers using the company website. It has also deployed several algorithms to proactively offer online customers live chat with a sales representative if it appears that they need help. The company plans to extend this capability to its service pages. A similar focus has been placed on connectivity. Telstra’s “CrowdSupport®”11  community and “Mobile Insider” program are activating influencers and advocates, soliciting more than 200,000 pieces of user-created content for servicing, product demonstrations, and broader brand promotion.
Telstra’s most recent initiative, “Digital First,” will build a digital ecosystem designed to elevate customer engagement by empowering both the customer and the company. The ecosystem aims to consolidate customer data into a single, detailed profile available for any interaction across online and offline channels: website, call center, retail store, or a service event. This would allow a Telstra representative to see a customer’s history, usage, service issues, preferences, past interactions—and, with permission, even social media activity and a photograph. This broad, detailed view of the customer should help the company provide a more consistent experience and better satisfy customer needs. For example, rather than greeting customers with a generic “How may I help you?,” having such data readily available could allow employees to greet them and provide an update on what is being done to address their specific concerns.
Schenkel says that, though Telstra is still in the early stages of its digital journey, its initiatives have already begun to pay off. “They’ve delivered significant value. What’s more, Telstra customers continue to be happier with their online experiences, with all key digital satisfaction measures improving considerably in 2014.” 

My take
Ann Lewnes, chief marketing officer, Adobe
Over the past few years, data and visibility into data have, in large part, transformed virtually everything about marketing. In this new customer-focused, data-driven environment, marketing is mission-critical: Adobe’s overall success is partly contingent on marketing’s ability to deliver personalized, engaging experiences across all channels.
The need to create such experiences has led us to develop an even deeper understanding of our customers, and to construct advanced platforms for creating, deploying, and measuring dynamic content.
Along the way, we’ve also pursued opportunities to leverage technology to improve marketing’s back office, as well as to evolve our relationships with traditional agencies.
Roughly 95 percent of Adobe’s customers visit our website, which translates to more than 650 million unique visits each month. A variety of applications make it possible for us to know who these customers are, what they do during each visit, and—through integration with social channels—whom they are connected with. We have applied personalization and behavioral targeting capabilities, which help us provide more engaging experiences based on individual preferences. We have also layered in predictive and econometric modeling capabilities, opening the door for assessing the ROI of our marketing campaigns. Whereas 10 years ago, marketing may have been perceived as something intangible or unquantifiable, we now have hard evidence of our contribution to the company’s success.
Increasingly, companies are using marketing to drive digital strategies. Moreover, the expanding scope of dimensional marketing is driving increased connectivity among various enterprise groups. For example, at Adobe, marketing and IT are collaborating in ways that move the entire company forward. Historically, these two groups were isolated from each other; marketing bought its own technology and software and kept them relatively siloed, apart from the core. Today, marketing’s systems integrate into corporate systems. If you want to develop a comprehensive, data-driven view of customers, you need access to customer data in CRM, financial databases, and other systems. And, while marketing has its own group that conducts Web analytics and insights, we rely on IT to provide integration, data platforms, visualization, and security.
It is critical to team with the CIO and the broader IT organization. Luckily, Adobe’s IT organization very much wants to support marketing’s strategies and efforts, which has helped the relationship between our two groups evolve into one of shared responsibility.
Digital marketing has fundamentally transformed the way we think about marketing’s mission and the way we work to fulfill it. It took us a long time to get to where we are today, and the journey was not without challenges. Along the way, we had to retool the organization and reskill our people. But now we’ve arrived at a good place, and we have instilled a strong sense of confidence and motivation throughout the marketing organization. Though in the past we may have been somewhat of an organizational outlier, today we are proud to have our identity woven throughout the fabric of the Adobe organization.

Cyber implications
DIGITAL has changed the scope, rules, and tools of marketing. At the center are customers and the digital exhaust they leave as they work, shop, and play. This can be valuable information to drive the new dimensions of marketing: connectivity, engagement, and insight. But it also creates security and privacy risks.
“Fair and limited use” is the starting point—for data you’ve collected, for data individuals have chosen to share, for derived data, and for data acquired from third-party partners or services. There are questions of what a company has permission to do with data. Laws differ across geographies and industries, informed by both consumer protection statutes and broader regulatory and compliance laws. Liability does not depend on having been the source of the data or on retaining it; controls need to extend to feeds being scanned for analytics purposes and to data and services being invoked to augment transactions. This is especially critical, as creating composites of information may turn individually innocuous bits of data into legally protected personally identifiable information (PII).
Privacy concerns may limit the degree of personalization used for offerings and outreach even when within the bounds of the law. Even if the information is publicly available, customers may cry “Big Brother” if it seems that an inappropriate amount of personal information has been gleaned or a threshold level of intimacy has been breached. Derived data can provide insights into individual behavior, preferences, and tendencies, which in the hands of marketers and product managers is invaluable. In the context of cyber security, these insights can also help organizations identify potential risks. Organizations should clearly communicate to customers the policies and boundaries that govern what data is being collected and how it will be used.
Public policies, privacy awareness programs, and end-user license agreements are a good start.
But they need to be joined with explicit governance and controls to guide, monitor, and police usage. User, system, and data-level credentials and entitlements can be used to manage trust and appropriate access to raw transactions and data. Security and privacy controls can be embedded within content, integration, and data layers—moving the mechanics into the background so that CMOs and marketing departments inherit leading practices. The CISO and CIO can bake cyber security into the fabric of how new services are delivered, and put some level of policy and controls in place.
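For illustration only, the following minimal Python sketch shows one way data-level entitlements might gate what a marketing tool can see; the field names, purposes, and rules are hypothetical assumptions, not a description of any specific product.

    # Hypothetical sketch: data-level entitlements for marketing data access.
    # Field names, purposes, and rules are illustrative assumptions only.

    ALLOWED_PURPOSES = {
        # field -> purposes the customer (or law) permits it to be used for
        "email":            {"service", "campaign"},
        "purchase_history": {"service", "personalization"},
        "social_activity":  {"personalization"},   # only with explicit opt-in
    }

    def authorized_fields(requested_fields, purpose, has_opt_in):
        """Return only the fields this request is entitled to see."""
        granted = []
        for field in requested_fields:
            purposes = ALLOWED_PURPOSES.get(field, set())
            if purpose not in purposes:
                continue                  # this use is not permitted for this field
            if field == "social_activity" and not has_opt_in:
                continue                  # derived/social data requires consent
            granted.append(field)
        return granted

    # Example: a campaign tool asks for three fields but is entitled to one.
    print(authorized_fields(
        ["email", "purchase_history", "social_activity"],
        purpose="campaign", has_opt_in=False))   # -> ['email']

In a sketch like this, the policy lives with the data layer rather than in each marketing application, so the CMO’s tools inherit the controls rather than reimplementing them.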
Finally, understanding your organization’s threat beacon can help direct limited cyber security resources toward the more likely vectors of attack. Dimensional marketing expands the pool of potentially valuable customer information. Organizations that are pivoting their core business into digital assets and offerings only complicate the matter. Core product IP and the digital supply chain come into play as digital marketing becomes inseparable from ordering, provisioning, fulfillment, billing, and servicing digital goods and services.
Asset and rights management may be problems marketing has not traditionally had to deal with, but the root issues are the same ones described above. Organizations should get ready for the radical shift in the digital marketing landscape, or security and privacy concerns may slow or undermine their efforts.

Where do you start?
DIMENSIONAL marketing has the potential to succumb to its own transformational promise. As with any massive undertaking, objectives, priorities, and expected outcomes should be clearly defined. Below are steps that many leading organizations are taking to prepare themselves to operate in this new environment:
Customer-led. Digital agencies can spend too much time focusing on a single approach, or even self-serving tactics such as “storytelling.” If marketing focuses on what your company is saying rather than what customers are asking for, your organization may not be focused on the pillars of dimensional marketing: listening, being personal, and focusing on authentic engagement. Instead, you should anchor your efforts on the end-to-end customer journey by understanding customer needs, actions, and motivations, from awareness through retention, across channels. These insights should carry more weight than the pursuit of particular tactics. It would be better to disregard the notion of customer loyalty to a brand, and embrace the concept of a brand becoming loyal to the customer.
Data, data, data. Capturing, correlating, and capitalizing on customer information is at the heart of dimensional marketing. Depending on their roots, marketing technology vendors tend to emphasize either current customers or the wider pool of prospects. But both are relevant. Early efforts should focus clearly on targets; next should come an analysis of the history, preferences, and context of those audiences. Don’t limit yourself to today’s marketing signals; determine how ambient computing,12 wearables,13 and other trends may play into your ability to collect and interpret signals. Big data and predictive analytics should play a role in how you invest in specific audiences and targeted priorities.
All together now. Marketing automation should mean much more than email campaign management. It is almost a given that a holistic approach requires Web, mobile, social, broadcast, and direct mail. Social graphs should source not just Facebook, Twitter, LinkedIn, and Instagram, but also specialized blogs and industry- or domain-focused communities. Analytics, digital offerings, and back-office marketing tools (from lead management to search engine optimization to pricing engines) should be geared toward omnichannel and cross-dimensional capabilities.
(Contextual) content is king. As video, mobile, and other digital assets emerge as the building blocks of campaigns and servicing, content management becomes central to dimensional marketing. Many content management systems have a narrow focus on document management or just Web content management. This narrow focus leaves these systems ill-equipped to deal with the impending explosion of content types and deployment needs. Authoring, provisioning, and measuring usage and effectiveness need to be seamless processes. These should be combined with the ability to collaborate with in-house and contracted professionals, as well as with a mix of third-party agencies.
Social activation. Social media topped the list in a recent survey of digital advertisers’ spend and priorities.14 Organizations need to move from passive listening and impersonal social broadcasting to social activation:15 precise targeting of influencers, contextual outreach based on tangible, measurable outcomes, and cultivation of a global social content supply chain that can create meaningful, authentic social campaigns. In short, social activation should inspire individuals to carry out the organization’s mission in their own words, on their own turf, and on their own terms. Companies should build and nurture perceptions, instead of focusing on empty metrics such as volume or unfocused sentiment.

Bottom line
GARTNER’S 2014 CEO survey found that “CEOs rank digital marketing as the No. 1 most important tech-enabled capability for investment over the next five years.”16 And with marketing’s expanded scope likely including the integration of marketing systems with CRM and ERP systems in areas such as pricing, inventory, order management, and product R&D, IT’s mission, should it choose to accept it, is to help drive the vision, prioritization, and realization of dimensional marketing. IT can potentially use this mission as a Trojan horse to reinvent delivery models, technology platforms, and IT’s reputation across the business. Who better than the CMO to help change the brand perception of the CIO? And who else but the CIO can help deliver analytics, mobile, social, and Web while maintaining the enterprise “ilities”—security, reliability, scalability, maintainability, and interoperability? The stage is set. It is time for the next wave of leaders to deliver.

Deloitte C Ambient computing

Ambient computing
Putting the Internet of Things to work
Possibilities abound from the tremendous growth of embedded sensors and connected devices—in the home, the enterprise, and the world at large. Translating these possibilities into business impact requires focus—purposefully bringing smarter “things” together with analytics, security, data, and integration platforms to make the disparate parts work seamlessly with each other. Ambient computing is the backdrop of sensors, devices, intelligence, and agents that can put the Internet of Things to work.

THE Internet of Things (IoT) is maturing from its awkward adolescent phase. More than 15 years ago, Kevin Ashton purportedly coined the term, which he describes as capturing the potential of machines and other devices to supplant humans as the primary means of collecting, processing, and interpreting the data that make up the Internet. Even in its earliest days, its potential was grounded in business context; Ashton’s reference to the Internet of Things was in a presentation to a global consumer products company pitching RFID-driven supply chain transformation.1 And the idea of the IoT has existed for decades in the minds of science fiction writers—from the starship Enterprise to The Jetsons.
Cut to 2015. The Internet of Things is pulling up alongside cloud and big data as a rallying cry for looming, seismic IT shifts. Although rooted more in reality than hype, these shifts are waiting for simple, compelling scenarios to turn potential into business impact. Companies are exploring the IoT, but some only vaguely understand its full potential. To realize that potential, organizations should look beyond physical “things” and the role of sensors, machines, and other devices as signals and actuators. Important developments, no doubt, but only part of the puzzle. Innovation comes from bringing together the parts to do something of value differently—seeing, understanding, and reacting to the world around them on their own or alongside their human counterparts.
Ambient computing is about embracing this backdrop of sensing and potential action-taking with an ecosystem of things that can respond to what’s actually happening in the business—not just static, pre-defined workflows, control scripts, and operating procedures. That requires capabilities to:
Integrate information flow between varying types of devices from a wide range of global manufacturers with proprietary data and technologies
Perform analytics and management of the physical objects and low-level events to detect signals and predict impact
Orchestrate those signals and objects to fulfill complex events or end-to-end business processes
Secure and monitor the entire system of devices, connectivity, and information exchange
Ambient computing happens when this collection of capabilities is in place—elevating IoT beyond enabling and collecting information to using the fabric of devices and signals to do something for the business, shifting the focus from the novelty of connected and intelligent objects to business process and model transformation.
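As a purely illustrative sketch of the first capability—integrating feeds from devices with proprietary data formats—the following Python fragment normalizes two invented vendor payloads into a single common reading that downstream analytics and orchestration could consume; the formats and field names are assumptions.

    # Hypothetical sketch: normalizing proprietary device payloads into a
    # common reading. The vendor formats below are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        device_id: str
        metric: str      # e.g., "temperature"
        value: float
        unit: str

    def from_vendor_a(msg: dict) -> Reading:
        # Vendor A reports Celsius under a terse key scheme.
        return Reading(msg["id"], "temperature", float(msg["t_c"]), "C")

    def from_vendor_b(msg: dict) -> Reading:
        # Vendor B reports Fahrenheit with verbose keys; convert on ingest.
        celsius = (float(msg["tempF"]) - 32.0) * 5.0 / 9.0
        return Reading(msg["deviceId"], "temperature", round(celsius, 2), "C")

    readings = [
        from_vendor_a({"id": "a-17", "t_c": 21.5}),
        from_vendor_b({"deviceId": "b-42", "tempF": 70.7}),
    ]
    print(readings)  # both devices now speak the same schema

Once feeds share a schema, the analytics, orchestration, and security layers described above can be built once rather than per vendor.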
What is the “what”?
The focus on the “things” side of the equation is natural. Manufacturing, materials, and computer sciences continuously drive better performance with smaller footprints and lower costs. Advances in sensors, computing, and connectivity allow us to embed intelligence in almost everything around us. From jet engines to thermostats, ingestible pills to blast furnaces, electricity grids to self-driving freight trucks—very few technical constraints remain to connect the balance sheets of our businesses and our lives. The data and services available from any individual “thing” are also evolving, ranging from:
Internal state: Heartbeat- and ping-like broadcasts of health, potentially including diagnostics and additional status reporting (for example, battery level, CPU/memory utilization, strength of network signal, up-time, or software/platform version)
Location: Communication of physical location via GPS, GSM, triangulation, or proximity techniques
Physical attributes: Monitoring the world surrounding the device, including altitude, orientation, temperature, humidity, radiation, air quality, noise, and vibration
Functional attributes: Higher-level intelligence rooted in the device’s purpose for describing business process or workload attributes
Actuation services: Ability to remotely trigger, change, or stop physical properties or actions on the device
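For illustration only, these categories might map onto a device-facing data model along the lines of the following Python sketch; the names and fields are assumptions rather than any vendor’s actual interface.

    # Hypothetical sketch: one way to model the data and services a single
    # "thing" might expose. Field names are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import Callable, Dict

    @dataclass
    class DeviceFacade:
        device_id: str
        internal_state: Dict[str, float] = field(default_factory=dict)       # battery, uptime...
        location: Dict[str, float] = field(default_factory=dict)             # lat/lon via GPS
        physical_attributes: Dict[str, float] = field(default_factory=dict)  # temperature, vibration...
        functional_attributes: Dict[str, str] = field(default_factory=dict)  # workload, process step
        actuation_services: Dict[str, Callable[[], None]] = field(default_factory=dict)

        def actuate(self, action: str) -> None:
            """Remotely trigger an action the device exposes, if it exists."""
            self.actuation_services[action]()

    pump = DeviceFacade(
        device_id="pump-7",
        internal_state={"battery_pct": 82.0, "uptime_h": 1311.0},
        location={"lat": 41.88, "lon": -87.63},
        physical_attributes={"temperature_c": 64.2, "vibration_mm_s": 3.1},
        functional_attributes={"mode": "circulating"},
        actuation_services={"shut_off": lambda: print("pump-7: shutting off")},
    )
    pump.actuate("shut_off")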
New products often embed intelligence as a competitive necessity. And the revolution is already well underway. An estimated 11 billion sensors are currently deployed on production lines and in power grids, vehicles, containers, offices, and homes. But many aren’t connected to a network, much less the Internet.2 Putting these sensors to work is the challenge, along with deciding which of the 1.5 trillion objects in the world should be connected and for what purpose.3 The goal should not be the Internet of Everything; it should be the network of some things, deliberately chosen and purposely deployed. Opportunities abound across industries and geographies—connected cities and communities, manufacturing, retail, health care, insurance, and oil and gas.    
Beyond the thing
Deliberate choice and purpose should be the broader focus of ambient computing. Analytics is a big part of the focus—turning data into signals and signals into insight. Take transportation as an example. Embedding sensors and controls in 24,000 locomotives, 365,000 freight cars, and across 140,000 miles of track supporting the United States’ “Class I” railroads only creates the backdrop for improvement. Moving beyond embedding, companies such as General Electric (GE) are creating predictive models and tools for trains and stockyards. The models and tools optimize trip velocities by accounting for weight, speed, fuel burn, terrain, and other traffic. The gains include faster-rolling trains, preemptive maintenance cycles, and the ability to expedite the staging and loading of cargo.4

From the Internet of Things to ambient computing: A concentric system
The Internet of Things lives through sensors and actuators embedded in devices interacting with the world physically and functionally. Ambient computing contains this communication at the core, and harnesses the environment for business processes and insights.
Sensors and actuators: Underlying components allowing intelligence and communication to be embedded in objects. Sensors capture temperature, location, sound, motion, light, vibration, pressure, torque, and electrical current; actuators include valves, switches, power, embedded controls, alarms, and intra-device settings; connectivity spans near- to far-field technologies such as RFID, NFC, ZigBee, Bluetooth, Wi-Fi, WiMax, cellular, 3G, LTE, and satellite.
Device ecosystem: New connected and intelligent devices across categories, making legacy objects smart. Consumer products include smartphones, tablets, watches, glasses, dishwashers, washing machines, and thermostats; industrial examples include construction machines, manufacturing and fabrication equipment, mining equipment, engines, transmission systems, warehouses, smart homes, microgrids, mobility and transportation systems, and HVAC systems.
Ambient services: The building blocks of ambient computing, powered by sensors and devices. Integration (messaging, quality of service, reliability), orchestration (complex event processing, rules engines, process management and automation), analytics (baselining and anomaly monitoring, signal detection, advanced and predictive modeling), and security (encryption, entitlements management, user authentication, nonrepudiation).
Business use cases: Representative scenarios, by industry, to harness the power of ambient computing. Basic use cases target efficiency, cost reduction, monitoring and tuning, and risk and performance management; advanced use cases target innovation, revenue growth, business insights, decision making, customer engagement, product optimization, and the shift from transactions to relationships and from goods to outcomes. Industry examples include logistics (inventory and asset management, fleet monitoring, route optimization), health and wellness (personalized treatment, remote patient care), mechanical (worker safety, remote troubleshooting, preventative maintenance), and manufacturing (connected machinery, automation).
Source: Deloitte Development LLC, The Internet of Things Ecosystem: Unlocking the business value of connected devices, 2014, http://www2.deloitte.com/us/en/pages/technology-media-and-telecommunications/articles/internet-of-things-iot-enterprise-value-report.html, accessed January 7, 2015.

The GE example highlights the need for cooperation and communication among a wide range of devices, vendors, and players—from partners to competitors, from customers to adjacent parties (for example, telecommunication carriers and mobile providers). The power of ambient computing is partially driven by Metcalfe’s Law, which posits that the value of a network is the square of the number of participants in it. Many of the more compelling potential scenarios spill across organizational boundaries, either between departments within a company, or through cooperation with external parties. Blurry boundaries can fragment sponsorship, diffuse investment commitments, and constrain ambitions. They can also lead to isolationism and incrementalism because the effort is bounded by what an organization directly controls rather than by the broader analytics, integration, and orchestration capabilities that will be required for more sophisticated forays into ambient computing. Ecosystems will likely need to evolve and promote industry standards, encourage sharing through consortia, and move away from proprietary inclinations by mandating open, standards-based products from third parties.
Ambient computing involves more than rolling out more complete and automated ways to collect information about real-world behavior. It also turns to historical and social data to detect patterns, predict behaviors, and drive improvements. Data disciplines are essential, including master data and core management practices that allow sharing and provide strategies for sensing and storing the torrent of new information coming from the newly connected landscape. Objects can create terabytes of data every day that then need to be processed and staged to become the basis for decision making. Architectural patterns are emerging with varying philosophies: embedding intelligence at the edge (on or near virtually every device), in the network, using a cloud broker, or back at the enterprise hub. One size may not fit all for a given organization. Use cases and expected business outcomes should anchor the right answer.
The final piece of the puzzle might be the most important: how to put the intelligent nodes and derived insights to work. Again, options vary. Centralized efforts seek to apply process management engines to automate sensing, decision making, and responses across the network. Another approach is decentralized automation, which embeds rules engines at the endpoints and allows individual nodes to take action.
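A rough, hypothetical sketch of the decentralized approach: a small rule table embedded at an endpoint lets the node act locally on its own readings, without a round trip to a central hub. The thresholds and actions below are invented for illustration.

    # Hypothetical sketch: a tiny rule engine running at an endpoint so the
    # node can act on its own readings. Thresholds and actions are assumptions.

    RULES = [
        # (metric, predicate, local action)
        ("temperature_c",  lambda v: v > 90.0, "open_cooling_valve"),
        ("vibration_mm_s", lambda v: v > 8.0,  "reduce_speed"),
    ]

    def evaluate(reading: dict) -> list:
        """Return the local actions triggered by one sensor reading."""
        actions = []
        for metric, predicate, action in RULES:
            if metric in reading and predicate(reading[metric]):
                actions.append(action)
        return actions

    print(evaluate({"temperature_c": 93.5, "vibration_mm_s": 2.1}))
    # -> ['open_cooling_valve']

A centralized process engine would instead collect the same readings and decide across the whole network; the trade-off is latency and resilience at the edge versus coordination and visibility at the hub.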
In many cases, though, ambient computing is a sophisticated enabler of amplified intelligence5 in which applications or visualizations empower humans to act differently. The machine age may be upon us—decoupling our awareness of the world from mankind’s dependency on consciously observing and recording what is happening. But machine automation only sets the stage. Real impact, business or civic, will come from combining data and relevant sensors, things, and people so lives can be lived better, work can be performed differently, and the rules of competition can be rewired.
Lessons from the front lines
From meters to networks
ComEd, an Exelon company that provides electricity to 3.8 million customers in Northern Illinois,6 is in the midst of a $2.6 billion smart grid project to modernize aging infrastructure and install smart meters for all of its customers by 2018.7 The primary goals of this undertaking are to enhance operational efficiency and to provide customers with the information and tools to better manage their energy consumption and costs. Featuring advanced meter infrastructure (AMI), the new meters reduce electricity theft and consumption on inactive meters, reduce the number of estimated electric bills, minimize energy loss, and reduce the need for manual meter reading. Numerous operational efficiencies and benefits are emerging. For example, on Chicago’s south side, AMI meter reading has increased the percentage of meters read from 60 percent to 98 percent. Last year, ComEd was given the green light by the Illinois Commerce Commission to accelerate its smart meter installation program, thus making it possible to complete the project three years ahead of schedule.8
The smart grid effort will also improve ComEd’s ability to maintain its overall infrastructure. Real-time visibility into transformers, feeders, and meters will help the company detect, isolate, and resolve maintenance incidents more efficiently. Other smart grid components will improve communications among field services technicians, operators, and even customers. Analytics, residing atop integration and event processing layers, will form an integral part of the company’s ambient computing platform. Security and privacy capabilities will help protect against attacks to critical infrastructure by providing the company with remote access to individual meters and visibility into usage for any given residence or commercial location.
ComEd is also developing a suite of services that will make it possible for customers to view their own energy usage (including itemized energy costs per appliance). The goal of these services is to help individuals proactively regulate their own power consumption and achieve greater efficiency during periods of peak power usage.
As the smart grid project progresses, ComEd leaders remain strategic and flexible. When opportunities to accommodate future technological advances emerge, they adapt their approaches accordingly. For example, when it installed a network of AMI access points, the company decided to place the network physically higher than needed at the time. Why?  Because doing so could make it possible to repurpose the existing residential mesh network, should the opportunity ever arise. And one year later, the company is piloting new LED streetlights powered by the repurposed mesh network.
With ambient computing advancing more rapidly each year, ComEd’s leaders are keeping their options open and the new smart grid system as flexible as possible in order to take advantage of new improvements, devices, and opportunities that may emerge.
Home sweet conscious home
The makers at Nest Labs view embedded sensors and connectivity as a means to an end, not as ends unto themselves. Their vision is one of a “conscious home” that emphasizes comfort, safety, and energy savings. Many products in the broader Internet of Things space focus on raw technology features. However, Maxime Veron, Nest’s head of product marketing, downplays the technology aspects of Nest’s offerings, noting: “The fact that your device is connected does not automatically make it a better product.” A case in point is the Nest Learning Thermostat—a next-generation wall thermostat that uses occupancy sensors, on-device learning, cloud-based analytics, and Web services to learn an occupant’s schedule and integrate into his or her life. The company designs customer experiences that focus on usability from the point of installation: The thermostat features snap connectors for wiring, includes a carpenter’s level built into the base of the unit to ease finishing, and comes with a multi-head screwdriver to help installers more easily replace legacy hardware. The thermostat’s operation similarly evokes the qualities of ambient computing in that the complexity of sensing, learning from occupant behavior, and self-tuning settings remains largely invisible to the user. Veron notes, “We don’t want to give you something to program.”
Nest Labs is looking beyond any single device toward broader platforms and services. The company launched a partner program to allow third-party products to interact with Nest products. The goal is to create more intuitive ways to learn about and respond to specific user behavior and preferences. For example, your car can alert the Nest Thermostat to begin cooling your home at a certain point during your evening commute. Upon your arrival, the house is comfortable, but you haven’t wasted energy cooling it all day long. Nest Labs’ second product, Nest Protect, is a smoke and carbon monoxide alarm that can send a message to a mobile device about what it has detected, turn off heating by a gas furnace when it detects a possible CO leak (if the customer has a Nest Thermostat as well), and link to the company’s Dropcam video camera to save a clip of what was happening when the alarm initiated.
These scenarios involve not just connectivity and interoperability, but also advanced levels of orchestration and analytics, as well as sophisticated but simple user experiences. Nest Labs, acquired by Google in 2014, has kept the majority of its development in-house, believing that applying the same standards and rigor to its design process from beginning to end—including hardware, software, external data inputs, sensors, and app development—will ultimately result in a more powerful experience for customers inhabiting a “conscious home.”
No more circling the block
Many of us have had a parking experience so bad that we avoid the area in the future, opting for restaurants or stores that do not require a frustrating parking lot tour. And because parking tickets and meter fees are often considerable sources of revenue for cities overseeing public parking and for organizations that own parking lots and structures, opportunities to address commuter frustration, pollution, and lost sales revenue through better parking regulations may be mismanaged or ignored altogether. Enter Streetline, Inc., a San Francisco Bay-area company that helps to solve parking-related challenges from the ground up (literally) through its mesh networking technology, real-time data, and platform of parking applications.
The Streetline approach is composed of three layers. First, when deploying its platform in a new location, Streetline installs sensors which determine space occupancy or vacancy in individual parking spaces. The second layer is a middleware learning platform that merges real-time and historical sensor data to determine the validity of a parking event (a true arrival or departure) and relays the current status of each space to the system’s backend. An inference engine weeds out false positives such as a garbage can left in a space, or a driver pulling into a parking spot for a moment and then leaving. Finally, there is the application layer that includes a variety of mobile and Web-based tools that deliver up-to-the-minute parking information to commuters, business owners, city officials, and parking enforcement officers in or near the deployment area.
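As a highly simplified, hypothetical sketch of the middleware idea—not Streetline’s actual algorithm—an occupancy change might count as a valid parking event only if the new state persists past a minimum dwell time, filtering out momentary obstructions:

    # Hypothetical sketch of weeding out false parking events: a state change
    # only counts if the new state persists for a minimum dwell time.
    # This illustrates the idea; it is not Streetline's actual algorithm.

    def valid_events(observations, min_dwell_s=120):
        """observations: time-ordered (timestamp_s, occupied: bool) readings.
        Returns (timestamp_s, 'arrival'|'departure') for changes whose new
        state holds for at least min_dwell_s (or is still holding at the end)."""
        events = []
        prev_state = observations[0][1]
        for k in range(1, len(observations)):
            t, state = observations[k]
            if state == prev_state:
                continue
            # When does the new state next flip back? None means "never observed".
            end = next((tt for tt, s in observations[k + 1:] if s != state), None)
            if end is None or end - t >= min_dwell_s:
                events.append((t, "arrival" if state else "departure"))
                prev_state = state
            # otherwise: treat the blip (e.g., a garbage can) as noise
        return events

    obs = [(0, False), (100, True), (130, False), (400, True)]
    print(valid_events(obs))  # -> [(400, 'arrival')]; the 30-second blip is ignored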
Streetline’s Parker™ app guides motorists to open parking spaces, which can decrease driving times, the number of miles traveled, and motorist frustration. Through integration with leading mobile payment providers, the Parker app enables drivers to “feed” parking meters electronically—without the hassle of searching for quarters. Furthermore, motorists can add time to their meter remotely before time expires to avoid parking tickets. ParkerMap™ makes it possible for companies to create online maps of available parking spaces in a given area, along with lot hours and parking rates. Using the ParkerData™ Availability API, cities can publish parking information on dynamic signage, strategically placed around a city. Combined, these different methods of way-finding help consumers find parking more quickly, increasing parking space turnover—and thereby potentially driving increases in foot traffic and sales among local merchants. In fact, studies have revealed that smart parking systems can improve the local economy, as evidenced by a 12 percent increase in merchant sales tax revenue in one of Streetline’s customer cities.9 Moreover, the cities, universities, and companies that own parking in a given area can get access to information about utilization and consumer trends, as well as recommendations for better parking policies and pricing. Law enforcement also has access to similar information, helping enforcement officers increase their productivity and efficiency by as much as 150 percent.10 Streetline’s products are deployed in 40 locations globally, and the company is currently exploring ways to increase the pace of adoption through new use cases, sponsorships, and a monetized API. It is also exploring the capture of new data types including ground surface temperature, noise level, air quality, and water pressure, to name a few.
What began as a desire to make life a little easier for motorists in the congested streets of San Francisco is quickly becoming a foundational layer for the emergence of smarter cities and the Internet of Things worldwide.
Products to platforms
Bosch Group knows a thing or two about disruptive technologies and their business potential. As the world’s third-largest private company, it manufactures a wide range of products, from consumer goods to industrial equipment, including some of the building blocks of ambient computing—shipping roughly 1 billion microelectromechanical systems (MEMS) sensors in 2014. Recognizing the potential of the Internet of Things (IoT), the company has made it its vision to embed connectivity and intelligence in products across its 350-plus business units.
In 2008, the company launched Bosch Software Innovations (Bosch SI), a business unit dedicated to pioneering IoT and ambient computing solutions for industrial environments. “We are trying to bring 130 years of manufacturing experience to connectivity,” says Troy Foster, Bosch SI CTO Americas. Bosch SI approaches its mission from an enterprise software perspective— looking beyond the device to enable the kind of business intelligence, processes, and decision making that drive value from data.
To that end, Bosch SI’s IoT platform is composed of four primary software components: a machine-to-machine layer, business process management, business rules management, and an analytics engine. The IoT system was designed to accommodate growing data volumes as sensors get smaller and cheaper, spurring wider deployment. Configurable rules allow evolving, actionable insights to be deployed.
For example, Bosch SI is currently developing preventative maintenance solutions that leverage IoT predictive analytics capabilities to analyze system and performance data generated by sensors embedded in industrial equipment. The goal is to predict equipment failures and perform maintenance proactively to address potential issues. Costs mount quickly when a manufacturing line goes down or mining equipment in a remote location fails; preventing incidents can save customers considerable sums of money.
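To make the idea tangible, here is a deliberately simplistic, hypothetical sketch of trend-based maintenance flagging; real predictive models are far more sophisticated, and the thresholds below are invented for illustration.

    # Hypothetical sketch: flag equipment for proactive maintenance when a
    # monitored signal is trending toward a failure threshold. Deliberately
    # simplistic; the linear trend and thresholds are illustrative assumptions.

    def hours_to_threshold(samples, threshold):
        """samples: time-ordered list of (hour, value). Fit a least-squares
        line and estimate hours until the value crosses threshold.
        Returns None if the signal is flat or improving."""
        n = len(samples)
        mean_x = sum(h for h, _ in samples) / n
        mean_y = sum(v for _, v in samples) / n
        num = sum((h - mean_x) * (v - mean_y) for h, v in samples)
        den = sum((h - mean_x) ** 2 for h, _ in samples)
        slope = num / den
        if slope <= 0:
            return None
        last_h, last_v = samples[-1]
        return (threshold - last_v) / slope

    # Vibration (mm/s) creeping upward; assume 10 mm/s is the alarm level.
    vibration = [(0, 3.0), (24, 3.4), (48, 3.9), (72, 4.5)]
    eta = hours_to_threshold(vibration, threshold=10.0)
    if eta is not None and eta < 24 * 14:        # inside a two-week window
        print(f"schedule maintenance: ~{eta:.0f} hours to threshold")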
Other examples include improved visibility of deployed equipment in the field—from factory equipment to vending machines. Bosch SI also helps automobile manufacturers and their suppliers refine and improve their products. To do that, they need data from cars in operation to understand how components such as a transmission system, for example, perform. Traditionally, they only got that information when the car was in for maintenance. Now sensors and telematics can convey that data directly to the manufacturers. Using similar technology, Bosch helps insurance companies move to usage-based coverage models instead of using hypothetical approximations of risk.
Beyond improving existing products and processes and helping manufacturers work more efficiently, the IoT is enabling new business models. “We are looking at many different pieces including smart homes, micro grids, and usage-based car insurance, to name a few,” Foster says. “Many business ideas and models that were considered prohibitively expensive or unrealistic are viable now thanks to advances in IoT.” 

Ambient computing
My take
Richard Soley, PhD
Chairman and CEO, Object Management Group
Executive director, Industrial Internet Consortium

As head of the Object Management Group, one of the world’s largest technology standards bodies, I’m often asked when standards will be established around the Internet of Things (IoT).11 This common question is shorthand for: When will there be a language to ease interoperability between the different sensors, actuators, and connected devices proliferating across homes, businesses, and society?
In developing IoT standards, the easy part is getting bits and bytes from object to object, something we’ve largely solved with existing protocols and technologies. The tricky part relates more to semantics—getting everyone to agree on the meaning and context of the information being shared and the requests being made. On that front, we are making progress industry by industry, process area by process area. We’re seeing successes in use cases with bounded scope—real problems, with a finite number of actors, generating measurable results.
This same basic approach—helping to coordinate industrial players, system integrators, start-ups, academia, and vendors to build prototype test beds to figure out what works and what doesn’t—is central to the charter of the Industrial Internet Consortium (IIC).12 The IIC has found that the more interesting scenarios often involve an ecosystem of players acting together to disrupt business models.
Take, for example, today’s self-driving cars, which are not, in and of themselves, IoT solutions. Rather, they are self-contained, autonomous replacements for drivers. However, when these cars talk to each other and to roadway sensors and when they can use ambient computing services like analytics, orchestration, and event processing to dynamically optimize routes and driving behaviors, then they become headliners in the IoT story.
The implications of self-driving cars talking to each other are profound—not only for taxicab drivers and commuters, but also for logistics and freight transport. Consider this: Roughly one-third of all food items produced today are lost or wasted in transit from farm to table.13 We could potentially make leaps in sustainability by integrating existing data on crop harvest schedules, grocery store inventory levels, and consumer purchasing habits, and analyzing this information to better match supply with demand.
The example that excites and scares me the most revolves around maintenance. The IoT makes it possible to reduce—and potentially eliminate— unexpected maintenance costs by sensing and monitoring everything happening within a working device, whether it be a jet engine, medical device, or distribution system. Rather than reacting to mechanical or system breakdowns, engineers could work proactively to address problems before they become full-blown malfunctions. Companies could deploy systems in which nothing fails. Imagine the impact on industry. Business models based on replenishment/replacement cycles would need to be overhauled. Manufacturers of spare parts and providers of repair services might potentially disappear completely, as the focus of maintenance shifts from objects to outcomes. The list of possible ramifications is staggering.
When the future-state level of interconnectivity is realized, who will own each step along the supply chain? End-to-end control affords significant opportunity, but it is rarely achieved. When the IoT evolves, I imagine it will resemble the newly integrated supply chains that emerged in the 1980s and 1990s. While no one controlled the entire supply chain, it was in everyone’s interest along that chain to share and secure information in ways that benefited all parties.
My advice to companies currently considering IoT investments is, don’t wait. Begin collaborating with others to build prototypes and create standards. And be prepared—your IoT initiatives will likely be tremendously disruptive. We don’t know exactly how, but we do know this: You can’t afford to ignore the Internet of Things.

Cyber implications
Enabling the Internet of Things requires a number of logical and physical layers, working seamlessly together. Device sensors, communication chips, and networks are only the beginning. The additional services in ambient computing add even more layers: integration, orchestration, analytics, event processing, and rules engines. Finally, there is the business layer—the people and processes bringing business scenarios to life. Between each layer is a seam, and there are cyber security risks within each layer and in each seam.
One of the more obvious cyber security implications is an explosion of potential vulnerabilities, often in objects that historically lacked connectivity and embedded intelligence. For example, machinery, facilities, fleets, and employees may now include multiple sensors and signals, all of which can potentially be compromised. CIOs can take steps to keep assets safe by considering cyber logistics before placing them in the IT environment. Ideally, manufacturing and distribution processes have the appropriate controls. Where they don’t, securing devices can require risky, potentially disruptive retrofitting. Such precautionary steps may be complicated by the fact that physical access to connected devices may be difficult to secure, which leaves the door open to new threat vectors. What’s more, in order to protect against machines being maliciously prompted to act against the interests of the organization or its constituencies, IT leaders should be extra cautious when ambient computing scenarios move from signal detection to actuation—a state in which devices automatically make decisions and take actions on behalf of the company.
Taking a broad approach to securing ambient computing requires moving from compliance to proactive risk management. Continuously measuring activities against a baseline of expected behavior can help detect anomalies by providing visibility across layers and into seams. For example, a connected piece of construction equipment has a fairly exhaustive set of expected behaviors, such as its location, hours of operation, average speed, and what data it reports. Detecting anything outside of anticipated norms can trigger a range of responses, from simply logging a potential issue to sending a remote kill signal that renders the equipment useless.
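A minimal, hypothetical sketch of that idea follows; the baseline values, tolerances, and responses are assumptions chosen for illustration.

    # Hypothetical sketch: compare a connected asset's reported behavior to a
    # baseline of expected behavior and trigger a graduated response.
    # Baseline values, tolerances, and responses are illustrative assumptions.

    BASELINE = {
        "geofence_km": 5.0,          # expected maximum distance from the job site
        "operating_hours": (6, 20),  # expected daily window of operation
    }

    def assess(report):
        """report: dict with 'distance_km' and 'hour'. Returns a response level."""
        findings = []
        if report["distance_km"] > BASELINE["geofence_km"]:
            findings.append("outside geofence")
        start, end = BASELINE["operating_hours"]
        if not (start <= report["hour"] <= end):
            findings.append("outside operating hours")

        if not findings:
            return "ok"
        if len(findings) == 1:
            return f"log and review: {findings[0]}"
        return "escalate: consider remote disable"   # e.g., send a kill signal

    print(assess({"distance_km": 2.0, "hour": 10}))   # ok
    print(assess({"distance_km": 40.0, "hour": 3}))   # escalate: consider remote disable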
Over time, security standards will develop, but in the near term we should expect them to be potentially as effective (or, more fittingly, ineffective) as those surrounding the Web. More elegant approaches may eventually emerge to manage the interaction points across layers, similar to how a secured mesh network handles access, interoperability, and monitoring across physical and logical components.  
Meanwhile, privacy concerns over tracking, data ownership, and the creation of derivative data using advanced analytics persist. There are also a host of unresolved legal questions around liability. For example, if a self-driving car is involved in an accident, who is at fault? The device manufacturer? The coder of the algorithm? The human “operator”? Stifling progress is the wrong answer, but full transparency will likely be needed while companies and regulators lay the foundation for a safe, secure, and accepted ambient-computing tomorrow.
Finally, advanced design and engineering of feedback environments will likely be required to help humans work better with machines, and machines work better with humans. Monitoring the performance and reliability of ambient systems is likely to be an ongoing challenge requiring the design of more relevant human and machine interfaces, the implementation of effective automation algorithms, and the provisioning of helpful decision aids to augment the performance of humans and machines working together—in ways that result in hybrid (human and technical) secure, vigilant, and resilient attributes.

Where do you start?
MANY don’t need to be convinced of ambient computing’s opportunities. In a recent survey, nearly 75 percent of executives said that Internet of Things initiatives were underway.14 Analysts and companies across industries are bullish on the opportunities. Gartner predicts that “by 2020, the installed base of the IoT will exceed 26 billion units worldwide; therefore, few organizations will escape the need to make products intelligent and the need to interface smart objects with corporate systems.”15 Other predictions measure economic impact at $7.1 trillion by 2020,16 $15 trillion in the next 20 years,17 and $14 trillion by 2022.18 But moving from abstract potential to tangible investment is one of the biggest hurdles stalling progress. Below are some lessons learned from early adopters.
Beware fragmentation. Compelling ambient computing use cases will likely cross organizational boundaries. For example, retail “store of the future” initiatives may cross store management, merchandising, warehouse, distribution center, online commerce, and marketing department responsibilities—requiring political and financial buy-in across decision-making authorities. Because the market lacks end-to-end solutions, each silo may be pursuing its own initiative, offering at best incremental effect, at worst redundant or competing priorities.
Stay on target. Starting with a concrete business outcome will help define scope by guiding which “things” should be considered and what level of intelligence, automation, and brokering will be required. Avoid “shiny object syndrome,” which can be dangerously tempting given how exciting and disruptive the underlying technology can seem.
User first. Even if the solution is largely automated, usability should guide vision, design, implementation, and ongoing maintenance plans. Companies should use personas and journey maps to guide the end-to-end experience, highlighting how the embedded device will take action, or how a human counterpart will participate within the layers of automation.
Eyes wide open. Connecting unconnected things will likely lead to increased costs, business process challenges, and technical hurdles. Be thoughtful about funding the effort and how adoption and coverage will grow. Will individual organizations have to shoulder the burden, or can it be shared within or across industries and ecosystems? Additionally, can some of the investment be passed on to consumers? Although business cases are needed, they should fall on the defensible side of creative.
Network. With the emphasis on the objects, don’t lose sight of the importance of connectivity, especially for items outside of established facilities. Forrester Research highlights “a plethora of network technologies and protocols that define radio transmissions including cellular, Wi-Fi, Bluetooth LE, ZigBee, and Z-Wave.”19 Planning should also include IPv6 adoption,20 especially with the public IPv4 address space largely exhausted and the aforementioned billions of new Internet-enabled devices expected in the next 10 years.
Stand by for standards. Standards help create collaborative and interoperable ecosystems. We expect that IoT standards for interoperability, communication, and security will continue to evolve, with a mix of governmental bodies, industry players, and vendors solving some of the challenges inherent in such a heterogeneous landscape. Several IoT-focused standards bodies and working groups, including the AllSeen Alliance, Industrial Internet Consortium, Open Interconnect Consortium, and Thread Group, have formed in the last two years.21 Having preliminary standards is important, but you shouldn’t hold off on investing until all standards are finalized and approved. Press forward and help shape the standards that impact your business.
Enterprise enablement. Many organizations are still wrestling with smartphone and tablet adoption—how to secure, manage, deploy, and monitor new devices in the workplace. That challenge is exponentially exacerbated by ambient computing.  Consider launching complementary efforts to provision, deploy policies for, monitor, maintain, and remediate an ever-changing roster of device types and growing mix of underlying platforms and operating systems.

Bottom line
AMBIENT computing shouldn’t be looked at as just a natural extension of mobile and the initial focus on the capabilities of smartphones, tablets, and wearables—though some similarities hold. In those cases, true business value came from translating technical features into doing things differently—or doing fundamentally different things. Since ambient computing is adding connectivity and intelligence to objects and parts of the world that were previously “dark,” there is less of a danger of seeing the opportunities only through the lens of today’s existing processes and problems. However, the expansive possibilities and wide-ranging impact of compelling scenarios in industries such as retail, manufacturing, health care, and the public sector make realizing tomorrow’s potential difficult. But not impossible. Depending on the scenario, the benefits could be in efficiency or innovation, or even a balance of cost reduction and revenue generation. Business leaders should elevate discussions from the “Internet of Things” to the power of ambient computing by finding a concrete business problem to explore, measurably proving the value, and laying the foundation to leverage the new machine age for true business disruption.