Arup
Arup, a global consulting engineering firm, recently welcomed clients and partners to its 65,000-sf, four-floor Toronto offices to unveil two new ‘incubators,’ the Maker’s and Pegasus Labs.

Maker’s Lab (pictured above) facilitates modelling, production, assembly and prototyping. The open collaboration space is equipped with a laser cutter, 3-D printers, manual tools and common materials like wood, composites, plastics, light metals and cardstock. Arup encourages using discarded materials for sketch models and early concepts or prototypes.

Pegasus Lab, meanwhile, is dedicated to experiential design through digital engineering workflows and visualizations of operational processes and designs. It features virtual reality (VR), gesture recognition, artificial intelligence (AI), machine learning, video analytics, augmented reality (AR) and Arup’s own Neuron ‘smart building’ platform.

“Arup was the first firm to embrace digital engineering in 1957 during the design of the Sydney Opera House by using the Pegasus computer,” explains Justin Trevan, the company’s digital technology consulting and advisory services leader for Canada. “Today, the firm continues to innovate for efficient, sustainable and economical solutions.”

In addition to live demos in the two new labs, guests experienced such installations as Motion Platform, which allows users to feel the vibrations of a building while it is still on the drawing board, and Mobile Sound Lab, an immersive audiovisual (AV) environment with simulations of both existing and as-yet-unbuilt spaces.
MIT
Blaine Brownell explores emergent teleoperation and telerobotics technologies that could revolutionize the built environment.

Design practitioners have become familiar with an array of evolving technologies such as virtual and augmented reality (VR/AR), artificial intelligence (AI), the internet of things (IoT), building information modeling (BIM), and robotics. What we contemplate less often, however, is what happens when these technologies are combined.

Enter the world of teleoperation, which is the control of a machine or system from a physical distance. The concept of a remote-controlled machine is nothing new, but advances in AR and communication technologies are making teleoperability more sophisticated and commonplace. One ultimate goal of teleoperability is telepresence, which is commonly used to describe videoconferencing, a passive audiovisual experience. But increasingly, it also pertains to remote manipulation. Telerobotics refers specifically to the remote operation of semi-autonomous robots. These approaches all involve a human–machine interface (HMI), which consists of “hardware and software that allow user inputs to be translated as signals for machines that, in turn, provide the required result to the user,” according to Techopedia. As one might guess, advances in HMI technology represent significant potential transformations in building design and construction.

Tokyo-based company SE4 has created a telerobotics system that overcomes network lag by using AI to accelerate robotic control. Combining VR and computer vision with AI and robotics, SE4's Semantic Control system can anticipate user choices relative to the robot’s environment. “We’ve created a framework for creating physical understanding of the world around the machines,” said SE4 CEO Lochlainn Wilson in a July interview with The Robot Report. “With semantic-style understanding, a robot in the environment can use its own sensors and interpret human instructions through VR.”

Developed for construction applications, the system can anticipate potential collisions between physical objects, or between objects and the site, as well as how to move objects precisely into place (like the “snap” function in drawing software). Semantic Control can also accommodate collaborative robots, or “cobots,” to build in a coordinated fashion. “With Semantic Control, we’re making an ecosystem where robots can coordinate together,” SE4 chief technology officer Pavel Savkin said in the same article. “The human says what to do, and the robot decides how.”
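
To make the “snap” idea concrete, here is a minimal sketch, in Python, of the kind of logic such a system implies: snap a requested placement onto a grid, then reject it if it would collide with anything already on site. SE4’s Semantic Control is proprietary, so every name and number below is an assumption for illustration, not the company’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box for a physical object on site (metres)."""
    x: float
    y: float
    w: float
    h: float

    def overlaps(self, other: "Box") -> bool:
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

def snap_to_grid(value: float, grid: float = 0.1) -> float:
    """Snap a coordinate to the nearest grid line, like a CAD 'snap' function."""
    return round(value / grid) * grid

def plan_placement(requested: Box, placed: list[Box], grid: float = 0.1) -> Box | None:
    """Snap a requested placement to the grid; refuse it if it would collide."""
    candidate = Box(snap_to_grid(requested.x, grid), snap_to_grid(requested.y, grid),
                    requested.w, requested.h)
    if any(candidate.overlaps(existing) for existing in placed):
        return None  # anticipated collision: the robot would refuse or replan
    return candidate

# Usage: try to place a panel next to an existing column.
column = Box(0.0, 0.0, 0.4, 0.4)
print(plan_placement(Box(0.33, 0.02, 1.2, 0.1), [column]))  # snaps into the column -> None
print(plan_placement(Box(0.58, 0.02, 1.2, 0.1), [column]))  # snaps to x=0.6, clear of the column
```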

Eventually, machines may be let loose to construct buildings alongside humans. Despite the significant challenges robotics manufacturers have faced in creating machines that match the mobility and agility of the human body, Waltham, Mass.–based Boston Dynamics has made tremendous advances. Its Atlas humanoid robot, made of 3D-printed components for lightness, employs a compact hydraulic system with 28 independently powered joints. It can move up to a speed of 4.9 feet per second. Referring to Boston Dynamics’ impressive feat, Phil Rader, University of Minnesota VR research fellow, tells ARCHITECT that “the day will come when robots can move freely around and using AI will be able to discern the real world conditions and make real-time decisions.” Rader, an architectural designer who researches VR and telerobotics technologies, imagines that future job sites will likely be populated by humans as well as humanoids, one working alongside the other. The construction robots might be fully autonomous, says Rader, or “it's possible that the robot worker is just being operated by a human from a remote location.”

IBM
A biweekly tour of the ever-expanding cartographic landscape.

In 2014, researchers from the University of Washington announced that pairing Google StreetView with a cluster of “smart” surveillance cameras allowed them to create “a self-organized and scalable multiple-camera tracking system that tracks humans across the cameras.”

In so many words, they showed that it was possible to build a dynamic, near real-time visualization of pedestrians and traffic flows, projected onto a 360-degree map of the world. A bit of machine-learning software helped erase any seams. This was an early proof of concept in an urban setting of a technological model now known as a “digital twin.”

“Digital twin” is a creepy-sounding phrase, conjuring visions of pixelated doppelgangers haunting your every step. It doesn’t necessarily describe an all-out surveillance state, though: In some ways, this is an extension of the 3-D computer models that architects and engineers use to help plan a building, or maneuver the inner workings of a car engine before they hit the factory.

But the big difference with what the UW researchers were doing is that they were feeding real-time, real-world data into the digital platform, enabling an exact virtual simulacrum of physical streets. What’s more, AI enabled the virtual world to respond to the projected movements in a way that made it seem more real. This technology has taken off in the years since: IBM, Microsoft, HERE Maps, and Descartes Labs are all working toward building “digital twin” technologies for different uses, including for city planning.
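
The pattern the researchers demonstrated can be sketched in a few lines: a stream of real-world observations continuously updates a virtual model that can then be queried. The event schema and class names below are illustrative assumptions, not any vendor’s actual digital-twin product.

```python
import time
from collections import defaultdict

class StreetTwin:
    """A toy digital twin: real-world observations keep a virtual street model current."""

    def __init__(self):
        # street segment id -> latest observed state
        self.segments = defaultdict(lambda: {"pedestrians": 0, "vehicles": 0, "updated": None})

    def ingest(self, event: dict) -> None:
        """Apply one observation (e.g. from a camera or sensor feed) to the model."""
        seg = self.segments[event["segment_id"]]
        seg["pedestrians"] = event.get("pedestrians", seg["pedestrians"])
        seg["vehicles"] = event.get("vehicles", seg["vehicles"])
        seg["updated"] = event.get("timestamp", time.time())

    def busiest(self, n: int = 3):
        """Query the twin, e.g. for the segments with the heaviest foot traffic."""
        return sorted(self.segments.items(),
                      key=lambda kv: kv[1]["pedestrians"], reverse=True)[:n]

twin = StreetTwin()
for event in [{"segment_id": "5th-and-pine", "pedestrians": 42, "vehicles": 7},
              {"segment_id": "4th-and-pike", "pedestrians": 18, "vehicles": 11}]:
    twin.ingest(event)
print(twin.busiest(1))
```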

For local governments, the benefits could be big. Already, a number of Indian cities have adopted “digital twin” software to help manage water and energy infrastructure. In the U.K., researchers at Newcastle University built a digital twin of their city to help it better respond to flooding.

And the bylaws of the Open Mobility Foundation, a global nonprofit recently established to help cities govern the future of mobility data, state that a “digital twin” is the “only way” for cities to get control over the scooters, ride-hailing cars, and other conveyances clogging their streets. It describes how a digital replica of city streets could quickly model how, say, switching traffic signals to prioritize a speeding ambulance would affect other vehicle flows and what transportation officials would need to adjust in order to manage them.
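
A digital twin’s appeal is that such questions become cheap experiments. The toy model below, with made-up arrival rates and signal timings, hints at how a replica of a single intersection could estimate the knock-on delay of giving an ambulance corridor a longer green.

```python
def simulate(green_ns: int, green_ew: int, minutes: int = 10,
             arrivals_ns: float = 8.0, arrivals_ew: float = 6.0):
    """Fixed-time signal model: cars/minute arrive on each street; each direction
    can discharge up to 12 cars per minute of green. Returns the queue left behind."""
    queue_ns = queue_ew = 0.0
    share_ns = green_ns / (green_ns + green_ew)
    for _ in range(minutes):
        queue_ns = max(0.0, queue_ns + arrivals_ns - 12.0 * share_ns)
        queue_ew = max(0.0, queue_ew + arrivals_ew - 12.0 * (1 - share_ns))
    return queue_ns, queue_ew

print("normal timing:      ", simulate(green_ns=30, green_ew=30))
print("ambulance priority: ", simulate(green_ns=50, green_ew=10))  # cross street backs up
```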

On the other hand, the privacy implications of such a paradigm are pretty big. Who says a city should have that much oversight into the individual movements of every vehicle on the road? How much personally identifiable information would that require a city to absorb and own, and for how long? Players in the world of transportation technology are asking these questions now, as the public officials who head up the Open Mobility Foundation convene for their first board meeting next week. We’ll see what they have to say. (And watch for my story with more about digital twins, later this week in CityLab.)
Architect Magazine
From 89 submissions, the jury picked eight entries that prove architects can be at the helm of innovation, technology, and craft.

Do we control technology or does technology control us? Never has that question seemed more apt than now. The use of computational design, digital manufacturing, and artificial intelligence, if mismanaged, can have frightening consequences, the implications of which society is just beginning to comprehend. But the jury for ARCHITECT’s 13th annual R+D Awards was determined to accentuate the positive side of these advancements, seeking the best examples that “melded technology, craft, and problem-solving,” says Craig Curtis, FAIA.

The eight winners selected by Curtis and fellow jurors James Garrett Jr., AIA, and Carrie Strickland, FAIA, prove that designers can remain solidly in the driver’s seat despite the frenetic pace of technological developments in the building industry and beyond. “Architects are anticipating the future, helping to shape it, and giving it form,” Garrett says. “Moving forward, we are not going to be left behind. We are going to be a part of the conversation.”

JURY

Craig Curtis, FAIA, is head of architecture and interior design at Katerra, where he helped launch the now 300-plus-person design division of the Menlo Park, Calif.–based technology company and oversees the development of its configurable, prefabricated building platforms. Previously, he was a senior design partner at the Miller Hull Partnership, in Seattle.

James Garrett Jr., AIA, is founding partner of 4RM+ULA, a full-service practice based in St. Paul, Minn., that focuses on transit design and transit-oriented development. A recipient of AIA’s 2019 Young Architects Award, he is also an adjunct professor at the University of Minnesota School of Architecture, a visual artist, a writer, and an advocate for increasing diversity in architecture.

Carrie Strickland, FAIA, is founding principal of Works Progress Architecture, in Portland, Ore., where she is an expert in the design of adaptive reuse and new construction projects and works predominantly in private development. She has also taught at Portland State University and the University of Oregon, and served on AIA Portland’s board of directors.
ZGF Architects LLP
ZGF is analyzing how employees use its Seattle office with computer-vision software

ZGF Architects LLP is testing a computer-vision system in house to see if the technology can help it design office space better. If the pilot goes well, the firm plans to offer the service to clients.

The Portland, Ore.-based architectural firm, which has done work for Amazon.com Inc., Microsoft Corp. and Stanford University, assesses how clients use office space through surveys and staff observations. It is turning to computer vision to collect more-granular details on how people move around and use amenities. The hope is that more accurate data will allow the firm to make informed decisions on how wide stairways should be or the size and number of conference rooms a client needs, for instance.

“Being able to quantify what needs to go into building—rather than roughing it or building something bigger than it needs to be—means we can be more precise about how we design things,” said Dane Stokes, who leads the five-person ZGF computational design team that’s managing the pilot at the company’s Seattle office.

The computer-vision system under testing currently consists of four cameras that feed footage into object-recognition software.

The company has been moving those cameras around hallways and 12 conference rooms in a 39,000-square-foot office spread across two floors and connected by two stairways. It is testing the optimal placement for counting people and assessing the system’s effectiveness in recognizing objects such as office chairs and cellphones.

ZGF’s computer-vision trial illustrates how businesses are discovering new uses for artificial intelligence.

Very few architectural firms tap computer vision to analyze how office space is used, said Stanislas Chaillou, an architect and data scientist at Oslo-based property-technology company Spacemaker. The move could give ZGF a competitive edge, particularly when bidding on remodeling projects, he added.

“And as the space is being analyzed and the client sees that there is value to remodel the space, then that firm will be the company they call,” Mr. Chaillou said.

Computer-vision systems use machine learning to identify images. ZGF is using the open-source software programs OpenCV and YOLO, which recognize thousands of objects, such as humans and electronic devices, up to roughly 50 feet away from the camera. The data is then fed into a visualization program that creates a 3-D representation of the space, its objects and occupants, and their movement.
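
A rough sketch of that pipeline, using OpenCV’s DNN module with a standard YOLO model to count people in a frame, looks like the following. The weight and config file names are placeholders, and ZGF’s actual setup, models, and thresholds are not public.

```python
import cv2
import numpy as np

# Placeholder model files: a standard Darknet YOLO config and weights.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()
PERSON_CLASS_ID = 0  # "person" in the COCO label set used by standard YOLO models

def count_people(frame: np.ndarray, conf_threshold: float = 0.5) -> int:
    """Run one frame through the detector and count 'person' detections."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    people = 0
    for output in net.forward(layer_names):
        for detection in output:           # [cx, cy, w, h, objectness, class scores...]
            scores = detection[5:]
            if np.argmax(scores) == PERSON_CLASS_ID and scores[PERSON_CLASS_ID] > conf_threshold:
                people += 1
    return people

cap = cv2.VideoCapture(0)                  # a camera feed; frames are analyzed, not saved
ok, frame = cap.read()
if ok:
    print("people in view:", count_people(frame))
cap.release()
```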

ZGF is working on integrating data from the computer-vision system with information collected from workers, including their feedback on environmental factors such as lighting and acoustics and their satisfaction about the availability of amenities such as conference rooms. The feedback will give ZGF architects a more complete picture of how people utilize and feel about a space.

One challenge the company is working through is getting its employees to trust the system, Mr. Stokes said, given the public concerns about intelligent cameras and privacy. Mr. Stokes said the system doesn’t utilize facial recognition; it only identifies people as “humans.” It also doesn’t record video: the system analyzes footage in real time but doesn’t save the images it captures, only the related data. To help allay any concerns during the internal trial, Mr. Stokes said he gave employees a demonstration of the system’s capabilities.

Even with all those steps, he said, clients may be hesitant to opt in to the system. “Working with clients is going to be interesting,” he said. “We’re trying to answer questions with our own staff so that we can speak more confidently about deploying it in our client spaces.”

He added: “As we go through this quest to learn more about how our spaces work, we’ve learned the ethics of how we should track that data [and] how we can get a better data set without compromising people’s comfort about the technology. We don’t want to get all ‘Big Brother’ on people.”

This summer, ZGF plans to launch an external trial on an undisclosed university-affiliated research center it designed, which has about 80,000 square feet of labs, classrooms and collaboration spaces.
Civil + Structural Engineer
Data Center Powerhouse ScaleMatrix has a Message for the AEC Industry: Bring it On.

Foreseeing the time when AEC firms will face data management issues caused by the mainstream implementation of AI and machine learning, California-based ScaleMatrix says it will be ready.

Mark Ortenzi and Chris Orlando, the high-performing masterminds and co-founders of ScaleMatrix, have invented a hybrid air/liquid cooled cabinet built to house virtually any hardware needed for an organization’s computing needs. With built-in logic, the cabinets are efficient, high-density, closed-loop, and fully modular. And compared to the installation of a traditional data center, ScaleMatrix can reduce the deployment time by as much as 75 percent, a deployment that is measured in days, not months or even years. If this cabinet is the meteorite, the old data center systems are the dinosaurs.

The ScaleMatrix cabinet has the ability to scale from 1kW to 52kW of workload, and it can handle anything an AEC firm can produce, especially as the industry has yet to employ AI and other cognitive technologies on a meaningful scale. However, with AI technology expected to boom in the coming years, that will probably change as engineering firms follow the lead of more progressive segments of the economy.

In a nutshell, data growth leads to compute and density increases – more processors – which leads to more power outputs, and thus increased heat, which leads to heightened cooling requirements. In the old days, the raised floor, the wind tunnel, and the chilled room were sufficient. Ortenzi and Orlando know all about it, because it was in the data center industry where they cut their teeth and made their names. But even as they flourished in that industry, they also saw the need for disruption.

“I wanted to invent a better mousetrap,” Ortenzi said.

Or, as Orlando likes to say, “If you want a cold beer, you don’t put it into a cold room. You put it in the refrigerator.”

ScaleMatrix has important partnerships with leading companies like Hewlett Packard Enterprise and NVIDIA – it is a select partner in NVIDIA’s DGX-ready data center program – and now operates data centers in San Diego, Seattle, Houston, Charlotte, and Jacksonville. The company recently upped the ante with the acquisition of Instant Data Centers, a deal that adds ruggedized micro-data centers that can function on the edge – near the action and in remote locations, like a mine.

Even though the technology behind what ScaleMatrix does is perhaps dizzying, the philosophy is quite simple.

“Everything we do in this business is power and cooling,” Ortenzi said. “Next to labor, power is the biggest expense. It takes so many amps to cool so many amps. It takes so many watts to cool so many watts.”
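
A back-of-envelope calculation makes the point. In the sketch below, total facility power is the IT load multiplied by a Power Usage Effectiveness (PUE) factor; the overhead figures are illustrative assumptions, not ScaleMatrix measurements.

```python
def facility_power_kw(it_load_kw: float, cooling_overhead: float) -> float:
    """Total facility power = IT load * PUE, where PUE = 1 + cooling/ancillary overhead."""
    pue = 1.0 + cooling_overhead          # Power Usage Effectiveness
    return it_load_kw * pue

rack_kw = 52.0                            # a fully loaded high-density cabinet
for label, overhead in [("traditional room cooling", 0.8),
                        ("close-coupled cabinet cooling", 0.15)]:
    total = facility_power_kw(rack_kw, overhead)
    print(f"{label}: {total:.1f} kW total for {rack_kw:.0f} kW of compute "
          f"({total - rack_kw:.1f} kW spent on cooling and overhead)")
```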

The cabinets have built-in logic that responds to usage requirements, making the variable system “one big, breathing animal that modulates based on requirements,” Ortenzi said. The ScaleMatrix design includes full cooling support, redundant power supply, fire suppression, and integrated network support. When one cabinet gets filled up, just add another one. While ScaleMatrix at first offered cloud and colocation services, it has since added another distinct business line, the DDC™ cabinet for companies that want them for their own data centers.

While the reaction from the market has certainly been favorable – ScaleMatrix had 2018 combined sales of about $20 million and employs 52 people – it wasn’t necessarily instant and overwhelming.

“That’s a great novelty, but who needs that?” Ortenzi said, referring to the initial reaction he and Orlando got when they introduced a system that could handle such a heavy workload.

But all that changed about two years ago, when AI and machine learning came in from the fringe and entered the mainstream. Seemingly overnight, companies were dealing with more data than ever, and ScaleMatrix started fielding calls from all across the country, and even the world.

“All of a sudden, two years ago, all hell breaks loose and no one knew what to do,” Ortenzi said. “We’ve set ourselves up to be in a position to help people. Where else are they going to go?”
Microsoft
In a broad new set of sustainability commitments, the company wants to use its tech to develop tools to monitor and find insights in environmental data.

In 2012, before declaring your company “carbon neutral” was de rigueur, Microsoft committed to that standard across its operations. Since then, Microsoft has continued to take steps toward cleaning up its own act, purchasing enough green power to equal its electricity consumption, investing in reforestation projects, and setting the target of reducing its emissions 75% by 2030.

Even though Microsoft has worked diligently to advance sustainable practices, its approach, says Lucas Joppa, the company’s chief environmental officer, has remained fairly internal. “We’ve been so focused on reducing the environmental footprint of our own operations–that was really the traditional focus,” Joppa says. Now, the company feels that it’s time to expand its approach. Through a new set of sustainability commitments, Microsoft wants to turn its sustainability efforts outward, through making its artificial intelligence and tech tools more widely available for use in environmental research, and through new research and advocacy efforts in the environmental field.

“The reason we’re doing this is almost perfectly correlated with impatience,” Joppa says. “The reality shows that no matter how successful we are, sustainability actions inside of our own four walls are entirely insufficient for moving the world toward an environmentally sustainable future.” The same logic applies across the corporate world: No matter how much an individual company works to achieve personal sustainability goals, it’s not going to create the kind of large-scale change we need to combat climate change.

Microsoft’s plan is to turn what it does well–technology and AI–outward to support climate action. It will aggregate and host environmental data sets on its cloud platform, Azure, and make them publicly available (it’s also using AI to make its Azure data centers run more efficiently). Those data sets, according to Microsoft, are too large for researchers to use without advanced cloud computing, and hosting them on Azure should ease that issue.

The company will also scale up the work it does with other nonprofits and companies tackling environmental issues through a data lens. Microsoft has already worked in concert with the water management company Ecolab to develop a tool to assess and monetize a company’s water usage, and how much they would save–both in financial and environmental terms–by driving down their consumption and waste. They’ll also work with The Yield, a company that uses sensors to assess weather and conditions for farmers, to improve the operations of their tools and equip them with AI that will help them predict weather patterns and soil conditions in advance. And they’re equipping SilviaTerra, a startup that uses AI to monitor global forest populations, with the tools it needs to store and analyze vast amounts of data.

Alongside these partnerships, Microsoft is also working to prove that these types of data-driven projects can deliver enormous benefits to both the environment and the economy. Through research conducted with PwC, Microsoft looked at how AI could be applied across four sectors with implications for the planet: agriculture, water, energy, and transportation. “Even just for a few different sectors, and a few different levers in those sectors, a rapid adoption of AI-based technology has the potential to not only make significant gains for the environment, but also for the GDP overall,” Joppa says. Microsoft found that advancing AI usage across those four sectors could boost global GDP by as much as 4.4% by 2030, and reduce greenhouse gas emissions by around 4% in the same time period. “We need to get past the idea that acting on climate will slow economic growth,” Joppa says.
Nvidia
Today at Nvidia GTC 2019, the company unveiled a stunning image creator. Using generative adversarial networks, users of the software can, with just a few clicks, sketch images that are nearly photorealistic. The software will instantly turn a couple of lines into a gorgeous mountaintop sunset. This is MS Paint for the AI age.

Called GauGAN, the software is just a demonstration of what’s possible with Nvidia’s neural network platforms. It’s designed to compose an image the way a human would paint it, with the goal of turning a sketch into a photorealistic image in seconds. In an early demo, it seems to work as advertised.

GauGAN has three tools: a paint bucket, pen and pencil. At the bottom of the screen is a series of objects. Select the cloud object and draw a line with the pencil, and the software will produce a wisp of photorealistic clouds. But these are not image stamps. GauGAN produces results unique to the input. Draw a circle and fill it with the paint bucket and the software will make puffy summer clouds.

Using the input tools, users can draw the shape of a tree and the software will produce a tree. Draw a straight line and it will produce a bare trunk. Draw a bulb at the top and the software will fill it in with leaves, producing a full tree.

GauGAN is also multimodal. If two users create the same sketch with the same settings, random numbers built into the project ensure that the software creates different results.
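
Conceptually, that behavior comes from conditioning the generator on both the semantic sketch and a random noise vector, so identical sketches with different noise produce different images. The toy PyTorch generator below illustrates the idea at a fraction of GauGAN’s scale; its layers and shapes are assumptions for illustration, not Nvidia’s architecture.

```python
import torch
import torch.nn as nn

class TinyConditionalGenerator(nn.Module):
    """Toy stand-in for a conditional GAN generator: semantic map + noise -> image."""

    def __init__(self, num_classes: int = 8, noise_dim: int = 16):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Conv2d(num_classes + noise_dim, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),   # 3-channel RGB output
            nn.Tanh(),
        )

    def forward(self, label_map: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
        # Broadcast the noise vector across every pixel and concatenate it with
        # the one-hot semantic map before generating the image.
        b, _, h, w = label_map.shape
        noise_planes = noise.view(b, self.noise_dim, 1, 1).expand(b, self.noise_dim, h, w)
        return self.net(torch.cat([label_map, noise_planes], dim=1))

gen = TinyConditionalGenerator()
sketch = torch.zeros(1, 8, 64, 64)
sketch[:, 2, :32, :] = 1.0                 # e.g. "sky" in the top half of the canvas
sketch[:, 5, 32:, :] = 1.0                 # e.g. "water" in the bottom half
img_a = gen(sketch, torch.randn(1, 16))
img_b = gen(sketch, torch.randn(1, 16))    # same sketch, different noise -> different image
print(img_a.shape, torch.allclose(img_a, img_b))  # torch.Size([1, 3, 64, 64]) False
```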

In order to have real-time results, GauGAN has to run on a Tensor computing platform. Nvidia demonstrated this software on a Titan RTX GPU platform, which allowed it to produce results in real time. The operator of the demo was able to draw a line and the software instantly produced results. However, Bryan Catanzaro, VP of Applied Deep Learning Research, stated that with some modifications, GauGAN can run on nearly any platform, including CPUs, though the results might take a few seconds to display.

In the demo, the boundaries between objects are not perfect, and the team behind the project says this will improve. There is a slight line where two objects touch. Nvidia calls the results photorealistic, but under scrutiny they don’t quite hold up. Neural networks currently struggle with the gap between the objects they were trained on and what they are asked to generate; this project hopes to narrow that gap.

Nvidia turned to 1 million images on Flickr to train the neural network. Most came from Flickr’s Creative Commons, and Catanzaro said the company only uses images with permission. The company says this program can synthesize hundreds of thousands of objects and their relation to other objects in the real world. In GauGAN, change the season and the leaves will disappear from the branches. Or if there’s a pond in front of a tree, the tree will be reflected in the water.

Nvidia will release the white paper today. Catanzaro noted that it was previously accepted to CVPR 2019.

Catanzaro hopes this software will be available on Nvidia’s new AI Playground, but says there is a bit of work the company needs to do in order to make that happen. He sees tools like this being used in video games to create more immersive environments, but notes Nvidia does not directly build software to do so.
Lauren Nassef
It's not a matter of if the architecture profession will feel the impacts of artificial intelligence—it's a matter of when.

“Self-driving cars can identify objects as they drive,” a video from the company Smartvid.io proclaims. “What if we could bring this ability to the industrial world?” The Cambridge, Mass.–based outfit has developed technology to do just that: It offers software that analyzes huge amounts of data—in the form of photos and videos from construction sites—to identify safety risks that might not be evident to a human observer. It tags, for example, workers who are missing hard hats and types of ladders considered risky, promising to help “reinforce safety culture.”
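
The underlying rule can be surprisingly simple once a detector has labeled the objects in a photo. The sketch below flags any detected person whose head region overlaps no detected hard hat; the box formats and thresholds are assumptions for illustration, and Smartvid.io’s actual models are proprietary.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def missing_hard_hats(people, hats, min_iou=0.1):
    """Return person boxes whose upper quarter overlaps no detected hard hat."""
    flagged = []
    for p in people:
        head = (p[0], p[1], p[2], p[1] + (p[3] - p[1]) * 0.25)  # top of the person box
        if not any(iou(head, h) >= min_iou for h in hats):
            flagged.append(p)
    return flagged

people = [(100, 50, 160, 250), (300, 60, 360, 260)]   # detector output: person boxes
hats = [(105, 45, 155, 85)]                           # detector output: hard-hat boxes
print(missing_hard_hats(people, hats))                # flags the second worker only
```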

“The risks might not be obvious right away, but when you look at the total data, it emerges,” says Imdat As, an expert in the rise of artificial intelligence in the field of architecture and founder of Arcbazar, a competition platform for architectural design projects. As notes that this type of artificial intelligence used by Smartvid.io—called deep learning—is an early application of what we’ll see from AI in architecture more broadly, such as computer tools that will offer alternative design solutions.

Many architects are excited about these opportunities, and some large firms are exploring the latest technology. But what about smaller firms? According to the AIA's 2018 Firm Survey Report, 75.8 percent of firms have one to nine employees. How will these smaller outfits, with smaller budgets, confront the rise of AI? Though smaller firms may face resource challenges, as artificial intelligence tools become more widespread and less expensive, they perhaps stand to benefit the most.

From Automation to Artificial Intelligence

Already, architects are increasingly using technology to automate the quantifiable aspects of architecture, such as apps that give a designer almost instant access to zoning rules or building codes in a certain area. But this isn’t AI, explains As, noting that the way we think about AI today stems from work that began accelerating in 2011 because of better and cheaper computers, as well as increasing amounts of available data. “Ninety percent of all data available in the world has been produced in the last two years,” he says.

Artificial intelligence thus doesn’t merely automate a task by serving as an efficient clearinghouse of data; rather, it analyzes data and generates new ideas or solutions, similar to how a human mind would approach a problem. Hence, there is a need for more and better data from which machines can learn.

While most of the currently popular AI applications involve the processing of text, audio, and images—such as what self-driving cars and Smartvid.io’s construction software do—As says new forms of AI tools that can learn from different data sources, such as drawings, are on their way for architects. (Other forms of AI research that are not data-driven, such as evolutionary algorithms, also might someday provide alternative solutions to architectural issues.)

In the future, for instance, architects will likely be able to tell a program that they want a house for a family with two children and a dog that must also be handicapped-accessible. Though the system can theoretically generate millions of examples, it will narrow them down to the dozens that it “thinks” are best, and the designer can further develop one or more of those.
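
A toy version of that narrowing step might look like the sketch below: generate many candidate layouts, score each against the brief, and surface only the top dozen for the designer to refine. The brief fields and scoring weights are invented for illustration; real generative-design tools use far richer models and constraints.

```python
import random

BRIEF = {"bedrooms": 3, "step_free": True, "dog_run": True}

def random_layout():
    """Stand-in for a generative model proposing one candidate layout."""
    return {"bedrooms": random.randint(1, 5),
            "step_free": random.random() < 0.5,
            "dog_run": random.random() < 0.5,
            "circulation_area": random.uniform(10, 40)}   # m^2 spent on hallways

def score(layout):
    """Score a candidate against the brief; higher is better."""
    s = 0.0
    s += 5.0 if layout["bedrooms"] >= BRIEF["bedrooms"] else 0.0
    s += 4.0 if layout["step_free"] == BRIEF["step_free"] else 0.0
    s += 2.0 if layout["dog_run"] == BRIEF["dog_run"] else 0.0
    s -= layout["circulation_area"] * 0.05                # prefer compact circulation
    return s

candidates = [random_layout() for _ in range(100_000)]    # "millions" in principle
shortlist = sorted(candidates, key=score, reverse=True)[:12]
print(shortlist[0])                                       # best candidate for the designer to refine
```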
Misty Robotics
Misty II is a development platform for engineers and makers that was created to change how we think about robots.

Developers may remember a time when you'd boot up your computer and all you'd get was a blank screen and blinking cursor. It was up to engineers and coders to build the content; the computer was just a platform. Ian Bernstein, founder and head of product at Misty Robotics, believes robots today are in that same place that computers were decades ago. “We're at that same point with robots today, where people are just building robots over and over with Raspberry Pis and Arduinos,” Bernstein told Design News.

Bernstein is calling for a departure from thinking of robots as tools and machines to thinking of them more as platforms. Misty Robotics has designed its flagship robot of the same name, Misty, with that idea in mind. “It's about giving people enough functionality to start to do useful things—but not too much, where it becomes too expensive or complicated,” Bernstein said. “It's also about complexity. For developers, it is not approachable if you don't know where to start.”

Boulder, Colorado-based Misty Robotics' upcoming product, Misty II, is a 2-ft-tall, 6-lb robot. It is designed to do what the smartphone has done for mobile app developers, but for robotics engineers and makers—provide access to powerful features to open up the robot for a variety of applications. At its core, Misty II is driven by a deep learning processor capable of a variety of machine learning tasks, such as facial and object recognition, distance detection, spatial mapping, and sound and touch sensing. Developers can also 3D print (or even laser cut or CNC machine) custom parts to attach to Misty to expand its functionality for moving and manipulating objects. Misty II will also feature USB and serial connectors as well as an optional Arduino attachment to allow for hardware expansion with additional sensors and other peripherals. (One planned for release by the company is a thermal imaging camera.)

There are already several single-purpose robots available to consumers to use in the home. People will be most familiar with the Roomba robotic vacuum, but there are also robotic window washers, lawnmowers, security guards, and even pool cleaners currently available.

Speaking with Design News ahead of CES 2019, where Misty II was available for hands-on demonstrations, Bernstein said that, while the idea of a smart home full of connected robots all going about their various tasks sounds like the wave of the future, he doesn't find this
McKinsey
How do the best design performers increase their revenues and shareholder returns at nearly twice the rate of their industry counterparts?

We all know examples of bad product and service design. The USB plug (always lucky on the third try). The experience of rushing to make your connecting flight at many airports. The exhaust port on the Death Star in Star Wars.

We also all know iconic designs, such as the Swiss Army Knife, the humble Google home page, or the Disneyland visitor experience. All of these are constant reminders of the way strong design can be at the heart of both disruptive and sustained commercial success in physical, service, and digital settings.

Despite the obvious commercial benefits of designing great products and services, consistently realizing this goal is notoriously hard—and getting harder. Only the very best designs now stand out from the crowd, given the rapid rise in consumer expectations driven by the likes of Amazon; instant access to global information and reviews; and the blurring of lines between hardware, software, and services. Companies need stronger design capabilities than ever before.

So how do companies deliver exceptional designs, launch after launch? What is design worth? To answer these questions, we have conducted what we believe to be (at the time of writing) the most extensive and rigorous research undertaken anywhere to study the design actions that leaders can make to unlock business value. Our intent was to build upon, and strengthen, previous studies and indices, such as those from the Design Management Institute.