Arup
Arup, a global consulting engineering firm, recently welcomed clients and partners to its 65,000-sf, four-floor Toronto offices to unveil two new ‘incubators,’ the Maker’s and Pegasus Labs.

Maker’s Lab facilitates modelling, production, assembly and prototyping. The open collaboration space is equipped with a laser cutter, 3-D printers, manual tools and common materials like wood, composites, plastics, light metals and cardstock. Arup encourages using discarded materials for sketch models and early concepts or prototypes.

Pegasus Lab, meanwhile, is dedicated to experiential design through digital engineering workflows and visualizations of operational processes and designs. It features virtual reality (VR), gesture recognition, artificial intelligence (AI), machine learning, video analytics, augmented reality (AR) and Arup’s own Neuron ‘smart building’ platform.

“Arup was the first firm to embrace digital engineering in 1957 during the design of the Sydney Opera House by using the Pegasus computer,” explains Justin Trevan, the company’s digital technology consulting and advisory services leader for Canada. “Today, the firm continues to innovate for efficient, sustainable and economical solutions.”

In addition to live demos in the two new labs, guests experienced such installations as Motion Platform, which allows users to feel the vibrations of a building while it is still on the drawing board, and Mobile Sound Lab, an immersive audiovisual (AV) environment with simulations of both existing and as-yet-unbuilt spaces.
MIT
Blaine Brownell explores emergent teleoperation and telerobotics technologies that could revolutionize the built environment.

Design practitioners have become familiar with an array of evolving technologies such as virtual and augmented reality (VR/AR), artificial intelligence (AI), the internet of things (IoT), building information modeling (BIM), and robotics. What we contemplate less often, however, is what happens when these technologies are combined.

Enter the world of teleoperation: the control of a machine or system from a physical distance. The concept of a remote-controlled machine is nothing new, but advances in AR and communication technologies are making teleoperability more sophisticated and commonplace. One ultimate goal of teleoperability is telepresence, a term commonly used to describe videoconferencing, a passive audiovisual experience. But increasingly, it also pertains to remote manipulation. Telerobotics refers specifically to the remote operation of semi-autonomous robots. These approaches all involve a human–machine interface (HMI), which consists of “hardware and software that allow user inputs to be translated as signals for machines that, in turn, provide the required result to the user,” according to Techopedia. As one might guess, advances in HMI technology promise significant transformations in building design and construction.
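As a minimal illustration of that input-to-signal translation, the sketch below (in Python) simulates an operator driving a remote machine through a laggy channel. The `RemoteArm` stand-in and the three-tick delay are invented for illustration; no real robot API is implied.

```python
import collections

LAG_TICKS = 3  # simulated network delay between operator and machine

class RemoteArm:
    """Stand-in for a tele-operated machine with one scalar joint."""
    def __init__(self):
        self.angle = 0.0

    def apply(self, delta):
        self.angle += delta

def teleoperate(inputs):
    """Feed operator inputs through a delayed channel to the arm."""
    channel = collections.deque([0.0] * LAG_TICKS)  # commands in flight
    arm = RemoteArm()
    for delta in inputs:
        channel.append(delta)          # operator issues a command
        arm.apply(channel.popleft())   # a command from LAG_TICKS ago lands
    return arm.angle

print(teleoperate([1.0, 1.0, 1.0, 0.0, 0.0, 0.0]))  # -> 3.0
```

Every command eventually lands, but the operator is always acting on stale feedback; that lag is precisely the problem the system described below attempts to solve.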

Tokyo-based company SE4 has created a telerobotics system that overcomes network lag by using AI to accelerate robotic control. Combining VR and computer vision with AI and robotics, SE4's Semantic Control system can anticipate user choices relative to the robot’s environment. “We’ve created a framework for creating physical understanding of the world around the machines,” said SE4 CEO Lochlainn Wilson in a July interview with The Robot Report. “With semantic-style understanding, a robot in the environment can use its own sensors and interpret human instructions through VR.”

Developed for construction applications, the system can anticipate potential collisions between physical objects, or between objects and the site, as well as how to move objects precisely into place (like the “snap” function in drawing software). Semantic Control can also accommodate collaborative robots, or “cobots,” to build in a coordinated fashion. “With Semantic Control, we’re making an ecosystem where robots can coordinate together,” SE4 chief technology officer Pavel Savkin said in the same article. “The human says what to do, and the robot decides how.”
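The “snap” analogy is easy to make concrete: quantize a commanded placement to the nearest grid point so that imprecise human input still yields a precise final position. The sketch below is a generic snap-to-grid quantizer with an arbitrary grid spacing, not SE4’s actual implementation.

```python
def snap(point, grid=0.5):
    """Quantize each coordinate to the nearest multiple of `grid`."""
    return tuple(round(c / grid) * grid for c in point)

# A sloppy hand-guided placement lands exactly on the grid:
print(snap((1.37, 2.61, 0.12)))  # -> (1.5, 2.5, 0.0)
```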

Eventually, machines may be let loose to construct buildings alongside humans. Despite the significant challenges robotics manufacturers have faced in creating machines that match the mobility and agility of the human body, Waltham, Mass.–based Boston Dynamics has made tremendous advances. Its Atlas humanoid robot, made of 3D-printed components for lightness, employs a compact hydraulic system with 28 independently powered joints. It can move at speeds of up to 4.9 feet per second. Referring to Boston Dynamics’ impressive feat, Phil Rader, University of Minnesota VR research fellow, tells ARCHITECT that “the day will come when robots can move freely around and using AI will be able to discern the real world conditions and make real-time decisions.” Rader, an architectural designer who researches VR and telerobotics technologies, imagines that future job sites will likely be populated by humans as well as humanoids, one working alongside the other. The construction robots might be fully autonomous, says Rader, or “it's possible that the robot worker is just being operated by a human from a remote location.”

Gensler
When Gensler employees come to work at the company’s new downtown offices, they’ll be able to set up in one of at least six workspaces. If they’re feeling stressed out, they can step into a “wellness room” to decompress. Those who bike to work will be able to take an elevator straight into the office, which will have its own bicycle storage.

“A lot of people ride their bikes to work and it seems like we’re getting even more, so we decided to accommodate a large number of bikes in the work area,” said Gensler’s Vince Flickinger, who was part of the team that designed the company's new space in 2 Houston Center.

The architecture firm signed a lease earlier this year for 50,000 square feet on two floors of the building at 909 Fannin, part of the larger Houston Center office complex on the eastern end of downtown. The company will relocate from Pennzoil Place once construction on the new space is complete.

San Francisco-based Gensler is known for its high-end corporate interiors. In recent years, its Houston office has implemented more of the design trends it studies and carries out for its clients, which include some of this region's top law practices, financial institutions and energy firms.

The new space will bring even more forward-thinking design.

About 70 percent of the 288-person Houston office will be devoted to so-called agile working, where employees can choose from a variety of workplace settings, whether it’s outside on a patio, in a huddle room or at a stand-up desk.

One section of the office will house mobile work stations that can be fully reconfigured. All workspaces throughout the office will have sit-to-stand capabilities.

“We like to see our office as a testing ground,” Flickinger said.

A design lab will include a makerspace with 3D printers, a virtual reality testing space and a shop area for making architectural models. The firm’s materials library will be twice the size of its current footprint in Pennzoil Place.

Employees will have access to a “sensory-lined wellness room” with adjustable light and sound systems to create a calming atmosphere. Gensler designers also plan to use the room for research on how sight, smell, touch and sound affect the workplace. Other quiet areas will encourage employees to relax without electronics.

“As you have more open areas, sometimes people just need to get away,” Flickinger said. “Not focus rooms or huddle rooms, but rooms for you to separate yourself from the working environment to get refreshed.”

Houston Center has its own amenities for tenants, including a fitness center, shops and restaurants. The complex is in the throes of its own renovation, which Gensler designed for landlord Brookfield.
Architect Magazine
From 89 submissions, the jury picked eight entries that prove architects can be at the helm of innovation, technology, and craft.

Do we control technology or does technology control us? Never has that question seemed more apt than now. The use of computational design, digital manufacturing, and artificial intelligence, if mismanaged, can have frightening consequences, the implications of which society is just beginning to comprehend. But the jury for ARCHITECT’s 13th annual R+D Awards was determined to accentuate the positive side of these advancements, seeking the best examples that “melded technology, craft, and problem-solving,” says Craig Curtis, FAIA.

The eight winners selected by Curtis and fellow jurors James Garrett Jr., AIA, and Carrie Strickland, FAIA, prove that designers can remain solidly in the driver’s seat despite the frenetic pace of technological developments in the building industry and beyond. “Architects are anticipating the future, helping to shape it, and giving it form,” Garrett says. “Moving forward, we are not going to be left behind. We are going to be a part of the conversation.”

JURY

Craig Curtis, FAIA, is head of architecture and interior design at Katerra, where he helped launch the now 300-plus-person design division of the Menlo Park, Calif.–based technology company and oversees the development of its configurable, prefabricated building platforms. Previously, he was a senior design partner at the Miller Hull Partnership, in Seattle.

James Garrett Jr., AIA, is founding partner of 4RM+ULA, a full-service practice based in St. Paul, Minn., that focuses on transit design and transit-oriented development. A recipient of AIA’s 2019 Young Architects Award, he is also an adjunct professor at the University of Minnesota School of Architecture, a visual artist, a writer, and an advocate for increasing diversity in architecture.

Carrie Strickland, FAIA, is founding principal of Works Progress Architecture, in Portland, Ore., where she is an expert in the design of adaptive reuse and new construction projects and works predominantly in private development. She has also taught at Portland State University and the University of Oregon, and served on AIA Portland’s board of directors.
Squint/Opera
Plus, Katerra offers an update on its K90 project in Las Vegas, Google pledges $1 billion toward affordable housing in the Bay Area, and more design-tech news from this week.

Bjarke Ingels Group (BIG) and UNStudio are working with digital agency Squint/Opera on the development of Hyperform, a design platform that facilitates collaboration in 3D augmented reality. Initially prototyped last year, Hyperform allows multiple users to work in scale models as well as immersive 1:1 environments. Users can also create still renderings as well as video recordings. "In the future every physical object will be connected to one another, sensing each other and everything in between," BIG founder Bjarke Ingels said in a press release. "For every physical object there will be a digital twin. For every physical space a virtual space. Hyperform is the augmented creative collaborative environment of the future which will allow an instantaneous confluence of actual and imagined realities—the present and the future fusing in our augmented sense of reality." [Squint/Opera]

In its latest project, New York–based SoftLab has created a "circular constellation" in Manhattan’s Seaport District that features 100 sensor-enabled glowing poles that emit different colors and sounds based on visitors' touch. [ARCHITECT]

This week, tech giant Google pledged to invest $1 billion in land and money to construct houses to help ease the housing crisis in the Bay Area. Over the next 10 years, the company has promised to convert $750 million of its land that is currently zoned for commercial development into residential property for some 15,000 new houses. Additionally, Google will establish a $250 million investment fund to assist developers in creating 5,000 affordable housing units. "In the coming months, we’ll continue to work with local municipalities to support plans that allow residential developers to build quickly and economically," the company writes in a press release. "Our goal is to get housing construction started immediately, and for homes to be available in the next few years." [Google]

Menlo Park, Calif.–based technology and construction company Katerra has released an update on K90—its ambitious garden apartment project in Las Vegas that the company is aiming to complete in 90 days. While slab-up construction typically takes 120 to 150 days, Katerra believes it can deliver in a little over half that time using proprietary tools such as a material auditing app that alerts construction teams to incoming materials—which are delivered directly to the installation point rather than a general project-site drop-off—wall panels with pre-installed electrical wiring, and its bath kit, which includes carpet, tile, plumbing fixtures, hardware, wood trim, light fixtures, light sources, and mirrors. [Katerra]

Researchers from the Okinawa Institute of Science and Technology Graduate University (OIST) in Japan published findings that adding a “self-healing” protective layer of epoxy resin to perovskite solar cells (PSCs) helps reduce leakage of pollutants, helping to push the technology toward commercial viability. “Although PSCs are efficient at converting sunlight into electricity at an affordable cost, the fact that they contain lead raises considerable environmental concern,” said OIST professor Yabing Qi in a press release. “While so-called ‘lead-free’ technology is worth exploring, it has not yet achieved efficiency and stability comparable to lead-based approaches. Finding ways of using lead in PSCs while keeping it from leaking into the environment, therefore, is a crucial step for commercialization.” [OIST]
Wikimedia Commons
Human-computer interaction (HCI) has been an ongoing research project for the entire short history of computing. We know its results as the ever-expanding catalog of input devices developed since the 1950s for interfacing with computers; a few successful and obvious ones are the keyboard, the mouse, the trackpad, the touchscreen, the pen, and the joystick. If most design labor today is produced with mice (and/or pens), why are there so few discussions of those instruments? In a field bombarded with debates on the digitization of design, I’ve found everyday devices to be the most fascinating, yet overlooked, subject. So in lieu of reviewing the latest touchscreen, VR controller, or AR app, I’d like to talk briefly about mice and pens.

When it comes to drawing on a computer, designers are quite comfortable with these two instruments. They are tools that embody an elegant balance of ergonomics, precision, and intuition. The mouse, with its hand-cradling design, is by far the most common. It can be manufactured cheaply and has an average of three buttons. The pen, on the other hand, is not as ubiquitous. It is often expensive due to its pressure sensors, and it requires a compatible surface. But this was not always the case. Though we typically associate the mouse with personal computing, it was the pen that paved the way for dynamic interfaces.

The computer mouse was invented at the Stanford Research Institute between 1963 and 1964, and it made its debut in 1968 at what is now referred to as “The Mother of All Demos.” This event introduced the world to an interactive screen and its possibilities: word processing, file storage, and graphics. The mouse was a central component, as it allowed the demonstrator and research director, Douglas Engelbart, to move around the two-dimensional X-Y plane of the screen seamlessly. Much of the demonstration was, of course, slow and glitchy, but the reason for its matriarchal label is simple: many of the highlighted behaviors are still in use today. We type text on word processors, navigate from window to window, and mouse movements still correspond to X-Y coordinates.
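That X-Y correspondence survives essentially unchanged: a mouse reports relative motion deltas, and the system integrates them into an absolute pointer position clamped to the screen. A minimal sketch, assuming a 1920x1080 display:

```python
WIDTH, HEIGHT = 1920, 1080  # assumed screen size in pixels

def move_pointer(pos, dx, dy):
    """Accumulate one relative motion event, clamped to the screen."""
    x = min(max(pos[0] + dx, 0), WIDTH - 1)
    y = min(max(pos[1] + dy, 0), HEIGHT - 1)
    return (x, y)

pos = (960, 540)  # start at screen center
for dx, dy in [(15, -4), (80, 22), (-2000, 0)]:  # raw deltas from the device
    pos = move_pointer(pos, dx, dy)
print(pos)  # -> (0, 558): the big leftward swipe pins the pointer to the edge
```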

Before the mouse, however, there was the pen; and before the pen there was the gun. This is largely because the pursuit of drawing on a lit screen was first taken up, unsurprisingly, by the military. Project Whirlwind, a U.S. Navy research project begun in 1945 and conducted at MIT, would gain notoriety in the history of computing for its pioneering work on computer memory and real-time processing, but it was also responsible for developing the first handheld computer-screen interfacing device: the light gun. Though much of the focus was on the design of a physical computer, the Whirlwind machine itself required a means to interact with the operator. The solution was a large, round cathode ray tube (CRT) screen with a handheld light gun (think: a precursor to Nintendo’s 1984 game Duck Hunt).

A light gun works like this: it contains a light sensor which, when pointed at a CRT, generates a signal each time the electron beam raster passes by the spot the tip of the gun is pointing at. The point is then stored in the computer’s memory and can be retrieved at any time. If a dot on the screen represents an airplane, the gun can retrieve data about that object. The gun eventually morphed into a pen, a much more benign accessory. The pen invited one to draw—rather than target—objects. This would in turn provide the framework for Ivan Sutherland to develop Sketchpad, the first CAD program, which used the pen as the core input device. After Sutherland and Engelbart, the history of mice and pens is a bit more familiar. Apple and Microsoft enter the picture and mice become household items, while pens are adopted by the professional graphics industry.
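Because the electron beam sweeps the screen left to right and top to bottom at a fixed rate, the instant the gun’s photosensor fires identifies a unique spot on the screen. Here is a minimal sketch of that timing-to-position decoding; the display parameters are invented for illustration, not Whirlwind’s actual specifications.

```python
WIDTH, HEIGHT = 640, 480        # pixels per scanline, scanlines per frame
PIXEL_TIME = 1.0                # time units to draw one pixel
LINE_TIME = WIDTH * PIXEL_TIME  # time units to draw one full scanline

def beam_position(t_detect, t_frame_start):
    """Convert the instant the photosensor fires into an (x, y) point."""
    elapsed = t_detect - t_frame_start
    y = int(elapsed // LINE_TIME)                 # completed scanlines
    x = int((elapsed % LINE_TIME) / PIXEL_TIME)   # position within the line
    return x, y

# Example: the sensor fires 123,456 time units into the frame.
print(beam_position(123_456, 0))  # -> (576, 192)
```

The decoded point is what gets stored in the computer’s memory for later retrieval, as described above.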

But this abridged story of mice and pens sheds little light on their physiological effects. These devices are as much a part of our emerging digital behaviors as the images on our screens. The sheer variety of ergonomic designs and accessories available to treat side-effects of their daily usage signals their very real imprint on our physical bodies.

Consider the photographs taken by Howard Schatz at the 2000 Olympics. Here professional athletes are placed side by side and one can easily see the effects of physiological specialization. While designers may not have an optimized body type, I know plenty of them with
Mancini Duffy
From photo-real renderings to the proliferation of architecture-oriented social media accounts, digital tech has transformed the way designers envision the world and the way the world engages with design. With today’s tech, the sky’s the limit for what an architect or interior designer can imagine. One firm in particular has realized digital tech’s revolutionary potential and has run with it, creating multiple new services that promise substantial ROIs and a more collaborative, expedited design process.

That firm is Mancini Duffy, a veteran powerhouse in the New York design scene. At a recent lunch and learn hosted at Interior Design’s New York City headquarters, Mancini Duffy principal Michael Kipfer and his team presented several digital services that are already impacting the physical world. “Over the last five years, we’ve really embraced a startup mentality in our R&D department,” explained Kipfer. “Our end-goal is to spread this tech to other firms and completely transform the way our kind of work is done in the future.”

Over the course of the hour-long lunch, Kipfer elaborated on the boundary-pushing services Mancini Design Lab has developed since it opened in June 2018. These include a 360-degree design session, aided by top-of-the-line augmented and virtual realities built on a popular video game engine. Clients are invited to participate in the design process, cutting the time it takes to get a final client sign-off from a few weeks to a single three-hour collaborative session. In this way, Kipfer said, everyone’s time is respected. The team most recently used this technology at Pier 17 for the ground-floor public spaces and restaurants.

VR is also used in Mancini Duffy’s Mancini:Tool Belt. Powered by the HTC Vive, the tool lets designers grab and move objects in a Rhino- and Revit-created space, “paint” them with different finishes, measure them, and teleport freely through the proposed project, picking up on design flaws long before they have advanced to a stage where they would be costly to fix. This tech was first developed when Boqueria’s owner Yann de Rochefort and executive chef Marc Vidal approached the firm about designing a new kitchen for their staff. Today, it’s a standard tool embraced across Mancini Duffy’s project teams.

Mancini Duffy makes use of new tools outside of simulated realities, as well. The firm recently completed a parking study for a national financial client, using drones to survey the number and flow of cars across a 157-acre site. What ordinarily would have taken a team of three humans a day to accomplish was completed by the drone in five minutes. Realizing this, the team used the drone to map the site twice an hour for two days, taking in LIDAR data and importing it into 3-D software. From this data, heat charts and flow diagrams were made available to the remote Mancini team, speeding up the process for the client and cutting down on the design team’s time wasted on travel.
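As a rough illustration of how heat charts can be derived from such survey data, the sketch below bins detected car positions into grid cells and counts occupancy per cell. The cell size and coordinates are invented, and Mancini Duffy’s actual pipeline is certainly more sophisticated.

```python
from collections import Counter

CELL = 50.0  # assumed grid cell size, in feet

def heat_map(car_positions):
    """Count cars per grid cell from (x, y) site coordinates."""
    counts = Counter()
    for x, y in car_positions:
        counts[(int(x // CELL), int(y // CELL))] += 1
    return counts

# Two scans of the site, e.g. from successive drone passes:
scan_1 = [(120.0, 340.0), (130.0, 345.0), (900.0, 60.0)]
scan_2 = [(125.0, 342.0), (905.0, 58.0), (910.0, 62.0)]

for label, scan in (("scan 1", scan_1), ("scan 2", scan_2)):
    print(label, dict(heat_map(scan)))
# Comparing counts across scans yields the flow between cells over time.
```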

So what’s next for the Design Lab? “On the whole, we foresee 2-D drawings and construction documents completely disappearing from the design process,” said Kipfer. That could be accomplished by licensing or trademarking the aforementioned services to be used by the wider architectural community. “We see a huge potential to make the design process more expedient, more collaborative, and ultimately more creative with what we’ve invented at Design Lab. It’s not about keeping it all to ourselves and outpacing the competition. It’s about creating a new competitive environment that stimulates better design and ultimately gives the end-user something better than they could have ever expected.”
Varjo
PARKED IN A Berlin platz at night, the concept car gleams, city lights dancing off its sinuous lines. I crouch down next to its hood to admire its shape, and as the paint twinkles, a word etched on the tire catches my eye. SPEEDGRIPP. Though faint, each letter looks pristine and unbroken, like the tire’s never seen a mile of road, like it’s been airlifted from some secret factory. It’s not like anything I’ve seen before—and certainly not in a VR headset, where print legibility goes to die.

Credit the resuscitation to Varjo. The first time I saw the Helsinki company’s prototype headset, nearly two years ago, it was little more than a kludge—an Oculus Rift that Varjo had rigged to project an ultrahigh-resolution microdisplay into the center of my field of view. Rift or no Rift, it was the most stunning clarity I’d ever seen. It’s better now, and it’s also a finished (and Finnish) device.
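Projecting a high-resolution inset into the center of the field of view is, in essence, foveation: pixel density is budgeted by angular distance from where the eye is looking. A toy sketch of that logic, with invented numbers rather than Varjo’s specifications:

```python
import math

FOCUS_PPD = 60           # pixels per degree in the high-res central region
PERIPHERY_PPD = 15       # pixels per degree elsewhere
FOCUS_RADIUS_DEG = 10.0  # angular radius of the high-res inset

def density_at(gaze_deg, point_deg):
    """Pixels-per-degree budget at a point, given the tracked gaze."""
    offset = math.dist(gaze_deg, point_deg)  # angular distance from gaze
    return FOCUS_PPD if offset <= FOCUS_RADIUS_DEG else PERIPHERY_PPD

print(density_at((0.0, 0.0), (3.0, 4.0)))    # -> 60: inside the inset
print(density_at((0.0, 0.0), (20.0, 15.0)))  # -> 15: periphery
```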

Varjo bills the VR-1, which goes on sale today, as “the world’s only professional VR headset with human-eye resolution.” The word “professional” is key here: While the VR-1’s mirror-polished eyebox and unprecedented visual fidelity make it feel like an artifact from the future, its $5,995 price tag makes clear that this isn’t a device for everyone. Specifically, it’s not for consumers, but for Airbus, Audi, architecture firm Foster + Partners, and dozens of other companies who participated in Varjo’s beta program over the past year. Any sticker shock pales next to the benefits they’ll see in the long run.

In the enterprise world, your VR headset isn’t for games or social experiences; it’s for work. So a company’s feature wish list is a bit different. You don’t need it to be completely untethered, because you’ll be sitting at your workstation. Instead, you want it to work with your professional design or rendering software of choice, whether that’s Autodesk VRED, Unreal, Lockheed Martin’s Prepar3D, or any of a half-dozen others. You also probably want it to have eye tracking, especially if you’re using the headset for training and simulation.

Those were things the Varjo team kept hearing as they worked with early partners, and as they grew from 12 employees to over 100 (thanks in large part to a $31 million Series B round last year). “This is something that was done with the professionals, for the professionals,” says Varjo CMO Jussi Mäkinen. “It's not a consumer product retrofit for the professional market.”

But from the moment Varjo emerged from stealth in 2017, one constant corporate chorus rose above the rest: resolution, resolution, resolution. “If you can crack that,” Varjo CTO Urho Konttori says, “you win the professionals.”
Varjo Technologies/Umbra
Could new technology that simplifies the transfer of BIM models to augmented reality push AEC firms to go all in on extended reality?

Extended reality (XR) is in a unique phase of its life cycle. The technology is readily available for anyone and everyone who thinks they can do something with it. And for better or worse, it is anyone and everyone who thinks they can do something with it.

New applications for AR and VR are more ubiquitous than superhero movies. Unfortunately, they are just as vapid. The trick with XR is to shift it from novelty to necessity, and the AEC industry has proven to be the one that offers the best opportunity to do exactly that.

The AEC industry has already done a good job of helping XR claw its way out of the novelty category, and recent developments like Umbra’s new BIM-to-AR technology are a big reason why. This innovation uses Umbra’s cloud-based technology—adapted from the company’s tools for the photorealistic video game industry—to take 3D data of any size and optimize it so that it can be delivered and rendered on mobile devices.

The technology, called Umbra Composit, can be used with common design tools such as Revit, Navisworks, and ArchiCAD to upload 3D BIM models directly to the company’s cloud platform. From there, Umbra automates the optimization process and prepares the BIM model to be shared with anyone on XR platforms.

“With a single button click, Umbra does all the heavy lifting so designers can share huge, complex models with anyone, anywhere,” says Shawn Adamek, Umbra’s chief strategy officer. “Never before have people had access to view complete, full-resolution BIM models in AR on untethered mobile devices.”

Once the model has been optimized in the cloud, users can log into their Web-based account, where they can view the model in the browser, send it to their mobile device, or share it with others.

A big part of what makes this technology so helpful to end users is that it is compatible with mobile devices like iPads and smartphones. AR-specific devices, such as the Microsoft HoloLens, are still relatively rare among even the largest architecture and construction firms. Expanding the point of entry by making common mobile devices compatible with the technology increases the number of users who can benefit from BIM-to-AR applications, while also advancing the rate at which the technology evolves and improves.
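Umbra has not published its pipeline here, but the general idea behind cloud optimization for mobile delivery can be sketched as precomputing levels of detail (LODs) and serving the densest one a given device can render. The triangle budgets and asset names below are invented for illustration, not Umbra Composit’s API.

```python
LODS = [  # (triangle_count, asset_name), densest first
    (40_000_000, "model_lod0"),  # full-resolution BIM model
    (5_000_000, "model_lod1"),
    (600_000, "model_lod2"),
    (80_000, "model_lod3"),      # lightest fallback
]

def pick_lod(device_budget):
    """Return the richest asset whose triangle count fits the device."""
    for tris, name in LODS:
        if tris <= device_budget:
            return name
    return LODS[-1][1]  # nothing fits; serve the lightest asset anyway

print(pick_lod(1_000_000))  # -> 'model_lod2' on a mid-range tablet
```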
VIATechnik
The architecture, engineering, and construction (AEC) industry is ripe for disruption, and emerging technologies are poised to usher in a new era of increased design and construction productivity, quality, and efficiency. While many members of the industry have been slow to embrace change, firms like The Lamar Johnson Collaborative (LJC) are setting a stellar example of what a forward-thinking AEC firm should look like.

LJC may be just over five months old, but as founder Lamar Johnson says: “It’s been 20 years in the making.” Johnson launched his new firm with the idea of bringing together the very best people he’s worked with over the past two decades. This people-first philosophy endowed the firm with a depth of experience and a range of capabilities that allowed it to hit the ground running and tackle large-scale, complex projects right out of the gate. Moreover, it gives the firm a unique perspective on the changes caused by recent technological advances.

Last month Anton Dy Buncio (COO, VIATechnik) and Gregg Young (Board of Advisors, VIATechnik) sat down with Johnson, Tod Desmarais (Managing Director at LJC), and Mariusz Klemens (Associate, Architect and Urban Designer at LJC) to talk innovation, tech, and the future of the AEC industry — here’s what they all had to say.

Anton Dy Buncio (ADB): These days, everyone is talking about autonomous vehicles, coworking/coliving, prefabrication, machine learning/AI…what do these technologies bring to the table, and what are the limitations?

Lamar Johnson (LJ): We built our firm around the idea of integrating technology into the architecture and design process in a holistic and authentic way. Of course, technology allows us to implement a vision and respond to issues more efficiently, but it doesn’t necessarily compel us to think differently. We still have to do that ourselves. Technology can empower us; it can supplement our thinking; it can make us more nimble; it can help us deliver our ideas in a more complete and effective manner. At the end of the day, however, it’s the energy, effort, and brain power that people put into projects that really make the difference.

When you combine that mindset with the power of cutting-edge technology, you can achieve really great things. It requires a lot of confidence — in both yourself and your technology — to raise unasked questions or suggest unexpected or innovative solutions, but we’re not afraid of presenting something unbelievable, because we know that what we’re doing works.

ADB: To that end, AEC has a reputation as a generally risk-averse industry, and yet you guys seem to be very comfortable with taking risks. Why is that?

LJ: I’d say that there are two sides to risk. In some situations, you take on much greater risk by doing nothing. Inactivity is a decision, and it can create a lot of risk in and of itself. If you fail to adapt or react to a changing environment, that’s taking the worst risk of all.

But it’s also important to note that “risk” is not a gamble. A gamble involves unknown odds; it’s taking a chance or a guess. Proper risk assessment entails a careful review of a situation, an analysis you then use to make an informed judgement. We do take risks — and so do our clients — but we thoroughly evaluate them beforehand.