Building on its launch last year of Autodesk BIM 360 Design, Autodesk announced Oct. 30 the addition of Civil 3D to the cloud solution platform. Users say the enhanced collaborative abilities with BIM 360 and Revit will streamline design of projects that include both civil and vertical components, such as airports and rail stations.

Collaboration for Civil 3D, now included with a BIM 360 Design subscription, allows subscribers of both to work collaboratively with project partners anytime and from anywhere, regardless of team locations and disciplines, says Theo Agelopoulos, senior director with Autodesk.

Customers can now collaborate using streamlined workflows on a unified platform and also perform their day-to-day data management activities in the same place, he says.

While Collaboration for Civil 3D on the BIM 360 Design platform does not yet offer the same worksharing capabilities as Revit, beta users say the ability to access, iterate on, and mark up Civil 3D models in real time in the cloud constitutes a game-changer.

Stacey Morykin, design technology manager for Pennoni, says Autodesk gathered client feedback and brainstorming ideas before developing a beta for clients to test. “We’ve been waiting for this for a really long time,” she says. “We do have some projects that have a vertical infrastructure as well as horizontal. Before, when collaborating on a project, we felt like an outsider. Now we have a chance to be an insider.”

In the past, project partners had to export Civil 3D files for Pennoni to import into its drawings. “By the time I hung up the phone, there would be another change, so I’m still behind,” says Morykin. “If the architect changes a building footprint or door location, now with this integration we can see it.”

Russ Dalton, AECOM BIM director for the Americas, says the enhanced collaboration can improve production efficiency by 32%. “We work on surveying, preconstruction, predesign, all through turnover and operations. We needed a single data source. When we looked at the total picture of delivering a product that looks the same inside the computer screen and physically, it had to come into play,” he says. Historically, there would be a delay in coordination between architect, mechanical engineering and civil design, he says. “Layouts change all the time. The HVAC and architectural teams are working at a fast clip.”

The development also improves collaboration with other programs, such as ProjectWise from Bentley, he adds. “We’re using Civil 3D on top of ProjectWise, and that had never worked well. With the new Civil 3D collaboration tool, we can add BIM 360 to the workflow, as BIM 360 and ProjectWise do collaborate well.”
Blaine Brownell explores emergent teleoperation and telerobotics technologies that could revolutionize the built environment.

Design practitioners have become familiar with an array of evolving technologies such as virtual and augmented reality (VR/AR), artificial intelligence (AI), the internet of things (IoT), building information modeling (BIM), and robotics. What we contemplate less often, however, is what happens when these technologies are combined.

Enter the world of teleoperation, which is the control of a machine or system from a physical distance. The concept of a remote-controlled machine is nothing new, but advances in AR and communication technologies are making teleoperability more sophisticated and commonplace. One ultimate goal of teleoperability is telepresence, which is commonly used to describe videoconferencing, a passive audiovisual experience. But increasingly, it also pertains to remote manipulation. Telerobotics refers specifically to the remote operation of semi-autonomous robots. These approaches all involve a human–machine interface (HMI), which consists of “hardware and software that allow user inputs to be translated as signals for machines that, in turn, provide the required result to the user,” according to Techopedia. As one might guess, advances in HMI technology represent significant potential transformations in building design and construction.

Tokyo-based company SE4 has created a telerobotics system that overcomes network lag by using AI to accelerate robotic control. Combining VR and computer vision with AI and robotics, SE4's Semantic Control system can anticipate user choices relative to the robot’s environment. “We’ve created a framework for creating physical understanding of the world around the machines,” said SE4 CEO Lochlainn Wilson in a July interview with The Robot Report. “With semantic-style understanding, a robot in the environment can use its own sensors and interpret human instructions through VR.”

Developed for construction applications, the system can anticipate potential collisions between physical objects, or between objects and the site, as well as how to move objects precisely into place (like the “snap” function in drawing software). Semantic Control can also accommodate collaborative robots, or “cobots,” to build in a coordinated fashion. “With Semantic Control, we’re making an ecosystem where robots can coordinate together,” SE4 chief technology officer Pavel Savkin said in the same article. “The human says what to do, and the robot decides how.”

Eventually, machines may be let loose to construct buildings alongside humans. Despite the significant challenges robotics manufacturers have faced in creating machines that match the mobility and agility of the human body, Waltham, Mass.–based Boston Dynamics has made tremendous advances. Its Atlas humanoid robot, made of 3D-printed components for lightness, employs a compact hydraulic system with 28 independently powered joints. It can move at up to 4.9 feet per second. Referring to Boston Dynamics’ impressive feat, Phil Rader, University of Minnesota VR research fellow, tells ARCHITECT that “the day will come when robots can move freely around and using AI will be able to discern the real world conditions and make real-time decisions.” Rader, an architectural designer who researches VR and telerobotics technologies, imagines that future job sites will likely be populated by humans as well as humanoids, one working alongside the other. The construction robots might be fully autonomous, says Rader, or “it's possible that the robot worker is just being operated by a human from a remote location.”

In his third post analyzing project delivery, Phil Bernstein discusses its tenuous nature as well as the unrealized potential of BIM.

This is the author’s third post in a series covering Autodesk project delivery workshops that explored the relationship between emergent digital collaboration technologies and the AECO sector. The six workshops were held worldwide over 18 months in 2018 and 2019.

Can a given set of data be trusted by both its creator and its users across the complex transactions that comprise the delivery of a construction project? Information reliability was a core theme that emerged throughout our project delivery workshops series. Technical, procedural, and cultural roadblocks combine to interfere with opportunities for substantial improvement in building this trust. In this article, I investigate the underlying causes of these roadblocks.

In modern design and construction, almost all information is developed on digital platforms. It is not surprising, then, that an underlying anxiety about technical problems and their root causes exists among designers, builders, and building operators. Multiple incompatible platforms for generating data in a variety of formats proliferate in the industry. Given that the building industry is one of the last enterprises to digitize, the development of these tools and their outputs seems to be moving far faster than users can adopt them—much less keep track of them and their subsequent updates. Developing “industry standard” formats for compatibility and interoperability, however, would slow necessary innovation. The Tower of Babel continues to grow accordingly.

The potential of BIM, touted since the approach reached widespread adoption in the U.S. market in the years following the global financial crisis, has hardly been realized. Everyone has a lot of interesting 3D data and accompanying metadata, but hardly anyone knows how to share the information in a meaningful, safe, and profitable way. Even when model-based data is generated in the same software tool, significant effort is required to establish the workflow protocols, sharing approaches, and levels of resolution necessary for trustable exchange. Digital deliverables derived from models are infrequent. As a result, BIM is often reduced to a sophisticated drawing management system, as drawings are well understood and present few technical challenges—their lack of detail, fidelity, and precision notwithstanding.

The real question posed here is one of chicken and egg: the generation of digital data and its proper use. As Barbara Heller, FAIA, president of Washington, D.C., firm Heller & Metzger, described in a 2008 DesignIntelligence article, buildings are delivered by an “immense aggregation of cottage industries,” where developing standard workflows, protocols, or even compatible business models is a challenge. Procedural incompatibilities at all levels are the result: Architects, builders, and facility managers have different needs and uses for data, making its coherent flow from design to operation almost impossible. This challenge is traditionally “solved” by re-representing that information in each subsequent iteration of the design-to-build process: concept drawings, construction documents, shop drawings, and then whatever hybridized or bespoke format a building owner creates to manage the resulting information flow after construction completion and the departure of the design-build team.

Further calcifying information flow is the structure of typical delivery itself, presupposed to be a strictly linear process of phases that accompany each of the deliverables described above, from schematic design through construction administration. Process loops—where insights from, say, construction logic might inform a design strategy—do not exist, so important information has no route to swim upstream against the current. While iteration of alternatives does occur within each phase “process silo,” opportunities for design strategy to inform construction or for technical insight to improve cost estimating are made almost impossible by both procedural and technical incompatibilities.

At the foundation of this tower of process-disconnects is a misalignment of management approaches. The overarching goals of a given project, establ
Building Design + Construction
With the rapid evolution of available technologies, and the integration of them into the profession, the role of an architect is changing faster than it ever has before.

The profession of architecture is one that dates back to ancient times, with a profound impact on the built environment of civilizations all over the world. The evolution of the practice has been relatively slow; while technologies and styles have evolved, the fundamentals today are not all that different from what they were historically.

However, with the rapid evolution of available technologies, and the integration of them into the profession, the role of an architect is changing faster than it ever has before. At HMC Architects, we believe that the best way to stay relevant in our changing profession is to always be considering what the future holds, and pushing ourselves and the boundaries of the profession.


Taking a building from concept to reality is a long, involved process, with each project presenting its own unique set of challenges. For the sake of discussion, the core tenets of the architectural process can be simplified as follows:
  • Interpreting client needs
  • Developing a design solution
  • Submitting a design for approval from the local building agency
  • Conveying the design solution to the contractor via construction documents
  • Verifying that the construction is true to the documents provided
There are nuances to those responsibilities, such as code compliance and environmental considerations, but the core of our business is still solution-based, with a focus on problem-solving.

Looking forward, while the tenets may more or less stay the same, there will be less of a focus on the drawing process of the construction documents, and more of a focus on innovative solutions and how they affect as well as support the users of the space.

In turn, clients are becoming more sophisticated, and are demanding a higher level of understanding of the process and, in some cases, desire to be integral to its completion. Luckily, technologies are also advancing, allowing a higher level of information to be easily conveyed.


Technology is migrating into architecture more and more every day. Speed to market has increased significantly with the industrialization of construction by companies like Katerra and DIRTT. These firms are applying logistics via Google Maps to deliver materials to the jobsite more quickly, along with the science of prefabrication to increase the efficiency of construction, which in turn brings the project to market sooner.

While this is ideal from an operational and logistical standpoint, it also means that some of the traditional aspects of architecture, specifically the drawings, are going to fade away, and the next generation of architects will have a whole new type of deliverables.

These digital outputs, such as building information modeling (BIM), assist in achieving higher-performing buildings by looking at regenerative design, renewability, life-cycle costs, and app-based maintenance programs. We also anticipate that, with the digital delivery of construction documents, they will no longer be plan checked by an individual but by program-based software; a virtual plan check of sorts. This will speed up the agency approval time, streamlining the path from design to construction while reducing the margin for human error.
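The idea of a software-based plan check can be sketched as a minimal rule engine. Everything below (element fields, rule names, thresholds) is hypothetical and invented for illustration; it is not drawn from any real code-checking product or actual code requirements:

```python
# Toy "virtual plan check": machine-readable rules applied to BIM-style
# element data. All rule names and thresholds are illustrative only.

def check_elements(elements, rules):
    """Return a list of (element_id, rule_name) violations."""
    violations = []
    for el in elements:
        for rule in rules:
            if el["type"] == rule["applies_to"] and not rule["test"](el):
                violations.append((el["id"], rule["name"]))
    return violations

# Hypothetical rules, loosely inspired by egress-style requirements
rules = [
    {"name": "min_door_width", "applies_to": "door",
     "test": lambda el: el["width_in"] >= 32},
    {"name": "max_travel_distance", "applies_to": "corridor",
     "test": lambda el: el["length_ft"] <= 250},
]

elements = [
    {"id": "D-101", "type": "door", "width_in": 30},
    {"id": "D-102", "type": "door", "width_in": 36},
    {"id": "C-201", "type": "corridor", "length_ft": 180},
]

print(check_elements(elements, rules))  # [('D-101', 'min_door_width')]
```

A real checker would need machine-readable code provisions rather than hand-written lambdas, which is part of why efforts to link codes with BIM matter.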

The focus is shifting from pure architecture to an environment that is both architectural and user-focused to enhance the occupants’ experience. Our clients are looking for ways to get the most out of their buildings with user apps and sensors that allow them to gather data to determine which spaces are truly utilized, which will drive the need to design for more or less space. Clients will also be using technology and data analytics to determine the life-cycle costs of buildings, as well as forecast occupant experiences to drive future buildings and programs.

With this heightened emphasis on technology, the role of the architect has frequently come into question. While the human component of architecture can never be replaced, many of the once-manual processes can. Architecture and its practitioners must be willing to embrace the migration towards a wholly digital design experience. Adaptability, flexibility, and early adoption of new technologies and procedures will ensure that the collaborative minds at the center of the profession remain a fundamental component of architecture.
A biweekly tour of the ever-expanding cartographic landscape.

In 2014, researchers from the University of Washington announced that pairing Google StreetView with a cluster of “smart” surveillance cameras allowed them to create “a self-organized and scalable multiple-camera tracking system that tracks humans across the cameras.”

In so many words, they showed that it was possible to build a dynamic, near real-time visualization of pedestrians and traffic flows, projected onto a 360-degree map of the world. A bit of machine-learning software helped erase any seams. This was an early proof of concept in an urban setting of a technological model now known as a “digital twin.”

“Digital twin” is a creepy-sounding phrase, conjuring visions of pixelated doppelgangers haunting your every step. It doesn’t necessarily describe an all-out surveillance state, though: In some ways, this is an extension of the 3-D computer models that architects and engineers use to help plan a building, or maneuver the inner workings of a car engine before they hit the factory.

But the big difference with what the UW researchers were doing is that they were feeding real-time, real-world data into the digital platform, enabling an exact virtual simulacrum of physical streets. What’s more, AI enabled the virtual world to respond to the projected movements in a way that made it seem more real. This technology has taken off in the years since: IBM, Microsoft, HERE Maps, and Descartes Labs are all working toward building “digital twin” technologies for different uses, including for city planning.

For local governments, the benefits could be big. Already, a number of Indian cities have adopted “digital twin” software to help manage water and energy infrastructure. In the U.K., researchers at Newcastle University built a digital twin of their city to help it better respond to flooding.

And the bylaws of the Open Mobility Foundation, a global nonprofit recently established to help cities govern the future of mobility data, state that a “digital twin” is the “only way” for cities to get control over the scooters, ride-hailing cars, and other conveyances clogging their streets. It describes how a digital replica of city streets could quickly model how, say, switching traffic signals to prioritize a speeding ambulance would affect other vehicle flows and what transportation officials would need to adjust in order to manage them.
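The kind of what-if question described above can be illustrated with a toy queue model of a single intersection. All numbers here (arrival rates, discharge rate, green splits) are invented for illustration and have no connection to any real digital-twin platform:

```python
# Toy signal model: one intersection, fixed green split between the main
# corridor and the cross street. Queues grow when arrivals exceed what the
# green time can discharge. Purely illustrative dynamics.

def simulate(green_main, minutes=10, arrivals_main=12, arrivals_cross=8,
             discharge=20):
    """Return final queue lengths (vehicles) on each approach."""
    q_main = q_cross = 0.0
    for _ in range(minutes):
        q_main = max(q_main + arrivals_main - discharge * green_main, 0)
        q_cross = max(q_cross + arrivals_cross - discharge * (1 - green_main), 0)
    return q_main, q_cross

balanced = simulate(green_main=0.5)   # normal 50/50 split
priority = simulate(green_main=0.8)   # corridor prioritized for an ambulance

print("balanced:", balanced)   # main queue grows, cross street stays clear
print("priority:", priority)   # main corridor clears, cross street backs up
```

Even this crude model shows the trade-off a transportation official would have to manage: clearing the ambulance corridor shifts congestion onto the cross street.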

On the other hand, the privacy implications of such a paradigm are pretty big. Who says a city should have that much oversight into the individual movements of every vehicle on the road? How much personally identifiable information would that require a city to absorb and own, and for how long? Players in the world of transportation technology are asking these questions now, as the public officials who head up the Open Mobility Foundation convene for their first board meeting next week. We’ll see what they have to say. (And watch for my story with more about digital twins, later this week in CityLab.)
Potential productivity benefits for architecture, engineering, and construction may depend on the outcome of copyright litigation by the International Code Council (ICC) against San Francisco-based startup UpCodes. The firm, which aims to reduce perceived bottlenecks in the implementation of the nation’s 93,000 building codes, faces charges that its public posting of codes undermines the public-private partnership that develops them.

The nonprofit ICC, which prepares the International Building Code and other model codes adopted by multiple jurisdictions, contends that UpCodes has appropriated its property and “does not need to violate ICC’s copyrights to further its claim to innovate,” an anonymous ICC spokesperson commented for this article through its public relations firm. UpCodes regards its practice as fair use, citing precedents establishing that information “incorporated by reference” into law (the applicable legal term) enters the public domain. Other appeals courts, ICC counters, have protected copyrights in cases it considers comparable.

The suit involves a tension that jurists have long recognized in copyright law: the need for material support and incentive for creators (who have exclusive rights “for limited times” under the Constitution’s copyright clause) versus the need to prevent monopoly control from stifling the circulation of ideas. The conflict pits ICC’s interests in codebook development and sales, and its assertion that its website already provides adequate access, against UpCodes’ interests in expanding access and linking codes with building information modeling (BIM) systems.
Companies like Skender, Brasfield & Gorrie and Southland Industries are leveraging lean construction practices to save time, money and resources.

With productivity rates at the bottom of the charts, the construction industry may be a little too comfortable with waste in the many forms it can take — overlapping workflows, underutilization of talent, overproduction of materials and more. Small inefficiencies that go unnoticed for a long period of time can add up and cause schedule setbacks and cost increases, some argue, a bit like the boiling frog scenario.

Proponents of lean construction say there are lessons to be learned from the way Toyota streamlined its processes by targeting eight types of waste: defects, overproduction, waiting, unused talent, transportation, inventory, motion and extra processing.

Other manufacturing firms have followed suit. Compared to construction’s 1% global annual labor-productivity growth over the past two decades, manufacturing beats the economy-wide average of 2.7% with a 3.6% rate, thanks to a lean framework paired with automation, found a McKinsey & Co. study.
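Treating those McKinsey figures as constant annual rates, the compounding gap over two decades is easy to see (a back-of-envelope illustration, not a figure from the study):

```python
# Compound the reported annual labor-productivity growth rates over 20 years.
years = 20
construction = 1.010 ** years   # 1% per year   -> ~1.22x
economy_wide = 1.027 ** years   # 2.7% per year -> ~1.70x
manufacturing = 1.036 ** years  # 3.6% per year -> ~2.03x

print(f"construction:  {construction:.2f}x")
print(f"economy-wide:  {economy_wide:.2f}x")
print(f"manufacturing: {manufacturing:.2f}x")
```

In other words, at these rates manufacturing productivity roughly doubles in the time it takes construction productivity to improve by about a fifth.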

Organizations like the Lean Construction Institute (LCI) and International Group for Lean Construction have interpreted and catered lean thinking to the AEC industry as a way to derive more value from every expenditure of resources on and off the jobsite.

'Bit by the bug'

A company’s journey to get to this point typically starts with an employee or two seeing lean principles in action on a project, witnessing efficiency gains and getting “bit by the bug,” Kristin Hill, director of education programs with LCI, told Construction Dive. They start to learn more about the system, bring those ideas to their projects and eventually take it further up the chain.

High-level buy-in is key to sealing the deal for a lean transition, she added. “Once leadership is on board, wanting to go in a lean direction and setting the vision for that happening, it starts to happen very quickly.”

Lean construction practices are often paired with tools like 5S, a method of organizing a workplace; A3 problem solving focused on continuous improvement; and the Last Planner system, a workflow of planning, making adjustments and sharing lessons learned throughout a project. The full suite of lean principles and tools can be overwhelming to construction professionals because of the change it brings to so many aspects of their work.

But trying to teach all of these things at once can put benefits like schedule reduction and cost savings on hold, according to Hill, who suggests the better approach is to “start somewhere, start today.” Anyone can start to look for and root out waste from their first exposure to lean, she said, because the journey is defined by small but continuous improvements.

As employees are introduced to lean, though, they need to be able to see the concrete ways that the principles apply and can streamline what they do on a daily basis, according to Katie Wells, Brasfield & Gorrie’s director of lean construction. “There is a lot of theory around what lean construction is and why we should embrace it,” she told Construction Dive. But to convince operations staff of its credibility, “you must be able to provide practical applications."

There may also be an opportunity to point to ways that lean might not be so different than what employees are already doing. Southland Industries, for example, talks about lean in a way that’s consistent with the firm’s core values, pointing to things like collaboration, accountability and innovation.

“We identified what was in keeping with lean thinking and also resonated with our employees,” said Jessica Kelley, an operational excellence manager at Southland, in a recent webinar. “We aim to keep it simple.”

One of these simple ideas was implemented in Southland's Mid-Atlantic plumbing shop, where workers adapted a music stand to hold instructions for pipe fabrication. That way, they aren't turning around to look at the instructions every few minutes and potentially getting confused about which way the illustrated directions are facing.
Opened late last month, Skender’s newest high-tech manufacturing facility gives the Chicago-based firm the capability to take many of its projects from concept to reality in a controlled, offsite setting.

The 130,000-square-foot plant located on the Southwest Side of Chicago uses BIM techniques, modular fabrication and lean manufacturing processes to minimize weather risks and scheduling issues while increasing quality and safety, Skender Chief Design Officer Timothy Swanson told Construction Dive.

“What we’re seeing now with the opening of our new facility is a full alignment of our design, engineering, manufacturing and construction abilities,” he said, adding that the factory represents a milestone for the firm. “We’re jumping from site-based construction with this leap into manufacturing,” Swanson said.

Using an assembly line system, skilled employees will build and assemble 95% of a project's modular components, including fixtures, finishes and most appliances. The modules are then shipped to the jobsite, where they are set in place and finished by Skender construction teams.

Expected to be at full capacity in about 18 months, the plant will employ 150 people, who will be eligible to be bargaining members of the local carpenters’ union.

The vertically integrated design, construction and manufacturing firm's first modular order is for 10 affordable-rate, three-flat apartment buildings for Chicago developer Sterling Bay. Based on a common architectural type in the city, each three-flat consists of 12 modules, totaling approximately 3,750 square feet per building, with three two-bedroom, one-bathroom units and modern, market-rate finishes, Swanson said.

The steel-frame units will be completed and ready for occupancy in the city's 27th Ward in a nine-week production schedule – 80% faster than conventional construction methods – and at a 5% to 20% lower project cost, depending on the delivery method, according to the firm.
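Taking the firm's figures at face value, the implied conventional schedule can be back-calculated. This is one reading of "80% faster" (modular takes 20% of the conventional duration) applied to a hypothetical budget, not Skender's own arithmetic:

```python
# "80% faster" read as: modular needs only 20% of the conventional duration.
modular_weeks = 9
conventional_weeks = modular_weeks / (1 - 0.80)
print(round(conventional_weeks), "weeks implied for a conventional build")  # 45

# The stated 5%-20% savings applied to a hypothetical $10M conventional budget.
budget = 10_000_000
print(f"${budget * 0.05:,.0f} to ${budget * 0.20:,.0f} saved")
```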

Ideal for hotels and healthcare facilities

In addition, Sterling Bay and Skender plan to start a seven-story, 83-unit modular apartment building in Chicago during the first quarter of 2020. While Skender officials aren't ready to publicly announce other modular clients, the firm is in the design phase with other developers and end-users for a dozen other projects, including healthcare, hospitality and mid-rise market-rate multifamily units, according to Todd Andrlik, Skender vice president of marketing. Swanson said one of these projects is a 30,000-square-foot outpatient facility the company is building in a former retail store.

The modular process lends itself to structures like hotels and healthcare facilities that have easily duplicated floor plans with similar fixtures and fittings, Swanson added. “Where we’re most enthusiastic about the new facility is for the types of projects that have the expectation of high quality but with work that can be repeated,” he said.

Even though modular building is becoming more prevalent in U.S. construction, with companies like Marriott and Prefab Logic investing in it, Swanson said there are still many misconceptions about offsite construction.

“In North America, assumptions around modular and manufactured products are that they are of lesser quality,” he said. “We are making the only designed object in the world that carries that kind of baggage — it’s not like people say they only buy their cars from companies that build them in a garage. That logic is not sound."

To help dispel some of these myths, Skender unveiled a 650-square-foot luxury condo prototype last fall. “Visitors were surprised by how a modular building could be so high end,” Swanson recalled. “For example, they said they didn’t know a modular unit could have stainless steel appliances."

The design profession, in its many guises, is resolutely optimistic. For a designer, no challenge is so large that he or she can’t develop a solution that will both overcome it and enhance the human experience.

Yet, given the recent onslaught of disheartening news regarding climate change, maintaining such optimism becomes something of a daily test. First, in August of last year, there was the Proceedings of the National Academy of Sciences article titled “Trajectories of the Earth System in the Anthropocene.” Penned by 16 climate scientists, the article warns that we’re much closer than previously thought to locking in the “hothouse” trajectory—i.e., a warming of 4 or 5 degrees Celsius—which poses “serious challenges for the viability of human societies.” That was followed in October by the much-publicized United Nations Intergovernmental Panel on Climate Change report stating that at our current rate of warming we could potentially be just 12 years away from hitting the tipping point—1.5 degrees Celsius above pre-industrial levels—that would trigger the most horrific aspects of climate change. Now, thanks to a January 8, 2019 New York Times article titled “U.S. Carbon Emissions Surged in 2018 Even as Coal Plants Closed,” we can add to the litany of bad news this fact: “America’s carbon dioxide emissions rose by 3.4 percent in 2018, the biggest increase in eight years.”

It would now seem that the alchemy required to turn our dire situation into a golden outcome has grown substantially more complicated. Yet the big leaps on a number of fronts regarding climate change enable us to maintain at least some optimism.

For example, as reported in a December 18, 2018 Forbes article titled “6 Renewable Energy Trends to Watch In 2019,” more than 100 cities across the globe get at least 70 percent of their energy from renewables, and more than 40 operate on 100 percent renewable electricity. Scores more cities are working toward similar goals. At the building scale, technological and legislative developments have made on-site electrical generation easier and cleaner, not to mention more efficient and affordable.

Furthermore, cities are slowly shifting their views on their relationship to nature and choosing to see themselves as part of a larger ecological system rather than as separate from—and, in some instances, bulwarks against—the natural world. This has resulted in forays into biophilic design in places such as Oslo, Portland, and, in particular, Singapore.

As more cities shoulder the responsibility of addressing climate change, architects, designers, and urban planners will have an abundance of opportunities to work alongside them in tackling the unprecedented global challenge that we now face. And the array of actionable measures that our industry can take runs the gamut from common-sense design that reduces humanity’s environmental impact to the adoption of the most cutting-edge tools, materials, and processes that are currently being brought to market.

For an example of the former, look no further than the return to classic urban planning principles that we’ve seen in recent years as a means of lessening our collective carbon footprint. Factors such as walkability and mixed uses, combined with a focus on transit-oriented design, make a car-free lifestyle not only attainable but also desirable: a 2016 study by real estate website redfin.com found that for every one-point increase in a home’s walk score (when that home is compared to similar properties in less-walkable neighborhoods), there is a corresponding increase in home price by nearly one percent. Clearly, there is a demand for mobility options beyond just the automobile.
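The Redfin finding translates into simple arithmetic. The numbers below are hypothetical, and the study's "nearly one percent" is rounded to exactly 1% per point:

```python
# Compound a ~1% price premium per walk-score point across a score difference.
def price_at_walk_score(base_price, base_score, new_score, pct_per_point=0.01):
    return base_price * (1 + pct_per_point) ** (new_score - base_score)

# Hypothetical $400,000 home at walk score 60, compared at walk score 75:
print(round(price_at_walk_score(400_000, 60, 75)))  # ~464,000, a ~16% premium
```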

At the building scale, there are design processes that we can explore to create components that dramatically reduce energy consumption. It’s a well-established fact that forty percent of the energy produced in the United States is consumed in residential and commercial buildings. A significant component of a building that heavily influences energy consumption and is under direct control of architects is its façade. Increasingly, we see a need for façades that are capable of adjusting to moment-to-moment shifts in the natural environment.

One of the challenges in creating high-performance façades lies in utilizing an alternative-rich design process that is affordable yet easy enough to allow designers of all abilities to use it. That’s why our firm, Gensler, initiated a research effort focused on creating a simulation tool that enables the efficient design of more responsive and energy-e
Using the Prescient design/build light-gauge steel structural system, a Florida-based developer is building a 336-unit apartment tower in downtown Atlanta that will rise to a height of 12 steel-framed stories above a five-level concrete parking structure. The system allowed an extra four floors of height over competing structural systems, says Nathan Kaplan, partner at Atlanta-based Kaplan Residential.

The project, Generation Atlanta, achieved a density of 217 units per acre (536 units per ha) using a technology that will reduce total development costs enough to make the downtown project feasible, enabling developers to comply with Atlanta’s initiative to provide more affordable housing and competitive market-rate high-rise units. Fifteen percent of the project’s units are intended to be affordable to residents with incomes at 80 percent of area median income (AMI). Completion is expected in 2020.

The Atlanta site is 1.6 acres (0.6 ha) with a considerable slope across its nearly 400-foot (122 m) length. That meant five levels of parking, containing 380 spaces for vehicles and 51 for bicycles, could most efficiently be placed in a concrete parking structure under the light-gauge steel structure, with only four parking levels exposed above grade being counted toward the height limit.

The lighter weight of the metal structure decreased the engineered load requirements versus traditional steel or concrete construction. Architect Ray Kimsey, president of Atlanta-based project designer Niles Bolton, says foundation costs were reduced about 15 percent due to a reduction in the number of pilings required by the significantly lighter structural loading.

Market Conditions

Kaplan notes that the site is attractive for housing. “We felt the downtown submarket was very job heavy and lacking apartments,” he says. “There are some 140,000 jobs in the downtown [central business district] and less than 2,000 new apartments delivered in the last 10 years, predominantly in higher-priced concrete buildings.”

Generation Atlanta units range in size from 459 to 1,512 square feet (43 to 140 sq m) with an average size of 832 square feet (77 sq m). The unit mix is about 25 percent studios at rents from $1,400 to $1,550 per month, 50 percent one-bedroom units at $1,650 to $1,850 per month, and about 25 percent two-bedroom units at $1,900 to $2,300 per month.

To meet the city’s affordability goals, Generation Atlanta will offer 15 percent of its units at rents from $1,300 to $1,500, depending on unit type, targeted to those with incomes at 80 percent of AMI. “We strongly believe the combination of a good land price and a cost-effective design will allow Generation Atlanta to be more competitive,” Kaplan says.

Though the site is at the Spring Street ramp of Interstate 85, it rates high on three 100-point scales—an 88 Walk Score, a 76 Transit Score, and a 75 Bike Score. Its 380 parking spaces supply a 1.13:1 parking ratio for its 336 units. “The location is very walkable to many job centers and also very close to Atlanta’s public transportation system,” Kaplan says, referring to the Metropolitan Atlanta Rapid Transit Authority (MARTA).
In the short history of computing, an ongoing research project is human-computer interaction (HCI). We know the results of this research as the ever-expanding catalog of input devices developed since the 1950s for interfacing with computers. A few successful and obvious ones are: the keyboard, the mouse, the trackpad, the touchscreen, the pen, and the joystick. If most design labor today is produced with mice (and/or pens), why are there so few discussions of those instruments? In a field bombarded with debates on the digitization of design, I’ve found everyday devices to be the most fascinating, yet overlooked, subject. So in lieu of reviewing the latest touchscreen, VR controller, or AR app, I’d like to talk briefly about mice and pens.

When it comes to drawing on a computer, designers are quite comfortable with these two instruments. They are tools that embody an elegant balance of ergonomics, precision, and intuition. The mouse, with its hand-cradling design, is by far the most common. It can be manufactured cheaply and has an average of three buttons. The pen, on the other hand, is not as ubiquitous. It is often expensive due to its pressure sensors, and it requires a compatible surface. But this was not always the case. Though we typically associate the mouse with personal computing, it was the pen that paved the way for dynamic interfaces.

The computer mouse was invented at the Stanford Research Institute between 1963 and 1964, and it debuted in 1968 at what is now referred to as “The Mother of All Demos.” This event introduced the world to an interactive screen and its possibilities: word processing, file storage, and graphics. The mouse was a central component, as it allowed the demonstrator and research director, Douglas Engelbart, to move around the 2-dimensional, X-Y plane of the screen seamlessly. Most of the demonstration was, of course, slow and glitchy, but the reason for its matriarchal label is simple: many of the highlighted behaviors are still in use today. We type text on word processors, navigate from window to window, and mouse movements still correspond to X-Y coordinates.

Before the mouse, however, there was the pen; and before the pen there was the gun. This is largely because the pursuit of drawing on a lit screen was first taken up, unsurprisingly, by the military. Project Whirlwind, a 1945 Department of Defense research project conducted at MIT, would gain notoriety in the history of computing for its pioneering work on computer memory and real-time processing, but it was also responsible for developing the first handheld computer-screen interfacing device: the light gun. Though much of the focus was on the design of a physical computer, the Whirlwind machine itself required a means for the operator to interact with it. The solution was a large, round cathode ray tube (CRT) screen with a handheld light gun (think: a precursor to Nintendo’s 1984 game Duck Hunt).

A light gun works like this: it contains a light sensor which, when pointed at a CRT, generates a signal each time the electron beam raster passes by the spot the tip of the gun is pointing at. The point is then stored in the computer’s memory and can be retrieved at any time. If a dot on the screen represents an airplane, the gun can retrieve data about that object. The gun eventually morphed into a pen, a much more benign accessory. The pen invited one to draw—rather than target—objects. This would in turn provide the framework for Ivan Sutherland to develop Sketchpad, the first CAD program, which used the pen as the core input device. After Sutherland and Engelbart, the history of mice and pens is a bit more familiar. Apple and Microsoft enter the picture and mice become household items, while pens are adopted by the professional graphics industry.
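The timing trick the light gun relies on can be sketched in a few lines of code: because a CRT’s electron beam sweeps the screen in a fixed raster pattern, the instant the gun’s sensor fires pins down where on the screen the gun was pointing. The raster parameters below are illustrative assumptions, not Whirlwind’s actual specifications.

```python
import math

# Assumed raster timing (roughly NTSC-like, for illustration only).
LINE_TIME_US = 63.6      # time for the beam to scan one horizontal line
LINES = 262              # scan lines per frame
ACTIVE_WIDTH = 256       # assumed visible pixels per line
ACTIVE_LINE_US = 52.6    # assumed visible portion of each line, in microseconds

def beam_position(us_since_vsync: float) -> tuple[int, int]:
    """Map the time at which the sensor fired (measured from the start
    of the frame) back to the (x, y) pixel the beam was drawing."""
    y = int(us_since_vsync // LINE_TIME_US)          # whole lines elapsed
    x_us = us_since_vsync % LINE_TIME_US             # time into current line
    x = int(x_us / ACTIVE_LINE_US * ACTIVE_WIDTH)    # fraction of visible line
    return min(x, ACTIVE_WIDTH - 1), min(y, LINES - 1)

print(beam_position(0.0))    # sensor fires at frame start -> (0, 0)
```

In other words, the gun never reports a position at all; the computer infers one from when the pulse arrives, which is why the sensed point can then be stored and matched against objects in memory.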

But this abridged story of mice and pens sheds little light on their physiological effects. These devices are as much a part of our emerging digital behaviors as the images on our screens. The sheer variety of ergonomic designs and accessories available to treat side-effects of their daily usage signals their very real imprint on our physical bodies.

Consider the photographs taken by Howard Schatz at the 2000 Olympics. Here professional athletes are placed side by side and one can easily see the effects of physiological specialization. While designers may not have an optimized body type, I know plenty of them with
Paul Seletsky, AIA, an independent Digital Design consultant who was one of the pioneers in the application of AEC technology in architectural practice, shares his experiences and insights in this Profile.

"...it's human nature to want to choose the winning horse when selecting tools as critical as BIM, but without competition, new software that could truly impact our practice simply won't see the light of day."

What is your educational and professional background?
I graduated with a B.Arch. from Cooper Union in 1982 and then went to Italy, working for two years as a designer for Vittorio Gregotti Associati in Milan. Our documentation back then was done in pencil and ink.

Returning to New York in 1984, I joined Emery Roth & Sons to learn construction drawings. Documentation was produced in ink on mylar, with notational errors corrected using a chemical eradicator. Roth had one of the first CAD systems, McDonnell Douglas GDS, the precursor to Revit. In my role there, however, I did not get to work with it.

In 1987, I joined the Port Authority of NY/NJ, spending ten years in the public sector. In 1990, I did receive the opportunity to work in CAD and learned Architrion, MicroStation, and AutoCAD, eventually being named their first CAD Manager. I also began setting up PCs and Macs, Windows and CAD installations, a small network, and a pen plotter—learning "in the trenches." After seven years there, I returned to the private sector.

HLW Architects hired me in 1997 as their IT director. I built a staff of eight people and installed a network of servers and routers across four offices globally. I gradually came to regard this work as too laborious and costly, and falling outside the core competency of an architecture firm. In 1998, Revit came out and they demonstrated their software to us. BIM had arrived and I immediately saw it as a game-changer.

In early 2001, I joined my cousin's startup company to develop an early iteration of the smartphone. Nine months later, I was hired as Director of Technology by Davis Brody Bond Architects. There, I decided to outsource IT, leasing a new phone system, all printers and plotters, and eventually MS Office (but not AutoCAD). My colleagues gently teased me as "Mr. Outsource." In 2003, we began using BIM, trying ArchiCAD on one project, followed by another in what was now Autodesk Revit.

I began to write and lecture about BIM, describing what I foresaw as its impact on architecture and construction. In late 2001, I became chair of the AIANY Technology Committee, and for the next 14 years curated a monthly lecture series about AEC Technology's impact on practice, culminating in a symposium held in October 2012 called Bits+Mortar. The event featured a two-hour conversation between Frank Gehry and Nicholas Negroponte, founder of the MIT Media Lab.

In 2005, SOM New York sought to fill a new position, Digital Design Director, and hired me. A senior partner and I created a new department called the Digital Design Group, recruiting 25 architects as AEC technology gurus and BIM mentors. Over the next five years, we created two student research programs, tested environmental analysis software, created our own massing study tool, and held in-house lectures with AEC Tech luminaries. It was an exciting time.

In late 2010, I journeyed outside New York to work for KieranTimberlake in Philadelphia, then spent a few years selling online AEC software and BIM training. In 2017, I happily moved back to New York. I spent 2018 focused inward, exploring what I wanted the last thirty years of my career to look like, since I don't intend to ever retire.

What is your current role? What are the main projects you are involved with?

I'm currently an independent Digital Design consultant in New York, seeking new clients. I greatly enjoy the environment and interaction of working in an architecture office, so if anyone out there is interested, feel free to contact me at pseletsky@gmail.com.

When and how did you get interested in AEC technology?

In 1983, I was sitting at my desk in Milan, drawing the seating floor plan for a redesign of Barcelona's Olympic Stadium. I was using a beam compass that must have been at least 3 feet long. It was at that moment that I said to myself, "Someday I'm going to be doing this on a computer so I can focus more of my time on design versus the mechanics."

How much of what you do today is related to AEC technology in some form?

Ninety-five percent analyzing client needs and deploying solutions, and five percent lamenting BIM software churning
Long ago, project roles were well defined. Architects and engineers designed, contractors built, and owners wrote the checks. Over many decades, however, the lines defining those roles have blurred, with contractors increasingly handling or overseeing elements of design.

And it’s a trend that’s accelerating. More than 43% of contractors are gearing up to perform design work in-house, a 5% increase from 2018, with another 25% considering such a move in the near future, according to the “2019 AGC/FMI Risk Management Survey” from Associated General Contractors of America and FMI Corp.

While closely integrated design and construction organizations have long been routine in certain sectors, energy and industrial process work among them, the findings of the AGC/FMI survey reflect an environment increasingly influenced by time, technology and, of course, money. Compressed schedules are now the norm, say many industry leaders, in part because of more design-build. Off-site prefabrication also is a factor, with both GCs and specialty contractors taking advantage of production efficiencies to save on manpower and materials.

Adapting to this restructured landscape has not been universally smooth. Participants in the last two AGC/FMI risk surveys have complained of incomplete design documents, inadequate risk allocation in design-build, insurance and liability concerns, and issues coordinating with design teams.

Individual contractors’ approaches for expanding in-house design capabilities run the gamut from adding a handful of specialists as liaisons to developing full in-house design services, either organically or via acquisition (see graph 1). For example, Joseph Poliafico, Flatiron Construction’s vice president of safety and insurance, says his company has made a concerted effort to promote earlier and better communication with its design partners by using in-house staff to “look over their shoulder” as a design evolves. “It’s the reality of the market now,” he adds.

At the other end of the spectrum, Kiewit is among the major firms that have significantly expanded in-house design capabilities, as the contractor seeks to apply the integrated design practices of its power business to other markets. Dan Lumma, president of Kiewit’s engineering group, says broader in-house engineering capabilities allow the company greater flexibility in adapting to specific problems in projects, especially those with complex requirements.

“We know that it’s impractical for us to do 100% of design and construction for every project in every market,” Lumma says. “In some, we will do it all and sign off as engineer-of-record. In others, we will work with design partners who understand and share our integrated approach.”

Key questions emerge: Is the need for in-house design indicative of growing pains that accompany any industry-wide evolution? Or, are there more deep-seated problems to be addressed? The answer, as many industry leaders point out, depends on the source.

Incomplete Design Documents

Adding in-house design capabilities to augment and finalize design documents might seem, at first glance, a logical move, especially to support best practices in design-build. After all, initial designs are expected to evolve gradually through collaboration among the project team.

Twelve months ago, however, 92% of participants in AGC/FMI’s 2018 risk study reported that design documents were less complete than in the past.

Their concerns are not limited to specific project delivery systems.

“We’re seeing this across the board,” says AGC General Counsel Michael Kennedy. “Contractors are having to connect more dots on their own to keep projects moving.”
Aside from equipment innovations, the building industry has remained largely unchanged for the last 100 years. That began to change about 10 years ago with the advent of building information modeling (BIM) software, said Daniel Reeves, president of the San Diego–based community and government affairs consultancy Juniper Strategic Advisory, who served as moderator of a ULI San Diego/Tijuana event in March.

As in other business sectors, innovative technology is having a disruptive impact on building construction, operations, and management, according to event presenters, who discussed new technology used to cut time for project due diligence; make cost estimates more accurate and construction more precise; improve building operations and efficiency; and enhance tenant engagement, comfort, and satisfaction.


San Diego–based Scoutred offers software that simplifies and speeds up early-stage due diligence for real estate developers and their associates, reducing time for research from days to a few minutes, said founder Alexander Rolek. He explained that Scoutred organizes property information on millions of parcels in San Diego County and visualizes the data in a report designed to help parties make informed decisions.

Users simply enter the parcel address and receive an immediate report that details property and zoning information, including subdivision name, parcel size, legal description, the owner’s name and address, tax assessment, map location, use type, building height limits, floor/area ratio (FAR), and setbacks. The report can be exported as a PDF, which also includes the following: a high-resolution aerial photo; zoning and other applicable overlays, such as parking and mass transit; a description of the community plan; details on the property’s attributes and any improvements; and all permits pulled on the property on record with the city.
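As a rough illustration of the kind of structured record such a due-diligence report boils down to, here is a minimal sketch. The field names, sample values, and the FAR calculation are invented for illustration; they are not Scoutred’s actual schema or API.

```python
from dataclasses import dataclass

@dataclass
class ParcelReport:
    """Hypothetical parcel record of the kind a due-diligence tool returns."""
    address: str
    apn: str               # assessor's parcel number (illustrative)
    zone: str              # zoning designation
    lot_sqft: int          # parcel size
    max_far: float         # floor/area ratio cap
    height_limit_ft: int   # building height limit

    def max_buildable_sqft(self) -> int:
        """FAR times lot area gives the maximum allowed floor area."""
        return int(self.lot_sqft * self.max_far)

# A made-up record for a hypothetical downtown parcel.
report = ParcelReport("123 Example St, San Diego, CA", "987-654-32",
                      "CC-3-6", 10_000, 3.0, 100)
print(report.max_buildable_sqft())  # 3.0 FAR x 10,000 sq ft -> 30000
```

The value of such tools is less in any single field than in assembling all of these records automatically, replacing days of manual lookups across county and city databases.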


Based in Sydney, Australia, Willow, a global software developer, is focused on creating easy-to-use systems that facilitate smart building construction, optimize building performance, enhance user experience, and open new streams of revenue by turning data into value.

The company has partnered with Microsoft to create Willow Twin, a scalable platform that leverages the power of the internet of things (IoT) and artificial intelligence to create 2-D or 3-D digital, geometrically accurate replicas of real estate assets that contain all asset information and live operational data.

“Data is the new gold,” said Casey Mahon, digital coordinator, Willow North America, explaining that the program collects building data and uses them to transform a structure into a living, evolving asset that learns from experience. The program harnesses building data, tracking user behavior and building performance to improve the tenant experience and drive savings through actionable insights and predictive maintenance.
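The underlying idea can be sketched as a record that pairs static asset metadata with live telemetry and a simple maintenance rule. Everything below—names, thresholds, and readings—is an illustrative assumption, not Willow Twin’s actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class AssetTwin:
    """Toy digital-twin record: static metadata plus live sensor readings."""
    asset_id: str
    asset_type: str
    location: str                          # e.g., "Level 12 / AHU room"
    readings: list[float] = field(default_factory=list)

    def ingest(self, value: float) -> None:
        """Append a live sensor reading (e.g., fan vibration in mm/s)."""
        self.readings.append(value)

    def needs_maintenance(self, threshold: float = 4.5, window: int = 3) -> bool:
        """Flag the asset when the most recent readings all exceed a threshold,
        a crude stand-in for real predictive-maintenance analytics."""
        recent = self.readings[-window:]
        return len(recent) == window and all(r > threshold for r in recent)

ahu = AssetTwin("AHU-12-01", "air_handler", "Level 12 / AHU room")
for v in (3.9, 4.7, 4.8, 5.1):   # vibration trending upward
    ahu.ingest(v)
print(ahu.needs_maintenance())    # True: last three readings exceed 4.5
```

Real platforms replace the threshold rule with learned models, but the shape of the data—an asset identity bound to a stream of operational measurements—is the essence of the digital-twin concept.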

Five years ago, Willow partnered with Investa, one of Australia’s largest developers, owners, and managers of commercial real estate, to develop Willow Twin 2.0, an intelligent digital twin that integrates 3-D visualization with data to allow a building to learn to operate itself efficiently.

Over time, the system learns to effectively manage energy and other resources used by assets: using data analytics and intuitive reporting, it improves assets’ triple bottom line by increasing their cash value while reducing their impact on the environment.

Since then, Willow and Investa have used Willow Digital and Willow Twin to innovate multiple areas of building development and operations, ranging from complex digital design and construction management to use of intelligent digital twins to manage buildings efficiently and enhance the tenant experience.

Mahon noted that Willow Digital 2.0 identifies which assets to track long-term, and lessons learned can be applied across an entire portfolio using Willow Scan, a QR code–driven solution designed to identify and manage all assets in a portfolio.

It also provides a completion tracker and model auditor that validates subcontractor data, including operations and maintenance manuals, asset registers, and warranty information, and gathers and stores operational manuals for the building or infrastructure network, which Mahon stresses is especially important when handing off building management to a n
The first exhibition of Seoul’s Robot Museum will be the robots building the museum itself.

Seoul wants to have the world’s very first museum dedicated to robotic science. And the city authorities have decided on the best possible way to build it: use robots, of course.

The museum, designed by the Turkish architectural firm Melike Altınışık, is intended to be one of the most recognizable buildings in the Changbai New Economic Center, a newly redeveloped area in the northern part of the city.

Its organic form, a semi-sphere that seems to flow in waves to reveal a glass and steel base, will be built by robots. According to the firm’s design principal, Melike Altınışık, the building has been conceived as a temple to robotic innovation, so the best way to materialize that ethos was to use robotic arms to assemble the new space.

First, a team of robots will mold the curved metal plates that form the museum sphere, guided by a 3D building information modeling system (essentially a CAD system that works with solid objects in real 3D space rather than representing them with 2D plans). Robots will then assemble the plates, welding and polishing the metal to obtain its final surface appearance.

Then another team of robots, the architectural firm says, will 3D-print concrete to build the public area surrounding the museum.

This process will start in early 2020, with the museum opening its doors about two years after that.

My only question is: Are they using robots to build the robot builders, and, if so, who will build the robots that build those robots and would this infinity loop cause a tear in the space-time continuum that will suck the entire museum into a black hole?
Bamboo Sports Hall for Panyaden International School / Chiangmai Life Construction. Image © Alberto Cosi
It is, once again, the time of year when we look toward the future to define the goals and approaches that we will take in our careers throughout the upcoming year. To help the millions of architects who visit ArchDaily every day from all over the world, we compiled a list of the most popular ideas of 2018, which will continue to be developed and consolidated throughout 2019.

Over 130 million users discovered new references, materials, and tools in 2018 alone, infusing their practice of architecture with the means to improve the quality of life in our cities and built spaces. As users demonstrated affinities for, or greater interest in, particular topics, those topics emerged as trends.

Below, we present the trends that will influence urban and architectural discussions in 2019, with year-over-year (YoY) growth rates comparing search statistics from 2017 to 2018.

1. Ways of Living: Greater Interest in Small Scale Homes

The Tiny Houses (+75% YoY) concept emerged strongly at the beginning of 2018. Whether it is a movement in response to ideological or financial situations, architects have become more involved in the development of practical and innovative solutions for small spaces. There is also growing interest in living in dense urban centers, leading to the challenge of designing basic housing programs for spaces under 40 m². (Searches related to Small Apartments increased by 121% in 2018.)

2. Inclusive Architecture: First-Rate Design for Diverse Populations

Accessibility (+108% YoY), Universal Design (+116%) and Inclusive Architecture (+132%) were some of the most searched concepts on ArchDaily in 2018. In previous years the focus was mostly on architecture for children and reduced mobility, whereas this year we saw more searches related to Architecture for the Elderly (+78% YoY) and different capacities related to mental health (Architecture & Mental Health +101% YoY; Space Psychology +210% YoY) and visual impairments (Architecture for the Blind +250% YoY).

3. The Middle-East: Underrepresented Territories in Evidence

Just as we saw increasing interest in emerging practices in Latin America (+103.82% YoY) in the last two years, in 2018 we also saw an increase in searches related to the Middle East (+124% YoY). The conflict in Syria (+93% YoY) placed architects’ focus on Rebuilding (+102% YoY). In addition, global events piqued the interest of architects due to the magnitude of the structures involved. Both the city of Dubai (+104% YoY), which will host World Expo 2020, and Qatar (+220% YoY), which will host the 2022 soccer World Cup, increased considerably in search queries. Hashim Sarkis (+236% YoY), the Lebanese architect who was appointed curator of the Architecture Exhibition for the next Venice Biennial (2020), was one of the most searched persons during 2018.
We sat down with technology thinker, practice educator and architect Phil Bernstein to talk about technology and the future of design.

DesignIntelligence: Leaders of firms, chief technology officers, and designers seem to be looking for the technology that will follow BIM. What do you think comes next?

Phil Bernstein: First, let’s contextualize BIM. BIM is a set of knowledge structures that will empower new uses of technology in designing, making, and using buildings.

Where CAD mechanized the means of representation, BIM creates a formal knowledge structure that can organize the enormous array of digital data piles and processes that are becoming part of building industry practices.

What comes next is the digitization of a lot of processes, which means two things. First, there will be new ways of organizing and integrating information so it can be leveraged and interconnected. Right now we have piles of unrelated digital data. We need strategies for integrating it.

Second, now that we have all this data, what do we do with it? A wave of rationality will create a different context for design, because the ability to use this information will change the designer’s obligations. Plus, the ability to collect information allows you to learn from how the building was constructed, how it’s being used, and how that will inform the design going forward. There will be a shift from relying entirely on judgment and intuition to rationality.

DI: So what comes next is more complicated than BIM 2.0.

PB: There isn’t going to be a BIM 2.0 in my opinion. Application-centric work is going away. Everybody’s using 30 or 40 different applications. New technologies will be more about putting the project in the center and less about what platform you’re focused on.

DI: It sounds like the focus will shift not only to how design is accomplished but also the designer’s role in it.

PB: Yes. Designers will have a lot more information. What does that information mean for your strategy as a designer and your value proposition as a practice? If your firm’s doing healthcare work and your clients are leveraging information from their electronic medical records systems to correlate actions and outcomes, the architect must transfer those expectations onto the design process. The old “I’ve done 17 hospitals so trust me” model isn’t going to work anymore.
Could new technology that simplifies the transfer of BIM models to augmented reality push AEC firms to go all in on extended reality?

Extended reality (XR) is in a unique phase of its life cycle. The technology is readily available for anyone and everyone who thinks they can do something with it. And for better or worse, it is anyone and everyone who thinks they can do something with it.

New applications for AR and VR are more ubiquitous than superhero movies. Unfortunately, they are just as vapid. The trick with XR is to shift it from novelty to necessity, and the AEC industry has proven to be the one that offers the best opportunity to do exactly that.

“With a single button click, Umbra does all the heavy lifting so designers can share huge, complex models with anyone, anywhere,” says Shawn Adamek, Umbra’s Chief Strategy Officer. “Never before have people had access to view complete, full-resolution BIM models in AR on untethered mobile devices.”

Once the model has been optimized in the cloud, users can log into their Web-based account, where they can view the model in the browser, send it to their mobile device, or share it with others.

A big part of what makes this technology so helpful to end users is the fact that it is compatible with mobile devices like iPads and smartphones. AR-specific devices, such as the Microsoft HoloLens, are still relatively rare among even the largest architecture and construction firms. Expanding the point of entry by making common mobile devices compatible with the technology increases the number of users who can benefit from BIM-to-AR applications, while also advancing the rate at which the technology evolves and improves.

The AEC industry has already done a good job at helping XR claw its way out of the novelty category. Recent developments like Umbra’s new BIM-to-AR technology are a big reason why. This innovation uses Umbra’s cloud-based technology—adapted from the company’s tools for photorealistic video games—to take 3D data of any size and optimize it so that it can be delivered and rendered on mobile devices.

The technology, called Umbra Composit, can be used with common design tools such as Revit, Navisworks, and ArchiCAD to upload 3D BIM models directly to the company’s cloud platform. From there, Umbra automates the process of optimization and prepares the BIM model to be shared with anyone on XR platforms.
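The general principle behind delivering huge models to modest hardware is to stream only as much detail as the screen can actually show—for instance, by picking the coarsest level of detail (LOD) whose geometric error projects to less than about a pixel on screen. The sketch below illustrates that generic technique; it is not Umbra Composit’s actual algorithm, and every parameter is an assumption.

```python
import math

def screen_space_error(geom_error_m: float, distance_m: float,
                       fov_deg: float = 60.0, screen_px: int = 2048) -> float:
    """Project a model-space error bound (meters) to its on-screen size
    in pixels, for a viewer at the given distance."""
    view_width_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return geom_error_m / view_width_m * screen_px

def pick_lod(lod_errors_m: list[float], distance_m: float,
             budget_px: float = 1.0) -> int:
    """Return the index of the coarsest LOD whose projected error fits the
    pixel budget. lod_errors_m is ordered coarse-to-fine (decreasing error)."""
    for i, err in enumerate(lod_errors_m):
        if screen_space_error(err, distance_m) <= budget_px:
            return i
    return len(lod_errors_m) - 1  # fall back to the finest level available

# Assumed per-LOD error bounds (meters) for a large BIM model, coarse to fine.
lods = [0.50, 0.10, 0.02, 0.005]
print(pick_lod(lods, distance_m=100.0))  # -> 2: mid-detail level suffices
```

From 100 meters away a 2 cm error is invisible, so the viewer can skip downloading the two finest levels entirely; this distance-driven pruning is what makes full-resolution models practical on untethered mobile devices.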
SHoP Architects, a young, award-winning architecture firm with an innovative design approach, shares its perspective on AEC technology in this Firm Profile.

What is the history and background of the firm?

SHoP Architects was founded twenty years ago to harness the power of diverse expertise in the design of buildings and environments that improve the quality of public life. Our inclusive, open-minded process allows us to effectively address a broad range of issues in our work: from novel programmatic concepts, to next-generation fabrication and delivery techniques, to beautifully crafted spaces that precisely suit their functions. Years ago, we set out to prove that intelligent, evocative architecture can be made with real-world constraints. Today, our interdisciplinary staff of 180 is implementing that idea at critical sites around the world. We are proud that our studio has been recognized with awards such as Fast Company’s “Most Innovative Architecture Firm in the World” in 2014, and the Smithsonian/Cooper Hewitt’s “National Design Award for Architecture” in 2009.

What is the firm's current focus? What are the key projects it is working on?

Since 1996, SHoP has modelled a new way forward with our unconventional approach to design. At the heart of the firm’s methodology is a willingness to question accepted patterns of practice, coupled with the courage to expand, where necessary, beyond the architect’s traditional roles. We are proud to have worked with clients such as Google, Goldman Sachs, and the United States Department of State. A snapshot of our current work includes a 1,400 ft Manhattan skyscraper at 111 W57th Street; the Barclays Center in Brooklyn, New York; 447 Collins, located in the heart of Melbourne’s Central Business District; the Botswana Innovation Hub in Gaborone; the Syracuse University National Veterans’ Resource Complex; and Uber’s new headquarters in San Francisco (Figure 1).

When did the firm start using AEC technology, and how is it being used today? How important is AEC technology to the firm?

At the heart of our process is a set of evolving tools and techniques that have come to be known as Virtual Design and Construction (VDC). VDC is the process of digitally simulating the complexities of a design project, in a multi-dimensional environment, under the lens of construction processes. This can include geometric rationalization, systems development and fabrication, logistics analysis, and cost estimation, from concept through construction (or fabrication through assembly). The VDC workflow leverages emerging, cloud-based technologies to promote collaboration throughout all phases of design, production, and building operation. SHoP has been a long-time pioneer of building information modeling (BIM), bolstered by VDC processes, a focus that has resulted in unparalleled architectural results under challenging delivery environments. SHoP’s identity has always embraced technology as a means to magnify creativity without sacrificing rigorous quality standards; we view technology as a tool to embolden the rich nature of human collaboration. Some examples of SHoP’s technology implementation are shown in Figures 2, 3, and 4.

Does the firm have a specific approach and/or philosophy to AEC technology? If so, what is it?

For nearly two decades, SHoP has pioneered new modes of architectural delivery, encouraging owners, architects, and contractors alike to form strategic relationships to deliver built work. The reason we do this is simple. By demystifying the process of construction, and by presenting complex processes in a manner that even non-specialists can immediately comprehend, we can access the knowledge of every stakeholder in real time. The result is broader, more fruitful, more fluid, and far more equitable collaboration. And that means better-performing buildings.

What are some of the main challenges the firm faces in its implementation of AEC technology?

A major challenge is that the standard AEC toolkit is not robust enough to facilitate the federated way that we should be working. We should have much more control over the pieces, parts, and products, and their respective lifecycles, within a portfolio of projects. The platform should facilitate parallel processing as opposed to linear construction. Our design work, in collaboration with all trades and stakeholders, should result in a digital twin of the project that can be meaningfully leveraged for its delivery. Traditional
The architecture, engineering, and construction (AEC) industry is ripe for disruption, and emerging technologies are poised to usher in a new era of increased design and construction productivity, quality, and efficiency. While many members of the industry have been slow to embrace change, firms like The Lamar Johnson Collaborative (LJC) are setting a stellar example of what a forward-thinking AEC firm should look like.

LJC may be just over five months old, but as founder Lamar Johnson says: “It’s been 20 years in the making.” Johnson launched his new firm with the idea of bringing together the very best people he’s worked with over the past two decades. This people-first philosophy endowed the firm with a depth of experience and range of capabilities that allowed it to hit the ground running and tackle large-scale, complex projects right out of the gate. Moreover, it gives the firm a unique perspective on the changes caused by recent technological advances.

Last month Anton Dy Buncio (COO, VIATechnik) and Gregg Young (Board of Advisors, VIATechnik) sat down with Johnson, Tod Desmarais (Managing Director at LJC), and Mariusz Klemens (Associate, Architect and Urban Designer at LJC) to talk innovation, tech, and the future of the AEC industry — here’s what they all had to say.

Anton Dy Buncio (ADB): These days, everyone is talking about autonomous vehicles, coworking/coliving, prefabrication, machine learning/AI…what do these technologies bring to the table, and what are the limitations?

Lamar Johnson (LJ): We built our firm around the idea of integrating technology into the architecture and design process in a holistic and authentic way. Of course, technology allows us to implement a vision and respond to issues more efficiently, but it doesn’t necessarily compel us to think differently. We still have to do that ourselves. Technology can empower us; it can supplement our thinking; it can make us more nimble; it can help us deliver our ideas in a more complete and effective manner. At the end of the day, however, it’s the energy, effort, and brain power that people put into projects that really make the difference.

When you combine that mindset with the power of cutting-edge technology, you can achieve really great things. It requires a lot of confidence — in both yourself and your technology — to raise unasked questions or suggest unexpected or innovative solutions, but we’re not afraid of presenting something unbelievable, because we know that what we’re doing works.

ADB: To that end, AEC has a reputation as a generally risk-averse industry, and yet you guys seem to be very comfortable with taking risks. Why is that?

LJ: I’d say that there are two sides to risk. In some situations, you take on much greater risk by doing nothing. Inactivity is a decision, and it can create a lot of risk in and of itself. If you fail to adapt or react to a changing environment, that’s taking the worst risk of all.

But it’s also important to note that “risk” is not a gamble. A gamble involves unknown odds; it’s taking a chance or a guess. Proper risk assessment entails a careful review of a situation, an analysis you then use to make an informed judgment. We do take risks — and so do our clients — but we thoroughly evaluate them beforehand.