Designing the Future Landscape: Digital Architecture, Design & Engineering Assets (Morning Session)


>>From The Library of
Congress, in Washington, D.C.>>Kate Murray: Hi, everyone. I’m Kate Murray, and I’m
from The Library of Congress. And please allow
me to welcome you to “Designing the Future
Landscape: Digital Architecture, Design, and Engineering Assets.” And before I introduce
our opening speakers, I’d like to say a
few words about what to expect for the day. We — and by that, I mean our
stellar organizing committee, which is composed of staff
from The Library of Congress, the Architect of the Capitol,
and The National Gallery of Art, have been planning
this event for over a year. We all sort of bumped
into each other when we were exploring
these complex and dynamic digital
collections about 18 months ago, and we thought there must be
other folks out there like us who are looking for
opportunities to share information
— it’s a hum, right? But to share information to
build communities of practice. So we decided to work together
in what really has been a model of inter-agency cooperation
and collaboration. And in this, we include our
colleagues from GSA as well. And we are bringing
together thought leaders, content creators, technologists, members of the archival
communities — that is to say, all of you,
to talk about the development and implementation of open,
standardized file formats, case studies, and current
projects and practices, and future-looking approaches. And we have our engaged and
supportive program committee to thank for bringing
this to fruition. You know how sometimes
program committees are more about moral support? This was not one of
those program committees. They were true partners in
bringing this event about, and we owe a sincere
and heartfelt thank-you for allowing us to
pick their brains and raid their contact lists. We’d be remiss not to also
thank the special events, music division, and multimedia
staff at The Library of Congress for their important and
valued contributions. And of course, we’re grateful to
all of you in the audience today for sharing our passion and
curiosity about all things ADE. A few logistics for the day. The restrooms are out the
door near the yellow corridor, although I think the
ladies’ room one is broken, so you might have to go
down the hall a little bit. Breaks and lunch are on your
own in our café, which is just across the hall, and it’s
actually pretty good. I usually get the Korean
rice bowl or the sushi, but you can find
something that you like. We expect to have a dedicated
pay lane, so look for some kind of a sign that says
“Architecture, Design and Engineering,” or
“Special Event,” and you can get in that lane, although
you can pay anywhere. There’s also a Subway and a
Dunkin Donuts on the G level. We won’t spend time in our
sessions on speaker bios, but you can read all about
them on the conference website. Our hashtag is #DigADE2017,
and it’s case-sensitive, as you see there
on the monitors, and we hope you flood Twitter
with posts throughout the day. The wireless network is
LOC Guest, and you need to accept the prompt in
your web browser to connect. This event is being recorded
for later release on YouTube and other social
media platforms, so we ask that during Q&A,
you wait for the mic to come to you before posing
your question. And please note that
participating — by participating in this event,
you acknowledge that your image and voice might be captured
on these recordings. Speaker slides will be available on the conference website
in the near future. We’ll also have a report
coming out in 2018, which will be authored
by Aliza Leventhal, covering the main themes
and topics of the day, and we’ll widely distribute the
links to all of these resources. If you have any concerns or questions during
the conference today, please seek out a member of
the organizing committee, and we have stars
on our nametags. And finally, we invite
everyone to join us for a no-host happy hour down
the block at Bullfeathers at the end of the day. This is the start
of a conversation, and we’re thrilled you can all
join us, so let’s keep it going. Without further ado, I’d like
to introduce Mark Sweeney, Acting Deputy Librarian
of Congress, followed by Stephen Ayers,
Architect of the Capitol, to get us started with
welcoming remarks. Mark Sweeney is currently
serving as The Library of Congress’s
Acting Deputy Librarian. Prior to his current
appointment, Mr. Sweeney served as the Associate Librarian
for Library Services. He was responsible for carrying
out Library Services’ mission, which is to acquire, organize,
provide access to, maintain, secure, and preserve The Library of Congress’s universal
collection. This vast collection
contributes to the advancement of civilization and knowledge
throughout the world — throughout the world, documents
the history and culture of the United States,
and records and supports the creativity
of the American people. On May 13th, 2010, President
Barack Obama officially appointed Stephen
Ayers to a 10-year term as Architect of the Capitol. Mr. Ayers is responsible
for facilities maintenance and operation of the historic
U.S. Capitol Building, the care and improvement of more
than 570 acres of grounds, and the operation
and maintenance of 17.4 million square
feet of buildings, including the House and Senate
Congressional office buildings, the Capitol Visitors’ Center, the Library of Congress
buildings, the U.S. Supreme Court building, the Thurgood Marshall
Federal Judiciary building, and other facilities. He’s responsible for the care of
all works of art in the Capitol under the direction of the
Joint Committee on the Library, and is responsible for the
maintenance and restoration of murals, outdoor sculpture, and other architectural elements
throughout the Capitol Complex. Please join me in
welcoming them to the podium. [ Applause ]>>Mark Sweeney: Good
morning, everyone.>>Good morning.>>Mark Sweeney:
Again, I’m Mark Sweeney, the Acting Deputy Librarian
of Congress, and on behalf of the Librarian of
Congress, Dr. Carla Hayden, it’s my pleasure to welcome
you to today’s program. We are pleased to host this
symposium in collaboration with the Architect
of the Capitol and the National Gallery of Art. Let me take a moment to provide
a little context for the Library of Congress’s commitment to architectural
and design heritage. The Library is both the oldest
federal cultural institution in the United States and
the world’s largest library. Since its founding in 1800, its
collections, which are universal in subject matter, have grown
to over 164 million items, stored on more than
838 miles of shelving. The collections include
more than 38 million books and other print materials, 3.6 million recordings, 14 million photographs, 5.5 million maps, 8.1 million pieces of sheet music, and 70 million manuscripts. What’s germane to our
discussions today is the fact that the subjects of
architecture, design, and engineering are
woven into all of these different collections. However, the largest
concentration of design sketches,
presentation renderings, and technical construction
drawings are in our prints and photographs division, which
is also home to the Center for Architecture
Design and Engineering. The division’s extensive
holdings include original designs by many of the most
distinguished designers who worked in the United States, including Benjamin Henry
Latrobe, Robert Fulton, Richard Morris Hunt, Cass
Gilbert, Frank Lloyd Wright, Charles and Ray Eames, Raymond
Loewy, Paul Rudolph, and, of course, I.M. Pei, just
to name a few of the giants that live here at the
Library of Congress. We are also fortunate
in having the archives of two prominent
professional organizations, The American Institute
of Architects, and The Engineering Societies Library. I don’t have time to tell you about all the great
research that’s being done with our architecture, design, and engineering collections
right now. However, to give you just a tiny
bit that will give you a glimpse of what’s going on, I’ll
mention a few examples of the work that’s being done
with the Paul Rudolph archive. The Urban Land Institute
is using the archive to further its study of
urban planning and renewal. An architect is studying
Rudolph’s Florida houses in order to understand
changes in building codes and their financial impact. Architectural scholars are
studying Rudolph’s building techniques and materials, particularly his
use of concrete. And finally, preservationists and architectural historians
are looking closely at examples of some of Rudolph’s most
notable public buildings, such as The Jewett Arts
Center in Wellesley, and the controversial Boston
Government Services Center. In closing, I’m sure
today’s panel presentations and discussions will help us
achieve a better understanding of the challenges inherent
in acquiring, preserving, and sharing important born
digital assets relative to architecture,
design, and engineering. And I hope the remarks of the
Library of Congress curators and others will stimulate you to explore our magnificent
collections. Thank you. [ Applause ]>>Stephen Ayers: Thank you,
Mark, and thank you, Kate. Good morning, everyone. It’s great to be here today. Let me start with a brief story about the first major
restoration of the United States Capitol
Dome that we recently completed, the first in a generation. You may have seen the
scaffolding going up or coming down, and we completed that terrific job in December of last year, just before the Presidential Inauguration. The Dome, as you may know, is made of nearly nine million
pounds of cast iron girders, plates, columns, and
cast iron ornaments. Even the pedestal
upon which the Statue of Freedom sits is
made of cast iron. And as we were gearing up
for this restoration effort, we used CAD drawings, of
course, of the Capitol Dome that we had prepared
many years before to help us develop
the scope of work, to help us document the
more than 1300 cracks, and deficiencies, and
repairs in the cast iron that we were about to undertake. And these digital assets,
or these CAD drawings, were incredibly helpful
to the work we were doing. But they didn’t contain
the important stuff, the important details
of the fasteners, the connection points, and
all of the cast iron parts. It wasn’t exactly like working
with a virtual erector set that showed how all of these
hundreds of thousands of pieces fit together to
make up this dome. You can’t necessarily see them
all, either, the fasteners and the connection points,
because they’re hidden. They’re inside spaces
that haven’t been — haven’t seen the light of
day in more than 160 years, in the gutter systems,
and balustrades, and those connection points
are what are important to us, and what were important
to our contractor. And so, despite the amazing
technology that was available to us, the most important tool in this restoration project
was the roughly 200 original watercolor drawings from Thomas
Ustick Walter in the 1850s, two of which you can see
right there on the screen. Thomas Ustick Walter was the
fourth Architect of the Capitol, and his drawings were tremendous in documenting exactly how
everything would fit together back in 1865. By sharing these drawings with
our contractors, they were able to develop their means and
methods by which they would go about disassembling all of
the pieces of this dome, restoring them, and then
putting them all back together. And that reduced our risk
immensely on the job. They were particularly
helpful on the balustrades, some of which you can see
here, essentially serving as a paint-by-numbers kit. And it’s amazing,
the level of detail of these drawings
back 160 years ago. I could spend the next 30
minutes talking about the state of our design drawings today
produced by our engineers and architects, and how
we can draw an outline of this particular dome, make
a bubble around it, and say, “The contractor needs to prepare
shop drawings for all of this.” Well, back in the 1860s, the
Architect of the Capitol did all of that, and Walter made all
of those connection points. In fact, we
don’t have shop drawings, because his design documents
were essentially the shop drawings for that. But that is for a
different gathering. Today, fortunately,
we are blessed to have some incredible
archivists at the Architect of the Capitol that were able
to pull these drawings together, the ones from Thomas Ustick
Walter, more than 200 of them. They were stored on a
network drive, just a couple of hundred gigabytes of
data that had been stored on that drive for a
number of decades. While we were working
on the dome restoration, we also took the time
to pay it forward to our future colleagues,
as well. We used BIM 360 to document
all of the work that we did, whether it was photographs of existing conditions once we took nearly 1/4 inch of lead-based paint off the dome, documenting the conditions that we found, or documenting the original repairs and the repairs that were done in 1959 and 1960. We paid it forward
for our colleagues that might be undertaking
similar work some 50 or 75 years from now, as they go about restoring the
Capitol Dome once again. In the short term,
we’re confident that we can mine this
data to diagnose problems that we may have with the dome
and our maintenance activities. And in the longer term, of course, this BIM 360 model will be useful only if it’s in a format that can be accessed by future Architects
50 or 75 years from now. And as much as we want
to cling to paper, it’s just not realistic, as you
know, as it won’t last forever. So we need to capture documents
in their born digital formats, and certainly, we’re
all here today wrestling with the same questions. What’s important to keep? What are the challenges of
keeping it in perpetuity? How do we protect the digital
record for the long term? And perhaps most importantly for
me, anyway, how do we make sure that our colleagues
in the future, those that might undertake
the repair and restoration of this great Capitol Dome
50 or 75 years from now — how do we make sure
they can use it? Well, I hope by the end of
the day, you can figure all of that out [laughter]. That’s your challenge, and
I’m delighted to be here. I hope you have a great day. I look forward to hearing
the results of your work, and I’m quite certain you’ll
be able to figure that out by the end of the day. So good luck, have a
great day, and thank you. [ Applause ]>>Kate Murray: So
thank you, Mr. Ayers. That’s a bit of a challenge. I think we’ll do our best
by the end of the day, and thanks to Mark
Sweeney, also, for those wonderful
welcoming remarks. So we’re going to start with our
first session, which is going to be Aliza Leventhal, Katie
Pierce Meyer, and Tim Walsh, and they’re going to
give us an ADE primer. Thank you.>>Katie Pierce Meyer:
Good morning, everyone, and thank you all for coming. I am Katie Pierce Meyer. My colleagues, Tim Walsh
and Aliza Leventhal, and I are
very pleased to be asked to provide an ADE
formats primer. Essentially, what we’re
going to do is give a — you about a one-hour
introduction that covers the history and
preservation issues associated with architecture design and
engineering and digital content. We’ll each take on a particular
period, identify key trends and preservation concerns. I should note that the
three of us come with a bit of an architecture bias,
but we’re very interested in exploring the connections to
and overlaps with other fields. So, I will introduce the early
years, provide a little bit of a background on
early CAD development from the 1960s through
the 1980s. My colleague Tim Walsh will
take on the 1990s and 2000s, a very active period in
technology development, and increasingly complex
preservation challenges. And finally, Aliza Leventhal
will discuss software — current software and
preservation landscape, setting us up for the
remaining sessions today. But first, let’s start
with a look back. So the use of computer-aided
design applications in architecture trailed
similar implementations within engineering
and manufacturing. Stimulated by mandates from the
U.S. Air Force in the 1950s, airplane engineers and manufacturers were
early adopters and testers of computer technology
for design. Computer-aided design and
drafting, manufacturing tools, and techniques were developed
to use machines as a means of increasing speed
and precision, while decreasing
production costs. In the aerospace industry, automated drafting tools were
widely used in the 1950s, and the needs of
manufacturers resulted in innovative software
programs for project management, data collection,
and engineering. Research and development
intensified in the academic community,
however, in the 1960s. Ivan Sutherland, his 1963
electrical engineering doctoral dissertation, and
his development of the Sketchpad system
are widely credited with being a significant
contribution to computer graphics
programming. Sutherland was working at
MIT, along with Steven Coons, to rethink the interaction
between humans and computers. He and Coons envisioned a
computer-aided design system that would allow multiple
designers to interact with the system simultaneously
to communicate effectively with each other, and to,
quote, “use the creative and imaginative powers of
the man, and the analytical and computational
powers of the machine.” While these researchers
were approaching a problem from an engineering perspective, their work demonstrated
possibilities for computer graphics
that held promise for applications
to architecture. The idea was to create, move, and copy geometric
entities using a light pen. By the 1960s, several
conferences were organized to bring professionals together, kind
of like we are here today, to discuss the possibility and
implications of using computers in architectural practice. The goals of the conferences
and associated publications were to present information
about current efforts in using computers in
architecture and related fields, and to open up a discussion about the potential use
for creative design. There were also conversations
about the role of the architect, and what that looks like if
you start adopting computers in your practice. So there’s a little bit of
an apprehension on the part of some architects about
embracing computer technology, because of the uncertainty of
the value of the technology, and concerns about potentially
changing their role and status. A common theme in
the conferences and publications focused
on educating practitioners about computers and addressing
their role as architects. Now, in the 1970s,
we saw the beginning of the first commercial programs
available to the market. These are just a few examples
of those that were available. They were, however,
very expensive. In his 1977 book,
“Computer-Aided Architectural Design,” William Mitchell predicted that
developments in architecture and computer technology
had finally reached a point where the tools would, quote, “radically transform the
practice of architecture.” So what was coming
down the line — a lot of things regarding
personal computing, and just a lot of changes
within computer technology that were going to make
these programs a little more accessible, and, as
we’ll see in a moment, change just the nature
of the field. So the book was written
as an introductory guide for architects, students,
and computer technologists, to try to bring them together
to think about the developments in computer technology, and what
was possible in architecture. So he encouraged
thinking about design as an information processing
task, in which one conceives of the mechanics
of managing data. So it was about data
even in the early days. He was concerned about
the seeming hostility from architects, their
ignorance of the potentials of the computer technology, but
mostly the financial constraints of relatively small firms. It was — he noted that the
challenges were particularly great as compared
to other fields, where computer-aided design
was implemented more readily — specifically, automotive
and aerospace engineering. One point he made was that the
relatively small size of firms and the economic
investment necessary to bring computer-aided
design to such firms meant that the adoption of
the technology needed to be perceived as necessary
or valuable to the industry, not just to individual firms
or individual practitioners. It needed to be sort
of a broadly-applied — to have real value, and to be
perceived as having real value. So what are some of the preservation
challenges from this period? You know, I’ve introduced
kind of a lot of big ideas around what was going
on in those early days. There’s not as much in the
way of — as far as I know, large quantities of
records available, in terms of the technology. But there are some, and I think
identifying where those are, where different repositories
might have different things that might help us preserve what
was tried, what was debated, and what was presented
to various audiences, what has value, both for the
history of design records, but also for the history
of computer technology. Because some really interesting
things were happening during this period, and they’re just
sort of blending, and merging, and borrowing ideas and
technologies from other fields. So looking at that, I think, is one of the things
that’s really valuable. There’s also the experimentation
and adaptation to new markets. Some of the technologies
were created for one market and then used within another,
and I think that’s the kind of thing that we might
want to be preserving and providing access to. So, moving on to the 1980s — the 1980s can really
be characterized by a proliferation of programs. As I mentioned earlier,
there’s sort of a change in computer technology —
several changes that allowed for a wider array of people to
have access to the technologies. They were becoming
less expensive. Numerous vendors
entered the market. So this is a very long list, that is by no means
comprehensive, of all the vendors that were
entering this particular space. Some were specifically focused
on architectural software, while others expanded
their offerings to the architecture
and design market. So numerous companies competed
to create and sell CAD software to firms, including
AutoDesk, Intergraph, Bentley, Dassault, just to name a few. So in the early 1980s, firms began adopting
computer-aided design technologies — some firms — in large part because the
development and affordability of computers occurred in
concert with years of research and discussion within
the industry. So another thing I like
to point out is that all of this happened, you know,
over a long period of time. There was a long period of
discussion that was occurring, that, you know, architects
were sort of debating what
their role would be, how they might implement
this, whether it was worth it. There’s a kind of — there’s
a lot of sort of social, and political, and economic
considerations that go along with all these changes
in technology. So, just to give you a
little bit of a view of some of the advertising that was
going on during this period, in the 1980s — this is taken from several different
journals in the mid-’80s. And one of the things I thought
I’d mention was that the editors of “Architectural
Technology,” a publication of The American Institute
of Architects, organized two rounds
of CAD evaluations. They called them shoot-outs, and published the
results in 1984 and 1986. They evaluated numerous programs
— architects evaluated them, and by the ’84 review, AutoCAD
was already pretty acknowledged as the de facto standard
for affordable CAD. And by 1986, it’s noted that
most of them were expanding into giving 3D possibilities,
or 3D models of buildings, as well as 2D plans, elevation,
and construction drawings. So one of the things
that was revealed is that there were many
affordable options for CAD software
in the mid-1980s. So it wasn’t as easy as CAD just
— you know, AutoCAD came in and took over the market. There was a lot of
competition during this period, and a lot of different
software out there, and a lot of different
companies making this software. So — let’s see. From the 1980s through the ’90s, substantial competition
existed, with AutoCAD really
becoming a predominant tool within the architectural
community by the 1990s, but by no means the only tool. Others have persisted, and
digital assets
have a lot of value. One thing I would like to note
is that, during this period, there’s several preservation
things to contend with. So a lot of this
software was sold in hardware and software
bundles. So one thing we might need
to think about is not just, you know, providing access
to software, but thinking about the hardware used
to access these materials, which we would in
most cases anyway. But there’s some
really interesting and different hardware that
might’ve come with some of these programs that we
would need to contend with. There’s, as I mentioned before,
development or adaptation across multiple different
fields. There’s a lot of proprietary
systems that we’re just starting to see, and you’ll hear
more about from Tim. I’d also argue that preserving
the context of these assets, the history of these
vendors, is very — I think interesting,
and significant. There was a lot of mergers,
a lot of bankruptcy, or people choosing to get out
of CAD, but also people choosing to merge together
with another company, because they were
stronger as one. So I think that kind of
history might also help us, when thinking about the
challenges we’re dealing with, and just thinking about
telling a broader story about computer-aided design
technology for architecture. So collaborating with
colleagues to identify and address the preservation
of legacy assets is part of the challenge we’re
here to address today. And I’m very happy to
introduce my colleague, Tim, to tell you about the 1990s.>>Tim Walsh: Good
morning, everyone. I thought I’d take a second, if we can just together
congratulate Katie, who just successfully
defended her Ph.D. Whoo! [ Applause ] And on very similar topics, so you have the right person
introducing the early days of this to you. So picking up with the 1990s, I
think one thing that’s important to acknowledge is
that, you know, there’s some arbitrary
distinctions here. Some of the things that’ll
characterize as, like, from the ’90s, or from the 2000s
— it’s not quite so linear. People weren’t always working
on the same timeframes. There’s a lot of trends
that stretch over time, but that said, we’ll
kind of stick with the narrative for now. The ’90s really saw sort
of two divergent trends. In aerospace, automotive,
engineering, we started to see a lot
more market consolidation around a handful of systems. This is probably largely because
in these fields, the longevity of assets is just as, if not
more, important than, you know, the creativity of the
design, which is, I think, something that we’ll hear
a lot more about later on with projects like
the LOTAR project. But really, we saw quite a lot
of consolidation around tools like NX, and CATIA, and Pro
ENGINEER, largely by these sort of gigantic firms like GM,
who not only standardized around the software, but
actually, in some cases, you know, purchased it,
and guided its development. At the same time,
in architecture, we sort of saw an explosion
of the amount of software that was being used,
and experimentation — you know, even people
taking software, realizing it didn’t
quite meet their needs, and sort of scripting around it,
and hacking on it a little bit. And a lot of this is
related to the development of 32-bit operating systems
and personal computers that actually had the
processing power to compete against these larger, UNIX-based
mainframe applications that had previously kind
of dominated the market. So when we get 32-bit operating
systems, like Windows NT with more advanced
graphics capacities, more advanced processors, suddenly a desktop application
could actually be just as viable as a mainframe application at
a significantly lower cost. Which opens the door to
smaller firms, to students, to people working at home, which
really kind of changes the game, when it goes from being
basically a technician doing the CAD, or, you know, a very trained professional
doing the CAD to anybody. You know, where now, you
know, a student, like, one of the first things they’re
going to do is open up Rhino and start doing 3D modeling,
and it’s very, very different than what it used to be. We also have the development
— sort of the brief rise — like, fast rise and brief heyday
of Silicon Graphics, which is, I think, a history that has been
kind of written, but in relation to this field, is really
important to think about. Because essentially, that meant that there were these
commercially-available systems that were miles and miles
beyond, in terms of 3D modeling and animation, anything
else that was out there, that were largely used in
other but related spheres. And I think this is
something that I’ll talk about a little bit — I’ll
keep talking about a bit. Like animation, the film
industry, special effects — so basically, you have
software that’s being written for these machines that’s being
used to do advanced CGI work, that’s then getting taken by
firms like Asymptote and applied to do visualization of the
New York Stock Exchange, for instance, which becomes
part of a built environment. And these sort of crosses
between these disciplines, I think, complicate the history, make it a little
more interesting. So some of the software that
we see in this time period — a lot of it’s not new. Form-Z would be an exception
here, which came out in 1991. But existing systems, like
MicroStation, Solid Edge, SolidWorks, AutoCAD really sort of developed the 3D capacities
either for the first time, like AutoCAD comes
out with release 13. Or they sort of continue to
develop the 3D capacities that were already there to the
point that they’re actually, you know, viable, commercially. And I’ll talk about this more
in a minute, but, I mean, these are some of the
systems that basically won in this market that
Katie was talking about, with a lot of competitors. There were a lot of
other systems out there, but these are the ones — you
know, they’re still around. They’re still in wide use. We also have, like I said,
this sort of parallel trend of 3D modeling, and this
largely coming out of film, games, special effects. So PowerAnimator becomes Alias. Alias becomes Maya. These things are basically
used to create the dinosaurs from Jurassic Park, and
the liquid spider — or, the liquid Terminator
from Terminator 2, but then get applied to
architecture, and design, and engineering, and
sort of change the way that not only models
are being created, but also how they’re presented. Things like, you know, having — being able to do
animated renderings, and photorealistic
video walk-throughs of what a building would be like
kind of changes the interactions that people have with clients, on top of how they’re
actually doing the work. And a lot of this
software’s still around. I mean, even big budget movies
are mostly still using things like 3ds Max, Maya, which
were largely developed for and/or bought by
AutoDesk, for the most part. This is a couple of the
other software programs that were around. This is the desk next to mine at
work, some of the old software that we’re disk imaging. But just to show that, you
know, there are lots of names out there, too, that you
don’t necessarily remember. I think I agree with Katie. I think it’s a fascinating
history that deserves a little
more attention. There was a lot of
experimentation, and what this means is that, if
you are a place like the CCA, where I work, or, you know,
a library archive museum, and you’re getting the
archives of these projects — like, these are just a
few of the file extensions and softwares [sic] that you’re
potentially contending with. You know, I think
we have a tendency, from our historical perspective
now, to think, like, oh, it’s like AutoCAD, and Revit,
and maybe MicroStation, but there are, you know, dozens
and dozens of software programs and file formats that
we potentially have to contend with. Which is, I think, a thing
that we should keep in mind when we’re talking about things
like best practices for working with these legacy collections,
you know, that file formats like STEP are fantastic, but
it doesn’t necessarily mean that we’re going
to have a converter into these open file
formats from all of these previous softwares
[sic] that came before. Some of the preservation
challenges come, again, from this sheer variety
of applications out there, most of which were proprietary, most of which didn’t
sell that many copies. These applications are
also extremely expensive, so it’s not like
Word or something, where you can buy
a copy off of eBay, because it was in
almost every home. At this point, even tracking
down the software to begin with can be a real challenge. Experimentation and
practice as well — I think this is something —
I might be a little biased, coming from CCA, which is
usually interested, sort of, in developments in theory
and practice more than sort of the end products
of architecture. But, you know, in
our collections, we see from this period
a lot of scripting, a lot of creative
experimentation with software, which means you have to
get creative in response when you’re thinking about
how you preserve access to this material
over the long term. One of the things which
begins before, and
continues even now, is that it’s an extremely
protective market, which means, like I said, the
licenses are expensive, but also that we’re dealing
with things like dongles and hardware keys, where
the application won’t work if you just have a copy of
the application and a license. You also have to have
something physically plugged in to a parallel
port or a USB port. So if we’re thinking
about, like, oh, maybe one potential
solution to this is, we preserve the old software,
and we run it in emulators, which is something we’re going
to hear more about today. And there’s a sort of
added layer of complexity, because how do you — you know, how do you convince this
software that’s not actually running on a physical machine
that there’s this dongle plugged into a physical machine,
without actually going in and cracking the software? Which is not exactly allowed
by U.S. or Canadian law without explicit permission. So these things start
to complicate some of our potential
solutions a little bit. Proprietary hardware platforms,
like Katie was saying, too, you know, SGI was fantastic. It was also extremely expensive,
so people like Neil Denari in architecture only got copies
because they were hanging out with people in
the film industry. So it’s not like there’s
a hobbyist gamer community emulating these things,
whose emulators we could then use. You know, so there are
whole challenges there. And I would say, you know,
finally, that sort of lack of, like, fully interoperable
vendor-neutral file formats — you have things like IGES,
which is long established by this point. STEP is in development by
the ’90s, but it doesn’t mean that all these software
programs can actually write data into these interoperable
file formats. It doesn’t mean that data
isn’t lost in that migration. So just a thing to keep in mind. As we go into the
2000s, essentially, we see sort of further
development of these tools, especially around 3D
modeling and scripting. Probably the biggest
thing, though, is just a wider adoption, that these software
programs get more advanced because there’s more
time, but also more money, because the market’s bigger. We see the development of things
like SketchUp and Rhinoceros that really lower the barrier to
entry for things like modeling. Also, a lot of really
interesting automated fabrication, 3D printing — so people start to play with
materials a little more, think about what can change
on the construction site, what can change in the process
of creating the materials that ultimately go to the site. One of the biggest things, which
actually starts in the ’90s, really — technically maybe
before with AutoLISP — but is a — this, like, sort of
experimentation with scripting. So in the ’90s, you know, if you
look at a software like Maya, which is largely used
for, like, animation and special effects
work, but tends to get — like, gets used in
architecture and design as well, you no longer have to program
outside of it and figure out how to interact with the program. They start to build in
capacity for scripting. So Maya has the Maya
Embedded Language, or MEL, so you
can actually script within the program to do
things like generative design. That eventually gives way to more standard programming
languages, like Python, and then, you know,
in the 2010s, like Aliza will talk about,
that gives way to even sort of easier methods of scripting. So they start — these
things that people were doing as experimental practice in the
’90s start to become normalized as part of, like,
typical practice. And finally, but certainly
not least for the 2000s, we see the real development
of BIM. If you look at the sort
of AIA membership surveys at this point, you realize that basically Revit gets
developed in the 2000s. ArchiCAD becomes, you know,
fully unarguably BIM software at this point, but not that
many people are using it yet, except for the really
large firms. So something like, you know, 16% of architectural firms
are using BIM software by the mid-2000s, but
IFC is in development. The software’s there. There’s a lot of
evangelizing going on, a lot of discussion going
on, which sort of leads us up to the current moment. So unique preservation
challenges for the 2000s — you know, all this use of
scripting, of parametricism, of experimentation around
the software means a lot of dependencies, and a lot of
data that’s potentially lost if we migrate between formats. Always complicates things. We have much more
complex workflows, which, for someone like me, means
I might get what appears to be the same model in
four different file formats from four different
software packages, and unless you have
documentation from the project, you don’t necessarily
know the order that things happened in. Because they might be doing
initial modeling in one thing, and then porting it to
something else for rendering, and porting it to
something else for creation of construction drawings, and sometimes it’s a lot
more complex than that. So it starts to — we start to need quite a lot
more documentation to understand the
work processes. A vast increase in the number
of files and file size — so at this point, as the software gets
a lot more complex, the assets get a lot bigger, which has real sort
of economic concerns. Which, I think, when —
especially when we’re talking about the cultural sector, has
a big impact when we’re talking about collecting, because we
have to actually be able to pay for all this redundant storage to meet our digital
preservation best practices. And finally, a challenge
that I think is not related to just architecture, and
engineering, and design, but certainly universal,
is partially because these files
were so much bigger, there was so much more
data, people didn’t want to keep the stuff on their
computers and servers anymore. And people were told that it
was best practice in the 2000s to use things like Archival
Gold CDs that were guaranteed for 300 years, that I can’t read
10 years later, and LTO tape, which degrades over time. You know, there was a
really common practice — a lot of architects were
like, we’ll burn our archive to CDs now, and these are things that we have to contend
with now. It basically means that
there’s a ticking clock, where a lot of data’s sitting on removable media that’s
in people’s closets. It’s under the stairs. It’s rapidly degrading, and
things have to get migrated off if we want to even have the
assets around to begin with, in order to preserve and
provide access to them. And with that, I will
leave it to Aliza to very competently
continue the talk. Thank you. [ Applause ]>>Aliza Leventhal: So as
you see, the zoo continues. It doesn’t get smaller. It continues to grow, and
these are just a sampling of some of the software. I am the archivist
for Sasaki Associates, which is an interdisciplinary
design firm that includes planners,
landscape architects, graphic designers, including
environmental graphics, lots of architects. And so, as a result, this
is a pretty good assessment of the software that
we use at my firm. It’s not everything, but
it’s a good assessment, which also tells you
that there’s some — while we have a skew
towards architecture, a lot of these tools and
resources are being used by the other disciplines
as well, mostly because you can’t design
a building and not have a way to put what the landscape is
going to be in that same model. It’s impossible. You can’t really do that. It’s a bad idea. Well, I guess you can do it,
but I don’t recommend it. And so it’s important to
understand that this is not — we’re not looking at a
microcosm of our system. It’s a larger ecosystem, and
these software are working in between different
disciplines. But the important thing
also to think about is that as some software, like
Revit, is becoming more robust and heavier, clunkier,
bigger, can handle more data. We’re also getting really light
things like Tim mentioned, of Rhino and SketchUp, that
are really easy plug-and-play, which I’ll talk about
in just a second. That facilitate much
easier access, and for you to do different
things at different times. And so the workflow issues
that Tim’s talking about — talked about just a minute
ago get more complex, because you’re like, oh,
I see a SketchUp file. That was only during
schematic design. Great. Nope. Nope. That’s not the case,
because sometimes you’ll want to do a piece of that
massing in SketchUp, and then import it into Revit. And then it’s there, and now
you’re building on top of that. So it’s a very confusing — and I’m sorry for everyone
who is not familiar with all of these software. I’m going to explain
them in just a second. My apologies. But the workflow is
really confusing, because you might
think that you know when something was used based on
what the software is capable of, but it’s kind of a — you
know, the Wild West continues. We thought it was only
really in the ’90s, when everyone could sort of
hack and break their software, but it’s continued to evolve. It wasn’t an anomaly
of the ’90s. It’s actually the
trend, which is great, but two of the main
things that come out of the 2010s is
really parametric design, and computational design,
and visual scripting. Visual scripting is
particularly interesting, and comes a little bit
later, so we’ll focus on parametric design for now. The software that’s
included on this list — the large blue R is Revit. It is a building
information modeling system, and it definitely has the
lion’s share of the market. It is incredibly robust, because
it can handle engineering. Mechanical and structural
engineering can be added on, or you can take those
layers off. It’s really helpful for that. The bumblebee, and
ladybug, and all of that — there’s actually a whole zoo
that exists for energy modeling and more environmentally
oriented data elements, and — for programming. And then Rhino and SketchUp,
as I’ll explain right here, are the much lighter-weight
softwares [sic]. So this should just
give you a basic primer of what we’re looking at, in
terms of gradation of intensity. So starting at the
top, it’s much easier. You can really — like, first
day of class in design — any design program, you should
be able to just, you know, hop on and start playing around. It’ll be super confusing
at first. I have worked with it,
and it’s very confusing. But once you understand
the lay of the land, it makes it much easier to
come into these other software, and so you start
understanding what massing is, what different layers can look
like, and different levels of texture and materiality
that you can apply to it. Alternatively, we’ve got
Grasshopper and Dynamo. So Grasshopper came out
much earlier, and was meant to be paired with Rhino. I’ll just go there
and show you that. So that’s helpful. So these — Grasshopper was
designed to work with Rhino, and Dynamo, which is this
fun four-cornered thing that has an arrow going to
Revit, comes from Autodesk. They basically are intentionally
designed software to pair with another software, which is
an entire market that evolves. But, a lot of the time,
the software like Revit — Revit has just kind of made
itself this wonderful amoeba that can just continue
to accept things in. So more things can
be imported into it, and it can handle
exporting into a different — a huge range of different
file types. So it’s sort of just
becoming this sponge that can handle taking
everything in. It becomes a really
heavy software, though, and that’s really
dangerous when you’re working on a very large project. There’s this chance for
something to not get saved back to the file correctly, as many
people are playing with it — not playing, working in it. And — you know, and
so it can become — it can become much more complex
and difficult to parse apart, and find where the errors
happened, because you’re working on a much larger scale. The wonderful thing that
comes with visual scripting is that it allows designers to
very quickly get into the landscape of
programming, and it provides some of the automation that the
1970s had hoped computers would offer them. Because if you’ve ever worked
in any of these software, you know you don’t want to
really be the one that has to number every single
room in a dorm as you’re working
on the floor plan. And so that’s one of the things
that Dynamo can do for you, is like, we’ll automatically
create that — if you run that script, it
can create numbering systems. And so it lightens the load
for the designer to not have to focus on these
granular details that you really don’t want
to mess up, and instead, you can rely on the strength
of the computer to fix that for you, if you’ve
applied that element. So that’s a really hopeful
thing that’s come about, but it does create a lot of
layers of issues that I’ll talk about in just a second. This is the layer of issue. So this looks really
simple and clean, right? Oh, obviously, one-to-one. Yeah, that’s super, except
that’s not really what happens. This is an actual diagram
from our sustainability lead in my firm, and she created it
to explain how certain data goes from one area to
another in order for you to create certain energy
modeling assessments. And so, I’m not going to get
into all of the various critters on the screen, but the important
thing to understand is that, it is a really complex problem
that we’re dealing with. Because this is just for
her part of a project. Like, that is a very small
portion of a much larger animal, and we’re not even at
construction administration yet. Like, this is during
design phase. You know, so — and for everyone
who might not understand
the full phases, there’s schematic design, which
is where you’d want to just play around with massing and come
up with a general concept. There’s design development. That’s when you’re working
more on honing the design, and really
clearing it up, and then you get into construction documents. And that’s when things get
really firm and more concrete, to actually then make
it into concrete. And so this comes much earlier
than the actual concrete moment, but — like, this
is the landscape that we’re dealing with. And I can tell you,
it’s terrifying to me. I’m sorry. That probably doesn’t make
anyone here feel very good, but — [laughter] but it’s
a scary space to be in, to recognize that when
you accept these files in, even at my firm, there’s
a good chance that some of these won’t have been
saved to our billable drive. It’ll be something that she,
in this case the sustainability coordinator,
was working on on the side, just testing something
out, and then maybe that doesn’t get back. And so, I mean, that’s, like, one orphan record
kind of situation. But for someone at a
collecting institution, that becomes a much bigger
issue, and it’s something that needs to be talked
about in terms of appraisal. Which is not what I’m
here to talk to you about, so I promise I’ll stop there. We’ll continue with the primer. That’s just a nice
little doom and gloom for everyone to keep in mind. So then, more doom and
gloom, but also excitement, is that now we’re
moving towards the cloud, and it’s a more collaborative
environment, higher impact in terms of — firms can work across the country
with each other. They can work across the world
with one another using platforms like GreenBIM and A360. I include Adobe Creative Cloud, because it’s also an incredibly
robust cloud-based suite. And basically, what we need
to be worried about here is who owns these records,
because they’re being hosted on their server. And it’s not that
I think that any — the servers aren’t trustworthy,
or have poor security, or anything like that,
but it’s about figuring out the right regulations. If we’re the prime on a project,
and we have subconsultants that need to have
access to certain parts, how do we protect our
model from any, you know, mischievous goings-on? And so, you have to keep that
in mind as we think about — well, the model — the business
model then also changes. So the expense is going back
up, because not only do you have to have your copy of Revit
2018, but then you also have to pay separately for A360. So now a firm that would
normally pay a set price for one year for that person to have one thing is now
paying basically double that for every single person
who needs access to it, because of the way that the
subscription model has been set up. And as software becomes
software as a service, and subscription models rather
than, you buy that software and you have it on your
computer, and it’s something that — oh, well, Katie’s
gone onto another place, so we can take her copy
and give it to Charlotte. You know, that doesn’t
work anymore, and it’s something that,
as people who are going to be collecting this
stuff in 20 years, need to think about now. Because there has to be
a proactive solution, because otherwise, we’re
all kind of in trouble. And so that’s something
to keep in mind, because we’re getting
more exciting as well. So A360 and BIM — GreenBIM, BIM360 that was mentioned
earlier — they’re all really
exciting platforms, and they allow the designers to
do incredibly innovative things as a result of being
able to work in real time across the world and with each
other in different spaces. However, we need
to keep in mind — what does it mean when
the project is done, and how do you get
those records back? So then, we’ll talk about
something fun again, which is about augmented
reality and virtual reality. These are the new ways
that firms are beginning to communicate with clients. So building models is still
huge, and is really important to materiality, and
expressing the idea, because visual literacy isn’t
something that everyone has at the same level as a
designer who went through five or seven years of design school. And so it’s great to be able
to provide a relatable way to experience building,
and one of those is — virtual reality has become
a new way of communicating. And so sometimes, we’ve
been able to use it and actually share it with our
clients, and sometimes it’s just for the designers to get in the
space and see what it’s like. And virtual reality is really
interesting, because as you see on the screen, Revit
and Enscape, you can just pull
something right in. You can make a change
in a Revit model, and it’ll be immediately
reflected in Enscape, which
is really cool. But then you can also do that
for a SketchUp model, which — SketchUp just has a little
button, and you can go into virtual reality mode. And so, again, what
happens to that record? Because there’s no
way of knowing that someone did the
virtual reality mode of their SketchUp model. Maybe they wrote
down in context. Maybe they have it in the
meeting minutes with the client, that they showed them
that, but it’s — the context is really important. And it’s one thing to know
what the software’s capable of. It’s another thing to know — what did someone actually
do with the software? That is the question,
when it comes to understanding
what is available. And then augmented reality — I was also told by somebody that
there’s, like, mixed reality, and I don’t know what
that is, so I’m sorry. But augmented reality is
where you apply, like — if anyone’s familiar with
the game Pokémon Go, where you basically
put a layer on top of what you’re experiencing. So this could be really
incredible for planners who want to show, like, this is what
this square could look like now, or this is what the
streetscape could be going — like, walking down, you
know, Pennsylvania Avenue. They could just change things, and so you could see what
layers get put on top, rather than sitting in
an office and looking at a model on your table. You could walk and
experience that yourself. And the potential
for that is immense. Like, I’m so encouraged by
what is possible, but then, like, how do you save this? Especially if it’s
made as an application. Well, then, now we have
to worry about holding onto a cell phone type
that this was made for. And as we all have experienced,
even if you’re really nice to your cell phone, maybe
it’s not always going to be so nice to you. So that’s something to think about when we work
through these things. And so, to end on
the wonderful part of new preservation challenges, you’ll note these are
basically the same ones that Tim mentioned earlier. They just get more intense. So there’s more scripting,
and that’s great. It’s increased the number
of dependencies, though, because now we have
more data involved, and there’s more experimentation
as it has become democratized. So it’s not only, you know,
the incredible experimental and early adopters that
Tim has at the CCA — has the records of,
but now it’s — literally every student
can try experimenting and creating these new
concepts at a very low barrier. And so that just means we have a
whole lot more experimentation. Instead of only 10 people
messing around, you’ve got thousands. The workflows are,
like I mentioned, as you can see, super fun. Workflow number one — and so, workflows are getting
much more complex, and that adds a whole
new spectrum of understanding the
provenance of the records, understanding the intention of
the records, and then worrying about how to actually get those
records when they are done. As an archivist at a firm, it’s
a very complicated process of getting the records
when a project completes, because there are so many
different ways of defining when a project is completed. Especially if you continue
to work with a client for, you know, 30 years — maybe that project’s never
actually completed, but then it’s 30
years of records that have not been cared for. And that becomes much more
dangerous as we move more into digital, and if
people think that it’s safe to leave things on CDs,
we might be in a bind. We are in a bind. So — [laughter] and so then
the last two are a vast increase in the number of project
files and file size, and that means more in terms
of, like, affordability — like, the sustainability
of an archive being able to handle collecting
more material. If your first three
collections, you didn’t realize that you needed to change
your appraisal guidelines, so now you have six terabytes
from, you know, Aliza Leventhal, congratulations,
but that’s not great if you are now 45 people
in, and they also want to give you all three
terabytes that they have. If you don’t have the
bandwidth to literally house it, and you don’t have the
funding to support that, that becomes a problem that
we need to become better at articulating, so
that we’re more careful in how we’re curating
our collection. Which is a dangerous thing
to say, because we don’t want to be pre-selecting too far — too much, but we also need to be protecting
ourselves for the long term. And then, the last thing is
actually digital deliverables, and that’s something that
I’m really hopeful for, and hope that we get to
talk a lot more about today, which is that there is
now a need for this. So before, it was,
wouldn’t that be nice? Ah, this is really frustrating. I don’t know what to do. But if these files are
becoming contract deliverables, that means that, in perpetuity,
they need to be available. It’s not just being kind
to our future colleagues and future designers that
need to reference these. It’s a legal obligation to provide these records
for long-term access. And so it’s a call to arms, not only for the preservation
community, but for designers who are agreeing — who are
signing these contracts, saying, “Yep, I will definitely
give this to you in various
level-of-detail BIM models.” Yowza. All right. You need to, like, have a
way to do that, and you need to help educate the
person who’s claiming that they want this file to
then be able to receive it and protect it for
the long term. Because that’s the whole point,
is that they want to move to that one file to, you know,
rule them all, sort of thing. That’s what Revit’s promise — or, not Revit, but
BIM’s promise, was that if I have this
building information model, I can have the designer
design it. They can give it
to the contractor. The contractor can build it,
add whatever notes they need to in the process, and
then they can hand it to the facilities manager. And that’s a really
beautiful idea. We’re just not there yet,
but the deliverable — that contract deliverable
is the thing that’s going to get us there. Because there’s no
other way to ensure that something will get done
unless there is a huge amount of liability for
not doing it the right way. So I know that’s not a very
exciting way to end that, but it is — I think
it is really hopeful, and I hope you also think so. If you have questions,
feel free to ask us. We’re all really
excited for today. This feels like a
long time coming. I know we’re a little
quick, so that’s all. [ Applause ]>>Kate Murray: Hi, everybody. We all spoke really fast. So we’re wondering,
would you guys be willing to take some Q&A?>>Tim Walsh: Sure.>>Kate Murray: Great. So if you have questions,
we’ll have mics. Questions, anyone? No questions. Oh, one in the back. Phil — yeah, you — we will
get a longer break time, so that’s okay. So –>>Maybe because this was the
last thought I heard from Aliza, do you have any examples,
working at Sasaki, for what are deliverables
currently? How are you — I mean, surely
you’re delivering products. What’s your best least
common denominator?>>Aliza Leventhal: Yeah. So — is this on? Does this work already? Can you hear me? Great. Super. Well, that’s been recorded now. So — yeah. So Sasaki is working with some
projects that do require it, the — I believe it’s the BRA, the Boston Redevelopment
Authority, or whatever it has
now been renamed. We were doing a project for
them, and they were asking for Revit as their — or, a
BIM model as their deliverable. And they provided the standards
that we had to comply with. So it was a very well-thought-out
and articulated deliverable that we were able to uphold. I read what they were
requiring, and it seems like — because they were — the
client also owns a copy of the software already, because
this is how they’re planning on moving forward, it’s
a much easier option. Because they’re planning on
just holding things in a version of the software, and then
maintaining it at that level. So that changes things
a little bit for us. I don’t think that’s going
to be the case almost ever, in terms of, like — if you
think a university is going to keep a version of every
software that you, you know, make a — or that
you work for, like, that’s kind of a crazy option. But that’s what I think
is going to have to happen for the beginning, is that these
clients are already having a version of the software
for themselves to be able to read it. And then the next thing
is figuring out how — how do we make this so that
it can be long-term access, and it’s not dependent on if
it was a 2016 or a 2017 file? Because corruption does
happen at that point. Yeah. Yeah.>>Kate Murray: Thank you. Mike –>>Tim Walsh: Well, I might
add something real quick, which is that if there are any
software vendors in the room, that means there’s
a business case for licensing us
your old software, and not just librarians
and archivists. It does come in handy,
and we would pay, if there was an option to.>>Aliza Leventhal: Hint.>>Really, good luck.>>Kate Murray: The
mic is on the way.>>Aliza, you talked a little
about problems of assessing or determining ownership
in the cloud environment, and I thought that
was fascinating. And I wondered if Katie or
Tim had any comments on issues about the same problem
with respect to the earlier periods
that you described.>>Tim Walsh: Probably not
much cloud in the ’70s, yeah?>>Katie Pierce Meyer: Yeah. Yeah. Well, but not the
cloud, but file ownership –>>Tim Walsh: Right,
ownership, okay.>>The question doesn’t go so
much to the cloud as it does to issues about who owns what, and to what extent
they appear even in the pre-cloud environment.>>Katie Pierce Meyer: I
think there’s always going to be issues of who owns what,
but I think in most cases, it’ll come down to — you
know, those were the products of the firm, and so
that they own them. And they have then an ability to
hand over rights to those files. I don’t know — in the case of
where I work, I’m dealing mostly with printed documents, as
opposed to softwares [sic]. Tim has more experience with
software from this period. I don’t know if you’ve run
into this particular issue.>>Tim Walsh: I don’t think we
run into an issue of ownership with the — well,
there’s an issue — there’s a question about
ownership of software and transferability of licenses,
if we get old files as well as installation media
for old software. And that’s a thing that quite a
lot of people are talking about, who are quite a lot more
legally qualified than me. So I’ll just leave that there. But otherwise, I don’t
think much ownership issue on that level. One thing is that, you know,
there’s always a question, but I think this was true
before, of whether, you know, the architects can also give
us the engineering records and all that. And that depends
on their contracts, and how they’ve handled that. So that’s sort of a
case-by-case issue. Yeah. I think, in terms of
accessibility to content, though, it’s very similar in
that right now, I’m spending — my team is spending quite a
lot of time basically, like, disk imaging old media,
everything from, like, zip disks, and CDs, and floppy
disks, and SyQuest cartridges, and whatever as, like, overhead to actually getting
access to the content. And I think that
overhead is remaining. It’s just changing, where
now it’s going to be, you have to figure
out how to get into these third-party
servers managed by other people, and
get the data out. And, you know, it’s not just
us that’s dealing with this. The digital forensics community
is really seeing a change in how they work,
going from, like, evidence that’s physically
inscribed on a disk to transactions being
on third-party, corporate-owned cloud servers. And figuring out, like, how — you know, it requires a whole
change in the way that you work, but that’s just always true. You know, formats evolve,
and we evolve with them.>>Katie Pierce Meyer: And
I’ve had other concerns, I guess, lately. I’ve been thinking a lot
about privacy and security, and providing access
to just any sort of building documentation
in general. So this is something
I’d probably like to work a little
further on, to think about this
kind of issue. But I think that’s one
thing to keep in mind, too, is we’re always talking
about data about buildings, or about the built environment,
and ways that we might want to be sensitive to, or need
to be legally responsible for what we provide
access to in terms of privately owned structures. And that may have security
issues for, you know, individuals, or society
in general. So that’s, I guess, where
I’ll leave off on that.>>I’ve got one here.>>Hi, Kate, Tim, and Aliza. It’s Kurt Helfric
[assumed spelling] –>>Aliza Leventhal: Hi, Kurt.>>– and I just
want to thank you all for a fantastic presentation. I mean, I think you’ve really
shown us so many things that some people remember
maybe way back in the day, but you’ve given us a
really great overview and assessment from today. So congratulations on that. My quick question, Aliza,
is I think you’re spot-on about the contractual
obligations as a driving force for trying to get
some of these things in standards that we can accept. I do want to share
with you something — in my current position,
this isn’t true, but in — previously, at Santa
Barbara, we were very careful as collecting repositories
never to claim to be the legal repository for the materials
we were collecting. And I think — I
want to remind people that that’s still definitely the
way for research collections. The question I had,
really, is — I’m just going to
throw this out to you. Do you think it’s going to take
some serious lawsuits to actually be able to then
get these things developed, or is it something we can
really reach out and work on?>>Aliza Leventhal: So I really
hope it doesn’t take that, because it’s terrifying. But, for instance, the
$6.1 billion revenue loss that happened for
Airbus in 2006 — they had a problem with CATIA. They had two different
versions being used. One was in Germany, one was in
France, and then when they tried to build the plane, the
wires weren’t long enough. Whoops. And so, all of
these flights got delayed, and they had to, you know,
change a bunch of things over. And these planes were
supposed to be able to handle double the capacity,
and would be able to fly around the world
in one tank of gas. It was, like, you know,
this crazy awesome new plane that they were making, and $6.1
billion is definitely a reason why LOTAR now exists. Super. Thank you, LOTAR. But — which also
sets a precedent. It’s an ISO standard for
the aerospace industry. Sorry for everyone who isn’t
as much of a nerd as I am, or as excited about LOTAR. But that, like — so that
set a precedent for us. And so, luckily, we have someone
to talk to us today, here — yay — a little bit about
what that experience is. But I think that my
Chicken Little schtick for quite a while has been,
well, it already happened to somebody else, with
something that’s very similar to what we’re dealing with. Shouldn’t we just be able to now
proactively address this issue? And I’m not quite sure
we’re there yet, but we certainly have
the momentum now. And I think we’re articulating
it in a much more effective way to capture attention for this, and having the contract
deliverable also be pushing at the same time — we’re
hitting a critical moment where it’s now about
identifying what does that actually look like. Because, as Tim said,
STEP is a great idea, but it’s not really realistic. Especially — I work within
a firm where, you know — at any one time, one person could
be working on five projects — not the firm, one person. And that level of chaos and
hecticness means that expecting them to then apply COBie metadata on a daily basis seems
very unrealistic. And so we have to come
up with some way — again, hint, vendors
— that we should — the software should be able
to account for this for them. Because just like
Microsoft finally came and said, “You know what? We’re going to stop messing
with it every two years, so that you can have
forward and backward compatibility without massive data
loss or corruption.” And I hope that that’s where
we move, in that direction, so that there’s just a
little bit more safety and security in the files. The concern really is about all
this stuff that came before, because we’re at a moment where
we can help, as a groundswell, shift it moving forward
so that software’s better, but we still have to deal with all the crazy
that happened before. And that’s why we
have Tim and Katie.>>Tim Walsh: I do think that’s
an important distinction, though, and it’s one we
should keep in mind today, that there’s sort of, like — what are the best practices we
can lay down moving forward, and how can we try to
make it more likely that people will use
IFC and, you know, document their workflows? And thinking about contract
deliverables is a really — you know, it’s a really
important factor in there. I think looking at Scandinavia,
it’s also very interesting that, you know, there are
countries out there that are requiring BIM files,
not just for the deliverables, but also for, like,
competition entries. And it sort of changes
how things work. But, you know, in a certain way, we have all this older
stuff to deal with, too. I think it’s extremely
interesting. Maybe it’s because it’s what I
deal with the most, but yeah, we do kind of need two
strategies, at least. And so we’re almost talking about two separate
things at that point. There are a lot of
issues that continue, especially around file formats, and their interoperability,
and their longevity. But, you know, the solution
that we do for creative, small design projects that
never got built in, like, 1992 is going to be a little
different than the solutions that we have for, like,
massive infrastructural projects in 2018. And that’s okay, but it
means we’re kind of talking about separate things sometimes.>>Katie Pierce Meyer: That have
some continuity between them, hopefully, in the long term,
from a historian’s perspective.>>Is this already on?>>Yes.>>Hi, all. Thank you for a great overview. So you’re all practicing
archivists, as am I. I’m wondering, in light of the
many preservation challenges that you’ve outlined,
what kinds of awkward or tough conversations about
appraisal has this led to? And if you feel comfortable
offering examples, I’d love to hear about them.>>Katie Pierce Meyer: [Laughter] I’m actually
a practicing librarian. I’m an archivist in librarian’s
clothing at the moment. But I do have, you know, interesting conversations
about appraisal. Not probably as much as
these guys, but with one of my colleagues, Beth
Dodd [assumed spelling] at the Alexander
Architectural Archive, and we discuss frequently
some of these challenges. And I can’t think of anything
specifically off the top of my head, although
we’re currently — have been doing what a lot of
people are doing, is not taking in a whole lot of things as
of yet, and kind of hoping and waiting to see
what will come. We have gotten a few
collections that have, you know, a lot of digital
media, and we’ve had — we’ve been working with
students at the iSchool and bringing their sort of
fresh eyes to these problems. But we are still kind of hoping
for some ways to move forward. The argument I’ve been
making for quite some time is that no one’s going
to find a solution — no one person is going
to find a solution. No one organization is
going to find a solution, and that’s one reason
we’ve been in conversation for quite some time, and — I
have with others in the room, too, is I think we have
to do this collaboratively and creatively to address
both problems that Aliza and Tim just identified. And I think that’s the only way
we’re going to make any headway. So that’s my schtick, I guess.>>Aliza Leventhal: I was
going to say, hilariously, we were literally talking about
this at coffee this morning.>>Tim Walsh: Like an hour ago.>>Aliza Leventhal: Yeah. Because the appraisal
grid needs to be updated. Waverly [assumed spelling] and Tonnie [assumed spelling]
have a wonderful book. If you haven’t read
it, you really should. It’s basically the
architecture of records — architecture and construction
records management book. It’s great, and they have
an appraisal grid in there that is wonderful, but is
very focused on analog. And so the important
thing I’ve been discussing with Waverly lately is how to
update that, because it’s not that designers are
creating entirely new kinds of records — as in, like, the construction
process hasn’t changed that dramatically. It’s not as if you now
deliver a hologram of what
you’re producing. But even then, like,
we still have plans. You still have illustrative
plans. You still have renderings,
and perspectives, and elevations, and details. All those things
are still existing. It’s just understanding
what are the file types that are most likely going to be
the case, and what does it mean to say, “Keep forever,”
versus, “You know, I don’t need your
punch list for — like, after statute
of limitations.” Like, punch list is not that
necessary for me to hold onto. Could be terribly interesting
for research, but, you know, the decisions about what records we still
need are consistent; it’s the form that’s changed. Because instead of saying, “You
need the full set,” you now say, “I want the central file for
Revit and all of its extras.” And I can’t forget that
there’s a bunch of things that are connected to this file, because if it’s not
packaged correctly, having the Revit central file
and none of the families, like — so that would be like
not having windows or walls or furniture, which are
what families are in Revit. So then you just kind
of have, like, a box. That’s the kind of
stuff that I think needs to be sussed out more. And so the appraisal
conversations I’ve had with people in my firm haven’t
gotten quite to that level yet, because we’re still waiting
for our files to be old enough to need to be assessed that way. But when talking to project
teams, like, it’s really — there’s a difference
between your working files and your deliverable, and
the deliverable’s the thing that we’re legally required to
provide if something happens. And so making sure everyone
in the room understands that, going back to Kurt’s point of,
like, the legal requirements for the files, is
really important here.>>Tim Walsh: Yeah. I think there’s a couple
things to consider. One that may not be so
obvious for the people who don’t do sort of,
like, digital preservation in the cultural sphere is the
significant overhead that goes into preserving files
for the long-term. Like, basically all of our digital preservation
best practices require lots, and lots, and lots of
replication of files. So if we’re talking to a
donor, and they’re like, “Yeah, I want to give you everything
that’s on my servers, and it’s 10 terabytes,” that 10 terabytes
becomes much more in our underlying storage, when we account for the
fact that files are kept in the original format, and
also migrated to file formats that are more likely to be
usable over the long term. And then all of those
things are replicated across different media types,
and geographically replicated, so that we don’t have a
single point of failure. We can do things like audit the
files over time, restore them from backups if they change. All of that has a
significant overhead, and it’s significantly expensive. So it means that we have to
be more careful, versus, like, a paper drawing that goes into
a climate-controlled vault. I think it, again, sort of just
depends on the material, though. Part of it is also — like, I
agree with Aliza that, you know, thinking about sending —
updating the appraisal grid, so that we can think about
what’s actually worth keeping, is really important, that
there are a lot of things like externally-referenced
files and dependencies that you don’t want to break. So that means that you have
to know what you’re doing when you’re interacting with the
material, but at the same time, with paper records, we
never kept anything, either. It’s sort of the difference
between an archivist and being a hoarder, is,
like, you identify the things that have permanent
value, and you keep those. But we don’t need to keep
every draft of everything. We don’t need to
— so this diagram that Aliza showed, it’s big. It’s complex. It doesn’t mean that we have to keep everything that’s ever
been created in perpetuity, especially in this world where things are,
like, more networked. The files are bigger. So I think we have a lot of
critical reflection to do about that, and it’s really
good to have a standard like the appraisal grid. I also think it just
depends on your institution. Like, who are your researchers, and what are they
going to be doing? If you want to make sure that you have plans
around, PDFs might be fine. If you want to make sure
that people can go in and reuse the data, then
you’re going to have to keep something a lot heavier. So it all really
sort of depends. I’ll say for the old
stuff, our strategy, largely through a project called
Archaeology of the Digital that the CCA did with Greg Lynn
over a period of about five years, was basically to
identify the key projects that we thought were
interesting and innovative from the early days
and go grab everything. But that’s a lot more feasible when everything is a
computer that’s been sitting in someone’s closet for
a while, or, like — you know, I think the largest
one is probably about 500 gigs. A lot of them are,
like, 10 gigs. So there, we can just kind of
go in and vacuum everything up, and I think for the old
stuff, we kind of need to do that when we have the chance. Because, collectively, collecting institutions
should’ve been doing this for, like, 10 or 15 years by
now, and it’s kind of close to too late for some of it. Like, the media’s
already degrading. People already retired,
moved on, whatever. For the newer stuff, we have
to be a lot more careful. Like — because when we
talk to donors now, yeah, usually the assumption at the
beginning on their part is that we’ll want everything, because we’re collecting
the total archive. But we can’t afford that. It’s not feasible. I don’t think it’s good
for our researchers to just present them with,
like, you know, 700,000 files. No matter how you index
it, that’s not going to be helpful to anybody. So, yeah. But it’s
a work in progress.>>Katie Pierce Meyer: Yeah,
just to add a little bit more, I agree with these guys,
but this is where — I also have been having a lot
of conversations with a lot of different people,
including some people in firms, and colleagues elsewhere. And I just like the idea
of getting it out there to some extent that everyone’s
always appraising their stuff, all the time. You know, long before it gets
to the point of a curator and archivist actually having
that conversation with someone, decisions have been made
in a firm about records that have implications for
whether we can save it at all. And so there’s some
sort of education, or conversations I want to be
having with architects to think about how they’re
keeping their own things. What has value in the way
that they’re doing their work? And that can inform what we
choose to take on later on, and I think it is
very context-specific. Like, I think it’s going to be
different in different places, both what firms you’re looking
at, or, you know, who — what individuals, and also
the collecting repository. And so I think there’s
also some opportunities for thinking across
repositories. How we might be doing
this kind of work, and what we’re capturing about the built environment
generally doesn’t have to be the same in all cases,
but it should be able to speak to each other in some ways.>>So thank you,
Katie, Aliza, and Tim, for our wonderful
kickoff for this morning. The history — I learned
a tremendous amount of material myself. We are right at 10:20, so now
is the time for a morning break.>>Kristine Fallon:
Hello, I’m Kristine Fallon, a moderator for this session. I’m a fellow of the American
Institute of Architects. I’ve devoted my 40-year
career to the topic of computer applications
in design and construction. I also did a very early
study and proof of concept on collecting, archiving, and
exhibiting digital design data for the curatorial
department of architecture at the Art Institute of Chicago. We have two presentations today. Our first speaker will
be Greg Schleusner, AIA. He’s the principal and director of design technology
innovation at HOK. He’s also director of
Building Smart Initiatives for the BIM Forum, and leader of
the technical room within the — within Building Smart
International. Our second presentation was
put together by Rick Zuray, who’s a technical principal at
Boeing involved in quality, CAD, CAM, CAI, industry data
standards, and R&D. He co-chairs LOTAR
International Consortium, developing aerospace
data standards, and he represents Boeing
on related ISO initiatives. Unfortunately, Rick isn’t here, so his presentation
is being done by –>>Phil Rocher [assumed
spelling].>>Kristine Fallon: Phil Rocher,
who’s also involved in LOTAR. But first, Greg Schleusner.>>Greg Schleusner: Thank you. So I’m going to be — we
had a prep call for this, and I thought the comparisons
and questions that came out of it were by far
the most interesting. So my presentation is really
to set the stage of — you know, sort of making sure
it’s clear what we’re doing today at HOK, the things
we’re thinking about. But I think the meat of
this discussion really comes from the interaction
between those two things. I’m explaining why my
presentation is so short. I’ll put it that way. So a little bit about HOK —
so not only are we, you know, sort of a large architecture
engineering firm, we showed up on that last set
of slides in the ’70s and ’80s. I would have to check,
but I’m not certain if we ever actually sold
a single seat of HOK Draw or Draw Vision, which
were the UNIX-based and the Windows-based
applications that we had. But in a previous life, and up
until I joined HOK in 2008 — 2010, we had people using
these systems still internally, because they liked them better than the other things
we had access to. So we do have a very
interesting problem ourselves with archival data, but
we were smart enough to make it interoperable
with DWG a long time ago. So HOK is, like I’m showing on the slide here,
about 1800 people. We work on pretty
much everything, save maybe industrial. That’s probably the only thing
I’m aware that we don’t do, and we’re spread out across about
24 offices around the world. So thanks for the introduction. I like cats is the only thing
I’ll probably add on there. I actually don’t own a cat, but that’s secondary
— separate discussion. So I’ve been around
at HOK since 2008, which is sort of
a weird time to join a firm, given the economy at the time. But based on the previous
discussions, it was, you know, sort of made clear to me that
we used to have a librarian. We used to have an archivist, but now we just have
technologists. And we’ve not figured
out how to, you know, sort of move past that, and I actually think it’s
a very interesting part of the discussion. Okay, so it was very clear, and I think the presentations
helped set the stage for this: we definitely
think in projects. We don’t think about
a big strategy of making sure we have a continuum
of information across projects. There’s a lot of people
that think that, oh, we did that detail
in that project, and that certainly
does take place, but that’s individual knowledge. That’s personal knowledge. It’s not something
that’s institutionalized, memorialized anywhere. And frankly, the technologies
don’t exist to enable that in the first place,
and I think the key thing that was definitely
made clear on the — in the 2010s is this
unique production process. We often use the same
tools project to project, but there is certainly no
such thing as a standard when it comes to what
information is captured, how information is developed. We’re very much a right tool
for the project sort of firm. That certainly doesn’t
make it easy for us on the technology side to
make it easier for the teams to interoperate between
platforms. And that actually is — the linked discussion
we’re having here is, the business drivers are
starting to make sense for us to think about making
data more interoperable, and what are the things that we
need to start to do to do that? And that, to me, at least,
is a part of the discussion about how an archival
process could become easier, because I generally look
to the, you know, sort of aerospace
industries and others. And I think the business drivers
have made it easier to start to have these discussions
right now, where before we were unable to find the
benefits without starting to think long-term. So from a — just a
general way of working — and I think we can go in
many directions with these, but 95% of our projects
are model-based. We’re moving to a
global file system. So the reason I’m bringing
that up is, from an archiving-as-a-result-of-work strategy,
we’re moving to a system that will give us
backups every 15 minutes. In everybody else’s
minds, that’s perfect. I don’t have to think
about anything anymore. I just work, and it’s backed up. And I have a million copies of
stuff every time it’s changed. And it certainly
will only get worse. We’re not — from the big
design and engineering firms, they’re certainly not the first
entity to start to do this, but the notion of, you
know, every 15 minutes, you have a snapshot
of everything you did for perpetuity — and that’s
sort of our view of — it’s a short-term view. But that’s definitely
the way we’re going. The idea of archiving things
just doesn’t make sense to people. So it’s — well,
we did the work. It’s saved. I can go back in a slider,
and see where it was in time. Whether you have the
information around it that describes its state, and
which — you know, because it’s, you know, 9:45 or 10:00
a.m. three years ago, who could possibly know? We are starting to do — the popular term in the
industry is data lake processes, but some approaches
are to use data lakes and just take everything
and put it somewhere, which is very much an archive-now,
process-later data strategy. Because we’re starting to
see a business driver — and this is the piece
that I’m most focused on, is a business driver to
making the data interoperable or accessible. So, simple example. We get asked by clients
— we do hospitals. What does the OR look —
what do these ORs look like in the last 20
projects you’ve worked on? This is a three-day
intern question, typically. That should be a, you know,
half-hour, sit down in front of a computer, and
write a query question. But that question — certainly
we can have the non-geometric information, but then
you’ve dissociated it from the actual design. So from a technology
perspective, we’re starting to look at tools that will
actually allow us to ask that question, and both
get the geometric result and the information. And that’s, to us, a very
beneficial way of, you know, sort of as a firm our size, thinking about that
sort of problem. So we can certainly go into
that further in the discussion. And that — there’s a
question that always comes up, because buildings — or, HOK was
one of the original founders, along with others, Autodesk —
I’m blanking on a few others. Honeywell — of the IAI,
which became Building Smart, about whether or not our
processes are IFC-driven, you know, using these
open tools. One of the things that
we’re finding is a lot of the processes don’t
result in the use of them, so we’re starting to have
that discussion as well, about whether we need to be
more proactive with the industry to make IFC part of that. So that’s a — would come up
in a question, so I just want to make sure I mentioned that. So, last one. I like cats. And this is complete
post-rationalization. I just like this one,
but particularly, I have a technique I
use in presentations, which is actually just
combine the topic of a slide with the word cat, and
you search the internet, and you can always
find a result. There’s always something in
the image-based results. And this one in particular was
obviously laser-scanning cats, but it’s very meaningful
— it’s actually, you know, relevant to where we are
thinking about going — within Building Smart, and
starting to get back into it at HOK — with the concepts
around linked data and semantic web
technologies as a means to help describe this context. One of the most often used
linked data examples is “cat is an animal” — you know,
sort of the diagram. So that’s a — like I said, a
very good post-rationalization. So I’m going to stop
there, and we’ll go on to the next presenter. And we’ll have a
nice conversation. Thank you. [ Applause ]>>Phil Rocher: I’m
standing in for Rick Zuray. I have worked with Rick for — I don’t know, probably
20 years or so. First, I have to mention to you,
though, if you feel a rumbling, it’s not an earthquake. It’s my stomach. So — something that Aliza
had mentioned earlier — she had mentioned a
problem that Airbus had had with their computer-aided design
system, CATIA from Dassault, and the billion-dollar — many-billion-dollar
problem that it caused. And Airbus is one
of my customers, as is Boeing as well, obviously. There were other surprising
things regarding the A380 aircraft. A gentleman I used to work with
who worked at Pratt & Whitney, who was the engine partner on
the A380, one day, he calls me. He goes, “They had the first
A380 test flight today.” And I was going, “Great! Everything went all right?” He goes, “Yeah.” He says, “And the plane was
only 20 tons overweight.” No kidding — here
we are in, you know, the computer-aided design
and manufacturing generation, and the aircraft was
20 tons overweight. And I was going,
“It even took off?” He goes, “Oh, of course it did.” He says, “You know, there’s
lots of room for error.” And that — 20 tons, you know? How many cars is that? That’s a lot of cars. But anyway, despite that
we’re in the digital age, and the example of the wiring
harness snafu at Airbus, using the same CAD system
at two different locations, and having problems, is
still pretty typical. I think my message today has
to do with interoperability through the use of standards. The primary standard we use
— not the only standard, but one of the primary
standards we use in aerospace and defense is the
ISO STEP standard. And I’m going to talk
about two use cases today. One has to do with
collaboration between partners, and Pratt & Whitney and
Airbus are partners. And then also between
OEMs and their suppliers. So I guess — we’re — y’all have probably seen
the model-based enterprise pyramid before. You know, we’ve changed it to fit our particular
mode of operation. From an archival standpoint, the main three things there are
the information requirements and business rules, the
process and systems, and the repository — the
retention period and the format. One thing that’s very important
in the A&D industry to archive, and is — well, I’ll
just go ahead and say it, and then I’ll talk
about it a little bit, but it is design intent. And design intent takes
many different forms. You know, some of them are
relatively easy to digitize, like geometric dimensions
and tolerances, what we call product
manufacturing information today. You know, maybe some of the
systems engineering data, maybe some parametrics in the
model, things of that nature. And others aren’t quite so easy
to digitize and then archive, or pass on to one
of your partners. The aerospace industry, just
like the building industry, has very complex products. You know, they have — you know, a simple product only
has thousands of parts. A complex product
has tens of thousands of parts, maybe a million. But as the products get
more complex, it’s — also becomes more complex on
how do you retain that data for many years down the road. I don’t know if I like the
title of this slide or not. I would’ve called it “Just
an Interesting Slide,” but you have the various
Boeing aircraft programs across the top there, the
767, the 747, the 777, and the 787. And the top half
of that chart kind of represents the
as-designed view, and the bottom represents
the as-manufactured view. And if you look at
the numbers — I mean, look at the data
volume in the as-designed and the as-manufactured. Those are significantly
different numbers. One other thing I’ll point
out is that, you know, you’re looking at the parts
per airplane there, and you go, “Well, why did it go
down from the triple 7 to the 787, anyway?” And that’s because of the
greater use of composite parts in the later aircraft. So that causes it to
be down — to go down. One last thing on this,
and this has to do with the complexity
of managing data. And this is a general
comment, but up until 2005, the amount of data
that had been created in the world was
something on the order of 500 billion gigabytes. And very soon, probably
— maybe we are now — we’ll be creating that much data
every 10 minutes in the world. So — — early example of data
exchange, the Rosetta Stone, the same information in
three different languages. Evolving technologies — and this is kind of
aerospace-centric, but something I’m going to
throw in here is the requirement by the FAA to provide the
type design data to them, or to retain it for
long periods of time. And the first generation
was the engineering drawing, and the authority
for the FAA was the 2D drawing. We’re kind of at the end of the
second generation right now. You know, we are in a 3D world. We were talking last night
that the production folks tend to still be in the 2D world. And I see a lot of
my customers — companies like a Rockwell
Collins, or a Honeywell, when they provide data
to one of their suppliers to manufacture it, they’ll
provide them both a — maybe a STEP file, or a CAD
model, and a 2D drawing. But anyway, we’re certainly
moving toward that 3D world, and we are in discussion
with the FAA about a 3D authority
for this data. Something else that’s
very important to us in the A&D industry is
controlling that product data. And, you know, we used to be
configuration management-based, which was, you know,
somewhat paper-based. You know, it was kind
of haphazard, really. The engineering intent
or the design intent was in multiple locations. We moved on to product
data management, which was really a
file-based notion. You know, the engineering intent
or the design intent was still in multiple locations. And now, we’re moving into more of a product life cycle
management scenario, which is more of a
relation-based system, and one nice thing about this is that we can gather all the
design intent into one location. Having to do with
product life cycles — and, you know, different
industries have different timelines regarding this. Typically, point releases of software upgrades
are probably every three to six months. You know, major releases
are one to two years, maybe a little longer. Computer processes are
changing all the time. People are coming and
going at the company. I kind of found that
hard to believe, the one that’s marked
careers there at five years. And I was going, “Why
would somebody want to leave a company like Boeing? I mean, you know, great
benefits, great pay, all that kind of stuff.” And I looked it up, and sure
enough, it was 4.6 years, was the average turnover
at Boeing, which I found kind of amazing. But the design — talking
about the need for archival in the A&D industry,
look at the B-52. The design of that
started in the late ’40s. We’re still flying it today. I don’t really see it
going away anytime soon. And we have other commercial
and defense products that are in that same life
cycle — the 707, for example. They’re still flying. From the collaboration
standpoint, Boeing — as are a lot of companies — and it’s not just an
aerospace thing, obviously. I mean, look at BMW. They make cars all over the
world, and oddly enough, the largest BMW manufacturing
facility is in South Carolina, where I live. And I heard that, and I was
going, “That’s just amazing. The largest BMW facility in the
world is in South Carolina.” But Boeing’s a 24-hour operation
due to their global nature. They have hundreds of partners,
and we use the term partner to mean like a Pratt
& Whitney, or a GE, or a Rockwell, or a Honeywell. And then they also have
thousands upon thousands of suppliers, you know, that they may make one
little bracket or something. So anyway, there’s certainly a
need for kind of a standard way of communicating among all
these partners and suppliers, due to the global
nature of Boeing. So in steps the LOTAR project
here, and actually, this is kind of the introductory part to it. But we work with a number
of different organizations. We obviously work with the
ISO to create the standards. We work with a company — or not
— an organization, excuse me, SASIG, who has a product data
quality project going on. One thing that is very
important in the A&D industry, when you talk about archival,
is the ability to verify and validate it before
you archive it, and also, the ability to verify
and validate it as you’re retrieving it. And as an example of that, I
had mentioned that, you know, kind of a cornerstone for us
are the ISO STEP standards. They didn’t really
have much verification and validation stuff in them. We’ve extended the data models
that we use considerably, and I would imagine that about
25% of the physical files that we write out in the
STEP format today just have to do with
verification and validation of the data that’s in the file. So, like I said,
that’s very important to the aerospace industry. I talked about the
model-centric data management, the model-based enterprise
a little earlier. NIST is certainly one of
our partners in all of this, and I think they’ve been very
heavily involved with the IFC and BIM over the
years, as well — probably some of the same
people that we deal with. So it’s kind of a global
community that’s working on this, not just, you know,
the Boeings, and the Airbuses, and the Lockheeds of the world. LOTAR Project — boy,
where to start on this? You know what? I’m going to skip this slide. I don’t like it. This is — remember, this
isn’t my presentation, so — these are the —
you know, the — I’m not going to spend
any time on this — the typical kind of questions
you ask about, you know — when you’re getting ready
to archive data, you know, the when, and the what, and the
— all those kinds of things. This is a timeline of the
LOTAR Project or program. In the late ’90s, there
were independent efforts in the Americas and
in Europe having to do with long-term preservation, you
know, for aerospace and defense. The next key date on that
happened down there in 2009. Excuse me. We started a funded
project, the LOTAR Project, between the AIA and the ASD. It’s a different
AIA too, by the way. Why are there two, anyway?>>Confusion.>>Phil Rocher: Which
one came first?>>Ours [laughter].>>Phil Rocher: My
understanding was Wilbur Wright.>>Predated Wilbur Wright.>>Phil Rocher: Oh. Yeah, I guess you do. Anyway, 2009, we started
the funded LOTAR program, and I’ll talk a little bit more in
a minute about how we work. We’ve been spinning off work
groups to work in specific areas that are of importance
in the A&D industry. And I’m going to touch
on that a little later. Another kind of cornerstone for
us is the OAIS reference model for the LOTAR processes. I’m sure y’all are — some of you are intimately
familiar with that. And then, as the basis of
that, as I said earlier, we use the ISO 10303
STEP standard. The work groups that we
currently have in LOTAR — the newest one, engineering
analysis and simulation. Wiring harness — who do you
think drove wiring harness, anyway? It was Airbus. But metadata for archive
packages, 3D digitalization. Oh, my goodness. I don’t know if I can do it. Composites, advanced
manufacturing, mechanical, and product data
management, or PDM. And this is just
kind of a mapping between the various work groups. In the blue there in the middle
— that is blue, isn’t it? I’m colorblind.>>Yeah.>>Phil Rocher: Are the LOTAR
parts that correspond to it. The LOTAR parts kind of talk
about what needs to be archived, and how to archive it. And then the corresponding
ISO standards, down there at the bottom. You can take a look at the
LOTAR homepage at your leisure. The impact of interoperability
— and then, this is
primarily from the — I’d call it the design and
collaboration standpoint, but delivering models to the customer using
different systems, that costs an additional
53% — or, you know, causes a cost increase of that much. Supplier communication — you know, all those
things cause the price of a product to increase. And, you know, you look at the
price of a 787, or an A380, and it’s just considerable —
these are based on some data that NIST had collected
a couple years ago.>>We have these tricks, too.>>Phil Rocher: It’s
throughout industry. Benefits of using
LOTAR standard — process security for using
international standards, and the aerospace and defense
authorities accept the workflows — the type authority,
based on our process, and the way we store data, and
what — the format of the data. The bottom point there
— the bottom bullet — LOTAR is kind of a funny
thing, at least to me. You solve a lot of the
problems of data exchange and data sharing through the use of long-term archival
techniques. They share a lot of
the same aspects. Gee, and I thought I had
one more slide, but I don’t. But anyway, that’s
my last slide. [ Applause ]>>Kristine Fallon:
So we’re moving right on to the second section.>>Ann Whiteside: This is a
follow-on to what we just heard, where
we’re going to hear from three different
perspectives on some of the specific issues
in the data timeline, in terms of the work that
is done in different kinds of firms and at the GSA. And we have some questions
to prompt the discussion, and I’ll start with questions — I’ll start with a question
directed at one of our speakers, and then each of them will also
have an opportunity to respond. We have with us Noemie
Lafaurie-Debany, who is at Balmori Associates. I can say a couple
sentences about each person, but their bios are on
the website as well. Noemie played a role
in activating BLA Labs within Balmori Associates to
further push the boundaries of architecture, art,
and engineering — green roofs, floating
islands, temporary landscapes, forms of representation,
zero-waste city, and other things as well. Mark Rylander — in addition
to the bio on the website, Mark is a solo practitioner
with three active partnerships. He formed Rylander
Hoene Architects to design a friend’s school. He works with Carrie Moran
[assumed spelling] AIA on residential projects,
and he serves as an owner’s representative on a large development
project in Texas. He’s also currently
freelancing with a landscape and land planning firm in
Charlottesville, Virginia. And Nick Gicale is a program
manager and product owner at the General Services
Administration, focusing on the implementation
of IT systems that support GSA’s role in design construction
project delivery. Nick has been with
GSA since 2003, serving as a project manager on
complex construction projects for many years before
transitioning into his current role. And I realized, I
should introduce myself. I am Ann Whiteside. I’m the assistant
dean and librarian for information services at the Harvard
Graduate School of Design. I’ve been working on and
thinking about issues related to preservation of digital
design data for a decade now, and I’m pleased to be
participating in this symposium. So shall we get started?>>Mark Rylander: Sure.>>Ann Whiteside: All right. I’m going to start with
a question for Mark. What are the challenges
design software poses for your firm’s workflow
and design processes?>>Mark Rylander: So
as a sole practitioner, every project is a
little different. We figure out what we’re going
to do based on the client and the partners that we’ve put
together for a given project, and make do with what we have. And a moment ago, I was trying
to — watching Phil’s slides, and thinking of what we possibly
have in common with Boeing. And I looked up at the 767 that
was done with paper drawings, and then down at my
pencil, and I thought, yeah, we used to do that. We can do that, too. But by and large, we — our practices represent almost
the whole history of software that anyone on our team has
worked with, and our work ranges from using a lot of SketchUp,
that small companies use, to AutoCAD and Vectorworks,
and rendering software. I’ll just stop there, because
there’s so much I could say about the software that we use. The technical challenges you
ask about really have to do with the complexity and
the purchasing agreement. So, you know, we generally
have to buy subscriptions. We have to, you know, own
a lot of different kinds of software simultaneously,
and year by year, you have to really buy
more than you need — a lot of feature-laden
packages, like the Adobe Suite, where we only use pieces of it to put together what we
need for presentations. So, you know, what
we would hope to see in the future is
something that’s simplified and streamlined.>>Ann Whiteside: Noemie,
you want to take it up?>>Noemie Lafaurie-Debany: I
think that’s also something that Aliza had started to
discuss earlier, is this idea of the subscription
base of every software. We used to have those disks
of 3D Studio Max, and AutoCAD. And if for any reason we
had to open an older file, we could always reinstall
that copy, but now that everything
is subscription-based, if we have — in the
case of 3D Studio Max, we no longer have
a current version of the software at the office. If we need to open an older
file, we are incapable of doing it, even
though we created it, and it’s our information
within that file. And this idea of 3D Studio
Max was for a new city that we had designed in Korea,
and the project was going to be put in an exhibition
in Los Angeles. And we were incapable of
giving our own project, which was extremely frustrating. Of course, we have
the deliverable that you had mentioned earlier. We have PDF, and PowerPoint,
and whatever, but if — in the case of the exhibitions,
they wanted to do a 3D — printed 3D model of that city. And it was just impossible
for us to see our own data.>>Nick Gicale: Yeah, a couple
thoughts from the GSA side. We have two primary
challenges kind of implementing design software
and related software at GSA. One is that, since we
are a federal agency, all of our software that we
purchase, and that we put into our agreements with our
contractors and architects, has to comply with federal
IT security requirements, which can be quite onerous. And so we’re always
balancing kind of giving our teams
all the tools they need to get the job done,
working with our contractors and architects to kind of be
open to software that they use, but also trying to comply
with FedRAMP and FISMA. As more and more applications
go to the cloud, you know, we’re trying to work with
industry to make sure that those applications
are working in alignment with our security requirements. The other thing that we run
into a lot, also, is that — since we’re an agency
that has regional offices across the country, and we run
about 10,000 projects a year, kind of every region is
basically responsible for thinking how to
run their own project. And so, getting the level of
consistency in terms of inputs across all those project
teams, across our contracts, is something that, you know, we’re always striving
to do better at.>>Ann Whiteside: Given
what you’ve all just said, are there some compelling — some compelling needs
of future software that would help your design
processes and workflows?>>Mark Rylander: You know, one
of the frustrations I’ve had, just in terms of presenting — and I’m frustrated by
the feedback here — is the ability to present in
a fluid way to clients on top of the drawings that we have. There’s an increasing dependence
on telecommunication, and — you want me to be closer to it? Okay. And — you know,
so it’s not uncommon, even for a sole practitioner,
to have a video conference. And I think, given the
dependence on drawing, having an integrated
touch-screen application — I sort of see the
future of the iPad Pro. Like the Toshiba tablet I used
to have where you can draw on top of your own drawings
to present to a client, but do it in a really fluid
way that’s touch-sensitive. And that kind of technology
would be really valuable, I think. I’ve seen — as it
relates to SketchUp, to sort of incorporate the —
some of the technology I’ve seen in Maya that General
Motors uses, where you can actually use your
freehand drawing to carve away at the model would
be really helpful. And so I’ve always imagined
somehow being to do — being able to do something
like that on my own desktop.>>Nick Gicale: I think, as
an owner, we want, you know, more integration between design
software and kind of the rest of the ownership life cycle. I work mainly in the
project space, but obviously, we have building maintenance. We have portfolio decisions
that are being made. We do a lot with lease
facilities, too, and right now, kind of all of those
things are segmented in our model of things. So bringing that all together,
either through integration, or additional functionalities
in software, is something that we’re eager to work on.>>Noemie Lafaurie-Debany:
And something that was discussed earlier this
morning, too, was processes, and in our effort to prepare
our archive to be sent to Yale University Library, it’s
something that is very important to us, is how do we
document the process? How did we get to that
final phase in the design? And in Microsoft Word, you
can track your changes. We don’t have HOK’s
15-minute backup system in our small practice,
but adding something, or doing online GoTo Meeting,
or some sort of conference call where you can mark things up. There is no record of any of
this, and maybe a software that would somehow
help us document, not only the final project
and deliverables, but processes, would be interesting to us.>>Ann Whiteside: Do you have
somebody in your firm that — or somebodies who are
responsible for that — for any documentation like that
currently, without the software that you’re looking for?>>Noemie Lafaurie-Debany:
So it’s a very — we are preparing the archive
and the legacy of Diana Balmori, the founding principal
of our firm, who passed away a year ago. And we have started this
process probably five — five years ago, or so. And the archiving process of older projects is
informing the new projects that we are doing now, and how
we should keep record of them. But at the same time,
we are a small firm, and we don’t have a
designated person to arrange and organize all this data. So it’s kind of left
for all of us to do. We have a set of standards that are never really
followed, or a naming system. It is just too hard
to keep and maintain. So we all do a little part. We all wear different hats, and
spend a couple of hours here and there preparing those
files to be sent to Yale.>>Ann Whiteside: How’s that
work in your firm, Mark?>>Mark Rylander: On slightly
larger projects, or more — for more sophisticated
clients, you know, our archiving process
tends to be a PowerPoint. Having a sequential story
that begins as a storyboard in the office, and ends up
being, you know, the best way to explain how you got
to where you did kind of simultaneously presents
the work you’ve put together, and records the thoughts
that went into it. You can throw sketches into
the inside, and it also kind of documents that a
phase is complete. So, you know, often, a really
well thought out PowerPoint, you know, provides
a kind of closer that a big rolled-up
set of drawings doesn’t. Because it forms this record
of design intent as a story, and about the only
thing that would improve on that would be — you
know, I think we all come up against some of
the fundamentals of operating systems, like
just opening images in Windows, and the ability to kind of
quickly scroll through images. So, you know, I would love
to be able to have software like Portfolio that — Alias
Portfolio Wall that Alias, now Autodesk, has, where
you can pull out the images and arrange them on a
desktop, and not open a folder. Because the more organized
you are in archiving, and putting things in folders,
the harder it is to figure out — where was
that great image? Which meeting was that that we
— remember that sketch we did? And which presentation
was it in? Oh, it was in 17-11-15
underscore, you know, client presentation three, and
I think it was in the middle. And by that time, you know, you
get a phone call, and you’ve — you’re unable to sort of connect
back to the original idea. So getting access to images
really quickly and sorting through them, you know
— a dream since the origin of Windows — still remains
something that’s a little out of reach for all of us.>>Ann Whiteside: Interesting. Nick, do you want to talk
about that a little bit?>>Nick Gicale: Sure. At a project level, we try to collect everything that’s
produced by our project teams, and they — we’ve taken the
strategy to kind of worry about how to get to the
access of that data later. But for the design
phase, I mean, we kind of collect
every iteration of the formal design submissions that are specified
in our contracts. Once we get to construction, our existing enterprise
project management tool is built to collect all RFIs and
submittals that are generated on the project, as well as any
reports and other correspondence that happens during the
course of a project. So right now, at a project
level, we’re trying to track all that throughout every
phase of the project. I think our bigger issue is,
after that project is over, you know, how does that
stuff get maintained? Does it get maintained well? Somebody mentioned
earlier about having to — you know, what’s the
right level of detail to have people enter
data, drawings or other, you know, artifacts. And we struggle with that
a lot, because, you know, resource-wise, our building
managers are probably managing 40-plus projects at a time, while also taking care
of their buildings. So they don’t have
time to kind of — when a paint color
changes, or something else, to update that in the BIM file. So, you know, what do
you do with all that data that you need, but don’t
have the resources to manage?>>Ann Whiteside: You’ve
each talked a little bit about standards,
and I’m wondering if we can draw some more out
about standards, and how — what the role is of external
guidelines, or standards, or vendor-neutral file
formats have in your workflow, and why they’re — and
if they’re challenging, how they’re challenging. If we could draw out
a little bit more about that — do
you want to start?>>Noemie Lafaurie-Debany: Sure. So maybe I could start by introducing the
process in the office. So Aliza spoke a
little bit about it, but in the schematic — again,
not every project is the same, but in most cases, the
schematic design phase, concept design phase —
in the landscape field, we’re going to want
to work in 3D. And we are most likely going to
be using Rhino to think of it. So different slope, and
berms, and recessed areas — after we’ve started to
think about volume — and that’s the sort of file that
we’ll exchange with the architect. That file is very easy. It doesn’t need to be packaged. All the information is
contained with that file, so it’s very easy to share. After that, when we move
into the design development and construction document, at
that moment that we’re going to go into the 2D and start
documenting our design in AutoCAD. At that moment, we are
going to start using — we’re going to set
the files with — following the format
of the deliverable. Each of those files is — in the paper form — just one
sheet, and within that sheet, we are going to link in,
or add, some xrefs, which are references
to other files of that same AutoCAD format. And we are going to be able
to work on that sheet file and on those xref files, with
multiple people being able to touch different files. As landscape, for example, if we
do a project with an architect, we are going to add
their building as an xref to our file. If, at the end of this process,
we do not package this file, we are going to lose all the
different linked files to it. Or, in our office, all the
files are saved on the servers, and the server is backed up
in two different locations, on the east coast and the west
coast, and we can get copies — I think we are saving three
copies a day of those files. When the project
is sort of done, we move it into what we call the
archive partition of our server, and in moving files — and that
archive partition is not backed up as often. When we move those files,
if we did not package them, then we are going to have
to spend 10, 15 minutes, maybe even more, trying to
search those linked files to be able to have
— I’m so sorry. It’s very complicated. But it’s just — it’s what we
need to do, and we often — because we have — working
on 10 different projects, something’s always — one
deadline after another deadline, we often forget that one step. And then we regret it when, 10
years later, the project is back on track, and we have to
find those original files.>>Mark Rylander: That all
sounds very familiar to me, since one of the most recent
projects I’ve been working on is with a landscape
firm collaborating with other freelancers. And it’s interesting to see
just — it’s kind of a miracle, in a way, how a group of people who haven’t really
met each other manage to coordinate a site plan at
a landscape architecture firm without having a manual. And it’s very complex, because
the survey will come in with all of its layers, and you’ll
create these external references from maybe a Google Earth image
that you put in on one layer, and the GIS information
that shows county boundaries for sensitive areas
and steep slopes. And by the time you’re
ready to draw a rectangle for where the building
goes, you’ve got a really, really complex file, and the
way you even edit any part of it involves a lot of care. And if someone comes in that
is in a little bit of a hurry, and puts one of those references
in a folder, or worse yet, somebody who’s really into
filing systems and thinks that the names of things
should be different, and you change the name
of an image that — it’s true with the
Adobe products, too. You’re in the image folder, and
you change the name of an image to give it a little more
clarity — instead of scan 32, it becomes, you know, barn. And suddenly, it’s unlinked,
and no one can find it, and it doesn’t come out
on your presentation. So it’s a very — it’s a
very, very methodical process that many of us, you
know, became designers because we weren’t really that
methodical to start out with. And so we’ve had to learn
how to be really methodical, and I almost — I picture it. It’s like the — Daniel
Kahneman’s book, “Thinking,
Fast and Slow,” that you
in this sort of system one, and everything’s intuitive, and
we’re shooting from the hip. And then, when it comes time to make sure our external
references are all filed correctly, we have to
be really methodical. System two.>>Nick Gicale: Yeah, at
GSA, we try to stay away from dictating how designers
manage the CAD files, and how they construct
the drawings that they’re working on. We do have — obviously, we’re
a large federal bureaucracy, so we have lots of standards
and guides that are out there. P100 kind of dictates all of our
requirements around construction and design, and also some deliverable
requirements. We do have a BIM guide
that’s out there, that helps to define kind of, if
we are using BIM on a project, what the final deliverables have to comply with, and
stuff like that. But a lot of the time, you
know, again, we’re dealing with thousands of contractors,
thousands of architects. So we try to stay out of the
how you do the job part of it, and just try to kind of
manage the end product by our contracts, and
satisfying, you know, what those deliverables
need to look like.>>Ann Whiteside: Are there
other aspects of your workflow that you think are helpful
to this conversation, in thinking about
standards and software that we’ve talked
about this morning?>>Mark Rylander: Oh, sorry.>>Noemie Lafaurie-Debany:
Sort of a — I mean, it’s parallel to the
design process, but in preparing for the documents
to be sent to Yale, something that we think is
very important is writing a narrative, which is a little
bit similar to the PowerPoint that Mark was describing. But that will tell you, oh,
in 2006, there was the — yeah, this huge economic
crisis, so yes, the budget had to be cut. So we had to redesign. That’s why there are two
designs for that same space. And this takes a little bit
of time, but it’s not — it maybe takes a
couple of hours. It’s not a crazy amount
of time like renaming files, or packaging them, but I think
for anyone that would want to go very much into the
project and research it, it will at least give them
some sort of direction of how to navigate this
nebula of files.>>Ann Whiteside: From a collections
viewpoint, I really like that.>>Mark Rylander: You know, I — one of the challenges that
sole practitioners face, and we all face, is this sort of
movement towards design build, which can mean a lot
of different things, depending on whether it’s, you
know, a house addition, or, you know, a major
master plan that has — is a fast-track construction
project. So — but in either of those, the initial design brief
is really important. And, you know, as I looked at
the slide earlier about the — with all the energy
management software, you know, one can make a statement that
we would like this project to be 30% below ASHRAE 90.1,
and whatever year it is, or to meet a certain LEED
standard, and then — but how to actually get there? There are many, many pathways,
and that software is — you know, a lot of it
is really experimental. So this design brief, kind
of like the PowerPoint, I imagine it being not unlike,
you know, Boeing’s design brief. There’s all this technical
stuff that you need to meet that’s non-visual, that
the drawing doesn’t represent, that are performance standards
that you hope to meet, or are required by GSA
to meet, for example. So, you know, the real — you know, one of the
things that’s very hard to document is the pathway by which these non-visual
standards are met, and it’s usually some
other kind of paper trail. And it may be solved
in some form of BIM. Certainly, I would
love to be able to, and hope to soon be able to,
click on a part of the drawing and come up with a Word document
that — or an Excel spreadsheet, or some other type of file, or
chart, or graph that references, you know, why something
is a certain way. Because otherwise, the — you know, explaining to somebody
retroactively why the building doesn’t meet that
energy standard, and what happened along
the way, and that kind of finger-pointing
is pretty common. It’s a pretty common sight.>>Nick Gicale: Yeah, my
only other comment on that — when I think of workflow,
one of our primary challenges as an owner is, you
know, we have — we usually hire somebody
for our larger projects, you know, a design firm. We usually hire a
separate contractor. We usually hire a CMA
or somebody to help kind of oversee it, and
then, you know, we might have other
contracts, commissioning agents. You know, we’re involved at
that project management basis. We’re dealing with customers and
all sorts of other stakeholders. So in terms of workflow
from design through that whole
process of a project, how do you get all those people
collaborating the right way in a system, or in a set of
systems, or other methods to produce kind of the
results that you want? And our experience with that
has honestly not been great. Since 2009, you know, we’ve had
an electronic project management system at GSA that was kind of
intended to do all that, and, you know, some teams have
gravitated towards it more, and use it for project
collaboration and workflow. So if somebody submits
the drawing, it gets approved in there. If somebody submits an RFI, the formal response is recorded
there, all that kind of stuff. But many of our projects
are still kind of working, in terms of workflow
and collaboration, outside of that tool
with other technologies that have evolved since then. So it’s trying to find
kind of the median, or the middle ground, between,
you know, what do we mandate, or what do we have in a
collective system or set of systems, versus, you know, how does that workflow
happen outside of it?>>Ann Whiteside: Another
area that I think you touched on was accessing legacy
data, and some of the — and you’ve learned from accessing your legacy
data what you should do in the future. Are there other thoughts
about that that we could learn about from each of you, or
what you would do differently as you have had experiences,
unable to access legacy data?>>Mark Rylander: Well, there
— yeah, I’m reminded that — and I know this conference
is largely about software, and we were talking
about wish lists. A lot of times, our legacy data
is actually a set of drawings and sketches, but increasingly,
you know, because it’s easy to store, having
record photographs — you know, it’s easy
to get a point cloud of the existing building. And a lot of that kind of data
that — but I think, you know, what — it would be helpful
for us in our tiny office, would be to have a really
great printer and scanner, and a lot of other stuff
that we can’t really afford, that would just simply
allow us to digitize a lot of the documents that
are not fully digital. And, you know, I almost
feel like there could be, and probably is, a way
to partner with some of the more sophisticated
printers to have an archiving — probably a — you know, a
simple PDF archiving standard. The other kind of
data I was talking about with the energy
stuff might be a lot harder to figure out.>>Ann Whiteside: Nick?>>Nick Gicale: Yeah, our issue
with accessing legacy data — so I kind of mentioned it,
I think, earlier, is that, you know, we have many
different business lines at GSA, and kind of IT investments
over time, because of the size of our agency, have largely
been business-line driven. And so, currently, we
have an IT environment that has building
drawings over here. We have kind of a system that
takes care of work requests, so if a light bulb
goes out, you know, there’s a flow process there
that gets taken care of. We have a project
management system, which I described earlier. We have a portfolio
investment system. We have another system that
we use for charging rent to the other federal agencies
that are inhabiting our space. So we have these drawings
that have kind of come online. We’re storing them
in all these systems, at least for the
last five years. And so, how are we — if
I’m starting a new project, you know, am I going to
five different systems to get the inputs to
start that project? Am I now trying to tie all
these things together, so I can, you know, have a
better cohesive model? So I think as we’re looking at our IT expenditures
going forward, we’re trying to be more strategic
in how we bring those that have been historically
separate business units and systems, you know,
together to flesh out a more cohesive
story about a building, or a project, or
set of projects.>>Mark Rylander: I
had one other thought, if I could just chime in, and
that just has to do with the — you know, the trend in
our time towards just more and more sophisticated
images of buildings. Architecture is not an image. It’s a three-dimensional
construct, and I was just thinking
about some of the projects that I would love
to be able to — I’d love to be able to
look at the Form-Z models of some old projects
from my former office, and actually rotate the design
development Form-Z model, because there was
a lot going on. And when I look back — look
at what’s in my portfolio, it’s the rendering
that we sent out. And somebody did a great
rendering of the front of the building, but there’s
all this really great work that often gets lost. What the Library of Congress should have is the high-resolution model before a lot of the compromises happen in the later phases of the building and during construction. So that you — you know, that's
the one you would want to show. You don’t want to show
the one where, you know, you’ve had to value
engineer out the wall panels, and the awnings go away,
and the sun shades go away, and suddenly, everything
gets very flat. And everything that — you
know, all of the compromises that are made along the way. But usually, somewhere around
design development, there is — the design intent sort of
reaches its highest level, and then it starts to drop off. And that, historically,
has been a point where the best modeling software
available at that time is used, and now I have no idea
how you go back — how you go back and
retrieve that. I hope you figure it out. Everybody’s pointing at Tim. It’s all up to Tim.>>Noemie Lafaurie-Debany:
I just wanted to share an experience from the archiving process. At one point, early on, we decided that we would create PDFs of the AutoCAD files at specific important moments in the design process that were not recorded by the deliverables. They were more like internal meetings that sanctioned a direction in the design. And we did it for a whole project, and we looked back at those files, and the line weights, or the way they were translated into PDF, did not reflect the design. I mean, every element was there, but you couldn't read it. And after that experience, we stopped transforming files. So whatever format they are in, that's the format in which we send them to Yale.>>Ann Whiteside: I'm thinking
now we might want to move into Q&A, and invite
Kristine and her speakers back up to the podium so we
can have a conversation with the audience.>>Kristine Fallon: So we’re
handling this as a combined Q&A, so it’s a free-for-all. Who has a question for anyone?>>Mark Rylander:
Okay, well, thank you.>>Kristine Fallon:
Oh, you need a mic.>>Aliza Leventhal: Hi, I’m
Aliza Leventhal from Sasaki, and my question is really to
all of you, in terms of — we’ve talked a little — you guys talked in the second
panel about, like, your hopes and dreams of what
would be great to have. But I think what I’m — would
be interested in talking about is more, what is a
reasonable standard for — like, what is something — like,
Noemie, you mentioned that — Noemie, you mentioned that
you guys have standards, and they don’t [inaudible] help. I can relate to that. But the — what is
reasonable, then? How can — is it something that
needs to be a button in Revit in order for that to be — or, you know, all these various
software — can’t just be Revit, but, like, does every
software need to have, like, perfect archive function,
and that’s the answer? Because that’s not reasonable,
but what would be something that you’d be — like, you
can imagine being built into your workflow,
in terms of, like, making sure things
are accessible?>>Mark Rylander: You
know, I suppose one thing that would be — it’s
interesting that sometimes, when we send a digital file
out, we like to send a PDF so that we know exactly how
it should look when it plots. So I’m not sure if
your question is sort of asking what would
the standard — what would an office standard
be, or a digital standard. But I think to the extent that
it’s sometimes unclear whether or not someone on the receiving
end is going to be able to view or plot a drawing the way
it’s intended, that some type of metadata that says that,
you know, all the lines that are magenta should plot at
0.35 millimeters, for example — that there's actually, like,
some type of graphic interface. Because we have it
in the office. In the office, we have
a little card that shows which colors plot at
which line weight. So it’s always kind of
funny, in the sophistication of a digital environment,
you still have cheat sheets on scraps of paper that tell you
which colors plot at which size, depending on what your plot
file is that you’ve used. So somehow, that connection
between the design intent of the lines and how they
should appear graphically seems to be part of the file system
that’s needed, at basic.>>Noemie Lafaurie-Debany: One thing that’s
completely unrealistic — I'm not really sure, without a discussion with industry partners — but a read-only file would be quite interesting: when you no longer own a copy of the software, you could still just look at that file, not modify anything in it. Just turn the model around, look at the different elevations. Maybe that's not too far off.>>Kristine Fallon: Greg, do
you have some thoughts on that?>>Greg Schleusner:
I don’t know. The answer to the question is
a tough one, because simply, if there’s not a business
purpose for doing it, it’s always a tough sell, right? So I’m, you know,
sort of a market — my thing is, this is a
market-based solution. So you have to figure
out what’s valuable, and that’s what will
become useful. So, you know, there’s these
— you know, certainly, we have contract deliverables
and so forth, but at the end of the day, a lot
of the standards are for moving between two parties. But if you don’t have a standard that you ever use while you do
your work, it’s not valuable to you to do the thing that you
have to hand to the next person. So that — somewhere
in that is my thinking. I don’t know exactly
what that looks like, because that basically implies
that you’d have to get vendors to work on open formats, which is not the way
they currently want to view the world.>>Kristine Fallon: It
would seem to me that the — one of the big ideas
we’re going to get to after lunch is
access use cases. In other words, what is it
you’re trying to archive? Is it, you know, the
essence of the design at its, you know, most refined point? Is it what we told the
contractor to build? You know, what is it
you’re trying to archive? And then you have kind of
different requirements, and then different methods,
it would seem to me. I don’t know. Phil?>>Phil Rocher: Well,
I’m not going to answer what you
said, or address that. But I will kind of answer what
I think your question was, and actually, it’s something that I’m talking
about after lunch. It’s my presentation
this time, but anyway — we have an implementer forum for
software producers that write to the standard format. And, you know, we started doing
this, like, back in the ’90s, and some of the software
developers started to write STEP interfaces. But they didn’t necessarily
communicate or interoperate with the other software
developers’ STEP interfaces, and so, you know, multi-trillion
dollar problem here. So industry asked us — well, how do you test
for interoperability? And so, you know, we thought
about it, and some colleagues of mine from Ford, and
Boeing, and General Motors, and other companies
got together. And we came up with, you know,
kind of a testing scenario, and we’ve refined
it over the years. But, you know, I think
there’s a lot of instances where there’s a standard
thrown out there — like, okay, go implement it,
but it’s really not that easy to implement, you know. And, you know, one vendor
might, you know, go in path A, and another vendor go in
path B, and, you know, there goes your interoperability
right there. So I think my answer to your
question is that there has to be some forum for software
companies to test their products in — you know, a non-adversarial
forum, if you will.>>Kristine Fallon:
So we could go on a long time on this question. Are there any others? Is there one over there? Yep. You’re hiding
behind the column. I see you. Okay.>>Okay. I — Noemie, I
have a question for you. I’m sorry? Oh, I’m sorry, I’m
Kurt Helfric, again, from the National Gallery of Art. So, Noemie, I just
wanted you — a comment — something you told us struck me,
which was basically this idea that at a certain point, you
had thought about creating PDFs of some of the drawings. And could you just elaborate
a little bit more on that? In other words, were they actually PDFs of CAD, or sort of born-digital materials? One of the things that Alex
Ball [assumed spelling] has done in his white paper
on CAD is to say, “You have to understand why
you’re collecting things.” So if you’re collecting things
for image value, for instance, PDFs or TIFs are fine,
but if you’re doing it for other things, then
you really need to think about the native formats. But I’m actually
really struck — I’d like to hear more about
what you found that was missing in those PDFs that
you’re aware of.>>Noemie Lafaurie-Debany:
If we go — if I try to explain it as if we were doing it analog: to go to a meeting and discuss different alternatives, we would have sketched out different things on trace paper. And we would have different layers, and maybe we would have put them on top of one another. And we would have, you know, been able to remove one layer, change a color, highlight something, and whatnot. And then, if we were to send that analog piece to Yale, we could just send all of those layers, and that would have documented that moment in the design process where we were making decisions about the size of the berms, or the height — I don't know, something. If we put this in digital form, we would have done maybe the same sort of exercise, you know, drawing those different curves in CAD, and they would have been on different layers. And when presenting them, or maybe plotting them, we would have turned off layers and shown only certain images. But if we look at that file 10 years later, and we try to plot it as a PDF — because it was a turning point internally, at the office, in making design decisions — as for the layer history, nobody can really tell us anything about it. At plot size, everything looks the same — I don't know whether it's a planted area, or it was a water feature, or whatever it was; everything is at the same level. You are losing this layering.>>Kristine Fallon:
Other questions? Oh, here’s one. Katie has a question.>>Katie Pierce Meyer:
Hi, Katie Pierce Meyer. Not that this addresses that
question, but I’m wondering about how much any of you
might use screenshots as ways of capturing some
of those moments. And is that something
you would use as a way of sharing information on
a particular portion — or, you know, points
in a design process? And if you do, are those things
that you actually maintain as part of the archive
for a project?>>Greg Schleusner: I’ll take — so certainly, we use
screenshots, but they’re not — I don’t think they’re typically
used for design discussions. They’re usually more on the
technical side of things. So they’re — like, very
mundane explanatory things, like this is the wrong
color, or the wrong size, or something like that. I think this — so they
certainly get used, and it would have to be part of
a sort of more defined process that people would follow
in our environment, which is nearly impossible
for me to get people to do.>>Kristine Fallon: And there
was another question in the — yeah, the people who are
behind the column, I can’t tell if you have your hands up.>>Following up a little
on Katie’s question, I wonder whether, as you
work through these questions about archiving standard, and
the consideration is given to adding some narrative
component to the project record. In which participants
would be asked to describe or recount some of the
experiences of both creativity and collaboration that
were part of the process.>>Greg Schleusner: So
she had mentioned earlier about doing the — I
think it was you — the written narrative. So we — so we’ve had
discussions internally about doing interview-based
processes — not actually for
archiving purposes, but for knowledge transfer. Because we have a
very similar problem: if it isn't described well, you're not going to learn from it, and you can't move information between projects. So I think the tools
probably would be — that would be the mechanism
we would look to, and audio and video are much
simpler to use than having somebody write
something, or an e-mail. I think one of the sort of
pieces that we always struggle with is, okay, how do you
actually make an association of that simple recording to what
it’s relevant to in the project? And you end up with, you know,
intranets and stuff that — oh, it’s on there somewhere,
or — and that’s, again, another kind of interoperable
standard that no one has the same system. And so you upgrade systems,
and you lost a connectivity. I mean, one of the —
just as a quick aside, I once put together a list of all the stuff we don’t have
standards for that makes sense for us, and it’s,
like, 65 items long. Right? So, like,
just quick examples, what a 3D sketch looks like,
you know, photogrammetry. There are standards, but, like, the odds of a designer
knowing what an open format photogrammetry standard is,
I don’t really even know. Point clouds, all these sort
of things — so it’s a — it’s sort of a — I don’t know. I like to use the term,
“We need an internet, not more standards,” that
if you think about it, like, the internet made
things connect better. We don’t have that. We just have these
little piles of stuff. So, I mean, that’s
my long-term view, how you make this more
simple, but I’ll stop there.>>Kristine Fallon: Roger?>>Well, my question — my question has to do
with this idea that — I mean, there’s a range of you
up there from small design firms to large owners, and
different industries. And I’m wondering, you know,
the challenge of trying to collect information
and archive it — it’s somewhat what you want
to do for your own practice, and it’s also somewhat
driven by what your client or the owner wants to require. And if the owners are clearer
about what they need and want, is that going to sort out some of the problems for
the design world? And, you know, to some
extent, standards are important for things we want to preserve. They maybe help you with
the things you just want to have access to, but if
the owners were doing more to push standardization of the
information they want at the end of a project, would
that help sort this out? And maybe, is there any
experience from the PLM world that might shed some
light on that, for us in the architecture
and design world?>>Kristine Fallon: Do
you want to take it, Nick?>>Nick Gicale: Sure. Yeah, I think on the
federal side, I mean, especially for our larger
projects, we have very, very detailed contracts
and then specifications that dictate deliverables. And I think, at the end of
the day, I mean, what we want to be able to do with our
projects is that, you know, the final deliverables
that are produced, either by the architects
of record or through an as-built
modification process by our contractor, is to
turn over a set of drawings, and specs, and O&M manuals, et
cetera, that allow the building to continue to be run,
you know, going forward. I think where that does break
down a little bit is, you know, we have a pretty
good grasp on that for our larger projects
above $3 million. Projects under that, you
know, there’s many more, in terms of volume, and those
are the projects where — you know, I mean,
it’s $25,000 to build out a small space somewhere, and so how do you get
the smaller contractors that might be working on that
stuff to comply with some of the more complex requirements
of our bigger projects? And so working that out is,
you know, a struggle for us.>>Kristine Fallon:
So has anyone on the panel done a project
for the GSA, and what do you — is it just deliverables for
the GSA, Greg, or is it — you know, does it really help
you in the future go back? Of course, you can’t use it,
because it’s confidential, and for your eyes
only, and all of that.>>Greg Schleusner: Well, so we’ve certainly
done GSA projects. We have examples where
we were a sub to a — in a party where none
of that information that we even produced got
handed over to the GSA, so that’s one example. Even with re — or using
models, and so forth. But I think, to Roger’s
question, there’s small areas during a —
and this is design-specific — small areas within
the design process that the standards can be
used from a practical sense. But most of the time,
I think they’re seen as someone else’s
problem, and so we can — I can meet your problem,
but I’m going to do — it’s sort of an image
that came to my head. There’s this TV show that they
ship things across the country, and it’s a competition who
can get it there first. And they don’t care
how you get it there. It just has to land there in
one piece, and that’s sort of how our industry works, in that we don’t really
have the standards about how you get there. We have the standards
about how it lands, right? And this is, I think,
the challenge. Like, it’s not a — it’s
not that we can’t deliver to a standard if needed to,
but it’s not one that — the standards don’t really
apply in the process part. They apply in the landing part.>>Kristine Fallon: So it’s
actually an additional step. So you say, okay, so
we have our stuff –>>Drive your business
question –>>Greg Schleusner: Well, so I
think the specific thing is — from a technology perspective,
is there aren’t tools that utilize standards that we
can leverage for our business. There are tools that
are historically seen as handover tools, and so
it’s a very big discussion. But I think it’s generally the
way I view things, that it’s — there’s motivation
to have a status quo of vendors provide tools
for certain workflow, and then open formats for
handover, and that’s sort of at least how I
[inaudible], yeah.>>Mark Rylander:
So, you know, I — your question is a great one,
because it sort of takes me back to imagining looking at an
owner-architect AIA agreement, and trying to use that
as a basis for explaining to a client what
is about to happen. And it’s — does not — you know, if you read the
description of schematic design in the AIA, it just basically
has a line about planning, and organization, and
maybe there’s a narrative. What are you going to get? Meanwhile, books
have been written, or there have been
attempts made at explaining to a client what the
design process is like. But those descriptions
only work — are only about some
preconception. The AIA has a little film
about you and your architect, and what it could
be, but it’s a — you know, about a really
pretty big building. And I would never
think about using that for a house renovation. So, you know, a great tool would
be probably for some group of us to figure out how to put together a
little introductory film about how the process
works, and that starts to adjust the behavior
of all the stakeholders, and particularly
explains who is in charge at each step, and
who to turn to. So –>>Kristine Fallon:
Please introduce yourself.>>Hi. So one of the things –>>Kristine Fallon: You are?>>Mark Rylander: You are?>>What? Is it on? I –>>Mark Rylander: No, you.>>Phil Rocher: No, who are you?>>Mark Rylander: Who are you?>>Nick Gicale: What’s
your name?>>Mark Rylander:
What’s your name?>>Oh, ha. I’m Karen Moran, and I’m
actually one of the people in partnership with Mark
Rylander, several occasions. One of the things that I’ve been
thinking about is how we are so far removed as architects
from the Thomas U. Walter era, where his office would’ve been
the repository of all the data, and that you could go
back to the office, and it would be there. Because once you start
talking about deliverables, and the deliverables are
required to be manipulated, how do we, as architects,
put our fingerprint on those? Because Ms. Leventhal brought
up that even within the office, it’s hard to understand
the process by which these things
get developed. So once you release your
deliverable to the owner, and — I mean, if any architects
in the room remember, it was only a few years ago that we would never
release an original of any kind to an owner. We would release a print, and
that would be a reference. And now, we're being pressured to release manipulable drawings to owners for future use. Had we put a fingerprint on
it so that it can go back to the originator —
because when it comes to digital archiving, you know, Thomas U. Walter had a
cornerstone with his name on it. Do we now? Probably not, for the most part. People won’t even remember
who the architect was. I doubt very many people
go to Boeing and say, "Could I have a manipulable drawing of my B-52? Because I want to screw
around with it in the future.” I don’t think so. That doesn’t happen,
but people want to do that with buildings
all the time. So how do we actually document
what was meant to be documented in the first place, and how do
we keep that for perpetuity?>>Mark Rylander: Read-only. Read-only.>>Kristine Fallon:
Probably the person who might have some
thoughts on that is Greg, I mean, to some extent.>>Greg Schleusner: Well,
I mean, I’ve thought about these problems, but, like,
the actual solutions is not, like, something I could —
like, Blockchain, right? That’s how you solve all
problems today, right now, is just to say Blockchain. But, I mean, this is — this
is a realistic interpretation of this problem, right? You have to have a ledger of
all these things that happen, and that’s the promise
of these technologies. I think what we find is
they’re not easily implementable in our industry, because one of
the — even to hear Nick talk, we actually don’t have owners. And when I’m saying owners,
I’m saying process owners. We’re not building owners. No one owns a process. We all owned a little
piece of it. And so, you know, from a,
you know, pie-in-the-sky, if you look at what
the internet tells you, that’s definitely the
way we need to go. How we get there, I
have no idea right now. So, I mean –>>Kristine Fallon: Maybe
Phil could talk a little bit about the whole configuration
management, product management issue in
aerospace, and how it’s sort of handled, at a high level.>>Phil Rocher: Well,
you said something about how we don't give the B-52 CAD model to the Air Force — well, because there isn't one. But anyway, I — what — I'm
going to address that first. Actually, I do see
the DOD requesting — not an entire weapons system
per se, but, you know, give me the wings on
the A-7, because I want
else reengineer them. So I do see that quite
a bit these days. The other part of that question
kind of had to do with — I think it had to do with
traceability, if you will, or at least that’s what
I’m going to call it. But in the aerospace industry,
they’re very religious about, you know, if the lefthand
bracket is your design, it is associated with you and the organization
for which you work. So if there’s a question about
that left-hand bracket, you know, some number of years
down the road, they do indeed know
who designed it. But, you know, whether he’s with
the company anymore, you know, that’s a different story, and
that’s out of our control. So –>>Kristine Fallon:
But how do you do that?>>That you –>>Phil Rocher: How
do you do what?>>The information –>>Kristine Fallon: How do you
keep track of that information?>>Phil Rocher: Our data
model supports that. It’s — the entities are
called person and organization. It’s –>>Kristine Fallon: The IFC
data model supports ownership — yeah, creative — yeah.>>Greg Schleusner: But I think
— I mean, from in your — like, let’s say I’m
going to design a plane. Like, let’s say I do
something smaller. I want to do a single-engine
prop plane. If I have a supply chain,
I set up a PLM platform that everybody connects to, and then that’s how you
put it together, right? I mean, that’s — I
think that’s the — sort of the root of your
question, that people point to that PLM system, or
platforms, and stuff like that as a way we could
solve our industry. But it — okay, who hosts it? Who maintains it,
like in our industry? It’s — so, like, okay, who
pays the rent on the system after the project’s done?>>Phil Rocher: Yeah. And in our case, it’s Cessna,
and Gulfstream, and Embraer, ad infinitum, whoever
the company is.>>Greg Schleusner: Yeah.>>Kristine Fallon: But it’s
interesting, in the IFC model. Because we did a bunch of what
are called the experimental BIMs for the Corps of Engineers,
and I would go to a trade show and see some company 20
steps removed from us, and they’d be showing
something they did with these experimental models. And I’d look at it, and if
you looked at created by, it would be C. Scander [assumed
spelling] at KFA-INC.com. And so that was maintained
through the model, despite all the people — in
IFC format, all the people who were manipulating it, and
adding to it, and all of that. Roger?>>Well, following up on my –>>I’m sorry, we
need you to say –>>Kristine Fallon: Oh, Roger. Roger Grant.>>Roger Grant, National
Institute of Building Sciences. Sorry I didn’t introduce
myself earlier. But following up the
question I was asking — this is interesting, because who
is driving that is Department of Defense, the owner, saying,
“We want this,” and they want to be able to validate it. And I think — you know,
that’s the challenge we face. We don’t — I don’t know. GSA has a lot of market
power, but do they have enough to drive this through
like manufacturing does? How do we get to that
in the AEC realm? Or will we ever get there? I don't know. Does anybody want to
take a crack at that?>>Kristine Fallon: Nick?>>Nick Gicale: I mean, I
think in terms of BIM usage, GSA has been kind of
pushing that since I’ve been at the agency, since 2003. So, you know, today, at least
for our projects that are above $3 million, every one of
those projects has some type of BIM usage on that project, whether it’s spatial data
modeling, or whether it’s, you know, something more
involved, in terms of taking that model through design. I guess in terms of, though,
the idea of making sure that the architect or designer
is always attached to the work that they did, I think we
probably could do better in that area. We definitely do that for
equipment that we install. We always know who made
the faulty equipment that we now have to take
out of the building. But for, you know, the
renovations and stuff like that, you know, if you’re not doing a
new building, I think over time, kind of that tracking of
who did what does get lost.>>Got one back there.>>Hi, Stacy Baez
[assumed spelling] with the Architect
of the Capitol. This kind of goes
on with my thought, because the defense industry
has in fact driven, like, records management requirements, and the space industry has
driven what we now know as the OAIS standard. Do you see the opportunity,
or is that, in fact, desirable for the AEC
community to be able to drive — like, this is our
requirement, and if you want to sell us a system that is
compliant, maybe a [inaudible] or something, it must be
able to conform the data for the purpose of archiving? Because right now, what we’re
faced with is, like, okay, we wrap up a project
in a massive system that a contractor has
used, and then it’s like, okay, here’s your zip file. You know, here are your CDs with
all the data, and it’s very raw, and then the challenges of
trying to do the verification and all on the archival side. But do you see if there’s an
opportunity for our industry to drive that — the
software vendors — especially the preeminent software companies — are adding modules into their software
packages that create, you know, the archival packages that the
archivists are going to need to be able to have some
kind of reliability that we can preserve the
formats that were created.>>Greg Schleusner: I would say — so one of the things
that you find in a lot of client requirements is — we’re good at doing what
we know can be tested — to be extremely, you know,
sort of blunt about it. So if the client has a very
good standard, and they’re going to test that we’re delivering
to it, we can deliver to it. But the second it says — you know, someone in the meeting
says, “Eh, not a big deal,” well, it’s not a big
deal to us anymore. And so one of the
things that I think — that comes with standards
is the ability to make sure you’re
receiving them, right? And part of the — I think
from a technology standpoint, the standards that exist
today could be delivered on in every project. I don't think that's a question. It's actually just a
will to enforce them that is a much more difficult
one, because it actually — you could look at
it in two ways. There are technologies that
make it easy to enforce them, or you have the staff to enforce them. And no one has that money lying
around until somebody sees it as a business model to enforce. From a technology perspective,
it’s at least how I see it, that we won’t get
better at delivering it. I think that’s from a
handover standpoint.>>Kristine Fallon:
Yeah, I was talking to a BIM manager
at a contractor. He said, “So, you know, when
the VA comes out with, you know, you’re going to deliver this
based on our BIM standard, before we bid, I got to call
them and kind of feel them out, and figure out whether they
know what they’re asking for. And if they don’t, we
figure we won’t do it, and we don’t charge them for it. If they do, then we
figure we have to do it, and we charge them for it.” So that’s — I mean,
that’s really the rationale that people use, and,
you know, when we — yeah, I mean, we’ve built
validators and things, so the clients can know they’re
getting what they wanted, but yeah.>>Kimon Onuma, Onuma Inc. One thought here, as
far as the ownership, and how do we track
ownership of information. I think we’re still stuck —
if you look at this morning, with the talk about —
we used to even be bound to the hardware that we used. The applications used to be
bound to that and fused to it. In many ways, we’re
still bound with our data to the applications that we use,
and the software that we use. But if we look at a
transaction-based thing that we can track
the transaction, you can actually
track ownership, and it’s actually staring us
right on the screen right there. So when we tweet to the
hashtag there, it actually says where we are, and we post that
location, that information about what we’re seeing in
this room, which can be traced in many different
— you can search it on Google, all kinds of stuff. So the technology’s
already there. I would encourage
us to look at how we track information in a more transaction-based way. And then you can start
and a lot of other things. Obviously, we’re not going
to get there overnight, but if we think in those terms, I think we could
get there faster.>>Hello. My name is Laura
from Georgetown University. This is a comment to Mark,
and kind of following up on what he was mentioning. You mentioned that you have a problem when you try to find images. So here's a thought: if you tagged the images, or added mirror tags, they would probably be easier to find — searchable, so you would not have to go folder by folder to find an image; you would just have to search your C drive, or whatever you have and store on. Just a comment.>>Mark Rylander: Yeah. Well, you know, I
think, first of all, if there were some image management software that was better than what’s out there (a friend of mine referred to iPhoto as Satan’s work last week), we would probably use tags, if you could just have your tags as part of the software and click them. You could create your own tags. You could work that out with the client and the rest of your design team, and that actually would be really useful. I think tagging is, you know, work, but it’s how Google works, so.

>>Greg Schleusner: So
we’re implementing search technologies, and we won’t touch anything that doesn’t do machine learning on images these days, because people aren’t going to do the tagging. I mean, we have used tagging systems in our internal structure for a long time, and maybe one out of 100 people uses tags.

>>Mark Rylander: Yeah.

>>Greg Schleusner: So
there are obviously technological solutions to bad data management that are becoming more available. So, I mean, that’s a sliver of the problem, but I think it’s a meaningful way to look at it. Even in this discussion: the domain models for image recognition for our industry are not open. I don’t think there are open data sets. That’s a very good example. The training data that you need to say, oh, that’s a crane, that’s a duct — those are all, as far as I know, proprietary data sets that people are using to train their systems. So this is a perfect example where we compound our own problems, and now you just pay for a subscription to get the machine learning algorithm to find your images in the future.

>>Mark Rylander: You probably shouldn’t ask that kind of question of an architect, or they will create their own proprietary system [laughter] that no one else can use.

>>Kristine Fallon: Tim?

>>Tim Walsh: Tim Walsh, the Canadian Centre
for Architecture. You know, I think we all know these BIM diagrams where one model follows a building from its initial design development through to the deliverable. And then it goes to the facilities manager, and then it’s used for every renovation, and it tracks the building until demolition at least. And maybe, though we’re often not in there, sometimes you also consider that it would get copied out to a collecting institution at some point for permanent retention. This is mostly a question for Nick, but to what degree is that actually happening? These deliverables that come to you, do they get reused?

>>Nick Gicale: Yeah, I mean,
I think they get reused. We haven’t had, I don’t think, a new building go all the way through using BIM for the entire process, where we’re now solely using that model to manage the building. Over the last 10 years, we have gotten more advanced in what we’re using BIM for, and we have had projects take BIM all the way through design and through construction. And you have a fairly good model at the end, but the issue that I still have, and that the agency has, is maintaining that BIM model through all the small projects that happen to the building after the model was turned over. Quite frankly, a lot of our people don’t have experience doing that, so we’d have to hire contractors to do it. So I think the maintenance of that model after the project is over still becomes a huge issue for us.

>>Kristine Fallon: Well, I
think, on that: we’ve done a lot of studies for organizations who wanted to get into some kind of data turnover, or BIM, or things like that. And what we always find is that there are some things they have staff for that they really don’t need to do any more. I’m talking about large organizations; these are kind of redundant jobs. But then there are some really missing activities, missing jobs. We think technology just takes away jobs, but it also creates some requirements and some new jobs. And if you don’t staff those jobs — i.e., somebody who’s going to know what’s going on in the building and make sure the BIM gets maintained — then even though you can do with many fewer CAD operators somewhere else, you’re not really going to get the benefits. And that’s what we found over and over with large owners.

>>Aliza Leventhal: Sorry,
I have another question. Aliza, from Sasaki. It’s just kind of an end — maybe an end [inaudible], because it is close to the end. Because you all work in various disciplines, what are some of the struggles that you have working from your discipline to another design discipline, and communicating? I know Greg had said that when it comes to working with another firm, you’re able to decide on what standards work for you, and develop that. But even within Sasaki, we have conversations about how landscape architects can better communicate with architects, and how that works with a planning project on top of it. So I wonder if you can explore that a little bit, because as experts who already have the learning curve covered, it would be great to know what are the struggles, the pain points, that you experience looking at a different discipline’s files and trying to integrate them into your own.

>>Greg Schleusner: I feel for you more than me, because you probably get the short end of the stick, so I’ll let you go first.

>>Noemie Lafaurie-Debany:
We don’t do BIM. We provide CAD drawings for the architects to then plug into their set. This is getting out of BIM for a little bit, but visualization is very important in landscape, in speaking with the architects or other consultants on the team, and with the client. A landscape is something that changes all the time, so it’s very hard to present. It’s not an AutoCAD block or anything that would truly represent what you are designing. So we’ve experimented with different software. At the moment, we are using Lumion in order to very quickly create a visualization of trees, and grasses, and shrubs. No architect we’re working with is using Lumion at the moment, so that means we need to take their model, clean it up in Rhino, then put it in Lumion. In Lumion, we design the landscape in 3D, and that’s [inaudible] to create the image of the design. We cannot give that Lumion model to the architect to figure out what the planting plan would be. We need to go back to AutoCAD and then draw, so there are a lot of dead ends, in a way, that we need to go through in order for the design to progress. For communicating with other consultants, we need to go back to CAD. That’s where we are stuck at the moment.

>>Greg Schleusner: Aliza,
this particular problem — so the game world is actually quite open, and Lumion is like a game engine. We’ve actually started to do things where we move towards game engine tools, and tools built on top of game engines, so we can always get access to the source code and extract the tree locations, if you will, in the future, instead of having these dead ends. But these are specialists, and finding a game developer who’ll work for an architecture salary is very hard, those sorts of things. I mean, those are the places that — we’re a big firm, we can try to explore, but they’re not sustainable, and these dead ends exist everywhere. I think on the building side, working with partners, everybody knows what the problems are if you’re using certain BIM platforms: oh, we have to do this. Everybody has the same issues, where we can’t track changes; I have to get an e-mail that says what’s changed, those sorts of things. I don’t know. It’s a systemic problem. It’s not unique to us, though.

>>Okay, so thank you.

[ Applause ]

>>This has been a presentation
of the Library of Congress. Visit us at loc.gov.
