Patience Jones: Hello, and welcome to As Built, the podcast from Graphicmachine about architecture firms, and buildings, and how both get built. I am your co-host, Patience Jones. With me, as always, is...
Brian Jones: Brian Jones.
Patience Jones: Your other co-host. Thank you for joining us. Today, we are doing another in our Tech Stack series, and we're talking about AI policies. I feel like AI should be said in like a Darth Vader sort of a voice, which I can't do. Here's the disclaimer: this is not legal advice. This is not a substitute for talking to a lawyer about AI issues if you have questions, or for having that person opine on your policy. This is just issue-spotting: things to think about when you're deciding how to create an AI policy. So, the first question is, why do you need one?
Brian Jones: And more importantly, we're encouraging you to craft an AI policy for your office.
Patience Jones: Yes. In a non-legal-advice sort of a way. You probably have a policy for who gets to use certain types of software or processes that you have in the office. This is no different, and that's why we're encouraging it, because AI is a tool. Whether you use it and how you use it are things that can be covered by a policy to remove any confusion or misunderstanding, whether by your staff or your clients, and the policy can evolve as technologies evolve. Your understanding of how you want to use AI can evolve, too. Even if your stance is, "We don't use AI for anything ever," that can also be a policy. And if you don't communicate that, people are not necessarily going to assume that that's your position, and you may have some unpleasant surprises. So, how to think about what to think about?
Brian Jones: You have to assume that AI is potentially going to be used in the crafting of work within your office. Understanding that is key to deciding what you want it to be used for and what you don't want it to be used for.
Patience Jones: Here's where my little antennae go up. Why does one have to assume that AI’s going to be used?
Brian Jones: Because it's so pervasive in culture right now. People are getting accustomed to using ChatGPT, among others, to find things. And so this is being used in some way inside your office. Maybe it is as innocuous as searching for a restaurant nearby, but it could be much more. It could be for substantive work as well.
Patience Jones: That's fair. I definitely don't want anyone to think that this is us saying, you must use AI and you must use it in the following ways. That's not it. But I think especially when you look generationally at how it's being used, a lot of people use it the same way that they use Google.
Brian Jones: Yep.
Patience Jones: Or the same way that we use Google.
Brian Jones: Yeah.
Patience Jones: They will go to ChatGPT and look something up. And that something could be, what's the building code section for this type of concern? Or it could be to create a rendering, or it could be what time does this store close?
Brian Jones: You raised kind of the first tier of considerations, which is the reality of hallucinations in AI results when you're searching for specific, factual information, like the building code; there are other things within the profession that would fall into that category, too. Knowing that you, as a firm, may have to validate the result and that you can't take it for granted is really critical to understanding, in the larger sense, what may be part of your policy or framework.
Patience Jones: I want to break this down a little bit because it is such a new thing, and things evolve. At the time of this recording, it is really imperative that if you're using AI, you're checking those results against something else, especially if you're relying on it for anything important. Many AI bots are currently coded and trained to be "helpful." Helpfulness is assumed to be providing you, the questioner, with the answer that you seek. If the AI does not have an answer to the question, the thought process goes, "Well, going back to you and saying that I don't have an answer is not helpful, so I will make one up." There was a test that somebody ran where they came up with a made-up concept that has never existed. They asked four different popular chatbots to explain this concept. Only one of the chatbots came back and said, "I'm sorry, I don't know what you mean. I don't have any information about that. Can you give me some more details?" The other three provided multiple paragraphs of "information," including a made-up history, made-up people who invented the concept, and made-up instances in history where it was used, all because, in the programming, the ultimate goal is to be helpful. This is "helpful." So depending on who's asking the bot and what information they already have, they may not immediately know that something's wrong.
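A minimal sketch of the probe Patience describes, in Python. Everything here is an illustrative assumption, not a real product or API: ask_model is a stand-in you would wire to whatever chatbot your firm actually uses, the hedge phrases are a rough heuristic, and "Fenwick modularism" is a concept invented purely for the test.

```python
# A rough, illustrative hallucination probe: ask about a concept that
# does not exist and check whether the model admits ignorance.
# ask_model is a placeholder; wire it to your own chatbot API.

from typing import Callable

# Phrases suggesting the model is admitting uncertainty rather than
# fabricating an answer. Expand this list for real use.
HEDGE_PHRASES = (
    "i don't know",
    "i'm not familiar",
    "no information",
    "can you give me some more details",
    "could you clarify",
)

def probe_for_hallucination(ask_model: Callable[[str], str],
                            fake_concept: str) -> bool:
    """Return True if the model appears to fabricate an answer about
    a concept that was invented purely for this test."""
    prompt = f"Explain the architectural theory of {fake_concept}."
    reply = ask_model(prompt).lower()
    admits_ignorance = any(phrase in reply for phrase in HEDGE_PHRASES)
    # A confident reply with no admission of ignorance is the
    # made-up "history" behavior described in the episode.
    return not admits_ignorance

if __name__ == "__main__":
    # Stub standing in for a real chatbot call; "Fenwick modularism"
    # and "Elsa Fenwick" are fictional, invented for this illustration.
    def ask_model(prompt: str) -> str:
        return "Fenwick modularism emerged in 1932 under Elsa Fenwick..."

    print(probe_for_hallucination(ask_model, "Fenwick modularism"))  # True
```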
Brian Jones: Yes.
Patience Jones: Part of the policy probably needs to address: in what circumstances do you need to validate and how do you validate? It probably shouldn't be by using another chatbot to validate. It probably needs to be something else.
Brian Jones: It's one of these things where the convenience of it can be alluring, and it is useful, but understanding where it is useful and how it can be useful inside your organization is the reason to think about it.
Patience Jones: I think a great place to start is, "What is this tool for?" Just because a tool exists doesn't mean it is useful for any situation or all situations. What is it that we want this thing to do? In what circumstances can somebody use this? Do they have to validate the answer? Do they have to disclose to a client that this is being done? Depending on what kind of projects you do and who your clients are, they may have internal rules, things related to their grants or their fundraising, where they can't use or incorporate any AI at all, or they have to disclose it. That's important to know. If you would feel like, "Oh, the client's going to be mad if they find out that we used AI for this," then it probably behooves you to rethink using the AI. If it's something that you would be embarrassed if people knew you were doing, then the issue becomes not, "Should I tell people?" but, "Should I really even be doing this?" And then there are things like, "Who has access to this tool, and do they understand how it's billed?" A lot of the tools, once you get into a professional subscription, are billed by number of queries or number of credits used, and it's important for people to know ahead of time, "Yes, you have permission to use this, and here is how much you can use it," whether that's measured in queries, credits, or hours a day.
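A minimal sketch of that access-and-billing point, assuming a subscription billed per query or credit: a small Python guard that tracks usage per staff member against a cap the office sets. The cap, the user name, and the tracking granularity are all illustrative, not tied to any real vendor's billing.

```python
# A tiny per-person query budget for an AI subscription billed by
# number of queries or credits. Cap and names are illustrative.

from collections import defaultdict

class QueryBudget:
    """Track AI tool usage per staff member against a monthly cap."""

    def __init__(self, monthly_cap: int):
        self.monthly_cap = monthly_cap
        self.used = defaultdict(int)  # queries recorded per user

    def allow(self, user: str) -> bool:
        """Record one query for `user`; return False once the user
        has hit the cap, so the prompt never gets sent (or billed)."""
        if self.used[user] >= self.monthly_cap:
            return False
        self.used[user] += 1
        return True

budget = QueryBudget(monthly_cap=200)
if budget.allow("jsmith"):
    pass  # forward the prompt to the AI tool here
else:
    print("Monthly AI query cap reached; check with the office lead.")
```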
Brian Jones: Given the profession's desire to be green and carbon-neutral in the coming years, this is also something to think about: how AI use impacts that. There are the actual built pieces that are produced, but there's also the effort that goes into building them, and both count, or should count, toward that idea.
Patience Jones: Yes. I mean, is it really super fun to run a bunch of prompts through AI to see if it can generate 5,000,000 renderings for you? Maybe. I don't know. But that is using up so much power and so much water, and that doesn't really align with the idea of being sustainable, or being green. You have to know how AI works and you have to know what you're doing, and then make an informed choice about how to use it. What follows from that is also an understanding of how AI processes information. With the free chatbots, when somebody just goes to ChatGPT and types something in, all of that information, all of the prompt, is being retained by the chatbot company, and it's being used to train that chatbot and the other chatbots. That information becomes part of a pool that any other chatbot on that system can pull from to deliver an answer to somebody. So you have to be really careful when you're thinking about, "Okay, is this information confidential? Is this proprietary? If I put into a chatbot that I'm doing a house for somebody who has two children, and this is the address, have I now put confidential information into this larger system?" What would you be comfortable with the whole world knowing?
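One practical guardrail that follows from this, as a minimal sketch: scrub obvious identifying details out of a prompt before it ever leaves the office. The two patterns below (a street address and a phone number) are illustrative only; a real policy would define a much fuller list of what to redact, such as client names and project codes.

```python
# A minimal sketch of scrubbing obvious client details out of a
# prompt before sending it to a chatbot that retains and trains on
# its inputs. The patterns here are illustrative, not exhaustive.

import re

REDACTIONS = [
    # Street addresses like "412 Oak Street"
    (re.compile(r"\b\d{1,5}\s+\w+(\s\w+)?\s"
                r"(Street|St|Avenue|Ave|Road|Rd|Drive|Dr)\b",
                re.IGNORECASE), "[ADDRESS]"),
    # Phone numbers like 555-123-4567
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
]

def scrub_prompt(prompt: str) -> str:
    """Replace obvious identifying details with placeholders before
    the prompt joins a vendor's training pool."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub_prompt("Designing a house at 412 Oak Street for a family "
                   "with two children; call 555-123-4567."))
# -> Designing a house at [ADDRESS] for a family with two children; call [PHONE].
```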
Brian Jones: I think the difference between when the web began to be crawled by Google and now is that the act of publishing a webpage gave you the intrinsic clue that you were putting something out that was publicly accessible. This has a gray region where it isn't just the results that are public; the information that you're inputting into the prompt also becomes part of the larger repository.
Patience Jones: Exactly. Let's say you upload a set of designs because you want AI to do something with those designs, and you're not using a closed system. Those designs also get dropped into the pool. When somebody else, six months later, says, "Hey, ChatGPT, generate for me a set of designs," yours may be what the system draws on, and you just need to be aware of that.
Episode Resources:
Connect with Brian Jones and Patience Jones
• https://www.linkedin.com/company/graphicmachine/
• https://www.linkedin.com/in/brian-jones-graphicmachine/
• https://www.linkedin.com/in/patiencejones/
AI Resources
• How Does AI Work?: A Beginner’s Guide, Caltech Center for Technology Management & Education
• Generative AI and Chatbots, Temple University