
In this episode of On Boards, hosts Joe Ayoub and Raza Shaikh welcome Andrew Sutton, an attorney with the law firm McLane Middleton and an expert on AI ethics and the use of AI in law. As a founding member of his firm’s AI practice group, Sutton brings his knowledge of AI to this discussion of the ethical, legal and governance aspects of AI.
As AI continues to gain prominence, boards will have to consider how they can incorporate AI into their companies and into the boardroom. Our discussion dives into the management of AI, implementation strategies, and how to develop trust in AI systems.
Key Takeaways
- Andrew Sutton’s background in AI
From building computers to founding an artificial intelligence group at McLane Middleton, Andrew has always been a tech enthusiast. His work spans a variety of areas, including cybersecurity, privacy, and AI’s role in corporate strategy.
- AI in the boardroom
Andrew emphasizes the importance of boards addressing AI proactively as technology continues to evolve. Companies must be prepared to discuss the implications of AI implementation at the highest levels, especially given the growing expectations from stakeholders.
AI is already embedded in everyday tools (cell phones and Internet browsers, to name a couple of obvious places), but boards must become much more intentional in how they use generative AI.
- AI governance and organizational structure
A “top-down approach” is key when it comes to AI governance. Boards should collaborate closely with technology teams, consultants, and managers to create clear policies and strategies for AI.
Governance will require coordination among various departments and committees to cover risk, business, and IT. Some companies are appointing Chief AI Officers to drive implementation.
- Building trust in AI implementation
Companies need to create a robust structure with an AI model limited to company data and a person confirming the accuracy of the system’s outputs.
AI models need to be well-maintained and frequently tested to ensure there are no biases or hallucinations.
Quotes
“Taking that first step really needs to happen now, and that should be the emphasis for every board because I believe that the shareholders are expecting that the boards are going to be on top of this.”
“AI is different because it changes the way that people work. It changes how human capital is deployed by adding a degree of automation into processes that were otherwise knowledge and education based and human decision oriented.”
“If you’re not moving forward with this, you risk being left behind. It is transformative in a way where in 5 or 10 years you might not be a relevant player…”
“An important part is having a robust structure in place that allows you to trust the AI… If you know that your data is good and your AI is limited to your data, and your model is tested and regularly maintained, then you can have confidence that what’s coming out of the AI is probably accurate.”
Guest Bio
Andrew Sutton is a founding member of McLane Middleton’s Artificial Intelligence Practice Group, with work experience that includes artificial intelligence policy and ethics, the use of AI applications by employees, acceptable use policies, AI deployment and strategy, AI application assessments, consumer protection concerns, robotics, and the deployment of AI technologies in the physical world. Andrew’s experience also includes cybersecurity, privacy, and corporate work, including complex transactional and real estate issues.
Andrew is a co-author of AI and Ethics: A Lawyer’s Professional Obligations, which is included in the American Bar Association’s publication Artificial Intelligence: Legal Issues, Policy, and Practical Strategies, published in 2024. He is a founding appointee to the Massachusetts Bar Association’s Artificial Intelligence Practice Group and a member of the Boston Bar Association’s Senior Associates Executive Steering Committee. Andrew regularly presents to local and national audiences regarding matters involving the ethical use of artificial intelligence and the use of artificial intelligence in connection with the practice of law.
Links
Corporate Governance Institute: Guide to AI in boardroom decisions
Artificial Intelligence: Legal Issues, Policy, and Practical Strategies
Transcript:
Joe: [00:00:00] Hello, and welcome to On Boards, a deep dive into what drives business success. I’m Joe Ayoub, and I’m here with my co-host, Raza Shaikh. Twice a month, On Boards is the place to learn about one of the most critically important aspects of any company or organization: its board of directors or advisors, with a focus on the important issues that are facing boards, company leadership, and stakeholders.
Raza: Joe and I speak with a wide range of guests and talk about what makes a board successful or unsuccessful, what it means to be an effective board member, and how to make your board one of the most valuable assets of your organization.
Joe: Before we introduce our guest, we want to thank the law firm of Nutter McClennen & Fish, who are again sponsoring our On Boards Summit this year, which will take place in October in their beautiful conference [00:01:00] center in the Boston Seaport. They’ve been incredible partners with us in every way. We appreciate all they’ve done to support this podcast.
Our guest today is Andrew Sutton. Andrew is an attorney with the law firm of McLane Middleton and a founding member of his firm’s artificial intelligence practice group, which focuses on virtually every aspect of the use of AI, including policy and ethics, AI applications used by employees, acceptable use policies, and deployment strategy. Andrew’s experience also includes cybersecurity, privacy, and corporate work, including complex transactional and real estate issues.
Raza: Andrew is a co-author of AI and Ethics, A Lawyer’s Professional Obligations, which is included in the American Bar Association’s recent publication on artificial intelligence, and he regularly presents to local [00:02:00] and national audiences regarding matters involving the ethical use of artificial intelligence and the use of artificial intelligence in connection with the practice of law.
Joe: Welcome, Andrew. It’s great to have you as our guest today on On Boards.
Andrew: Thanks, Joe. Thanks, Raza. It’s great to be here.
Joe: So, let’s start with how you first got interested in artificial intelligence in a professional context and came to found the artificial intelligence group in your law firm.
Andrew: Sure, sure. I’ve always been a big tech guy. Going back to when I was a kid, I used to build my own computers, take apart the VCR and hook up all my video games at the same time and see what would happen. My legal career really has been that of a bit of a renaissance man. I handle a broad range of issues, but I find the core of my practice is somewhat corporate and [00:03:00] transactional.
Commensurate with that, some of the work that I was doing a few years ago in commercial real estate involved combining technology with physical places in the real world, and that led me into data security and privacy and cyber issues as I started to consider what would happen if a bad actor were to hack a building instead of perhaps a computer.
And then from there, as things developed, my interest in AI grew. AI really is a real-world technology. Once I ended up at McLane Middleton, I worked with John Weaver, who has similar interests, to develop the AI group so we could stay on the cutting edge of these issues.
Joe: Thanks. So, as we talked about earlier this week, AI is so important in how it’s going to be used in a company. It’s a discussion that should probably take place in virtually every boardroom. [00:04:00] So, talk a little about the framework that a board might think about as they address the use of AI for their particular company or organization.
Andrew: Yeah, I think it’s really, really a top-down approach. I think there are governance and policy issues and managerial issues that a board would want to take into consideration, working closely with their technology group and perhaps outside consultants who are able to give them some guidance on this.
The really critical piece of this is taking the first step towards implementation and moving forward in a proactive and productive way, because even though right out of the gate it’s going to be hard to determine what the ROI is on AI, it will have compounding returns.
But taking that first step really needs to happen now and that should be really the emphasis for every board because I believe that the [00:05:00] shareholders are expecting that the boards are going to be on top of this and that the businesses are going to be on top of this. So, to the extent that that isn’t already happening, it would be something that very soon we would expect to see happen.
Joe: So, in terms of governance at the board level, are companies creating new committees that are focused on AI, are they wrapping it into another committee? How are they dealing with that issue? Because it’s a pretty big issue, and if you just let it come up in the boardroom whenever it happens to come up, you’re probably not going to get to it in the depth that it requires.
Andrew: We are seeing some very robust hierarchies forming within organizations, committees and committees on top of other committees, and this isn’t really something that just gets kicked over to the IT group. I think that AI is really something different. It’s a bit of a watershed moment in terms of a shifting paradigm about how work is done. [00:06:00]
So, it’s not really something that one person or one committee is going to be able to manage organizationally. The broader your organization, the more committees are going to be needed to manage all the different pieces of this, I think.
From a governance perspective, a lot of that is coming out of developing AI acceptable use policies. That’s sort of the document that sets out a very broad sketch of what the organization is able to do right now with AI, and it might say, “We’re restricting the use of AI pending further testing and investigation of the technology,” but it’s forward-looking in the sense that we are moving towards an AI ecosystem at the organization.
Joe: But where do these discussions take place? Do they take place at the audit committee? If there’s a separate risk committee, does it go to tech? Is there a separate AI [00:07:00] and cyber group? I mean, is there a best practice that has really emerged as to how to best address it?
Andrew: I think it’s critical for the board to start with management and discuss what the implications are of AI for the organization. Because if there isn’t a good way forward from a managerial perspective, if there isn’t a plan that the management and board are working towards here, whatever you do in the rest of the organization is going to start to fall apart. So, there needs to be a very structured approach.
Starting at the board level and at the managerial level, it then gets fed into the organization. You might have a committee on the IT side who talks about legacy systems and structures. You might have a whole new group of people who come in to help assist with AI and start to see a different division forming, or you might have a chief AI officer who starts to be part of the managerial [00:08:00] structure and organization.
Joe: Are companies actually hiring chief AI officers? You’ve seen that?
Andrew: They are. I think that there’s definitely a skills gap there. There are only so many people who have the experience of implementation at this point. There’s a handful of organizations that have really begun to do this, at least with generative AI.
I think that there are issues in terms of learning at scale that are going to be bottlenecks for a lot of organizations as they seek to deploy this. You really don’t want somebody who’s kind of making it up as they go, but the pool of individuals who have hands-on experience of dealing with an implementation is somewhat narrow.
Joe: Just to talk about how important it is, where does the AI officer sit in the hierarchy of senior management?
Andrew: I’d say they sit alongside the CISO, the chief information security officer, because I think that the [00:09:00] policy and the implementation that is going to go into this really needs to stay on track with management. So, if I’m a chief executive or a board, I want to make sure that there’s a person who is able to manage this process and scale it appropriately for the organization.
Joe: How do boards go about developing a governance framework that will help drive the strategy that they’re going to employ? What are the steps that they’re taking? Is it the AI officer? Is it the tech people? I mean, do they bring in outside people? Most board members are not going to be conversant with all the issues regarding AI, so how does a board then educate itself sufficiently so that it can begin to make the decisions it’s going to need to make that will govern AI policy throughout an organization?
Andrew: I think it starts at the managerial level, and I think it starts with the chief executive officer [00:10:00] taking the board’s imperative and bringing it to the organization. So, the board says, “Hey, we want to do this. We’re ready to do AI. You need to make this part of your agenda,” and then from there, it goes into the organization, and management would determine what assets it needs in terms of human capital, what’s there or what’s missing, and seek to fill out what’s necessary to start making these decisions and start thinking about: how deeply are we going into AI? How quickly are we moving into it? What’s happening in our industry?
Then I think that once there’s a good understanding of what’s happening, and that might involve bringing in third-party consultants, it might involve management consultants, it might involve technology consultants, you go back to the board with a plan and you say, “Okay, now we’re going to work with our attorney. We’re going to draft our governance policy. We have an idea of what we want to accomplish and how we move forward,” and then from there, the [00:11:00] transformation and the structure begin to take place.
So, there’s definitely going to be an adoption period for every organization as they go through this. There’s going to be a transformative period, and it’s not going to be easy; it’s not flipping a light switch on. But I think that organizations that take the time to really dig into this at the beginning and think about the policy will start to answer the tough questions, and they’ll be able to bring that back to the board and make good investment decisions based on the information that they’ve been able to assemble.
Raza: Andrew, earlier you mentioned the watershed moment, and I think it tells us that this is different, and by this I mean AI, or the current form of AI. How do you think it is different from any other tech or tech project decision? What is so special about AI?
Andrew: AI is different because it changes the way [00:12:00] that people work. It changes how human capital is deployed by adding a degree of automation into processes that were otherwise knowledge and education based and human decision oriented. I think we’re moving towards a hybrid structure. It’s not going to be a situation where AI does everything, but I think there’s going to be a lot of delegation of tasks and information crunching to AI that will inform decision making on a faster basis, on a more detailed basis, and on a more current basis.
In terms of how it’s different, I think that we really haven’t seen anything like this maybe since the Internet, where you had a different group of people who understood how to leverage networks and information and data on the Internet and were able to get a really distinct advantage competitively in their market. I think that you’ll see the same thing happen here when we’re looking at a market based kind of competitive [00:13:00] approach.
I think that it’s important for boards to understand that if you’re not moving forward with this, you’re being left behind, and it’s transformative in that way where in 10 years, five years, you might not be a relevant player, just like Sears went out of business with its catalog over time because it didn’t take the appropriate steps to invest in this shift.
Raza: Well said, and I think I’ll add that one of the things that strikes me is that the ability to make independent decisions was a capability that software and tech never had. Ultimately, these were just tools assisting the humans, but with AI, under the right guidance and guardrails and all of that, there are a lot of scenarios where AI will be making decisions, and I think that’s the key thing that allows those workflows to be more efficient and leverage these capabilities.
Speaking of AI’s [00:14:00] capabilities, what do you think it means for the boards themselves to be using AI or the output of AI? How can that make the board’s information flow or decision making better?
Andrew: Everybody uses AI already. It’s already in your cell phone. It’s in your browsers. It’s in your Google search. So, it’s been sort of lurking behind the scenes, even if people don’t understand it. For years, we’ve had “if this, then that,” which is sort of a form of AI. We’ve had machine learning.
When we talk about boards using AI, we’re really talking right now about boards using generative AI, the specific type of AI that statistically determines what the next word you are looking for will be, based on your input, to generate an output. When boards are using information that is gathered and parsed through AI, I think they need to understand what [00:15:00] that means, and I think that’s critical for boards to learn as soon as possible.
If AI is being used to generate reports or insights or information about their organization’s processes, compliance, or regulation, they need to trust that information. So, in any instance where AI is involved, the biggest issue is going to be trust: can I trust this data? Is there something in the data that might lead to bias? Is there something in the data that might lead to a hallucination or an inaccuracy?
Because what we’re talking about here is we want to have a high level of confidence in the information that the board is using, with the greater speed and breadth of information processing that AI provides. We don’t want to sacrifice any of the quality that we have. We need to understand that if a report is AI generated, maybe we take it with a grain of salt. It’s kind of like [00:16:00] the Internet 30 years ago, where your high school teacher said, “You know, go look at the encyclopedia. Don’t hand me something that you googled.”
Joe: How does a board get comfortable with trusting AI and how they as a board are using it and how their company is employing it? What should the board be doing in order to develop enough of a trust that they can move forward?
Andrew: The important part of that is having a robust structure in place that allows you to trust the AI. So, for example, if you know that your data is good and your AI is limited to your data and your model is tweaked and tested and regularly maintained, then you can feel confident that what’s coming out of the AI is probably accurate. And if you have a person or maybe even another AI process confirming the accuracy of the AI outputs, then you can [00:17:00] say, “All right, this has been checked and parsed, and the data it started from was clean.”
One of the things that is a big criticism of AI is where the content is coming from. Is it all from nonfiction books that the model was trained on? So, there’s value in different models. One thing for a board to be cognizant of is that AI is highly asymmetrical based on processing power, capability, compute time, and training data.
I don’t have access as a consumer to the same level of AI output as OpenAI has in its own system. It could ask a question that I’m not even allowed to ask and get an answer. So, there are all these different perspectives to have as the board to say, “What is the AI we’re using? Have we invested in it? What’s it drawing from and how does it get us to that [00:18:00] place we want to be with that confidence?”
Joe: How does a board balance the need to implement it and not fall behind competitors who may be implementing it, while also avoiding false starts? Going down the wrong road could be very expensive and extraordinarily frustrating. What steps should they be taking in order to make sure that the implementation is moving quickly, but they’re not taking risks that are likely to lead to a real setback in their implementation?
Andrew: I think departmentalization is really important in implementation. I think that success and measurable ROI are really important for boards and management at this point. I think that a failure of an AI deployment would be extraordinarily costly for any organization, so it is critical to plan your AI implementation step by step to ensure that as you are [00:19:00] rolling this out piece by piece, it is being rolled out successfully: that on the technical side, it’s going out without a hitch and you can see a clear ROI, and that on the human capital side, you’re managing expectations with respect to what it means to have AI in our ecosystem, that we’re empowering our workers, not replacing our workers. So, I think there are definitely some really important pieces to that, and that’s going to be a tricky balance for a lot of organizations.
Joe: How do you determine what the ROI is in the use of AI?
Andrew: You start off by looking at the back end of the AI to see who’s using it and what they’re using it for, and you can look at the amount of time it takes to conduct certain tasks. At the outset, when you’re using an AI that isn’t really suited to a particular task, you might actually find that it’s a negative ROI, because the development of the workflow itself to get the result you want [00:20:00] is taking longer than actually doing the thing that you’re asking the AI to do on your own.
But once you have a workflow in place, and this is why I think there will be AI divisions that will sort of help you develop these workflows and program and put everything together, you’ll see exponential returns because if the AI system is able to do what you want quickly, you now have created an automation for a task, but this automation is going to be somewhat granular at the beginning.
Aside from organizations that might say, “Hey, I’ve already done this, maybe I can sell it and I’ve taken the risk and then I sell it to the rest of my industry and sort of teach people how to use what I built,” starting at this point from ground zero means you’re doing a lot of testing, so it’s step by step and very granular.
Raza: Andrew, going back to the theme of AI in the boardroom [00:21:00] itself, thoughts on, let’s say, things that come under the heading of AI that the board itself is using, for example, meeting notes or meeting minutes transcription. Perhaps the board pack and materials software that boards use is able to summarize or flag questions for the board member. What do you think about that, the use of AI by the board itself to make it more effective?
Andrew: The use of AI on board minutes and board confidential information is absolutely not recommended at this point. It’s really important for boards to understand that confidential or non-public information could potentially be leaked through an AI. It could be part of the AI’s training process.
A big thing that we’re seeing with a lot of AI vendor contracts is really trying to track down where that data is going once it sort of passes into the realm of the [00:22:00] AI process, and we’ve found latent GDPR issues. We’ve found issues that might create liability under the California AI Act, and we’ve found situations where information is being sent overseas because of some sort of round-the-clock service that’s being provided, and it’s somewhere kind of buried in the AI hierarchy and the vendors supporting the AI company.
So, it’s really, really critical for boards to understand, if they’re going to use any of these technologies, that everything stops at their organization. If you have your attorney at the board meeting to maintain confidence and give you advice, but the AI is recording, it could destroy the attorney-client privilege because the information is being sent to a third party. So, we want to absolutely make sure, before anything [00:23:00] gets into the board where the directors have liability, that everything is 100% clean. It’s almost like a data security assessment and privacy assessment at that point.
Raza: So, extending all this to maybe a little extreme: in these cases, these technologies are supposed to be, if and when used correctly, augmenting the board or assisting the board in doing their job. Can we also imagine that in the future there would be AI itself as one of the board members, and is that even possible? What does that look like, and is it real?
Joe: Well, I would just jump in and say, not replacing even one board member, but maybe you say, “We’re going to have AI in the room, and there’s a couple of people we don’t need anymore because they just gathered some basic information about some stuff, but AI has so much more, so [00:24:00] we’re going to have AI here and five board members rather than the seven we had.” I mean, that’s what I’m thinking of, that AI actually takes the place of one or more board members.
Andrew: I think that’s really tricky.
Joe: Yeah, I bet.
Raza: From a legal perspective, I’m sure you have an opinion or view, but I think what we’re talking about is: pretend that this is a board member sitting in their chair, and AI also interjects because it can understand the conversations, the context, the board materials, the organization, everything, so that it is able to interject and say, “Yeah, well, have you guys thought about this?” and that could potentially be extremely helpful. So, just putting it out there as a possibility and imagination.
Andrew: I think there would need to be some pretty significant legal reforms to get to a point where we’re seeing [00:25:00] AI replacing board members, because I think that one of the critical things about a board is that a board is human. It’s people, and there are people who are responsible for the organization and for making decisions, and if they do something illegal, they can get into trouble, whereas the culpability of an AI is very different.
If you ever look at the user license agreements for OpenAI or Google or Copilot, they have no liability for anything. They could tell you that the sky is green and that the seas are red, and too bad, that’s your fault. So, I don’t see that as necessarily something that is on the near horizon, but I do think that it could be possible for people who are training AIs to create models of different board members to think about how they would vote [00:26:00] to be able to say, “Okay, if we present something to this person this way, can we move their vote in a particular direction?”
Because we’re talking about data and preferences, and every time you vote, or you have meeting minutes, you could have an AI that could somewhat approximate what a board member might do. And for big decisions, there might be people who want to head in that direction. Having the AI help inform the board might be something that happens, but again, as long as it’s a closed system, the board and its attorneys are going to feel much better about it.
Raza: Yeah, well, believe it or not, Andrew, I know at least one company that helps law firms kind of role-play and predict what a certain judge is going to argue or how a case will proceed, I don’t know the technicalities, and be able to say, “Here are the pitfalls with this judge and here’s how you should argue,” and even make a prediction saying, “We think [00:27:00] you have a good shot if you do it this way.” So, it’s fascinating, and it is quite a slippery slope, fraught with ethics risks and reliability issues.
Andrew: I’ve spoken to some judges about that, Raza, and they are not happy. They do not like the idea that you could feed all of their decisions right into an AI and say, “Okay, am I going to start shopping for a different judge because I know that Judge Jones has this kind of perspective in these cases?”
Joe: But isn’t that just a more systematic approach to what’s already going on? People are always wondering, if they have an opportunity, is there a judge that’s going to be more favorable to this particular case? This just systematizes it. What’s the difference?
Andrew: The distinction is that AI can see patterns that people don’t see. It’s like when you have a doctor and the AI does a better job of seeing squamous cancer cells, then the AI is just better at… [00:28:00]
Joe: So, why would judges be more upset if they’re being selected based on AI feedback rather than just lawyers sitting around going, “That judge did this in this case, you got to avoid him.”
Raza: They’re just saying I shouldn’t be predictable.
Joe: Yeah, everyone’s predictable.
Andrew: I think that it speaks to the asymmetry of information and asymmetry of access that AI presents as an issue. One thing that judges like about AI is that it could provide access to justice, that it could help people who are indigent and can’t afford attorneys to sort of go through the legal process without falling into procedural traps or something like that. But they definitely don’t like the idea of it, say, rendering a decision in their place or being used to sort of manipulate what could happen in the courtroom. That is something that the judges do not like.
Raza: Of course.
Joe: Well, yeah, I mean, I understand it’s going to be a hard thing to [00:29:00] balance the enormous impact that AI can have in a positive way without allowing it to basically substitute artificial intelligence for human intelligence in making the basic decisions that create the whole rule of law.
As you think about it, do you want artificial intelligence to be processing this and creating the framework for the future of how law is actually viewed and implemented in this country? I mean, there are already some issues about the rule of law. If you add the artificial intelligence factor, where does that take us?
Andrew: That takes us to a lot of the due process cases and litigation that’s out there on AI right now. I mean, it’s not generative AI that has spawned a lot of litigation. It’s systems like facial recognition systems [00:30:00] or systems that are looking at applications and making decisions about an applicant based on some pattern in their resume, with an inherent bias in the system. There are cases involving unfair hiring practices where an AI system was parsing resumes and discarding them based on gender, because the AI was trained on the data that was already at the corporation, and it said, “It’s 75% male, so I’m only going to look at male applications.”
There are airport systems that have been using facial recognition and pulling people over because their skin is a different color and violating due process on that basis. So, it’s a very fine line and that’s why we get worried about the bias in the system. We get worried about the data and how it’s trained and that’s a challenge.
Raza: And all these examples have really highlighted that this is fraught with ethics considerations, [00:31:00] risks, and reliability problems. From the board’s perspective, how ought they think about providing guardrails and systems for addressing these concerns with AI? Because on the other hand, there’s a lot of opportunity for organizations to take these risks.
Andrew: I think that boards and organizations as a whole need to be smart about AI touch points. They need to be smart about data. You don’t want the AI being a primary source of contact. There was an Air Canada case involving an AI where there was some liability, and Air Canada tried to assign that liability to a company that just owned the AI asset. The AI essentially violated its own policies; I think it refused a refund to a customer who was entitled to a refund, and there was a lawsuit, and it turned into a publicity disaster.
So, when we are [00:32:00] dealing with these things and the board’s looking at these things, you have to say, “Where does my control of this begin, and where does my control of this end? Is the result something that is clear and expected or traceable?” Because you don’t want a system that’s just a black box AI. So, it all comes back to this trust issue of building a system that has repeatable results, that has good information, and that is acting the way that you want it to act. And that’s an expensive proposition; the upgrades and the time and the personnel, that’s a big bite, but it’s a bite that somebody, or the organization, is going to have to take at some point. So, really, we’re in the stage of the runway where we’re figuring out how much runway we need to get into the air with this thing.
Joe: And how much time do you really have to [00:33:00] not lose the competitive advantage?
Andrew: Right.
Joe: Andrew, it’s been great speaking with you. Thank you so much for joining us today on On Boards.
Andrew: It’s been a pleasure and thanks guys.
Joe: And thank you all for listening to On Boards with our special guest, Andrew Sutton.
Raza: Please visit our website at OnBoardsPodcast.com. That’s OnBoardsPodcast.com. We’d love to hear your comments, suggestions, and feedback. And if you’re not already a subscriber, please be sure to subscribe at Apple Podcasts, Spotify, or wherever you get your podcasts. Remember to leave us a five-star review.
Joe: And we hope you’ll tune in for the next episode of On Boards. Thanks.