AI Nightmare Fueling Lawsuits - Steve Britt, AI & Data Privacy Expert, Britt Law
AI Nightmares Fueling Lawsuits?! ⚖️
What if using AI could land you in court?
Industry veteran and host, Doug Haugh, sits down with Steve Britt, a national AI and data privacy expert and managing partner of Britt Law, to discuss the complexities of data privacy and AI.
The insights are mind blowing.
One wrong step and your AI could be a liability magnet.
Want to avoid disaster?
Get the insights you need to navigate the legal minefield of AI development, from copyright infringement to data governance.
Discover the proactive steps businesses MUST take to protect themselves and their customers.
Learn how a simple oversight could lead to catastrophic consequences.
This isn't just about compliance; it's about the future of your business.
You'll also discover:
Why a Data Protection Assessment is your AI lifeline
How to navigate the personal data in AI models
The hidden dangers lurking in vendor agreements
A surprising FTC remedy that could destroy your AI
Your key to using customer data without violating privacy
=> Are you confident your company is protected in the age of AI?
------
Follow Doug Haugh on Linkedin: https://www.linkedin.com/in/douglashaugh/
Follow Steve on LinkedIn: https://www.linkedin.com/in/stevebrittlaw/
--------
This episode is powered by NewTide.ai, Enterprise AI Built for the Fuels and Convenience Industry. Learn more here: https://newtide.ai/
Transcript
Speaker A:Whoops.
Speaker B:All right.
Speaker B:I just turned that on, so.
Speaker A:Okay.
Speaker B:Because I've forgotten in the past, I'm like, oops, then we gotta.
Speaker A:Yeah, there's a Ropes and Gray blog post.
Speaker A:There's a Delaware case from February of '25.
Speaker A:There's the Thomson Reuters case, and the question of: does training LLMs on copyrighted data constitute infringement?
Speaker A:It's a February 2025 district court opinion out of Delaware.
Speaker A:And I haven't read the full blog yet, but initially it says yes.
Speaker B:Right.
Speaker A:Oh, that's, I mean, it's, that's, that's a problem.
Speaker A:I don't, I don't know what the solution is going to be, but I.
Speaker B:Mean, I think, yeah, because the models have trained on, I mean, everything that's on the Internet and there's a lot of stuff on the Internet that is copyrighted, right?
Speaker A:Absolutely.
Speaker A:Yeah.
Speaker A:So it's hard to see how there's not some infringement in there.
Speaker A:But anyway, they specifically rejected a defense of fair use.
Speaker A:And people are always citing transformative use to me.
Speaker A:But is it transformative?
Speaker A:I don't know.
Speaker A:But I don't want to rely on a defense built on an interpretation of case law for transformative use.
Speaker A:That's not where I want you to be; as your lawyer, we've got to get your license rights clear.
Speaker B:Right.
Speaker B:Good, good.
Speaker A:That's good.
Speaker A:This will be good.
Speaker B:I'll kick us off with an intro and then typically Ben will edit us back to that point.
Speaker A:Yeah.
Speaker A:Cool.
Speaker B:All right.
Speaker B:Hello.
Speaker B:Welcome to the latest episode of Fueling AI.
Speaker B:Thanks for joining us today.
Speaker B:Joining us is Steve Britt, a national AI and data privacy expert.
Speaker B:He's the managing partner of Britt Law and serves as general counsel to the National AI Association in Washington, D.C.
Speaker B:An experienced corporate and software licensing attorney, he's passionate about data privacy, artificial intelligence, data management, SaaS and cloud data hosting, transactions, capital raises, and M&A.
Speaker B:So thanks for joining us today, Steve, and really appreciate you bringing us some perspective on what I think is a lot of questions out there about, you know, how we can use data, how data is protected, how do we protect our own data, how do we use software and platforms and services that we have confidence in are not infringing on other people's data rights or privacy.
Speaker B:So really important perspectives.
Speaker B:I really appreciate you joining us today and maybe start us off where, you know, how did you get interested in AI?
Speaker B:What brought you here?
Speaker B:I'm sure it's been a long journey, as it has for most of us.
Speaker A:Well, thank you very much.
Speaker A:I'm thrilled to be here.
Speaker A:I love your company and what you've been doing since we first met, and I'm very excited about the NAIA appointment just a couple of months ago.
Speaker A:And I always hasten to add, I now hold three international certifications for data privacy and AI.
Speaker A:The CIPP/E for Europe, or GDPR, the CIPM for privacy management, and the brand new one this year, the AIGP, which is Artificial Intelligence Governance Professional.
Speaker A:People laugh that I'm always driving these logos, and I say it's the only way anyone knows you know anything.
Speaker A:And the AIGP, which I just took three months ago, changed my whole perspective on data issues.
Speaker A:I'm so deep in data privacy, and of course cyber, going back to the data breach days.
Speaker A:But AI is a huge game changer.
Speaker A:So I'm thrilled to spend some time with you and give you my perspective.
Speaker A:As we both know, it's evolving so quickly.
Speaker A:Laws and regs are passing.
Speaker A:Who knows where it's all going to go.
Speaker A:But I'm eager to kind of fly over those issues for you and, and try to add some value.
Speaker B:Okay, well great.
Speaker B:Well, maybe start with.
Speaker B:You just mentioned earlier when we were warming up, you know, a recent case, I mean, I think one of the things that we're seeing and we get questions from clients about, okay, you know, one thing is, and we'll get to their data in a minute because I think that's very, very important.
Speaker B:But it seems like a lot of the case law that's evolving, and some of the litigation, is more about, you know, how the model builders have used data, where they've gotten the data, is it protected?
Speaker B:Did they infringe on other people's rights in creating the model?
Speaker B:Because obviously, you know, we're model agnostic, so we'll use ChatGPT or Claude or Gemini, or, you know, we've even installed DeepSeek and are using some of the open source models.
Speaker B:So, you know, how do you see that playing out, and what are the latest developments?
Speaker A:Boy, it's incredibly complex.
Speaker A:When ChatGPT was first released in '22, you know, hallucinations and copyright infringement were the first kind of driving legal issues.
Speaker A:And I was just saying, there's a Delaware federal court case from just last month, February of '25, on the question of: does the use of copyrighted information in AI models constitute infringement?
Speaker A:And this district court case, Thomson Reuters v. Ross, I think it is, said that in fact it does, rejecting a fair use defense.
Speaker A:There are some facts of that case which might distinguish it, but really, the legal issues around AI at all levels are incredibly complex and moving so fast.
Speaker A:So whatever anyone says today may change tomorrow.
Speaker A:And at the large overview, when I talk about data issues, the one thing to note is that data privacy is all about the collection and use of personal information under the data privacy statutes, super broad, like GDPR.
Speaker A:And now we have 23 state data privacy laws, 20 of which will be in effect by the end of this year.
Speaker A:So it's all about personal information.
Speaker A:And AI is about the machine use of data.
Speaker A:It doesn't have to be personal information.
Speaker A:And so the real convergence of issues here is if the AI model used personal information, you've triggered both regimes.
Speaker A:So you have to do a data privacy analysis of the data and the data rights that attach to the users whose data you put in the model.
Speaker A:And then you've got the whole compliance issues around artificial intelligence.
Speaker A:Incredibly complex.
Speaker A:So with that, where would you want to go from there?
Speaker B:Well, you know, we really opened it up there.
Speaker B:I mean, I think, I guess, you know, in terms of governance policy, I mean, at the very first level, you know, what are your recommendations to, you know, your peers in the legal trade, you know, general counsels at companies, how do they work with their CIOs, their CTO, their HR director, you know, certainly the executive team?
Speaker B:Like, yep.
Speaker B:How do you see that dynamic working?
Speaker B:And, and do they have the right resources?
Speaker B:You know, you think is, do you see corporate counsel reaching out to folks like yourselves or specialists or, you know, how does that work?
Speaker B:And we were getting a lot of, you know, a lot of our customers are just at the beginning of this journey.
Speaker B:They haven't, you know, I saw a stat the other day where 55% of the US population is using AI every week, but only 6% of companies are.
Speaker B:Which leads me to believe, well, actually, 55% of companies probably are, because their employees are.
Speaker B:And, you know, there's no official company use.
Speaker B:So.
Speaker B:But as we enter that official kind of company use regime, what are some of the first steps that these, these companies need to take?
Speaker B:Just, you know, from a policy and governance perspective to give guidance?
Speaker A:Yeah, good question.
Speaker A:And I was going to pick up on one of your facts.
Speaker A:First of all, it is an expertise.
Speaker A:I've been on so many calls, and I end up on with the general counsel, and they say, God, it's great to talk to somebody that understands it.
Speaker A:They expect me to know it because I'm general counsel.
Speaker A:I can't keep up with it.
Speaker A:And I said, no, it has reached levels of complexity that really make it an expertise.
Speaker A:On your question of how I got here, I came out of a large federal regulatory chapter back in the Internet days and everyone was looking for an Internet lawyer and they didn't know what that meant.
Speaker A:So I dove into technology and licensing, and went and got the first two of my data privacy certifications.
Speaker A:I could tell instantly when GDPR passed, it was going to be a game changer.
Speaker A:And so again, wise by omission.
Speaker A:But when AI happened, I said, oh, here it is again, a game changer, no going back.
Speaker A:And I went and got the AIGP certification and it completely reoriented my thinking.
Speaker A:And so anytime companies come to me to talk about any of these data issues, the important thing to realize is that the first thing we have to do is a data protection assessment.
Speaker A:What are your data practices?
Speaker A:What kind of data do you collect, and what jurisdictions are you collecting it from?
Speaker A:Who do you collect it from, and what did you tell them at the time?
Speaker A:And what do you do with it?
Speaker A:Who are you sharing it with?
Speaker A:And are you honoring these data subject rights that GDPR and all the state laws provide?
Speaker A:So, you know, I always start with, well, I don't know what your data practices are.
Speaker A:And for that initial assessment, I actually do a consulting-level rate; you can't pay these thousand-dollar hourly rates to do that.
Speaker A:It's got to be built inside the enterprise.
Speaker A:And frankly, you have to start with an assessment, so I form a committee, and every office which handles personal information or data has to have a representative.
Speaker A:And it's interesting when you get into companies, and I ask, well, do you purchase data lists?
Speaker A:And the CIO says, no, no, we never purchase a list.
Speaker A:And marketing says, oh, we do it all the time.
Speaker A:So you really have to kind of work through it.
Speaker A:And then I don't try to boil the ocean or confuse everyone.
Speaker A:I try to find the practical middle lane.
Speaker A:Once I find out what jurisdictions they are arguably triggering, within the context of these complex rules about how much data you're collecting from each state's residents, and are you collecting sensitive information?
Speaker A:Do you have to opt in or opt out?
Speaker A:Then I try to get a sense of, okay, where you are, and then let's try to design a policy that's unfortunately usually the lowest common denominator, which usually is California, and just design something kind of middle of the road.
Speaker A:It may not be perfect.
Speaker A:But the regulators continue to say a good faith effort toward compliance will be recognized.
Speaker A:And that's critical because it's so complicated and, and you know, companies have a business to run.
Speaker A:And I really feel that it's getting so granular and prescriptive, it's kind of not fair.
Speaker A:And so I think you try to put a path in place. On the data privacy side, all but a couple of the 23 state data privacy statutes require an annual data protection assessment.
Speaker A:On the AI side, we have three state AI acts, and of course the European Union, and they require risk assessments.
Speaker A:So this is going to become a game of assessments.
Speaker A:When I talk to clients, I say, look, let's draft an assessment; it's available to regulators on request.
Speaker A:It's not going to be perfect.
Speaker A:Let's kind of tick through your processing activities.
Speaker A:Let's analyze the potential risks versus the benefits, and then the mitigation measures, like encryption or de-identification or segmentation.
Speaker A:So I believe you can build a draft model that at least shows an effort to comply.
Speaker A:I just don't think regulators are going to be able to second-guess that.
Speaker A:Like another issue, the state laws are finally picking up the concept of data minimization.
Speaker A:And data minimization is a standard that they're trying to say means you shouldn't collect any more data than you actually use or can use in your business.
Speaker A:And they're trying to use that as a proxy for what in GDPR is called privacy by design, where you're supposed to design to be the most protective of privacy rights.
Speaker A:But US laws didn't pick up that concept.
Speaker A:And I just don't trust data minimization, whatever it means in the hands of a regulator; people have to run their business however they see fit.
Speaker A:And what I find when we go through these assessments is that it educates the client.
Speaker A:They don't have to do anything immediately, but it's going to take them into data configuration challenges.
Speaker A:Because I say, you've got to be able to tag, track, and recover individual stakeholder records.
Speaker A:That means across the enterprise, whether that's on your on-prem servers or your hosted servers.
Speaker A:Can you do that?
Speaker A:It usually sends them back to kind of some technical, some software upgrades.
Speaker A:But again, I think that for a very modest fee of doing an assessment, you get educated and say, oh, now I get it.
Speaker A:We figure out how to apply these standards to their particular business, and then you work on a game plan for coming into compliance over the next two or three years, within budget.
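To make the "tag, track, and recover" point concrete, here is a minimal sketch, in Python, of what honoring an access or deletion request can look like once records are keyed by a data subject ID. The system names and record shapes are hypothetical stand-ins, not any particular vendor's API.

```python
# Hypothetical in-memory stand-ins for real systems (CRM, billing,
# marketing); names and shapes are illustrative, not any vendor's API.
CRM = {"u123": {"name": "Pat Doe", "email": "pat@example.com"}}
BILLING = {"u123": [{"invoice": "INV-9", "amount": 42.50}]}
MARKETING = {"u456": {"list": "newsletter"}}

SYSTEMS = {"crm": CRM, "billing": BILLING, "marketing": MARKETING}

def collect_subject_records(subject_id: str) -> dict:
    """Gather every record tied to one data subject across systems,
    so an access or deletion request can actually be honored."""
    return {
        name: store[subject_id]
        for name, store in SYSTEMS.items()
        if subject_id in store
    }

if __name__ == "__main__":
    # An access request for u123 pulls CRM and billing records;
    # marketing holds nothing under that ID, so it's simply absent.
    print(collect_subject_records("u123"))
```

The point of the sketch is the prerequisite Steve describes: until every store can be queried by one subject identifier, rights requests can't be honored reliably.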
Speaker B:Right, right.
Speaker B:Well, I think one of the new issues coming up, you know, as customers and our clients and everybody tries to implement AI.
Speaker B:You know, we all start with data because that's, you know, that's the heart of the matter.
Speaker A:Right, right.
Speaker B:Otherwise you have a generic intelligence; I tell folks, you know, part of NewTide's mission is to give people an experienced hire, not a college grad.
Speaker B:Right.
Speaker B:So we're going one step further to try to get them down the path of understanding our industry, our standards, our terminology, our, our infrastructure, you know, all those types of things.
Speaker B:And that all starts with data.
Speaker B:Obviously, you know, that takes ingesting tons of public data.
Speaker B:Right.
Speaker B:But most of the questions come about when it's like, okay, I'm going to start loading my company data.
Speaker B:You know, obviously there's privacy and data governance concerns around that.
Speaker B:One of the things that we think is most important for clients to realize is that, you know, you should do that in such a way where you're not basically taking all of your private company data and training a public model.
Speaker B:Right.
Speaker B:While that isn't like taking your documents and posting them to Google, the model is interpreting them.
Speaker B:Once that AI reads your information, it does recall and understand it; it may not regurgitate it verbatim, but it could certainly convey that information to others.
Speaker B:Right.
Speaker B:If it's.
Speaker B:Once it's in its model, it's in its corpus of knowledge.
Speaker B:Right.
Speaker B:So, so I think we start with that basic.
Speaker B:But what I wanted to talk to you about today was something kind of grander than that, because you've spent so much time on data privacy and data governance and those issues: how do you view a company's efforts?
Speaker B:They deploy their first AI platform, whether it's ours or others.
Speaker B:You know, anybody in that position is going to say, well, to make this thing useful, I need to train it on my information, right?
Speaker B:From my company, my transactions, you know, my, my contracts, my customers, you know, my practices, my policies, you know, all of that type of thing to make the environment in which I'm using that AI, you know, custom fit to, to what I want, what I'm trying to get done, the business I'm trying to conduct and to as much as possible how I conduct it.
Speaker B:Right.
Speaker B:Even down to, you know, kind of cultural norms for the organization in terms of how they treat people, how they treat each other, how do they converse with each other?
Speaker B:Because they're now conversing with, you know, the agent.
Speaker B:So, you know, as we do those things, how should clients think about that in terms of the privacy concerns and just data governance?
Speaker B:Assuming they have picked a supplier or vendor or provider that is giving them the basics, right?
Speaker B:Like hey, we're, this is yours, you own it, it's in your private environment, it's not being shared with the public models.
Speaker B:Any training data is going to remain yours.
Speaker B:And that's our approach as well as others, very different than a public use.
Speaker B:And that's why I say that first stat I was talking about is so important: if we have 55-plus percent of the population, meaning our employees, using these things, that's where it's really important to get a governance policy in place, and to at least define that it's not about trying to stop that use; it's about trying to govern it and make sure it's appropriate.
Speaker A:Yeah, and that's really where the whole efficiency of AI, the promise of AI, is: being able to incorporate it into your business.
Speaker A:And responsible AI governance really has to begin at the planning stage.
Speaker A:Why are you using AI?
Speaker A:Are you using a third party tool?
Speaker A:Or are you building your own? You really have to do that analysis.
Speaker A:And the formation of an AI governance board or committee is meant to kind of evaluate the potential risk, put in place machinery to mitigate the risk, look at whether the data is valid.
Speaker A:You know, you have training data, then you got to test throughout the life cycle.
Speaker A:Is that model still doing what you thought it was supposed to do?
Speaker A:And all these broad standards of transparency and explainability and security and resilience, all these kind of responsible AI governance models, many of which don't have well established standards of how to apply them.
Speaker A:So this is really an evolving process.
Speaker A:But when I go to look at a business, to look into what they want to do and why, you instantly know whether the use case, the use of that model, the automated decision making, is touching consumers or end users and making decisions about them.
Speaker A:The EU act for high risk, and the Colorado act for consequential decisions, cover those areas of education, employment, banking, government, public benefits, housing, insurance.
Speaker A:If you're using a model that's touching any of those elements of the economy at the user level, then you're in the most sensitive areas; but if you're using it in your business for efficiencies in the business, that's the best place to be.
Speaker A:Now we can talk about the use of data to train the models.
Speaker A:California just passed a statute that Newsom signed the end of last year.
Speaker A:They passed like four AI statutes, and one of them made clear that the California Consumer Privacy Act applies to personal information in an AI model if it's possible for it to be extracted or accessed.
Speaker A:So that means that potentially the users are getting rights to your models and your weights and things.
Speaker A:So at the development level, I think, well, can we get it out of a category of personal information?
Speaker A:Can you de identify it?
Speaker A:Can you use synthetic data?
Speaker A:Are there ways to get you 90% of the benefits of that data without actually using the data?
Speaker A:Because this area of personal information in the models for development is just a very hot area, and I don't trust where the courts and even the legislators are going to go.
Speaker A:The other thing about first party data, which is your data, your users, your customers, your clients, your website visitors: generally that's recognized as the most wide open right for you to use it.
Speaker A:However, you have to do a little analysis: was it collected under a disclosure that said you would put it in a model?
Speaker A:So I think that's why you start to see changes in user terms.
Speaker A:They're trying to pick up that consent factor for the use of that data, with a disclosure that now applies: we want to use it in our AI models.
Speaker A:And that's what I would tell clients: try to get that consent from your users.
Speaker A:And if they're your clients, you'll probably be able to do that in a modest, non-shocking way.
Speaker A:Especially if you're telling them it's not used for any other purpose.
Speaker A:We don't share it with advertising bureaus and so forth.
Speaker A:But it's that level of kind of looking at what are you trying to do, what are you using, what are the rights that attach to that?
Speaker A:And then if you're using a third party vendor, what are the rights that attach to that?
Speaker A:Like they may say that you own your inputs and outputs, but I see in some of these platforms terms of use where they're reserving for themselves the ability to use that data for their own model.
Speaker A:So it's granular.
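As an illustration of the de-identification idea Steve raises, here is a minimal sketch that pseudonymizes direct identifiers with a keyed hash before data reaches a training pipeline. Treat it strictly as a sketch: keyed hashing alone may not meet the de-identification bar under CCPA or GDPR, and the field names and pepper value are illustrative assumptions.

```python
import hashlib
import hmac

# Secret pepper held outside the training pipeline; illustrative value.
PEPPER = b"rotate-me-and-store-in-a-vault"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash, so records can
    still be joined and later deleted, but raw PII never enters the model."""
    return hmac.new(PEPPER, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def scrub_record(record: dict, pii_fields: set) -> dict:
    """Return a training-ready copy with identifier fields pseudonymized."""
    return {
        k: (pseudonymize(str(v)) if k in pii_fields else v)
        for k, v in record.items()
    }

if __name__ == "__main__":
    raw = {"email": "pat@example.com", "zip": "22201", "purchases": 7}
    print(scrub_record(raw, pii_fields={"email"}))
```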
Speaker B:Yeah.
Speaker B:And I think that's one of the things we advise folks to be very careful about; there's a sort of blind spot that can develop where it's like, oh well, we have the inputs and outputs, of course it's our data.
Speaker B:I'm like, you do need that extra protection, because to the extent that the service provider is using that data to train, develop, or improve their model, that model is then something they own, right?
Speaker B:If they're going to do that, then it needs to be very explicit and you need to know what you're agreeing to.
Speaker A:Right.
Speaker B:And I think this comes down, I'm seeing kind of two sort of categories, right?
Speaker B:There's what I as a software technologist, you know, I look at embedded AI, which is, you know, SAP or Oracle or Microsoft, you know, they're all putting AI capabilities into their platforms, right?
Speaker B:So as you use those platforms, you're going to be using that embedded AI capability.
Speaker B:And I think, like you said, getting clarity on how much of the interaction that you're providing in using that platform is going back into the software provider's AI model versus something you own.
Speaker B:It needs to be clear, right?
Speaker B:You might be perfectly fine with that.
Speaker B:If it's generic use and it's de-identified and it's just making your accounting system run faster or in a more automated fashion, great.
Speaker B:But I think it's definitely an area where we're encouraging folks to really ask those hard questions before they agree to new terms of service, or turn on a new feature that, hey, just comes with your platform now, it's free.
Speaker B:Well, it's free, but when I hear free, it tells me I'm paying for it some way.
Speaker B:And I think the other category is, so you've got the embedded AI protections and exposure.
Speaker B:And then a lot of what companies like ourselves are doing is helping companies build their own AI; like, this is yours.
Speaker B:And in fact, we don't, can't, and wouldn't use any of your data to train our AI, you know, a generic system-level AI.
Speaker B:Right, right.
Speaker B:And I think that needs to be explicit for customers looking for those kinds of solutions from providers.
Speaker B:Because if it isn't, you know, some of your most sensitive company information is in effect being ingested and embedded in someone else's model, which to me is scary.
Speaker B:Right.
Speaker B:If I've spent decades accumulating thousands of legal contracts and agreements, going through negotiations, some of which may have taken months to come to an outcome that was acceptable and favorable to my business.
Speaker B:That is part of how I make money, compete, and win in the marketplace.
Speaker B:To have that exposed and ingested into someone else's know-how and capabilities would, to me, be a dangerous situation for companies.
Speaker B:I think so, yeah.
Speaker B:So, so that's part of it.
Speaker B:But what I'm taking from your feedback here, Steve, is that companies even internally are going to have a range of exposures.
Speaker B:Right.
Speaker B:So in our industry, you know, we work from the supply chain all the way down to the consumer.
Speaker A:Right.
Speaker B:What I'm hearing is it's those consumer-exposed touch points.
Speaker B:So say we're all the way down to the C-store, and we're interacting with our customers, who are the public; they're not an employee, and it's not a corporate transaction with a B2B customer in that sense.
Speaker B:It is an individual.
Speaker B:And if I'm collecting data there, even if it's first party data because they're transacting with me.
Speaker B:It sounds like there's still a lot more caution that needs to be employed at that end of the spectrum, rather than, hey, I'm up here automating an internal workflow where it's just transactional data, company information, and it's not about any individual person.
Speaker A:Yeah, I mean, your knowledge is so great, it's so great to talk to you because you're in all these niches, as it were.
Speaker A:So yes, on the data privacy side, the convenience store is collecting data, personal information from its users, customers.
Speaker A:And so that needs to be pursuant to a privacy notice that discloses exactly what use they will make of it, what they do with it, and who they share it with.
Speaker A:So, and that's very important on the data privacy side because those users ultimately will have rights to ask questions about what you're collecting, who you shared it with, please delete it.
Speaker A:And they may ask you to delete it, but you don't necessarily have to.
Speaker A:It starts a clock on responding to that request.
Speaker A:So a bunch of issues there that are complicated.
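On the clock that a deletion request starts: a minimal sketch of tracking those deadlines might look like the following, assuming a 45-day response window (the CCPA figure; other statutes vary, so treat the number as a configurable assumption, not legal advice).

```python
from dataclasses import dataclass
from datetime import date, timedelta

# 45 days is the CCPA response window; other states differ, so this
# is a configurable assumption, not legal advice.
RESPONSE_WINDOW_DAYS = 45

@dataclass
class SubjectRequest:
    subject_id: str
    kind: str                      # "access" | "delete" | "correct"
    received: date
    resolved: bool = False

    @property
    def due(self) -> date:
        """Statutory deadline computed from the date of receipt."""
        return self.received + timedelta(days=RESPONSE_WINDOW_DAYS)

def overdue(requests: list, today: date) -> list:
    """Requests whose statutory clock has run out without a response."""
    return [r for r in requests if not r.resolved and today > r.due]

if __name__ == "__main__":
    reqs = [SubjectRequest("u123", "delete", date(2025, 1, 2))]
    print(overdue(reqs, today=date(2025, 3, 1)))  # past the 45-day window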
Speaker A:I was going to pick up one other thought.
Speaker A:The FTC, you know, there's so many issues here; you uncover one rock and you see two others.
Speaker A:And so the FTC is taking the position that making significant changes in terms of use, like changing a company's ability to use your data in a model, may be an unfair, deceptive practice if done in any kind of surreptitious, not fully disclosed way.
Speaker A:So when companies come along saying, oh, by the way, we're changing our terms of use, please accept, there are more complicated issues there, especially if they're giving themselves greater rights to the use of your data.
Speaker A:So I just note that.
Speaker A:But with California and other states now dealing with personal information in AI models, if it's possible for it to be output or released, you can run yourself into some serious issues.
Speaker A:The other thing worth noting is the FTC's favorite remedy; there's a line of cases where, if they catch someone who's collected or used data improperly, they make them delete the data, and if it's an AI-related model, delete the model.
Speaker A:So you could suddenly find yourself facing their favorite remedy: you must delete the data, the first party data, and delete the AI model that has it in it.
Speaker A:Whoa.
Speaker A:And so that could be very expensive.
Speaker B:If you've made that investment proactively, and even more expensive if you've embedded it.
Speaker B:You know, if you have it executing processes for you.
Speaker A:Yeah.
Speaker B:But I think if you have to back it all out, that would be difficult.
Speaker A:But I think in the development stage, it is thinking through what's the type of data, how is it used, how do I protect myself: synthetic data, segmented data.
Speaker A:I always tell clients, whatever data you're putting in a model, I'd like you to segment it.
Speaker A:I'd like the training data or the testing data segmented on a server somewhere, under lock and key, so we can prove that's the data that was used.
Speaker A:But you know, you're the technologist and I don't know how reasonable that is all the time.
Speaker A:But yeah.
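Steve's lock-and-key segmentation point can be implemented as a simple provenance manifest: hash every file in the locked-down training snapshot so you can later prove exactly which data went into a model run. A minimal sketch, with an illustrative directory path:

```python
import hashlib
import json
from pathlib import Path

def manifest(training_dir: str) -> dict:
    """Hash every file in a segmented training-data snapshot so you can
    later prove exactly which data was used for a model run."""
    entries = {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(training_dir).rglob("*")) if p.is_file()
    }
    # A hash of the whole manifest gives one fingerprint for the snapshot.
    snapshot_id = hashlib.sha256(
        json.dumps(entries, sort_keys=True).encode()
    ).hexdigest()
    return {"snapshot": snapshot_id, "files": entries}

if __name__ == "__main__":
    # Point at the locked-down copy of the training set (path is illustrative).
    print(json.dumps(manifest("./training_snapshot"), indent=2)[:400])
```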
Speaker B:I think with modern platforms, I mean, one of the most important kind of features and capabilities, in my belief, of enterprise software versus using consumer-grade stuff in your company.
Speaker A:Right.
Speaker B:Is that most enterprise software platforms, even back to basic accounting systems from the old days, or as ERPs evolved, and CRM, you always want auditability built in.
Speaker B:Meaning, you know, not only was this data used in the model, where did it come from, is it in the model, is it not?
Speaker B:But who did it and who loaded it, because you get into permissions and roles and other types of data constraints and governance that you want to impose on your users, to protect themselves from consequences but also to protect the company information and how it's used; and there's good old-fashioned fraud and other reasons to audit that data.
Speaker B:Right.
Speaker B:So, so I think most enterprise systems on an integrated basis will have very good, you know, auditability.
Speaker B:Right.
Speaker B:Knowing what went in, where it came from, when it happened; and hopefully, if it's a modern platform with the right controls and user permissions, it'll know who did it.
Speaker B:Like who actually loaded that data and when did they do it.
Speaker B:So perhaps even if you did get a consent decree and you needed to do some remediation, you know, hopefully you could back up to just that point in time where you know, it was introduced and not lose all your work.
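A minimal sketch of the auditability Doug describes: an append-only ingest log recording who loaded what data and when, with each entry chained to the previous one so after-the-fact tampering is detectable. Names and storage are illustrative assumptions; in practice this would live in an append-only table or WORM storage rather than a Python list.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice an append-only table or WORM storage

def record_ingest(user: str, source: str, payload: bytes) -> dict:
    """Append who loaded what data and when, chained to the prior entry
    so after-the-fact tampering is detectable."""
    prev = AUDIT_LOG[-1]["entry_hash"] if AUDIT_LOG else "genesis"
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "source": source,
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
        "prev": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    AUDIT_LOG.append(entry)
    return entry

if __name__ == "__main__":
    record_ingest("jsmith", "crm_export.csv", b"...rows...")
    record_ingest("jsmith", "contracts.zip", b"...docs...")
    print(json.dumps(AUDIT_LOG, indent=2))
```

With a log like this, the remediation Doug imagines, backing up to the point in time where a problem data set was introduced, becomes a query rather than guesswork.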
Speaker A:Right, yeah. Man, that makes the case for it, and thank you for that.
Speaker A:That's encouraging for me to hear, because I'm over here in the legal weeds.
Speaker B:Right, right.
Speaker B:Well, you know, when I think about sort of that first person data, it raises an interesting question.
Speaker B:A modern C-store today might have as many as 100 cameras in it.
Speaker B:If it does self-checkout, if it's automatic. I don't know if you've ever walked into one of the Amazon Go stores, but they have.
Speaker B:I wouldn't even venture to guess how many cameras are in those stores, because the ceiling is carpeted with them.
Speaker B:Right.
Speaker B:It's sort of how it works.
Speaker B:But setting aside that example, which is fairly extreme.
Speaker B:If you go into a normal, modern local convenience store, there's probably 40 or 50 cameras between the forecourt, throughout the store, in the office, you name it, right?
Speaker B:Hopefully providing mainly for security purposes; in the past, unfortunately, stores in our industry do get robbed fairly frequently.
Speaker B:Like when I was running a chain, one or two a week was not unusual, and it's very traumatic.
Speaker B:I mean, a lot of times you have, you know, associates and employees, you know, threatened at gunpoint, which is terrifying for all of us.
Speaker B:So those cameras, you know, they were put in for those purposes.
Speaker B:Oftentimes, you know, there is one over the register monitoring transactions and how cash is handled and such and things like that.
Speaker B:But my question for you is: those feeds now are being fed to computer vision models, and I see tremendous potential here.
Speaker B:But it does raise some questions about this first person data and permissions.
Speaker B:You know, in the old days you'd walk in and there'd be a, you know, a sticker on the door, maybe said, smile, you're on camera.
Speaker B:But how does that impact the use of that data now, where you have an AI essentially consuming that video in real time, all the time, 24/7; it's not just being recorded to a storage medium, it's being interpreted live, right?
Speaker B:Yeah, as it streams.
Speaker B:I mean, that's the power of it, right?
Speaker B:I mean, I've looked at applications that, even on the borders of the property, read license plates as the cars come in and associate those with perhaps a loyalty customer, so that before they even get to the pump or get in the door, I can push them an offer and say, hey, I see you're coming in.
Speaker B:You know, it's really cold out today.
Speaker B:How about a free small coffee if you buy a Danish, or something like that?
Speaker B:So very proactive, one-to-one marketing types of capabilities that have a lot of people in the industry pretty excited to be able to do that for their customers.
Speaker B:But what are the implications if I start feeding all those camera feeds into an AI model that's digesting them and in fact using them for different purposes?
Speaker A:Boy, I thought advertising was ground zero.
Speaker A:This really is ground zero.
Speaker A:And it's very well framed.
Speaker A:Makes perfect sense.
Speaker A:Videos, of course, are personal information, and one other area is using videos for employment and hiring, and then applying software to analyze them.
Speaker A:There are two or three specific statutes on the use of videos for employment, and on using software to analyze the responses in a video interview.
Speaker A:But to your point, the state data privacy laws are really focused on the use of sensitive information and on profiling of users.
Speaker A:There's a specific requirement: usually it's either a right of opt-out, or a right of opt-in where you have to secure consent for that purpose.
Speaker A:It is a perfectly understandable business use case.
Speaker A:There haven't been a lot of cases yet.
Speaker A:But on profiling users, identifying them and then associating data with them, there's just not a lot of clarity.
Speaker A:Where the data privacy laws really go is at sharing the data with someone else for profiling.
Speaker A:So if you're doing it for yourself, you're on stronger ground.
Speaker A:These are your users, your customers.
Speaker A:But I'd be very careful; in my privacy notice, I'd always be looking to talk about deletion or retention of that data.
Speaker A:And the other key thing, of course, is wherever it's stored and used, that it's, you know, secured and protected.
Speaker A:But I think that's really the benefit of the technology, in loyalty programs and others; and even the data privacy laws go toward notice that that's a possible use.
Speaker A:Either an opt-in, affirmative consent to do the collection, or an opt-out if they tell you they don't want you to do that anymore.
Speaker A:So it takes you into a business process discussion to make sure you can actually honor these rights which the statutes are granting to the natural persons whose data you're collecting and using.
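The opt-in versus opt-out distinction Steve describes can be enforced as a simple gate in the processing pipeline. A minimal sketch follows; which purposes require affirmative opt-in is a configurable assumption that varies by state statute, and the purpose names are illustrative.

```python
from enum import Enum

class Consent(Enum):
    OPT_IN = "opt_in"
    OPT_OUT = "opt_out"
    UNKNOWN = "unknown"

# Illustrative policy: which purposes need affirmative opt-in versus
# only an honored opt-out; the split varies by state statute.
OPT_IN_PURPOSES = {"profiling", "sensitive_data"}

def may_process(purpose: str, consent: Consent) -> bool:
    """Gate a processing purpose on the consent state on file."""
    if purpose in OPT_IN_PURPOSES:
        return consent is Consent.OPT_IN       # affirmative consent required
    return consent is not Consent.OPT_OUT      # allowed unless they opted out

if __name__ == "__main__":
    print(may_process("profiling", Consent.UNKNOWN))       # False: no opt-in yet
    print(may_process("loyalty_offers", Consent.UNKNOWN))  # True: no opt-out on file
```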
Speaker B:Yeah, makes sense.
Speaker B:I mean, I think with the loyalty programs, it's very explicit, right?
Speaker B:You're agreeing, you're signing up, you're volunteering your information.
Speaker A:Right.
Speaker B:You're agreeing to a license of use that a lot of people probably don't read, but it's provided to them.
Speaker B:They scroll through it and click through when they do that.
Speaker B:I guess my questions are more about passive collection.
Speaker A:Right.
Speaker B:Where, you know, I may never have even visited that store before.
Speaker A:Right.
Speaker B:And I just walked in that day, but now you're filming me and observing my behavior, using that to interpret or profile, as you say, different behaviors that might be possible.
Speaker A:Right.
Speaker A:I mean, certainly the disclosure; we've seen the signs for many years.
Speaker A:The requirement to disclose that they're subject to video.
Speaker A:And that's not even a data privacy statute.
Speaker A:That's kind of an independent set of rules.
Speaker A:But it's just an area to keep an eye on in terms of your use of that data.
Speaker A:Which kind of takes me back to the idea of having a deletion policy if they're not a customer.
Speaker A:And the, the EU act and others are really focused on law enforcement use.
Speaker A:I mean, data scraping for identification purposes is a violation of GDPR if done by law enforcement.
Speaker A:So law enforcement has to be very careful and license plate readers have been sensitive for a long time.
Speaker A:So it's, it's just, you know, it's another set of issues to keep an eye on.
Speaker B:So what I'm hearing from you, though, is showing a proactive effort to manage and govern this, and demonstrating that you've done everything you could, to the best of your capability and the current state of the art, to protect private data, to protect personal data.
Speaker B:And that you've got an approach that governs it, that you were thoughtful about it and exerted due effort.
Speaker B:That's probably the best defense, right, against this sort of ambiguity that still exists.
Speaker B:Oh, yeah, you know, because this stuff's moving so fast, case law hasn't caught up yet.
Speaker B:Specific statutes may be, you know, in place or not in place, depending on the state.
Speaker B:But regardless of that, I think it comes down to just good governance, like it does in a lot of other parts of our business: having a clear policy, making sure your employees are educated on it, making sure it's been updated and reviewed on an annual basis, like we do for a lot of different policies.
Speaker A:Right.
Speaker B:From credit to HR to, you know, trading limits and you name it.
Speaker B:Right.
Speaker B:So this just sounds like another one of those categories where executives need to be aware of it, have an approach, and hopefully get some guidance, either from their internal counsel or outside counsel like yourself.
Speaker A:Right.
Speaker B:And to keep it evolving with the times; what I'm also hearing is that it's not a static target.
Speaker A:Yeah, definitely not static.
Speaker A:And notice and consent, public disclosure of your practices, even if they are aggressive, is enormous.
Speaker A:It's the bear running after you and your buddy.
Speaker A:You don't have to outrun the bear, you have to just outrun your buddy.
Speaker A:Especially with these requirements for all these assessments that are available to the regulators; I say to people when I speak, think about it: the regulator can now sit in their office, go look at your publicly posted privacy notice, sometimes called a privacy policy, and ask you for your data protection assessment.
Speaker A:They'll know right then where you are on compliance, and they won't have to waste any time and effort.
Speaker A:So the publicly posted privacy policy is the place to start.
Speaker A:We walk through what are the actual practices, what might you want to use it for in the future?
Speaker A:What are you doing with it now, and is it secure? All data privacy and data protection statutes require reasonable data security.
Speaker A:So it takes you into storage and AWS and all those kinds of questions.
Speaker A:But again, you don't need to be perfect; you just need to show the regulator that you're attentive and trying to comply.
Speaker B:So, asking questions leads me to the next question: what should customers be asking of their vendors?
Speaker B:If we or others walk in and say, hey, we have this great new AI platform, we'd love you to use it, we think it's very powerful and saves you money, et cetera, what are the questions they should be asking us or others as regards our subscription agreements, our protections, our qualifications? Where do they start?
Speaker B:Just to kind of vet it. I mean, you've been doing software and technology for a long time, and obviously we've had EULAs, end user license agreements.
Speaker B:We've had the subscription agreements that have gone through lots of evolution; I started dealing with those 26, 27 years ago, I guess, when the whole Internet started and we first started with SaaS software.
Speaker B:But it was a very different type of licensing than the traditional, hey, I sign a license and you give me a CD.
Speaker B:It seems like there are probably a lot of consistencies with those types of scrutiny and procurement and contracting questions.
Speaker B:But are there some additional ones that customers should be thinking about as it relates to AI, a little different than the traditional software contracting or procurement cycle they've done in the past?
Speaker A:Yeah, very, very good question.
Speaker A:And we've covered some of them: how was their model built and trained, what was the source of the data, what rights attach to the model, to the inputs, to the derivative works that come out of their working for you, and to your outputs going back into their model.
Speaker A:So when I have companies, the first thing I want to start with is: let me just see a copy of their form agreement.
Speaker A:Let me see.
Speaker A:Kind of like the regulator looking at a privacy notice.
Speaker A:How sophisticated are they in these areas?
Speaker A:And those won't be answered in the policy, in the agreement itself.
Speaker A:But I'm trying to make sure there's a clear understanding of what rights attach, where they are in the development of their model, how the algorithms work, what data it was sourced on, what the outputs are, and what happens to them.
Speaker A:What are the derivative rights that arise from their working for you?
Speaker A:Those are all questions; I can prepare a set of 10 or 15 of them.
Speaker A:And we try to make it user friendly to understand, or I get on with counsel or their development team; and I think companies understand these are the relevant questions for any of their customers and licensees for the use of their technology.
Speaker A:But they are different than the normal SaaS software agreements.
Speaker A:They're just much more granular on how the tool was developed and how it works.
Speaker A:I want to know.
Speaker A:I say to the young companies: realize the customer is going to start putting provisions in their agreements for you.
Speaker A:You agree that your model was built in accordance with all applicable laws.
Speaker A:You agree that you had the right to use the data, and that the outputs of your model can properly be used.
Speaker A:So we're going to end up in the contractual reps, warranties, and indemnities argument that you end up in with an M&A transaction: if something comes out of the woodwork and some regulator or a plaintiff comes and sues, I've got to have contractual indemnity or protection, or a rep and warranty you gave me that your model complies with applicable laws.
Speaker A:But heads up, when you take your modified model into your customer base, they're going to come back to you.
Speaker A:So I want that vendor to agree to support me, the developer in my compliance obligations.
Speaker A:I need to know what you did with your model.
Speaker A:I'm going to have to build risk assessments for my customers, and I want that vendor with me on my own compliance obligations, security incidents, honoring of data subject rights; they may be holding certain data.
Speaker A:So again we're going to end up in this tiered kind of negotiation over reps, warranties, indemnities for things which happen after the fact that nobody really anticipated.
Speaker B:Right, right.
Speaker B:No, I had this discussion the other day on a certain type of indemnification, and the counterparty asked me, well, why are you asking for this? They wanted indemnification from the supplier: if the model was found to be trained on inappropriate data, et cetera, you've got to indemnify me from that if I agree to use it.
Speaker B:Certainly reasonable.
Speaker B:We were asking them for basically reciprocity on that, saying, well, we need the same thing.
Speaker B:And they were saying, well why would you need that?
Speaker B:I'm like, well, if your users who have access to the system go and inappropriately obtain a whole data set, inadvertently or overtly, if they went and stole a bunch of data and then loaded it into the model that we're now hosting and training and supporting for you, I need you to indemnify me against that.
Speaker B:Right, because I can't control that.
Speaker B:I mean, we set up the user permissions, but you're controlling access to the system for your users; if they do something ill advised, as that kind of activity would be, we're going to need to work together to protect each other in that type of situation, should we come under scrutiny through a different means.
Speaker B:So it was really the first instance for me where we had a give-and-take dialogue.
Speaker B:Like you said, it's negotiating these things.
Speaker B:I don't think there's any like hard black and white.
Speaker B:I think it's largely about conducting good business, having good contracts that make these things as clear as they can be.
Speaker B:But contracts that do have to ponder a different kind of use now for the data: how it's protected, how it's used, and where it comes from.
Speaker B:And that's why, back to the auditability, it's about making sure that we can retrace the course and say, well, where did that data come from?
Speaker B:How did it get in the system?
Speaker B:Who did it?
Speaker B:How'd they load it?
Speaker B:And you know, is that all appropriate use or not?
Speaker A:Yeah, boy.
Speaker A:I mean, very impressive.
Speaker A:And again, of course, in those indemnity clauses, like in an M&A transaction, you go to the back of the agreement, to the exceptions and the limitation of liability and damages.
Speaker A:The cap on potential damages is often equal to the fees that you paid, and you want violations of law carved out of that cap.
Speaker A:It's kind of like an M&A transaction; it drives you back into the boilerplate, which, as a lawyer friend of mine used to say, doesn't matter till it matters.
Speaker A:And it's again about being a knowledgeable, wise consumer of legal services, but also a wise executive.
Speaker A:And you've got such a great perspective and understanding of these things; ultimately, I try to train my clients.
Speaker A:I mean, I don't want them to use me for more than they need, but, but I need to get them up to speed with what they should be looking for, the questions they should be asking.
Speaker A:And the kind of weird thing is these terms don't take a lot of words to create, but they also don't take a lot of words to kind of be completely pro vendor.
Speaker A:And so you're back in the back of these agreements.
Speaker A:It really does take legal expertise to flesh out those issues and it's worth a little bit of time and effort to do that.
Speaker A:But you're, you're spot on with, with how you're seeing it roll out and it's going to continue to be the case.
Speaker A:I try to tell the early stage companies, you know, be careful.
Speaker A:You're building a product and you're all focused on the technology and your ability to use it, and you're wise enough to pull that enterprise version inside your own firewall so it can't leak out on the public platform.
Speaker A:But you're picking up a whole series of compliance obligations you need to pass down.
Speaker A:So I don't think it has to be super exotic, a few words and a few provisions.
Speaker A:You just have to have them.
Speaker A:You have to have something to kind of shield yourself.
Speaker B:Yep, makes sense.
Speaker B:Makes sense.
Speaker B:Well, I appreciate covering those issues maybe as we get close to wrapping up.
Speaker B:Tell me a little about this National AI Association that you've been helping out.
Speaker B:And I read, I was, you know, in prep, just reading up on it.
Speaker B:Sounds like there's 350 enterprises now, over 50 billion in market cap value and 120 jurisdictions.
Speaker B:And it's, you know, it sounds like a, a fairly extensive effort to get folks organized and, and have an advocacy organization to, you know, help inform, educate and, and advocate is what I read.
Speaker B:But tell me how you feel about it.
Speaker B:How's it been going, and what can they help us with?
Speaker A:Yeah, thank you.
Speaker A:And I'm going to encourage membership.
Speaker A:I mean, membership sponsorship runs from $2,500 up to $10,000 for the top sponsor.
Speaker A:And we're trying to build a coalition of interests and companies and businesses and nonprofits to drive responsible development of AI.
Speaker A:But you also pick up these data privacy issues.
Speaker A:So the Trump administration, to their credit, has put an RFI, a request for information, on the street, from David Sacks at the Office of Science and Technology Policy, the AI and crypto czar.
Speaker A:It asks quite openly: what should we do about AI?
Speaker A:They pulled the Biden executive order, which had a lot of really responsible elements to it.
Speaker A:So they pulled the Biden executive order for government agencies and they're opening this dialogue.
Speaker A:What should the AI action plan be for the administration?
Speaker A:The House Energy and Commerce Committee has also put an RFI on the street, more on data privacy, on what the committee should be looking at there.
Speaker A:So I drafted our comments at NAIA, and we've just gone through a clearance process internally and with our members.
Speaker A:And first of all, it's a hard exercise, because there's so much here for a reader that doesn't know this level of detail.
Speaker A:Our comments, which I'm sure will be publicly posted, are due on March 15th, so we'll release them on Friday.
Speaker A:And we walk through, here's the legal landscape.
Speaker A:The EU AI Act, the state AI acts, the 23 state data privacy laws, hoping to show the complexity; we've got to kind of wipe the slate clean here on data issues.
Speaker A:You know, the data subject rights, GDPR; there's a pretty good consensus around so much of this, but then the states go off in different directions.
Speaker A:So we are actually proposing an American Data Management and Artificial Intelligence Act.
Speaker A:That's an alphabet soup there.
Speaker A:And the founder said, you know, I wish we had a better acronym; it sounds like "admire" or whatever.
Speaker A:So, to try to preempt the state laws, let's get to one standard at the national level.
Speaker A:Let's help companies comply, with a real focus on innovation, deregulation, and reasonable standards for AI.
Speaker A:And we're all in this kind of AI compliance together.
Speaker A:So it's a big ask.
Speaker A:But it'll be interesting to see what the administration does.
Speaker A:And I just think it's important to take this opportunity, out of this series of requests, to see if we can right-size all of these rules and obligations, protecting innovation but also protecting victims of harassment and algorithmic discrimination, and then try to bring data privacy into some uniform set of standards, really accepting 85% of what all the states have done.
Speaker A:Let's get off all the random opt-in versus opt-out variations, the right to appeal a denial of a data deletion request, and all these kinds of special provisions.
Speaker A:It'll be interesting to see, but I'm very excited about it.
Speaker A:You know, I had my own regulatory chapter all the way back in Reagan and Bush 41.
Speaker A:I was in multiple agencies, either congressional director or GC.
Speaker A:And so it's my chance to have fun again, get back in public policy.
Speaker A:So keep an eye on it.
Speaker A:I would encourage membership.
Speaker A:You know, again, there's actually a bronze level for individuals that's free and gets you on the list.
Speaker A:What we want to do is show Congress and the administration we represent all these companies, all these individuals.
Speaker A:These are a lot of voters, and we think, let's just bring sanity to data management in a way we can actually implement, and restore the US role of leadership.
Speaker A:But we've also got to kind of work with Europe and other countries to come to some common standard.
Speaker B:Yeah, no, I think it's a, I think it's a really important effort.
Speaker B:I mean, I think it's just the balance between innovation, creativity, commerce, commercial interest and building business, versus security, privacy, and compliance.
Speaker B:I mean, it's.
Speaker B:Anytime we get the pendulum too far to one end of those spectrums, it's just damaging to someone.
Speaker B:Either it's damaging to individuals, as businesses get out of control and we lose individual protections.
Speaker B:On the other hand, some of the previous Biden AI policy was going to lock us in, in ways I think would be very detrimental to innovation and development of this key resource and capability, which is rapidly starting to have national security implications and certainly leadership implications as far as our economy goes.
Speaker B:So.
Speaker A:Right.
Speaker B:Excited to see us, you know, reevaluating that and bringing together hopefully all parties with those perspectives that can give us that.
Speaker B:What I think good policy and good governance always gives us, which is a balance between those two ends of the spectrum.
Speaker A:Right.
Speaker B:One end is absolute rigid control, where I'm able to do nothing.
Speaker B:I'm perfectly safe, but I'm perfectly poor.
Speaker B:The other is the Wild West, wide open.
Speaker B:I have no rules.
Speaker B:I don't know how to even try to comply if I wanted to, because it's so unregulated.
Speaker B:So yeah, I think finding that middle path is always difficult, but I'm excited to see some new work in that area, and I really encourage folks to check it out.
Speaker B:I think NAIA.org is the site.
Speaker A:That's right, NAIA.org. Let me make two completely different points, but first on your very point about innovation.
Speaker A:So the EU AI Act applies to any AI tool or system released into or used in Europe.
Speaker A:So we're already bifurcating.
Speaker A:And the EU AI Act requires an assessment of your compliance with that act to have been conducted before your release of the technology.
Speaker A:Wow, that's a gate, with all these standards.
Speaker A:Right.
Speaker A:So innovation is directly at risk there, in terms of the timing for European regulators to make a determination of whether you're in compliance.
Speaker A:The immediate issue was self-certification, like CMMC for those DoD contractors.
Speaker A:Self-certification versus third party assessment.
Speaker A:And when the EU act passed, they said, well, there is not an established IV&V, independent verification and validation, set of consultants to determine whether you comply.
Speaker A:So we're going to permit self assessment for now.
Speaker A:That was key.
Speaker A:Otherwise they're going to order third party assessment.
Speaker A:So that just shows the complexity on a completely different issue of liability.
Speaker A:So these 23 state data privacy statutes, other than Washington's My Health My Data Act, do not permit a private cause of action for violation of those state data privacy laws.
Speaker A:You can't sue on them.
Speaker A:Wow, that's key.
Speaker A:The business community did a good job there; imagine if you could sue over whether the notice was complete or whether you honored my data rights.
Speaker A:So there is no private cause of action, except in Washington state.
Speaker A:But there is no shield from liability either.
Speaker A:And so, as a result, the plaintiffs' bar is suing under wiretapping statutes and video protection laws.
Speaker A:They're trying to end-run that prohibition on suing under the state data privacy laws by suing under the California Invasion of Privacy Act, wiretapping, and all those kinds of statutes.
Speaker A:And those cases are getting some traction.
Speaker A:On the AI side, there is no protection from liability for tort and negligence lawsuits; there's no liability protection.
Speaker A:So if you harm someone through the use of an AI system, it's a straight plaintiff's case.
Speaker A:So there are some issues here where we hope that Congress would, you know, grant preemption, define any claims or liabilities for lawsuits touching data to be actual damages only, maybe a requirement of a right to cure, where you give notice and a right to cure the harm before you can sue on it.
Speaker A:We think there's some key public policy issues I hope that we can, you know, impress upon those that are making those decisions.
Speaker A:But thank you for the chance to share.
Speaker B:Yeah, no, I think it's important.
Speaker B:I think as companies start to go down this journey and, and look towards adoption of these technologies, you know, all these, all these topics are critically important.
Speaker B:So thank you so much, Steve, for helping us cover them today.
Speaker B:And hopefully folks realize, if they take nothing else away, that it's about making the effort: showing preemptive preparation, doing your best, realizing it doesn't have to be perfect, but that you do have to be engaged and ask these questions before you launch down this journey too far.
Speaker B:So if there's any other takeaway, it's that one, and the fact that there are groups like NAIA doing good work to find that path forward on good governance, one that allows us to keep innovating as an economy and as inventors and companies and startups, but with clear guardrails, so we know how to do it and protect everybody as that innovation occurs; that's going to be critical.
Speaker B:So appreciate you doing that work with the group and encourage others to check it out.
Speaker B:And really just thank you for your time today and helping bring some clarity to these, these data questions that we all have.
Speaker A:Very, very, very helpful.
Speaker A:Thank you so much.
Speaker A:I really enjoyed it.
Speaker B:Yeah, thanks.
Speaker B:That's it for our episode today.
Speaker B:Thanks everybody for joining us on Fueling AI.
Speaker B:Look out for the next episode coming soon.
Speaker B:Take care.
Speaker B:See ya.
Speaker B:All right.