Key Considerations in AI-Related Contracts | Daily IP English No. 493

Academic | 2024-08-22 07:01 | Hebei

Issue No. 493

Today's article outlines the key elements companies should consider when reviewing or preparing contracts related to artificial intelligence (AI) products and services, including due diligence, definitions of key terms, inputs and outputs, legal compliance, intellectual property, AI and workforce considerations, and liability. It provides a comprehensive framework for drafting contracts in the AI field, helping businesses and legal professionals better understand and respond to the legal challenges posed by AI technology. Recommended reading.
I am 大岭先生 (Mr. Daling), and this is the 493rd day I have shared IP English with you. I look forward to your comments. If today's article was helpful, please feel free to share it.

Key Considerations in AI-Related Contracts

August 19, 2024 | Husch Blackwell LLP | Erik Dullea, Shelby Dolen, Owen Davis & David Stauss
Keypoint: Companies onboarding AI products and services need to understand the potential risks associated with these products and implement contractual provisions to manage them.
With the rapid emergence of artificial intelligence (AI) products and services, companies using these products and services need to negotiate contractual provisions that adequately address the unique issues they present. However, because this area is new and rapidly evolving, companies may not appreciate that the use of AI can raise unique contractual issues. Even if companies do realize it, they may not know what those provisions should state. In addition, many AI-related contractual terms are complicated and confusing, often containing new terms and definitions that companies are unfamiliar with.
In the article below, we identify key considerations when reviewing or preparing AI-related contracts. Although there may be other considerations depending on the specific use case, those discussed here should give the reader a useful starting point for addressing this issue.
Due Diligence
As a starting point, companies onboarding a new AI-related vendor should conduct a risk assessment of the vendor and its product/service. The risk assessment should identify information such as the specific use case and business reason for using the product, the product/service’s inputs and outputs, whether the product is being used for a high-risk processing activity, and the vendor’s access to company data. If the vendor insists on using its contractual terms, the analysis also should identify whether those terms are negotiable and, if not, whether the company is willing to assume the risk of whatever terms are presented. If the vendor is a start-up, will the company be left holding the bag if the vendor closes shop in the face of third-party litigation or regulatory investigations? 
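For teams that want to operationalize this step, the sketch below shows one way the risk-assessment information described above could be captured as a structured record. It is only an illustration under assumed internal processes; the class, field names, and escalation rule are hypothetical and not part of the source article.

```python
# Minimal sketch of a vendor risk-assessment record (all names hypothetical).
from dataclasses import dataclass, field


@dataclass
class AIVendorRiskAssessment:
    vendor_name: str
    product: str
    business_use_case: str                 # why the company needs the product
    inputs: list = field(default_factory=list)    # data going into the AI
    outputs: list = field(default_factory=list)   # what the AI produces
    high_risk_processing: bool = False     # e.g., hiring, lending, healthcare decisions
    vendor_data_access: str = "none"       # scope of vendor access to company data
    terms_negotiable: bool = True          # can the vendor's standard terms be changed?
    vendor_is_startup: bool = False        # flags solvency/litigation exposure concerns

    def requires_escalation(self) -> bool:
        """Flag assessments that warrant legal review before signing."""
        return self.high_risk_processing or (
            not self.terms_negotiable and self.vendor_data_access != "none"
        )


if __name__ == "__main__":
    assessment = AIVendorRiskAssessment(
        vendor_name="ExampleAI Inc.",          # hypothetical vendor
        product="resume screening service",
        business_use_case="shortlist applicants for recruiter review",
        inputs=["applicant resumes"],
        outputs=["ranked candidate list"],
        high_risk_processing=True,             # employment decisions are high risk
        vendor_data_access="applicant PII",
        terms_negotiable=False,
    )
    print(assessment.requires_escalation())    # True -> route to legal before onboarding
```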
Ultimately, the company’s use case for the AI product/service will dictate what contractual terms are the most significant. For example, if the company will use a vendor to create marketing content, then intellectual property considerations will prevail. If the product will be used to review resumes, then bias considerations will prevail. If the vendor will analyze the personal information of employees or customers, then privacy considerations will prevail.
Definitions of Key Terms
Although specific terms will depend on the exact use case, terms that typically require definitions are artificial intelligence (or a similar term like AI technology), generative AI, inputs, and outputs. Defining artificial intelligence is particularly important given that it establishes the scope of all obligations. The prevailing definition from the Organisation for Economic Co-operation and Development (which is used, for example, in the Colorado AI Act) defines AI broadly. Generative AI is a subset of that broad definition where AI is used to generate content. 
“Third party offerings” is another common and significant term if the vendor’s product/service will be used in combination with a different vendor’s product/service. This is a common occurrence as many AI products/services are built on another vendor’s product/service such as OpenAI. The underlying vendor’s terms may alter or nullify any warranties or indemnification provisions and, therefore, require close review. 
Ultimately, understanding exactly what the product/service does (and does not) do and aligning the definitions is critical.
Inputs and Outputs
In addition to defining the key terms, the contract should address obligations and rights regarding inputs (i.e., what information goes into the AI) and outputs (i.e., what information comes out of the AI).
With respect to inputs, companies need to consider what data will be provided, whether it will be secured by the vendor, and whether privacy or business proprietary considerations come into play. For example, if the company will input customer data, the contract should address privacy considerations and a data processing agreement may be appropriate. If the company will input business proprietary information, the contract should require the vendor to keep that information confidential and use it only for the company’s business purposes. 
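As an illustration of the input-side privacy point, the sketch below shows how a company might pseudonymize direct identifiers and scrub free-text fields before customer records are submitted as inputs to a vendor's service. It assumes a company-held secret key and a hypothetical payload format; the actual vendor submission call is omitted, and nothing here is prescribed by the article.

```python
# Minimal sketch: pre-process customer records before they become AI "inputs".
import hashlib
import hmac
import re

SECRET_KEY = b"company-held-secret"  # kept by the company, never shared with the vendor


def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash so records stay linkable
    internally without exposing the raw identifier to the vendor."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]


EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")


def scrub_free_text(text: str) -> str:
    """Mask e-mail addresses embedded in free-text fields before submission."""
    return EMAIL_RE.sub("[email removed]", text)


def prepare_input(record: dict) -> dict:
    """Build the payload that would actually be sent to the AI vendor (hypothetical format)."""
    return {
        "customer_ref": pseudonymize(record["email"]),
        "notes": scrub_free_text(record["support_notes"]),
    }


if __name__ == "__main__":
    raw = {"email": "jane@example.com", "support_notes": "Contact jane@example.com re: renewal"}
    print(prepare_input(raw))
    # e.g. {'customer_ref': '9a3f...', 'notes': 'Contact [email removed] re: renewal'}
```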
The contract also should address how the vendor can use and share the data, including whether it can use the data to improve or train its product. For example, Salesforce is currently running an ad campaign with Matthew McConaughey called “The Great Data Heist.” The premise is that the AI vendor market is currently the AI Wild West where the “bad guys” only want customer data and “will do anything to get it.” Salesforce ends the commercial by stating: “Salesforce AI never steals or shares your customer data.” The fact that Salesforce is willing to spend tens of millions of dollars on this ad campaign should be a signal that this is an important topic to address with AI vendors.
Relatedly, depending on the scope of the data shared with vendors, companies should consider adding data breach notification and defense/indemnity clauses if they are not already addressed in the contract or data processing agreement. It is not difficult to imagine that these AI products and services will be a new threat vector for hackers.
For outputs, the contract should address which contracting entity owns the outputs. For example, Microsoft recently updated its consumer Services Agreement to, according to its FAQs, expand “the definition of ‘Your Content’ to include content that is generated by your use of our AI services.” In other words, Microsoft recognizes that the user – and not Microsoft – owns the output.
Legal Compliance
With the emergence of state and international laws regulating the use of AI, such as the EU AI Act and the Colorado AI Act, companies that engage in activities subject to those laws will need to add contractual obligations that address the laws' requirements. Similarly, companies that are federal contractors need to monitor Presidential Executive Orders, agency regulations, and federal procurement guidelines to confirm that their use of the contemplated AI technologies will comply with the requirements under their federal contracts.
At a minimum, any contract with an AI developer should obviously require the developer to comply with applicable laws. However, depending on the use case, additional provisions may be appropriate. For example, the Colorado AI Act (effective February 1, 2026) requires deployers (entities that use an AI product/service) to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.” Yet, deployers may need to rely on developers to test and validate that the AI product/service does not create unlawful discrimination. In that event, the deployer should contractually require the developer to represent and warrant that the AI product/service does not create unlawful bias and link that representation to the defense and indemnity provisions.
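To make the bias-testing point concrete, the sketch below computes the adverse-impact ratio associated with the EEOC "four-fifths rule," one common statistic a deployer might contractually require a developer to report for a screening tool. The 0.8 threshold, group labels, and numbers are illustrative assumptions, not a compliance standard set by the Colorado AI Act or the article.

```python
# Minimal sketch of the "four-fifths rule" adverse-impact ratio (illustrative only).

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that the tool selects."""
    return selected / applicants if applicants else 0.0


def adverse_impact_ratio(rates: dict) -> tuple:
    """Compare the lowest selection rate to the highest; a ratio below 0.8 is
    commonly treated as evidence of adverse impact warranting further validation."""
    highest = max(rates, key=rates.get)
    lowest = min(rates, key=rates.get)
    return rates[lowest] / rates[highest], lowest, highest


if __name__ == "__main__":
    rates = {
        "group_a": selection_rate(selected=48, applicants=100),
        "group_b": selection_rate(selected=30, applicants=100),
    }
    ratio, low, high = adverse_impact_ratio(rates)
    print(f"{low} vs {high}: ratio = {ratio:.2f}")  # 0.62 < 0.8 -> flag for review
```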
The Colorado AI Act also contains provisions requiring deployers to, among other things, create impact assessments, provide notices, and allow for appeals under certain circumstances. Companies contracting with AI developers should consider whether the developer should assist the company with complying with these obligations or even be entirely responsible for compliance. Indeed, there is nothing in the Colorado AI Act that prohibits a deployer from contractually shifting obligations to developers.
Finally, companies deploying AI products in the United States should address the likelihood (if not certainty) that new laws will go into effect during the term of the contract. These new laws could include not only additional requirements (e.g., providing consumers a right to opt out) but could also regulate more types of AI uses. For example, the Colorado AI Act's provisions primarily apply to "high-risk artificial intelligence systems," which are AI systems that, when deployed, make or are a substantial factor in making a consequential decision. The definition of "consequential decision" includes activities such as financial or lending services, insurance, healthcare services, and housing (among others). Although a current use case may not result in a "consequential decision," that does not mean that another state will not enact an AI law that expands the scope.
Intellectual Property
AI vendor contracts also should address intellectual property (IP) considerations. Every contract should address IP ownership between the parties, including ownership of the AI, all inputs and outputs, and any training data. If a company provides inputs or prompts to the AI product/service, the company will likely want to maintain its ownership rights over those inputs or prompts. Additionally, if a company's inputs or prompts are used by the AI product/service to create any output, the company will likely want ownership rights over that output, including any work product or deliverable created from it.
Another ownership consideration is whether the AI vendor's product or service relies on a third party's technology. As noted, many vendors in this space currently rely on third-party technology for their own AI models. Companies should require vendors to represent and warrant that they have the right to use the third party's technology under a license and will comply with all use restrictions under that license. Any representation and warranty should also make clear that the vendor has full power and authority to grant the rights under the contract to the company.
Finally, for all AI products/services, vendors should also represent and warrant that the products/services will not misappropriate, violate, or infringe any third-party IP rights. Companies should consider indemnification protection for any claims that result from the misappropriation, violation, or infringement of any third-party IP rights and corresponding liability for any indemnification obligation. 
AI and Workforce Considerations 
Depending on the use case, companies also should consider how the AI product/service will be viewed by the workforce and whether internal controls are necessary. Labor groups and employee advocates have expressed concerns about the rapid spread of AI systems within businesses. AI technologies are used to recruit employees, determine performance ratings, identify candidates for redundancy, allocate work, and monitor the productivity of employees working remotely from home.
Effective AI systems require more than just strict data governance. A key factor for obtaining workforce buy-in with AI is to take a people-centric view in the design and implementation of the AI technologies, where workers feel empowered by AI, and the AI helps humans to do a better job and be more satisfied. 
Liability 
Finally, liability-shifting terms are pivotal in any AI vendor contract, as AI regulations emerge and increased public awareness of AI's impacts could lead to litigation over AI services and products. As noted, new AI regulations impose certain obligations on AI developers (vendors) and AI deployers (the vendors' customers), but in the parties' contract those and other obligations may be shared or shifted to one party. Companies will therefore need to scrutinize the vendor's warranties and disclaimers, and whether (or in what circumstances) the vendor will indemnify the company if the AI service/product does not comply with the law.
The impetus to carefully analyze warranty, disclaimer, and indemnity provisions is also triggered by the risk of private litigation. In the employment context, the Equal Employment Opportunity Commission (EEOC) has made clear that companies using AI products/services for employment decisions can still be liable under employment discrimination laws even where the product/service is fully developed or administered by a third-party vendor. Conversely, a California court recently held that a human resources vendor using AI to screen job applicants for the vendor's customers could be liable for the screening tool's discriminatory impact on applicants. In reaction to this decision and the similar litigation that is sure to arise, vendors will likely aim to place liability for the discriminatory effects of their AI products/services on the companies they contract with.

-End-

Source: https://www.bytebacklaw.com/2024/08/key-considerations-in-ai-related-contracts/

Each article is copyrighted by its original authors. This content is for informational purposes only and does not constitute legal advice.
