Anthropic commits that no customer data will be used for AI training.
Updates to the commercial terms of service of Claude developer Anthropic reveal that the generative artificial intelligence (AI) startup has committed not to use client data for large language model (LLM) training. The company also promises to assist users with copyright disputes.

Anthropic, founded by former OpenAI researchers, clarified its position by revising its commercial terms of service. The changes take effect in January 2024 and stipulate that Anthropic's commercial customers will own any outputs produced using its AI models. Under the terms, the company "does not anticipate obtaining any rights in Customer Content."

In the second half of 2023, OpenAI, Microsoft and Google promised to support customers facing legal problems arising from copyright claims related to the use of their technology. In a similar vein, Anthropic has pledged in the latest version of its commercial terms of service to shield customers from copyright infringement claims arising from authorized uses of its products or services. Anthropic stated:

"Customers will now enjoy increased protection and peace of mind as they build with Claude, as well as a more streamlined API that is easier to use."

As part of its legal protection offer, Anthropic promised to pay for approved settlements or judgments resulting from infringements by its AI. The terms cover customers of the Claude API and users of Claude on Bedrock, Amazon's generative AI development suite.
According to the terms, Anthropic does not intend to acquire any rights to customer content, nor does either party grant the other, directly or indirectly, any rights to its content or intellectual property.

Advanced LLMs such as Anthropic's Claude, OpenAI's GPT-4 and Meta's Llama 2 are trained on large amounts of textual data. Diverse and comprehensive training data is essential to an LLM's effectiveness because it allows the model to learn from a wide variety of language patterns, styles and new information, improving accuracy and contextual awareness.

In October 2023, Universal Music Group filed a lawsuit against Anthropic for infringing copyright on "vast amounts of copyrighted works – including the lyrics to myriad musical compositions" that the publishers own or manage. Meanwhile, nonfiction writer Julian Sancton is suing OpenAI and Microsoft for allegedly using his work without permission to train AI models, including ChatGPT.