The New York Times has denied claims it 'hacked' OpenAI's systems using a 'hired gun' to create misleading evidence for its copyright lawsuit.
In December, The Times filed the suit against OpenAI and Microsoft, claiming they had illegally fed 'millions of articles' into Bing Chat and ChatGPT, with the result that the chatbots respond with content taken from those articles.
OpenAI hit back, saying that the suit was 'without merit' and accusing the publication of having 'intentionally manipulated prompts' to get the chatbot to 'regurgitate' their stories.
They accused the outlet of paying 'someone to hack OpenAI's products' and said the use of a 'hired gun' did not meet the Times' 'famously rigorous journalistic standards.'
Now, The Times has dismissed the hacking claim as 'irrelevant as it is false,' saying that OpenAI was 'grandstanding' in its request to dismiss the lawsuit.
In a court filing late on Monday, the Times accused OpenAI of using the 'attention-grabbing' term 'hacking' in an attempt to discredit the suit.
They said: 'OpenAI’s true grievance is not about how The Times conducted its investigation, but instead what that investigation exposed: that Defendants built their products by copying The Times’s content on an unprecedented scale — a fact that OpenAI does not, and cannot, dispute.'
In their previous rebuttal, OpenAI said: 'We regard The New York Times' lawsuit to be without merit.
'We had explained to The New York Times that, like any single source, their content didn't meaningfully contribute to the training of our existing models and also wouldn't be sufficiently impactful for future training.
'Their lawsuit on December 27—which we learned about by reading The New York Times—came as a surprise and disappointment to us.'
OpenAI said using publicly available internet materials, such as The New York Times' articles, is fair use and supported by legal precedents.
'Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness,' the tech company said.
'That being said, legal right is less important to us than being good citizens. We have led the AI industry in providing a simple opt-out for publishers (which The New York Times adopted in August 2023) to prevent our tools from accessing their sites.'
In the lawsuit, The Times argued it is not fair use and said, 'There is nothing "transformative" about using The Times's content without payment to create products that substitute for The Times and steal audiences away from it.'
The New York Times said that OpenAI and Microsoft's artificial intelligence programs rely on large language models that were developed by copying millions of its articles, with particular emphasis placed on The Times's content.
The lawsuit said, 'Defendants seek to free-ride on The Times's massive investment in its journalism by using it to build substitutive products without permission or payment.'
The New York Times cited examples of artificial intelligence 'hallucinations' or regurgitations, a phenomenon that occurs when chatbots generate false information and wrongly attribute it to a source; OpenAI claims those examples are flawed.
OpenAI said, 'Memorization is a rare failure of the learning process that we are continually making progress on, but it's more common when particular content appears more than once in training data, like if pieces of it appear on lots of different public websites.'
'The regurgitations The New York Times induced appear to be from years-old articles that have proliferated on multiple third-party websites. It seems they intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate.'
'Even when using such prompts, our models don't typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,' said OpenAI.
'Intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use.'
The outlet said the use of its work has been extremely lucrative for the companies. It said it had tried to negotiate with the tech giants to ensure it received a fair share while working with them to develop their technology, but the parties were unable to reach an agreement.
'Microsoft's deployment of Times-trained LLMs throughout its product line helped boost its market capitalization by a trillion dollars in the past year alone. And OpenAI's release of ChatGPT has driven its valuation to as high as $90 billion,' said the lawsuit.
OpenAI said they felt their discussions with The New York Times were progressing constructively up until their last communication on December 19.
'The negotiations focused on a high-value partnership around real-time display with attribution in ChatGPT, in which The New York Times would gain a new way to connect with their existing and new readers, and our users would gain access to their reporting,' the tech company said.
'Still, we are hopeful for a constructive partnership with The New York Times and respect its long history.'
OpenAI said, 'We look forward to continued collaboration with news organizations, helping elevate their ability to produce quality journalism by realizing the transformative potential of AI.'