A professor lost two years of ‘carefully structured academic work’ in ChatGPT because of a single setting change: ‘These tools were not developed with academic standards of reliability in mind’

If you’ve ever accidentally deleted a work file only to realise you never made any backups, prepare to pour one out with me. Marcel Bucher, a professor of plant sciences at the University of Cologne, has written a column for Nature entitled “When two years of academic work vanished with a single click”. Ouch.

Bucher explains that he’d been using OpenAI’s ChatGPT Plus service as an assistant for a number of tasks, including writing emails, structuring grant applications, revising publications, and analysing student responses to exams (via Futurism).

“It was fast and flexible, and I found it reliable in a specific sense: it was always available, remembered the context of ongoing conversations and allowed me to retrieve and refine previous drafts,” says Bucher.

“I was well aware that large language models such as those that power ChatGPT can produce seemingly confident but sometimes incorrect statements, so I never equated its reliability with factual accuracy, but instead relied on the continuity and apparent stability of the workspace.”

However, in August of last year, Bucher temporarily disabled the “data consent” option because, in his own words: “I wanted to see whether I would still have access to all of the model’s functions if I did not provide OpenAI with my data.”

“At that moment, all of my chats were permanently deleted and the project folders were emptied; two years of carefully structured academic work disappeared”, Bucher says. “No warning appeared. There was no undo option. Just a blank page.”

Bucher admits that he kept “partial copies of some conversations and materials”, but otherwise, two years’ worth of his academic work had vanished into the ether. The professor repeatedly reached out to OpenAI’s support service, but was eventually told that the data was permanently lost and could not be recovered.

“This was not a case of losing random notes or idle chats,” Bucher opines. “This was intellectual scaffolding that had been built up over a two-year period.

“We are increasingly being encouraged to integrate generative AI into research and teaching. Individuals use it for writing, planning and teaching; universities are experimenting with embedding it into curricula.

“However, my case reveals a fundamental weakness: these tools were not developed with academic standards of reliability and accountability in mind.”

Bucher continues: “If a single click can irrevocably delete years of work, ChatGPT cannot, in my opinion and on the basis of my experience, be considered completely safe for professional use. As a paying subscriber (€20 per month, or US$23), I assumed basic protective measures would be in place, including a warning about irreversible deletion, a recovery option, albeit time-limited, and backups or redundancy.”

Apparently not. Still, while OpenAI has previously been in the headlines for privacy concerns, it’s somewhat refreshing to hear a story of its supposedly robust modern privacy policies seemingly working as intended. “Ultimately, OpenAI fulfilled what they saw as a commitment to my privacy as a user by deleting my information the second I asked them to”, says Bucher. Quite.