The maker of ChatGPT is trying to curb the chatbot's reputation as a cheating aid with a new tool that lets teachers check whether a student or artificial intelligence wrote a piece of homework.
The new AI text classifier, launched by OpenAI on Tuesday, follows weeks of discussion in schools and colleges over concerns that ChatGPT’s ability to write anything on command could undermine academic integrity and hinder learning.
OpenAI cautions that the new tool, like others already available, isn't foolproof. The method for detecting AI-written text is "imperfect and sometimes wrong," said Jan Leike, head of OpenAI's alignment team, which works to make its systems safer.
"Because of that, it shouldn't be relied upon when making decisions," Leike said.
Teenagers and college students were among the millions of people who started experimenting with ChatGPT after it launched as a free app on OpenAI's website on November 30. And while many have found ways to use it in creative and harmless ways, the fact that it can easily answer take-home test questions and help with other assignments has sparked panic among some teachers.
As schools reopened for the new year, New York City, Los Angeles and other large public school districts began blocking the chatbot's use on school devices and networks.
The Seattle Public Schools District initially blocked ChatGPT from all school devices in December, but has opened access to teachers who want to use it as an instructional tool, said district spokesman Tim Robinson.
“We can’t ignore it,” Robinson said.
The district is also discussing whether to expand ChatGPT's use in classrooms, with teachers using it to train students to be better critical thinkers and students using the application as a "personal tutor" or to help generate fresh ideas while working on an assignment, Robinson said.
School districts around the country say the conversation around ChatGPT is evolving quickly.
"The first reaction was, 'OMG, how are we going to stop all the cheating that will happen with ChatGPT?'" said Devin Page, a technology specialist with the Calvert County Public School District in Maryland. Districts are now realizing that "this is the future" and that blocking it is not the solution, he said.
"I think we would be naive if we did not recognize the dangers of this tool, but we would also fail to serve our students if we banned them from using it, with all its power," said Page.
In a blog post on Tuesday, OpenAI emphasized the detection tool's limitations, but said that in addition to deterring plagiarism, it could help detect automated disinformation campaigns and other misuse of AI to impersonate people.
The longer the text, the better the tool is at detecting whether an AI or a human wrote it. Paste in any text, say a college admissions essay or a literary analysis of Ralph Ellison's "Invisible Man," and the tool labels it as "very unlikely, unlikely, unclear if it is, possibly, or likely" to be AI-generated.
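A minimal sketch of how such a five-way verdict could be derived from a single classifier probability. OpenAI has not published how its classifier maps scores to labels, so the function name and the threshold values below are illustrative assumptions only:

```python
def label_ai_likelihood(score: float) -> str:
    """Map a hypothetical classifier probability (0.0 to 1.0 that the
    text is AI-written) to one of the five verdict labels described
    in the article. The cutoff values are illustrative assumptions,
    not OpenAI's published thresholds."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0.0 and 1.0")
    if score < 0.10:
        return "very unlikely"
    elif score < 0.45:
        return "unlikely"
    elif score < 0.90:
        return "unclear if it is"
    elif score < 0.98:
        return "possibly"
    else:
        return "likely"
```

A banded mapping like this is one plausible reason longer inputs give better verdicts: more text gives the underlying model more evidence, pushing scores away from the wide "unclear" middle band toward the confident extremes.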
But much like ChatGPT itself, which was trained on digitized books, newspapers and online articles yet often confidently spews falsehoods or nonsense, the classifier's results are not easy to interpret.
"We don't really know what pattern it focuses on or how it works internally," Leike said. "There's not much we can say at this point about how the classifier actually works."
Higher education institutions around the world are also beginning to discuss the responsible use of AI technology. Sciences Po, one of France's most prestigious universities, banned its use last week and warned that anyone caught using ChatGPT and other AI tools to secretly produce written or oral work could be banned from Sciences Po and other institutions.
In response to the backlash, OpenAI said it has been working for several weeks to develop new guidelines to help teachers.
"As with many other technologies, a district may decide that it is inappropriate for use in their classrooms," said Lama Ahmad, an OpenAI policy researcher. "We're not really pushing them one way or the other. We want to give them the information they need to make the right decision."
It's an unusually public role for the research-focused San Francisco startup, now backed by billions of dollars in investment from partner Microsoft and facing growing interest from the public and governments.
French Digital Economy Minister Jean-Noël Barrot recently met with OpenAI executives in California, including CEO Sam Altman, and told an audience at the World Economic Forum in Davos, Switzerland, a week later that he was optimistic about the technology. But the minister, a former professor at the Massachusetts Institute of Technology and the French business school HEC Paris, also sees difficult ethical questions ahead.
"So if you're in a law faculty, there is room for concern because obviously ChatGPT, among other tools, can deliver exams that are relatively impressive," he said. "If you're in an economics faculty, then you're fine, because ChatGPT will struggle to find or deliver what is expected in a graduate-level economics program."
He said it will be increasingly important for users to understand the basics of how these systems work so they know what biases might exist.