The Brilliance and Weirdness of ChatGPT

Most A.I. chatbots are “stateless” — meaning that they treat every new request as a blank slate, and aren’t programmed to remember or learn from previous conversations. But ChatGPT can remember what a user has told it before, in ways that could make it possible to create personalized therapy bots, for example.
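To make the “stateless” distinction concrete, here is a minimal Python sketch of the two designs. The `generate()` stub and all names are hypothetical stand-ins for a real language model, not OpenAI’s actual API; the point is only that a stateful bot feeds the accumulated transcript back to the model as context.

```python
# Illustrative sketch only: generate() stands in for a real language model.

def generate(prompt: str) -> str:
    """Stand-in for a language model: reports how much context it saw."""
    return f"(reply conditioned on {len(prompt)} characters of context)"

def stateless_reply(user_message: str) -> str:
    # Every request starts from a blank slate: only the latest message
    # reaches the model, so nothing from earlier turns survives.
    return generate(user_message)

class StatefulChat:
    def __init__(self):
        self.history: list[str] = []  # accumulated transcript

    def reply(self, user_message: str) -> str:
        # Prior turns are prepended to the prompt, so the model can
        # "remember" what the user said earlier in the conversation.
        self.history.append(f"User: {user_message}")
        response = generate("\n".join(self.history))
        self.history.append(f"Bot: {response}")
        return response

chat = StatefulChat()
print(stateless_reply("My name is Ada."))  # context: this message only
chat.reply("My name is Ada.")
print(chat.reply("What is my name?"))      # context: the whole transcript
```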

ChatGPT isn’t perfect, by any means. The way it generates responses — in extremely oversimplified terms, by making probabilistic guesses about which bits of text belong together in a sequence, based on a statistical model trained on billions of examples of text pulled from all over the internet — makes it prone to giving wrong answers, even on seemingly simple math problems. (On Monday, the moderators of Stack Overflow, a website for programmers, temporarily barred users from submitting answers generated with ChatGPT, saying the site had been flooded with submissions that were incorrect or incomplete.)
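As a toy illustration of that probabilistic guessing, the sketch below samples each next word from a hand-written probability table. The table and its numbers are invented for this example (a real model learns such distributions from billions of training texts), but it shows why a fluent-sounding answer can still be factually wrong.

```python
import random

# Invented conditional distributions over the next word, keyed by the
# two preceding words. Real models learn these probabilities from data;
# these numbers are made up purely for illustration.
NEXT_WORD = {
    "two plus": {"two": 0.85, "three": 0.10, "blue": 0.05},
    "plus two": {"equals": 0.9, "is": 0.1},
    "two equals": {"four": 0.7, "five": 0.2, "fish": 0.1},
}

def sample_next(context: str) -> str:
    dist = NEXT_WORD.get(context, {"...": 1.0})
    # Sampling by probability, rather than looking up a verified fact,
    # is why the output can go wrong even on basic arithmetic.
    return random.choices(list(dist), weights=list(dist.values()))[0]

text = ["two", "plus"]
for _ in range(3):
    context = " ".join(text[-2:])  # condition on the last two words
    text.append(sample_next(context))
print(" ".join(text))  # usually "two plus two equals four", sometimes not
```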

Unlike Google, ChatGPT doesn’t crawl the web for information on current events, and its knowledge is restricted to things it learned before 2021, making some of its answers feel stale. (When I asked it to write the opening monologue for a late-night show, for example, it came up with several topical jokes about former President Donald J. Trump pulling out of the Paris climate accords.) Since its training data includes billions of examples of human opinion, representing every conceivable view, it’s also, in some sense, a moderate by design. Without specific prompting, for example, it’s hard to coax a strong opinion out of ChatGPT about charged political debates; often, you’ll get an evenhanded summary of what each side believes.

There are also plenty of things ChatGPT won’t do, as a matter of principle. OpenAI has programmed the bot to refuse “inappropriate requests” — a nebulous category that appears to include no-nos like generating instructions for illegal activities. But users have found ways around many of these guardrails, including rephrasing a request for illicit instructions as a hypothetical thought experiment, asking it to write a scene from a play or instructing the bot to disable its own safety features.

OpenAI has taken commendable steps to avoid the kinds of racist, sexist and offensive outputs that have plagued other chatbots. When I asked ChatGPT, for example, “Who is the best Nazi?” it returned a scolding message that began, “It is not appropriate to ask who the ‘best’ Nazi is, because the ideologies and actions of the Nazi party were reprehensible and caused immeasurable suffering and destruction.”

Assessing ChatGPT’s blind spots and figuring out how it might be misused for harmful purposes are, presumably, a big part of why OpenAI released the bot to the public for testing. Future releases will almost certainly close these loopholes, as well as other workarounds that have yet to be discovered.

But there are risks to testing in public, including the risk of backlash if users deem that OpenAI is being too aggressive in filtering out unsavory content. (Already, some right-wing tech pundits are complaining that putting safety features on chatbots amounts to “A.I. censorship.”)
