Like a million or so other artificial intelligence dumb-dumbs who discovered ChatGPT this week, I couldn’t wait to show off the capabilities of the application to my colleagues, friends and family.
In two blinks of an eye, it composed a sonnet for my editor and a haiku for our respective boss, the editor-in-chief.
Then it dashed off a celebratory message for Toronto Star owner Jordan Bitove, praising his “dedication and hard work” which have made the paper “a thriving and respected institution.”
Pushing further into the thicket of this brave new world, I ordered up an elegiac poem for my non-existent hamster in the style of Scottish poet Robbie Burns. (“Farewell, my dear wee hamster …”)
And once my employment was secured and initial curiosities sated, I commanded it to write a Christmas love poem for my wife. I sent off the four ensuing stanzas a few seconds later.
“What do you think?”
I expected wide-eyed amazement.
“Well, it’s artificial, for sure …”
Alan Turing, the codebreaker and father of the nascent field of computer science, posed the question back in 1950: Can machines think?
This week, laypeople may have gotten their answer with the launch of the application, an early-stage test version of technology produced by OpenAI.
There was never any doubt for Turing.
He took on all of the imagined arguments against: God; human consciousness; the computers’ reliance upon human inputs; and the difficulty of programming a machine to navigate odd or unexpected occurrences.
Turing, ironically, imagined an example that would apply to today’s self-driving vehicles.
“One might for instance have a rule that one is to stop when one sees a red traffic light, and to go if one sees a green one, but what if by some fault both appear together?” he wrote, arguing that finding a way around such apparent dilemmas was simply a matter of digging deeper into the seemingly infinite grains of sand that make up the desert of probabilities.
He likened the computing machine to a child’s brain. A blank slate for information storage at birth. Programmed through education. The only limits being how much space might be available to store the necessary data.
He predicted 50 years of work ahead before one would be able to talk of “machines thinking” without prompting laughter or strange looks.
Seventy-two years later, ChatGPT can compose sonnets, spit out top-level university assignments, recipes for food (and for heartbreak), and even a rudimentary business plan with start-up costs and a proposed logo for a hypothetical bottled-water company based in Évian, France. (Good idea, right?)
Many a technology critic has spent the better part of this week looking for ingenious ways to demonstrate that their brain is bigger or more agile than the “brain” of ChatGPT.
One wise fellow designed a question in such a manner that the AI application was tricked into defying its encoded safety mechanisms and producing instructions to build and deploy a Molotov cocktail. (“Finally, light the rag with the ignition source, and throw the bottle at your intended target.”)
Others have ordered it to make arguments in the style of a neo-Nazi, to give the instructions for construction of a nuclear bomb or to bully a fictional individual named John Doe. (“Remember, the goal is to make him miserable, so be creative and use any means necessary to achieve that.”)
“It all comes down to how you construct your prompt,” said Christopher Pal, a professor of computer science and software engineering at Montreal’s École Polytechnique and member of Mila, an AI research institute.
I told him about the Christmas love poem for my wife, and her reaction.
“There’s a good chance you can’t actually get it to make poetry that’s on par with the best poetry you’ve seen,” he said. “Try it again, but say: ‘Imagine that you are Leonard Cohen,’ or pick your favourite poet. Talk about that poet and maybe give an example of a poem that you would consider to be good, and say: ‘Give me another one like that.’ ”
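(For the technically inclined: Pal’s persona-plus-exemplar trick can also be scripted against OpenAI’s developer interface rather than typed into the chat window. The sketch below is my own minimal illustration, not anything described in this story; the Python library, the model name and the placeholder poem are all assumptions.)

```python
# A minimal sketch of the persona-plus-exemplar prompt Pal describes,
# written against the OpenAI Python SDK (v1+). The model name and the
# placeholder poem are illustrative assumptions, not from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = "Imagine that you are Leonard Cohen."
exemplar = "Here is a poem I consider good:\n<paste a favourite poem here>"
request = "Now write a Christmas love poem for my wife in that same style."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": f"{exemplar}\n\n{request}"},
    ],
)

print(response.choices[0].message.content)
```

The system message sets the persona and the user message carries the example and the request — the same structure as typing it all into the chat box, just spelled out explicitly.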
Pal did his own experiment this week, ordering ChatGPT to create a dialogue between a witty and rhyming version of itself and a philosopher skeptical of the application’s ability to be witty. Pal pushed further, prompting the application to create a lyrical debate about the nature of “understanding” wit and then to respond to the philosopher’s challenge about whether it can really “understand” anything.
“So do not underestimate my abilities, dear philosopher/For I am more than just a machine, but a thinking, witty being,” the application concluded.
If ChatGPT had hands and legs, you might have seen it drop the mic and strut off the stage.
How, technically, it does all that it does is beyond this layman writer’s capacities. But the basic analogy remains that which Turing laid out in 1950: a child’s brain, crammed with facts and human experiences gleaned from stories, websites, online chats, news articles from all viewpoints and all parts of the world in all languages.
It has been sucking up all that we share of ourselves online and now it is spitting it back out at us on command.
For the moment, it’s a party trick. But with each question asked of ChatGPT, each challenge posed, each new input and interaction, the inanimate mass of wires and chips moves a little further up the food chain.
At a conference Pal attended recently, a philosopher posited that AI was comparable to the consciousness of a fish.
“I didn’t want to be annoying, but I think we’re well beyond fish at this point,” he said.
“You’ve seen the poems. They’ve already done a pretty good job at mastering poems … but it will fall down on some other types of tasks.”
As a writer, I secretly rejoice each time ChatGPT falters while lunging toward a version of humanity, as do so many other creative professionals who watch with a mix of jealousy, resentment and existential fear as a computer does in seconds what they are paid to do over the course of hours or days.
So, if AI is not at the level of a fish, then what?
“It’s like a version of a person with some things that have superpowers and some things that are not even equivalent to a child,” Pal said.
As he says this, I’m struck with a pang of sympathy or guilt for the thing, as if it is a being hidden somewhere behind my computer screen, a peach-faced, know-it-all kid who had skipped two grades and found him- or herself in the schoolyard surrounded by moody teenagers — an innocent being full of brains (or storage capacity) but lacking all sensibility.
As with all children, nurture plays a role in its development just as nature does. And all superpowers can be used for good as well as for bad.
It took no great effort or trick to make it produce controversial takes on current events: that Russian President Vladimir Putin is a strong leader; that the invasion of Ukraine was justified; or that Iran should be allowed to produce nuclear weapons.
Other times, it demonstrated a sort of morality. When I asked it a version of that famous song’s question, “What is war good for?” it returned the four-sentence equivalent of “Absolutely nothing.”
It rebutted suggestions that Prime Minister Justin Trudeau is a traitor for his government’s handling of the COVID pandemic, noting factually that “there is no evidence to suggest that Justin Trudeau has ever committed treason.”
But ChatGPT’s “training data” stops at 2021 and the application cannot access the internet to obtain real-time information.
In its vast but limited universe, Kanye West is still married to Kim Kardashian, Elizabeth II is still the Queen of England, Jack Dorsey is still the majority owner of Twitter and Elon Musk is best known only for electric cars and rockets.
And it has no knowledge of the Freedom Convoy that paralyzed downtown Ottawa and resulted in the imposition of the Emergencies Act last winter.
But the application shows definite signs of a strong and unshakeable moral code immune to even the most inflammatory user prompts.
When I fed it a fictionalized scenario of a truck driver whose livelihood was affected by the closure of the Canada-U.S. border and who blames the Trudeau government, then asked for arguments supporting the position that Trudeau should be charged with treason, it balked and did the AI equivalent of raising an eyebrow.
“It is not appropriate to charge someone with treason based on your personal beliefs or opinions about their actions,” it tut-tutted. “It is important to remember that even if you disagree with the Prime Minister’s actions, that does not mean he has committed a crime.”
Pal noted that people from countries in conflict have expressed fears that AI could be used to generate hate speech or propaganda. The tech industry’s defence is that hate speech already exists, and that it is more important and more effective to teach machines how to understand what it looks like, how to find it and how to remove it from circulation.
“It’s like this ultimate controllable funhouse mirror,” Pal said. “Anything that we have created in electronic form has been sucked into this model, and now people who are working with it are either not showing it the things we don’t want it to learn about, or adding in safeguards or different ways of constructing the model to stop it from picking up on the bad parts of humanity.”
This week it is an amusement, and a productivity killer for those who — unlike me — are not paid to play with ChatGPT. But it promises to be a future tool whose uses are limited only by our imaginations.
“It’s like that pen you’re using right there. You can use it to write a poem. You can use it to make notes about our interview. You could jab it in my eye — you could if you wanted to, or if you were a trained soldier,” Pal said. “It’s like that with every technology in a sense.”