
Teaching Computers to Write: OpenAI.com

The Economist, August 8th 2020, pp. 63-64. Artificial Intelligence | “Bit-lit”: “A new language-generating AI can be eerily human-like, for better and for worse”


Figure source: ictineducation.org



The following was generated by Generative Pre-Trained Transformer 3 (GPT-3):

“The SEC said, ‘Musk,/your tweets are a blight./They really could cost you your job,/if you don’t stop/all this tweeting at night.’” …

The poem goes on to a sensible ending: “…/But that doesn’t give you the right to be a bore!”


GPT-3 is from San Francisco-based OpenAI, which Elon Musk helped found. “It represents the latest advance in one of the most studied areas of AI: giving computers the ability to generate sophisticated, human-like text.”


GPT-3 is a statistical model with 175bn parameters, built in part by reading a huge compendium of text. It maps the probability of “...which words follow other words.” How often, for example, does “red” follow “rose”? It applies similar logic to sentences and paragraphs.
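
To make the “which words follow other words” idea concrete, here is a minimal, hypothetical sketch of a toy bigram model in Python. It simply counts adjacent word pairs in a tiny made-up corpus; GPT-3 itself is a vastly larger neural network with 175bn learned parameters, but the underlying intuition of estimating next-word probabilities is similar. The corpus string and function names below are illustrative and are not from OpenAI.

```python
# Toy sketch (not OpenAI's code): estimate how often one word follows another
# by counting adjacent word pairs (bigrams) in a tiny illustrative corpus.
from collections import Counter, defaultdict

corpus = "a rose red and fragrant a rose grew a rose red again".split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probability(prev: str, nxt: str) -> float:
    """Estimated probability that `nxt` follows `prev` in this corpus."""
    total = sum(follows[prev].values())
    return follows[prev][nxt] / total if total else 0.0

print(next_word_probability("rose", "red"))   # ~0.67: "red" follows "rose" 2 of 3 times
print(follows["rose"].most_common(1))         # [('red', 2)]: most likely next word
```

Real language models such as GPT-3 learn far richer statistics over whole sentences and paragraphs rather than just adjacent word pairs, which is what lets them produce longer passages like the poem quoted above.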


The latest version can generate news articles that humans correctly identify as AI-generated only about half the time, the “equivalent to guessing at random.” Expert reviewers of GPT-3 confirm its ability to write short stories and even to convert “rude messages into politer ones”…

Not surprisingly, as accomplished as GPT-3 is, it is still “early days.” “Sometimes it seems to regurgitate snippets of memorized text rather than generating fresh text from scratch.” It is clear that word-matching based on probability is not the same as a “coherent understanding of the world.” One example given: “It takes two rainbows to jump from Hawaii to 17.” GPT-3 can especially be “found out” by asking questions that require understanding. When pressed with such questions, much like some humans, it eventually replies, “I’m sorry, but I don’t have time to explain the underlying reason why not.” And, like some humans we know, GPT-3 has no filter, so prompts like ‘black’, ‘Jew’, ‘women’ and ‘gay’ often generate “racism, anti-Semitism, misogyny and homophobia.”

Much like AI facial-recognition systems, language models are only as good as the training set used for machine learning. AI facial recognition is known to work best on white faces, so IBM has recently boosted the diversity of its set of training faces in an effort to improve accuracy. Reportedly, “OpenAI itself was founded to examine ways to mitigate the risk posed by AI systems.”

For more detail, read the article and see https://www.openai.com
