Generative artificial intelligence (AI) tools such as the much-talked-about ChatGPT (Chat Generative Pre-trained Transformer) are machine learning systems designed to produce content—images, video, audio, and text—from user input. These programs are built on deep learning architectures such as transformers and, for much image and video generation, generative adversarial networks (GANs).

While Big Tech says it envisions the technology, which has been in development since the 1960s, changing the way work and business are done, detractors raise serious concerns that it will be used to eliminate human jobs, spread misinformation, and manipulate its users.

The multi-billion-dollar question is: does AI’s arrival signal the end of human creativity?

Generative AI has come a long way

TechTarget produced a graphical timeline documenting the development and emergence of generative AI, dating back to the 1930s. In the 1960s, work began on the precursors to modern chatbots, and 2014 marked the introduction of generative adversarial networks (GANs), which made it possible to produce realistic synthetic content.

In 2021, the artificial intelligence company OpenAI launched DALL-E, an algorithm that generates images from text prompts. In November 2022, OpenAI released ChatGPT, a chatbot trained on massive amounts of text data from a variety of sources. Over 100 million people now use ChatGPT for tasks ranging from answering questions and creating content to summarizing books and improving their writing.
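For readers curious what using these models looks like in practice, here is a minimal sketch of calling OpenAI's text and image services from Python. It assumes the openai Python package (the pre-1.0 interface) and an API key stored in the OPENAI_API_KEY environment variable; the model names and prompts are purely illustrative.

```python
# Minimal sketch of calling OpenAI's text and image endpoints.
# Assumes the openai Python package (pre-1.0 interface) and an API key
# in the OPENAI_API_KEY environment variable; prompts are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# ChatGPT-style text generation: ask the model to summarize something.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Summarize the plot of Moby-Dick in three sentences."}
    ],
)
print(chat.choices[0].message.content)

# DALL-E-style image generation from a text prompt; returns a hosted image URL.
image = openai.Image.create(
    prompt="A watercolor painting of a lighthouse at dawn",
    n=1,
    size="512x512",
)
print(image.data[0].url)
```

The same pattern (send a prompt, get generated content back) underlies most of the consumer tools built on top of these models.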

For designers and content creators, there are numerous AI apps ranging from free to premium, designed to increase creativity and productivity. Below is a small sampling of what’s available:

  • Khroma – color palette generator
  • VanceAI – photo enhancement tool
  • Pictory – extract short form videos from longer content
  • Scalenut – AI content research and copywriting
  • Publer – AI social media post creation and scheduling

Ethical issues and legal questions surrounding AI

For all the potential of AI models that create art, music, and other content, it is worth remembering that these models produce output based on the work of others (often mined and scraped from the internet or supplied by developers).

But the use of AI technology also raises ethical concerns around privacy and surveillance, bias and discrimination, and the role of human judgment, according to Harvard professor Michael Sandel. The European think tank Bruegel argues that AI technology has the potential to manipulate human behavior. And Creative Commons, the global nonprofit behind the open-content licenses of the same name, currently holds that AI-generated works do not deserve automatic copyright protection, while acknowledging that the growing blend of human creativity and AI technology may force the issue to be revisited.

In February 2023, the United States Copyright Office issued a policy statement regarding a graphic novel that combined text written by a human author with AI-generated images. The agency ruled that the written work was copyrightable but concluded the AI-generated images did not qualify for such protection.

Educators worry AI will make it easier for students to cheat on written essays. A student at Princeton developed an app to estimate the likelihood that a paper was written with the technology. USA Today reported that the availability of AI technology could fuel the proliferation of deepfake pornography. Reports of a fake interview and a fake AI-generated song hitting streaming platforms made recent headlines. And things are just getting started.
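Detection tools like the one mentioned above generally rest on a simple statistical idea: text written by a language model tends to look unusually predictable to a language model. The sketch below illustrates that idea with a perplexity score from the open-source GPT-2 model via the Hugging Face transformers library; it is not the student's actual app, and the threshold is purely illustrative, since real detectors combine several signals.

```python
# Rough sketch of perplexity-based AI-text screening (illustrative only).
# Low perplexity under a language model is treated as a weak signal that
# the text may be machine-generated.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Compute GPT-2 perplexity of `text` (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # With labels equal to the input ids, the model returns the average
        # next-token cross-entropy loss; exponentiating gives perplexity.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

essay = "The Industrial Revolution fundamentally transformed the relationship between labor and capital."
score = perplexity(essay)
# Threshold chosen purely for illustration; real tools combine many signals.
print(f"perplexity={score:.1f} ->",
      "possibly AI-generated" if score < 30 else "likely human-written")
```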

Who’s monitoring the bots’ behavior?

Another critical issue with AI technology is its reliability and accuracy. Last month, on 60 Minutes, Lesley Stahl interviewed Microsoft president Brad Smith and others about the emergence of AI technology in the company's Bing search engine. When Stahl pointed out that Bing seemed plagued by unpredictable behavior and persistent inaccuracies, Smith countered that it was still a work in progress, one whose upside, the economic one in particular, made it worth the risks. But what if it falls into the hands of those who promote unfounded propaganda and misinformation? Is it still worth the risk then?

mass use of AI technology has immediate appeal to two types of people—the greedy and the lazy

AI is no substitute for human creativity

Generative AI algorithms are made by companies whose primary goal is to make a profit. So, naturally, mass use of AI technology has immediate appeal to two types of people—the greedy and the lazy. And, as is often the case, being first in the AI race means minimizing or ignoring whatever negative consequences come from its use; those issues are left for politicians, lawyers, and courts to sort out. I recall how some promoted cell phones in schools as tools for learning. Now, educators have realized the devices are the primary reason some students don't learn.

With AI, anyone can use an algorithm to create a blog post, write a post for social media, generate marketing copy, summarize a book, draft an essay, or create an image, and pass it off as a product of their own expertise, skill, and knowledge. But that doesn't make it so. It only means they found a quicker way to get it done without putting in the work. In other words, they cheated.

I’m not worried about AI eliminating human creativity, because that’s not possible. Keep in mind that AI algorithms are not human; they are creations and tools of human ingenuity and creativity. Humans have a conscience, which provides moral guidance. We have feelings and emotions that we can express verbally and visually. We draw on memories of our experiences and share them with others. And we can create things pulled out of the depths of our imaginations.

But AI’s emergence needs a watchful eye. It has already shown potential for good, but also an overwhelming potential for misuse. Nobody completely understands how it works or, more importantly, how to control it. Still, how Big Tech will use AI to its economic and consumer-control advantage, and how far humans will go in finding creatively devious ways to misuse it, remains to be seen.

And at what point does government intervention and regulation of it become inevitable?
