As an AI-certified professional and a lifelong sci-fi reader, I’ll be the first to tell you: what we’re using today isn’t “True AI.” We aren’t dealing with Data from Star Trek or HAL 9000. What we’re actually working with is machine learning (ML), and specifically Large Language Models (LLMs).
While these tools are incredible for refining wording, identifying repetitive habits, or expanding on a stuck idea, there’s a line in the sand. Using AI as a “helping tool” is a superpower; using it to generate an entire work from a three-word prompt is a shortcut that bypasses the heart of creativity.

How the “Ghost in the Machine” Actually Works
To use AI responsibly, you have to understand its diet. LLMs are statistical models trained on existing data: dictionaries, encyclopedias, and vast swaths of the internet. But they also ingest fiction, scripts, and artwork.
AI cannot create in a vacuum. It synthesizes what it has already “seen.” Think of it this way: if you were raised in a room where the only art on the walls was by Picasso, every original thought you had about painting would naturally lean toward Cubism. That is the limitation of AI. It can remix, combine, and logically reassemble, but it cannot truly imagine.
The Ethics of the “Digital Buffet”
When modern models like ChatGPT or Claude were built, engineers fed them petabytes of data. This included public domain works, but also copyrighted books, paintings, and graphic novels. This has sparked rightful outrage regarding “AI art theft,” particularly from independent illustrators and photographers.
When you ask an AI to “imagine” a scene for you, it is pulling from a reservoir of human-made art. If that reservoir contains “stolen” data, the output sits in an ethical gray area—or, for many, becomes outright theft. We saw this come to a head in August 2024, when a federal judge allowed a landmark class-action lawsuit by artists against major AI image-generation firms to move forward.
3 Steps to Using AI Responsibly
So, how do you use these tools without losing your creative integrity?
- Choose Your Tools Wisely: Not all models are built the same. While some are known for scraping data without permission, others—like Adobe Firefly or Google Gemini—have made strides toward using more ethically sourced or licensed training sets.
- Be Explicit (and Human) with Your Prompts: There is a world of difference between asking a machine to “write a story about a gnome named Bob” and asking it to “Analyze my draft for repetitive word usage and suggest pacing improvements.” Use AI to audit your work, not to author it.
- Provide Your Own “DNA”: When I generate images for my website, I don’t just type a prompt and hope for the best. I use tools like Adobe Firefly, upload my own purchased stock or public domain references, and give strict instructions to limit the AI’s “creative” liberties. I provide the foundation; the AI just handles the heavy lifting of the rendering.
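To make the “audit, not author” idea in step two concrete, here is a minimal local sketch of the kind of repetition check you might ask an AI to perform on a draft. The word-length cutoff and repetition threshold are my own assumptions, not a standard:

```python
import re
from collections import Counter

def flag_repetition(text, top_n=5, min_count=3):
    """Flag words used at least min_count times in a draft.

    Words of three letters or fewer are skipped as a rough
    stand-in for a proper stopword list (an assumption for
    this sketch, not a linguistic rule).
    """
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if len(w) > 3)
    # Keep only the most frequent words that cross the threshold.
    return [(w, c) for w, c in counts.most_common(top_n) if c >= min_count]

draft = (
    "Bob the gnome walked slowly through the slowly darkening woods. "
    "The woods felt endless, and Bob walked on, slowly losing hope."
)
print(flag_repetition(draft))  # → [('slowly', 3)]
```

A script like this catches raw word counts; what the AI adds on top is judgment about pacing and whether the repetition is deliberate, which is exactly where a human prompt should steer it.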
The Bottom Line
Are these methods perfect? No. But by using these guardrails and your own moral compass, you can use AI to enhance your craft rather than replace it.
Stay tuned: In my next post, I’ll be diving into how to spot “AI Slop”—that hollow, uninspired content generated without a human touch.
