I might be a bit late to the party, but I’ve started experimenting with AI art using Stable Diffusion — and honestly, it’s incredible.

I've realized that creating beautiful images really comes down to combining multiple AI tools well. Even choosing a different upscaler depending on the style makes a noticeable difference in quality.

As someone who works professionally with moving images, I have mixed feelings about this rapid progress. Still, I tend to view it positively. After all, learning from countless artworks is something every artist does — the difference is that AI can do it at an unimaginable speed. I’m sure someone is already building tools that make AI model creation super easy, so as long as we use our own curated images to train models, I think this technology will remain creatively empowering.


When you have an idea in your head, you have to instruct the AI precisely to get the image you want — but often the model doesn’t fully understand the underlying concept. For example, when I tried to generate an image of a “ninja,” I discovered that using the term “niqab” (the veil worn by some Muslim women) actually produced results closer to what I had in mind.
To do this properly, fine-tuning or additional training seems like the best approach. I'd like to learn how to create my own LoRA models in the future.
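
For illustration only, here is roughly what that prompt-term swap might look like in code. This is a minimal sketch assuming the Hugging Face diffusers library and the runwayml/stable-diffusion-v1-5 checkpoint; neither is mentioned in the post, and the prompt wording is made up.

```python
# Sketch: generating an image while describing the concept indirectly,
# e.g. using "niqab" instead of the literal word "ninja".
# Assumptions: diffusers library, runwayml/stable-diffusion-v1-5, a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base model, not from the post
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Indirect wording that gets closer to the intended look
prompt = "portrait of a figure in a black niqab, rooftop at night, moonlight, highly detailed"
negative_prompt = "blurry, low quality"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("output.png")

# If a custom LoRA were trained later, it could be loaded into the same
# pipeline before generating, e.g. pipe.load_lora_weights("path/to/lora").
```

The same pipeline is where a self-trained LoRA would eventually plug in, which is why fine-tuning on one's own curated images feels like the natural next step.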