#SDXL had a lot of issues with trying to merge styles. It also went a little crazy in that last picture. So I went to #ChatGPT and talked it over, worked on some refinements, and had it cook up a combined #Goth and #Jugendstil take on that last one. The results were quite beautiful.
Growing more and more freaked out by these articles and blog posts with AI-generated images of people who are missing legs or arms, have extra hands or fingers, or otherwise reside deep in the uncanny valley.
My friends and I are starting a #StarWars #RPG set in the Outer Rim, during the transition from Republic to Empire.
Meet Thane: a former slave liberated by clone troopers, who joined the fight only to grow disillusioned when the Empire reinstated slavery. He turned on his unit, only to be permanently burned in the process. Now he wanders the Outer Rim as a bounty hunter and occasional rebel fighter.
Continuing our thread, I'd like to showcase my character Thane in a few different art styles, starting with #CalArts-style animation! These AI images from Bing breathe new life into his story, with vibrant colors and a modern twist on the classic Star Wars aesthetic.
From battle-scarred armor to the windswept cloak, each detail is reimagined with a lively, animated flair.
Thane's story unfolds further in this unique oil painting style, complete with an impressionist twist. Here he is, contemplative and battle-scarred, finding solace in the quiet moments amidst the chaos of a galaxy at war.
Since their arrival, generative AI models and their trainers have demonstrated their ability to download any online content for model training. For content owners and creators, few tools can prevent their content from being fed into a generative AI model against their will. Opt-out lists have been disregarded by model trainers in the past and can be easily ignored with zero consequences. They are unverifiable and unenforceable, and those who violate opt-out lists and do-not-scrape directives cannot be identified with high confidence.
In an effort to address this power asymmetry, we have designed and implemented Nightshade, a tool that turns any image into a data sample that is unsuitable for model training. More precisely, Nightshade transforms images into "poison" samples, so that models trained on them without consent learn unpredictable behaviors that deviate from expected norms; e.g., a prompt asking for an image of a cow flying in space might instead produce an image of a handbag floating in space.
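To make the idea concrete: the general family of techniques Nightshade belongs to perturbs an image within a small, hard-to-perceive budget so that a model's feature encoder "sees" a different concept than a human does. The sketch below is a toy illustration only, not Nightshade's actual algorithm: it uses a random linear map as a stand-in for a real image encoder, and projected gradient descent to nudge an image's features toward an unrelated target concept while keeping the pixel perturbation bounded. All names (`poison`, `W`, `eps`) are illustrative assumptions.

```python
import numpy as np

def poison(image, target_feat, W, eps=0.05, steps=200, lr=0.005):
    """Find a small perturbation delta (||delta||_inf <= eps) that moves
    the feature embedding of image+delta toward target_feat.

    W is a toy linear 'feature extractor'; a real attack would use the
    text-to-image model's own image encoder and a perceptual bound."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        residual = W @ (image + delta) - target_feat  # feature-space error
        grad = 2 * W.T @ residual                     # gradient of ||residual||^2 w.r.t. delta
        delta -= lr * grad                            # gradient step
        delta = np.clip(delta, -eps, eps)             # project back into the L-inf budget
    return delta

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 32))           # toy encoder: 32-dim "image" -> 8-dim features
image = rng.normal(size=32)            # the image being protected
target_feat = W @ rng.normal(size=32)  # features of an unrelated "target" concept

delta = poison(image, target_feat, W)
before = np.linalg.norm(W @ image - target_feat)
after = np.linalg.norm(W @ (image + delta) - target_feat)
print("feature distance to target:", before, "->", after)
print("max pixel change:", np.abs(delta).max())
```

The point of the toy: the change to the image is tiny in pixel space (bounded by `eps`) but meaningfully shifts where the encoder places it, which is the mechanism that lets poisoned training samples teach a model the wrong association for a concept.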
Regarding that last boost, I'm starting to conceive of LLMs and image generators as a phenomenon of (American) society eating its seed corn. If you're not familiar with the phrase, "seed corn" is the corn you set aside to plant next year, as opposed to the corn you eat this year. If you eat your seed corn this year, you have no seeds to plant next year, and thus create a crisis for all future years, a crisis that could have been avoided with better management.
LLMs and image generators mass-ingest human-created texts and images. Since the human creators of the ingested texts and images are not compensated, and not even credited, this ingestion puts negative pressure on the sharing of such things. Creative work that would seed future creative work gets suppressed. Creative people will have little choice but to lock down, charge for, or hide their works. Otherwise, they'll be ingested by innumerable computer programs and replicated ad infinitum without so much as a credit attached. Seed corn that had been freely given forward will become difficult to get. Eaten.
Eating your seed corn is meant to be a last-ditch act taken out of desperation, after exhausting all other options. It's not meant to be standard operating procedure. What a bleak society that does this, consuming itself in essence.