I was asked my opinion of the Danbooru AI-art temp ban, as someone who's been doing anime generation since 2015 but is not really part of the Danbooru community; I think it's fine—good, even.
I don't think it's cowardly to punt to later on defining what's acceptable or when it'll be allowed.
This is for a variety of reasons, but mostly: the loss from a temporary ban is not large, because the benefits are small and can be made up later, while the downsides of allowing AI uploads now are large (even excluding any legal issues).
1. It is unimportant for Danbooru to curate AI art. There is still not much of it (albeit rapidly increasing), and most of it is still bad.
2. Images generated now are worse than they look. You will come to regret uploading many of the images which would be uploaded right now. Aside from the obvious artifacts like hands (hands are a problem I have struggled with for years—now you all know my pain), even 'good' samples are going to look worse in a year or two. There is, with every leap in technology, whether daguerreotypes or Lumière moving-pictures of trains or ProGAN human faces, a honeymoon period where people are justly astonished at its realism; but as time passes, everyone sees enough of it to become sensitized to all its flaws. Consider how laughable CGI effects in Hollywood movies from the '80s look, or how terrible early computer graphics in manga & anime look now. (I read through the original Ghost in the Shell manga recently; its computer-graphics pages look terrible now, as much as they awed readers in the '90s.)
Further, use of generative models always improves considerably as time passes, as people learn how to prompt them, develop libraries of prompts, and discover new prompts (early on, prompts are usually terribly bland, boring, and restricted—people just don't know what the model can do, and have too little imagination to discover it all immediately; we are still figuring out all the crazy things GPT-3 can do), and apply new methods like 'negative prompts' (not even a year old, IIRC) or textual-inversion/DreamBooth etc. You may be impressed by current samples, but soon you will be looking at better samples and finding the old ones to be garbage. You will flinch at the small images, upscaling blurs, square aspect ratio (totally unnecessary—there have always been many ways to generate variable-aspect-ratio images), and other limitations now. Similarly with video games: the final games on a platform tend to be the most visually spectacular, because that is when the game developers have learned to use the hardware to its fullest.
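(For the curious, a 'negative prompt' is not magic, which is why it was discovered so late: in classifier-free guidance, the sampler extrapolates the prompt-conditioned noise prediction away from an unconditional one, and a negative prompt simply replaces the unconditional (empty-prompt) prediction. A toy sketch of the arithmetic, with NumPy vectors standing in for the model's image-shaped noise predictions—names here are illustrative, not any real library's API:)

```python
import numpy as np

def cfg_step(eps_cond, eps_base, scale):
    """Classifier-free guidance: extrapolate the prompt-conditioned noise
    prediction away from a baseline prediction by the guidance scale."""
    return eps_base + scale * (eps_cond - eps_base)

# Toy noise predictions (real models output full image-shaped tensors).
eps_pos   = np.array([1.0, 0.5])  # conditioned on the prompt
eps_empty = np.array([0.2, 0.2])  # conditioned on the empty prompt ""
eps_neg   = np.array([0.8, 0.1])  # conditioned on the negative prompt

plain  = cfg_step(eps_pos, eps_empty, scale=7.5)  # ordinary guidance
negged = cfg_step(eps_pos, eps_neg,   scale=7.5)  # negative-prompt guidance
```

Substituting the negative-prompt prediction as the baseline pushes each denoising step away from whatever the negative prompt describes, which is the whole trick.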
I would also remind everyone that deep learning only gets better: while Stable Diffusion is driving this conversation, SD is well below the publicly-known SOTA of image generation (there are surely even better models than Imagen & Parti cooking away, and who knows about GPT-4?), and the research frontier is already shifting to video. You shouldn't fixate on SD, but be thinking about images benefiting from video/3D-modeling knowledge in models trained with 100x the compute of Imagen/Parti and refined with years of tricks & optimizations. Because that's where things will be not terribly long from now...
3. Images added to the database will be a burden forever. It is better to have a bright-line rule than to let bad images get in (as curation crumbles under the workload), sit around indefinitely, or be purged inconsistently, to the understandable anger of everyone who invested any effort in finding/uploading/tagging them.
4. The proper way to handle AI images is still up in the air. Any set of tags or rules developed in haste, pressured by uploads of possibly hundreds or thousands of images a day, may be severely flawed, and overly tailored to the NovelAI SD model. The future is one of many different models and workflows for years or decades to come, and no one can foresee it. So, why not pause it a while, and see how things shake out? After all, look how much has changed in just the past 2 months! You can't be sure where things will be 2 months from now, much less 2 years; but the consequences of any mistakes in screwing with the tags or letting in the wrong images or modifying the software will still be with you. Why not let other sites like Pixiv and AIbooru figure things out first, or serve as object lessons in how not to handle AI images?
Given the rapid increase in the number of generated samples, a mistake in curation could lead to a lot of images being uploaded. It's hard to get statistics, but Pixiv searches for 'NovelAI' are already yielding >30k hits. I don't think there is much risk of Danbooru becoming majority-AI in the next year; but in a few years, it will become possible to generate hundreds of thousands of very high-quality, diverse, auto-tagged images automatically, have a few dedicated humans screen them for upload, and simply swamp Danbooru while following the letter & spirit of the rules. Before that happens, it would be good for the rules to be correct, so that this is a good thing and not a bad thing.
5. The community is the most important aspect of Danbooru, not the images—the images don't upload or tag themselves. It is clear to me from reading through the posts up to now that much of the community simply does not want AI art, for reasons both good and bad, while the people who do want AI art don't care that much. The community should not be put at risk.
6. There is still some value, even to AI art itself, in Danbooru being focused on human-created art. Contrary to what many believe, generative models can learn fine from the outputs of themselves or other models, particularly if those outputs have been heavily prompted or filtered by humans, and this is a standard technique for improving them; to the extent that the outputs are wrong, they can simply be treated as a 'style'—eg the way that large image models have learned 'Deep Dream' as a style. So it's not a fundamental problem for generative models if Danbooru contains a lot of AI art. However, it does confuse things in various ways, and if the quality is low, future models may have to waste additional compute learning the 'style' of the glitches. So, since data is not really a bottleneck for anime models at this point (no one has yet trained a model which has overfit the Danbooru corpus, much less other image sources), adding AI images to Danbooru may wind up hurting on net rather than helping. So even from the AI researcher's point of view, banning AI images may still be useful.
So overall, yeah, I just don't see much reason to be upset if Danbooru chooses to temp-ban AI art. Looking at the long term, I see a lot of good reasons for it, and if it is a mistake, it is a minor one. One might say that the ideal time for Danbooru to drop the ban will be when it becomes moot: when no one can tell any longer whether an image was AI-touched, and all the artists are just using it as an ordinary tool, no more remarkable than the many specialized tricks Photoshop has supplied for eons.