
Can we train more game art assets, game props, icons, CG scenes, etc?

#146
by WarmBloodAban - opened

Can we train more game art assets, game props, icons, CG scenes, etc?

Those might already exist in the training data. You can "enforce" it by training a simple lora to get exactly what you want.

But with certain scoring systems, those assets / icons might be rated low (score_4 or lower) or tagged "bad quality" if the scoring system Anima uses rates images generically.

I've used Rouwei in the past to gen some icons and items by doing something like this:

  • Create a placeholder (item slot) like the image below. This serves as a directional guide for the initial generation, so high denoising (>0.85) is ideal (a minimal img2img sketch of this setup follows the list). (example image)
  • Drop all quality modifiers and try to use natural language only. The right image uses all suggested quality modifiers, the left one does not.
    (example image)
  • To get the best of both worlds, you can swap the conditionings midway to inject quality modifiers and increase the overall quality.
    (example image)
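
If you want to reproduce the first bullet outside ComfyUI, here's a minimal diffusers img2img sketch of the idea, assuming an SDXL-class checkpoint usable with diffusers. The checkpoint path, canvas size, slot colour and denoising strength are all placeholder values to adjust:

```python
from PIL import Image, ImageDraw
import torch
from diffusers import AutoPipelineForImage2Image

# Build a simple placeholder "item slot": a brown square centered on a plain canvas.
# Canvas size, colours and slot size are arbitrary choices for this sketch.
canvas = Image.new("RGB", (1024, 1024), "dimgray")
draw = ImageDraw.Draw(canvas)
slot = 320  # half-width of the item slot in pixels
cx, cy = 512, 512
draw.rectangle([cx - slot, cy - slot, cx + slot, cy + slot], fill=(139, 90, 43))

# "your-sdxl-checkpoint" is a placeholder; point it at whichever SDXL-class
# checkpoint (e.g. a Rouwei/Anima build converted for diffusers) you actually use.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "your-sdxl-checkpoint", torch_dtype=torch.float16
).to("cuda")

# High denoising strength (>0.85) so the placeholder only steers the item's
# placement and screen ratio, not its final look.
image = pipe(
    prompt="game asset, sword, minified version of an in-game item",
    image=canvas,
    strength=0.9,
    num_inference_steps=30,
).images[0]
image.save("icon_draft.png")
```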

Prompt list (positive + negative):

  • Image 1:
    @tenzupp3, game asset, sword, minified version of an in-game item.
  • Image 2:
    masterpiece, best quality, score_7, @tenzupp3, game asset, sword, minified version of an in-game item. + worst quality, low quality, score_1, score_2, score_3
  • Image 3, with the comfyui-prompt-control node: it uses image 1's prompt until 30%, then swaps to image 2's (a rough diffusers approximation of this swap is sketched after the list):
    [:masterpiece, best quality, score_7, :0.3]@tenzupp3, game asset, sword, minified version of an in-game item. + [:worst quality, low quality, score_1, score_2, score_3:0.3]
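
For reference, a rough diffusers approximation of that 30% swap (outside ComfyUI) is to split sampling into two passes and hand the latents over at the switch point. This is the SDXL "ensemble of denoisers" handoff rather than a true in-sampler conditioning swap, and the checkpoint name below is a placeholder:

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# "your-sdxl-checkpoint" is a placeholder path/ID, not a real repo.
base = StableDiffusionXLPipeline.from_pretrained(
    "your-sdxl-checkpoint", torch_dtype=torch.float16
).to("cuda")
# Reuse the same components for the second pass so nothing is loaded twice.
swap = StableDiffusionXLImg2ImgPipeline(**base.components)

plain_prompt = "@tenzupp3, game asset, sword, minified version of an in-game item."
quality_prompt = "masterpiece, best quality, score_7, " + plain_prompt
quality_negative = "worst quality, low quality, score_1, score_2, score_3"

steps = 28

# First 30% of the schedule: natural-language prompt only, returned as latents.
latents = base(
    prompt=plain_prompt,
    num_inference_steps=steps,
    denoising_end=0.3,
    output_type="latent",
).images

# Remaining 70%: quality modifiers injected, continuing from the same latents.
image = swap(
    prompt=quality_prompt,
    negative_prompt=quality_negative,
    image=latents,
    num_inference_steps=steps,
    denoising_start=0.3,
).images[0]
image.save("icon_swapped.png")
```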

This kind of thing requires additional steps if you want to stay consistent across your gens. The reason for the brown square is to scale icons / assets to a consistent size.

You can train a lora this way without using others' assets. Generate a bunch of stuff like this (taking the icon-to-screen ratio into account to further enforce that it's an "asset" you want to generate), clean up the results by removing the initial brown square, train a lora on those images, use it to generate more things more easily, train another lora on top of that, and rinse and repeat until you're bored or have reached the ideal. You can also inject your preferred style into this asset / icon lora along the way.
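
One way to automate that clean-up step is a small PIL/NumPy pass that crops back to the slot region and keys out whatever is still the placeholder brown. The folders, crop box, colour and tolerance below are made-up values that need adjusting to your own setup:

```python
from pathlib import Path
import numpy as np
from PIL import Image

SRC = Path("gens")          # hypothetical folder of raw generations
DST = Path("lora_dataset")  # hypothetical output folder for training images
DST.mkdir(exist_ok=True)

SLOT_BOX = (192, 192, 832, 832)       # crop box matching the placeholder slot
SLOT_BROWN = np.array([139, 90, 43])  # fill colour used for the placeholder
TOLERANCE = 40                        # how far a pixel may drift and still count as "square"

for path in sorted(SRC.glob("*.png")):
    icon = Image.open(path).convert("RGB").crop(SLOT_BOX)
    arr = np.asarray(icon).astype(np.int16)
    # Pixels still close to the original brown are treated as leftover placeholder
    # and pushed to plain white so they don't end up in the training data.
    leftover = np.all(np.abs(arr - SLOT_BROWN) < TOLERANCE, axis=-1)
    arr[leftover] = 255
    Image.fromarray(arr.astype(np.uint8)).save(DST / path.name)
```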

You can train whatever you want. I assume, though, that the ye-pop meta tag can help with game assets. It's a "random world" dataset, but Tdrussel removed all photographic data from it, so I assume what was left is mostly the kind of stuff you're asking about.

Can we train more game art assets, game props, icons, CG scenes, etc?

Based on the license, it depends on the use case (if I'm reading this correctly):
Generating game art, props, icons, CG scenes as Outputs → allowed, and you can even use them commercially.
Fine-tuning the model with that kind of data → only for non-commercial use (personal projects, internal R&D). For commercial/production use, you'd need a commercial license from CircleStone Labs.
The one hard no: using any Outputs to train a model that competes with CircleStone Models → not allowed.

Thank you for sharing your experience. I can use these prompts to train my style, since the style I got when testing with these prompts is still relatively immature.
