Osram advertising)

#7684
by Pnppk - opened

Osram advertising, 1930-1950s
Без названия - 2022-08-02T100357.921.jpeg
Без названия - 2022-08-02T095858.205.jpeg
Без названия - 2022-08-02T095220.645.jpeg
With respect to @SoloPC

Is it possible to make a little man whose head is a light bulb and whose body, arms, and legs are also made of light bulbs? I tried, but it didn't work.

Like the Michelin logo, but with light bulbs instead of tires? Probably yes, I can try

It will be interesting to see what comes out :))

added a "steampunk" hint 😆

image.png

Oh, it's really hard to do. I can't(
Most general descriptions, like an android made out of light bulbs or a person with light bulbs instead of body parts, do not work at all(
And it's especially bad at handling prompts with verbs like "made of", "assembled", "constructed", etc.
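
A minimal sketch of that kind of phrasing sweep, if anyone wants to compare results side by side (the prompt strings are only illustrative, and generate_image is a hypothetical placeholder for however you actually submit a prompt to Craiyon):

```python
# Sketch: sweep different "made of" phrasings for the same subject.
# generate_image is a hypothetical stand-in for the actual text-to-image call.

def generate_image(prompt: str) -> None:
    """Placeholder: swap in a real Craiyon / dall-e mini call here."""
    print(f"generating: {prompt!r}")

SUBJECT = "a little man with a light bulb for a head"
PHRASINGS = [
    "made of light bulbs",
    "assembled from light bulbs",
    "constructed out of light bulbs",
]

for phrasing in PHRASINGS:
    generate_image(f"{SUBJECT}, body and limbs {phrasing}")
```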

Then I must have lucked out yesterday with "Archangel Michael made out of beef jerky"

0 archangle jerky.JPG

ran it again to test today

0 archangle jerky 2.JPG

I think it can be done, keep trying :)

while I was posting that, I tried for the lightbulb man

see, getting closer! heh

image.png

adding a "steampunk" hint

There's a difference. Archangel Michael is a stable archetype; Google will give you a million relevant images for such a request. But the lightbulb man is not. For example, I know of (more or less) only one well-known picture:
i (1).jpeg
I think dall-e is simply not trained on this.

If Dall-e mini were an AI you could keep training, there would be no problem. The problem is that it is PRE-trained)

And what if we add things to the query, like artstation trends or image search in Yandex, for example?

Craiyon, not dall-e mini, sorry. But it doesn't matter.

The developers claim that it was trained on about 16 million text-picture pairs. That's not much; dall-e 2 has more than 130 million pairs, for example. I may be wrong about the exact numbers, but the difference is about right.

It knows about artstation, definitely. As for Yandex, I think dall-e mini was trained on Google pictures, because OpenAI (developer of the dall-e algorithm) is connected with Google.
Whether it can connect to the network during generation, I don't know. My opinion is no.

I did a little experiment with these, connecting them one by one. The request was the word "transformers". When I connected Artstation and Google, the AI generated roughly the same pictures: drawings, old Transformers models, and the animated series. Only when I connected Yandex were all of the pictures Transformers from the Michael Bay movies.
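
A minimal sketch of that one-at-a-time comparison (the modifier strings are just examples of the additions being discussed, and generate_image is a hypothetical placeholder for the real text-to-image call):

```python
# Sketch: run the same base prompt with each modifier added one at a time,
# so any change in the output can be attributed to a single addition.
# generate_image is a hypothetical stand-in for the real text-to-image call.

def generate_image(prompt: str) -> None:
    """Placeholder: swap in a real Craiyon / dall-e mini call here."""
    print(f"generating: {prompt!r}")

BASE_PROMPT = "transformers"
MODIFIERS = ["trending on artstation", "google images", "yandex images"]

generate_image(BASE_PROMPT)  # baseline, no modifier
for modifier in MODIFIERS:
    generate_image(f"{BASE_PROMPT}, {modifier}")  # one modifier at a time
```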

Oh, that's an interesting result

Thanks, I saw this one. I just don't know enough about Transformers to draw the right conclusions)
