Regardless of one's feelings on the matter, AI has clearly been transformative, if only because it never fails to provoke heated arguments. Moreover, the arrival of commercial models, that is, models that run either in your browser (on a cloud service) or on your own machine (locally), seems to have shifted perspectives entirely. What becomes bothersome is that self-proclaimed communists seem to have done a complete 180 on intellectual property, and that is what we want to focus on.
Also, because the yellow tint on GPT images bothers me:
I believe it’s intentional, because GPT is perfectly able to produce non-yellow-tinted images, and it only happens with their model. It gives them an instantly recognizable look.
To undo it I use Photoshop: create a new layer, fill it entirely with a yellow tone lifted from the image (use the color picker tool), then set that layer’s blend mode to ‘Divide’. Then use the layer opacity slider to remove more or less of the yellow. I find that removing 100% of it makes for a very harsh, bright picture, which is probably another reason they add the yellow tint: it looks warmer.
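The same trick can be sketched outside Photoshop. This is a rough approximation of the ‘Divide’ blend with a solid color layer plus layer opacity, written with NumPy; the function name and the exact opacity blending are my own assumptions, not anything Photoshop publishes:

```python
import numpy as np

def remove_color_cast(img, cast_rgb, opacity=1.0):
    """Approximate the Photoshop workflow: divide the image by a solid
    layer filled with the sampled cast color ('Divide' blend mode),
    then mix the result back with the original at the given opacity
    (0.0 = original image, 1.0 = cast fully removed).

    img: uint8 array of shape (H, W, 3); cast_rgb: the sampled yellow
    tone as an (R, G, B) tuple of 0-255 values.
    """
    base = img.astype(np.float64) / 255.0                # normalize to 0..1
    cast = np.asarray(cast_rgb, dtype=np.float64) / 255.0
    # Divide blend: pixels matching the cast color become pure white.
    divided = np.clip(base / np.maximum(cast, 1e-6), 0.0, 1.0)
    # Layer opacity slider: linear mix between original and divided result.
    out = (1.0 - opacity) * base + opacity * divided
    return (out * 255.0).round().astype(np.uint8)
```

With opacity at 1.0 a pixel exactly matching the sampled cast goes to pure white, which matches the harsh, bright result described above; dialing the opacity down keeps some of the warmth.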
Almost certainly intentional, given that neither their DALL-E models nor their Sora models have a yellow tint like this. My guess is that it functions as a watermark equivalent or something.
Maybe, but it could also be a feedback loop from the Studio Ghibli generations. Those are quite yellow, so I think that’s started to seep into the training data [I think…? At least that’s what I’ve heard]
I think OpenAI runs on the ‘any publicity is good publicity’ adage, and while the data-contamination theory makes sense on the surface, the model is perfectly able to produce non-yellow images; in fact, even smaller local image-gen models can reproduce specific colors. When you send GPT a prompt for image creation, it rewrites it into something the tool understands, and I think they inject “yellow tint” or some similar keyword into that rewritten prompt. It would also be very easy to auto-add a negative prompt to remove it if they wanted.
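The speculation above amounts to a simple server-side rewrite step. This is a purely hypothetical sketch of what such an injection (and the trivial negative-prompt fix) might look like; none of the names, keywords, or behavior here reflect any real OpenAI internals:

```python
# Hypothetical house-style keywords assumed to be injected (pure guesswork).
HOUSE_STYLE = "warm yellow tint, sepia cast"
# What an automatic corrective negative prompt could contain.
AUTO_NEGATIVE = "yellow tint, sepia, color cast"

def rewrite_prompt(user_prompt: str, neutralize: bool = False) -> dict:
    """Rewrite a user's image prompt before it reaches the generator.

    By default, append the speculative house-style keywords to the
    prompt. With neutralize=True, leave the prompt alone and attach an
    automatic negative prompt instead, illustrating how easily the
    tint could be suppressed if the provider wanted to.
    """
    if neutralize:
        return {"prompt": user_prompt, "negative_prompt": AUTO_NEGATIVE}
    return {"prompt": f"{user_prompt}, {HOUSE_STYLE}", "negative_prompt": ""}
```

Local pipelines that expose a `negative_prompt` parameter (as many open image-gen frontends do) make the second branch a one-line change, which is the point being made: if the tint were accidental, it would be trivial to correct.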