I think that's part of what makes FLUX.1 so good: the content it's trained on is very similar in style.

Diversity is a double-edged sword. It's a desirable feature where you want it, and an undesirable feature everywhere else. If you want an impressionist painting, then it's good to have Monet and Degas in the training corpus. On the other hand, if you want a photograph of water lilies, then it's good to keep Monet out of the training data.
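
The curation tradeoff above is easy to see as code. Here's a toy sketch assuming a Hugging Face dataset with a caption field; the dataset name, field name, and keyword list are all made up for illustration:

    # Toy sketch of caption-based curation: drop painting-style examples
    # when building a photo-only training set. "my-org/captioned-images",
    # the "caption" field, and the keyword list are hypothetical.
    from datasets import load_dataset

    ds = load_dataset("my-org/captioned-images", split="train")
    painting_terms = ("painting", "impressionist", "monet", "watercolor")

    # Keep only examples whose caption mentions none of the painting terms.
    photos_only = ds.filter(
        lambda ex: not any(t in ex["caption"].lower() for t in painting_terms)
    )

Real pipelines use classifiers or CLIP-similarity scores rather than keyword matching, but the effect is the same: what you keep out of the corpus shapes what the model can and can't do.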



DALL-E 3 doesn't struggle with this. It's a design choice, not a technical limitation. They chose to weaken the model in this regard.


Nonsense. FLUX.1-dev is famous for its consistency, prompt adherence, and so on, and it fits on a consumer GPU. That has to come with compromises. You can call any optimization a weakness: that's the nature of compromise.
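
For anyone curious what "fits on a consumer GPU" looks like in practice, here's a minimal sketch using the Hugging Face diffusers FluxPipeline with the usual memory-saving knobs (bfloat16 weights plus CPU offload). The calls are standard diffusers API; the prompt and the VRAM figure are just illustrative assumptions:

    import torch
    from diffusers import FluxPipeline

    # Load FLUX.1-dev in half precision to roughly halve weight memory.
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        torch_dtype=torch.bfloat16,
    )
    # Keep only the active submodule on the GPU; the rest waits in system RAM.
    pipe.enable_model_cpu_offload()

    image = pipe(
        "a photograph of water lilies on a pond",
        guidance_scale=3.5,       # dev is guidance-distilled; low values are typical
        num_inference_steps=28,
        height=1024,
        width=1024,
    ).images[0]
    image.save("water_lilies.png")

The offload call is one of the compromises in question: it trades generation speed for fitting in roughly 16 GB of VRAM instead of requiring a datacenter card.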



