We actually stumbled into this idea while working on a different problem. While helping a few companies use Stable Diffusion in production, we found that they needed QA analysts to manually evaluate their images to make sure they were high quality once in prod. This process often took days just to get through thousands of images, so even companies with a text-to-image product that had PMF couldn't guarantee that their images were high quality.
From our initial set of customers, we found that evaluating the quality of their outputs at scale was an even larger problem than using the model itself. So we pivoted the company to solving this problem using our background in CV research. All three of us did computer vision research at UC Berkeley, and Abhi worked with John Canny, who invented many CV techniques, including the Canny edge detector.
We built a product that automates this process using computer vision. We've trained several in-house computer vision models that grade images against different criteria.
These include:

- Detecting human deformities, such as a person with 7 fingers on a hand. (Image generation models generate deformed hands over 80% of the time when generating a photo of a person! This includes the state of the art: Midjourney, SDXL, Runway, etc.)

- A score that rates how well the image aligns with the prompt (we do this using our own fine-tuned Visual Question Answering model)

- A composition score (how well composed the image is according to photography "rules")
Using Rubbrband is pretty simple. You can send an image to us to process via our API or our web app, and you’ll get scores for each of those criteria back on your dashboard in less than 10 seconds. Here is a quick Loom demo: https://www.loom.com/share/961830347b3643dcbb92dfe80f8ca1f0?....
You can filter your images based on certain criteria from the dashboard. For instance, if you want to see all images with deformed eyes, you can click the “deformed eyes” filter at the top of the screen to see all of those images.
We store images generated by your image generation model. We’re like a logging tool for images, with evaluations on top. We currently charge $0.01 per image, with your first 1000 images free.
We’re super excited to launch this to Hacker News. We’d love to hear what you think. Thanks in advance!
Diffusion models like SD are instead trained with a very simple loss function: the L2 loss of an iterative denoising process. This tends to result in more stable training than GANs. However, you could fine-tune SD with reinforcement learning using the deformity detector as the reward. It's not a panacea, though, as it could lead to overfitting and performance degradation.
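To make the "simple loss" concrete, here's a minimal sketch of the denoising objective described above: add noise to an image at a random timestep, have the network predict that noise, and penalize with plain L2 (MSE). Names, shapes, and the schedule are illustrative, not Stable Diffusion's actual code.

```python
import torch

def diffusion_l2_loss(model, x0, alpha_bars):
    """One training step of the simple denoising objective:
    noise the image, predict the noise, take plain L2 (MSE) loss."""
    b = x0.shape[0]
    t = torch.randint(0, len(alpha_bars), (b,))            # random timestep per sample
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)                # cumulative signal level at t
    eps = torch.randn_like(x0)                             # Gaussian noise to add
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps     # noised image
    eps_pred = model(x_t, t)                               # network predicts the noise
    return torch.nn.functional.mse_loss(eps_pred, eps)     # the "very simple" L2 loss
```

The point of the contrast with GANs: there is no adversary here, just a fixed regression target, which is why training tends to be more stable.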
Generative networks are, in my experience, not at all difficult to train, because the amount of training data is typically orders of magnitude larger. In this case, the idea is to train something to classify images as high or low quality, which I think is just as hard as generating images. Regardless, if you had such logic, I don't see why you couldn't incorporate it into the network's own loss function. That's how it is done for L1 and L2 regularization and many other techniques for "tempering" the training process.
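The regularization analogy above follows a standard pattern: any differentiable criterion can be folded into training as an extra penalty term on the loss. A minimal sketch in plain PyTorch (function and argument names are illustrative):

```python
import torch

def regularized_loss(pred, target, params, l1=0.0, l2=0.0):
    """Task loss plus L1/L2 penalties on the parameters --
    the standard way an extra criterion is folded into the loss."""
    loss = torch.nn.functional.mse_loss(pred, target)
    for p in params:
        loss = loss + l1 * p.abs().sum()      # L1 penalty (sparsity)
        loss = loss + l2 * (p ** 2).sum()     # L2 penalty (weight decay)
    return loss
```

A learned quality classifier would slot in the same way, as an additional differentiable term, though unlike L1/L2 its gradients are only as good as the classifier itself.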
The problem is that you want the model to be creative but not "too creative" (e.g. eight-finger hands). But preventing it from being too creative risks making it boring and bland (e.g. only generating stock images). I don't think you can solve that with a post-processing filter. Generating, say, 100 images and picking the "best" one might just be the same as picking the most bland one.
Edit: or how it's supposed to work.
Problem: traveling salesman; solution: one particular path. I think verifying that the solution is optimal is, in this case, exactly the same problem as finding the solution.
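The point can be made concrete with a toy sketch: to *verify* that a given tour is optimal, the only generic method is to compare it against every other tour, which is the same brute-force work as solving the problem from scratch (helper names are illustrative):

```python
from itertools import permutations

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def is_optimal(tour, dist):
    """Verifying optimality here means enumerating all tours and
    comparing -- the same work as finding the optimum itself."""
    n = len(dist)
    best = min(tour_length((0,) + p, dist)
               for p in permutations(range(1, n)))  # fix node 0, try the rest
    return tour_length(tour, dist) <= best
```

This is exactly the asymmetry being pointed at: a short certificate exists for "this tour has length at most L" (just measure it), but not for "no shorter tour exists."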
The way we think about it is that we're building a product for organizations in scaling mode, and they have deep needs on the product side: flexibility in filtering, different client libraries, a clean observability interface, etc.
It's possible that we open-source parts of our models, but fundamentally we think we can capture value by building a great all-around web product, and not just a set of eval models.
If the quality of the model is difficult to replicate (which seems to be a big "if" given the pace of NN image-processing improvements), I guess there might be licensing or plugin-sale opportunities there.
I kid, I kid.
I'm wondering about real photos that have deliberately been shot to screw with normative standards of photo composition, like Weston's headshot of Igor Stravinsky. Another genre of photo that may be flagged are sci-fi and/or fantasy film set candid shots, such as photos featuring an actor (or actors) partially out of costume.
Come to think of it, various photography hall of fame galleries could be great testing suites.