Will Image-Generating Software DALL-E 2 Take Jobs From Photographers?

DALL-E 2 is a new artificial intelligence system capable of creating realistic images and art from a written description. No more elaborate styling and lighting setups: now you can just enter a description of what you want, and DALL-E 2 provides the image. Too good to be true? Too threatening for the still-thriving photography industry? See for yourself with my early-access test.

OpenAI, the company behind DALL-E 2, states in its mission statement:

OpenAI’s mission is to ensure that Artificial General Intelligence (AGI) – by which we mean highly autonomous systems that outperform humans in the most economically valuable work – benefits all of humanity.

If you’re a photographer who feels like modern technology is constantly undermining your ability to be an indispensable player in the marketing industry, this statement is sure to make you cringe. I was granted early access to the DALL-E 2 platform and took it for a trial run. Can it really do what we can do? Can it even “outperform” us? Is it a threat to photographers? Is it a resource? Or is it a combination of both? Let’s look.

The software has a few functions. The first, and the one it’s best known for, is generating an image or illustration from a description. On OpenAI’s Instagram account, for example, you can find the result of “a blue orange cut in half on a blue floor in front of a blue wall.”

Everyone can agree that the result is quite mind-blowing. I even attempted a random description myself.
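(For the technically inclined: OpenAI also exposes this text-to-image generation programmatically. Below is a minimal sketch of what a request looks like using OpenAI’s Python client; the API key is a placeholder, and you would need your own API access.)

    import openai

    openai.api_key = "sk-..."  # placeholder: your own API key

    # Ask DALL-E 2 for a single 1024x1024 image matching a text prompt.
    response = openai.Image.create(
        prompt="a blue orange cut in half on a blue floor in front of a blue wall",
        n=1,
        size="1024x1024",
    )

    # The API returns a temporary URL for each generated image.
    print(response["data"][0]["url"])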

There’s no denying that the technology is impressive. However, my intention during the early-access test was to find out whether it could stand in for a professional photographer’s work. Could a client, instead of hiring us, enter a description of what they wanted and avoid the expense of hiring a professional?

Test 1: Are the generated images comparable to the work of a professional photographer?

My first test was to see if DALL-E 2 could generate visual content that could rival the images I was working on at the time. First case study: a chocolate made from cocoa and dates. I typed in a description of the image I had created that morning: “a date with chocolate sauce poured over it.”

Here are the results:

I guess if you just need a photo of dates with chocolate, that might be enough. But if you care at all about lighting, composition, color correction, or aesthetics, these images would not meet my standards.

Then I decided to throw a model into the tests. The brand once did a shoot where a model poured chocolate on her tongue, and it was a very successful image. Along these lines, I typed: “A beautiful woman with chocolate drizzled all over her body.”

My first observation was that the artificial intelligence seemed to have chosen Caucasian brunettes as its quintessential representation of beauty, so I guess I’m out of luck! My second observation was, as in the previous test, that the aesthetics of the images were a complete failure. It looked more like a scene from a Freddy Krueger movie than an advertisement selling chocolate and lust. The software impressed me in that it could magically generate images from a brief description, but it soon became clear that it was in no way capable of creating a cohesive, aesthetically successful set of images.

Test 2: Can the editing functions be an asset for the photographer?

You may have seen DALL-E 2’s nearly implausible results, such as the AI-edited fuzzy ladybug featured in this Tech Times article. I decided to test these features as well. My first try was to remove a shadow and fill it with a patterned background. I guess I jumped right into the deep end.

Once I uploaded my image, I selected “Edit Image” and typed “Remove the skincare bottle shadow and fill it with the palm leaf shadow”. I was unquestionably impressed with the images it generated.

It definitely outperformed Photoshop, which couldn’t match the palm pattern.
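(The same inpainting workflow is available programmatically: you upload the original image along with a mask whose transparent pixels mark the region to repaint. Here is a minimal sketch using OpenAI’s Python client, with placeholder file names, assuming square PNG inputs as the Images API expects:)

    import openai

    openai.api_key = "sk-..."  # placeholder: your own API key

    # Transparent areas of the mask tell DALL-E 2 where it may repaint;
    # everything else in the original image is preserved.
    response = openai.Image.create_edit(
        image=open("skincare_shot.png", "rb"),  # placeholder: original image
        mask=open("shadow_mask.png", "rb"),     # placeholder: mask with a transparent edit region
        prompt="Remove the skincare bottle shadow and fill it with the palm leaf shadow",
        n=1,
        size="1024x1024",
    )

    print(response["data"][0]["url"])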

Of all the tests I’ve run so far, I really have to take my hat off to the software on this one. Then I tried another real-world scenario. A salsa client once asked me to replace the red peppers in the image below with jalapeño peppers. Needless to say, I had to reshoot it. Impressed with DALL-E 2’s previous fix, I decided to see if it could do the job.

“Substitute jalapeño peppers for red peppers.”

(crickets)

…and the peppers are still red.

A clear failure on this mission.

Test 3: Can DALL-E 2 effectively add elements to a photographer’s image?

In my product photography, I often do a lot of splash and crash shots. My final test was to see if the software could do some of this work for me. Inspired by the images I took below, I asked if it could add tortilla chips to a background.

Here is the result for “Add tortilla chips to the background.”

I also asked the software to add more water swirls to a shot.

Below is the result for “Add a splash of juice to the background.”

The test above produced no splash, just some curious substitutions, like a blurry pineapple creeping into the frame.

Conclusion

After subjecting DALL-E 2 to a myriad of challenges, I found it clear that the software has yet to fulfill its mission of “outperforming” a professional photographer. Although the software is an incredible feat, it does not always deliver what is asked of it, and when it does, the aesthetics of the image are not up to par. I was surprised by the repair work on the palm shadow, though, and I wonder if it will position itself as a more advanced tool than Photoshop.

What do you think of this new technology that aims to “outperform humans in the most economically valuable work”? Share your thoughts below.
