Even as generative AI storms into the workplace, the debate continues about whether it’s driving useful advances or simply overhyped. A recent Pew survey of working adults found that workers are worried about AI’s impact in the workplace, and most fear that AI will lead to fewer jobs in the future.
Still, new AI-powered tools come on the market every day, promising to save time, automate work, or take on the tasks people don’t want. Here at mmhmm, we surveyed 1,000 knowledge workers to understand where they truly find this technology useful and where they think it’s getting in the way.
Most workers said they’re broadly comfortable with AI in the workplace. Beyond that, though, people are conflicted. They know the technology is here and show some desire to use AI-driven tools, but they’re unsure when it’s actually helping, what it’s doing, and whether to trust what comes out. Workers told mmhmm that they want AI to support what they’re doing, not replace core tasks, even as they turn to it for help. A whopping 91% want AI to make them more effective at their job. They want help making decisions; they don’t want to hand those decisions entirely over to technology.
The issue of trust is perhaps the most interesting finding: our respondents said trust requires authenticity, and authenticity isn’t really possible when the reins are handed entirely over to technology. Users specifically want to know if a tool or app is using AI. 67% want to be notified when AI is being used, and that number trends higher among senior leadership. A scant 12% say they can always tell when a tool or app is incorporating AI functionality.
This puts developers in a bind: users want AI usage disclosed, yet disclosing a generative AI-driven feature may immediately turn off a large portion of the user base.
When it comes to showing value, much of the early excitement about generative AI centered on its ability to churn out copy that sounded mostly like it was written by a human, and even to produce interesting, if uncanny, images. But those images have a problem: almost half of the workers we surveyed said they find AI-generated images unnatural.
The core issue comes down to trust. When people see images online, they now have trouble trusting ones that look too perfect, unsure whether they’re seeing the product of meticulous craft or a soulless algorithm. In one very public example, Coca-Cola received pushback during the holidays when it released a video ad built almost entirely by AI models.
Megan Cruz of The Broad Perspective Pod summed it up this way: “This is always what it was going to be used for btw. It’s not some great equalizer. It’s a way for already massively wealthy execs to add a few more mil to their annual bonuses by cutting creative teams entirely & having a machine vomit up the most boring slop imaginable instead.”
Paradoxically, using AI at work may lead people to trust its output even less. Among people who use AI daily, 44% won’t trust any image online unless it’s flawed in some way — much higher than among those who use AI weekly or monthly. Across our entire dataset, 35% won’t trust any online image unless it’s flawed in some way.
Disclosure also exposes a strange split between how much people trust the content and how much they trust the brand behind it. People seem to expect brands to use AI and give them a little more leeway on usage.
For the content itself, skepticism stays consistent whether or not AI usage is disclosed: 52% are more skeptical of content when AI is disclosed, and the number of skeptics rises a barely noticeable two percentage points when it’s not. The brand fares differently. 42% say they’re skeptical of a brand when AI is disclosed, and that jumps 11 points when AI isn’t disclosed.
When workers don’t trust something, it’s not just the image itself that takes the hit — it’s also the brand that puts it out. If an image is so perfect that it loses its humanity, 58% will lose trust in the brand that produced it.
Ultimately, companies may be better off staying away from the content creation side of generative AI, as it’s not clear the benefit will outweigh the cost. Where AI can assist a human, though — helping a writer do better work, or a graphic designer create something even more compelling — it makes sense. In other words, don’t lose the humanity behind the work.
This is part 1 of a two-part report. Part 2 will be published next week.