
'Don't use AI answers, please.'

Andrew Watson·July 21, 2025

AI, AI, and more AI, that’s all we hear these days. Double dashes, recycled vocabulary, and a noticeable lack of character. Training ChatGPT to sound organic is tough but not impossible. With a bit of prodding and nudging in the right direction, it’s becoming harder to tell what’s AI-generated and what’s genuinely human.

Take LinkedIn, for example. Automated systems and APIs now let you publish posts with little to no human input, which has me questioning the legitimacy of the voices I follow on these platforms. Twitter’s no better. It’s flooded with Twitter junkies leveraging automated tools that churn out empty business quotes ten times a day to satisfy the algorithm. Stuff like “Wake up at 5 a.m. and make your dreams pay rent” or “Your comfort zone is your enemy.” Incredibly inspiring stuff.

My new rule? No picture, no real.

So why does AI matter for people like me hiring for marketing roles?

Early indicators of talent used to be easy…

Every company has a recruiting process. Maybe you’re a bank kicking things off with cognitive tests to sort the left-brainers from the right-brainers. Or you’re in tobacco, where the only thing that matters is your golf handicap.

Over time, many of these processes have adopted technology as a smarter way to sift through a big pool of candidates. But at some point, early, middle, or end, you’ll talk to an actual human who’ll decide your fate.

At our end, the process usually runs 3 to 4 steps, depending on the role. We typically begin with a casual online questionnaire (no face-to-face interviews yet), paired with a few scenario-based or technical questions to get a feel for how a candidate thinks - how they might approach certain tasks, and their take on the platform in question (e.g. Facebook ads).

You can usually tell the difference early on between someone strong and someone just okay. For example, if I asked a candidate:

Q: What’s a good metric of profitability for a brand advertising on Meta that is also a subscription business and looking to scale?

OK answer: ROAS is a good metric because it shows how profitable each purchase is after ad spend. A business should optimize for ROAS to make sure it’s using creative that will generate the best results for its bottom line.

Great answer: As a subscription business, I’d need to know the brand’s LTV first before I can understand what’s profitable for them. While ROAS is useful for first-purchase profitability and keeping an eye on cash flow, CPA is a better gauge for longer-term scale, because this brand could have a much stronger LTV and can afford to loosen its first-purchase targets if so.

A simple question, but only one answer shows a better understanding of how to approach scaling. In the past, you could usually tell, at a high level, whether a candidate had a methodology or thought process that aligned with the skill set you were looking for. That alone gives you some comfort knowing that, when you jump into your first face-to-face workshop, the candidate is likely to show promise, and a solid grasp of both the mechanics of the ad engine and the business-level thinking needed to work with clients effectively.
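To make the difference between the two answers concrete, here’s a rough sketch of the arithmetic the “great” answer is gesturing at. All figures (order value, CPA, LTV, margin) are hypothetical, and this is a simplified model, not a prescription for real media buying:

```python
# Hypothetical numbers illustrating why LTV changes the profitability target
# for a subscription business advertising on Meta.

def roas(revenue: float, ad_spend: float) -> float:
    """Return on ad spend: revenue generated per dollar of ads."""
    return revenue / ad_spend

def breakeven_cpa(ltv: float, gross_margin: float) -> float:
    """Max cost per acquisition before a customer is unprofitable over their lifetime."""
    return ltv * gross_margin

# First-purchase view: a $40 first order that costs $25 to acquire.
first_order_value = 40.0
cpa = 25.0
print(f"First-purchase ROAS: {roas(first_order_value, cpa):.2f}")

# Subscription view: the same customer is worth $300 over their lifetime
# at a 60% gross margin, which supports a CPA well above $25.
ltv = 300.0
margin = 0.60
print(f"Break-even CPA: ${breakeven_cpa(ltv, margin):.2f}")
```

On first-purchase ROAS alone the $25 acquisition looks thin, but the LTV-adjusted break-even CPA shows the brand has plenty of headroom to loosen its first-purchase targets and keep scaling, which is exactly the distinction the stronger candidate draws.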

Is AI muddying the waters for traditional recruiting practices?

However, AI is blurring the lines between strong and weak candidates. Tools like ChatGPT can now produce responses that outperform even ‘good’ applicants, if prompted correctly. Candidates are constantly leaning on AI to craft polished answers to recruiter prompts, often using simple instructions like: “Give me a more detailed answer for ____.”

For people like Alex and me, this makes it trickier to sift through OK, good, and great candidates and to decide how best to approach the recruiting process. Then again, does using AI show efficiency, or laziness? Did the candidate use it to generate their entire answer, or just to tidy up the formatting? Is their English actually that good, or is AI making them sound more polished than they are?

The reality is that AI is now creating a lot of murky judgment calls in the early stages of recruitment. It’s slowing down the hiring systems many specialists have spent time building, because it increases the need for human interaction, especially for technical roles that also demand strong client-facing skills.

And the result? I’d say our success rate in identifying strong candidates from our first round of interviews, especially in media buying, has dropped noticeably. Candidates often assume that because they gave a “great” answer in the first round, they can keep hiding behind the ChatGPT wall, praying I won’t ask the same question again. But when I do, it’s clear many don’t know the platform as well as they claim, even on simple tasks like setting up audience segments or defining engaged customers.

So what does that mean for recruiters? In our case, as a client-facing business, we’ve had to swap stages 1 and 2 around, introducing a short face-to-face intro call before diving into deeper scenario-based Q&A sessions. It’s a quick way to assess baseline expertise before committing more time to case studies and workshops with prospective candidates.

My take? For companies leaning too heavily on manual, answer-driven hiring pipelines, AI’s ability to generate polished, high-caliber responses may be flooding the candidate pool with people who are less skilled, less proficient, and less personable, because it gives them a way to falsify their expertise and personalities. It’s hard to gauge whether I’m excited or fearful that someday all remote jobs and recruiting will be powered by software automation. Perhaps ‘Deep’ AI can monitor your job postings, screen applicants (which it already does), crawl information on your background and expertise, send you messages, request references, schedule a call, and even take your first interview? If so, I think we’re all in.

Originally published on Substack.