Polish or Punk: AI Everywhere
Welcome to December, Job Board Doctor friends! We have almost made it through another rotation around the sun, and there are no indications that Artificial Intelligence (AI) is going to lessen its grip on every part of our professional brand marketing or our talent tech solutions.
Majority of Long-Form LinkedIn Posts Are Likely AI-Generated
AI-generated writing is taking over LinkedIn, where corporate jargon already reigns supreme, making the platform the perfect playground for algorithmic nonsense. Microsoft’s shiny AI tools for Premium subscribers promise to “rewrite” posts and messages, but let’s be honest: most LinkedIn posts already sound like they were written by a robot. According to Originality AI, more than half of the longer posts on LinkedIn are likely AI-generated—a stat that soared after ChatGPT hit the mainstream in early 2023. But who can tell the difference? Whether it’s a human or a bot, everyone’s just churning out the same “thought leader” drivel, all wrapped in the same saccharine, buzzword-laden package.
Of course, not everyone’s thrilled about the rise of AI-powered LinkedIn influencers. Critics argue it’s killing creativity and undermining actual human writing skills—because nothing screams “inspirational” like outsourcing your congratulations post to an algorithm. Meanwhile, users from entrepreneurs to content creators defend their reliance on AI, claiming it’s all about efficiency. Sure, you could take four minutes to write a heartfelt note to an ex-colleague—or you could let an AI bot spit out something bland in four seconds. On LinkedIn, where vanilla is practically the brand, it’s hard to tell where the humans stop and the machines begin.
Read more at Wired.
Developers Beware: FTC Pursues AI Vendors’ False Advertising and BS Validation
The Federal Trade Commission (FTC) is on a mission to shut down the AI hype machine with its aptly named Operation AI Comply—a takedown of companies using artificial intelligence to deceive and defraud. Leading the pack of (alleged) offenders is DoNotPay, which sold itself as the “world’s first robot lawyer.” Spoiler alert: it wasn’t. Promising everything short of arguing before the Supreme Court, it failed to deliver anything resembling actual legal expertise. Then there’s Ascend Ecom, which swindled consumers out of millions by touting AI-powered tools that would allegedly rake in “passive income” through online storefronts. Spoiler again: the only thing passive about it was consumers watching their money disappear.
Not to be outdone, Ecommerce Empire Builders peddled dreams of “AI-powered empires” that turned out to be as real as a unicorn on Wall Street. Meanwhile, Rytr made it easy to flood the internet with fake consumer reviews, because who needs honesty when AI can churn out lies for you? And let’s not forget FBA Machine, which got so creative with its scam that it rebranded itself to dodge lawsuits while continuing to sell pipe dreams of AI-driven e-commerce riches.
But the pièce de résistance? Evolv Technologies, which claimed its AI security scanners could replace old-school metal detectors with cutting-edge tech. Turns out, these scanners were great at finding harmless laptops but not so great at detecting weapons—like the seven-inch knife that slipped through and was later used in a school stabbing.
The FTC isn’t buying the “but it’s AI!” excuse. From court orders to settlements, they’re making it clear that snake oil—whether wrapped in buzzwords or code—won’t fly. If these companies thought they could exploit AI hysteria for a quick buck, the FTC just handed them (and us) a reality check, with interest.
Sure, there isn’t any TA tech on this initial offender list. However, job board and recruitment tech vendors selling slick AI solutions without transparency and clear validation of their methodology should take a beat and review their sales techniques and their solutions’ impact before the government comes a-knockin’.
[Want to get Job Board Doctor posts via email? Subscribe here.]