An executive in my professional network recently updated their LinkedIn profile with this message:
“Friends, if you received an email from me about an opportunity with my organization, you didn’t. My corporate email address has been deepfaked.”
Deepfaking involves using AI to generate convincing false communications that mimic a real individual—emails (as in the case of my LinkedIn connection), voice recordings, even videos. These communications appear so authentic that they can bypass human skepticism, fooling even the tech-savviest among us.
You may have already encountered this in your organization, as it’s used in increasingly sophisticated phishing attacks to deceive employees into taking actions such as opening documents or sharing sensitive information.
Deepfaking and identity fraud are chief among the concerns of organizations when it comes to the use of AI in the hiring process, according to a recent survey by the Institute for Corporate Productivity (i4cp).
The survey of talent acquisition executives found that many organizations are grappling with the implications and risks of the use of AI in hiring by both their own recruiters and candidates.

How AI is keeping talent acquisition executives up at night
Over half (54%) of those surveyed reported that they have encountered candidates in video interviews who they suspected were using AI tools to assist with answering questions or completing technical challenges; another 24% said this happens, but only rarely.
But few organizations are adjusting their policies and practices in response—only 17% report that they have increased the use of in-person interviews to address concerns about AI-related fraud.

The need for continuous management of AI—to include auditing for bias, ensuring compliance with evolving global regulations and staying on top of new advancements—can be overwhelming, particularly for organizations that may have rushed into AI adoption without first laying a firm foundation for governance.
“AI proliferation without real governance is creating diminishing returns—too much simply optimizing specific workflows, but not optimizing HR overall function,” one survey respondent observed.
Another concern is the risk of the human being lost in the hiring process. The question of not only what AI can do in talent acquisition, but what it should do, is a critical one to answer. The intersection of intelligence—human and machine—and how organizations can adopt AI responsibly, strategically and with measurable impact is an important conversation for leaders to have.
Despite the hype, AI adoption is still quite tactical
Most (61%) talent acquisition executives reported that they currently leverage AI in very tactical, assistive ways. The most common current usage of AI by far is the creation of job descriptions.

And despite the daily deluge of articles, social media posts and vendor advertorials about the accelerating adoption of AI in hiring, some organizations are hanging back from going beyond the tactical at the moment.
Talent acquisition leaders in industries such as financial services, defense and aerospace, energy and infrastructure, and healthcare are less likely to report that their organizations are widely adopting AI in hiring, specifically because of security and privacy concerns related to the sensitivity of the data they handle, regulatory restrictions or exposure to national security threats.
The need for clarity on AI in hiring
A plurality (41%) of those we surveyed said that their organizations currently do not have an official stance on candidates utilizing AI tools in the recruiting process (e.g., resume optimization). Meanwhile, 29% reported that they encourage ethical use of AI tools by candidates, but have concerns about potential misuse. Twenty-six percent described their organization’s stance as positive; they welcome applicants to use these AI tools and provide guidelines for their use on their websites. Anthropic is one example of this: The organization has very clear messaging on its career site about how and when candidates should use AI. The messaging on rules of engagement for the integration of AI into the application process generally follows these parameters:
- Do the work yourself; use AI to polish what you created on your own. This includes the application, cover letter and resume. AI should be a final review step, not the originator of what is submitted.
- It’s fine to use AI for research, preparation and practice before interviews. This includes using AI-powered platforms such as InterviewPal, Interviewing.io and Google’s Interview Warmup, etc., which provide video, voice or text-based practice interview sessions and real-time feedback for improvement.
- While AI-assisted preparation is OK, candidates must show up for live interviews unaided.
Anthropic’s move is an acknowledgement of the inevitable: Some candidates will use AI no matter what. Now may be the time for employers to move from trying to ban the use of AI in any form (including asking applicants to certify that they did not use AI assistance in their job application) to defining where and how its use makes sense for your organization.
This starts with clarity on risk, what is and isn’t acceptable, and identifying where it makes sense to loosen up.
Strategic AI considerations for talent acquisition
- Develop an official viewpoint and policy on the use of AI by both talent acquisition and applicants.
- Post AI-use policies and guidelines for job candidates on the company’s career portal. While this practice is relatively uncommon at the moment, expect more organizations to post clear guidelines for candidates that spell out what can be used and how.
- Update candidate disclaimers and consent forms to prohibit the use of synthetic identities or AI-altered video if applicable to your policy.
- Cross-validate credentials. Confirm work history through direct employer contact, LinkedIn consistency checks and verifiable references.
- Consider requiring live video interviews with real-time interaction.
- Ask unexpected or personalized questions during interviews (e.g., “Can you tell me about something you read in the last 24 hours that grabbed your attention?”) to test real-time comprehension and spontaneity.
- Explore platforms that verify user presence and do live identification checks.
- Train recruiters to recognize AI-generated anomalies.
- Create an internal deepfake playbook with examples and response steps.
About the survey respondents:
- 79% were mid-level to senior executives.
- 82% represented larger organizations (those employing >1,000 people).
- Over half (52%) represented public companies; 37% represented private companies; 11% were from nonprofit or government organizations.
- A combined 70% of those organizations are global (high level of global integration) or multinational (national/regional operations act independently).
The post Deepfakes in the talent pool: How AI is reshaping hiring appeared first on HR Executive.