Generative AI tools like ChatGPT are transforming the way we work, from how candidates write CVs to how employers evaluate applications. But alongside the opportunities, they also raise important questions: How do we ensure fairness? What does authenticity look like in an AI-powered world? And how do we keep inclusion at the heart of recruitment?
Here’s a breakdown of some key themes and takeaways from our recent Randstad Inclusion Lab on inclusive hiring, and what it means for both employers and candidates.
moving beyond the CV: the rise of skills-based hiring.
More organisations are moving away from traditional, qualification-based hiring and shifting towards a skills-first approach. Instead of focusing solely on education or job titles, employers are prioritising what candidates can actually do.
This shift not only widens the talent pool, it also helps break down barriers that have historically excluded underrepresented groups. It’s an opportunity to rethink what “qualified” truly means, and create more equitable pathways into meaningful work.
AI in job applications: challenge or opportunity?
As AI tools become more widespread, candidates are naturally using them for help with applications and interview preparation. However, employers are noticing a concerning trend: homogeneous, AI-generated responses that lack authenticity and individuality.
So, what's the solution?
Rather than banning AI altogether, the conversation is shifting toward AI literacy: encouraging candidates to use tools like ChatGPT to enhance their voice, not replace it. It also pushes employers to be more intentional with the questions they ask, ensuring they elicit thoughtful, personalised responses that cannot easily be replicated by AI.
educating jobseekers on responsible AI use.
AI can be a valuable tool for jobseekers, especially those navigating the job market for the first time. But knowing how and when to use it is key. To use AI effectively and responsibly, candidates should:
- Craft authentic responses: Avoid relying solely on AI-generated answers. Always review and personalise AI-generated content to reflect your own unique voice and experiences.
- Showcase your skills: Remember that employers value authenticity and are looking for genuine insights into your skills and personality, not just perfectly written responses.
Employers can support responsible AI use by clearly communicating their expectations about AI usage in the application process.
fact-checking AI: managing accuracy and avoiding “hallucinations”.
One of the most significant risks of using AI tools is misinformation. Large Language Models (LLMs) sometimes produce inaccurate or even entirely fictional content, a behaviour known as "hallucinations". In other words, they sometimes make things up. So how do you trust what they generate?
Some practical tips:
- Always ask AI to cite its sources when generating information
- Cross-check the information with reputable references
- Use prompts that encourage honest responses (e.g. “If you don’t know the answer, say so.”)
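The tips above can be rolled into a single reusable prompt template. Here is a minimal sketch in Python; the helper name and wording are illustrative only, not taken from any particular AI tool or API:

```python
# Illustrative sketch: wrap any question in instructions that apply the
# fact-checking tips above (ask for sources, invite an honest "I don't know").
# The function name and phrasing are our own, for demonstration purposes.

def build_factcheck_prompt(question: str) -> str:
    """Return the question plus instructions encouraging honest, sourced answers."""
    return (
        f"{question}\n\n"
        "Please cite your sources for any factual claims. "
        "If you don't know the answer, say so rather than guessing."
    )

# Example: paste the resulting prompt into whichever AI tool you use.
print(build_factcheck_prompt("What skills do employers value most in graduates?"))
```

Whatever the tool replies, the second tip still applies: cross-check any claims or statistics it produces against reputable references before using them.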
Remember, AI is a powerful tool that can enhance your work, but it still requires human oversight and critical evaluation.
a shared responsibility.
As AI rapidly reshapes how we work and hire, the focus must shift to how we use it: thoughtfully, ethically, and together.
- For employers: It’s a chance to modernise hiring practices and embed inclusive, human-centred policies.
- For candidates: It’s about learning to use AI ethically and effectively, emphasising value and authenticity over shortcuts.
- For everyone: It’s a reminder that technology is most powerful when it supports genuine human connection, not when it replaces it.