About Us
Criteria’s science-based assessments, interviews, and employee development resources empower companies of all sizes to hire and develop top talent. Uncover valuable insights throughout the employee journey with a user-friendly, comprehensive suite of tools, thoughtfully designed to help companies make better talent decisions, faster.
© 2025 Criteria Corp. All rights reserved
Generative AI is transforming the hiring landscape, reshaping how candidates prepare, apply, and present themselves throughout the process. From resume writing to assessments and interviews, AI tools are giving candidates new ways to showcase their abilities, but they’re also introducing new challenges for hiring teams who want to ensure authenticity and fairness.
In this webinar, Criteria’s Chief Technology Officer, Chris Daden, and Chief People Officer, Jillian Phelan, explore how AI is reshaping candidate behavior and what organizations can do to maintain fairness and authenticity in their hiring process. Together, they unpack how AI is being used by job seekers today, what counts as “cheating” (and why that definition can vary), and how hiring teams can communicate clear expectations while maintaining a great candidate experience.
They also share how Criteria’s Proctoring solution helps ensure that every assessment result truly reflects the candidate’s own ability, giving you the confidence that the person you hire is the same one who took the test or interview.

ON-DEMAND VIDEO

Chris Daden
Criteria Chief Technology Officer
Chris Daden is Criteria's CTO, a member of the Forbes Technology Council, and a founder several times over.

Jillian Phelan
Criteria Chief People Officer
Jillian is Criteria's CPO and a global HR leader with M&A and VC exit experience.


Alright. We're gonna go ahead and get started. So, hello, everyone. Welcome to today's webinar, which is called "How to Stop AI Cheating Before It Haunts Your Hiring Process." My name is Michelle. I'm your host from Criteria, and we're really looking forward to diving into an extremely timely and vital conversation that I know a lot of us are having in the age of AI. And now for just a quick introduction, we are Criteria. As many of you know, we are a leader in the talent assessment, interviewing, and development space, with science backed tools that help organizations make better talent decisions. This webinar will be about an hour total, and we'll save fifteen minutes of Q&A for the end, so feel free to add your questions to the Q&A box at any time. We'll also be running a few polls to get your perspectives on a couple of key topics, and everyone will receive a link to the recording and the slide deck after the webinar as well.

But without further ado, I'd love to introduce our two incredible speakers. First up is Jillian Phelan, chief people officer at Criteria and a seasoned global HR leader with a track record of guiding companies through rapid growth and transformational change. She's passionate about building inclusive, high performing teams that scale sustainably even in the most complex environments. Welcome, Jillian. Thank you. We're also thrilled to be joined by Chris Daden, the chief technology officer at Criteria, where he leads the development of cutting edge, AI powered hiring solutions. He's also a respected voice in the AI and tech community, having spoken on panels at SHRM and leading tech conferences. Welcome, Chris. Thanks for having me.

And a little bit about our agenda today. First, we'll talk about what's going on in hiring today. In particular, we're gonna talk about AI and how candidates are increasingly using it in the hiring process in very unique and, frankly, innovative ways, and how that can present some hiring challenges. Then we'll talk about cheating: what are some examples, and how do we define it collectively? Next, we'll go over the first line of defense, which is basic expectation setting through communication. Then we'll get into the meat of it, which is some of the real ways our tech is countering AI cheating, as well as some other tech and tools you can use to stay ahead. And we'll close out with some Q&A from all of you. So with that, let me pass it over to Jillian to get right into the state of hiring in the age of AI.

Hello. The information we're gonna share in the next few slides is from our Criteria Corp hiring benchmark report, our customers, and industry experts and leaders. Let's jump into the state of hiring and talk about what talent professionals are experiencing. The overall sentiment we're hearing is that it's harder than ever to find talent in twenty twenty five. Candidates are leveraging AI for resume optimization, and their ability to morph a resume to fit a job description is translating into more candidates per role, and even candidates who may not have applied previously. So we're seeing an influx in that volume. With that, recruiters are struggling to find top talent within that influx of applicants. Many companies also say it's extremely competitive, with many organizations potentially going after the same pool of high quality talent. All of this comes on top of a little bit of lingering uncertainty about the economy.
So we're seeing roles be placed on hold given a little bit of that uncertainty. AI, ultimately, and we'll jump to the next slide, is really rewriting the rules. AI has been embedded in applicant tracking systems for quite some time, but the most recent shift is that job seekers are now leveraging it too, with the tools at their fingertips. AI isn't going anywhere, and candidly, it's only gonna become more prevalent for candidate enablement within the hiring process. So staying ahead of this AI curve is essential. In fact, AI is changing the rules with resumes. As mentioned on this slide, recruiters are seeing more AI generated resumes and are flagging them for a lack of authenticity. I think all of us who've been in hiring processes can agree that gauging genuine candidate experience is becoming significantly harder. AI, when it's leveraged to write resumes against job descriptions, is creating a candidate pool that might look amazing for our positions, only for us to find out that perhaps the candidates don't always have the experience that's articulated on the resume.

AI is also rewriting the rules with interviews. We have applicants who are more prepared than ever. They're leveraging AI to understand a job description, develop interview prep questions, and even develop cohesive responses to interviews. So it's really shifted that dynamic too. We've also seen an influx of fake profiles and AI generated headshots. There are even AI bots that are enabling candidates to apply for roles literally while they sleep. These bots have candidate information and past experience. They will go out and crawl to find job descriptions and job postings, create a brand new resume that matches candidate experience to that job description, and apply on the candidate's behalf.

So we'll jump ahead to what's happening and why it's so hard to discover. You'll see that seventy four percent, nearly three quarters, of our survey respondents agree that it's so hard to find candidates right now, and that cutting through that volume, or that noise, is the most pressing issue. We even have teams that are feeling understaffed and struggling, and some of the short term solutions are time consuming. We have a handful of customers who are telling us that, in order to keep their candidates from reading off AI scripts during interviews, they're actually pivoting back to in person interviews so they can gauge authenticity, fit, and actual ability in real time. With all of this going on, we believe that maybe it's time we need new talent signals, in addition to, or maybe in lieu of, resumes. But we'd love to hear from you.

Okay. We also have found that talent isn't really missing. It's hiding, actually. We have about forty percent of candidates with tailored resumes, and we specifically at Criteria, along with what we're reading in industry sources, are hearkening back to looking for nontraditional candidates: maybe candidates with career gaps, those with no degrees, career changers. We're also looking at underrepresented demographics like neurodivergent candidates, veterans, older adults, and, of course, passive candidates. But talent can also be found in individuals who have transferable skills, especially now that true and genuine capability is maybe going to be a better talent signal in this quickly changing world. The point is that we need to find talent perhaps with broader skills rather than just historical experience.
And it could mean that we start searching for candidates with analytical thinking abilities, flexibility, agility, and emotional intelligence. Frankly, it's overwhelming. And, Jillian, I think I was gonna launch a poll, and then it didn't launch. Oh, okay. Good. Let's go back to it. Yeah. There was a little glitch there. Thank you. We would love to hear from you all, so I'm gonna launch a poll really quick. We wanna know, basically, have you encountered candidates using AI to cheat the hiring process? We just wanna hear from you and get your perspective on that. Sorry the poll didn't launch when it was supposed to, but I'll give you all a few more moments to fill that out, and then, Jillian, we can discuss it. We'll share that result. So I will close that poll in three, two, one. End poll. Let's share those results.

Yeah. I don't think we're surprised. We're seeing it as well. We had quite a real time experience within our own organization of launching a handful of roles, over five at one time. And the traditional method of leveraging our platform, with that volume of candidates, very quickly made it pretty obvious that we have more candidates, and more candidates that look phenomenal. Yeah. And this has changed compared to years past. Yeah. So it looks like about two thirds of those of us listening in today say yes, they've experienced that cheating. Twenty four percent, or a quarter of you, aren't sure, but maybe you're suspecting something, and just ten percent of you are saying no. So I'm gonna stop sharing, and I'm gonna jump ahead for you to where we left off.

Thank you. Yeah. We've mentioned this before. In our Criteria benchmark report, as well as in what we experience internally, we have found it is frankly overwhelming right now, with about a forty five percent increase in applications. In fact, right now we will post positions and, within a four day period of time, have between one thousand and fifteen hundred applicants. This is up from about three hundred applicants per position a year ago. So it's been exponential growth in pipeline. And with all of this, hiring practices might still be a little bit broken. Again, this is from our benchmark report: nearly half of new hires fail within eighteen months. Even internally, we ask whether it might be the reliance on the resume in certain instances, and not on the transferable skills that are increasingly becoming important within our organization as well. And this is such a fun supposition: we are wondering if the resume is actually dead. I know it's earth shattering, but it certainly feels like it from a hiring manager perspective. In fact, LinkedIn came out with a statistic earlier this year saying they've got about eleven thousand job applications per minute, unprecedented volume that they said almost turned on overnight. And Gartner surmises that by twenty twenty eight, a quarter of job applicants will be fake, so maybe bots, or maybe fake in terms of experience. But I'd love to go back and read some of the responses that came in in the chat about what experiences you might be having. Heather Ortega, I love yours. It was right at the beginning. You've reviewed nearly four thousand candidate applications and seen an increase in AI generated resumes. We are experiencing the same thing.
And I appreciate that your trigger is the abuse of emojis, perhaps ripped straight from an AI generated resume. So thank you for that. Yeah. You get up to five thousand, Martha, for each posting. My goodness. I thought ours was high. And I was wondering if it's because we are a fully remote organization, and thought, oh, maybe that's why we have an advantage there. But your five thousand makes mine look easy. Thank you. And sorry for that influx. We can move on if we'd like.

Yeah. Great. Thank you, everyone. So we're going to talk about cheating. We know candidates are using AI in various ways to get ahead when applying for jobs. And, honestly, in this difficult job market, you know, more power to them. But when we talk about cheating, it's important to understand that it's not really as black and white as we might assume. So, Jillian, I'm gonna hand it back to you to walk us through some of the nuances of candidates cheating with AI.

Thank you. And it's such a good question, because cheating might really oversimplify what is truly going on. So when we look at the left, these examples are a little more tame, and then we'll progress over to the right side of the slide. Using AI to polish your resume: even think back to Grammarly. That was, and is, an AI powered tool. Was that cheating? I know I've been using it for a few years. Tailoring resumes specifically to the role, candidates preparing themselves for interviews and maybe coming in with more canned or polished answers. Is that cheating? Generating responses to those questions in advance, and then even answering questions during an assessment by leveraging AI to do so. As we move over to the right, these become a little more nuanced, or even harder. Is it cheating for a candidate to read off a script and predefined responses during an interview? How about a deepfake video? We have had experiences internally, not with a deepfake video as far as we could tell, but, going back to the item before that, with a candidate reading off a script, and the answers were superb, by the way. How about those AI bots that are mass applying to jobs? We unequivocally have them. We see them. They even come in through a different routing mechanism within our applicant tracking system, so we can identify them. But is it also cheating to plagiarize work and proprietary content, maybe using code from somewhere else? And then, lastly, even misrepresenting or sharing untrue information. I think with all of this, we as hiring professionals may need to adapt how we are identifying talent, first, but also how we are designing our hiring processes. And maybe, instead of focusing on catching cheating, we might design hiring processes that get us beyond the polished language and get us to things like problem solving, job relevant simulations, and even work samples that can help us uncover authenticity. I mean, the goal ultimately is to understand the person behind the prompt and get beyond just the AI polish. But, again, let's hear from you.

Yep. So we've got another poll. Let's launch this one. We wanna know: where do you draw the line? Which of these do you consider cheating? You can check all that apply. We'll give you all a moment to respond to that, but I think this is a really interesting concept, because this is a webinar about AI cheating, and yet cheating is actually more complicated than we think.
So we'll just give you another few moments to respond to that, and then we will discuss those results together. Great. I am gonna close the poll in three, two, one. And here are the results. Jillian, do you wanna walk us through those?

Yeah. These are fascinating. Thank you. I also appreciate, in the comments, Jenica articulating that it is an evolving definition. Here, using AI to answer questions during interviews is at eighty nine percent, with getting live answers during tests being the highest, the most agreement or alignment on what cheating might be. And what's fascinating is that there's larger agreement on some of the more definitive or flagrant uses of AI, but it's also, perhaps, an evolving definition that I think all of us are having to face currently.

Great. Like I just said, it's really a definition that doesn't mean the same thing to everyone, and it's going to be up to all of us as hiring leaders to decide and then share the expectations for candidate behavior. This becomes imperative within our hiring processes. Awesome. So, for our next topic: obviously, we've covered what's going on in the state of hiring, and we've covered what cheating is. Now we wanna talk about mitigation and deterrence, and what we wanna talk about first is the first line of defense, which is basic communications and expectation setting. We at Criteria have a structured way of looking at different methods for ensuring candidate integrity, so I wanted to pass it over to Chris to discuss our overarching approach to deterring what we think of as cheating. So, Chris, over to you.

Thank you very much. Great to see you all. Let me give you a quick overview of our approach to candidate integrity at Criteria. We think about candidate integrity as something you build in layers. There's really no single silver bullet that will prevent AI cheating; I think many of you have figured that out by now. It's really about overlapping protections that work together. And I want you to think of it as a funnel, or an inverted triangle like you see here. The goal is that each layer gets progressively more technical and more protective as you move down. At the very top, we start with communication and deterrence, and this is where we set clear expectations with candidates. We're gonna go through each of these sections a bit today. Then we add built in assessment protection; there are some technical controls that I'll describe today, like session monitoring, how we use adaptive experiences, or question exposure limits. As you continue to move further down, those layers extend into structured interviewing, where our video and live tools can be customized to elicit authentic responses, and that really helps limit opportunities for manipulation. And finally, the most technical layer, which we're gonna talk about in detail today, is how we leverage proctoring and AI analysis to do things like image and audio sampling and behavior monitoring, and how the reporting can give you insights as a hiring manager. Together, these layers give us confident, high integrity signals. And for all of you as hiring leaders, the goal is to give you the assurance that what you're seeing reflects a real candidate's ability and not just a synthetic one.
So with that in mind, I'm gonna hand it over to Jillian, who's gonna take us into that first layer, and we're gonna talk a bit about best practices when setting expectations and the deterrence you get from that.

So candidate communication is vital for setting the expectation, even articulating, like, please don't use AI, and retesting individuals, whatever might be necessary within your hiring process. We have some examples of how we have articulated this to our candidates in our process. By the way, start with defining cheating within your organization and what it means to your hiring process. Then, as you invite candidates to complete various stages of the hiring process, set expectations by telling them what is expected at each phase. Next are some examples of how to message candidates: what they can do, what they can't do, and what is expected throughout the process. Maybe taking sample questions is great, but please don't use your phone or take snapshots of your assessment, and on and on. In fact, on the next slide, our landing page at Criteria has both a message for our candidates and a video articulating where we stand on the use of AI in the applicant and hiring experience. We've been startled, and I would love to hear from you too in the chat, by how effective it was once we started to put our expectations out and communicate them to candidates. It set the expectation and started to pivot what we were seeing from our applicant pool. But I'll hand it back to you, Chris, for a little more on tools and tech.

Thank you, Jillian. It's amazing how something as simple as communicating expectations has been effective for our customers, so that's fantastic. Alright. So now we've talked about expectation setting, kind of the top of that inverted triangle. We're gonna shift gears into the technology itself. This next section is really about what happens under the hood. I'll do my best not to get too technical, but give you at least some insight into how it works. The reality is that while communication is where deterrence begins, technology is really what sustains it throughout the candidate experience. So at Criteria, we have this multilayered defense system, and the goal of that is to protect integrity at every stage of the hiring journey, all the way from our assessments to interviews, and to give you things like proctoring and analysis. The real challenge here, though, isn't just detecting bad behavior. The challenge is doing it consistently, accurately, and, for many of you, at scale. Most organizations simply can't rely on manual oversight anymore, and we understand that you need automation to uphold the integrity of those talent signals across thousands of candidates, and you wanna do it in an automatic and secure way. That's exactly what our approach enables. The ability to deliver trusted, high confidence results at scale without slowing down your hiring process is what we think about every day. So let's dive into the first section here, about built in assessment protection. Let's start with the foundation. We've built this into our assessments product and into our interviewing product, and the goal here is to protect integrity automatically. Again, we're gonna use overlapping controls to ensure the integrity of the signal. There isn't one silver bullet, so all of these are intentionally designed to work together.
So these controls in the built in assessment protections operate behind the scenes in every single candidate session. They're designed to maintain the security, fairness, and reliability of results without adding friction to the candidate experience. Here are a few examples. We have things like session controls and monitoring, and the goal of that is to track the session from start to finish, which gives us opportunities to identify unusual activity patterns, repeated attempts, or maybe behavior that doesn't match what we would expect from a human candidate in a test flow. There's also our dynamic question bank and exposure monitoring, where we manage essentially massive pools of test content and constantly track which questions have been seen. If an item crosses an exposure threshold, which essentially means it's been overshared or compromised, it's automatically flagged and removed from circulation in our assessment. That's how we keep the content fresh and unpredictable at scale. We have techniques like our adaptive testing framework, which personalizes difficulty and question order based on real time performance, and that means that even two candidates taking the same test won't have the same experience. That's an inherent deterrent to answer sharing or AI generated responses to assessments. We also have time based controls. Especially in our cognitive assessments, where pacing is part of the measurement itself, these time limits are tuned to protect the validity of the score, and that makes it nearly impossible to script or leverage automated tools to beat some of those time based controls. We've also got game based assessments, which, as you can imagine, have protection built into the design. The goal there is that the tasks themselves are highly interactive and dynamic, with a lot of behavioral input rather than just a multiple choice selection, and that makes them incredibly resistant to generative AI or automation based manipulation. And finally, we use things like digital fingerprinting, which is subtle but really powerful. It allows us to detect repeat test takers or device level patterns across sessions, where AI tools leave a lot of indicators and fingerprints. So, again, at a technical level, and there are more controls that we're not disclosing here today, all of these work together as part of our assessment level protections.
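The exposure monitoring idea described above can be pictured with a small sketch. The Python below is a hypothetical illustration, not Criteria's actual implementation: the ItemBank class, the EXPOSURE_LIMIT value, and the draw method are assumptions made purely to show how an exposure threshold can retire overshared items from circulation.

```python
# Hypothetical sketch of exposure-based item rotation; not Criteria's actual code.
import random
from collections import defaultdict

EXPOSURE_LIMIT = 5000  # assumed threshold; real limits would be tuned per item pool


class ItemBank:
    """Tracks how often each question has been served and retires overexposed items."""

    def __init__(self, item_ids):
        self.active = set(item_ids)
        self.retired = set()
        self.exposure = defaultdict(int)

    def draw(self, n):
        """Draw n distinct items from the active pool and record their exposure."""
        chosen = random.sample(sorted(self.active), k=min(n, len(self.active)))
        for item_id in chosen:
            self.exposure[item_id] += 1
            if self.exposure[item_id] >= EXPOSURE_LIMIT:
                # Item has been seen too many times: flag it and pull it from circulation.
                self.active.discard(item_id)
                self.retired.add(item_id)
        return chosen


bank = ItemBank([f"q{i}" for i in range(1, 501)])
session_items = bank.draw(20)  # each candidate session gets a different sample
```

The point of the sketch is simply that once an item's exposure count crosses the limit, it no longer appears in any future session, which keeps the content pool unpredictable.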
The next area I wanna talk about is specific to this generative AI phenomenon. A lot of those test controls are critical and needed in today's environment, but there are special innovations and controls that we at Criteria feel we are really, really ahead of the curve on. Part of that is the three dimensions of our generative AI strategy, and we know that this is the frontier of integrity protection in the age of generative AI. The reality is that these AI tools evolve fast, and they're capable of mimicking human behavior. They can access web based assessments. They can even generate dynamic responses, like we heard in the chat. So the real question is, how do we stay ahead? And this three dimensional detection strategy for generative AI is quite important to that.

As we break that down, we think of the first dimension as a firewall layer. We look at traffic sources, at referrers, where the traffic came from, and at what's called user agent data, to be able to identify when someone or something is trying to access a test through, maybe, AI driven browsers or automation tools. It's kinda like checking the caller ID before we let anybody in. This first dimension is really critical to our generative AI strategy. Next, we have embedded detection, and this layer lives inside of the application itself. These are techniques that we build right into the candidate experience so we can detect changes that certain AI tools make to the screen, or adjustments to the browser behavior. For example, if an AI tool is adding invisible overlays to the browser itself, injecting buttons, or scraping our web page, we can see that behavior, and we are building subtle behavioral checks into our interface that make it harder for AI to respond in a natural, human way. And the last dimension, which I think is really critical, is system level detection. Frankly, this is where it gets really interesting. Traditionally, to detect system level behavior, and we see this in the market today, you had to force candidates to download really heavy lockdown browsers. The reality is that hurts accessibility and harms the overall experience. So at Criteria, we have a really different approach. We're testing new methods that can detect system level signals like background processes, browser plug ins, and unauthorized helpers, but we do it in a seamless way, always with the candidate's permission and high transparency, and we don't degrade performance or privacy in the process. As you can see here, the result is a flexible, multilayered architecture that evolves alongside a threat landscape that we see changing so rapidly. So every time new AI tools emerge, you're gonna see us make some type of reaction, and our detection network learns and adapts to those risk vectors.
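To make the "caller ID" analogy a bit more concrete, here is a minimal, hypothetical sketch of the kind of request level heuristic a firewall layer might apply, checking user agent and referrer headers for automation signatures. The header names are standard HTTP; the signature list, the allowed referrer host, and the screen_request helper are illustrative assumptions, not Criteria's actual detection rules, which combine many more signals.

```python
# Illustrative request-screening heuristic; real detection combines many more signals.

# Assumed signatures of automation or AI-driven browsers; a production list would be
# maintained and updated continuously as new tools appear.
SUSPICIOUS_AGENT_FRAGMENTS = ("headlesschrome", "phantomjs", "selenium", "puppeteer")
EXPECTED_REFERRER_HOSTS = ("assessments.example.com",)  # hypothetical allowed entry point


def screen_request(headers: dict) -> list[str]:
    """Return a list of reasons this session should be flagged for review."""
    reasons = []
    user_agent = headers.get("User-Agent", "").lower()
    referrer = headers.get("Referer", "")

    if not user_agent:
        reasons.append("missing user agent")
    if any(fragment in user_agent for fragment in SUSPICIOUS_AGENT_FRAGMENTS):
        reasons.append("automation tool signature in user agent")
    if referrer and not any(host in referrer for host in EXPECTED_REFERRER_HOSTS):
        reasons.append("unexpected referrer")

    return reasons


flags = screen_request({"User-Agent": "Mozilla/5.0 (HeadlessChrome)", "Referer": ""})
# flags -> ["automation tool signature in user agent"]
```

In practice such checks would feed into a broader risk score rather than blocking a session outright, which keeps false positives from hurting legitimate candidates.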
The next grouping of our strategy is structured interviewing customization, and I saw some really great comments in the chat about how this is being applied in processes today. We've talked about assessments and the technology that protects integrity, but what about interviews? With structured interviewing, this is an opportunity where authenticity really shines through; it's your chance to hear and see how someone thinks, communicates, and problem solves. And as generative AI becomes more accessible, as you've said in the chat, even this part of the process must be adapted, because people are reading scripted, excellent answers, and it's really hard to get a high integrity signal from that. So if you're using Criteria structured interviewing, whether that's one way video or live, the key is to design questions, and we can, of course, help you with that, that invite depth and nuance, questions that make it hard to fake thoughtfulness. That's, I think, one of our key recommendations here. And let's be honest: most of us have already felt when something's AI generated. We talked about it in the chat. You read a paragraph, and you instantly think, yep, that's AI. It's that overly agreeable tone you might feel. Maybe it's the perfect punctuation. We mentioned emojis in the chat. Or, my personal favorite, the overuse of the em dash. And I owe an apology to all the English majors out there who have used the em dash correctly for decades but are now being unfairly suspected of using ChatGPT. That's always on my mind. But those little cues are exactly the point. We're learning to recognize authentic human texture. So when you build a structured interview question, aim for prompts that make people share real experiences, invoke emotion, and give you insight into the human in that interview. That's the kind of content that AI still struggles to mimic convincingly. I see someone confirm in the chat that they used the em dash before it was an AI thing, so I feel you there.

Yeah. And, Chris, I think a really unique part of structured interviewing, too, is that when you're asking someone a question, you really want them to give a very specific response that describes an experience they had at a previous company. And that's definitely something AI literally can't come up with, because it isn't inside your brain and doesn't know all of your specific, unique experiences.

That's right. Totally agree. And to wrap the video interviewing layer up, I also wanna note that if you're in a high volume hiring environment and you really need a way to scale this video interviewing authenticity check automatically, take a look at customizing a behaviorally anchored rating scale, a BARS guide, loaded into our interview intelligence model, where we can use predictive AI scoring to help you automate the evaluation of those nuanced indicators of genuine thought. That gives you a high scale safeguard that still values the human element. So that's definitely an opportunity. Then, of course, from a basic configuration standpoint, separate from interview intelligence, you can also limit retakes and adjust the amount of prep time for those one way video interviews. Again, the goal is not to add friction but to help capture a natural, unfiltered response. So when you combine those video based insights with what you're already getting from assessments, you create a real compounding integrity signal that blends validated data with real human expression, and that's where a picture of a candidate truly comes to life.

So next, let's move to the very bottom of this funnel and talk a bit about proctoring. At the base of the triangle, the final layer, we come to our proctoring and AI analysis. This is where all the pieces come together. You've got things like image capture, image analysis, screen and audio sampling, and comprehensive reporting, and all these tools combined give hiring teams the confidence that assessments were completed honestly, without introducing unnecessary friction for the candidates in your pipeline. I know the word proctoring can sometimes make people a little uneasy, both candidates and hiring teams. There's a perception that it might add friction or cause drop off, but what's really interesting is that our own data shows we're not seeing a meaningful drop off in candidates' completion rate as a result of the proctoring component. I think the fact is that, especially in this unique hiring environment we've all described, candidates are becoming increasingly comfortable with higher integrity processes. They understand that it's not always about policing; it's also about fairness. And I think that's a really important perspective to keep in mind, one that the candidates themselves may be highlighting.
So as we move into the details of our proctoring tech, just keep in mind that it's not only about surveillance. It's about trust, and it's about protecting both your organization and your candidates from the uncertainty that comes in this world of generative AI. Proctoring by Criteria adds an extra layer of protection to your assessments. The goal here is to make sure genuine candidates are completing them, and to deter attempts to use AI or outside help in an unethical way. The technology is designed to be robust and respectful. It captures the data we need to verify authenticity, and it does it in an intelligent way, in a way that minimizes friction for the candidate.

So let's move on to talk about the signals we collect in Proctoring and what you see in our client facing score report. This is what it looks like in practice. What you're seeing here is a client facing report, the output of our AI proctoring system for a test event. It's designed to make the integrity insights as clear as possible and easy to interpret. At the very top, you'll see a summary of the proctoring results. In this case, the candidate didn't pass the proctoring criteria, but what's most important is that it's not just a binary pass or fail. Every finding is backed by specific and explainable evidence. The report gives you an idea of exactly what was monitored. In this case, we have face detection matching against a reference image, presence of multiple people, use of mobile phones, and other behaviors that obviously can impact assessment validity. Each signal you see here, like face not detected or multiple faces detected, is the result of automated image and behavioral analysis that runs quietly during the test session. And every flag is visualized with supporting images, so you can see exactly what the system saw. I think that's what makes our product really unique. The transparency is key here, and it allows hiring teams to review results quickly and confidently without having to interpret complex data or make judgment calls on really limited information. And, importantly, these reports are not meant to punish; they're meant to inform. A single missed frame or a brief camera interruption, for example, doesn't automatically mean that a candidate will fail. Our models are designed to weigh overall session behavior to distinguish between what might be a technical glitch and a genuine integrity issue. So, at the end of the day, what you get is a proctoring experience that's smart, explainable, and fair, and it gives you and your teams clarity without the guesswork.

And I'll wrap up here by saying that one of our big beliefs when we build products at Criteria is our belief in flexibility. Everything we've just talked about, from image capture to mobile phone detection to screen and audio sampling, is built to be highly configurable. You decide which layers to turn on and when you want to turn them on. Maybe you only wanna do it for specific job roles. That's fine. The control panel here shows exactly how our customers manage their proctoring settings. You can enable essential monitoring, like face detection and reference matching, or you can go further with more advanced options like multiple person detection or mobile phone detection. The same applies to things like screen or audio capture. It's entirely up to you.
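As a purely hypothetical illustration of that kind of layered configurability, here is a sketch of a settings object in which each protection is an independent toggle. The field names are assumptions for illustration only and do not reflect the actual Criteria control panel schema.

```python
# Hypothetical proctoring configuration; field names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class ProctoringConfig:
    # Essential monitoring
    face_detection: bool = True
    reference_image_matching: bool = True
    # Advanced options, enabled per role or per assessment as needed
    multiple_person_detection: bool = False
    mobile_phone_detection: bool = False
    screen_capture: bool = False
    audio_sampling: bool = False
    # Only apply proctoring to the roles where the extra signal is worth it
    applies_to_roles: list = field(default_factory=list)


# Example: stricter settings for a hypothetical high volume engineering requisition
engineering_config = ProctoringConfig(
    multiple_person_detection=True,
    mobile_phone_detection=True,
    applies_to_roles=["Software Engineer", "Data Analyst"],
)
```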
So, again, this approach gives you the power to tailor the proctoring experience directly to your organization's needs, and we think that's really critical, especially in high volume hiring environments where efficiency and candidate comfort are always top priorities. So that's our proctoring product and our approach to managing the highest integrity candidate signals throughout your evaluation experience.

Great. Thank you so much, Chris. So we've covered a lot regarding our proctoring solution. That is a new product we just released this year. If you're interested in getting a one on one demo of Proctoring, all you need to do is type demo, d e m o, in the chat, and a member of our team will follow up with you. Just make sure you're not labeled as anonymous in the chat; otherwise, we probably won't be able to reach out to you. And if you have any other questions about anything we talked about today, feel free to just type demo in the chat and we'll reach out to you. We won't force you to sit through a demo if you just have a simple question. We're happy to chat one on one with any of you about anything we talked about today. But with that, I think we're gonna pivot to questions. I wanna give everyone a moment to add some questions to the Q&A. We can also go through the chat. We would still love to hear examples from all of you of how you're experiencing cheating today and some unique challenges you might have had. We can talk through those during this webinar. But let's pivot to questions. And while we wait for some of those questions to come in, I had a question for Jillian and Chris. This is a more heady question, but what do you think hiring will look like in five years? You don't have to be right about this. We all have our ideas, but I'm just curious if you have a perspective on that.

My insight is that we must find hiring signals other than the resume. And it's almost bittersweet, because in my career I've seen that whole evolution. I mean, I used to have to print mine on, like, a fixed stock paper. I know I'm dating myself. And there really was, like, one version. So it's wild to see, in my own trajectory, that this is a document that may not hold the weight it once held. And I know this sounds like I only bleed Criteria, but I kind of do. I can't articulate enough how timely our products are given the noise that we're seeing relative to the volume of applicants. We are able to take a resume as solely one data point, and then maybe through a cognitive assessment have one additional data point on how this person thinks, or an asynchronous video interview giving us insight into, wait, how do they communicate? And that is unfolding who the authentic person is, in addition to having that resume there. We are leveraging that now, and I don't know how else I would get through all the noise and the volume without those tools currently.

Nice. And I'll add to that. Thank you, Jillian. My view is that over the next five years, we're gonna continue to see hiring be more skills first and technology driven. One thing that stands out to me is the World Economic Forum's Future of Jobs Report this year, which says there's gonna be an estimated net increase of something like seventy plus million jobs globally by twenty thirty, and I think that's great.
But on the other hand, they predict that thirty nine percent of core skills will evolve or become obsolete, meaning that we absolutely must change the way we are evaluating candidates and talent, and we have to adapt to the fact that what we're measuring now is not what we'll be measuring five years from now. We talk a lot about that at Criteria. Of course, we feel that we're an effective talent suite, positioned to help organizations with that challenge as we approach what we call Work four point o. And, really, for talent acquisition teams, that means shifting to highly scalable tools with really protected, high integrity signals that defend against things like AI, and also making sure that our candidate experiences are built for the realities of the AI era. I think that's very, very important. So that's kind of the next five years of hiring in a nutshell.

Great. One question that's come up in the chat, and you can feel free to add questions to the chat or the Q&A, we'll try to get to them in both areas: someone's asking, what do you tell candidates about proctoring, and in how much detail? I think a lot of that is actually built into our tool, so I don't know if, Chris or Jillian, you wanna comment on that.

Yeah. I'll kick us off. We design our products with the highest level of transparency and integrity. So we let candidates know, through a really nice, modern user experience, that proctoring is gonna do these various things, like take periodic images or audio samples during the assessment, depending, of course, on what is configured by the individual customer. What's equally important, though, is not to just dwell on it as some type of punitive thing. We always wanna give candidates a presumption of positive intent, that they are not there to cheat, and I think that's true for the majority. It's important to have messaging that is very front and center saying that proctoring is going to help the candidate by ensuring fairness and consistency for everyone. From a technical and a data privacy and security perspective, of course, we have to be explicit about what we're collecting, and we are, in a great experience. However, we also wanna explain the advantage it can give to the average candidate in a sea of thousands of applicants, many of which may not actually be human. It's really important that we tell them it's an advantage for them. And we've seen that help with completion rates quite effectively.

Yeah. And I think the tool also does a really great job when you're setting up proctoring for the assessment. It takes you through a bunch of consents, and it lets you know what's gonna happen. I know on our website we have a click through demo of the candidate experience that you can take, where you can see exactly what's built into our product. And then, if you wanna add your own communications to that, you're welcome to do that, because it does help all candidates feel that they're going through a fair process. Jillian, do you have anything to add before we go to the next question?

One other data point: when you, as the hiring manager or talent acquisition professional, are reviewing the results, our product also layers in the human element.
So maybe you have a test result that is flagged for something going on in the proctoring during an assessment, but the product also enables the hiring manager and/or the talent acquisition professional to review what the flag might be. So it lends itself to the ability to have gray area in the event that maybe someone's Internet went out, or something else happened, or someone walked in the room, or a cat jumped onto the laptop. And I think that's also very fair, having that human element in as well.

Great. So, next question here: do you consider using AI to build your resume cheating? I think that's a question for us, on our own perspective.

Yeah. Look, I'll comment as a hiring manager. I work on our products, but I also hire engineers and folks across my teams. And this is a personal opinion, but I think there's a real defining line, and I think we saw that in the poll results. It sounds like a lot of people would agree with the sentiment that preparing, or learning, or using AI as an effective tool where you as the human are in control is a really reasonable way of using it. So if I had a candidate using AI to build their resume, or even, getting closer to that line I'm talking about drawing, to tailor their resume to a specific job role, I think what would cross that line into cheating, from my perspective, is if the information that gets tailored is untruthful in any way, overstating or misrepresenting. And a lot of times these candidates, because they're using an agentic AI tool to do the applying on their behalf, aren't actually reviewing the content that is getting submitted to the hiring manager. So the line for me is the line of misrepresentation: not knowing what you're actually submitting because you're not reviewing it. To me, that is a lazy use of AI, and not one that I would reward in a process. But I saw a great comment in the chat saying that our definition of cheating is changing, and I completely agree with that. As we embed AI in our lives as a normal procedure and process, I think that definition will continue to change. But at the end of the day, when we're hiring people for the best parts of what a human can bring to our organizations, we really need to measure those natural signals and not increase the noise and the difficulty of measuring them with misrepresentations and other noise that reduces our ability to understand what a candidate is truly about.

And I'll echo Chris's sentiment that when a resume is a misrepresentation of experience and skills, yeah, unfortunately, that is not ethical. However, AI really is only as good as the prompts. So when there's that original thought, or the deepening of prompts, not to embellish but to rewrite a resume so it effectively articulates what you've done, we will see that, hopefully, fingers crossed, with probably most resumes at some point. And, again, it tips into cheating if there's not the human review, the reality check, or the ethical gut check of, gosh, did I really do that, or does that really represent me? But I do think there's an educated and articulate way to leverage it, and kind of more power to the candidates right now for doing that, as long as they represent themselves accurately. Great.
Next question here: does Criteria set any expectations around not using AI when a candidate is beginning an assessment, or are we, the customer, responsible for doing that on our own? Our candidate experience is customized and tailored to every customer. We do have best practices that we recommend while you onboard, and we'll be talking to you about what those best practices are and whether they apply to your organization. I would say the common scenarios there are: yep, that's a great best practice, we'll put that instruction in our customized landing page; it's unique to my company and my experience; or it's a flavor of what we heard in the chat, where the hiring manager says, well, I don't wanna completely eliminate the usage of AI, but when they're taking something like our CCAT, our cognitive test, then, of course, I don't want them to use AI to answer those questions. So we can really tailor that messaging. It is case by case. We have best practices that we would offer, but that conversation is really important to have as you join and leverage our talent success platform. Yeah. And I think that's another reason why it's so important to define, at your organization, per role, and even per step in the hiring process, what your definition of cheating is in that scenario, individual to your organization.

Great. Another question here: would you say that spending a lot of time reviewing resumes is even time efficient as a recruiter? I'm basing more of my selection process on assessment results and video interviews. Whoever asked that, we are on the same page. It used to be that, given my years of experience, I could read a resume very quickly; I'd just seen thousands in my career. And I had to chuckle and humble myself in the last six, seven months, because I rarely do it now. I'll go to the video interview and the assessment result and then back reference the resume. So I've completely inverted the way I review candidates.

I would also add that if you're investing time, you have to look at what the fundamental signal is that you're gonna get out of that time investment. And funny enough, for the last twenty years, Criteria has been saying quite boldly, and often unheard, that resumes are not a predictor of job success. We're excited now, not because of all the pain and angst it causes when we have these massive pipelines with a lot of noise, but because we're seeing that really come to fruition: the resume has now been confirmed as one of the noisiest signals in hiring, and Gen AI is the cause of that amplification. People can now just use ChatGPT to tailor it, as we've described. It's easier than ever, and there's no real connection to ability or potential because of that. So, again, if you're spending time on that, you're just getting better looking noise, and you're not getting a predictive signal. The other thing to really note when you're buying solutions for your hiring pipeline is that a lot of solutions out there kind of have the opposite strategy to what I've described, where we are focused on high integrity signals, like the ones that come from our cognitive testing, personality assessments, video interviewing, etcetera.
Other solutions in the market are choosing to use machine learning and algorithms on resume data, and we would draw a really firm line between those two approaches and say that is not where we feel the scientific signals come from. So not only would we not base our product on that, but we also would not encourage hiring managers to spend any time really looking at a resume at the top of the funnel. There's always going to be a place for a resume and context later in your hiring process. But, again, as an early indicator of capability and potential, it's quite a weak and noise filled signal.

Great. And on that same topic, one question from the chat is: if not a resume, what other vehicle can be used to convey that experience? Good question. Jeff, I was just gonna answer you. It's so good. For us, it's our structured interview platform. I've been preaching about using structured interviews for years, and it's so fun that they're so prevalent currently. We leverage behavioral based questions that dig into a candidate's thought processes, the actions they took, what the situation was, and what the result was, and we literally listen for those answers and then score candidate answers on a rubric. I'm so lucky to have hiring managers here who leverage our own platform and tools, because it allows for a fair and balanced hiring practice while simultaneously allowing us to understand experience that isn't just captured on a resume. Yeah. And my perspective, too, is that the resume was really a convenient container for a world where we didn't have data, and the reality is that now we do. The future of experience, as you've just asked in that question, isn't really a static document. It's really, in our opinion, a living, data rich profile built from the candidate's ability to demonstrate capability. And, again, as Jillian noted, assessments give you measurable ability, and structured interviewing surfaces how someone thinks and communicates. I think we're seeing a much more dynamic talent blueprint of an individual candidate because of the technology available to us, and that makes the convenient container we call the resume a lot less useful for predictive hiring.

Great. One more interesting question here: how do you, as HR, get hiring managers to see that the resume shouldn't be the tool they use to glance at a candidate and make a decision? Carrie, for us, this happened in real time. We have had hiring managers come back to HR to articulate, gosh, I thought I had fifty good candidates, or five that I was moving forward with, and we've declined them all or they're not working. And it's been our role then to educate about what is truly happening with resumes: while they were something you leveraged, and maybe not perfectly leveraged, in the past, they are less of a talent signal now than ever. I guess, ultimately, the way we've helped get our hiring managers over that transom is, unfortunately, by their spending time on someone they thought was a great candidate based on a piece of paper, only to find out in the interviews that that was not the case. So we've educated upfront, maybe given them a little rope, and gently had them come back to us to say, yeah, we've noticed that too; let's pivot to a different talent signal that we can leverage. Great. I see there are a few more questions, but we are at the top of the hour.
If you have left a question, we will try to reach back out to you and answer your questions individually. But I wanna be respectful of everyone's time, and of our speakers' time. Thank you both, Jillian and Chris, so much for joining this webinar. I learned a lot from both of you, so I hope everyone else learned something from you all. Any final words? It's an exciting topic and an exciting time for hiring teams, so keep the conversation going. Thank you. Yeah. Thank you for having us. Thank you, everyone. Have a wonderful rest of your day.