Here are some examples of this class of prompts that currently work on Cluely and cause even strong models like o4-mini-high to hallucinate, even when they can search the web:
https://chatgpt.com/share/6865d41a-c720-8005-879b-d28240534751
https://chatgpt.com/share/6865d450-6760-8005-8b7b-7bd776cff96b
https://chatgpt.com/share/6865d578-1b2c-8005-b7b0-7a9148a40cef
https://chatgpt.com/share/6865d59c-1820-8005-afb3-664e49c8b583
https://chatgpt.com/share/6865d5eb-3f88-8005-86b4-bf266e9d4ed9
Link to the vibe-coded code for the site: https://github.com/Build21-Eliot/BeatCluely
+ Is using AI actually cheating, or is it being productive for the role?
+ Am I worried that they'll do all their work in 5 minutes and spend the rest of the time doing something else?
Maybe you are worried about them not being able to actually do the job, which probably means the interview process was wrong from the start. Alternatively, the performance expectations for the role may simply be higher now; e.g. what used to be 1x productivity now needs to be 5x.
As an alternative, I've heard of many SMBs opting for a model in which the last stage of the hiring process is a week of paid work to see how the candidate actually performs, or an in-depth reference check.
I gave an example below - there are a wide variety of roles and situations where these "interview cheating" AI tools can give a false positive signal to an interview process that used to work, as well as a bunch of situations where they wouldn't.
For an extremely cherry-picked example of the former, imagine a small business that gives walking historical tours of your city and does an initial call before an actual walking-tour test. In that first call, could it be harder to tell whether someone has a true interest in the history of your city and a propensity for memorizing historical facts versus using an AI tool? And could you determine that they are using the AI tool by throwing in a question about an event totally unrelated to your city and seeing how they respond?
Ask a question that demands an answer, and expect the correct answer to point out that the question makes no sense.
Bonus points for pointing out why it doesn't.
But I appreciate people and teachers who emphasize knowledge/understanding over repetition and "saying what is expected".
Some in particular think you aren't learning unless you have struggled and been frustrated, and they are quite smug about it. As you said...
When questions make no sense and it takes a lot of effort to figure that out, I would agree that this is stupid and doesn't test for any real skill. But when questions are designed to match the expected knowledge level, I think this type of question is good.
For example:
This question leads you astray, but it is a genuine sign of understanding when the answer is "none". OK, this is not a real trap question, but it borders on one. A more callous example, not a STEM question (not sure what kind of test would ask this question, though):
The answer one gives to this question could be quite revealing. If someone says "it might be in Manhattan, hotel rooms are particularly expensive there, but it is not possible to give a definite answer", fine. If someone starts bullshitting, not so good.
Another one at high-school level maths:
It might be reasonable to assume a rectangular room, but that's not given, so a nuanced answer should be expected. Even more callous would be to say the room is rectangular and then point out that the floor might be tilted :D
But yeah, I would be pretty annoyed by that, too. I mean, nobody would say that fretting about curved space-time or something is a good answer to this question.
But in every domain, I think it's possible to design good "trick questions".
The more I think about it, the more this type of question looks like exactly the type one would use to "benchmark" an LLM.
And again, I'm not saying that I'd answer these correctly...
It might take a few bogus questions to expose the AI.
Edit: This is only to say I find Claude's ironic response humorous. I think this tool is great!
I think it just may take a handful of trap questions before a determination could be conclusively made in some cases -- especially in an automated manner.
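To sketch what that automation might look like (purely illustrative: the first trap is the Kubernetes "Fluxion" question from a comment below, the second is one I made up, the keyword heuristic is a crude assumption, and none of this is how Cluely or BeatCluely actually work), something like this against an OpenAI-style chat API:

```python
# Sketch: send a handful of trap questions to a model and count how many
# answers confidently explain a concept that does not exist.
from openai import OpenAI  # official openai package, v1-style client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# First trap is quoted from the thread below; second is hypothetical.
TRAP_QUESTIONS = [
    "What's the difference between a Pod, a Service, and a Fluxion in Kubernetes?",
    "How do I enable the 'quantum pagefile' feature in the Linux kernel?",
]

# Phrases a non-hallucinated answer would plausibly contain when pushing
# back on a fabricated term. A crude stand-in for a real classifier.
PUSHBACK = ["not a real", "no such", "doesn't exist", "does not exist",
            "not aware of", "did you mean", "isn't a"]

def looks_hallucinated(answer: str) -> bool:
    """Flag answers that never push back on the fake term."""
    lowered = answer.lower()
    return not any(phrase in lowered for phrase in PUSHBACK)

suspicious = 0
for question in TRAP_QUESTIONS:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    answer = resp.choices[0].message.content or ""
    if looks_hallucinated(answer):
        suspicious += 1

# One trap is rarely conclusive; a handful gives a more defensible signal.
print(f"{suspicious}/{len(TRAP_QUESTIONS)} traps drew a confident fake answer")
```

In an interview you'd be scoring the candidate's spoken answer rather than the model's, but the principle is the same: a confident, fluent answer to a nonsense question is itself the red flag.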
> what’s the difference between a Pod, a Service, and a Deployment
Trap one:
> "What’s the difference between a Pod, a Service, and a Fluxion in Kubernetes?"
Then I asked ChatGPT, but it seemed to notice Fluxion isn't a real thing and asked me if I meant Flux as in FluxCD.
It's a cool idea; maybe dev questions are more nuanced.
> How do you implement a recursive descent algorithm for parsing a JSON file?
That is a 100% reasonable interview question. It's not _quite_ how I would phrase it, but it's not out of distribution, as it were.
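For what it's worth, a minimal recursive descent sketch of an answer might look like this (simplified on purpose: a JSON subset with no escape sequences, integer-only numbers, and minimal error handling):

```python
# Minimal recursive descent parser for a JSON subset: objects, arrays,
# integers, double-quoted strings without escapes, true/false/null.
class Parser:
    def __init__(self, text: str):
        self.text = text
        self.pos = 0

    def parse(self):
        value = self.parse_value()
        self.skip_whitespace()
        if self.pos != len(self.text):
            raise ValueError(f"trailing data at {self.pos}")
        return value

    def skip_whitespace(self):
        while self.pos < len(self.text) and self.text[self.pos] in " \t\r\n":
            self.pos += 1

    def peek(self) -> str:
        self.skip_whitespace()
        if self.pos >= len(self.text):
            raise ValueError("unexpected end of input")
        return self.text[self.pos]

    def expect(self, ch: str):
        if self.peek() != ch:
            raise ValueError(f"expected {ch!r} at {self.pos}")
        self.pos += 1

    # Each grammar rule gets its own function; parse_value dispatches on
    # the next character -- the essence of recursive descent.
    def parse_value(self):
        ch = self.peek()
        if ch == "{":
            return self.parse_object()
        if ch == "[":
            return self.parse_array()
        if ch == '"':
            return self.parse_string()
        if ch.isdigit() or ch == "-":
            return self.parse_number()
        return self.parse_literal()

    def parse_object(self) -> dict:
        self.expect("{")
        obj = {}
        if self.peek() != "}":
            while True:
                key = self.parse_string()
                self.expect(":")
                obj[key] = self.parse_value()
                if self.peek() != ",":
                    break
                self.expect(",")
        self.expect("}")
        return obj

    def parse_array(self) -> list:
        self.expect("[")
        arr = []
        if self.peek() != "]":
            while True:
                arr.append(self.parse_value())
                if self.peek() != ",":
                    break
                self.expect(",")
        self.expect("]")
        return arr

    def parse_string(self) -> str:
        self.expect('"')
        start = self.pos
        while self.text[self.pos] != '"':  # no escape handling in this sketch
            self.pos += 1
        result = self.text[start:self.pos]
        self.pos += 1  # consume the closing quote
        return result

    def parse_number(self) -> int:
        start = self.pos
        if self.text[self.pos] == "-":
            self.pos += 1
        while self.pos < len(self.text) and self.text[self.pos].isdigit():
            self.pos += 1
        return int(self.text[start:self.pos])

    def parse_literal(self):
        for word, value in (("true", True), ("false", False), ("null", None)):
            if self.text.startswith(word, self.pos):
                self.pos += len(word)
                return value
        raise ValueError(f"unexpected token at {self.pos}")

print(Parser('{"ok": true, "nums": [1, -2, 3]}').parse())
# -> {'ok': True, 'nums': [1, -2, 3]}
```

The point of asking it in an interview is less the final code and more whether the candidate can explain the one-function-per-grammar-rule structure.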
Even RLHF primarily trains the AI to answer queries, not to go "Wait a sec, that's total nonsense", so the answer to a nonsensical question is usually more nonsense.
A test for generality of intelligence, then: being able to apply abstract reasoning processes from a domain rich in signal to a novel domain.
Your observation also points to screen recordings being incredibly high-value data. Good luck persuading anyone already concerned for their job security to go along with that.
I keep hearing of employers being duped by AI in interviews; I don't see how it is possible unless:
1) The employer is not spending the time to synchronously connect via live video or in person, which is terrible for interviewing
2) The interviewer is not competent to be interviewing
... what other option is there? Are people still sending homework/exams as part of interviews and expecting good talent to put up with that? I'm confused about where this is helpful to a team that is engaged with the interview process.
Bluffing in interviews is nearly a given. Your interview should be designed to suss out the best fit; the cheaters should not even make it into final consideration if you did a decent interview and met the person via some sort of live interaction.
Before these sorts of tools [Cluely], there wasn't a good way that I'm aware of to cheat on this type of question and respond without any interruption or pause in the conversation.
In real support situations, the tool is not useful, of course, as you could pass a major hallucination on to a customer.
Things like diagrams and questions written on paper and then held up to the webcam.
For a remote interview, I would do something as simple as share a Lucid app document where they can do a rough diagram of their architecture.
Even before LLMs, it was easy to pass tech-trivia interviews by just looking up "the top X interview questions for technology Y".
I was surprised by just how easy it is to intentionally trigger hallucinations in recent LLMs, and by how hard it was, as a [temporary] "user" of Cluely, to detect these hallucinations while using the tool in some non-rigorous settings, especially given how these tools market themselves as being "undetectable".