Building on recent explainable AI techniques, this article highlights the pervasiveness of Clever Hans effects in unsupervised learning and the substantial risks these effects pose to prediction accuracy on new data.

Our analysis yields a novel robustness metric called CLEVER, short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and computationally feasible for large neural networks.

One common approach is to train models to refuse unsafe queries, but this strategy can be vulnerable to clever prompts, often referred to as jailbreak attacks, which can trick the AI into providing harmful responses. Our method, STAIR (SafeTy Alignment with Introspective Reasoning), guides models to think more carefully before responding.

In this paper, we have proposed CLEVER, a novel counterfactual framework for debiasing fact-checking models. Unlike existing works, CLEVER is augmentation-free and mitigates biases at the inference stage. In CLEVER, the claim-evidence fusion model and the claim-only model are trained independently to capture the corresponding information.

Functionally, MambaNO achieves a clever balance between global integration, facilitated by the Mamba state space model that scans the entire function, and local integration, handled by an alias-free architecture.

While, as we mentioned earlier, there can be thorny "Clever Hans" issues when humans prompt LLMs, an automated verifier mechanically backprompting the LLM does not suffer from these. We tested this setup on a subset of the failed instances in the one-shot natural language prompt configuration using GPT-4, given its larger context window.
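As a rough illustration of the CLEVER robustness score described above, the sketch below estimates a local cross-Lipschitz constant by sampling gradient norms of the class margin in a ball around the input and fitting a reverse Weibull (extreme value) distribution to the per-batch maxima; the score is then the margin divided by that estimate. The classifier `model`, the sampling radius, and the batch sizes are illustrative assumptions, not the authors' reference implementation.

    import numpy as np
    import torch
    from scipy.stats import weibull_max  # reverse Weibull, for the extreme value fit

    def clever_score(model, x0, true_class, target_class,
                     radius=0.5, n_batches=50, samples_per_batch=128):
        # Sketch of a targeted, L2-norm CLEVER-style estimate (illustrative only).
        x0 = x0.clone().detach()
        with torch.no_grad():
            logits = model(x0.unsqueeze(0))[0]
        margin = (logits[true_class] - logits[target_class]).item()  # g(x0)

        batch_maxima = []
        for _ in range(n_batches):
            # Sample points uniformly inside an L2 ball of radius `radius` around x0.
            noise = torch.randn(samples_per_batch, *x0.shape)
            noise = noise / noise.flatten(1).norm(dim=1).view(-1, *[1] * x0.dim())
            r = radius * torch.rand(samples_per_batch).pow(1.0 / x0.numel())
            xs = (x0 + r.view(-1, *[1] * x0.dim()) * noise).requires_grad_(True)

            out = model(xs)
            g = out[:, true_class] - out[:, target_class]
            g.sum().backward()
            grad_norms = xs.grad.flatten(1).norm(dim=1)   # ||grad g(x)||_2 per sample
            batch_maxima.append(grad_norms.max().item())

        # The location parameter of a reverse Weibull fitted to the batch maxima
        # estimates the local cross-Lipschitz constant L; the score is g(x0) / L.
        _, loc, _ = weibull_max.fit(np.array(batch_maxima))
        return margin / max(loc, 1e-12)

Under these sampling assumptions, a larger score suggests that no perturbation smaller than that value (in L2 norm) flips the prediction from `true_class` to `target_class`, without ever running a specific attack.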
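The inference-stage debiasing in the counterfactual CLEVER framework above can be pictured as subtracting the claim-only (bias) signal from the claim-evidence fusion signal at prediction time. The sketch below assumes two independently trained classifiers, `fusion_model` and `claim_only_model`, and a tunable coefficient `alpha`; these names and the exact combination rule are illustrative assumptions, not the paper's precise formulation.

    import torch
    import torch.nn.functional as F

    def debiased_predict(fusion_model, claim_only_model, claim, evidence, alpha=1.0):
        # Counterfactual-style inference: keep what the evidence contributes
        # beyond what the claim alone would predict.
        with torch.no_grad():
            fused_logits = fusion_model(claim, evidence)   # claim + evidence signal
            claim_logits = claim_only_model(claim)         # claim-only shortcut / bias signal
        debiased = (F.log_softmax(fused_logits, dim=-1)
                    - alpha * F.log_softmax(claim_logits, dim=-1))
        return debiased.argmax(dim=-1)

Because both models are trained independently and only combined at inference, no augmented training data is needed, which is the augmentation-free property mentioned above.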
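To make the global/local split in the MambaNO description concrete, here is a toy sketch in which a discretized 1D function is passed through a diagonal linear state space scan (global integration over the whole input) and a small channel-wise convolution (local integration), with the two branches summed. All names, parameters, and the simplified recurrence are assumptions for illustration; the actual model uses Mamba's selective scan and an alias-free architecture.

    import numpy as np

    def global_ssm_scan(u, a=0.9, b=0.1, c=1.0):
        # Diagonal linear SSM: x_t = a * x_{t-1} + b * u_t,  y_t = c * x_t,
        # scanned over the entire discretized function (global integration).
        x = np.zeros(u.shape[1])
        y = np.empty_like(u)
        for t in range(u.shape[0]):
            x = a * x + b * u[t]
            y[t] = c * x
        return y

    def local_conv(u, kernel=np.array([0.25, 0.5, 0.25])):
        # Channel-wise smoothing kernel as a stand-in for local integration.
        return np.stack([np.convolve(u[:, ch], kernel, mode="same")
                         for ch in range(u.shape[1])], axis=1)

    def mamba_like_block(u):
        # Combine the global scan and local convolution branches.
        return global_ssm_scan(u) + local_conv(u)

    u = np.random.randn(256, 4)      # 256 grid points, 4 channels
    v = mamba_like_block(u)          # same shape: (256, 4)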
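The verifier-in-the-loop setup in the last excerpt amounts to a generate-verify-backprompt loop. The sketch below assumes a hypothetical `llm_complete` function that queries the language model (e.g., GPT-4) and a hypothetical `verify_plan` checker, such as a VAL-style plan validator, that returns a success flag plus error feedback; neither is specified in the excerpt.

    def plan_with_backprompting(problem_prompt, llm_complete, verify_plan, max_rounds=5):
        # Generate a candidate plan, check it mechanically, and feed the verifier's
        # feedback back to the LLM until the plan validates or the budget runs out.
        prompt = problem_prompt
        for _ in range(max_rounds):
            candidate = llm_complete(prompt)
            ok, feedback = verify_plan(candidate)   # automated check, no human hints
            if ok:
                return candidate                    # verified plan
            prompt = (prompt
                      + "\n\nYour previous plan was invalid:\n" + candidate
                      + "\nVerifier feedback: " + feedback
                      + "\nPlease provide a corrected plan.")
        return None  # no valid plan found within the round budget

Because the feedback comes from a mechanical verifier rather than a human, the loop avoids the Clever Hans concern the excerpt raises about people unintentionally steering the model toward the answer.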