AI Cheat Sheets


What is Chain of Thought Prompting?

2025-08-31T16:52:13+00:00

A prompting technique for Large Language Models (LLMs) in which the model is guided to show intermediate reasoning steps before arriving at the final answer. Inspired by how humans solve problems step by step, it helps LLMs handle complex reasoning tasks such as math, logic, and multi-step decision-making.

Key Concepts
- Step-by-Step Reasoning: Instead of jumping to an answer, the model explains its thought process.
- Intermediate Steps: Similar to “showing work” in math problems.
- Better Accuracy: Effective in arithmetic, logical reasoning, and multi-hop questions.
- Prompt Example: “Let’s think step by step.”

Benefits
- Improves reasoning accuracy.
- Makes the model’s output more interpretable.
- Reduces errors [...]
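A minimal Python sketch of how the trigger phrase might be wrapped around a question. The helper names (`make_cot_prompt`, `extract_final_answer`) and the `Answer:` ending convention are illustrative assumptions, not any vendor's API:

```python
def make_cot_prompt(question: str) -> str:
    # Append the classic zero-shot chain-of-thought trigger phrase.
    return f"{question}\nLet's think step by step."

def extract_final_answer(response: str) -> str:
    # Assumed convention: the model ends its reasoning with "Answer: <value>".
    for line in response.splitlines():
        if line.lower().startswith("answer:"):
            return line.split(":", 1)[1].strip()
    # Fall back to the last non-empty line of the response.
    return response.strip().splitlines()[-1]

prompt = make_cot_prompt("A bat and a ball cost $1.10 in total. "
                         "The bat costs $1.00 more than the ball. "
                         "How much does the ball cost?")

# Simulated model output showing intermediate steps before the answer.
simulated = ("Step 1: Let the ball cost x.\n"
             "Step 2: Then x + (x + 1.00) = 1.10, so 2x = 0.10 and x = 0.05.\n"
             "Answer: $0.05")
print(extract_final_answer(simulated))
```

In practice the prompt string would be sent to an LLM; here the response is simulated so the extraction step can be shown end to end.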


What is Model Context Protocol (MCP)?

2025-08-28T09:19:36+00:00

An open, model‑agnostic protocol introduced by Anthropic in November 2024, designed to standardize how AI systems (large language models, LLMs) connect with external data sources and tools via a JSON‑RPC interface. It is often likened to a “USB‑C port for AI,” offering a universal interface rather than bespoke integrations per system.

Key Benefits of MCP
- Provides a standardized interface so LLMs can easily connect to multiple tools and data sources without custom adapters.
- Solves the “N×M” problem, removing the need to build a unique connector for every AI–tool combination.
- Ensures structured and validated exchanges, supporting better debugging, version control, and reliability in [...]
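The JSON‑RPC 2.0 framing can be sketched as follows; the `tools/call` method and the `search_docs` tool name are illustrative assumptions based on the protocol's general shape, not a verified excerpt of the spec:

```python
import json

def jsonrpc_request(method: str, params: dict, req_id: int) -> str:
    # MCP messages ride in standard JSON-RPC 2.0 envelopes.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# Hypothetical tool invocation: ask a connected server to run a search tool.
req = jsonrpc_request(
    "tools/call",
    {"name": "search_docs", "arguments": {"query": "federated learning"}},
    req_id=1,
)
parsed = json.loads(req)
print(parsed["method"], parsed["params"]["name"])
```

Because every client and server speaks this same envelope, one connector per side replaces the N×M matrix of bespoke integrations.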


What is Federated Learning?

2025-08-26T16:51:20+00:00

A machine learning technique where multiple devices or servers collaboratively train a shared model without sharing raw data. Instead of sending data to a central server, only the model updates (gradients/parameters) are sent, keeping sensitive information local.

Key Concepts
- Decentralized Training: Data stays on local devices (e.g., smartphones, IoT, edge devices).
- Model Aggregation: A central server collects and averages model updates to improve the global model.
- Privacy-Preserving: Minimizes the risk of exposing personal or sensitive data.
- Communication Efficiency: Reduces the need for large-scale raw data transfer.
- Edge AI Integration: Often paired with edge computing for real-time AI.

How Federated Learning Works [...]
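The aggregation step can be sketched as a weighted average of client parameter vectors (the idea behind FedAvg); representing model weights as flat lists of floats is a deliberate simplification:

```python
def fedavg(client_updates, client_sizes):
    # Weighted average of client parameter vectors, weighted by each
    # client's local dataset size (FedAvg-style aggregation).
    total = sum(client_sizes)
    n_params = len(client_updates[0])
    return [
        sum(weights[i] * size for weights, size in zip(client_updates, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients send parameter updates; only these numbers leave the device,
# never the raw training data.
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [10, 30]  # client 2 has 3x more local data, so its update counts more
print(fedavg(clients, sizes))  # [2.5, 3.5]
```

The server would then broadcast the averaged parameters back to the clients for the next training round.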


What are Clustering Algorithms in Machine Learning?

2025-08-25T06:43:36+00:00

Clustering is an unsupervised learning technique that groups similar data points without predefined labels. It helps discover hidden patterns, segment data, and reduce dimensionality in datasets.

Key Concepts
- Clustering: Grouping data points based on similarity or distance metrics.
- Unsupervised Learning: No labeled data; the model identifies structure independently.
- Distance Metrics: Commonly used metrics include Euclidean, Manhattan, and cosine similarity.

Popular Clustering Algorithms
1. K-Means Clustering: Divides data into K clusters by minimizing the variance within each cluster. Fast, easy to implement, and works well with large datasets, but it requires predefining K and is sensitive to outliers. Common uses: customer segmentation, image compression. [...]
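A minimal pure-Python sketch of the K-Means assign/update loop described above; real workloads would use a library such as scikit-learn rather than this toy version:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # naive initialization from the data
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        # (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: move each center to the mean of its cluster.
        for j, members in enumerate(clusters):
            if members:
                centers[j] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return centers

# Two obvious groups: points near the origin and points near (10, 10).
pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(sorted(kmeans(pts, 2)))
```

On this toy data the loop converges to centers at the mean of each visible group, which is exactly the within-cluster-variance minimization the text describes.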


Azure AI Foundry

2025-08-01T12:49:53+00:00

Azure AI Foundry Cheat Sheet

Azure AI Foundry is a unified platform that enables enterprises to design, customize, and manage AI applications and agents at scale. It integrates tools, models, and workflows to streamline the development and deployment of AI solutions.

Key Components
- AI Foundry: A development environment for building, testing, and deploying AI models and applications.
- Model Catalog: A repository of prebuilt models from Microsoft, OpenAI, and other partners, facilitating model selection and deployment.
- Prompt Flow: A tool for designing and orchestrating language model workflows, enabling systematic experimentation and refinement.
- Agent Service: A platform for securely designing, deploying, and [...]


What Is the Difference Between AI, ML, DL, and Generative AI?

2025-07-22T17:40:30+00:00

Imagine a world where machines compose music, diagnose diseases, write code, drive cars, and even generate original artwork. That world isn't the future; it's now. Artificial Intelligence (AI) is no longer a buzzword; it's a driving force behind the most significant innovations of our time. But here's the catch: while AI is everywhere, many still confuse its core components. Understanding the differences between these technologies isn't just helpful; it's essential. Whether you're a student, a tech professional, a business leader, or simply AI-curious, this guide will give you a crystal-clear breakdown of these foundational terms in 2025 and beyond. What is [...]


What are ROUGE Metrics – Recall-Oriented Understudy for Gisting Evaluation?

2025-07-08T11:24:28+00:00

Recall-Oriented Understudy for Gisting Evaluation (ROUGE) Cheat Sheet

ROUGE is a family of metrics designed to assess the similarity between machine-generated text (candidate) and human-written reference text (ground truth) in NLP tasks like text summarization and machine translation. It measures how well generated text captures key information and structure from the reference text, emphasizing recall (the proportion of relevant information preserved).

Score Range: 0 to 1, where higher scores indicate greater similarity between candidate and reference texts.

Key Use Cases
- Evaluating text summarization systems.
- Assessing machine translation quality.
- Analyzing content accuracy in generated text.

Types of ROUGE Metrics
- ROUGE-N: Measures the overlap of [...]
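The n-gram overlap idea behind ROUGE-N can be sketched for unigrams as follows; this toy version tokenizes on whitespace only and omits the stemming, stopword handling, and ROUGE-L/ROUGE-S variants a real implementation would offer:

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> dict:
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram overlap
    recall = overlap / max(sum(ref.values()), 1)      # share of reference covered
    precision = overlap / max(sum(cand.values()), 1)  # share of candidate that matches
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# The candidate repeats part of the reference, so precision is perfect
# but recall is penalized for the missing content.
print(rouge_n("the cat sat", "the cat sat on the mat", n=1))
```

Note how the recall orientation shows up: a short candidate can score perfect precision yet still lose points for covering little of the reference.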


What is BERTScore – Bidirectional Encoder Representations from Transformers Score?

2025-07-03T06:03:55+00:00

BERTScore (Bidirectional Encoder Representations from Transformers Score) Cheat Sheet

BERTScore is an evaluation metric that looks beyond surface-level word matching to assess the meaning behind generated text. Instead of counting overlapping words like traditional metrics such as BLEU or ROUGE, BERTScore taps into the power of pre-trained transformer models (like BERT) to compare the semantic similarity between tokens in the generated output and a reference sentence. It does this by calculating the cosine similarity between their contextual embeddings.

Originally proposed by Zhang et al. (2020), BERTScore has quickly become a popular choice in natural language processing tasks where [...]
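The matching procedure can be sketched with toy 2-d vectors standing in for real contextual embeddings; a faithful implementation would pull token embeddings from an actual BERT model and typically apply IDF weighting as well:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def bertscore_f1(cand_emb, ref_emb):
    # Greedy matching: each candidate token pairs with its most similar
    # reference token (precision), and each reference token with its most
    # similar candidate token (recall); F1 combines the two.
    p = sum(max(cosine(c, r) for r in ref_emb) for c in cand_emb) / len(cand_emb)
    r = sum(max(cosine(r_, c) for c in cand_emb) for r_ in ref_emb) / len(ref_emb)
    return 2 * p * r / (p + r)

# Toy 2-d "contextual embeddings" standing in for real BERT outputs.
cand = [(1.0, 0.0), (0.0, 1.0)]
ref = [(1.0, 0.0), (0.0, 1.0), (0.7, 0.7)]
print(round(bertscore_f1(cand, ref), 3))
```

Because matching happens in embedding space, a paraphrase can score highly even with zero exact word overlap, which is precisely where BLEU and ROUGE fall short.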


Azure AI Bot Service

2025-07-01T08:07:59+00:00

Azure AI Bot Service Cheat Sheet

A managed platform for building, deploying, and managing intelligent chatbots with conversational AI across multiple channels. It enables developers to create scalable, AI-powered bots using natural language processing (NLP) and low-code tools.

Key Components
- Azure Bot Service: Manages hosting, scaling, and monitoring of bots.
- Bot Framework SDK: Offers development tools for bot creation (C#, JavaScript, Python).
- Microsoft Copilot Studio: Low-code platform for building bots with a visual interface.

Key Features
- Multi-Channel Support: Connects to Microsoft Teams, Slack, Web Chat, email, and SMS.
- Natural Language Processing: Integrates with Azure AI Language for intent recognition and knowledge [...]


What is Retrieval Augmented Generation (RAG) in Machine Learning?

2025-06-30T03:46:57+00:00

Retrieval-Augmented Generation (RAG) Cheat Sheet

Retrieval-Augmented Generation (RAG) is a method that enhances large language model (LLM) outputs by incorporating information from external, authoritative knowledge sources. Instead of relying solely on pre-trained data, RAG retrieves relevant content at inference time to ground its responses.

LLMs are trained on massive datasets and use billions of parameters to perform tasks like:
- Question answering
- Language translation
- Text completion

RAG extends LLM capabilities to domain-specific or private organizational knowledge without requiring model retraining. It provides a cost-efficient way to improve the relevance, accuracy, and utility of LLM outputs in dynamic or [...]
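The retrieve-then-generate flow can be sketched as follows; the word-overlap scorer is a deliberately naive stand-in for the embedding-based vector search a production RAG system would use, and the final prompt would be sent to an LLM:

```python
def retrieve(query: str, docs: list, k: int = 2) -> list:
    # Naive lexical scoring: rank documents by shared words with the query.
    # A real system would rank by embedding similarity in a vector store.
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query: str, docs: list) -> str:
    # Ground the model by pasting the retrieved passages into the prompt.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}")

docs = [
    "Paris is the capital of France.",
    "The Nile is a river in Africa.",
    "France borders Spain.",
]
print(build_rag_prompt("What is the capital of France?", docs))
```

Because the grounding passages are fetched at inference time, the knowledge base can be updated without retraining the model, which is the cost advantage the text describes.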


AWS, Azure, and GCP Certifications are consistently among the top-paying IT certifications in the world, considering that most companies have now shifted to the cloud. Upskill and earn over $150,000 per year with an AWS, Azure, or GCP certification!

Follow us on LinkedIn, Facebook, or join our Slack study group. More importantly, take as many practice exams as you can to help increase your chances of passing your certification exams on your first try!