LABORATORY PORTFOLIO · AI & ML · 2026

Artificial Intelligence & Expert Systems

// Exploring Intelligence · One Algorithm at a Time

11 Algorithms
7 Search Methods
1 Neural Model
100 % Modular
SCROLL TO EXPLORE
SYSTEM OVERVIEW

Project Initialised

ai-lab-portfolio — bash — 80×24
user@ai-lab:~$ cat project.info
REPOSITORY : AI & Expert Systems — Lab Programs
OBJECTIVE : Analyse intelligent search and decision-making algorithms
PARADIGM : Uninformed · Informed · Adversarial · Knowledge-based
user@ai-lab:~$ ls ./features/
interactive-inputs    traversal-display    optimal-path
complexity-analysis   modular-design      comparison-study
user@ai-lab:~$ python3 run_all.py
[ OK ] BFS   [ OK ] DFS   [ OK ] UCS   [ OK ] Greedy   [ OK ] A*
[ OK ] Minimax   [ OK ] AlphaBeta   [ OK ] WaterJug   [ OK ] DecisionTree
[ OK ] CryptArithmetic   [ OK ] NeuralNetwork
user@ai-lab:~$
ALGORITHM BANK

Implementations

↓ Click any card to expand pseudocode

📡
Breadth First Search
UNINFORMED · GRAPH TRAVERSAL
BFS

Explores all nodes at current depth before moving deeper. Guarantees shortest path in unweighted graphs using a FIFO queue.

CREATE queue Q, visited set V
ENQUEUE(Q, Start)
WHILE Q not empty DO
    node ← DEQUEUE(Q)
    IF node not in V THEN
        VISIT(node); ADD(V, node)
        ENQUEUE(Q, neighbors)
    END IF
END WHILE
COMPLETE OPTIMAL FIFO QUEUE
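The pseudocode above can be sketched in Python, assuming the graph is an adjacency dict (names are illustrative, not the lab's actual code):

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal: visit every node at the current depth
    before going deeper, using a FIFO queue."""
    visited, order = set(), []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node not in visited:
            visited.add(node)
            order.append(node)
            queue.extend(graph.get(node, []))  # enqueue neighbours
    return order
```

Because unvisited neighbours are dequeued in insertion order, the visit sequence expands level by level, which is why BFS finds shortest paths in unweighted graphs.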
🔎
Depth First Search
UNINFORMED · GRAPH TRAVERSAL
DFS

Dives as deep as possible along each branch before backtracking. Memory-efficient using recursion or an explicit stack.

DFS(node):
    ADD(Visited, node)
    VISIT(node)
    FOR each neighbor NOT IN Visited:
        DFS(neighbor)  // Recurse
CALL DFS(StartNode)
RECURSIVE BACKTRACK STACK
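A minimal recursive Python version, under the same adjacency-dict assumption:

```python
def dfs(graph, node, visited=None, order=None):
    """Depth-first traversal: follow one branch as deep as possible,
    then backtrack via the call stack."""
    if visited is None:
        visited, order = set(), []
    visited.add(node)
    order.append(node)
    for neighbour in graph.get(node, []):
        if neighbour not in visited:
            dfs(graph, neighbour, visited, order)  # recurse deeper
    return order
```

An explicit stack gives the same behaviour without recursion-depth limits.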
💰
Uniform Cost Search
UNINFORMED · OPTIMAL
UCS

Expands the lowest cumulative-cost node first. Dijkstra's algorithm generalised for any step cost — always finds the optimal path.

CREATE priority queue PQ
INSERT(PQ, Start, cost=0)
WHILE PQ not empty DO
    node ← REMOVE_MIN(PQ)
    IF node = Goal THEN STOP
    ADD(Visited, node)
    INSERT(PQ, neighbors, updated cost)
END WHILE
OPTIMAL MIN-HEAP WEIGHTED
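A Python sketch with `heapq` as the min-heap, assuming a weighted adjacency dict `{node: [(neighbour, step_cost), ...]}` (illustrative names):

```python
import heapq

def ucs(graph, start, goal):
    """Uniform Cost Search: always expand the frontier node with the
    lowest cumulative path cost."""
    pq = [(0, start, [start])]          # (cost so far, node, path)
    visited = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path           # first goal pop is optimal
        if node in visited:
            continue
        visited.add(node)
        for neighbour, step in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(pq, (cost + step, neighbour, path + [neighbour]))
    return None
```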
🚀
Greedy Best-First
INFORMED · HEURISTIC-DRIVEN
GBFS

Always expands the node that appears closest to the goal using a heuristic h(n). Fast but not guaranteed to be optimal.

INSERT(PQ, Start, h(Start))
WHILE PQ not empty DO
    node ← REMOVE_MIN_H(PQ)
    IF node = Goal THEN STOP
    INSERT(PQ, neighbors, h(n))
END WHILE
HEURISTIC FAST NOT OPTIMAL
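Sketched in Python with the heuristic supplied as a plain dict `h[node]` (an illustrative simplification):

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy Best-First: expand the node with the smallest h(n),
    ignoring the cost already paid to reach it."""
    pq = [(h[start], start, [start])]
    visited = set()
    while pq:
        _, node, path = heapq.heappop(pq)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(pq, (h[neighbour], neighbour, path + [neighbour]))
    return None
```

Note that the priority is h(n) alone, which is exactly why the result can be fast yet suboptimal.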
🌟
A* Search
INFORMED · OPTIMAL + COMPLETE
A*

Combines UCS and Greedy: f(n) = g(n) + h(n). Optimal with an admissible heuristic — the gold standard of pathfinding.

INSERT(PQ, (f=0, Start))
WHILE PQ not empty DO
    node ← REMOVE_MIN_F(PQ)
    IF node = Goal THEN PRINT "Goal Reached"; STOP
    FOR each neighbor:
        g ← path cost
        f ← g + h(neighbor)
        INSERT(PQ, neighbor, f)
END WHILE
f(n)=g+h ADMISSIBLE OPTIMAL
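A compact Python sketch, combining the UCS and Greedy conventions above (weighted adjacency dict plus a heuristic dict; names are illustrative):

```python
import heapq

def astar(graph, h, start, goal):
    """A*: order the frontier by f(n) = g(n) + h(n). Optimal when h
    never overestimates the true remaining cost (admissible)."""
    pq = [(h[start], 0, start, [start])]    # (f, g, node, path)
    best_g = {start: 0}
    while pq:
        f, g, node, path = heapq.heappop(pq)
        if node == goal:
            return g, path
        for neighbour, step in graph.get(node, []):
            g2 = g + step
            if g2 < best_g.get(neighbour, float('inf')):  # better route found
                best_g[neighbour] = g2
                heapq.heappush(pq, (g2 + h[neighbour], g2, neighbour, path + [neighbour]))
    return None
```

With h = 0 everywhere this degenerates to UCS; with g ignored it degenerates to Greedy Best-First.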
🎮
Minimax
ADVERSARIAL · GAME THEORY
MM

Optimal strategy for two-player zero-sum games. MAX player maximises score; MIN player minimises it across the game tree.

MINIMAX(node, depth, isMax):
    IF depth = 0 THEN RETURN value
    IF isMax:
        RETURN max(left, right)
    ELSE:
        RETURN min(left, right)
GAME TREE ZERO-SUM RECURSIVE
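The recursion can be sketched in Python over a toy game tree, with interior nodes in a dict and leaf scores in another (purely illustrative data structures):

```python
def minimax(node, depth, is_max, tree, values):
    """Minimax on a game tree: MAX picks the largest child value,
    MIN picks the smallest, alternating by level."""
    if depth == 0 or node not in tree:      # leaf or depth cut-off
        return values[node]
    scores = [minimax(child, depth - 1, not is_max, tree, values)
              for child in tree[node]]
    return max(scores) if is_max else min(scores)
```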
🎯
Alpha–Beta Pruning
ADVERSARIAL · OPTIMISED
α-β

Enhancement of Minimax that prunes branches which cannot affect the result. With good move ordering it cuts the effective branching factor from b to roughly √b, roughly doubling the reachable search depth.

ALPHABETA(node, α, β, isMax):
    IF depth = 0 THEN RETURN value
    IF isMax:
        FOR each child:
            α ← max(α, AB(child))
            IF β ≤ α THEN BREAK  // Prune
    ELSE:
        FOR each child:
            β ← min(β, AB(child))
            IF β ≤ α THEN BREAK  // Prune
PRUNING O(√b^m) OPTIMAL
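A Python sketch using the same toy-tree representation as the Minimax card (illustrative names):

```python
import math

def alphabeta(node, depth, alpha, beta, is_max, tree, values):
    """Minimax with alpha-beta windows: stop scanning siblings as soon
    as the current branch cannot change the final decision."""
    if depth == 0 or node not in tree:
        return values[node]
    if is_max:
        best = -math.inf
        for child in tree[node]:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, tree, values))
            alpha = max(alpha, best)
            if beta <= alpha:
                break                      # prune remaining siblings
        return best
    best = math.inf
    for child in tree[node]:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True, tree, values))
        beta = min(beta, best)
        if beta <= alpha:
            break                          # prune remaining siblings
    return best
```

On the same tree it returns exactly the Minimax value, just with fewer node evaluations.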
🧩
Water Jug Problem
STATE-SPACE · BFS-BASED
WJP

Classic state-space search puzzle using BFS. Generates all valid jug operations and searches for the target water volume.

ENQUEUE(Q, (0,0)); ADD(V, (0,0))
WHILE Q not empty DO
    (x,y) ← DEQUEUE(Q)
    IF x = Target OR y = Target THEN
        PRINT "Target Reached"; EXIT
    GENERATE possible states
    ADD unvisited states to Q
END WHILE
STATE-SPACE BFS PUZZLE
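A Python sketch of the BFS over (x, y) jug states, with the six classic operations generated inline (function name and structure are illustrative):

```python
from collections import deque

def water_jug(cap_x, cap_y, target):
    """BFS over jug states; returns the state sequence reaching
    `target` litres in either jug, or None if unreachable."""
    start = (0, 0)
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if x == target or y == target:
            return path
        pour_xy = min(x, cap_y - y)        # how much x can pour into y
        pour_yx = min(y, cap_x - x)        # how much y can pour into x
        successors = [
            (cap_x, y), (x, cap_y),        # fill a jug
            (0, y), (x, 0),                # empty a jug
            (x - pour_xy, y + pour_xy),    # pour x -> y
            (x + pour_yx, y - pour_yx),    # pour y -> x
        ]
        for state in successors:
            if state not in visited:
                visited.add(state)
                queue.append(path + [state])
    return None
```

Because the frontier is a FIFO queue, the first solution found uses the fewest operations.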
🌳
Decision Tree
MACHINE LEARNING · SUPERVISED
DT

Recursively partitions data using Information Gain to select the best attribute, building an interpretable classification tree.

DecisionTree(Dataset, Target):
    IF all same class THEN RETURN class
    IF no attributes THEN RETURN majority class
    best ← SELECT(max InfoGain)
    FOR each value of best:
        subtree ← DecisionTree(split)
    RETURN tree rooted at best
INFO GAIN ENTROPY ID3
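An ID3-style sketch in Python, with examples as dicts and the tree returned as nested dicts (a simplified illustration, not the lab's exact implementation):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def id3(rows, labels, attrs):
    """Recursively split on the attribute with maximum Information Gain.
    Returns a class label or a nested {attr: {value: subtree}} dict."""
    if len(set(labels)) == 1:
        return labels[0]                       # pure node
    if not attrs:
        return Counter(labels).most_common(1)[0][0]  # majority class

    def gain(attr):
        remaining = entropy(labels)
        for value in set(row[attr] for row in rows):
            sub = [l for row, l in zip(rows, labels) if row[attr] == value]
            remaining -= len(sub) / len(labels) * entropy(sub)
        return remaining

    best = max(attrs, key=gain)
    branches = {}
    for value in set(row[best] for row in rows):
        sub_rows = [row for row in rows if row[best] == value]
        sub_labels = [l for row, l in zip(rows, labels) if row[best] == value]
        branches[value] = id3(sub_rows, sub_labels,
                              [a for a in attrs if a != best])
    return {best: branches}
```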
🔐
CryptArithmetic
CONSTRAINT SATISFACTION
CSP

Solve SEND + MORE = MONEY via constraint satisfaction. Each letter maps to a unique digit; leading letters cannot be zero.

FOR all permutations of digits for S,E,N,D,M,O,R,Y:
    ENSURE all digits unique
    ENSURE S ≠ 0, M ≠ 0
    SEND ← form_num(S,E,N,D)
    MORE ← form_num(M,O,R,E)
    MONEY ← form_num(M,O,N,E,Y)
    IF SEND + MORE = MONEY THEN PRINT; STOP
SEND+MORE BRUTE FORCE CONSTRAINT
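The brute-force search is short enough to show in full Python; `itertools.permutations` already guarantees the digits are unique (letter ordering here is just chosen so the solution is found quickly):

```python
from itertools import permutations

def solve_send_more_money():
    """Try digit assignments for the 8 distinct letters until
    SEND + MORE = MONEY holds. Returns (SEND, MORE, MONEY)."""
    letters = 'MSENDORY'                     # M first: prunes M=0 early
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a['S'] == 0 or a['M'] == 0:       # leading letters non-zero
            continue
        send  = 1000 * a['S'] + 100 * a['E'] + 10 * a['N'] + a['D']
        more  = 1000 * a['M'] + 100 * a['O'] + 10 * a['R'] + a['E']
        money = (10000 * a['M'] + 1000 * a['O'] + 100 * a['N']
                 + 10 * a['E'] + a['Y'])
        if send + more == money:
            return send, more, money
    return None
```

A real CSP solver would propagate the column-wise carry constraints instead of enumerating all permutations.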
🧠
Neural Network
DEEP LEARNING · PERCEPTRON
NN

Basic perceptron training model. Computes weighted sums, applies activation functions, and updates weights via error correction.

INITIALISE weights, bias
FOR each training example:
    sum ← WEIGHTED_SUM(inputs, w)
    output ← ACTIVATION(sum)
    error ← target - output
    IF error: UPDATE(weights)
END FOR
RETURN trained model
PERCEPTRON ERROR RULE ACTIVATION
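A single-neuron sketch in Python with a step activation, trained on an AND gate (learning rate and epoch count are illustrative defaults):

```python
def train_perceptron(data, lr=0.1, epochs=20):
    """Perceptron error-correction rule: adjust weights and bias
    whenever the step-activated output disagrees with the target."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    step = lambda s: 1 if s >= 0 else 0
    for _ in range(epochs):
        for x, target in data:
            s = sum(wi * xi for wi, xi in zip(w, x)) + b   # weighted sum
            error = target - step(s)
            if error:                                      # misclassified
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
```

For linearly separable data such as AND, the update rule is guaranteed to converge; XOR, famously, is not learnable by a single perceptron.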
SEARCH STRATEGY

Algorithm Spectrum

🌐
Uninformed
BFS
🌐
Uninformed
DFS
⚖️
Uninformed
UCS
🧭
Informed
Greedy
🌟
Informed
A*
♟️
Adversarial
Minimax
✂️
Adversarial
α-β Pruning
KEY CONTRIBUTIONS

What Sets This Apart

🎛️
Interactive Inputs

Every algorithm accepts user-defined inputs at runtime — flexible graphs, custom heuristics, and adjustable parameters.

🗺️
Traversal Display

Step-by-step node visit sequence printed in real time, revealing how each strategy explores the search space.

📐
Complexity Analysis

Time and space complexity documented for every implementation with practical observations from actual runs.

🧩
Modular Design

Programs structured with clean, reusable functions — separation of concerns for easy understanding and extension.

🔀
Strategy Comparison

Conceptual side-by-side comparison of informed vs uninformed and optimal vs heuristic approaches.

📸
Execution Outputs

Screenshots of actual program runs included for every algorithm — see theory meet practice.