AI-Presentation.github.io

The Evolution of Artificial Intelligence: From Logic Machines to Agentic AI

Introduction

Early Foundations (1940s–50s)

Dartmouth Conference (1956)

Symbolic AI (1956–70s)

First AI Winter (1970s)

Expert Systems (1980s)

Machine Learning (1990s–2000s)

Deep Learning (2010s)

Transformers & Generative AI (2017–2020s)

Agentic AI (2020s–Present)

Key Figures

Future Outlook

Large Language Models work by:

  1. Predicting the next token using the Transformer architecture.

  2. Learning from massive amounts of text data.

  3. Scaling up with billions of parameters and advanced hardware.

  4. Being fine-tuned with human feedback for reliability and alignment.
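The core idea in step 1, next-token prediction, can be illustrated with a toy sketch. This is a hypothetical example, not a real Transformer: a simple bigram model counts which token follows which in a tiny corpus, then converts those counts into probabilities to predict the most likely next token. A real LLM replaces the counting step with a Transformer trained on billions of parameters, but the prediction objective is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "massive amounts of text data" (assumption:
# whitespace tokenization; real models use subword tokenizers).
corpus = "the cat sat on the mat the cat ran".split()

# "Training": count how often each token follows each other token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    """Return the most probable next token and its probability."""
    counts = bigrams[token]
    total = sum(counts.values())
    probs = {t: c / total for t, c in counts.items()}
    best = max(probs, key=probs.get)
    return best, probs[best]

token, prob = predict_next("the")
print(token, round(prob, 2))  # → cat 0.67
```

In the corpus above, "the" is followed by "cat" twice and "mat" once, so the model predicts "cat" with probability 2/3. A Transformer computes the same kind of distribution over its whole vocabulary at every step, conditioned on the full preceding context rather than just one token.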

| Era | Dominant Approach | Paradigm Shift |
|-----|-------------------|----------------|
| 1950s–70s | Symbolic reasoning | Logic → Knowledge representation |
| 1980s | Expert systems | Rules → Domain expertise |
| 1990s–2000s | Machine learning | Hand-coded → Data-driven |
| 2010s | Deep learning | Manual features → Representation learning |
| 2020s | LLMs & Agentic AI | Prediction → Autonomous reasoning |