We are excited to announce the launch of our AI & Machine Learning Bootcamp. This intensive, hands-on program is designed to help technical professionals break into the world of applied AI.
The first course in this series is titled Applied AI: From Deep Learning Fundamentals to Agentic Systems, an 8-week journey into building real-world AI capabilities.

Applied AI: From Deep Learning Fundamentals to Agentic Systems

Key Points

  • The course runs for 8 weeks, with three 1.5-hour sessions per week, covering applied AI topics for tech professionals with no prior AI experience.
  • It includes theoretical and hands-on learning, using Python and PyTorch, focusing on NLP, CV, Generative Models, and Agentic AI.
  • The course starts with basics to ensure a smooth introduction, progressing to advanced topics, with hands-on projects to build practical skills.

Course Overview

Introduction and Structure

This Applied AI course is tailored for industry professionals with a tech background but no direct AI experience. It spans 8 weeks, with three 1.5-hour sessions per week, totaling 36 hours of learning. Each session combines theory and hands-on practice, using tools like Python, PyTorch, and Hugging Face libraries, ensuring you can apply AI concepts in real-world scenarios.

Topics Covered

The course covers these topics:

  • Natural Language Processing (NLP): Weeks 2-3 focus on architectures, fine-tuning, LLMs, and Retrieval Augmented Generation (RAG).
  • Computer Vision (CV): Weeks 4-5 cover Convolutional Neural Networks (CNNs), architectures, fine-tuning, and UNet for image tasks.
  • Generative Models: Week 6 explores GANs and Diffusion Models for creating new content.
  • Agentic AI: Week 7, aligned with 2024-2025 trends, focuses on using tools and LLMs to build AI agents and assistants, such as chatbots or task-performing systems.
  • Ethics and Wrap-Up: Week 8 addresses AI ethics and consolidates learning with discussions on future trends.

What to Expect in Each Session

Each session starts with a 45-minute theoretical lecture, covering concepts like neural network basics or LLM capabilities, followed by a 45-minute hands-on activity, such as building a simple CNN or fine-tuning a pre-trained model. Expect guided coding exercises, with materials like slides and notebooks provided, and instructor support for questions.

A Distinctive Feature

You’ll also explore how Agentic AI can integrate with external tools, like web search, to create more autonomous systems, which is a cutting-edge application not typically covered in beginner AI courses.

Course Structure and Duration

The course is structured over 8 weeks, with a total of 24 sessions, each 1.5 hours long, equating to 36 hours of learning. This duration was chosen to thoroughly cover the broad topics listed (NLP, CV, GANs & Diffusion, and Agentic AI) while allowing for a smooth introduction and a steady progression to advanced concepts. Running for 8 weeks, rather than a more compressed 6, ensures sufficient time for hands-on projects and deep dives, given that participants have a strong tech background but no direct AI experience.

Learning Outcomes and Objectives

The Course Learning Outcomes (CLOs) are designed to ensure participants achieve a robust understanding and practical skill set:

  1. Understand core AI and machine learning concepts, including deep learning principles.
  2. Develop proficiency in building, training, and fine-tuning AI models using PyTorch.
  3. Master NLP techniques, including working with LLMs and implementing Retrieval-Augmented Generation (RAG).
  4. Apply Computer Vision techniques, designing models like CNNs and UNet for image-related tasks.
  5. Explore generative AI, creating models like GANs and Diffusion Models for content generation.
  6. Build Agentic AI systems, focusing on using LLMs and tools to create autonomous agents and assistants.
  7. Address ethical implications, such as bias and fairness, applying responsible AI practices.

These outcomes ensure participants can integrate AI into their work, driving innovation and efficiency.

Weekly Breakdown and Session Details

The course is divided into thematic weeks, each with three sessions, balancing theory and hands-on practice. Below is a detailed breakdown, including content, hands-on activities, and expectations for each session.

Week 1 (Foundations of AI and Deep Learning):

Session 1:
  • Content: Definitions of AI, brief history, types of machine learning (supervised, unsupervised, reinforcement), real-world applications across industries.
  • Hands-On: Explore a simple ML example (e.g., linear regression in Python) to see concepts in action.
  • What to Expect: A broad, non-technical introduction to AI, setting the stage for deeper dives.
Session 2:
  • Content: Data preprocessing (cleaning, normalization), feature engineering, model training, evaluation metrics (accuracy, precision, recall), overfitting/underfitting.
  • Hands-On: Preprocess a dataset and train a basic model using scikit-learn.
  • What to Expect: Practical insights into ML workflows, starting with data preparation.
Session 3:
  • Content: Neural networks basics (neurons, layers, activation functions), training process (forward pass, backpropagation), PyTorch introduction.
  • Hands-On: Build and train a simple neural network in PyTorch (e.g., MNIST digit classification); a minimal sketch appears after this week's summary.
  • What to Expect: A first taste of deep learning, with guided coding to build familiarity with PyTorch.

This week ensures a smooth entry, starting with basics and introducing essential tools.
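As a preview of the Session 3 hands-on, the sketch below builds a small fully connected network in PyTorch and runs a single training step. The architecture and the dummy batch are illustrative stand-ins; the actual session would load MNIST through a DataLoader.

```python
# A minimal sketch of the Session 3 exercise: a small network for 28x28
# grayscale digits. Random tensors stand in for a real MNIST batch.
import torch
import torch.nn as nn

class DigitClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),          # 28x28 image -> 784-dim vector
            nn.Linear(784, 128),
            nn.ReLU(),
            nn.Linear(128, 10),    # 10 digit classes
        )

    def forward(self, x):
        return self.net(x)

model = DigitClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 1, 28, 28)       # dummy batch of 32 "images"
labels = torch.randint(0, 10, (32,))      # dummy labels
loss = loss_fn(model(images), labels)     # forward pass
optimizer.zero_grad()
loss.backward()                           # backpropagation
optimizer.step()                          # one optimization step
print(f"batch loss: {loss.item():.4f}")
```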

Given the breadth of NLP, it is split into two weeks for comprehensive coverage.

Week 2 (NLP Part 1):

  • Session 4: Introduction to NLP—tasks (sentiment analysis, translation), challenges, text preprocessing (tokenization, lemmatization). Hands-on: Preprocess a text dataset.
  • Session 5: Word embeddings (Word2Vec, GloVe), sequence models (RNNs, LSTMs). Hands-on: Implement a simple RNN for text classification in PyTorch.
  • Session 6: Attention mechanism and Transformer architecture (foundation of BERT, GPT). Hands-on: Use Hugging Face to explore a pre-trained Transformer model.
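The Session 6 hands-on could look something like the sketch below, which loads a pre-trained Transformer through the Hugging Face pipeline API. The checkpoint name is an illustrative choice, not necessarily the one used in class.

```python
# Exploring a pre-trained Transformer with a one-line Hugging Face pipeline.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # illustrative checkpoint
)
print(classifier("This course makes Transformers much less intimidating."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```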

Week 3 (NLP Part 2):

  • Session 7: Pre-trained models and fine-tuning (e.g., BERT). Hands-on: Fine-tune BERT for classification using Hugging Face.
  • Session 8: Large Language Models (LLMs)—capabilities, limitations, ethical concerns. Hands-on: Generate text with an LLM (e.g., GPT-2).
  • Session 9: Retrieval Augmented Generation (RAG)—concept, implementation, use cases. Hands-on: Build a simple RAG system with Hugging Face tools.
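To make the Session 9 idea concrete, here is a minimal RAG-style sketch: sentence embeddings retrieve the most relevant passage, and a small generator answers conditioned on it. The model names (all-MiniLM-L6-v2, gpt2) are assumptions made for illustration and may differ from the course materials.

```python
# Minimal retrieval-augmented generation: embed documents, pick the best
# match for a question by cosine similarity, then generate an answer.
import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import pipeline

documents = [
    "The course runs for 8 weeks with three 1.5-hour sessions per week.",
    "Week 7 covers Agentic AI, including tool use with LLMs.",
    "PyTorch and Hugging Face libraries are used throughout the course.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str) -> str:
    """Return the document most similar to the question."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    return documents[int(np.argmax(doc_vectors @ q))]

generator = pipeline("text-generation", model="gpt2")
question = "How long does the course run?"
prompt = f"Context: {retrieve(question)}\nQuestion: {question}\nAnswer:"
print(generator(prompt, max_new_tokens=30)[0]["generated_text"])
```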

Similar to NLP, CV is split for depth, covering basics and advanced topics.

Week 4 (CV Part 1):

  • Session 10: Introduction to CV—tasks, image data basics, preprocessing. Hands-on: Apply image filters (e.g., edge detection) in Python.
  • Session 11: Convolutional Neural Networks (CNNs)—convolution, pooling, basic architectures (e.g., LeNet). Hands-on: Build a simple CNN in PyTorch for classification (sketched after this list).
  • Session 12: Advanced CNN architectures (VGG, ResNet), transfer learning. Hands-on: Use a pre-trained ResNet for classification.
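As a preview of the Session 11 exercise referenced above, the sketch below defines a small CNN for 32x32 RGB images; the layer sizes and the dummy batch are illustrative.

```python
# A minimal CNN sketch: two conv/pool stages followed by a linear classifier.
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3x32x32 -> 16x32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x16x16
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = SimpleCNN()
dummy = torch.randn(4, 3, 32, 32)   # a dummy batch of 4 RGB images
print(model(dummy).shape)           # torch.Size([4, 10])
```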

Week 5 (CV Part 2):

  • Session 13: Fine-tuning CNNs for specific tasks. Hands-on: Fine-tune ResNet for a custom dataset (a sketch follows this week's summary).
  • Session 14: Image segmentation with UNet—architecture, applications. Hands-on: Implement a basic UNet for segmentation.
  • Session 15: Hands-on CV project—review concepts, complete a classifier or segmenter. Hands-on: Build and evaluate a CV project.

This ensures participants can design and adapt CV models, with practical work to solidify learning.
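The Session 13 fine-tuning pattern might look like the minimal sketch below: freeze a pre-trained ResNet backbone and replace its final layer with a new head sized for the custom dataset. The `weights=` argument assumes a recent torchvision release; older versions use `pretrained=True` instead.

```python
# Transfer learning sketch: keep pre-trained features, train only a new head.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5                                           # e.g. 5 custom categories
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():                          # freeze the backbone
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, num_classes)   # new trainable head

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```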

Week 6 (Generative Models):

This week focuses on creating new content, covering both GANs and Diffusion Models.

  • Session 16: Introduction to generative AI, brief on VAEs, focus on GANs (generator vs. discriminator). Hands-on: Explore pre-trained GAN outputs.
  • Session 17: Deep dive into GANs—training challenges, variants (e.g., DCGAN). Hands-on: Build a simple GAN to generate images.
  • Session 18: Diffusion Models—process, applications in image generation. Hands-on: Use a Diffusion Model via Hugging Face Diffusers to generate content.
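A Session 18-style generation with Hugging Face Diffusers could be as short as the sketch below; the checkpoint name is illustrative, and a CUDA GPU is assumed for reasonable speed.

```python
# Generate an image with a pre-trained diffusion model via Diffusers.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # illustrative checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                  # assumes a CUDA-capable GPU
image = pipe("a watercolor painting of a robot teaching a class").images[0]
image.save("generated.png")
```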

This week introduces cutting-edge generative techniques, with hands-on creation of new data.

Week 7 (Agentic AI):

Reflecting 2024-2025 trends, this week focuses on using tools and LLMs to create AI agents and assistants.

  • Session 19: Introduction to Agentic AI—definition, examples (chatbots, assistants), components (perception, action), role of LLMs. Hands-on: Simulate a basic agent using an LLM in a simple environment.
  • Session 20: Using LLMs for Agentic Behavior—planning, reasoning, decision-making, techniques like prompt engineering. Hands-on: Implement a task using an LLM to guide agent actions.
  • Session 21: Building AI Agents—designing LLM-based agents, integrating with tools (e.g., web search), handling external interactions. Hands-on: Build an agent that uses an LLM to answer questions or perform tasks with tools.
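To illustrate the Session 21 pattern without committing to a particular framework, here is a minimal agent loop in which an LLM decides whether to call a search tool before answering. `call_llm` and `web_search` are hypothetical placeholders; in class they would be wired to an actual LLM client and search tool (for example, via LangChain).

```python
# A framework-free agent loop: the LLM either requests a tool call or answers.
def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its text reply."""
    raise NotImplementedError("wire this to your LLM of choice")

def web_search(query: str) -> str:
    """Placeholder: return search results for the query as plain text."""
    raise NotImplementedError("wire this to your search tool of choice")

def run_agent(question: str, max_steps: int = 3) -> str:
    context = ""
    for _ in range(max_steps):
        decision = call_llm(
            "You may answer directly or request a web search.\n"
            f"Question: {question}\nContext so far: {context}\n"
            "Reply with 'SEARCH: <query>' or 'ANSWER: <answer>'."
        )
        if decision.startswith("SEARCH:"):
            # Run the tool and feed its output back into the next LLM call.
            context += "\n" + web_search(decision.removeprefix("SEARCH:").strip())
        else:
            return decision.removeprefix("ANSWER:").strip()
    return call_llm(f"Give your best answer to: {question}\nContext: {context}")
```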

This week aligns with current trends, ensuring participants can create autonomous systems, a key area in AI development.

Week 8 (Ethics and Wrap-Up):

This final week addresses ethical considerations and consolidates learning.

  • Session 22: AI Ethics—bias, fairness, transparency, responsible practices, case studies. Hands-on: Discuss ethical scenarios and mitigation strategies.
  • Session 23: Course Wrap-Up—recap key concepts, future trends (e.g., multimodal models), Q&A. Hands-on: None, focus on consolidation.
  • Session 24: Open Discussion—applying AI in industry, sharing insights, optional guest speaker. Hands-on: None, group discussion or expert talk.

This ensures participants reflect on AI’s societal impact and connect learning to their work.

Hands-On and Theoretical Balance

Each session is split into approximately 45 minutes of theory and 45 minutes of hands-on practice, ensuring a balanced approach. Hands-on activities include coding exercises like building neural networks, fine-tuning models, and creating agents, with materials provided (slides, notebooks, datasets). This structure supports the course’s theoretical + hands-on nature, catering to the participants’ need for practical skills.

Tools and Prerequisites

The course uses Python, PyTorch for deep learning, and Hugging Face for NLP and generative tasks, with additional libraries for Agentic AI (e.g., LangChain for agent frameworks). Prerequisites include familiarity with Python and basic tech concepts, but no prior AI experience, aligning with the target audience’s profile.
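A quick way to confirm the core tooling is installed (for example, via pip) is to import each library and print its version; LangChain is only needed from Week 7 onward.

```python
# Environment sanity check; install any missing package with pip.
import torch, transformers, diffusers, sklearn

print("PyTorch:", torch.__version__)
print("Transformers:", transformers.__version__)
print("Diffusers:", diffusers.__version__)
print("scikit-learn:", sklearn.__version__)
```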

Additional Considerations

  • Projects: Hands-on sessions build toward a portfolio of mini-projects, such as a fine-tuned NLP model, a CV classifier, generated images, and an LLM-based agent, enhancing practical application.
  • Support: Instructors are available for questions, with optional pre-reading for deeper dives, ensuring accessibility.
  • Ethics and Future Trends: Week 8’s focus on ethics and future trends, like multimodal models, prepares participants for responsible AI use and industry evolution.

This detailed design ensures a comprehensive, engaging, and practical AI learning experience, ready for implementation in professional settings.