Over the past year, I’ve been increasingly interested in agentic AI — not just using large language models to generate text, but designing systems that can reason, decide, and act across a workflow with minimal human intervention.

Rather than starting with a theoretical exercise, I wanted to ground this exploration in a real, recurring problem I face as a Dynamics 365 practitioner: keeping up with Wave Release Plans.

Microsoft’s release documentation is comprehensive, but it’s also:

  • Lengthy (often 100+ pages)
  • Spread across multiple PDFs
  • Broad in scope, covering far more modules than most teams care about

So I built a small, agent‑inspired workflow in Python to automate the process end‑to‑end. The result is a project I call the Dynamics 365 Wave Release Summariser.

This post walks through the thinking behind the project, how the workflow is structured, and what I learned along the way.

What Do I Mean by Agentic AI?

Before diving into the code, it’s worth clarifying what I mean by agentic in this context.

This project is not:

  • A chatbot
  • A single prompt sent to an LLM
  • A human‑in‑the‑loop summarisation tool

Instead, it’s a system where:

  • Each step has a clear responsibility
  • Decisions are made programmatically
  • The LLM is used as a constrained reasoning component, not the orchestrator

In other words, the AI is part of a workflow — not the workflow itself.

Purists might argue that this isn’t “truly agentic” in the sense of autonomous agents dynamically planning and adapting at runtime — and they’d have a point. The workflow is largely deterministic by design. But adopting an agentic mindset — dividing responsibilities, constraining the LLM, and designing for autonomy over time — is a practical first step toward more capable agent‑based systems.

The Problem: Wave Release Notes at Scale

Each Dynamics 365 wave release includes:

  • Multiple PDFs
  • Hundreds of pages
  • Features spanning Sales, Customer Service, Marketing, Field Service, Power Platform, and more

In practice, most teams only care about:

  • A subset of modules
  • Features with real business impact
  • Breaking changes or behavioural shifts
  • When features actually become available

Manually extracting that information every six months is time-consuming and error-prone — and it’s exactly the kind of task that suits an automated, agent-driven approach.

Project Overview: Dynamics 365 Wave Release Summariser

At a high level, the project does the following:

  1. Downloads the official Wave Release Plan PDFs from Microsoft Learn
  2. Extracts raw text from those PDFs
  3. Filters content to only the modules I care about (for example, Sales and Customer Service)
  4. Uses OpenAI’s GPT models to generate a structured markdown summary

The final output is a clean, readable report that includes:

  • Feature descriptions
  • Business impact ratings (High / Medium / Low)
  • Availability dates
  • Breaking changes
  • A “Top 10” list of the most impactful changes

All of this runs locally with a single command.
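The filtering step (step 3) can be sketched as a simple keyword scan over the extracted pages. This is a minimal illustration, not the project’s actual code — the function name and page format are assumptions:

```python
def filter_pages(pages, modules):
    """Keep only pages that mention at least one tracked module."""
    wanted = [m.lower() for m in modules]
    return [page for page in pages if any(m in page.lower() for m in wanted)]
```

Dropping irrelevant pages this early means pages about untracked modules never reach the LLM, which matters for both quality and cost.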

Architecture and Agent Responsibilities

Rather than building one large script, I deliberately split the workflow into small, focused agents, each responsible for a single concern.

Configuration: config.py

This file defines the intent of the workflow:

  • Release year and wave
  • Which Dynamics 365 modules to track
  • Which OpenAI model to use
  • Output preferences

By centralising this configuration, the same workflow can be reused for future waves with minimal changes.
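A plausible shape for such a configuration file — the names and values here are illustrative, not the project’s exact contents:

```python
# config.py — declares the workflow's intent in one place (illustrative values).
RELEASE_YEAR = 2025
WAVE = 1
MODULES = ["Sales", "Customer Service"]   # which Dynamics 365 modules to track
OPENAI_MODEL = "gpt-4o-mini"              # which OpenAI model to use
OUTPUT_FILE = "wave_summary.md"           # output preferences
```

Reusing the workflow for the next wave then means editing two values rather than touching any agent code.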

Orchestration: main.py

main.py acts as the conductor.

It doesn’t do any heavy lifting itself. Instead, it:

  • Reads configuration
  • Calls each agent in sequence
  • Passes structured data between steps
  • Handles failures gracefully

This separation is intentional — orchestration logic should remain simple and predictable.
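A minimal sketch of the conductor role, with placeholder functions standing in for the real agent modules (the names and signatures are assumptions for illustration):

```python
# Placeholder agents — stand-ins for fetcher.py, text extraction, and summariser.py.
def fetch_pdfs(cfg):
    return ["wave1.pdf"]

def extract_text(paths):
    return "Sales: new forecasting feature..."

def filter_modules(text, modules):
    return text

def summarise(text, cfg):
    return "# Wave Summary\n..."

def run_pipeline(cfg):
    """Call each agent in sequence, passing structured data between steps."""
    try:
        paths = fetch_pdfs(cfg)
        text = extract_text(paths)
        relevant = filter_modules(text, cfg["modules"])
        return summarise(relevant, cfg)
    except Exception as exc:
        # The orchestrator owns error handling: fail in one place, clearly.
        raise SystemExit(f"Pipeline failed: {exc}")
```

Because the orchestrator only wires steps together, swapping one agent for another (say, a different PDF extractor) doesn’t touch the rest of the flow.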

Fetching Source Material: fetcher.py

This agent is responsible for:

  • Locating the correct Wave Release Plan URLs
  • Downloading the PDFs
  • Storing them locally for processing

Keeping this logic isolated makes it easy to adapt if Microsoft changes where or how the documents are published.
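A download step along these lines could be sketched with the standard library alone — the helper names are hypothetical, and a real fetcher would add retries and error handling:

```python
from pathlib import Path
from urllib.parse import urlparse
from urllib.request import urlretrieve

def local_path_for(url, dest_dir):
    """Derive a stable local filename from the PDF's URL."""
    name = Path(urlparse(url).path).name or "release-plan.pdf"
    return Path(dest_dir) / name

def fetch_pdfs(urls, dest_dir="downloads"):
    """Download each PDF once, skipping files already on disk."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    paths = []
    for url in urls:
        target = local_path_for(url, dest)
        if not target.exists():
            urlretrieve(url, target)
        paths.append(target)
    return paths
```

Skipping already-downloaded files keeps repeated runs cheap while iterating on the later stages.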

Summarisation and Reasoning: summariser.py

This is where the LLM comes into play.

Rather than asking GPT to “summarise everything,” the agent:

  • Feeds in filtered content only
  • Uses structured prompts
  • Requests output in a strict markdown format

The model is asked to:

  • Identify features relevant to the selected modules
  • Assess business impact
  • Highlight breaking changes
  • Rank the most impactful updates

The key point here is that the model is reasoning within constraints defined by the system, not deciding what the system should do next.
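A structured prompt of this kind might look like the following — the template wording is a sketch, not the project’s actual prompt, and the commented-out API call assumes the `openai` Python client and an `OPENAI_API_KEY` in the environment:

```python
from textwrap import dedent

# Illustrative prompt template: constrains the model to the tracked modules
# and to a fixed markdown output shape.
SUMMARY_TEMPLATE = dedent("""\
    You are summarising Dynamics 365 release notes.
    Only consider these modules: {modules}.
    For each relevant feature, output markdown with:
    - Feature: name and one-line description
    - Impact: High / Medium / Low
    - Available: preview / GA date
    - Breaking change: yes or no, with detail
    Finish with a "Top 10" ranked list of the most impactful updates.

    Content:
    {content}
    """)

def build_prompt(content, modules):
    return SUMMARY_TEMPLATE.format(modules=", ".join(modules), content=content)

# Hypothetical call shape (requires the openai package and an API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": build_prompt(text, MODULES)}],
# )
```

Fixing the output format in the prompt is what makes the result mergeable into a markdown report without post-processing.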

Why This Counts as an Agentic Workflow

What makes this project agentic isn’t the use of GPT — it’s the division of responsibility.

Each component:

  • Has a clear role
  • Operates independently
  • Produces structured output for the next step

The workflow can:

  • Run unattended
  • Be reconfigured without code changes
  • Be extended with additional agents (for example, posting summaries to Teams or Confluence)

This is a very different mindset from “prompt engineering” alone.

Running the Project

Usage is intentionally simple:

  1. Clone the repo
  2. Set your OpenAI API key in a .env file
  3. Configure the modules and wave details in config.py
  4. Run:
python main.py

The output is a markdown file containing a concise, decision-ready summary of the release notes.

Lessons Learned

Building this project reinforced a few key ideas about working with agentic AI in a practical, production‑minded way.

  • Agentic AI is about system design, not just model choice
    The most important decisions weren’t about which GPT model to use, but how responsibilities were divided across the workflow. Clear agent boundaries made the system easier to reason about, test, and extend.
  • LLMs are most effective when tightly constrained
    Pre‑filtering content and enforcing structured output dramatically improved the quality and consistency of the summaries. Treating the model as a reasoning component within defined limits produced far better results than asking it to “figure everything out.”
  • API costs need to be designed for, not discovered later
    Treating OpenAI usage as a metered resource shaped the architecture from the start. Filtering content before summarisation, estimating token usage up front, and supporting test and no‑API modes made the workflow safe to iterate on and sustainable to run repeatedly.
  • Agentic workflows shine on repeatable, high‑friction tasks
    This approach works best where humans are repeatedly doing the same cognitive work — reading, filtering, assessing impact, and summarising. Automating that loop is where agentic AI delivers real, compounding value.

Most importantly, this project reinforced that agentic AI doesn’t require complex frameworks — it requires clear thinking about responsibility and flow.

View the Project

If you’d like to explore the code or try the tool yourself, the full project is available on GitHub:

Dynamics 365 Wave Release Summariser

What’s Next?

There are plenty of directions this project could evolve in, particularly toward more autonomous behaviour:

  • Retrying failed downloads or API calls without manual intervention
  • Validating summary quality and re‑prompting if outputs don’t meet defined criteria
  • Comparing wave releases over time to detect meaningful changes automatically
  • Triggering downstream actions, such as notifications or documentation updates

Each of these moves the workflow closer to a system that doesn’t just execute steps, but actively manages its own outcomes.
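The first of these — retrying failed downloads or API calls — could start as a small wrapper like this (a sketch; a production version would catch specific exception types rather than all of them):

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Retry a flaky operation with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: let the orchestrator handle it
            time.sleep(base_delay * 2 ** attempt)
```

Wrapping the fetcher and summariser calls in something like this is a first step toward the workflow managing its own outcomes.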

This is very much an open project — and if you build or experiment with any of these ideas yourself, I’d genuinely love to hear about it.
