
Summary 2025

  • Writer: Bjørnar Aassveen
  • 2 days ago
  • 3 min read

It’s getting close to Christmas. Even though it’s gray, wet, and the snow is still conspicuously absent here in Eastern Norway, you can feel things starting to calm down a bit. There are a few more empty seats on the train, the stores already have 50% off all Christmas decorations, and most of us still have one or two Christmas parties left before we can fully take time off.


It might feel more like October than December, but there’s still something special about this time of year. You get a chance to pause, look back, and maybe even appreciate everything you’ve managed to accomplish (and get annoyed about everything you didn’t). Between rib roast, aquavit, and a few too many Christmas parties, it’s nice to take a look back at some numbers.


Some numbers from 2025:

  • Blog posts in Norwegian: 24

  • Blog posts in English: 22

  • Unique readers: 2,480

  • Views: 3,455

  • Total reading time from unique readers: 10,365 minutes


That equals about 172.8 hours

Or about 7.2 days

Or roughly 1 week



🔭 A Glimpse into 2026


I believe 2026 will be the year when more organizations truly realize that AI is no longer “a project,” but an operational capability that impacts everything—from product development and customer engagement to reporting, case handling, and internal processes. And as AI moves from pilot to production, the risks shift as well: from “who shared what in Teams?” to “who gave an agent access to everything, and what did it do with it?”


What makes next year especially exciting is that Security Copilot will become more accessible and more integrated into security and compliance work through Microsoft 365 E3 and E5 licenses.

MCP, Agents, and Vibecoding: A New Attack Surface

One of the most interesting (and challenging) topics ahead is how organizations will manage MCP, agents, and vibecoding—in other words, when employees build, connect, and automate workflows with AI at high speed, often driven by “it works!” rather than architecture and control.

This introduces a new reality:

Code, integrations, and automation become mainstream—and so does security responsibility.

In practice, this means we need to start managing AI agents as operational entities with:

  • Identity

  • Access

  • Lifecycle

  • Logging

  • Attestation

  • A kill switch

Because agents aren’t just tools. They are active actors.
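As a thought experiment, the list above can be sketched in code. This is a minimal, hypothetical model (the `AgentRecord` class and its fields are my own illustration, not any Microsoft product or API): an agent gets an identity, explicit access scopes, a lifecycle expiry, an audit log, and a kill switch that revokes everything at once.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    """One agent treated as an operational entity (hypothetical model)."""
    agent_id: str                                        # identity
    scopes: set                                          # access
    expires: datetime                                    # lifecycle
    audit_log: list = field(default_factory=list)        # logging
    killed: bool = False                                 # kill switch state

    def act(self, action: str, now: datetime) -> bool:
        """Allow an action only while the agent is alive, unexpired, and in scope."""
        allowed = (not self.killed) and now < self.expires and action in self.scopes
        self.audit_log.append(f"{now.isoformat()} {action} allowed={allowed}")
        return allowed

    def kill(self) -> None:
        """Kill switch: immediately revoke all access."""
        self.killed = True
        self.scopes.clear()

agent = AgentRecord("invoice-bot", {"read:invoices"},
                    expires=datetime(2026, 12, 31, tzinfo=timezone.utc))
now = datetime(2026, 6, 1, tzinfo=timezone.utc)
print(agent.act("read:invoices", now))   # True
agent.kill()
print(agent.act("read:invoices", now))   # False: killed, even though unexpired
```

The point of the sketch is that every decision leaves a trace in the audit log, and the kill switch works instantly regardless of lifecycle state.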


Will Agents Take on a “Personnel/Process Leadership Role”?

I think that’s a fair assumption: that agents will eventually be treated as a type of digital workforce—similar to employees, just at a much larger scale.

Imagine organizations having to implement something resembling an HR and process framework for agents:


Agent Onboarding

  • What is the purpose?

  • What is the role?

  • Which systems can it interact with?

  • Which data is “off-limits”?


Job Description and Guardrails

  • Which actions can it perform?

  • Which actions require approval (human-in-the-loop)?

  • Which rules should be enforced automatically?


Access Control and Audits

  • What permissions does the agent currently have?

  • Does it still need them?

  • Has it inherited access through groups/roles it shouldn’t have?
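Those three audit questions boil down to a set comparison, which can be sketched in a few lines (the scope names here are invented for illustration):

```python
# Hypothetical access audit: compare what an agent has against what it needs.
granted = {"read:invoices", "write:invoices", "read:hr-files"}  # from groups/roles
needed  = {"read:invoices", "write:invoices"}                   # from its job description

excess  = granted - needed   # inherited access to question or revoke
missing = needed - granted   # access to add (should be empty here)

print(sorted(excess))    # ['read:hr-files']
print(sorted(missing))   # []
```

In practice the "granted" side would come from identity and security logs rather than a hardcoded set, but the review logic is the same.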


Offboarding

  • When the agent is no longer needed, how do we remove access, integrations, and data flows?


This is governance in practice—and here, the connection between Purview, identity, security logs, compliance, and operations becomes critical.



In the end, I think we’ll keep doing what we’ve always done: cleaning up too many Teams, searching for files in SharePoint, and hearing “can you see it now?” in countless meetings—with a coffee in hand and a gradual fading memory of what life was like before AI.


Merry Christmas🎅🤶

Bjørnar&AI



