The Future of Measurement & Verification in the Age of AI
We follow up on the submeter discussion from last time, and the reactions it sparked online, and expand the conversation to measurement & verification and how the field is changing in the age of AI.
Joined by guest Rasmus Gorm Pedersen from ProptechOS, our conversation includes perspectives on the practical application of data, regulatory impacts, and the importance of efficient energy-saving practices. Join us as we explore the nuances of this critical topic and the future of energy management.
00:00 Introduction and Podcast Setup
00:32 Recap of Previous Episode and Guest Introduction
01:16 Discussion on Submeters and Energy Consumption
03:17 Insights from the Conference on Measurement and Verification
08:37 Exploring AI and Data Utilization in Energy Management
12:52 Challenges and Benefits of Submeters
27:36 Measurement and Verification in Energy Management
46:10 Concluding Thoughts and Future Vision
As always, your hosts are:
Benedetto Grillone, AI Product Lead at Ento
Malte Frederiksen, CCO at Ento
Henrik Brink, CEO at Ento
Participants: Henrik, Benedetto, Malte, Rasmus
Henrik:
Agreed. So, you didn’t get the podcast room, Benedetto?
Benedetto:
I wanted to, but it’s five euros per hour, so I was wondering if it was worth it.
Apparently, if there’s a lot of background noise or a big lamp behind your head – like Malte had last time – then it can be worth it.
Henrik:
I deliberately booked a meeting room just because of that. So here we are.
Great.
Welcome to episode two. Today we already have a guest, and this is going to be a great discussion. Last time we had an interesting internal discussion about submeters. We were three people basically agreeing on the same things, which sparked a lot of discussion on LinkedIn.
So of course, what we needed to do was invite some of the people who might have other perspectives and hopefully get even more discussion going. We don’t necessarily want to spend the entire episode on submeters again, but we do think it’s important to continue while it’s still fresh in our minds.
Besides that, we also have a bunch of insights from the conference you did this week, Benedetto, about measurement and verification, that we want to share and discuss.
First, let’s introduce Rasmus. Thanks for joining – and especially at such short notice after Malte’s message: “Can you join the podcast in an hour?” So we really appreciate it.
Rasmus, you’re at ProptechOS now, and you’ve been in this business for a long time across the PropTech space: data collection, IoT, energy, security, and more. Can you give the audience a 30-second intro, and then we’ll jump into the discussion?
Rasmus:
Thank you for having me. It’s a real pleasure.
I actually had to go shave and get ready because I was just working from home, so I had to prepare a bit.
I’ve been in this business for many years, working with business-critical solutions within security, governance, and compliance. I’ve also worked a lot with estimated data, because this is really about how to utilize data to get a certain level of insight – and sometimes that level is enough.
Right now I work a lot with AI and “agentic” buildings. And in that context, it’s much better to have data that is maybe 95–98 percent correct, than to spend two years getting to 100 percent. We are in a situation of urgency; we can’t wait two years to save carbon emissions.
Henrik:
Exactly – so being efficient about it.
Okay, so Benedetto, you had this post on LinkedIn that sparked a lot of good discussions. What’s your takeaway from that?
Benedetto:
Yes, let’s talk a bit about that.
In the last episode, we introduced the conversation as a “mud fight”, and it turned out we actually agreed pretty much on everything. But of course, we all work at the same company and have worked with energy data in a certain way at Ento for many years, so we tend to agree a lot because we’ve already had these discussions internally.
The beauty comes when you bring those perspectives to people who haven’t been exposed to the way we think at Ento. That’s what created the debate on LinkedIn – around 40 comments back and forth with good arguments for and against submeters.
My thoughts are mainly about communication. When we say “submeters are not important”, we’re not saying it’s not important to measure cooling or heating needs. It’s actually the opposite.
When we started Ento with the mission to reduce energy consumption in buildings, we quickly realized that certain systems were driving the consumption. We had problems building models because we couldn’t explain all the consumption without introducing temperature as a variable.
It turned out that one building had a heat pump. That’s why we started including outside temperature as an explanatory variable for the main meter’s consumption. This is basically where it started back in 2019–2020, when we introduced machine learning models built on main meter data.
You could say we tried to “hack” our way around not having a submeter. If we had had a submeter on the heat pump, we wouldn’t have needed a model to normalize or estimate that heat pump’s consumption. That’s where the journey started.
So, long intro, but the point is: we actually care a lot about these important loads – the SEUs, the large energy-consuming units in buildings.
Henrik:
And I think one of the points we ended on in the last episode was that of course there are use cases for submeters – for billing or for problem-finding. But it might not be where you should start.
If you can start with existing equipment – even if you already have submeters – you might still only integrate the main meters first. When there is a problem, you can then extend. That nuance sometimes gets lost, but it’s important.
Rasmus, what’s your take?
Rasmus:
Exactly. What’s so important is that you go out and do an initial diagnostics. You tell the customer, “This is the potential you have.” Today, you can then ask your autonomous agents to look inside afterwards – but you do it in a way where you start with the largest potentials.
Whether it’s water leakage, which then drives a specific project afterwards, or heating return temperature, or something else – when you look at a big portfolio, you don’t know where to start. It takes years to get around to everything. So you need fast access based on available data.
And “available” data doesn’t necessarily mean public data – it can be data you already have but haven’t yet connected. That is what should drive energy management in the beginning.
And yes, I know this wasn’t a fight. Maybe we’re agreeing too much, so I’m not allowed to come again…
Henrik:
Exactly.
Rasmus:
No, I think bringing nuance into the discussion is important.
Henrik:
I just want to show one thing that supports what we're saying. When we say "you shouldn't do it", people ask how we can know the cooling consumption, or get insight into it, if we only have main meter data.
So maybe it’s helpful to show how we actually do it, because it’s based on a lot of research plus applied machine learning and AI.
I’ve pulled a figure from our system into a document. This is what we call a “cooling profile”. It basically extracts the effect of increasing outside temperature on energy consumption.
That’s done through various models and statistical tricks, but the output is this profile. From it we can see a few things.
On the x-axis we have outside temperature in degrees Celsius. On the y-axis we have normalized energy consumption, centered around zero. You see an increase from low temperatures up to a certain point, and then a steeper increase. That indicates a clear setpoint.
Mind you, this is a huge supermarket, not a tiny building. So this is a very clear signal from a very large building, just from main meter data plus weather variables.
We can see that in this system there is a setpoint around 13–14 degrees – typical for comfort cooling. Above that, you have a certain “slope” – how much extra energy is used per extra degree.
You also see a slope below that setpoint, which in this case – being a supermarket – is linked to refrigerators and freezers that still need energy even when it’s cold outside.
Finally, the profile shows changes over time: the grey dots are historical, and the colored line shows the current relation. That indicates changes in settings, efficiency, or equipment. It doesn’t detail which component changed, but it tells you about overall asset performance.
All of that, again, from main meter data plus weather.
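The profile Henrik describes can be sketched as a simple change-point regression: consumption rises with one slope below the balance temperature and a steeper slope above it. The following is an illustrative model on synthetic data, not Ento's actual system:

```python
import numpy as np

def fit_cooling_profile(temp, kwh):
    """Fit a change-point model: consumption rises with one slope below
    the balance temperature and a steeper slope above it.
    Returns (balance_temp, slope_below, slope_above)."""
    best = None
    for t0 in np.arange(temp.min() + 1, temp.max() - 1, 0.5):
        # Piecewise-linear basis: intercept, (T - t0) below, (T - t0) above
        below = np.minimum(temp - t0, 0.0)
        above = np.maximum(temp - t0, 0.0)
        X = np.column_stack([np.ones_like(temp), below, above])
        coef, *_ = np.linalg.lstsq(X, kwh, rcond=None)
        sse = np.sum((X @ coef - kwh) ** 2)
        if best is None or sse < best[0]:
            best = (sse, t0, coef[1], coef[2])
    _, t0, slope_below, slope_above = best
    return t0, slope_below, slope_above

# Synthetic daily data: balance point at 14 degrees C, plus a base slope
# from loads (like freezers) that run even in cold weather.
rng = np.random.default_rng(0)
temp = rng.uniform(-5, 30, 365)
kwh = 500 + 2.0 * temp + 8.0 * np.maximum(temp - 14, 0) + rng.normal(0, 5, 365)
t0, s_lo, s_hi = fit_cooling_profile(temp, kwh)
```

The grid search over candidate balance points is the crude part; real systems use more robust estimators and more drivers than temperature alone, but the recovered setpoint and slopes are exactly the quantities the cooling profile plots.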
Benedetto:
I think that’s a great example, Henrik, of how with what you might call “less data” – just the main meter – but more external data and good models, you can get insights you wouldn’t otherwise have.
I often use my own house as an example. I have a heat pump, and I show the graph in the app: you can see exactly how it behaves. Cooling here is just one factor – there are many variables that affect consumption – but main meter plus weather gets you very far.
Henrik:
And to connect this to some of the comments on the LinkedIn thread: for example ISO certification, where you need to measure and report on the large energy consumers in a building.
You can imagine translating a profile like this into an actual cooling consumption – both process cooling and comfort cooling. Then you effectively have that information, or at least a very good estimate.
And in many certifications, some of these values are estimated anyway. This would just be a highly informed estimate based on thousands of buildings and a lot of learned knowledge.
Benedetto:
That’s actually something we’ve already done for solar panels for years. Virtual meters on cooling and heating is something we haven’t introduced yet – maybe after today – but for solar we’ve done it.
A lot of solar installations don’t have dedicated production meters. We just see net consumption, so we need to “pull out” the production. That becomes a math problem.
We’ve done this for a few years now, and we can test the accuracy because about 10 percent of the solar sites on our platform do have production meters. We compare modelled versus measured production.
The rule of thumb we’ve seen is around a 5 percent deviation between estimated annual production and measured annual production. So it can be extremely precise – without installing any new hardware.
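The "pulling out" Benedetto describes can be illustrated as a regression problem: if net consumption is load minus production, and production scales with solar radiation, then the slope of net consumption against irradiance recovers the plant's output. A toy sketch with synthetic data and a hypothetical plant:

```python
import numpy as np

rng = np.random.default_rng(1)
hours = 24 * 365
# Toy irradiance profile: one sine cycle per day, daytime half positive (W/m2).
irradiance = np.maximum(0.0, np.sin(np.linspace(0, 2 * np.pi * 365, hours))) * 800
true_sensitivity = 0.05          # kWh produced per (W/m2 * h), hypothetical plant
load = 20 + rng.normal(0, 1, hours)   # building load in kWh/h
production = true_sensitivity * irradiance
net = load - production               # what the main meter actually sees

# Regress net consumption on irradiance; the negative of the slope is the
# plant's effective sensitivity, which yields a "virtual production meter".
X = np.column_stack([np.ones(hours), irradiance])
coef, *_ = np.linalg.lstsq(X, net, rcond=None)
virtual_production = -coef[1] * irradiance

deviation = abs(virtual_production.sum() - production.sum()) / production.sum()
```

On real sites, shading, soiling, and correlated load patterns make this harder, which is why the roughly 5 percent annual deviation Benedetto cites is a reasonable benchmark rather than a guarantee.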
Henrik:
Rasmus, you know a lot of these customers as well. What do you think their reaction is to this kind of approach? We spend a lot of time explaining why and how we do this. You don’t have to go into all the details every time, but how do you see it from your side?
What do we miss in the communication?
Rasmus:
First of all, I see a lot of possibilities for calculated or estimated meters. You can do it for different types of assets where you know the drivers: for solar panels you know size, manufacturer, inverter, you have solar radiation, and so on. That lets you do very precise calculations right away.
And this is, I would almost say, real-time data. You can then compare it after a few days when you get official data from the utility. Using this, you can scale fast and take geography into account.
I usually say that "eight o'clock in the morning" is not the same across locations. If you are in Esbjerg, on Bornholm, or in another country, the sun is in a different position – there can be half an hour's difference. So you need more sophisticated approaches to get the right insights.
If you have stores across the country, you can’t just say “I want an alarm at eight”. That might work for a school where everyone arrives at eight, but the sun schedule is different in the western and eastern parts of Denmark.
Henrik:
Yes, for sure. You definitely need local weather for all buildings in order to do this. But the interesting part is: you don’t even need the PV size, manufacturer, or inverter for the initial detection.
What we do is combine the main consumption meter with the most precise solar radiation data available, plus other variables like heating and cooling. From that we estimate a virtual PV production meter.
We can then detect changes because if something suddenly changes in the PV behavior, that’s where it matters – not only for reporting, but for saving energy and money.
Reporting can be important for some certifications, but the real importance for us is detecting when there are issues.
Rasmus:
Yes. That’s a good example, and it connects to a comment I made about water leaks: the risk of a solar panel not producing for three or four days is “only” lost money. You’ve paid for the system, and if it doesn’t work, you don’t get the benefit. Maybe it just needs cleaning.
But for water leaks you don’t want something running for two days. So those are two very different risk profiles, and maybe that’s one reason people can be skeptical.
Building owners don’t necessarily know these differences, so you need “show and tell”. Show that it works. Sometimes they need an actual meter to validate, and once they see that the estimate matches, they become confident. That’s just standard change management.
You need them to accept that “this is good enough”. Instead of waiting a year to install meters in 200 buildings, you can get these calculated meters in a few days.
Henrik:
And that ties to communication.
Rasmus, you’re very deep in this sector, and you might not even be aware that many water utilities today can actually send hourly data every hour. So even if you have only hourly resolution, you can still have it close to “real time”.
We have several customers in that "energy real-time" regime, where we can detect leaks and replace or complement existing leak detection systems. These are high-risk systems that many organizations feel they must have, but increasingly they can be replaced or backed up by this kind of analysis.
Rasmus:
Yes, and that’s a good point. Some utilities can provide this within an hour now – which is great. But we have to remember many building owners come from a world where they got data 24 hours later due to legacy systems and slow communication.
So some can get things fast now, and then there’s a challenge for you as a provider: depending on the type of building, you need a stable baseline. When is “normal” water consumption? Is 100 liters per hour okay or not?
Here there can be an argument for submeters or sensors. For example: you might have a leak alarm early in the morning, but if it’s an 8,000 square meter school, you might spend hours hunting for it. Then you find out it’s “that toilet again”.
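One common way to operationalize the "stable baseline" Rasmus mentions is to watch the night-time minimum flow, when legitimate consumption should be near zero: if it jumps well above its recent level, something is probably running. A rough sketch, where the baseline window and alarm threshold are illustrative assumptions:

```python
import numpy as np

def leak_alarm(hourly_liters, window_days=14, z=4.0):
    """Flag possible leaks when the night-time minimum flow jumps well
    above its recent baseline. Window and threshold are illustrative."""
    flows = np.asarray(hourly_liters, dtype=float).reshape(-1, 24)
    night_min = flows[:, 2:5].min(axis=1)      # minimum flow, 02:00-04:59
    mu = night_min[:window_days].mean()
    sigma = night_min[:window_days].std() + 1e-9
    return [(day, flow)
            for day, flow in enumerate(night_min[window_days:], start=window_days)
            if (flow - mu) / sigma > z]

# Synthetic month: quiet nights, busy days, and a leak starting on day 17.
rng = np.random.default_rng(2)
data = np.tile([5.0] * 6 + [100.0] * 18, (20, 1)) + rng.normal(0, 0.5, (20, 24))
data[17:] += 50.0
alarms = leak_alarm(data.ravel())
```

This tells you *that* something is leaking, not *where* – which is exactly the gap Rasmus points to: locating the leak in an 8,000 square meter school is where a local sensor or submeter can still earn its keep.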
Henrik:
Put the kids to work – they can run around and find the toilet…
Rasmus:
Exactly. I actually proposed something like that to one of the Danish manufacturers of leak sensors: get kids to mount them on the toilets.
But seriously, a bit of provocation is good, and that’s what you did with the LinkedIn post. On the same day, someone else did something similar about the Danish building regulation (BR18).
I’ve had a lot of positive feedback from people who say they love these discussions. They are tired of dog photos and beach pictures in their feeds. Now they get real, sober discussions that move the industry.
Henrik:
Yes. Malte and I spent a few hours on that LinkedIn discussion. It was fun – and if you read it through, you see that we basically agree on the fundamentals. It’s just about scope and starting point.
To sum it up: there’s been very positive feedback on having clear statements. A lot of content on LinkedIn is fluff. Having strong, clear positions drives real engagement – which is also why we’re doing this follow-up episode.
And as you said earlier: communication is key. Benedetto has a PhD in machine learning. I don’t have a PhD – I’ve just written a book about applied machine learning in industry. But many facility managers don’t have a background in statistics.
This graph alone is a lot of information. There’s a huge change-management job to be done. That’s part of why we keep seeing barriers: once you can show something like this cooling profile and say “we detected this without a cooling meter”, then perspectives really start to shift.
We’re “preaching to the choir” with you, Rasmus – you’re already convinced – but others aren’t. Communicating that “something is possible today that wasn’t possible a year or five years ago” is a big part of the goal.
Rasmus:
Exactly. And I have to get this Friday rant off my chest.
We’re now also getting more and more data from inside buildings, and I keep hearing that it takes days to prepare internal systems to provide data. That might have been true before – but now we have MCP servers.
Seriously: it takes 10 minutes to install an MCP server, connect it to the right systems like your calendar or BMS, and then you start prompting. Tools like Lovable – thanks, Malte, for introducing that – give you way faster access to data than you had six or seven months ago.
People coming into facility management now expect AI and machine learning. They don’t want to live in Excel and write macros to become the “Excel macro king” of the company. They want to execute.
We still need the deep knowledge of the people who’ve done this for years – that’s critical – but the future is bright because the tools are here. Sometimes we just need a bit of patience for adoption, but we also can’t be too patient because of the climate urgency.
Henrik:
I’ve been called impatient three or four times this week, and I’ve said “thank you” every time. I think it’s good to be impatient if we want to change the industry.
Rasmus:
Exactly.
Henrik:
Alright, should we move on?
Maybe, Benedetto, you can give a quick recap of what happened in Birmingham, and then we can pick a few discussion points from that.
Benedetto:
Of course.
We had a conference in Birmingham called “Verified”. It was all about verifying energy efficiency. It might sound a bit crazy that you can fill a conference just on that topic, but there are quite a few nerds like us who really care about energy savings and how to verify them.
Even there, there was a lot of discussion about submeters. You could clearly see the difference between legacy energy management and what we – and other AI-native energy-management companies – are trying to bring.
On submeters, there was a lot of focus on IPMVP options A and B, which say you should specifically measure the equipment that is producing the savings. Then I come in and say: I think we can do this by implementing AI to get very accurate estimations of those same meters.
I don’t think you strictly need a physical submeter in many cases.
The initial reaction is often: “Okay, so you hate submeters.” Submeters have been the backbone of energy management for 50 years, so people react when you question them.
But we don’t hate submeters. We hate wasted energy and inaction. Our goal is to bring the fastest path to value.
We don’t think that path starts with submeters. If you have them, great – we’re a data company, we will happily analyze submeter data. If you don’t, we have an alternative that gets you to action in a week.
So that was one of the main points I was trying to get across at the conference.
It was also striking how much M&V is still done manually with monthly billing data. Meanwhile, as we sat in the sessions, multiple verifications were being registered on our platform and fully processed within minutes. The contrast between legacy and AI-native approaches was very clear.
Henrik:
Malte actually pointed out that we should zoom out a bit here. You wrote your PhD on measuring and verifying energy savings using machine learning models, so you’re an expert in this field. Meanwhile, M&V isn’t widely done in many countries or by many building owners.
So we probably need to explain why it matters and how it’s done today before we go too deep.
Benedetto:
Yes, thank you.
The idea of M&V – measurement and verification – is to calculate consumption before a measure, then measure consumption afterwards, and then estimate the effect of what you did.
You can’t directly observe “what consumption would have been” if you hadn’t done anything. So you need a good statistical model of how the building behaves without the intervention. That’s the key challenge: estimating the counterfactual consumption.
Then you compare “what actually happened” with “what would have happened”, and the difference is your savings.
Henrik:
Rasmus, since we have you here – you’ve been doing energy management for many years in Denmark.
Why do you think M&V hasn’t been a huge part of energy management historically? Or am I wrong?
Rasmus:
It’s a good question.
I came into energy management through an acquisition. We acquired a company doing EPC projects. There, you must measure precisely. You need approved meters, known accuracy, and trusted calculations. You argue about baselines, sharing gains, and whether you saved 15 or 20 percent.
But in recent years, I haven’t seen as much of that focus. There are so many uncertainties – the baseline year, weather, occupancy, etc. If your baseline is 2013 or 2020, your 10 percent target looks different.
I don’t see as much focus on proving savings to the last decimal, partly because of all the non-energy benefits, and partly because the agenda has shifted toward climate. We want to save the planet, not just our energy bill.
Maybe the financial controllers are less involved in these projects now – I don’t know.
And I have a question for you, Benedetto. Years ago I worked on a project where we used clamp-on sensors with 1–5 percent accuracy, and there was a debate that they weren’t precise enough because we promised 3 percent savings. We were told we couldn’t use them for M&V.
Was that because at the time everything was focused on financial precision and not on climate impact?
Benedetto:
I think it’s almost unreasonable to say you want to prove 3 percent savings with that level of precision. Buildings are living, complex systems. It’s really difficult to get that accuracy, especially because uncertainties compound.
You have uncertainty from sensors, from models, and from variables you don't measure. Together these often amount to around 3 percent in total. So if your target is to prove 2–3 percent savings, it's going to be difficult.
One advantage of IPMVP option C – whole-facility M&V – is that it uses utility data. It’s the utility’s responsibility to ensure metering accuracy because billing depends on it. So you don’t need to maintain or calibrate your own meters – you can trust the utility readings.
That’s another example of the value of whole-building assessments.
Henrik:
Two thoughts on that.
First: Rasmus mentioned a “downfall” in M&V. I think that’s true in some markets, but others are going the other way.
In Denmark, many of our customers don't use formal protocols unless it's for EPC projects, and those are not very popular in the Nordics. In markets like Italy or the UK, much more responsibility sits with external parties: BMS providers, ESCOs, and so on. That drives more M&V, because performance contracts require you to prove the savings.
So there’s a structural market difference.
Second: The rule of thumb in IPMVP is that option C is not recommended if you’re targeting small savings. I think the guideline is that if you aim for less than around 10 percent, whole-building isn’t ideal because of uncertainty. For larger savings, it works well.
Benedetto:
Yes, the guideline is 10 percent. But the protocol hasn’t been updated to account for modern data logging and analysis methods. With today’s capabilities, that threshold could probably be lower.
Henrik:
And that’s why we spend time calculating uncertainties as well. There’s a whole research track behind that, but for users the key thing is: we show the estimated savings and an uncertainty band – essentially a 95 percent confidence range.
Also, when I said “instead of legacy we can do it with AI”, I want to emphasize that it’s not “magic AI” – it’s the same type of models as the cooling profile from earlier. M&V algorithms are built on the same modeling approach. They’re just structured for “before vs after” comparisons and trained to separate changes due to weather or operations from changes due to interventions.
So it’s not necessarily more complex – just systematically applied.
We could probably do a full episode on this alone, because there is a lot to unpack.
Rasmus:
Two short comments.
First, what is driving energy management now is not just money – the 2, 5, or 10 percent savings – but also regulations: ISO 50001 requirements, EPBD, ESG reporting. If you don’t do this, you end up with bigger risks elsewhere.
Second, facility operators often say that around 20 percent of total savings are “micro-savings” – all the small actions people take. Systems like Ento can find big savings, but you might add another 20 percent from everyday actions: closing a valve, fixing a small leak, adjusting a setting.
That’s important to remember. People walking around the sites do a great job, but their contribution often isn’t measured. I sometimes say to customers: if you start properly accounting for those, you’ll easily “save” another 10 percent in your reporting.
Benedetto:
I have to comment on that.
What Henrik showed earlier was project-level M&V. After the energy crisis, we did something fun: we ran M&V at portfolio level. We introduced “performance verifications” across sites.
A client had implemented some very aggressive measures across all their shops. We wanted to measure the effect, but we couldn’t attribute which person did which action in which shop. So we took hundreds of buildings and compared a “before” period with an “after” period on a portfolio level.
It wasn’t one single measure – it was all the actions combined. We could see the effect of all those micro-savings aggregated.
They could then use that in two ways: reward those who performed well, and later identify locations where performance dropped again and investigate why.
So yes, people on the ground matter a lot – and you can quantify their combined impact.
Rasmus:
Exactly. But we don’t have time to do everything the old-fashioned way. We have to adapt and move faster than before.
Fifteen years ago, installing 600 meters might take three years. We’re in a situation of urgency now, and the technology is here to help us. We’ve seen how fast tech is evolving.
The good news is: we can react much faster today than we could five years ago.
Henrik:
Benedetto, any final comments before we close?
Benedetto:
Yes. If we zoom out and think more about outcomes and less about tools, the real question is: how can we save the most energy for the least investment?
That’s crucial for public authorities with limited taxpayers’ money, and also for private companies whose profitability depends on the cost–revenue balance.
We need to start judging energy-management systems not by how many dashboards they have, but by how many euros they can verify in savings per week or per month.
That depends on how fast we can get the data, how fast we can get actionable insights, and then how fast we can verify that actions had the intended impact.
That’s our vision for the sector over the next few years.
Henrik:
Great. Amen. Perfect ending.
And now I can still make it to lunch, so all good.
Thanks a lot, Rasmus, for joining, and thanks for the discussion. See you next time.
Rasmus:
Thanks.
Benedetto:
Thanks.
