Why AVMs Lie: 4 Cases Automated Valuations Mislead Sellers

A homeowner in Madrid sent me a screenshot last quarter and asked me to confirm a number. Her one-bedroom apartment, according to a popular automated valuation tool, was worth €342,000. The same building, fourth floor, identical layout, had sold three weeks earlier for €289,000. She did not believe me when I told her. She believed the algorithm.

This is the conversation every broker now walks into. The seller has done their homework, which means they have typed their address into a free valuation site and received a number. That number is rarely accurate. Sometimes it is dramatically wrong. And the seller, sitting at their kitchen table waiting for you to arrive, has anchored their expectations to it.

The automated valuation model is not evil. It is a useful first approximation in markets with deep, recent, comparable transaction data. The problem is that real estate is full of cases where one or more of those conditions break down — and the AVM does not know it has broken down. It simply produces a confident number, with a tidy range, and the seller takes that as truth.

The model is mathematically correct and practically useless in four specific cases

There is a category error baked into how consumers read AVM outputs. The number is presented as “your home is worth X,” when what the algorithm actually computed is “homes statistically similar to yours, in a window of recent sales, traded at X.” Those are different statements. The first is a valuation. The second is a regression on whatever data was available to ingest.
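The difference between the two statements can be made concrete with a toy sketch. This is not any vendor's actual model; it is a minimal, invented illustration of "a regression on whatever data was available to ingest" — here reduced to a price-per-square-meter average over a handful of made-up sales.

```python
# Hypothetical sketch of what a consumer AVM roughly does: it does not
# appraise this home, it summarizes recent trades of similar homes.
# All figures and field names below are invented for illustration.

recent_sales = [
    {"sqm": 78, "price": 305_000},
    {"sqm": 80, "price": 298_000},
    {"sqm": 75, "price": 312_000},
    {"sqm": 82, "price": 301_000},
]

def naive_avm_estimate(subject_sqm, sales):
    """Average price per sqm over 'similar' sales, scaled to the subject.

    This computes the second statement from the text -- "homes
    statistically similar to yours traded at X" -- not a valuation
    of this particular home.
    """
    per_sqm = [s["price"] / s["sqm"] for s in sales]
    avg_per_sqm = sum(per_sqm) / len(per_sqm)
    return avg_per_sqm * subject_sqm

estimate = naive_avm_estimate(79, recent_sales)
```

Everything the real property is — condition, light, the unregistered extension — enters this computation only to the extent it is encoded in the input fields, which for most AVMs it is not.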

When the data is rich, recent, and the property is generic, the two statements converge. When any of those conditions fails, they diverge — sometimes by 5%, sometimes by 25%, occasionally by more. The four cases below are where the divergence shows up most reliably, and they are the cases brokers walk into every single week.

Case 1 — The unique property

Algorithms hate unique properties because uniqueness, by definition, has no comparable transactions to learn from. A standard three-bedroom apartment in a homogeneous neighborhood is the AVM’s best case. A heritage townhouse with a converted ground floor, an extension that was permitted in 2017 but never registered cleanly, and a small private garden is the AVM’s worst case.

What the algorithm does in those cases is fall back on the closest available comparables — usually properties that share two or three of the relevant features and miss the rest. It then averages the result, presents a confidence range, and hides the assumption underneath. The seller sees a clean number. The actual market value of that property is determined by which buyer walks through the door, what they value, and how thin the supply of similar homes is in that neighborhood that quarter.

I have seen unique properties sell for 40% above an AVM estimate and 30% below. The variance is not the algorithm’s fault. It is the consequence of asking a regression model to value something the regression has barely seen. A broker who arrives at that kitchen table with twelve specific local references and three recent in-person valuations of nearby properties knows things the model cannot know. The seller will not realize this until the broker explains it.

Case 2 — The thin local market

The AVM assumes its local comparable set is statistically meaningful. In neighborhoods where sales volume is low — secondary cities, smaller towns, the high end of any market — the comparable set may include only six or eight transactions in the relevant window. Eight comparables are not a market. They are a coincidence.

When you regress on eight points, a single outlier — a forced sale, a divorce settlement, a prestige property sold to an out-of-town buyer — distorts the entire output. The seller has no way to see that distortion. The model presents the same confident number it would produce in a neighborhood with 800 transactions. It does not say “low sample size, high uncertainty” in language any homeowner notices.
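The arithmetic of the thin-market problem is worth seeing once. The figures below are invented: seven ordinary sales clustered around €300,000 plus one forced sale at €220,000 in the same window.

```python
# Invented thin-market comparable set: seven ordinary sales plus one
# forced sale (e.g. a divorce settlement priced to clear quickly).

normal_sales = [310_000, 305_000, 298_000, 302_000, 295_000, 308_000, 300_000]
forced_sale = 220_000

mean_without = sum(normal_sales) / len(normal_sales)
mean_with = sum(normal_sales + [forced_sale]) / (len(normal_sales) + 1)

# How far the single outlier drags the "market value" estimate.
distortion_pct = (mean_without - mean_with) / mean_without * 100
```

In this sketch one outlier moves the average by more than 3%, over €10,000 on a €300,000 apartment. Drop the same outlier into a set of 800 comparables and the shift is a rounding error. The model's output looks identical in both cases.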

The kitchen table conversation in a thin market is where brokers earn their commission. You are not arguing against the algorithm — you are explaining what the algorithm cannot see. Recent sales that completed off-portal. The fact that two of the eight visible comparables were inheritance sales priced to clear quickly. The buyer profile currently active in the area, which determines what type of property is actually moving and at what speed. None of that is in the model.

Case 3 — The market in transition

Algorithms learn from the past. Markets in transition — interest rates moving, regulation changing, a major employer arriving or leaving, a new transit line opening — produce a six-month period during which yesterday’s comparables stop predicting tomorrow’s prices. The AVM, by design, will be lagging.

A neighborhood that has just become reachable by a new metro stop, a city that has just announced a major rezoning, a market that has just absorbed a fast interest rate move — these are all places where the AVM’s confidence number will be wrong, and confidently wrong, for two to four quarters before the algorithm’s training data catches up. During that lag, the seller looking at the website number is reading a snapshot of the previous era.
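The lag itself is mechanical, and a small invented example shows why. Suppose a model estimates value as a trailing average of the previous four quarters of local median prices, and the neighborhood reprices quickly after the metro stop opens. All numbers are made up; real AVMs are more sophisticated, but any backward-looking model has the same structural problem.

```python
# Invented quarterly median prices for a neighborhood after a new
# metro stop opens; the last entry is the current quarter.
quarterly_prices = [300_000, 300_000, 302_000, 330_000, 345_000]

def trailing_avm(prices, window=4):
    """Estimate = average of the previous `window` quarters.

    Excludes the current quarter -- a backward-looking model can
    only see closed, recorded transactions.
    """
    return sum(prices[-window - 1:-1]) / window

model_says = trailing_avm(quarterly_prices)
market_is = quarterly_prices[-1]
lag_pct = (market_is - model_says) / market_is * 100
```

In this sketch the model trails the current market by roughly 10% — the "snapshot of the previous era" the seller is reading on the website.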

This is also the most expensive case for a seller to misread. In a rising transitional market, listing at the AVM number leaves real money on the table — sometimes 8-15% of the home’s value. In a cooling transitional market, listing at the AVM number leaves the property sitting unsold for months while each successive price reduction erodes negotiating power. The broker who has had ten conversations with active buyers that month knows where the market is heading. The model knows where the market was.

Case 4 — The condition outlier

AVMs cannot see inside the property. They see the address, the registered square meters, the year of construction, sometimes a basic number of rooms — and that is it. They do not see the kitchen renovation, the new windows, the structural issue, the damp, the lack of natural light, the view from the terrace, or the fact that the building’s communal areas are stuck in 1987.

For a property in average condition for its building and neighborhood, this is a small error. For any property that deviates meaningfully from the local norm — either above or below — it is a large error. A €600,000 apartment with a €60,000 renovation done last year is not a €600,000 apartment to a buyer who walks through it. A €400,000 apartment with severe damp, a roof that needs replacing, and a kitchen from another decade is not a €400,000 apartment either, regardless of what the address-based model says.

This is where the broker’s physical visit produces information no algorithm can access. The 90 minutes inside the property — and the valuation conversation that surrounds it — generate a set of judgments that no AVM has the input to make. Condition, finish, light, layout, smell, noise, the feeling of the staircase. Buyers price all of these. Models cannot.

The conversation that wins the listing

Sellers do not arrive hostile. They arrive anchored. The mistake brokers make is treating the AVM number as a competitor to be defeated. It is not a competitor. It is the first number the seller heard, and your job is to make a more useful number land in the same place.

The opening line that works: “I am glad you have already looked at one of the algorithm valuations — it is a useful starting point, and it tells me you have done your homework. Let me explain what the algorithm cannot see in your specific case, and then we can compare what it produced against the work I did before this meeting.”

That sentence does three things in twelve seconds. It validates the seller’s research instead of attacking it. It signals expertise without arrogance. And it sets up the only frame in which the kitchen table conversation works — not “I am right, the website is wrong,” but “the website was a starting point, here is what a real valuation includes.”

The brokers who consistently win listings against AVM-anchored sellers are not better at arguing. They are better at explaining. They name which of the four cases applies. They show specific recent comparables. They describe the buyer activity they have seen in the last 30 days. They walk through the property with the seller and tie observations to price. By the end of the conversation, the seller has updated their internal estimate — not because they were defeated, but because they were given more information.

The AVM is not going away. Consumer expectations of an instant first number are now permanent. The broker’s job is no longer to be the first source of a number — that game is lost. It is to be the source of the right number, with the reasoning attached, in a conversation the seller could not have had with a website. That conversation is where the listing contract gets signed, and it is the same conversation to have whenever market data and asking price contradict each other: pure data is never the answer; the broker is.