A buyer’s guide to separating real AI from re-packaged automation in video surveillance.
BY DAVE KEATING – REFRAIME – APRIL 2026
THE AI GOLD RUSH HAS A CREDIBILITY PROBLEM
Open any product brochure in the physical security industry today and you will find the letters “AI” on virtually every page. Artificial intelligence has become the most overused (and least understood) term in our sector. The result is a credibility problem that hurts everyone: buyers who cannot distinguish between genuine capability and dressed-up legacy technology; vendors with real innovation who get lost in a sea of inflated claims; and an industry whose collective reputation suffers every time a client deploys an “AI-powered” system that performs no differently from the motion detection they had a decade ago.
This article is not a product pitch. It is a practical guide written from inside the industry, intended to give security professionals, integrators and end-users the vocabulary and the questions they need to separate substance from spin. The technology itself is genuinely transformative, but only if you know what you are actually buying.
THE FUNDAMENTAL DISTINCTION: PROGRAMMED VS. TRAINED
The single most important question a buyer can ask is deceptively simple: “Was this system programmed to detect this, or was it trained to detect it?” That question alone will reveal more about what you are evaluating than any feature list or marketing video.
For decades, video analytics has relied on deterministic, rule-based systems. Motion detection, tripwire logic, pixel-change thresholds and basic object classification using classical computer vision are all examples. A developer writes a rule: “if an object of a certain pixel-block size crosses this predefined line, trigger an alarm” – and the camera follows it. These are valuable tools, and they have served the industry well. But they are not artificial intelligence. They are programming.
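To make the distinction concrete, here is a minimal Python sketch of what a rule-based tripwire amounts to. Every number in it is a hand-written threshold chosen by a developer, and the frame representation and zone coordinates are illustrative rather than drawn from any particular product.

def pixels_changed(prev_frame, curr_frame, zone, diff_threshold=30):
    # Count pixels inside a rectangular zone whose grayscale value changed
    # by more than diff_threshold between consecutive frames.
    x0, y0, x1, y1 = zone
    changed = 0
    for y in range(y0, y1):
        for x in range(x0, x1):
            if abs(curr_frame[y][x] - prev_frame[y][x]) > diff_threshold:
                changed += 1
    return changed

def tripwire_alarm(prev_frame, curr_frame, zone=(100, 40, 220, 60), min_changed=200):
    # The entire "intelligence": if enough pixels change inside the zone that
    # represents the line, raise an alarm. Nothing here was learned; the
    # system simply follows the rule its developer wrote.
    return pixels_changed(prev_frame, curr_frame, zone) >= min_changed

However the brochure describes it, logic of this shape can only ever detect what its thresholds were written to detect.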
The meaningful shift occurs when a system moves from following explicit instructions to learning patterns from data and making inferences that were never explicitly coded.
Consider the difference: a rule-based system can detect that someone has been stationary in a defined zone for longer than a threshold you set. A trained AI model, having learned from thousands of hours of real surveillance footage, can recognise that a person loitering outside a building at two in the morning, repeatedly checking their phone and glancing at entry points, is exhibiting suspicious precursor behaviour – not because anyone wrote a loitering rule, but because the model has learned what that constellation of behaviours looks like. That is a fundamentally different capability.
“Was this system programmed to detect this, or was it trained to detect it? That single question will tell you more about what you’re buying than any product brochure.”
The distinction matters enormously because it determines what a system can and cannot do. A programmed system can only detect scenarios its developer anticipated. A trained system can generalise to novel situations within the domain it was trained on. When a vendor markets pixel-change motion detection as “AI-powered analytics,” they are selling familiarity in new packaging – and the buyer pays an innovation premium for legacy technology.
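By contrast, a trained system's behaviour lives in learned weights rather than hand-written thresholds. The Python sketch below shows the shape of inference against a hypothetical behaviour classifier built with PyTorch; the embedding size, class labels and randomly initialised weights are placeholders standing in for a model trained on real footage.

import torch
import torch.nn as nn

# Hypothetical behaviour classifier. In a real system the weights come from
# training on labelled surveillance footage; here they are random placeholders.
BEHAVIOUR_CLASSES = ["normal_transit", "loitering_precursor", "forced_entry_attempt"]

classifier = nn.Sequential(
    nn.Linear(512, 128),  # 512-dim video-clip embedding in (illustrative size)
    nn.ReLU(),
    nn.Linear(128, len(BEHAVIOUR_CLASSES)),
)

def classify_clip(clip_embedding):
    # No rule anywhere says what "loitering" is: the mapping from input
    # features to behaviour class lives entirely in the learned weights.
    with torch.no_grad():
        probs = torch.softmax(classifier(clip_embedding), dim=-1)
        conf, idx = probs.max(dim=-1)
    return BEHAVIOUR_CLASSES[idx.item()], conf.item()

label, confidence = classify_clip(torch.randn(512))  # stand-in for a real embedding

Note that there is no loitering rule anywhere in that code. Whether the model can actually tell a loitering precursor from someone waiting for a lift home depends entirely on what it was trained on, which is exactly why the questions in the next section matter.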
FIVE RED FLAGS THAT YOUR VENDOR IS SELLING HYPE
Through years of working with clients who have been burned by inflated claims, and through candid conversations across the industry, a clear pattern of warning signs has emerged. If you encounter any of the following during a vendor evaluation, proceed with caution.
1. THEY CANNOT EXPLAIN HOW THEIR SYSTEM WAS TRAINED
A genuine AI vendor should be able to describe, in reasonable terms, what data their models were trained on, what training methodology was used, and how the model was validated. They need not disclose proprietary architectures, but if a vendor cannot articulate the difference between their “AI” and a set of configurable rules, that tells you everything. Ask specifically: “What datasets were used to train this model and how do you ensure those datasets are representative of my operating environment?” If the answer is vague or deflective, the system is likely rule-based automation wearing an AI label.
2. THE ‘AI’ FEATURES ARE RE-BRANDED LEGACY ANALYTICS
Watch for features that sound impressive but are functionally identical to capabilities that have existed for a decade. Tripwire detection re-labelled as “intelligent perimeter analysis.” Basic people-counting re-named “AI occupancy management.” Object-left-behind detection marketed as “predictive threat intelligence.” The language changes; the underlying technology does not. Ask for a live demonstration that shows the system doing something a traditional rule could not achieve. If they cannot, you are looking at a re-brand.
3. NO TRANSPARENCY ON ACCURACY METRICS
Any system that makes claims about detection or classification should be able to substantiate those claims with measurable performance data. Ask for false positive and false negative rates, ideally from deployments comparable to your own environment. A vendor who cannot or will not provide these figures either has not tested rigorously, or knows the numbers would not inspire confidence. Equally, be wary of accuracy claims presented without context: “99.5% accuracy” means nothing without knowing the test conditions, the dataset and the operational environment in which that number was achieved.
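A worked example, with invented numbers, shows how a headline figure can hide the story. Suppose 10,000 clips from a site were reviewed and only 100 contain a genuine intrusion:

# All counts below are invented, purely to illustrate the arithmetic.
true_positives = 60     # intrusions correctly flagged
false_negatives = 40    # intrusions the system missed
false_positives = 50    # benign clips incorrectly flagged
true_negatives = 9850   # benign clips correctly ignored

total = true_positives + false_negatives + false_positives + true_negatives
accuracy = (true_positives + true_negatives) / total                        # 0.991
false_negative_rate = false_negatives / (true_positives + false_negatives)  # 0.40
false_positive_rate = false_positives / (false_positives + true_negatives)  # ~0.005

print(f"{accuracy:.1%} accurate, yet it misses {false_negative_rate:.0%} of real intrusions")

A vendor quoting the 99.1% figure without the 40% miss rate is not lying, but they are not informing you either.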
4. NO AUDIT TRAIL OR EXPLAINABILITY
In security, understanding why a system raised an alert is as important as the alert itself. Genuine AI systems should provide a clear record of what was detected, what confidence level was assigned, what contextual factors contributed to the inference, and what action was taken or recommended. If a vendor’s system produces alerts with no accompanying reasoning or audit trail, it creates both an operational and a compliance liability. In regulated environments, “the AI decided” is never an acceptable answer. You need a timestamped, interrogable record – and that requirement is only going to intensify as privacy legislation such as POPIA and GDPR extends its reach into the surveillance domain.
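As a rough illustration of the minimum such a record should carry, consider the Python sketch below. The field names are invented for this article, not a standard schema.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AlertRecord:
    # One interrogable row in an audit trail; every field name is illustrative.
    timestamp: datetime
    camera_id: str
    detection: str                 # e.g. "loitering_precursor"
    confidence: float              # model-assigned confidence, 0.0 to 1.0
    contextual_factors: list       # e.g. ["after hours", "repeated approach"]
    action_taken: str              # e.g. "escalated_to_operator"

record = AlertRecord(
    timestamp=datetime.now(timezone.utc),
    camera_id="cam-07",
    detection="loitering_precursor",
    confidence=0.83,
    contextual_factors=["after hours", "repeated approach to entry point"],
    action_taken="escalated_to_operator",
)

If a vendor cannot show you something equivalent for every alert their system raises, treat that as a red flag in its own right.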
5. THEY PROMISE PERFECTION
This may be the most reliable warning sign of all. Any vendor who claims zero false alarms, 100% detection accuracy, or a system that “never misses” is either uninformed or being dishonest. Every AI system has failure modes. Every model has edge cases where it underperforms. The question is not whether errors occur, but how a vendor manages them: through confidence thresholds, human-in-the-loop escalation, continuous retraining, and transparent performance reporting. A vendor who acknowledges limitations and shows you how they mitigate them is one you can trust. A vendor who claims they have none is one you should walk away from.
WHAT GENUINE AI CAPABILITY ACTUALLY LOOKS LIKE
The red flags describe what to avoid, but it is equally important to understand what a well-built AI system should offer. Buyers should look for several key characteristics.
CONTEXTUAL INFERENCE, NOT JUST DETECTION. The system should demonstrate an ability to interpret behaviour in context, not merely trigger on isolated events. A person running in a school corridor is very different from a person running in a gymnasium. A genuine AI system distinguishes between the two; a rule-based system cannot.
CONFIDENCE SCORING AND GRADUATED RESPONSE. Rather than binary alarm or no-alarm outputs, the system should assign confidence levels to its inferences and route events accordingly. High-confidence events can be actioned; lower-confidence events should be escalated for human review. This graduated approach dramatically reduces operator fatigue and improves the quality of human decision-making.
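A Python sketch of what that routing logic looks like; the thresholds are illustrative, not recommendations:

def route_event(detection, confidence, action_threshold=0.90, review_threshold=0.60):
    # Graduated response instead of binary alarm-or-nothing. Real systems
    # tune both thresholds per site and per event class.
    if confidence >= action_threshold:
        return f"AUTO-ACTION: {detection}"      # high confidence: act immediately
    if confidence >= review_threshold:
        return f"OPERATOR REVIEW: {detection}"  # medium confidence: human in the loop
    return f"LOGGED ONLY: {detection}"          # low confidence: record, do not interrupt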
CONTINUOUS LEARNING FROM DEPLOYMENT-SPECIFIC DATA. A model that was trained once and deployed statically will drift over time as environments change. Genuine AI platforms incorporate feedback loops that allow models to improve based on real-world performance data from the specific site where they are deployed. This is not a luxury feature – it is fundamental to long-term reliability.
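Even a deliberately crude sketch conveys the principle: measure the deployed model's behaviour continuously and flag when it moves. The z-score check below is far simpler than the drift statistics production systems use, but the feedback-loop idea is the same.

from statistics import mean, stdev

def drift_alert(baseline_confidences, recent_confidences, z_threshold=3.0):
    # Crude drift check: flag when the recent mean confidence sits more than
    # z_threshold baseline standard deviations from the baseline mean.
    # Assumes at least two baseline samples; the threshold is illustrative.
    mu, sigma = mean(baseline_confidences), stdev(baseline_confidences)
    if sigma == 0:
        return False
    return abs(mean(recent_confidences) - mu) / sigma > z_threshold

When a check like this fires, the remedy is retraining on site-specific data, not a configuration workaround.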
PRINCIPLED HUMAN-IN-THE-LOOP DESIGN. The best AI systems are not designed to replace human operators, but to transform what they do. By handling the volume and speed of data processing, AI frees operators to focus on judgement, strategy and the consequential decisions that require human accountability. The operator who once triaged a hundred motion alerts per shift now responds to a handful of high-confidence, contextualised incidents. They are not redundant – they are elevated.
“An operator who doesn’t trust the AI will override it constantly, eliminating its value. An operator who trusts it blindly introduces unacceptable risk. Building calibrated trust is as much a product challenge as it is a technology one.”
THE CONTEXT QUESTION: WHY ‘IMPORTED INTELLIGENCE’ OFTEN FAILS
There is a dimension to the AI evaluation conversation that is frequently overlooked and particularly relevant in African and emerging markets: the conditions under which a model was trained profoundly affect how it performs in your environment.
Most commercial AI models in the surveillance space were developed for first-world infrastructure, first-world lighting conditions, and first-world environments. When those models are deployed in contexts characterised by variable power supply, lower bandwidth, informal settlement perimeters, mixed-use commercial and residential spaces, and significantly different lighting conditions, their performance degrades – sometimes dramatically. This is not a theoretical concern. It is a daily operational reality across much of the African continent and many other emerging markets.
Buyers operating in these environments should ask a specific and pointed question: “Where was this model trained, and under what conditions?” A system that achieves impressive benchmarks in controlled Northern European or North American environments may underperform significantly when confronted with the realities of an African deployment. Localisation is not a marketing angle – it is a technical requirement for model fitness. Solutions built for complex, resource-constrained environments are, almost by definition, more resilient when deployed globally. The reverse is rarely true.
BEYOND THE CAMERA: THE OPERATIONAL INTELLIGENCE OPPORTUNITY
For buyers evaluating AI investments, it is worth understanding that the technology’s value extends well beyond traditional security use cases. The same visual intelligence that analyses a live video stream for intrusion detection can extract operational data from the same footage: movement patterns, workflow bottlenecks, spatial utilisation, workforce compliance over time, and real-time process efficiency metrics.
Clients in motor, retail, logistics and manufacturing sectors are increasingly recognising that the camera infrastructure they have already paid for is an underutilised data source. When paired with genuine AI (not re-branded motion detection), that infrastructure becomes an operational intelligence layer that serves security, operations, compliance and executive decision-making simultaneously. The ROI narrative shifts from “cost of security” to “value of operational visibility,” and that shift is only possible when the underlying technology is genuinely intelligent.
This is also where the distinction between programmed and trained systems has its starkest commercial implication. A rule-based system can count people or trigger alarms. A trained system can interpret what is happening and why – and that interpretive capability is what transforms video data from a security feed into a business asset.
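A small sketch makes the point. The event stream below is an invented format for person detections, and the same records that drive security alerting fold into an occupancy metric with a few lines of aggregation:

from collections import Counter
from datetime import datetime

# Invented event format: (timestamp, zone, track_id) emitted by a person detector.
detections = [
    (datetime(2026, 4, 1, 9, 0), "loading_bay", "t1"),
    (datetime(2026, 4, 1, 9, 5), "loading_bay", "t1"),
    (datetime(2026, 4, 1, 9, 5), "showroom", "t2"),
    (datetime(2026, 4, 1, 10, 0), "showroom", "t3"),
]

def hourly_occupancy(events):
    # Same footage, different question: unique people per zone per hour.
    unique = {(ts.replace(minute=0, second=0), zone, track) for ts, zone, track in events}
    return Counter((hour, zone) for hour, zone, _ in unique)

for (hour, zone), count in sorted(hourly_occupancy(detections).items()):
    print(f"{hour:%H:%M} {zone}: {count} unique visitor(s)")

The security system and the operations dashboard consume the same inference output; only the aggregation differs.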
TEN QUESTIONS TO ASK BEFORE YOU SIGN
Distilled from the preceding analysis, here are ten questions that any buyer, whether an integrator, a security director, or a facilities executive, can and should put to any vendor claiming AI capability. The answers will tell you whether you are looking at genuine intelligence or lipstick on a pig.
1. PROGRAMMED OR TRAINED? “Was this system programmed to detect this scenario, or was it trained on data to recognise it? Can you explain the difference in how your system works?”
2. WHAT DATA? “What datasets were used to train your models, and how representative are they of my specific operating environment, geography and conditions?”
3. SHOW ME SOMETHING A RULE CANNOT DO. “Can you demonstrate, live, a detection or inference that would be impossible to achieve with a traditional rule-based configuration?”
4. WHAT ARE YOUR ACCURACY NUMBERS? “What are your documented false positive and false negative rates, and were those measured in controlled lab conditions or in comparable real-world deployments?”
5. WHAT HAPPENS WHEN THE AI IS WRONG? “How does your system handle low-confidence detections? Is there a graduated escalation path, or is it binary alarm-or-nothing?”
6. CAN YOU SHOW ME THE AUDIT TRAIL? “For any given alert, can your system show me a timestamped record of what was detected, at what confidence level, what contextual factors were considered, and what action was taken?”
7. DOES IT LEARN FROM MY ENVIRONMENT? “Once deployed, does your model incorporate feedback from my specific site to improve over time, or is it a static deployment?”
8. WHERE WAS IT TRAINED? “Were your models developed and validated for conditions comparable to my operating environment, including lighting, infrastructure and bandwidth constraints?”
9. HOW DOES IT FIT WITH MY EXISTING INFRASTRUCTURE? “Does your solution integrate with my current VMS and control room architecture, or does it require a rip-and-replace?”
10. WHAT CAN’T IT DO? “What are the known limitations or failure modes of your system? How do you monitor for and address model drift over time?”
“A vendor who acknowledges limitations and shows you how they mitigate them is one you can trust. A vendor who claims they have none is one you should walk away from.”
INTENT MATTERS MORE THAN TECHNOLOGY
The surveillance industry is at an inflection point. Genuine Artificial Intelligence – the kind that learns, adapts and makes inferences that were never explicitly coded – is real, it is here, and its potential to transform both security and broader operational practice is enormous. But that transformation will only materialise for buyers who approach the market with clear eyes, the right vocabulary and the willingness to ask uncomfortable questions.
The future of AI in our industry will not be defined by whichever vendor shouts “AI” loudest. It will be defined by intent: the intent to build technology that solves real problems in real environments for real people. Buyers who demand that standard will get it. Buyers who accept the buzzword at face value will continue to pay innovation prices for legacy performance.
Ask the questions. Demand the evidence. The technology deserves it, and so do you.
ABOUT THE AUTHOR
Dave is one of the founders and current CEO of Refraime, a proudly African AI practice specialising in visual intelligence for security and operational environments. Refraime is the AI partner of Yellow Dog Software, a thirty-year enterprise engineering firm with deep roots in mission-critical financial services systems. Dave and the Refraime team build AI solutions designed from the ground up for African conditions, integrating with leading VMS platforms to deliver genuine intelligence without requiring infrastructure overhaul. For more information, visit refraime.ai or contact hello@refraime.ai.