In this Roboflow Supervision tutorial, we build a complete object detection pipeline with the Supervision library. We begin by setting up real-time object tracking with ByteTrack, adding detection smoothing, and defining polygon zones to monitor specific regions in a video stream. As we process the frames, we annotate them with bounding boxes, object IDs, and speed information, enabling us to track and analyze object behavior over time. Our goal is to showcase how detection, tracking, zone-based analytics, and visual annotation combine into a seamless, intelligent video analysis workflow. Check out the Full Codes here.
!pip install supervision ultralytics opencv-python
!pip install --upgrade supervision
import cv2
import numpy as np
import supervision as sv
from ultralytics import YOLO
import matplotlib.pyplot as plt
from collections import defaultdict
model = YOLO('yolov8n.pt')
We start by installing the necessary packages, including Supervision, Ultralytics, and OpenCV. After ensuring we have the latest version of Supervision, we import all required libraries. We then initialize the YOLOv8n model, which serves as the core detector in our pipeline. Check out the Full Codes here.
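Before wiring up the rest of the pipeline, it is worth confirming that the detector and Supervision interoperate. The snippet below is a minimal sanity check of our own (not part of the original tutorial): it runs YOLO on a blank frame and wraps the output in an sv.Detections object.

# Our own sanity check: a blank frame should yield zero detections.
dummy = np.zeros((480, 640, 3), dtype=np.uint8)
result = model(dummy, verbose=False)[0]
detections = sv.Detections.from_ultralytics(result)
print(f"Detections on blank frame: {len(detections)}")  # expected: 0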
try:
    tracker = sv.ByteTrack()
except AttributeError:
    try:
        tracker = sv.ByteTracker()
    except AttributeError:
        print("Using basic tracking - install the latest supervision for advanced tracking")
        tracker = None
try:
    smoother = sv.DetectionsSmoother(length=5)
except AttributeError:
    smoother = None
    print("DetectionsSmoother not available in this version")
try:
    box_annotator = sv.BoundingBoxAnnotator(thickness=2)
    label_annotator = sv.LabelAnnotator()
    if hasattr(sv, 'TraceAnnotator'):
        trace_annotator = sv.TraceAnnotator(thickness=2, trace_length=30)
    else:
        trace_annotator = None
except AttributeError:
    try:
        box_annotator = sv.BoxAnnotator(thickness=2)
        label_annotator = sv.LabelAnnotator()
        trace_annotator = None
    except AttributeError:
        print("Using basic annotators - some features may be limited")
        box_annotator = None
        label_annotator = None
        trace_annotator = None
def create_zones(frame_shape):
    h, w = frame_shape[:2]
    try:
        entry_zone = sv.PolygonZone(
            polygon=np.array([[0, h//3], [w//3, h//3], [w//3, 2*h//3], [0, 2*h//3]]),
            frame_resolution_wh=(w, h)
        )
        exit_zone = sv.PolygonZone(
            polygon=np.array([[2*w//3, h//3], [w, h//3], [w, 2*h//3], [2*w//3, 2*h//3]]),
            frame_resolution_wh=(w, h)
        )
    except TypeError:
        entry_zone = sv.PolygonZone(
            polygon=np.array([[0, h//3], [w//3, h//3], [w//3, 2*h//3], [0, 2*h//3]])
        )
        exit_zone = sv.PolygonZone(
            polygon=np.array([[2*w//3, h//3], [w, h//3], [w, 2*h//3], [2*w//3, 2*h//3]])
        )
    return entry_zone, exit_zone
We set up the essential components from the Supervision library: object tracking with ByteTrack, optional smoothing via DetectionsSmoother, and flexible annotators for bounding boxes, labels, and traces. To stay compatible across versions, we use try-except blocks that fall back to alternative classes or basic functionality when needed. We also define dynamic polygon zones within the frame to monitor specific regions, such as entry and exit areas, enabling spatial analytics. Check out the Full Codes here.
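To make the zone semantics concrete, here is a small illustrative check of our own (not part of the original code): we build a one-box sv.Detections by hand and ask each zone whether the box falls inside it.

# Illustrative only: a hand-built detection inside the left-hand entry
# region of a 640x480 frame; trigger() returns a boolean mask per detection.
entry_zone, exit_zone = create_zones((480, 640, 3))
sample = sv.Detections(
    xyxy=np.array([[50.0, 200.0, 150.0, 280.0]]),
    confidence=np.array([0.9]),
    class_id=np.array([0]),
)
print("in entry zone:", entry_zone.trigger(sample))  # e.g. [ True]
print("in exit zone:", exit_zone.trigger(sample))    # e.g. [False]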
class AdvancedAnalytics:
    def __init__(self):
        self.track_history = defaultdict(list)
        self.zone_crossings = {"entry": 0, "exit": 0}
        self.speed_data = defaultdict(list)

    def update_tracking(self, detections):
        if hasattr(detections, 'tracker_id') and detections.tracker_id is not None:
            for i in range(len(detections)):
                track_id = detections.tracker_id[i]
                if track_id is not None:
                    bbox = detections.xyxy[i]
                    center = np.array([(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2])
                    self.track_history[track_id].append(center)
                    if len(self.track_history[track_id]) >= 2:
                        prev_pos = self.track_history[track_id][-2]
                        curr_pos = self.track_history[track_id][-1]
                        speed = np.linalg.norm(curr_pos - prev_pos)
                        self.speed_data[track_id].append(speed)

    def get_statistics(self):
        total_tracks = len(self.track_history)
        per_track = [np.mean(speeds) for speeds in self.speed_data.values() if speeds]
        avg_speed = np.mean(per_track) if per_track else float('nan')
        return {
            "total_objects": total_tracks,
            "zone_entries": self.zone_crossings["entry"],
            "zone_exits": self.zone_crossings["exit"],
            "avg_speed": avg_speed if not np.isnan(avg_speed) else 0
        }
def process_video(source=0, max_frames=300):
    """
    Process a video source with advanced Supervision features.
    source: video path, or 0 for webcam
    max_frames: limit processing for the demo
    """
    cap = cv2.VideoCapture(source)
    analytics = AdvancedAnalytics()
    ret, frame = cap.read()
    if not ret:
        print("Failed to read video source")
        return
    entry_zone, exit_zone = create_zones(frame.shape)
    try:
        entry_zone_annotator = sv.PolygonZoneAnnotator(
            zone=entry_zone,
            color=sv.Color.GREEN,
            thickness=2
        )
        exit_zone_annotator = sv.PolygonZoneAnnotator(
            zone=exit_zone,
            color=sv.Color.RED,
            thickness=2
        )
    except (AttributeError, TypeError):
        entry_zone_annotator = sv.PolygonZoneAnnotator(zone=entry_zone)
        exit_zone_annotator = sv.PolygonZoneAnnotator(zone=exit_zone)
    frame_count = 0
    results_frames = []
    cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
    ret, frame = cap.read()  # re-read from the start after rewinding
    while ret and frame_count < max_frames:
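        # NOTE: the body of this loop was lost in extraction; what follows is
        # a minimal reconstruction based on the description below. Each frame
        # is run through detection, tracking, and smoothing, checked against
        # both zones, annotated, and collected for later visualization.
        results = model(frame, verbose=False)[0]
        detections = sv.Detections.from_ultralytics(results)
        if tracker is not None:
            detections = tracker.update_with_detections(detections)
        if smoother is not None:
            detections = smoother.update_with_detections(detections)
        analytics.update_tracking(detections)
        # Count per-frame zone occupancy as a simple proxy for crossings (assumption).
        analytics.zone_crossings["entry"] += int(entry_zone.trigger(detections).sum())
        analytics.zone_crossings["exit"] += int(exit_zone.trigger(detections).sum())
        annotated = frame.copy()
        if box_annotator is not None:
            annotated = box_annotator.annotate(scene=annotated, detections=detections)
        if label_annotator is not None:
            annotated = label_annotator.annotate(scene=annotated, detections=detections)
        if trace_annotator is not None:
            annotated = trace_annotator.annotate(scene=annotated, detections=detections)
        annotated = entry_zone_annotator.annotate(annotated)
        annotated = exit_zone_annotator.annotate(annotated)
        results_frames.append(annotated)
        ret, frame = cap.read()
        frame_count += 1
    cap.release()
    print("Final statistics:", analytics.get_statistics())
    return analytics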
We define the AdvancedAnalytics class to track object movement, calculate speed, and count zone crossings, enabling rich real-time video insights. Inside the process_video function, we read each frame from the video source and run it through our detection, tracking, and smoothing pipeline. We annotate frames with bounding boxes, labels, zone overlays, and live statistics, giving us a powerful, flexible system for object monitoring and spatial analytics. Throughout the loop, we also collect data for visualization and print final statistics, showcasing the effectiveness of Roboflow Supervision's end-to-end capabilities. Check out the Full Codes here.
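The speed metric here is simply the Euclidean distance between an object's center in consecutive frames, in pixels per frame. Below is a small standalone check of that logic using hand-made tracked detections (our own sketch, not part of the original tutorial):

# Illustrative only: feed two frames of one tracked box into the analytics
# object; the box moves 10 px to the right, so avg_speed should be 10.0.
demo_analytics = AdvancedAnalytics()
for x in (100.0, 110.0):
    d = sv.Detections(
        xyxy=np.array([[x, 100.0, x + 50.0, 150.0]]),
        confidence=np.array([0.9]),
        class_id=np.array([0]),
        tracker_id=np.array([1]),  # pretend ByteTrack assigned ID 1
    )
    demo_analytics.update_tracking(d)
print(demo_analytics.get_statistics())  # avg_speed: 10.0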
def create_demo_video():
    """Create a simple demo video with moving objects"""
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter('demo.mp4', fourcc, 20.0, (640, 480))
    for i in range(100):
        frame = np.zeros((480, 640, 3), dtype=np.uint8)
        x1 = int(50 + i * 2)
        y1 = 200
        x2 = int(100 + i * 1.5)
        y2 = 250
        cv2.rectangle(frame, (x1, y1), (x1+50, y1+50), (0, 255, 0), -1)
        cv2.rectangle(frame, (x2, y2), (x2+50, y2+50), (255, 0, 0), -1)
        out.write(frame)
    out.release()
    return 'demo.mp4'
demo_video = create_demo_video()
analytics = process_video(demo_video, max_frames=100)
print("nTutorial completed! Key choices demonstrated:")
print("✓ YOLO integration with Supervision")
print("✓ Multi-object monitoring with ByteTracker")
print("✓ Detection smoothing")
print("✓ Polygon zones for house monitoring")
print("✓ Superior annotations (containers, labels, traces)")
print("✓ Precise-time analytics and statistics")
print("✓ Velocity calculation and monitoring historic previous")
To test the full pipeline, we generate a synthetic demo video with two moving rectangles simulating tracked objects. This lets us validate detection, tracking, zone monitoring, and speed analysis without needing real-world input. We then run the process_video function on the generated clip. At the end, we print a summary of all key features we implemented, showcasing the flexibility of Roboflow Supervision for real-time visual analytics.
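Since matplotlib is imported at the top but never used in the lines that survived, here is a minimal visualization sketch of our own: it reads the first frame of demo.mp4, runs the detector, draws boxes, and displays the result inline (OpenCV frames are BGR, so we convert to RGB first).

# Our own sketch (assumes demo.mp4 exists and box_annotator was created above).
cap = cv2.VideoCapture('demo.mp4')
ret, frame = cap.read()
cap.release()
if ret:
    dets = sv.Detections.from_ultralytics(model(frame, verbose=False)[0])
    if box_annotator is not None:
        frame = box_annotator.annotate(scene=frame.copy(), detections=dets)
    plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    plt.axis('off')
    plt.show()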
In conclusion, we have implemented a full pipeline that brings together object detection, tracking, zone monitoring, and real-time analytics. We demonstrate how to visualize key insights such as object speed, zone crossings, and tracking history with annotated video frames. This setup empowers us to go beyond basic detection and build a practical surveillance or analytics system using open-source tools. Whether for research or production use, we now have a solid foundation to extend with even more advanced capabilities.
Check out the Full Codes here. Feel free to check out our GitHub Page for Tutorials, Codes, and Notebooks. Also, follow us on Twitter and don't forget to join our 100k+ ML SubReddit and subscribe to our Newsletter.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.

