IRIS
AI Computer Vision


AI-powered PPE detection and workplace safety monitoring system — real-time compliance checking, violation alerts, and safety analytics for industrial environments.

Forked from Nedo Vision — our enterprise computer vision framework.

Client: PT Petrokimia Gresik
99.2%
Detection Accuracy
<100ms
Inference Time
100%
Satisfaction
About

What is IRIS?

IRIS is an AI-powered PPE (Personal Protective Equipment) detection system built on top of our Nedo Vision framework. It uses deep learning models to automatically detect whether workers are wearing required safety equipment — hard hats, safety vests, goggles, and boots — delivering real-time compliance monitoring and instant violation alerts.

IRIS Detection Feed
REC
HARD HAT • COMPLIANT • Confidence: 99.4%
PPE Detection Status
Hard Hat
Compliant
99.4%
Safety Vest
Compliant
98.7%
Safety Goggles
Missing
97.1%
Safety Boots
Compliant
96.8%
Challenges

The Challenge

Petrochemical plants face critical workplace safety challenges that traditional monitoring methods cannot adequately address.

Manual Safety Inspections

Safety officers manually patrol large plant areas to check PPE compliance — slow, inconsistent, and impossible to cover all zones simultaneously.

Blind Spots & Coverage Gaps

Critical high-risk zones in the petrochemical plant often go unmonitored, leaving workers exposed to hazardous conditions without proper equipment checks.

Delayed Violation Response

PPE violations are only discovered during periodic audits, not in real-time — meaning workers may spend hours in hazardous areas without proper protection.

No Data-Driven Safety Insights

Without systematic tracking, there's no way to identify patterns in safety violations, high-risk time windows, or compliance trends across departments.

Data Engine

Acquire, annotate, version, and manage your training data pipeline end-to-end.

Data Pipeline

Automatic Data Acquisition

Continuously build and improve training datasets directly from live camera feeds or offline video — no manual frame extraction required.

Live Camera Feed

Automated Training Data Pipeline

ACTIVE

Continuously capture frames from RTSP/ONVIF camera streams deployed across the plant. The system intelligently selects high-value frames — filtering out duplicates, low-quality, and redundant poses — to build a diverse training dataset automatically.

Data Flow Pipeline
RTSP Camera
Frame Selector
Dataset Store
24/7
Continuous capture
Smart
Frame selection
Auto
De-duplication
Acquired frames are automatically organized, de-duplicated, and prepared for annotation — ready for model retraining.
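In code, the frame-selection step can be sketched roughly like this: a coarse average-hash over a grayscale grid, keeping a new frame only if it differs enough from frames already stored. This is a minimal illustrative stand-in, not IRIS's actual implementation.

```python
def frame_hash(pixels, grid=4):
    """Coarse average-hash: downsample to grid x grid cells, threshold at the mean.

    `pixels` is a 2D list of grayscale values (a stand-in for a decoded frame).
    """
    h, w = len(pixels), len(pixels[0])
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            ys = range(gy * h // grid, (gy + 1) * h // grid)
            xs = range(gx * w // grid, (gx + 1) * w // grid)
            vals = [pixels[y][x] for y in ys for x in xs]
            cells.append(sum(vals) / len(vals))
    mean = sum(cells) / len(cells)
    return "".join("1" if c >= mean else "0" for c in cells)

def select_frames(frames, max_hamming=1):
    """Keep a frame only if its hash differs from every kept hash by more
    than `max_hamming` bits -- drops near-duplicate consecutive frames."""
    kept, hashes = [], []
    for idx, f in enumerate(frames):
        h = frame_hash(f)
        if all(sum(a != b for a, b in zip(h, kh)) > max_hamming for kh in hashes):
            kept.append(idx)
            hashes.append(h)
    return kept
```

A production selector would also score sharpness and pose diversity; the de-duplication idea is the same.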
Manual Annotation

Web-Based Annotation Studio

A full-featured annotation workspace built right into the platform — draw bounding boxes, polygons, and polylines with professional-grade shortcuts. No external tools needed.

Annotation Studio
Image 47 / 1,284 • 5 annotations • Auto-saved
1920 × 1080 | Zoom: 100% | BBox Tool
Label Classes
Person (142)
Hardhat (138)
Vest (135)
Gloves (98)
Boots (121)
No-Hardhat (24)
Keyboard Shortcuts
Bounding Box Tool
B
Polygon Tool
P
Polyline Tool
L
Select / Move
V
Undo
Ctrl+Z
Redo
Ctrl+Y
Save Annotations
Ctrl+S
Copy Label
Ctrl+C
Delete Selected
Del
Toggle Labels
H
Zoom In / Out
+ / -
Pan Canvas
Space
Rich annotation tools with auto-save — works directly in your browser
12+ Shortcuts • 3 Draw Tools • Auto Save
Auto Annotation

Automatic Model-Assisted Annotation

The existing model auto-labels newly acquired frames — high-confidence predictions become instant annotations while uncertain detections are flagged for human review.

Auto Annotation Engine
3 Auto-labeled • 2 Needs Review
Annotation Preview — Frame #1482
Hard Hat 98.4%
Detection Results
Hard Hat
Auto-labeled
98.4%
conf.
Safety Vest
Auto-labeled
96.1%
conf.
Goggles
Needs Review
62.3%
conf.
Safety Boots
Auto-labeled
94.7%
conf.
Gloves
Needs Review
45.8%
conf.
Confidence Threshold: 80% (below threshold → Review; above → Auto-label)
Auto-annotated frames feed directly into the retraining pipeline
90% Auto-labeled • 10x Faster
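The routing logic behind the threshold slider is simple to express. A sketch with illustrative names; the 80% default matches the slider shown above:

```python
def route_predictions(detections, threshold=0.80):
    """Split model predictions into auto-labels and a human-review queue.

    Predictions at or above `threshold` become annotations directly;
    the rest are flagged for an annotator.
    """
    auto, review = [], []
    for det in detections:
        (auto if det["confidence"] >= threshold else review).append(det)
    return auto, review
```

With the five detections shown in the preview, this yields three auto-labels and two items queued for review.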
Dataset Management

Dataset Versioning

Every dataset iteration is version-controlled — track changes, compare accuracy across versions, and roll back to any previous state instantly.

Dataset Registry
5 versions • 1 in production
Version History
v3.2.0 (Production)
+1,240 frames, added Gloves class
Feb 2026 • 12,480 frames • 6 classes • 99.2% mAP
v3.1.0
+2,100 frames, night-shift data
Jan 2026 • 11,240 frames • 5 classes • 98.5% mAP
v3.0.0
Major: added reactor zone data
Dec 2025 • 9,140 frames • 5 classes • 97.8% mAP
v2.4.0
+1,800 frames, improved vest labels
Nov 2025 • 7,600 frames • 4 classes • 96.1% mAP
v2.3.0
Initial production dataset
Oct 2025 • 5,800 frames • 4 classes • 94.3% mAP
v3.2.0 Details
Total Frames
12,480
Object Classes
6
Model Accuracy
99.2%
Release Date
Feb 2026
Accuracy Trend
94.3% (v2.3) → 99.2% (v3.2)
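A registry with publish, promote, and rollback semantics can be sketched in a few lines. This is illustrative, not the platform's actual API:

```python
class DatasetRegistry:
    """Minimal version registry sketch: append-only history plus a
    production pointer that can be rolled back instantly."""

    def __init__(self):
        self.versions = []          # list of (tag, metadata), oldest first
        self.production = None

    def publish(self, tag, **meta):
        self.versions.append((tag, meta))

    def promote(self, tag):
        if not any(t == tag for t, _ in self.versions):
            raise KeyError(tag)
        self.production = tag

    def rollback(self):
        """Move production back to the version published before it."""
        tags = [t for t, _ in self.versions]
        i = tags.index(self.production)
        if i == 0:
            raise RuntimeError("no earlier version to roll back to")
        self.production = tags[i - 1]
```

Because every version is retained, rollback is a pointer move rather than a data restore.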
Smart Sampling

Active Learning

Automatically surface the most uncertain and informative frames for human annotation — reduce labeling effort by up to 70% while maximizing model improvement per labeled sample.

284
frames
Queued
52
frames
Labeled Today
68%
less labeling
Efficiency Gain
+4.2%
after retrain
Model Δ mAP
Uncertainty Sampler
Entropy-based • Auto-queuing
Priority Queue
frame_8821.jpg (high)
Low confidence on helmet edge case
uncertainty: 0.94
frame_9103.jpg (high)
Ambiguous vest detection under shadow
uncertainty: 0.88
frame_7542.jpg (medium)
Partially occluded worker at distance
uncertainty: 0.82
frame_6219.jpg (medium)
Novel hard hat color not in training set
uncertainty: 0.76
frame_5410.jpg (low)
Borderline goggles detection
uncertainty: 0.71
frame_4837.jpg (low)
Unusual camera angle — low conf
uncertainty: 0.65
How It Works
Inference scan
Run model on unlabeled pool
Rank by uncertainty
Entropy / margin scoring
Queue top samples
Highest info-gain first
Human labels
Annotator reviews queued
Retrain model
Max improvement per label
Focus labeling effort where it matters most
70% Fewer Labels • +4.2% mAP Gain
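The entropy-based ranking can be sketched as follows. This is a minimal stand-in for the uncertainty sampler; the names and the mean-entropy scoring are illustrative:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of one detection's class-probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def queue_uncertain(frames, top_k=3):
    """Rank unlabeled frames by the mean entropy of their detections and
    return the `top_k` most uncertain -- the frames a human label helps most.

    `frames` maps frame name -> list of per-detection probability vectors.
    """
    scored = []
    for name, dets in frames.items():
        score = sum(entropy(p) for p in dets) / len(dets)
        scored.append((score, name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]
```

A near-uniform probability vector (the model "can't decide") scores close to the maximum, so those frames rise to the top of the queue.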
Data Quality

Dataset Analytics

Visualize class distribution, label quality, and dataset health at a glance — catch imbalances, missing labels, and duplicates before they hurt model performance.

14,200
Total Images
46,400
Total Labels
3.27
Avg Labels/Image
96.8%
Label Quality
Dataset Explorer
v3.2 — Latest • 14,200 images
Class Distribution
Hard Hat: 12,840 (28%)
Safety Vest: 11,200
Person: 9,800
Goggles: 5,600
Safety Boots: 4,200
Gloves: 2,760
Quality Report
Missing labels: 42
Overlapping boxes: 18
Tiny bounding boxes: 7
Possible duplicates: 3
Dataset Score
A+
Excellent
Ready for training
Automated quality checks run on every dataset version
6 Classes • 96.8% Label Quality

Training Lab

Augment data, configure hyperparameters, and launch distributed training runs on any infrastructure.

Data Augmentation

Intelligent Data Augmentation

Automatically apply a rich set of augmentations to multiply your training data — increasing model robustness against real-world variations in lighting, angle, scale, and color.

Augmentation Pipeline
6 techniques • All active
Live Augmentation Preview
Original
Horizontal Flip
Data Multiplier
Apply Probability: 50%
Robustness: +12%
Active Techniques
Horizontal Flip
Probability: 50%
Random Rotation
Probability: 40%
Brightness & Contrast
Probability: 60%
Color Jitter (HSV)
Probability: 50%
Mosaic Augmentation
Probability: 100%
Random Scale & Crop
Probability: 70%
Augmentations are applied on-the-fly during training — no additional storage required
On-the-fly Processing • More Data
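On-the-fly augmentation means transforming each frame (and its boxes) at load time instead of writing new files to disk. A sketch of two of the techniques listed above, using a toy 2D-list "frame" and illustrative names:

```python
import random

def augment(frame, boxes, p_flip=0.5, p_brightness=0.6, rng=random):
    """Apply horizontal flip (with bounding-box remapping) and brightness
    jitter, each with its own probability.

    `frame` is a 2D grid of grayscale pixel values;
    `boxes` are (x, y, w, h) tuples in pixel coordinates.
    """
    width = len(frame[0])
    if rng.random() < p_flip:
        frame = [row[::-1] for row in frame]
        # a box starting at x ends up starting at width - x - w after the flip
        boxes = [(width - x - w, y, w, h) for x, y, w, h in boxes]
    if rng.random() < p_brightness:
        delta = rng.randint(-30, 30)
        frame = [[min(255, max(0, px + delta)) for px in row] for row in frame]
    return frame, boxes
```

Geometric augmentations must remap labels as well as pixels; that box arithmetic is the part that most often goes wrong in hand-rolled pipelines.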
Model Training

Direct Training From the Platform

Launch distributed training runs directly from the platform — deploy training agents on cloud, on-premise, or hybrid infrastructure. Configure, monitor, and deploy without ever leaving the interface.

Distributed Training Agents
AWS / GCP / Azure • NVIDIA A100 × 4

Leverage elastic cloud GPUs for burst training — auto-scale across multiple nodes and pay only for what you use.

Agent-Cloud-01
A100
94% GPU
Agent-Cloud-02
A100
87% GPU
Agent-Cloud-03
A100
91% GPU
Agent-Cloud-04
A100
Standby
Training Console
Training in Progress • Cloud • Run #47
Epoch 20 / 120 (17%)
Training Metrics
Loss: 0.842
Accuracy: 68.5%
Precision: 65.2%
mAP@50: 72.1%
Training Configuration
Architecture
YOLOv8x
Dataset Version
v3.2.0
Epochs
120
Batch Size
16
Learning Rate
0.001
Image Size
640×640
Best model is automatically saved and ready for one-click deployment
1-click Deploy • GPU Accelerated • Multi-Node Distributed
Auto-Tuning

Hyperparameter Tuning

Automatically search for the best training configuration — sweep learning rates, batch sizes, augmentation strategies, and more with grid, random, or Bayesian search.

Hyperparameter Search
Bayesian Search • 6 trials
Trial Results
# | LR | Batch | Epochs | Augment | mAP | Time | Status
#1 | 0.001 | 16 | 100 | Medium | 89.2% | 4h 12m | Done
#2 | 0.01 | 32 | 100 | Heavy | 91.4% | 3h 45m | Done
#3 | 0.005 | 16 | 150 | Heavy | 93.1% | 5h 20m | Best
#4 | 0.005 | 32 | 100 | Light | 90.8% | 3h 10m | Done
#5 | 0.0005 | 8 | 200 | Medium | | | Running
#6 | 0.008 | 16 | 120 | Heavy | | | Queued
Search Space
Learning Rate (log): 0.0005 — 0.01 • Best: 0.005
Batch Size (categorical): 8, 16, 32 • Best: 16
Epochs (linear): 100 — 200 • Best: 150
Augmentation (categorical): Light, Medium, Heavy • Best: Heavy
Best Trial #3
93.1% mAP
LR 0.005 • Batch 16 • 150 epochs
Bayesian optimization finds optimal configs 3x faster than grid search
93.1% Best mAP • 3x Faster
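For illustration, here is the shape of such a sweep written as plain random search over the same space. This is a simplified stand-in for Bayesian optimization; the stub objective and the exact bounds are illustrative:

```python
import random

# Search space mirroring the sweep above: log-scale LR, categorical batch
# size and augmentation strength, stepped epoch counts.
SPACE = {
    "lr": lambda rng: 10 ** rng.uniform(-3.3, -2.0),   # ~0.0005 to 0.01
    "batch": lambda rng: rng.choice([8, 16, 32]),
    "epochs": lambda rng: rng.randrange(100, 201, 50), # 100, 150, 200
    "augment": lambda rng: rng.choice(["light", "medium", "heavy"]),
}

def random_search(objective, n_trials=6, seed=0):
    """Sample configs from SPACE, evaluate each, keep the best score."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: sample(rng) for name, sample in SPACE.items()}
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

A Bayesian optimizer replaces the blind sampling with a surrogate model that proposes configs near previous good trials, which is where the quoted speed-up over grid search comes from.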
Compare Models

Model Benchmark

Compare multiple model versions side by side — mAP, FPS, latency, size. Find the perfect accuracy-speed tradeoff for each deployment target.

Model Comparison
3 models • 9 metrics
Model | mAP | mAP@50 | Precision | Recall | FPS (Cloud) | FPS (Edge) | Size | Params | Latency
iris-ppe-v3.2 (latest) | 93.1% | 97.2% | 94.8% | 91.5% | 42 | 24 | 47 MB | 25.3M | 23ms
iris-ppe-v3.1 (previous) | 89.4% | 94.1% | 91.2% | 87.8% | 45 | 26 | 45 MB | 24.1M | 21ms
iris-ppe-v3.2-lite (edge) | 85.6% | 91.8% | 87.4% | 83.9% | 78 | 42 | 12 MB | 6.8M | 12ms
Automated benchmarking after every training run
93.1% Best mAP • 78 fps Fastest • 12 MB Smallest
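Choosing from such a benchmark is a small optimization problem: take the most accurate model that fits the deployment target's latency budget. A sketch, with the numbers from the comparison above:

```python
def pick_model(models, max_latency_ms):
    """Return the highest-mAP model whose latency fits the budget."""
    fitting = [m for m in models if m["latency_ms"] <= max_latency_ms]
    if not fitting:
        raise ValueError("no model fits the latency budget")
    return max(fitting, key=lambda m: m["map"])
```

A generous cloud budget selects the full model; a tight edge budget selects the lite variant.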

Deployment

Version, compare, and deploy trained models to production with zero-downtime rollouts.

Model Registry

Model Versioning & Deployment

Every trained model is versioned and tracked. Compare performance across versions, promote candidates through environments, and deploy the best model with a single click.

Model Registry
5 models • 1 deployed • 1 staging
Trained Models
v3.2.0 (Deployed)
Latest — Best overall performance with Gloves class
28 Feb 2026 • Dataset v3.2.0 • 47.3 MB • 99.2% mAP@50
v3.1.2 (Staging)
Candidate — Night-shift optimized model
15 Feb 2026 • Dataset v3.1.0 • 46.8 MB • 97.1% mAP@50
v3.1.0 (Archived)
Added night-shift training data
02 Feb 2026 • Dataset v3.1.0 • 46.8 MB • 96.5% mAP@50
v3.0.0 (Archived)
Major architecture upgrade to YOLOv8x
18 Jan 2026 • Dataset v3.0.0 • 45.1 MB • 94.2% mAP@50
v2.4.0 (Archived)
Improved vest detection accuracy
05 Jan 2026 • Dataset v2.4.0 • 42.6 MB • 91.8% mAP@50
v3.2.0 Performance
99.2%
mAP@50
97.9%
Precision
98.4%
Recall
98.1%
F1 Score
Training Epochs: 120
Dataset Version: v3.2.0
Model Size: 47.3 MB
Environment: Production
Rollback to any previous version instantly — zero downtime deployment
Instant Rollback • A/B Testing • Zero Downtime
Edge Deployment

Deploy Anywhere with Runner Agents

Install lightweight runner agents on any platform — Windows, Linux, macOS, or edge devices like NVIDIA Jetson. Each agent connects back to the platform for centralized management, updates, and monitoring.

Agent Fleet
5 platforms • 22 agents online
Supported Platforms
Windows
Windows 10/11 • x86_64 • 3 online
$ iris-agent.exe --install
Ubuntu
22.04 / 24.04 LTS • x86_64 / ARM64 • 5 online
$ curl -sSL iris.ai/install | bash
Mac Mini
macOS Sonoma • Apple Silicon • 2 online
$ brew install iris-agent
NVIDIA Jetson
Orin / Xavier NX • ARM64 + CUDA • 4 online
$ jetson-iris --deploy --gpu
Docker
Any Linux Host • Multi-arch • 8 online
$ docker run -d iris/agent:latest
Agent Overview
Windows
Windows 10/11
Architecture: x86_64
Active Agents: 3
Status: Connected
Agent Capabilities
Auto-connect to platform
OTA model updates
Local inference engine
Encrypted communication
One-line install — agents auto-register and start receiving inference jobs
5+ Platforms • OTA Updates • Edge Ready
Offline Resilience

Zero Data Loss — Even Offline

When network connectivity drops, the runner agent continues inference and queues all results, frames, and metrics locally. Once reconnected, everything syncs back automatically — no data is ever lost.

Local Queue Engine
Offline — Queuing Locally
Local Data Queue
0 / 6 synced
PPE Violation #2847 • detection • 1.2 MB • 14:32:08 • Queued
PPE Violation #2848 • detection • 0.9 MB • 14:32:11 • Queued
Inference Metrics Batch • metric • 48 KB • 14:32:15 • Queued
PPE Violation #2849 • detection • 1.1 MB • 14:32:18 • Queued
Anomaly Frame Capture • frame • 2.4 MB • 14:32:22 • Queued
System Health Report • metric • 12 KB • 14:32:30 • Queued
How It Works
Network drops
Agent detects disconnection instantly
Queue locally
SQLite DB stores all inference results
Continue inference
Detection runs uninterrupted on-device
Auto-reconnect
Agent polls and reconnects automatically
Sync everything
Queued data uploaded in order, verified
Guarantees
Zero data loss: 100%
Queue capacity: 10 GB
Auto retry
Order preserved: FIFO
Inference never stops — data is preserved locally until sync completes
0% Data Loss • SQLite Local DB
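The store-and-forward behavior can be sketched with Python's standard-library SQLite bindings. The schema and names are illustrative; the real agent also queues frames and metrics alongside detections:

```python
import json
import sqlite3

class LocalQueue:
    """Offline store-and-forward sketch: results queue in SQLite while
    the uplink is down, then sync oldest-first once it returns."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS queue ("
            "id INTEGER PRIMARY KEY AUTOINCREMENT, "
            "payload TEXT, synced INTEGER DEFAULT 0)")

    def enqueue(self, event):
        self.db.execute("INSERT INTO queue (payload) VALUES (?)",
                        (json.dumps(event),))
        self.db.commit()

    def sync(self, upload):
        """Upload unsynced rows in FIFO order; mark each row only after
        the upload callback succeeds, so a crash mid-sync loses nothing."""
        rows = self.db.execute(
            "SELECT id, payload FROM queue WHERE synced = 0 ORDER BY id"
        ).fetchall()
        sent = 0
        for row_id, payload in rows:
            upload(json.loads(payload))
            self.db.execute("UPDATE queue SET synced = 1 WHERE id = ?", (row_id,))
            self.db.commit()
            sent += 1
        return sent
```

Marking rows synced only after a successful upload is what makes the "zero data loss" guarantee hold across crashes and reconnects.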
Fleet Management

One Manager, Every Agent

A single centralized dashboard manages all runner agents across cloud, on-premise, and edge deployments — unified model rollout, monitoring, and control from one place.

Fleet Manager
5/6 Running • 178 FPS total
Cloud: 2 • On-Premise: 2 • Edge: 2
Live status
Agent | Location | Platform | GPU | CPU % | GPU % | FPS | Status
gpu-worker-01 | ap-southeast-1 | AWS g4dn.xlarge | T4 16GB | 34% | 72% | 42 | Running
gpu-worker-02 | ap-southeast-1 | AWS g4dn.xlarge | T4 16GB | 28% | 65% | 38 | Running
factory-srv-01 | Jakarta DC | Dell R750 Server | RTX A4000 | 45% | 88% | 56 | Running
factory-srv-02 | Surabaya DC | HP ProLiant | RTX 3090 | 5% | 2% | - | Idle
jetson-gate-A | Gate A | Jetson Orin NX | 8GB VRAM | 62% | 91% | 24 | Running
jetson-gate-B | Gate B | Jetson Xavier NX | 8GB VRAM | 55% | 84% | 18 | Running
All agents managed from a single control plane — push model updates, configs, and restart remotely
3 Environments • 6 Agents • 178 Total FPS
Aerial Intelligence

Deploy on Drones

Mount IRIS on industrial drones for aerial PPE detection across large facilities. Autonomous patrol routes, real-time streaming, and edge inference on-board — covering areas fixed cameras cannot reach.

Drone Fleet Control
2 Active • 3 UAVs
Live Aerial View — Zone A
PPE Violation
4K 30fps • AI Inference
ALT 45m • SPD 8 m/s
Active UAVs
IRIS-UAV-01 (scanning)
DJI Matrice 350 • Zone A — Reactor Core
78% • 45m • 3 • 12kph
IRIS-UAV-02 (scanning)
DJI Matrice 350 • Zone B — Storage Area
92% • 30m • 1 • 8kph
IRIS-UAV-03 (returning)
Skydio X10 • Zone C — Perimeter
56% • 60m • 0 • 15kph
Real-time 4K stream with on-board Jetson inference — covers 10x more area than fixed cameras
4K Stream • 10x Coverage • 24 fps Edge AI
Safe Rollouts

A/B Testing & Canary Deployment

Gradually roll out new model versions — 5% → 25% → 50% → 100% of agents. Monitor metrics at each stage and auto-rollback if error rate spikes.

Canary Rollout
iris-ppe-v3.2 → v3.3 • 50% deployed
Stages: Canary → Early → Half → Majority → Full
Rollout Stages
5% Canary rollout (1 agent): mAP 93.1% • Error 0.2% • Latency 23ms • Done
25% Early rollout (2 agents): mAP 92.8% • Error 0.3% • Latency 24ms • Done
50% Half rollout (3 agents): mAP 93.0% • Error 0.1% • Latency 23ms • Active
75% Majority rollout (5 agents): Pending
100% Full rollout (6 agents): Pending
Safety Rules
Auto-rollback: if error rate > 2%
Latency gate: abort if > 50ms average
Min observations: 100 inferences per stage
mAP threshold: must maintain > 90%
Zero-downtime model updates with automatic rollback safeguards
Zero Downtime • Auto Rollback
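The per-stage gate reduces to a few checks against the safety rules above. A sketch: the stage percentages and thresholds come from the dashboard, while the function shape and metric names are illustrative:

```python
STAGES = [0.05, 0.25, 0.50, 0.75, 1.00]   # canary -> full rollout

def evaluate_stage(metrics, min_observations=100):
    """Decide promote / hold / rollback for one rollout stage."""
    if metrics["inferences"] < min_observations:
        return "hold"                      # not enough data at this stage yet
    if metrics["error_rate"] > 0.02:
        return "rollback"                  # error rate above 2%
    if metrics["avg_latency_ms"] > 50:
        return "rollback"                  # latency gate
    if metrics["map"] < 0.90:
        return "rollback"                  # accuracy regression
    return "promote"
```

The controller walks STAGES in order, widening the deployment only on "promote" and reverting all agents to the previous model on "rollback".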
Edge Optimization

Model Optimization

Export and optimize models for any target — TensorRT, ONNX, INT8 quantization. Achieve 3x faster inference with minimal accuracy loss, ready for edge deployment.

Optimization Pipeline
PyTorch → TensorRT • INT8 ready
Conversion Pipeline
PyTorch (Original FP32): 42 FPS • 47 MB • 23ms
ONNX Export (Cross-platform IR): 45 FPS • 47 MB • 21ms
TensorRT FP16 (Half precision): 78 FPS • 24 MB • 12ms
TensorRT INT8 (Quantized): 120 FPS • 12 MB • 8ms • Best tradeoff
Platform Benchmarks
Cloud GPU (TensorRT FP16): 78 FPS • 12ms latency • 24 MB
Jetson Orin (TensorRT INT8): 42 FPS • 24ms latency • 12 MB
Jetson Xavier (TensorRT INT8): 28 FPS • 35ms latency • 12 MB
CPU x86 (ONNX Runtime): 15 FPS • 66ms latency • 47 MB
INT8 vs Original
Speed Boost: 2.86x (42 → 120 FPS)
Size Reduction: 74% (47 → 12 MB)
Latency Drop: 65% (23 → 8 ms)
Accuracy Loss: < 0.5% (93.1 → 92.7 mAP)
One-click export to TensorRT, ONNX, or CoreML with automated calibration
120 fps INT8 Peak • 12 MB Quantized
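The "INT8 vs Original" deltas follow directly from the raw benchmark numbers; a small helper makes the arithmetic explicit (field names are illustrative):

```python
def compare(original, optimized):
    """Compute speed, size, latency, and accuracy deltas between two builds."""
    return {
        "speed_boost": round(optimized["fps"] / original["fps"], 2),
        "size_reduction_pct": round(100 * (1 - optimized["mb"] / original["mb"])),
        "latency_drop_pct": round(100 * (1 - optimized["ms"] / original["ms"])),
        "accuracy_loss": round(original["map"] - optimized["map"], 1),
    }
```

Feeding in the FP32 and INT8 rows from the pipeline above reproduces the 2.86x / 74% / 65% / 0.4-point figures.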

Observability

Monitor model performance in real-time, detect data drift, and track inference metrics across deployments.

Violation Tracking

Real-Time Violation Dashboard

Every PPE violation is captured, categorized by severity, and displayed in a live dashboard — enabling instant response, audit trails, and compliance reporting.

23 violations Today (+12%)
4 unresolved (Critical)
2.4m Avg Response to resolve (-18%)
94.2% Compliance this week (+3.1%)
Live Violations Feed
Live
No Hard Hat (High) • Zone A — Reactor • CAM-04 • 14:32:08 • 97.2%
No Safety Vest (High) • Zone B — Storage • CAM-12 • 14:31:45 • 94.8%
No Goggles (Medium) • Zone A — Reactor • CAM-06 • 14:30:22 • 88.1%
Restricted Zone Entry (Critical) • Zone C — Perimeter • CAM-18 • 14:29:10 • 99.1%
No Safety Boots (Low) • Zone B — Storage • CAM-11 • 14:28:55 • 82.4%
Violation Detail
No Hard Hat
Type: No Hard Hat
Location: Zone A — Reactor
Camera: CAM-04
Detected: 14:32:08
Confidence: 97.2%
All violations are logged with full audit trail for compliance reporting
< 2s Detection • 100% Logged
Event Broadcasting

Webhook & MQTT Event Stream

Every detection event is broadcast via webhooks and MQTT — integrate with Slack, SMS, ERPs, custom dashboards, or any system that can receive HTTP or MQTT messages.

Event Router
Webhook + MQTT • Streaming
Event Log
POST /api/violations • no_hard_hat • 45ms OK
iris/alerts/zone-a • no_vest_detected • 12ms OK
POST /slack/webhook • critical_alert • 180ms OK
iris/metrics/fps • inference_metrics • 8ms OK
POST /sms-gateway • restricted_zone • Pending
iris/events/all • violation_event • 10ms OK
Payload Preview
{
  "event": "no_hard_hat",
  "zone": "zone-a",
  "confidence": 0.972,
  "timestamp": "2026-02-28T14:32:08Z",
  "camera_id": "CAM-04",
  "frame_url": "https://iris.ai/frames/..."
}
Supported Integrations
REST Webhook
POST events to any HTTP endpoint
MQTT Broker
Publish to MQTT topics in real-time
Slack / Teams
Alert channels on violations
Custom Script
Trigger any script or Lambda
Event Flow
Detection → Router → Deliver
Events are delivered in real-time with automatic retry and dead-letter queue
< 50ms Latency • 99.9% Delivery
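Delivery with retry and a dead-letter queue can be sketched independently of the transport. Here `send` stands in for an HTTP POST or MQTT publish and raises on failure; the names are illustrative:

```python
def deliver(event, send, max_attempts=3, dead_letter=None):
    """Deliver one event with retry; after `max_attempts` failures the
    event goes to a dead-letter queue instead of being dropped."""
    for attempt in range(1, max_attempts + 1):
        try:
            send(event)
            return attempt              # number of attempts actually used
        except Exception:
            if attempt == max_attempts:
                if dead_letter is not None:
                    dead_letter.append(event)
                return None
```

A production router would also add exponential backoff between attempts; the dead-letter queue is what keeps a flapping endpoint from silently losing violations.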
Custom Logic

Visual Workflow Builder

Build conditional detection pipelines with a flow-based visual editor — wire together triggers, conditions, and actions to create sophisticated rules like "only count workers when a ship is in frame."

Flow Editor
Input
Function
Output
Deployed
Camera Feed (RTSP Input) → Trigger (Every Frame) → Object Detect (YOLOv8 inference) → Ship in Frame? (class == 'ship') → Count Workers (class == 'person') → PPE Check (vest + helmet) → Send Alert (Webhook POST) • MQTT Publish (iris/events) • Log to DB (PostgreSQL)
Saved Flows
Ship Zone Counting (Active): only count workers when a ship is in frame • 9 nodes • 1,240 runs
Night Shift PPE (Active): extra strict PPE rules after 10 PM • 6 nodes • 892 runs
Restricted Area Alert (Paused): alert when anyone enters Zone C • 4 nodes • 0 runs
Forklift Safety (Active): check vest + hard hat near forklifts • 7 nodes • 2,103 runs
Node Palette
Camera
Trigger
Detect
Condition
Counter
Filter
Webhook
MQTT
Drag-and-drop flow editor — wire nodes together to build detection pipelines
8 Node Types • No-Code Builder
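Per frame, the "Ship Zone Counting" flow evaluates to logic like this. A sketch with illustrative field names, not the builder's internal representation:

```python
def ship_zone_count(detections, conf_min=0.5):
    """Only count workers (and check their PPE) when a ship is in frame."""
    dets = [d for d in detections if d["conf"] >= conf_min]
    if not any(d["cls"] == "ship" for d in dets):
        return {"counted": False, "workers": 0, "violations": []}
    workers = [d for d in dets if d["cls"] == "person"]
    violations = [d["id"] for d in workers
                  if not (d.get("vest") and d.get("helmet"))]
    return {"counted": True, "workers": len(workers), "violations": violations}
```

The visual editor generalizes this: each node is one of these filters or conditions, and wiring nodes together composes the same kind of function without writing code.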
Live Monitoring

Camera Grid

See all camera feeds simultaneously with real-time detection overlays — PPE status, worker count, and violation alerts on every feed at a glance.

Camera Wall
8/9 Online • 4 violations
CAM-011
Zone A — Gate 13
CAM-02
Zone A — Reactor5
CAM-032
Zone B — Storage N2
CAM-04
Zone B — Storage S4
CAM-05
Zone C — Docks1
OFFLINE
CAM-06
Zone C — Perimeter
CAM-07
Zone D — Office6
CAM-081
Zone D — Parking2
CAM-09
Zone A — Tank Farm3
Click any feed to expand — full-screen with detection overlay and event history
9 Cameras • 30 fps Per Feed
Compliance

Automated Reports

Auto-generated weekly and monthly compliance reports with violation trends, zone heatmaps, and improvement tracking — export as PDF for audits.

Report Center
Weekly + Monthly • PDF Export
Violation Trend (8 Weeks)
W1: 38 • W2: 35 • W3: 42 • W4: 28 • W5: 38 • W6: 35 • W7: 31 • W8: 23
39% improvement over 8 weeks
Generated Reports
Week 9 — Feb 24-28
23 violations • 94.2% compliance
+3.1%
Week 8 — Feb 17-21
31 violations • 91.1% compliance
+1.8%
Week 7 — Feb 10-14
38 violations • 89.3% compliance
-0.5%
Week 6 — Feb 3-7
35 violations • 89.8% compliance
+2.3%
Report Contents
Executive Summary
Key metrics overview
Violation Breakdown
By type, zone, severity
Zone Heatmap
Geographic violation density
Trend Analysis
Week-over-week comparison
Camera Performance
Uptime, FPS, coverage
Recommendations
AI-generated suggestions
Auto-generated
Every Friday 6 PM
Audit-ready PDF reports with full evidence chain
Auto Generated • PDF Export
Drift Monitoring

Data Drift Detection

Continuously monitor input data distribution against the training baseline — detect camera changes, lighting shifts, and novel scenarios before they degrade model performance.

Drift Monitor
1 alert • 2 warnings • 3 ok
Feature Drift Scores
Brightness Distribution: baseline 0.82 → current 0.78 • drift 4.9% • ok
Color Histogram: baseline 0.90 → current 0.87 • drift 3.3% • ok
Object Size Distribution: baseline 0.75 → current 0.61 • drift 18.7% • alert
Class Balance Ratio: baseline 0.68 → current 0.65 • drift 4.4% • ok
Edge Density: baseline 0.71 → current 0.58 • drift 18.3% • warning
Confidence Mean: baseline 0.91 → current 0.84 • drift 7.7% • warning
Drift Events
Object size drift exceeded 15% threshold
14:20
Edge density shifting — possible camera repositioning
13:45
Confidence mean dropping — new scenario detected
12:10
All metrics within baseline — no drift
09:00
Auto Actions
Queue samples for relabeling
Trigger retraining pipeline
Notify team via Slack
Continuous monitoring with auto-triggered retraining when drift exceeds threshold
15% Threshold • Auto Retrain
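The drift scores above are relative deviations from the training baseline. A sketch of the scoring: the 15% alert threshold matches the dashboard, while the 7% warning cutoff and the simple relative-deviation metric are illustrative:

```python
def drift_report(baseline, current, warn=0.07, alert=0.15):
    """Score each monitored feature as relative drift from its training
    baseline and classify it ok / warning / alert."""
    report = {}
    for feat, base in baseline.items():
        drift = abs(current[feat] - base) / base
        status = "alert" if drift > alert else "warning" if drift > warn else "ok"
        report[feat] = (round(drift * 100, 1), status)
    return report
```

Real drift monitors typically compare whole distributions (e.g. with a statistical distance) rather than single summary values, but the threshold-and-classify structure is the same.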
Impact

Impact & Results

Measurable safety improvements at PT Petrokimia Gresik since deploying IRIS.

85%

Fewer Violations

Dramatic reduction in PPE non-compliance incidents across monitored zones.

<100ms

Detection Speed

Real-time inference on edge devices enabling instant violation detection.

99.2%

Detection Accuracy

Industry-leading precision trained on Petrokimia Gresik's specific PPE requirements.

24/7

Monitoring Coverage

Continuous automated surveillance across all high-risk plant zones.

Technology

Technology Stack

Built on Nedo Vision's proven AI framework with edge computing for real-time performance.

AI & Vision

Nedo Vision Framework
YOLOv8 / Custom Models
TensorRT Optimization
ONNX Runtime

Backend & Infrastructure

Python FastAPI
NVIDIA Jetson Edge
Redis Streams
PostgreSQL + TimescaleDB

Frontend & Dashboard

React + TypeScript
WebSocket Live Feeds
D3.js Analytics Charts
Responsive PWA

Ready to Enhance Workplace Safety with AI?

Deploy AI-powered PPE detection across your industrial facilities. Built on our proven Nedo Vision framework, customized for your safety requirements.