4. Detectors

Detectors Overview

The Detectors section in AeroMegh Intelligence provides a central hub to manage all your detection models—whether they are trained, in progress, or public GeoAI detectors. Each model (called a “detector”) is shown as a visual card, giving you an at-a-glance summary of its type, accuracy, training status, classes, and access level.

Detectors

What’s New

  • New Detector Type: Classification added alongside Object Detection and Change Detection
  • Public & Private View Toggle
  • Sub-class support for Classification Detectors
  • Manage Access feature added
  • Filter by Detector Type (Object / Change / Classification)
  • Enhanced Card UI with more metadata
  • Paginated Navigation for large detector sets

Key Components

1. Detector Count


Shows the total number of detectors currently listed.
Located at the top left, it helps track how many detectors you’ve built.

💡 Tip: Use this as a quick benchmark for progress when experimenting with multiple use cases.

2. Add Detector

Click the Add button next to the Detectors count to create a new detector. This opens a modal where you can:

  • Enter Detector Name
  • Select Type: Object Detection, Change Detection, or Classification
  • Optionally Add Description
💡 Tip: Start with simple use cases to validate performance, then branch into more advanced scenarios.
3. Detector Cards

Each detector appears as a card containing the following information:

| Field | Description |
| --- | --- |
| Icon | 🔒 Private or 🌐 Public detector indicator |
| Name | Detector title, e.g., Training_task_29_april |
| Type | Label showing Object Detection, Change Detection, or Classification |
| Accuracy | Displayed as a percentage (e.g., 91.97%) or marked “NA” if not trained |
| Classes | Displays assigned classes (e.g., pit, cauliflower, piller) |
| Sub-Classes | ➕ Appears for Classification detectors only |
| Date | Detector creation date |
| Status | Completed or Requires Training |
| Actions Menu | Rename, Manage Access, Delete |
💡 Tip: Use descriptive names and class labels to stay organised as your detector library grows.
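If you keep an inventory of detectors outside the UI, the card fields above map naturally onto a small record type. The sketch below is purely illustrative—the class name, field names, and types are assumptions, not AeroMegh’s actual data schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DetectorCard:
    """Hypothetical model of the fields shown on a detector card."""
    name: str                 # e.g. "Training_task_29_april"
    detector_type: str        # "Object Detection", "Change Detection", "Classification"
    is_public: bool           # 🌐 public vs 🔒 private
    accuracy: Optional[float] # e.g. 91.97; None renders as "NA" (not yet trained)
    classes: List[str]        # e.g. ["pit", "cauliflower", "piller"]
    created: str              # creation date
    status: str = "Requires Training"

    def accuracy_label(self) -> str:
        return f"{self.accuracy:.2f}%" if self.accuracy is not None else "NA"

card = DetectorCard("Training_task_29_april", "Object Detection", False,
                    91.97, ["pit", "cauliflower", "piller"], "2025-04-29", "Completed")
```

A card for an untrained detector would simply carry `accuracy=None` and keep the default `Requires Training` status.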

4. Status Indicators

Each detector is marked with a training status:

  • Completed: Detector is trained and ready for use.
  • Requires Training: Detector needs to be trained before use.
💡 Tip: Retrain your detectors when image data or annotation patterns change significantly to maintain accuracy.

5. Public and Private Detectors

  • Private: Only visible to you or team members with assigned roles.
  • Public: Shared, read-only models provided by the GeoAI team (e.g., GeoAI_Pit, GeoAI_Tree).

Private Detectors show a 🔒 icon.

Public Detectors use a 🌐 icon and cannot be edited or deleted.

6. Search and Filter Tools

  • Search Bar: Quickly find detectors by name.

  • Filter Button: Filter detectors by type—Object Detection, Change Detection, or Classification.
💡 Tip: Combine filters with search to manage large detector lists more efficiently.

7. Pagination Controls

Page navigation is available at the bottom of the interface. Use it to scroll through all your detectors page by page.

💡 Tip: Use pagination and sorting to organise work by creation date, accuracy, or project priority.
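The sorting-plus-paging workflow suggested in the tip can be sketched in a few lines. This is a generic illustration with made-up detector records, not platform code:

```python
def paginate(items, page, page_size=10):
    """Return one page of a list (pages are 1-indexed)."""
    start = (page - 1) * page_size
    return items[start:start + page_size]

# Hypothetical detector records; sort by accuracy before paging.
detectors = [{"name": f"det_{i}", "accuracy": 50 + i} for i in range(25)]
by_accuracy = sorted(detectors, key=lambda d: d["accuracy"], reverse=True)
page2 = paginate(by_accuracy, page=2)  # records 11-20 of the sorted list
```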

4.1 Managing Detectors

The Detectors section in AeroMegh Intelligence serves as the central workspace for creating, managing, and applying AI-based detection models. These models—called detectors—can identify objects, changes, or classifications within aerial images. The interface supports two types of detectors:

  • Private Detectors: Custom models created by you or shared within your team.
  • Public Detectors: Ready-to-use pre-trained models curated by the AeroMegh GeoAI team.

This section walks you through how to manage both types of detectors and explains which actions are available based on detector type.

Switching Between Private and Public Detectors

At the top of the Detectors screen, use the toggle buttons to switch between:

  • 🔒 Private Detectors – Fully editable: you can create, rename, retrain, delete, and assign access.

  • 🌐 Public Detectors – Read-only models shared by the GeoAI team for universal access.
Private/Public Detectors
💡 Tip: Start with public detectors to test detection workflows before investing time in training your own.

What Are Public Detectors?

Public detectors are pre-trained AI models built by the AeroMegh GeoAI research team and made available to all users of the platform. These detectors are designed to solve common use cases across industries such as agriculture, construction, utilities, mining, and urban planning.

Key Characteristics:

  • Read-only: You can use them in projects but cannot rename, delete, or retrain them.
  • Curated by Experts: Built using large, high-quality datasets and validated for production use.
  • Consistently Available: Always accessible under the Public tab in the Detectors section.
Public Detectors

Why Are Public Detectors Included?

Public detectors are added to the platform to:

  • Accelerate analysis without requiring manual training
  • Standardise models across similar projects or teams
  • Benchmark custom models against high-quality baselines
  • Enable rapid prototyping in early-stage workflows
  • Help new users start detection workflows with zero setup time
💡 Use Case Example: A forestry analyst can apply the GeoAI_Tree detector directly in a classification workflow to count tree coverage without needing to train a model.
Managing Private Detectors

Private detectors offer full control and are completely customisable. The following actions are available for each private detector via the three-dot (⋮) menu.
Detector Options

1. Rename Detector

The Rename Detector option allows you to update the detector’s name and add or edit its description. This helps maintain a clean, organised workspace, especially when managing multiple detectors.

Rename Detector

Steps to Rename a Detector:

  1. Click the three-dot menu (⋮) on the detector card.
  2. Select Rename (opens the Update Detector dialog).
  3. In the Detector Name field, enter the new name.
  4. (Optional) Enter or update the Description to clarify the purpose of the detector.
  5. Click Update to save your changes.
🔒Note: The Detector Type field is visible but disabled—once a detector is created, its type cannot be changed.
💡 Best Practice: Use a structured naming format such as
Target_Type_Date (e.g., Car_Detection_May2025)
to quickly identify a detector’s purpose, especially in collaborative projects.
💡 Tip: Use the Description field to document changes, training notes, or detection goals for better traceability.

2. Manage Access

Control collaboration by assigning team roles to your detectors.

Steps:

  1. Click the three-dot menu (⋮).
  2. Choose Manage Access.
  3. Assign roles to collaborators:
    • Manager – Full permissions: rename, train, delete, share.
    • Operator – Can retrain and annotate.
    • Viewer – Read-only access.
    • Not Assigned – No access granted.
💡 Note: Access control is only available for private detectors. Public detectors are not editable or shareable.
Learn More: The Manage Access process is the same as for projects. See Manage Access Section  for detailed steps and permission role descriptions.
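The role descriptions above imply a simple permission matrix. A minimal sketch of that mapping follows—the dictionary and function are assumptions for illustration, not AeroMegh’s access-control implementation:

```python
# Hypothetical mapping of Manage Access roles to allowed actions,
# following the role descriptions above.
ROLE_PERMISSIONS = {
    "Manager":      {"rename", "train", "delete", "share", "annotate", "view"},
    "Operator":     {"train", "annotate", "view"},
    "Viewer":       {"view"},
    "Not Assigned": set(),
}

def can(role: str, action: str) -> bool:
    """True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```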

3. Delete Detector

If a detector is outdated, redundant, or no longer required, you can permanently delete it from your list. Deletion helps keep your workspace clean and focused—especially when managing multiple detectors across different projects.

Figure 61 Delete Detector Dialog

Steps to Delete a Detector:

  1. Click the three-dot menu (⋮) on the desired detector card.
  2. Select Delete.
  3. A confirmation dialog will appear:
    • Click Confirm to proceed.
    • Click Cancel to abort the action.

Confirmation Message:
Are you sure you want to delete this detector?

⚠️ Warning: Deleting a detector is permanent and cannot be undone. Ensure that the detector is no longer needed, and consider exporting or backing up relevant data beforehand.
💡 Tip: If you’re unsure, consider renaming or annotating the detector for archival instead of deleting it immediately.

4. Retrain Detector

If a detector is marked as Requires Training, it must be retrained before it can be used in any detection workflow. Retraining involves uploading or selecting new images, defining training areas, creating classes, annotating regions, and starting the training process.

 When to Retrain:

  • The detector was created but not yet trained
  • New annotated images or classes have been added
  • You notice a drop in accuracy or changes in detection needs
Figure 62 Detector requires training

Steps to Retrain a Detector

  1. Click the detector card marked as Requires Training.
  2. On the Train Detector screen:
    • Add images if needed
    • Verify or update annotations, classes, and training areas
  3. Click the Train Detector button.
  4. Confirm when prompted to start training
⚠️ Note: The retraining process is identical to creating a new detector. You are simply repeating the training steps with new or updated data.

Learn More: For full details on the process, see Section 4.2 – Train a New Detector.

💡 Tip: Retrain your detector after each major update—like adding new classes, changing annotations, or receiving new image types—to maintain high detection accuracy.
Best Practice: Save and reuse datasets from previous training runs for consistent benchmarking during retraining.

5. Monitor Accuracy

Each detector card displays the model’s current accuracy percentage. This helps assess model health and determine when retraining is required.

How to Monitor:

  • View accuracy directly on the card (e.g., 91.24%).
  • Compare performance across model versions.
  • Use historical results to evaluate detection consistency.
💡 Best Practice: Maintain a threshold accuracy target (e.g., >85%). Retrain if your detector’s performance drops below this benchmark.
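The threshold check from the best practice above amounts to one comparison. The helper below is an illustrative sketch (the function name and the 85% figure are the example values from the tip, not platform defaults):

```python
from typing import Optional

ACCURACY_THRESHOLD = 85.0  # example benchmark from the tip above; tune per project

def needs_retraining(accuracy: Optional[float],
                     threshold: float = ACCURACY_THRESHOLD) -> bool:
    """An untrained detector (accuracy shown as "NA") always needs training;
    a trained one needs retraining once it drops below the benchmark."""
    return accuracy is None or accuracy < threshold
```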

6. Filter and Search Detectors

As your list of detectors grows, use the Filter and Search tools to locate what you need quickly.

Steps to Filter:

  1. Click the Filter button (top-right).
  2. Select detector types:
    • Object Detection
    • Change Detection
    • Classification
  3. Click Apply.
Detector Filter

Search:

  • Use the search bar to locate detectors by name.
💡 Tip: Combine the filter and search tools to pinpoint detectors precisely—especially in high-volume projects.
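Combining the type filter with the name search is a single predicate over the detector list. A minimal sketch, using hypothetical record shapes and names (not AeroMegh’s API):

```python
def find_detectors(detectors, types=None, query=""):
    """Apply the type filter and the name search together.
    types: optional set of detector types; query: case-insensitive substring."""
    q = query.lower()
    return [d for d in detectors
            if (not types or d["type"] in types) and q in d["name"].lower()]

catalog = [
    {"name": "GeoAI_Pit",   "type": "Object Detection"},
    {"name": "GeoAI_Tree",  "type": "Classification"},
    {"name": "Road_Change", "type": "Change Detection"},
]
hits = find_detectors(catalog, types={"Classification"}, query="tree")
```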

7. View Public Detectors

Public detectors are shared, pre-trained models that you can use as-is. These cannot be modified but are excellent for baseline analysis and quick deployment.

What you can do:

  1. View detector names, classes, date, and accuracy.
  2. Apply them in project workflows.

What you cannot do:

  • Rename, delete, retrain, or manage access.
Public detector
💡 Tip: Public detectors are ideal for validation, benchmarking, and use cases with standard detection needs.
Feature Comparison Table

| Feature / Action | Private Detectors | Public Detectors |
| --- | --- | --- |
| Rename | ✅ Yes | ❌ No |
| Manage Access | ✅ Yes | ❌ No |
| Delete | ✅ Yes | ❌ No |
| Retrain | ✅ Yes | ❌ No |
| Monitor Accuracy | ✅ Yes | ✅ Yes |
| Filter by Detector Type | ✅ Yes | ✅ Yes |
| Toggle View (Public/Private) | ✅ Yes | ✅ Yes |
| Use in Project Workflow | ✅ Yes | ✅ Yes |

4.2 Train a New Detector

The Train a Detector module in AeroMegh Intelligence allows you to build custom AI models tailored to your geospatial analysis needs. These models—known as detectors—are trained using aerial images and annotations to perform one of three detection tasks:


| Detector Type | Description |
| --- | --- |
| Object Detection | Detects and locates specific objects in aerial imagery (e.g., pits, poles, crops) |
| Change Detection | Compares two time-separated images of the same area to identify visual changes |
| Classification | Classifies images or regions into defined sub-classes (e.g., soil, road, tree) |


What You’ll Do in This Process

Training a detector involves the following key steps:

  1. Add a New Detector – Define its name, type, and purpose
  2. Add Images for Training – Upload or select relevant image data
  3. Select Images from Existing Projects – Choose datasets that fit the detection objective
  4. Define Areas – Mark training, testing, and accuracy zones on each image
  5. Create Classes – Label the categories your model should learn
  6. Add Annotations – Draw regions linked to classes for the model to learn from
  7. Train the Detector – Initiate the model training process
  8. Review Training Confirmation – Check for successful initiation and next steps
  9. Apply Pro Tips – Learn how to improve training performance and accuracy
💡 Tip: Before you begin, prepare a dataset that includes diverse images and clear examples of what you want the detector to learn. The better the data, the better your model.
🔒 Note: Detector type cannot be changed after creation. Choose carefully based on your objective.
Let’s begin with: 1 – Add a New Detector

Steps to Train a New Detector

1. Add a New Detector

The first step in creating an AI-powered detector is to define its basic configuration: name, type, and purpose.

Steps to Create a New Detector

  1. Navigate to the Detectors tab from the top menu.

  2. Click the Add Detector button beside the detector count.

  3. The Train a New Detector popup window appears.

  4. Fill in the required details:
    • Detector Name: Enter a unique and meaningful name.
    • Detector Type: Choose from:
      •  Object Detection
      •  Change Detection
      •  Classification
    • Description (Optional): Brief notes or objectives.

  5. Click Create to proceed.
Train a New Detector
Validation & Error Handling

| Trigger | Error Message | Resolution |
| --- | --- | --- |
| Empty name field | Detector name is required. | Provide a valid name. |
| No type selected | Please select a detector type. | Choose one of the three types. |
| Duplicate detector name | Detector with this name already exists. | Change the name (e.g., append _v2). |
| Invalid characters | Invalid characters used in name. | Use only letters, numbers, _, or -. |
| Name too long | Name exceeds character limit. | Limit the name to 50 characters or fewer. |
🔒 Note: Detector names must be unique across your private detectors.
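The validation rules in the table above can be expressed as a single checking function. This is a sketch of the rules as documented—the function name, the character set, and the 50-character limit are taken from the table, but the implementation itself is illustrative, not AeroMegh’s code:

```python
import re

def validate_detector_name(name, existing_names):
    """Return an error message mirroring the validation table, or None if valid."""
    if not name:
        return "Detector name is required."
    if len(name) > 50:
        return "Name exceeds character limit."
    if not re.fullmatch(r"[A-Za-z0-9_-]+", name):
        return "Invalid characters used in name."
    if name in existing_names:
        return "Detector with this name already exists."
    return None  # valid name
```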

What Happens Next?

  • A new detector card appears in the list.
  • Status will display: Requires Training.
  • Click the card to continue with the training steps.
Figure 66 Detector card.
Continue with: 2 – Add Images for Training

2. Add Images for Training

After creating a new detector, the next step is to add the image data that the model will learn from. Images must be selected from existing projects in AeroMegh Intelligence.

When you open the training screen for the first time:

Click on the detector card (status: Requires Training). This opens the Train Detector workspace. The screen will be blank initially, with prompts to begin uploading or selecting training images.

Train Detector

The process differs slightly based on the detector type:

| Detector Type | Image Requirement |
| --- | --- |
| Object Detection | Add 1 image from a project |
| Classification | Add 1 image from a project |
| Change Detection | Add 2 images: one Base and one Secondary image |
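The per-type image requirement is a simple lookup. The helper below is an assumed illustration of that rule, not platform code:

```python
# Assumed helper mirroring the image-requirement rules above.
REQUIRED_IMAGES = {
    "Object Detection": 1,
    "Classification": 1,
    "Change Detection": 2,  # one Base + one Secondary image
}

def images_ready(detector_type: str, image_count: int) -> bool:
    """True once enough images are selected to proceed with training."""
    return image_count >= REQUIRED_IMAGES[detector_type]
```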

Steps to Add Images for Training

  1. On the Train Detector screen, go to the Images Panel on the right-hand side.
  2. Click the Add Image button.
  3. The Select Images for Training Detector window will open.
  4. You will see:
    • A list of available projects containing uploaded images.
    • A search bar to quickly find the relevant project.
Select project for images

Selecting Images from a Project

  1. Use the search bar or scroll to locate the required project.

  2. Click the project name to view available images.

  3. Select the image(s) by ticking the checkbox beside the thumbnail:
    • For Object Detection or Classification: Select 1 image.
    • For Change Detection: Select 2 images — one will be set as the Base Image, and the second as the Secondary Image.

  4. Click Select to confirm your selection.
Select Images
The selected images will now appear in the Images Panel of the Train Detector screen.

💡 Tip: Train Smarter with More Images

Adding just one image is enough to start—but why stop there?
You can add and annotate multiple images within a single detector!

Using a variety of images (different lighting, angles, and scenes) helps your detector:

  • Learn faster
  • Generalise better
  • Perform with higher accuracy in real-world conditions

More images = Smarter models. If your goal is production-ready detection, give your detector more to learn from.

Detector image panel

💡 Tips for Better Image Selection

  • Use diverse image sources across different environments or times of day to improve generalisability.

  • Ensure the selected images accurately represent the types of objects, areas, or changes your detector is expected to learn.

  • For Change Detection, make sure both images are from the same location, taken at different times.

💡 Tip: More meaningful and varied images lead to better training results. Avoid using identical or low-quality images.

Proceed to the next step: 3 – Define Areas (Training, Testing, Accuracy)

3. Define Areas

Once you’ve added image(s) to your detector, the next step is to define the spatial zones used for training and evaluation. These zones guide the model on where to learn, where to test, and how to validate performance.
Detector screen before adding image

Toolbar Activation Logic

When you first open the Train Detector screen (before adding images), only the Pan and Center tools are available on the left-hand toolbar.

Once you add an image, the toolbar updates with additional tools:

  • Training Area
  • Testing Area
  • Accuracy Area
  • Import Annotations
  • Export Annotations
Detector screen with image

Area Types and Colours

| Area Type | Purpose | Visual Cue |
| --- | --- | --- |
| Training Area | Area where the model learns from your annotations | Yellow dashed polygon |
| Testing Area | Area used to evaluate the model during training | Blue dashed polygon |
| Accuracy Area | Area used to validate performance metrics | Green dashed polygon |

Steps to Define Each Area

On the Train Detector screen, you must draw defined zones to guide how the detector learns and evaluates. These zones are created using polygon tools available in the left-hand toolbar once at least one image is added.

Training-Testing-Accuracy Area

1. Training Area (Required)

The Training Area is where the detector learns to identify and classify objects based on annotations you create.

Steps:

  1. On the left toolbar, click the Training Area tool.
  2. Move your cursor to the image and click to set points that outline the area.
  3. Close the shape by clicking back on the first point or double-clicking the last point.
  4. The area will now be highlighted with a yellow dashed line.

 You must define at least one Training Area per image for the detector to train.

2. Testing Area (Optional but Recommended)

The Testing Area is used to evaluate how well the model performs during training. It allows the system to check predictions against known annotations that it hasn’t used for learning.

Steps:

  1. Select the Testing Area tool from the left toolbar.
  2. Click on the image to draw the boundary points.
  3. Close the polygon to finalise the shape.
  4. The area will be shown as a blue dashed line.
💡 Tip: Use the testing area to measure the model’s real-time learning performance and spot overfitting.

3. Accuracy Area (Required)

The Accuracy Area is used to validate the final performance of the trained model using independent data.

Steps:

  1. Select the Accuracy Area tool from the toolbar.
  2. Click around the region to outline the shape.
  3. Complete the polygon to lock the shape in place.
  4. The area will now appear as a green dashed line.

Every image must contain at least one Accuracy Area before training can start.

Area Validation Rules

To ensure proper training, the following area-related conditions must be met:

| Rule | Requirement |
| --- | --- |
| Minimum area types per image | At least 1 Training Area and 1 Accuracy Area |
| Minimum annotations per area type | 3 annotations per area |
| Total annotations across all areas | At least 20 annotations |
⚠️ Warning: The detector will not allow you to begin training unless these conditions are satisfied.
💡 Tip: Use Training Areas for diverse object examples, Accuracy Areas for precise evaluation, and Testing Areas to fine-tune performance if needed.
Best Practice: Avoid overlapping area types on the same region. Keep them distinct to avoid bias in learning vs. validation.

⚠️ Additional Notes on Area & Annotation Rules

  1. Each Training Area and Accuracy Area must contain at least 3 annotations to be valid for training.

  2. The total number of annotations across all areas (Training + Accuracy) must be at least 20.

  3. Testing Areas are optional and may be left empty. They are useful for monitoring model performance during training but are not required.
💡 Tip: These requirements must be fulfilled for the “Train Detector” button to be enabled. If not, AeroMegh Intelligence will block training until they are corrected.
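The area and annotation rules above reduce to a few counting checks. The sketch below illustrates them with an assumed data shape (a list of `(area_type, annotation_count)` pairs); it is not the platform’s actual validation code:

```python
def can_start_training(areas):
    """areas: list of (area_type, annotation_count) pairs for the drawn
    Training and Accuracy areas; optional Testing areas are excluded."""
    types = {area_type for area_type, _ in areas}
    if not {"Training", "Accuracy"} <= types:
        return False                      # need at least one of each area type
    counts = [count for _, count in areas]
    if any(count < 3 for count in counts):
        return False                      # every area needs 3+ annotations
    return sum(counts) >= 20              # 20+ annotations overall
```

For example, 2 annotations in the Training Area plus 2 in the Accuracy Area fails both the per-area and the total thresholds, so training stays disabled.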
Next: 4 – Classes

4. Classes

In AeroMegh Intelligence, Classes are essential building blocks for training AI detectors. Each class represents a category or label that tells the detector what to learn and identify during training.

Class

What is a Class?

In AeroMegh Intelligence, a Class is a label or category that defines what the AI model should learn to recognise in an image. Every annotation you create during the training process must be linked to a class.

Each class acts as a semantic identifier that tells the detector,

“This is a weed.”
“This is a cauliflower.”
“This is soil.”

Without classes, the model has no way of understanding what it’s learning—it would simply see shapes and colours without meaning.

Real-World Example:

In an agricultural use case, you might want the AI to detect and separate:

  • Cauliflower (desired crop)
  • Weed (unwanted growth)
  • Soil (background)

These are three distinct classes. When you annotate image regions and label them accordingly, the detector learns:

  • What each item looks like
  • How they differ from one another
  • Where each typically appears

Why Classes Matter

Foundation for Learning
Classes form the foundation of your model’s intelligence. Without them, your detector cannot classify, compare, or distinguish objects.

Model Accuracy Depends on Class Quality
Well-labelled, clearly defined classes lead to higher-quality predictions and lower false positives or misclassifications.

Supports Complex Scenarios
In projects where multiple object types exist (e.g., utility poles, cables, road damage), classes help your detector understand the landscape.

Improves Maintainability
Using classes keeps your annotations clean, consistent, and easy to interpret—even months later or when working in teams.

💡 Insight: Think of classes as the “vocabulary” your detector is learning. The better defined and more consistent your vocabulary, the more fluent and intelligent your detector becomes.
Classes Section

Create a Class

➕ How to Create a Class

  1. On the Train Detector screen, locate the Classes Section (usually below the image section).

  2. Click the Add Class button.

  3. In the Add New Class dialog, enter the name of the class.

  4. Click Add to save.
Add New Class
💡 Tip: Use short, descriptive, and consistent names (e.g., Pole, Crack, Tree) to keep your model organised.
⚠️ Important: You must create at least one class before annotation tools become available.

Manage Classes

Manage Existing Classes

Each created class can be renamed or deleted from the Classes Panel using the three-dot menu (⋮) next to the class name.

Class Options

Rename

Rename a Class

  1. Click the three-dot menu (⋮) next to the class name.
  2. Select Rename.
  3. In the Update Class dialog, enter the new name.
  4. Click Update to save changes.
Rename Class

After saving, a confirmation dialog will appear with the message:

“Class updated successfully.”

Class Updated
💡 Best Practice: Rename classes early in the training process to avoid inconsistencies.
💡 Note:
If detector training is already completed, the Rename option will be disabled. Classes tied to completed models cannot be renamed.

Delete

Delete a Class

1. Click the three-dot menu (⋮) next to the class name.

2. Select Delete.

3. A confirmation dialog appears:

Delete Classes
“Are you sure you want to delete the selected class(es)?”

4. Choose:

    • Confirm to delete
    • Cancel to keep the class

After confirming, a dialog appears with the message:

“Class updated successfully.”

Delete Class
💡 Note: If detector training is already completed, the Delete option will be disabled. Classes tied to completed models cannot be removed.

Summary

| Action | Availability | UI Location |
| --- | --- | --- |
| Add Class | Before training | Classes Panel → Add Class |
| Rename Class | Before/after training (if allowed) | ⋮ Menu → Rename |
| Delete Class | Only before training is complete | ⋮ Menu → Delete |
| Required to Annotate | ✅ Yes | Annotation tools unlock only after at least 1 class is created |
💡  Tip: Plan your class structure early. Well-defined and consistently named classes lead to more accurate and maintainable detectors.
Up Next: 5 – Create and Manage Annotations

5. Annotations

Annotations define the precise regions on images that your detector should learn from. Each annotation is linked to a Class and drawn within a defined Training, Testing, or Accuracy area.

High-quality annotations are the key to building a detector that performs well across different environments and real-world use cases.

Annotation
What is an Annotation?

An Annotation is a marked region on an image used to train your AI detector. It tells the model:

“This is what a Tree looks like.”
“This shape belongs to the class ‘Weed’.”

Each annotation:

  • Is linked to a class (e.g., Pole, Crack, Soil)
  • Represents a region within a Training, Testing, or Accuracy area
  • Provides examples the AI model uses to learn and validate its understanding

Example

In a land-use classification project:

  • A rectangle around a tree → Class: Tree
  • A polyline along a crack in pavement → Class: Crack

💡 More precise and varied annotations = better detector performance.

Unlocking Annotation Tools

When you first open the Train Detector screen, annotation tools are disabled.

To activate them:

  1. Add at least one image
  2. Define your Training/Accuracy/Testing areas
  3. Create or select a Class

Once a class is active, the following annotation tools become available in the left-hand toolbar.

Annotation Tool

Annotation Tools and How to Use Them

| Tool | Best For |
| --- | --- |
| Rectangle | Box-shaped features (e.g., vehicles, panels) |
| Circle | Round features (e.g., tree tops, manholes) |
| Polygon | Irregular shapes (e.g., patches, fields) |
| Polyline | Long linear features (e.g., cracks, cables) |
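If you export annotations or process them in scripts, each one carries a class, a tool type, and geometry. The record below is a hypothetical sketch of that structure—field names and the point convention are assumptions, not AeroMegh’s export format:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Annotation:
    """One marked region: always tied to a class and drawn with one tool."""
    class_name: str                     # e.g. "Tree", "Crack"
    tool: str                           # "rectangle" | "circle" | "polygon" | "polyline"
    points: List[Tuple[float, float]]   # vertices (for a circle: centre + edge point)

tree = Annotation("Tree", "circle", [(120.0, 80.0), (135.0, 80.0)])
crack = Annotation("Crack", "polyline", [(0.0, 0.0), (10.0, 2.0), (25.0, 3.0)])
```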

Create Annotation

Rectangle

Rectangle Tool

Steps:

  1. Select the Rectangle Tool.
  2. Click on the image to set the starting corner.
  3. Drag to define width and height.
  4. Release to complete.
  5. Click ✔ Done to finalise.

💡 Great for: poles, signs, panels.

Circle

Circle Tool

Steps:

  1. Click the Circle Tool.
  2. Click once to place the centre.
  3. Drag to set radius.
  4. Release and click ✔ Done.

💡 Great for: trees, round tanks, manholes.

Polygon

Polygon Tool

Steps:

  1. Select the Polygon Tool.
  2. Click around the object to form edges.
  3. Close the shape by clicking the first point or double-click the last.
  4. Click ✔ Done.

💡 Best for irregular areas like vegetation or cracks.

Polyline

Polyline Tool

Steps:

  1. Click the Polyline Tool.
  2. Click multiple points to trace the feature.
  3. Double-click to complete the line.
  4. Click ✔ Done.

💡 Use for: roads, cables, pipelines.

Manage Annotations

Clicking any existing annotation opens a contextual menu with the following options:

| Action | Description |
| --- | --- |
| Copy | Duplicate the selected annotation |
| Paste | Paste the copied annotation onto the current or next image |
| Edit | Modify size, shape, or position |
| Delete | Remove the annotation |
| Done | Finalise the annotation and save changes |

💡 Tip: Use Copy–Paste for fast and consistent annotation of repeating objects across images.

Annotation Rules & Validation

To proceed with training, your annotations must meet the following criteria:

| Requirement | Minimum Threshold |
| --- | --- |
| Per Area (Training/Accuracy) | At least 3 annotations per area |
| Across All Areas | At least 20 annotations in total |
| Linked to a Class | Every annotation must be tied to a class |

Clarification

These annotation rules are enforced strictly before starting the training process.

  • If any Training or Accuracy Area contains fewer than 3 annotations, or
  • If the total annotations are fewer than 20,

… the system will prevent you from proceeding to train the detector.

Example: If you create only 2 annotations in the Training Area and 2 in Accuracy, training will remain disabled—even if other steps are completed.

Pro Tip: Use the annotation counter or summary panel to keep track of annotation totals across all images and areas.

Annotation Validation
💡 Note: Training cannot be started unless these conditions are met.

Best Practices for Annotation

  • Use the appropriate shape for each object
  • Keep annotations precise and tightly fitted
  • Avoid overlapping annotations unless necessary
  • Use consistent class labelling across images
  • Don’t forget to finalise each annotation with ✔️
💡Pro Tip: Annotate diverse examples of the same class—different angles, sizes, lighting—for better model generalisation.
Up Next: 6 – Train the Detector

6. Train the Detector

After completing all the required setup steps—adding images, defining areas, creating classes, and adding annotations—you are now ready to initiate the training of your detector.

This section guides you through starting the training process and understanding the system’s confirmation messages.

Pre-Training Checklist

Before clicking the Train Detector button, make sure:

  • ✔ At least one Training Area and one Accuracy Area are drawn
  • ✔ Each area contains a minimum of 3 annotations
  • ✔ A total of at least 20 annotations is present across all areas
  • ✔ At least one Class is created and assigned to annotations
⚠️ Note: If any of these conditions are not met, the system will prevent training from starting.

Training Eligibility Checklist – All Conditions Must Be Met

Ensure the following requirements are satisfied before starting training:

  • At least one Training Area and one Accuracy Area are defined
  • Each area contains 3 or more annotations
  • There are 20+ total annotations across all areas combined
  • All annotations are linked to a valid Class
  • The Detector Type is selected and correctly set
⚠️ If any of the above conditions are not met, the “Train Detector” button will be disabled. A validation alert will notify you of what’s missing.

How to Start Training

1. Navigate to the Train Detector screen.

2. Click the Train Detector button located at the top middle of the image view screen.

Train Detector

3. A confirmation dialog will appear asking:

“Are you sure you want to start training this detector?”

Figure 83 Confirm Training
  • Click Yes to proceed.
  • Click No to cancel and return to editing if needed.

Training Confirmation

If the setup is valid and training begins successfully:

  • A message will appear:

“Detector training started successfully.”

  • Click OK to close the dialog.
  • The detector’s status will now change to Training in Progress. You can monitor the progress directly from the Detectors screen.
💡 Tip: Training duration depends on image volume, annotation complexity, and number of classes. If it takes longer than expected, consider simplifying your dataset or reviewing annotation density.
Once training is complete, the detector status automatically updates to Completed, and the detector becomes available for use in detection workflows.

Pro Tips for Better Performance

Training a detector isn’t just about drawing shapes and clicking “Train.” The quality of your input data, annotation discipline, and model maintenance practices directly affect how accurate and reliable your detector becomes in real-world scenarios.

Use the following best practices to maximise your detector’s learning efficiency and output accuracy.

  1. Use Diverse Training Data

    Why it matters: A model trained on similar-looking images may perform poorly when introduced to new conditions.
  • Include images from different locations, angles, lighting conditions, and resolutions.
  • If applicable, vary weather, seasons, or environmental backgrounds.
💡 Tip: For detectors deployed in the field, training on real-world variation builds robustness and reduces false positives.
  2. Retrain Regularly with New Data

    Why it matters: Over time, environments and object appearances may change.
  • Periodically retrain your detector with fresh data to maintain high accuracy.
  • Retrain especially when:
    • New object types are introduced
    • The environment evolves (e.g., seasonal changes, new infrastructure)
    • Detector performance starts declining
💡 Tip: Save versions of your trained detectors so you can benchmark new training runs against previous ones.
  3. Eliminate Irrelevant or Noisy Data

    Why it matters: Poor-quality or unrelated images dilute your model’s learning process.
  • Remove blurry, overexposed, or irrelevant images before training.
  • Avoid including objects or areas unrelated to the class definitions.
💡 Tip: Keep your training dataset focused and consistent with your detection objective.
  4. Prioritise Annotation Quality Over Quantity

    Why it matters: A few well-labelled examples are more valuable than many inaccurate ones.
  • Ensure each annotation is precisely aligned with the object’s shape.
  • Avoid sloppy or overlapping annotations unless required.
  • Maintain class consistency across images.
💡 Tip: Use tools like Polygon or Polyline when objects don’t fit cleanly into rectangular shapes.
  5. Validate Before Training

    Why it matters: Missed requirements delay training or reduce model effectiveness.
  • Confirm:
    • At least 1 Training and 1 Accuracy Area are defined
    • Each area has 3+ annotations
    • The total annotation count exceeds 20
    • Every annotation is tied to a valid class
💡 Tip: Use the annotation validation warnings in the interface to quickly spot gaps.
  6. Test Before Deployment

    Why it matters: Even a well-trained model may perform differently in live environments.
  • Use the Testing Area to evaluate detector accuracy before full-scale deployment.
  • Examine false positives/negatives and retrain if needed.
💡 Tip: Always test with images that weren’t used during training.
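When you examine false positives and false negatives in a Testing Area, two standard summary metrics are precision (how many detections were correct) and recall (how many real objects were found). The snippet below is a generic illustration of that arithmetic, not an AeroMegh feature; the example counts are made up.

```python
# Illustrative only: turning false-positive/false-negative counts
# from a test run into precision and recall.

def precision_recall(tp, fp, fn):
    """tp: correct detections, fp: false alarms, fn: missed objects."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# e.g. 45 correct detections, 5 false alarms, 10 missed objects:
p, r = precision_recall(45, 5, 10)
# precision = 0.9, recall ≈ 0.818
```

A high precision with low recall suggests the detector is too conservative (missing objects); the reverse suggests too many false alarms. Either pattern is a signal to retrain with more targeted data.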

Summary: The Formula for High-Performance Detectors

| Strategy | Impact |
|---|---|
| Diverse Images | Better generalisation across environments |
| Quality Annotations | Improved model precision and fewer errors |
| Regular Retraining | Sustained accuracy over time |
| Clean Data | Faster training, less confusion |
| Validation Checkpoints | Reduced errors before model execution |
| Field Testing | Ensures readiness for real-world use |
