AI User Manual
4. Detectors
Detectors Overview
The Detectors section in AeroMegh Intelligence provides a central hub to manage all your detection models—whether they are trained, in progress, or public GeoAI detectors. Each model (called a “detector”) is shown as a visual card, giving you an at-a-glance summary of its type, accuracy, training status, classes, and access level.
What’s New
- New Detector Type: Classification added alongside Object Detection and Change Detection
- Public & Private View Toggle
- Sub-class support for Classification Detectors
- Manage Access feature added
- Filter by Detector Type (Object / Change / Classification)
- Enhanced Card UI with more metadata
- Paginated Navigation for large detector sets
Key Components
1. Detector Count
Shows the total number of detectors currently listed.
Located at the top left, it helps track how many detectors you’ve built.
| 💡 Tip: Use this as a quick benchmark for progress when experimenting with multiple use cases. |
2. Add Detector
Click the Add Detector button next to the Detectors count to create a new detector. This opens a modal where you can:
- Enter Detector Name
- Select Type: Object Detection, Change Detection, or Classification
- Optionally Add Description
| 💡 Tip: Start with simple use cases to validate performance, then branch into more advanced scenarios. |
3. Detector Cards
Each detector appears as a card containing the following information:
| Field | Description |
| --- | --- |
| Icon | 🔒 Private or 🌐 Public detector indicator |
| Name | Detector title, e.g., Training_task_29_april |
| Type | Label showing Object Detection, Change Detection, or Classification |
| Accuracy | Displayed as a percentage (e.g., 91.97%) or marked “NA” if not trained |
| Classes | Displays assigned classes (e.g., pit, cauliflower, piller) |
| Sub-Classes | ➕ Appears for Classification detectors only |
| Date | Detector creation date |
| Status | Completed or Requires Training |
| Actions Menu | Rename, Manage Access, Delete |
| 💡 Tip: Use descriptive names and class labels to stay organised as your detector library grows. |
4. Status Indicators
Each detector is marked with a training status:
- Completed: Detector is trained and ready for use.
- Requires Training: Detector needs to be trained before use.
| 💡 Tip: Retrain your detectors when image data or annotation patterns change significantly to maintain accuracy. |
5. Public and Private Detectors
- Private: Only visible to you or team members with assigned roles.
- Public: Shared, read-only models provided by the GeoAI team (e.g., GeoAI_Pit, GeoAI_Tree).
Private Detectors show a 🔒 icon.
Public Detectors use a 🌐 icon and cannot be edited or deleted.
6. Search and Filter Tools
- Search Bar: Quickly find detectors by name.
- Filter Button: Filter detectors by type—Object Detection, Change Detection, or Classification.
| 💡 Tip: Combine filters with search to manage large detector lists more efficiently. |
7. Pagination Controls
Page navigation is available at the bottom of the interface. Use it to scroll through all your detectors page by page.
| 💡 Tip: Use pagination and sorting to organise work by creation date, accuracy, or project priority. |
4.1 Managing Detectors
The Detectors section in AeroMegh Intelligence serves as the central workspace for creating, managing, and applying AI-based detection models. These models—called detectors—can identify objects, changes, or classifications within aerial images. The interface supports two types of detectors:
- Private Detectors: Custom models created by you or shared within your team.
- Public Detectors: Ready-to-use pre-trained models curated by the AeroMegh GeoAI team.
This section walks you through how to manage both types of detectors and explains which actions are available based on detector type.
Switching Between Private and Public Detectors
At the top of the Detectors screen, use the toggle buttons to switch between:
- 🔒 Private Detectors – Fully editable: you can create, rename, retrain, delete, and assign access.
- 🌐 Public Detectors – Read-only models shared by the GeoAI team for universal access.
| 💡 Tip: Start with public detectors to test detection workflows before investing time in training your own. |
What Are Public Detectors?
Public detectors are pre-trained AI models built by the AeroMegh GeoAI research team and made available to all users of the platform. These detectors are designed to solve common use cases across industries such as agriculture, construction, utilities, mining, and urban planning.
Key Characteristics:
- Read-only: You can use them in projects but cannot rename, delete, or retrain them.
- Curated by Experts: Built using large, high-quality datasets and validated for production use.
- Consistently Available: Always accessible under the Public tab in the Detectors section.
Why Are Public Detectors Included?
Public detectors are added to the platform to:
- Accelerate analysis without requiring manual training
- Standardise models across similar projects or teams
- Benchmark custom models against high-quality baselines
- Enable rapid prototyping in early-stage workflows
- Help new users start detection workflows with zero setup time
| 💡 Use Case Example: A forestry analyst can apply the GeoAI_Tree detector directly in a classification workflow to count tree coverage without needing to train a model. |
Managing Private Detectors
Private detectors offer full control and are completely customisable. The following actions are available for each private detector via the three-dot (⋮) menu.
1. Rename Detector
The Rename Detector option allows you to update the detector’s name and add or edit its description. This helps maintain a clean, organised workspace, especially when managing multiple detectors.
Steps to Rename a Detector:
- Click the three-dot menu (⋮) on the detector card.
- Select Rename (opens the Update Detector dialog).
- In the Detector Name field, enter the new name.
- (Optional) Enter or update the Description to clarify the purpose of the detector.
- Click Update to save your changes.
| 🔒Note: The Detector Type field is visible but disabled—once a detector is created, its type cannot be changed. |
| 💡 Best Practice: Use a structured naming format such as Target_Type_Date (e.g., Car_Detection_May2025) to quickly identify a detector’s purpose, especially in collaborative projects. |
| 💡 Tip: Use the Description field to document changes, training notes, or detection goals for better traceability. |
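If you script detector creation or bulk-rename workflows, the structured naming format from the best practice above can be generated programmatically. The sketch below is purely illustrative—the helper `build_detector_name` is hypothetical and not part of AeroMegh Intelligence:

```python
from datetime import date

def build_detector_name(target, task, when=None):
    """Build a Target_Type_Date style name, e.g. Car_Detection_May2025.
    Hypothetical helper for illustration only."""
    when = when or date.today()
    return f"{target}_{task}_{when.strftime('%b%Y')}"

print(build_detector_name("Car", "Detection", date(2025, 5, 1)))  # → Car_Detection_May2025
```

Names produced this way stay within the letters/numbers/underscore character set that detector names allow.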
2. Manage Access
Control collaboration by assigning team roles to your detectors.
Steps:
- Click the three-dot menu (⋮).
- Choose Manage Access.
- Assign roles to collaborators:
- Manager – Full permissions: rename, train, delete, share.
- Operator – Can retrain and annotate.
- Viewer – Read-only access.
- Not Assigned – No access granted.
| 💡 Note: Access control is only available for private detectors. Public detectors are not editable or shareable. |
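The role hierarchy above can be pictured as a simple permission map. This is a hedged sketch of the behaviour described in this section, not the platform’s actual access-control code; the names `ROLE_PERMISSIONS` and `can` are hypothetical:

```python
# Hypothetical sketch of the roles described above; the platform itself
# enforces these permissions server-side.
ROLE_PERMISSIONS = {
    "Manager": {"view", "rename", "train", "delete", "share"},
    "Operator": {"view", "train", "annotate"},
    "Viewer": {"view"},
    "Not Assigned": set(),
}

def can(role, action):
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(can("Operator", "train"))  # → True
print(can("Viewer", "delete"))   # → False
```

The strictly nested sets make the hierarchy explicit: every Operator permission except annotate-specific actions is also held by a Manager, and every role except Not Assigned can at least view.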
3. Delete Detector
If a detector is outdated, redundant, or no longer required, you can permanently delete it from your list. Deletion helps keep your workspace clean and focused—especially when managing multiple detectors across different projects.
Steps to Delete a Detector:
- Click the three-dot menu (⋮) on the desired detector card.
- Select Delete.
- A confirmation dialog will appear:
- Click Confirm to proceed.
- Click Cancel to abort the action.
Confirmation Message:
“Are you sure you want to delete this detector?”
| ⚠️ Warning: Deleting a detector is permanent and cannot be undone. Ensure that the detector is no longer needed, and consider exporting or backing up relevant data beforehand. |
| 💡 Tip: If you’re unsure, consider renaming or annotating the detector for archival instead of deleting it immediately. |
4. Retrain Detector
If a detector is marked as Requires Training, it must be retrained before it can be used in any detection workflow. Retraining involves uploading or selecting new images, defining training areas, creating classes, annotating regions, and starting the training process.
When to Retrain:
- The detector was created but not yet trained
- New annotated images or classes have been added
- You notice a drop in accuracy or changes in detection needs
Steps to Retrain a Detector
- Click the detector card marked as Requires Training.
- On the Train Detector screen:
- Add images if needed
- Verify or update annotations, classes, and training areas
- Click the Train Detector button.
- Confirm when prompted to start training
| ⚠️ Note: The retraining process is identical to creating a new detector. You are simply repeating the training steps with new or updated data. |
Learn More: For full details on the process, see Section 4.2 – Train a New Detector.
| 💡 Tip: Retrain your detector after each major update—like adding new classes, changing annotations, or receiving new image types—to maintain high detection accuracy. |
| Best Practice: Save and reuse datasets from previous training runs for consistent benchmarking during retraining. |
5. Monitor Accuracy
Each detector card displays the model’s current accuracy percentage. This helps assess model health and determine when retraining is required.
How to Monitor:
- View accuracy directly on the card (e.g., 91.24%).
- Compare performance across model versions.
- Use historical results to evaluate detection consistency.
| 💡 Best Practice: Maintain a threshold accuracy target (e.g., >85%). Retrain if your detector’s performance drops below this benchmark. |
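The retraining benchmark from the best practice above can be expressed as a small decision rule. The threshold value and the helper name `needs_retraining` are illustrative assumptions, not part of the product:

```python
ACCURACY_THRESHOLD = 85.0  # example benchmark from the tip above; tune per project

def needs_retraining(accuracy):
    """Accuracy is a percentage, or None when the card shows 'NA' (untrained)."""
    return accuracy is None or accuracy < ACCURACY_THRESHOLD

print(needs_retraining(91.24))  # → False (healthy detector)
print(needs_retraining(None))   # → True  ('NA' — never trained)
print(needs_retraining(72.5))   # → True  (below the 85% benchmark)
```

Running a rule like this over all detector cards makes it easy to spot which models in a large library are due for retraining.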
6. Filter and Search Detectors
As your list of detectors grows, use the Filter and Search tools to locate what you need quickly.
Steps to Filter:
- Click the Filter button (top-right).
- Select detector types:
- Object Detection
- Change Detection
- Classification
- Click Apply.
Search:
- Use the search bar to locate detectors by name.
| 💡 Tip: Combine the filter and search tools to pinpoint detectors precisely—especially in high-volume projects. |
7. View Public Detectors
Public detectors are shared, pre-trained models that you can use as-is. These cannot be modified but are excellent for baseline analysis and quick deployment.
What you can do:
- View detector names, classes, date, and accuracy.
- Apply them in project workflows.
What you cannot do:
- Rename, delete, retrain, or manage access.
| 💡 Tip: Public detectors are ideal for validation, benchmarking, and use cases with standard detection needs. |
| Feature / Action | Private Detectors | Public Detectors |
| --- | --- | --- |
| Rename | ✅ Yes | ❌ No |
| Manage Access | ✅ Yes | ❌ No |
| Delete | ✅ Yes | ❌ No |
| Retrain | ✅ Yes | ❌ No |
| Monitor Accuracy | ✅ Yes | ✅ Yes |
| Filter by Detector Type | ✅ Yes | ✅ Yes |
| Toggle View (Public/Private) | ✅ Yes | ✅ Yes |
| Use in Project Workflow | ✅ Yes | ✅ Yes |
4.2 Train a New Detector
The Train a Detector module in AeroMegh Intelligence allows you to build custom AI models tailored to your geospatial analysis needs. These models—known as detectors—are trained using aerial images and annotations to perform one of three detection tasks:
| Detector Type | Description |
| --- | --- |
| Object Detection | Detects and locates specific objects in aerial imagery (e.g., pits, poles, crops) |
| Change Detection | Compares two time-separated images of the same area to identify visual changes |
| Classification | Classifies images or regions into defined sub-classes (e.g., soil, road, tree) |
What You’ll Do in This Process
Training a detector involves the following key steps:
- Add a New Detector – Define its name, type, and purpose
- Add Images for Training – Upload or select relevant image data
- Select Images from Existing Projects – Choose datasets that fit the detection objective
- Define Areas – Mark training, testing, and accuracy zones on each image
- Create Classes – Label the categories your model should learn
- Add Annotations – Draw regions linked to classes for the model to learn from
- Train the Detector – Initiate the model training process
- Review Training Confirmation – Check for successful initiation and next steps
- Apply Pro Tips – Learn how to improve training performance and accuracy
| 💡 Tip: Before you begin, prepare a dataset that includes diverse images and clear examples of what you want the detector to learn. The better the data, the better your model. |
| 🔒 Note: Detector type cannot be changed after creation. Choose carefully based on your objective. |
Steps to Train a New Detector
1. Add a New Detector
The first step in creating an AI-powered detector is to define its basic configuration: name, type, and purpose.
Steps to Create a New Detector
- Navigate to the Detectors tab from the top menu.
- Click the Add Detector button beside the detector count.
- The Train a New Detector popup window appears.
- Fill in the required details:
- Detector Name: Enter a unique and meaningful name.
- Detector Type: Choose from:
- Object Detection
- Change Detection
- Classification
- Description (Optional): Brief notes or objectives.
- Click Create to proceed.
If the details entered are invalid, one of the following validation errors may appear:
| Trigger | Error Message | Resolution |
| --- | --- | --- |
| Empty name field | Detector name is required. | Provide a valid name. |
| No type selected | Please select a detector type. | Choose one of the three types. |
| Duplicate detector name | Detector with this name already exists. | Change the name (e.g., append _v2). |
| Invalid characters | Invalid characters used in name. | Use only letters, numbers, _, or -. |
| Name too long | Name exceeds character limit. | Limit name to 50 characters or fewer. |
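The validation rules in the table above can be sketched as a single check. This is an illustrative reconstruction of the documented behaviour—`validate_detector_name` is a hypothetical helper, not an actual platform API:

```python
import re

MAX_NAME_LEN = 50  # character limit stated in the table above
NAME_RE = re.compile(r"^[A-Za-z0-9_-]+$")  # letters, numbers, underscore, hyphen

def validate_detector_name(name, existing_names):
    """Return the matching error message from the table above, or None if valid."""
    if not name:
        return "Detector name is required."
    if len(name) > MAX_NAME_LEN:
        return "Name exceeds character limit."
    if not NAME_RE.match(name):
        return "Invalid characters used in name."
    if name in existing_names:
        return "Detector with this name already exists."
    return None

print(validate_detector_name("Car Detection!", set()))                # → Invalid characters used in name.
print(validate_detector_name("Car_Detection_v2", {"Car_Detection"}))  # → None (valid)
```

Appending a suffix such as `_v2`, as the table suggests, is usually the quickest way to resolve a duplicate-name error.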
| 🔒 Note: Detector names must be unique across your private detectors. |
What Happens Next?
- A new detector card appears in the list.
- Status will display: Requires Training.
- Click the card to continue with the training steps.
2. Add Images for Training
After creating a new detector, the next step is to add the image data that the model will learn from. Images must be selected from existing projects in AeroMegh Intelligence.
When you open the training screen for the first time:
Click on the detector card (status: Requires Training). This opens the Train Detector workspace. The screen will be blank initially, with prompts to begin uploading or selecting training images.
The process differs slightly based on the detector type:
| Detector Type | Image Requirement |
| --- | --- |
| Object Detection | Add 1 image from a project |
| Classification | Add 1 image from a project |
| Change Detection | Add 2 images: one Base and one Secondary image |
Steps to Add Images for Training
- On the Train Detector screen, go to the Images Panel on the right-hand side.
- Click the Add Image button.
- The Select Images for Training Detector window will open.
- You will see:
- A list of available projects containing uploaded images.
- A search bar to quickly find the relevant project.
Selecting Images from a Project
- Use the search bar or scroll to locate the required project.
- Click the project name to view available images.
- Select the image(s) by ticking the checkbox beside the thumbnail:
- For Object Detection or Classification: Select 1 image.
- For Change Detection: Select 2 images — one will be set as the Base Image, and the second as the Secondary Image.
- Click Select to confirm your selection.
| 💡 Tip: Train Smarter with More Images — Adding just one image is enough to start, but why stop there? Using a variety of images (different lighting, angles, and scenes) helps your detector generalise. More images = smarter models. If your goal is production-ready detection, give your detector more to learn from. |
💡 Tips for Better Image Selection
- Use diverse image sources across different environments or times of day to improve generalisability.
- Ensure the selected images accurately represent the types of objects, areas, or changes your detector is expected to learn.
- For Change Detection, make sure both images are from the same location, taken at different times.
💡 Tip: More meaningful and varied images lead to better training results. Avoid using identical or low-quality images.
3. Define Areas
Toolbar Activation Logic
When you first open the Train Detector screen (before adding images), only the Pan and Center tools are available on the left-hand toolbar.
Once you add an image, the toolbar updates with additional tools:
- Training Area
- Testing Area
- Accuracy Area
- Import Annotations
- Export Annotations
Area Types and Colours
| Area Type | Purpose | Visual Cue |
| --- | --- | --- |
| Training Area | Area where the model will learn from annotations | Yellow dashed polygon |
| Testing Area | Area used to evaluate the model during training | Blue dashed polygon |
| Accuracy Area | Used to validate performance metrics | Green dashed polygon |
Steps to Define Each Area
On the Train Detector screen, you must draw defined zones to guide how the detector learns and evaluates. These zones are created using polygon tools available in the left-hand toolbar once at least one image is added.
1. Training Area
- Define a Training Area (Required)
The Training Area is where the detector learns to identify and classify objects based on annotations you create.
Steps:
- On the left toolbar, click the Training Area tool.
- Move your cursor to the image and click to set points that outline the area.
- Close the shape by clicking back on the first point or double-clicking the last point.
- The area will now be highlighted with a yellow dashed line.
You must define at least one Training Area per image for the detector to train.
2. Testing Area
- Define a Testing Area (Optional but Recommended)
The Testing Area is used to evaluate how well the model performs during training. It allows the system to check predictions against known annotations that it hasn’t used for learning.
Steps:
- Select the Testing Area tool from the left toolbar.
- Click on the image to draw the boundary points.
- Close the polygon to finalise the shape.
- The area will be shown as a blue dashed line.
| 💡 Tip: Use the testing area to measure the model’s real-time learning performance and spot overfitting. |
3. Accuracy Area
- Define an Accuracy Area (Required)
The Accuracy Area is used to validate the final performance of the trained model using independent data.
Steps:
- Select the Accuracy Area tool from the toolbar.
- Click around the region to outline the shape.
- Complete the polygon to lock the shape in place.
- The area will now appear as a green dashed line.
Every image must contain at least one Accuracy Area before training can start.
Area Validation Rules
To ensure proper training, the following area-related conditions must be met:
| Rule | Requirement |
| --- | --- |
| Minimum area types per image | At least 1 Training Area and 1 Accuracy Area |
| Minimum annotations per area type | 3 annotations per area |
| Total annotations across all areas | At least 20 annotations |
| ⚠️ Warning: The detector will not allow you to begin training unless these conditions are satisfied. |
| 💡 Tip: Use Training Areas for diverse object examples, Accuracy Areas for precise evaluation, and Testing Areas to fine-tune performance if needed. |
⚠️ Additional Notes on Area & Annotation Rules
| 💡 Tip: These requirements must be fulfilled for the “Train Detector” button to be enabled. If not, AeroMegh Intelligence will block training until they are corrected. |
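The area and annotation rules above can be summarised as one readiness check. The sketch below assumes annotation counts are collected per area; `training_ready` is a hypothetical illustration of the documented validation, not platform code:

```python
def training_ready(areas):
    """areas maps an area type ('training', 'testing', 'accuracy') to a list
    of per-area annotation counts. Mirrors the validation rules above."""
    if not areas.get("training") or not areas.get("accuracy"):
        return False  # need at least 1 Training Area and 1 Accuracy Area
    all_counts = [c for counts in areas.values() for c in counts]
    if any(c < 3 for c in all_counts):
        return False  # every defined area needs at least 3 annotations
    return sum(all_counts) >= 20  # and at least 20 annotations in total

print(training_ready({"training": [10, 5], "accuracy": [6]}))  # → True
print(training_ready({"training": [2], "accuracy": [2]}))      # → False (too few per area and in total)
```

The second example corresponds to the situation described later in this chapter: 2 annotations in the Training Area and 2 in the Accuracy Area leave training disabled even though both areas exist.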
4. Classes
What is a Class?
In AeroMegh Intelligence, a Class is a label or category that defines what the AI model should learn to recognise in an image. Every annotation you create during the training process must be linked to a class.
Each class acts as a semantic identifier that tells the detector,
“This is a weed.”
“This is a cauliflower.”
“This is soil.”
Without classes, the model has no way of understanding what it’s learning—it would simply see shapes and colours without meaning.
Real-World Example:
In an agricultural use case, you might want the AI to detect and separate:
- Cauliflower (desired crop)
- Weed (unwanted growth)
- Soil (background)
These are three distinct classes. When you annotate image regions and label them accordingly, the detector learns:
- What each item looks like
- How they differ from one another
- Where each typically appears
Why Classes Matter
Foundation for Learning
Classes form the foundation of your model’s intelligence. Without them, your detector cannot classify, compare, or distinguish objects.
Model Accuracy Depends on Class Quality
Well-labelled, clearly defined classes lead to higher-quality predictions and lower false positives or misclassifications.
Supports Complex Scenarios
In projects where multiple object types exist (e.g., utility poles, cables, road damage), classes help your detector understand the landscape.
Improves Maintainability
Using classes keeps your annotations clean, consistent, and easy to interpret—even months later or when working in teams.
| 💡 Insight: Think of classes as the “vocabulary” your detector is learning. The better defined and more consistent your vocabulary, the more fluent and intelligent your detector becomes. |
➕ How to Create a Class
- On the Train Detector screen, locate the Classes Section (usually below the image section).
- Click the Add Class button.
- In the Add New Class dialog, enter the name of the class.
- Click Add to save.
| 💡 Tip: Use short, descriptive, and consistent names (e.g., Pole, Crack, Tree) to keep your model organised. |
| ⚠️ Important: You must create at least one class before annotation tools become available. |
Manage Existing Classes
Each created class can be renamed or deleted from the Classes Panel using the three-dot menu (⋮) next to the class name.
Rename a Class
- Click the three-dot menu (⋮) next to the class name.
- Select Rename.
- In the Update Class dialog, enter the new name.
- Click Update to save changes.
After saving, a confirmation dialog will appear with the message:
“Class updated successfully.”
| 💡 Best Practice: Rename classes early in the training process to avoid inconsistencies. |
| 💡 Note: If detector training is already completed, the Rename option will be disabled. Classes tied to completed models cannot be renamed. |
Delete a Class
1. Click the three-dot menu (⋮) next to the class name.
2. Select Delete.
3. A confirmation dialog appears:
“Delete Classes
Are you sure you want to delete the selected class(es)?”
4. Choose:
- Confirm to delete
- Cancel to keep the class
A confirmation message appears once the class has been deleted.
| 💡 Note: If detector training is already completed, the Delete option will be disabled. Classes tied to completed models cannot be removed. |
Summary
| Action | Availability | UI Location |
| --- | --- | --- |
| Add Class | Before training | Classes Panel → Add Class |
| Rename Class | Before/after training (if allowed) | ⋮ Menu → Rename |
| Delete Class | Only before training is complete | ⋮ Menu → Delete |
| Required to Annotate | ✅ Yes | Annotation tools unlock only after at least 1 class is created |
| 💡 Tip: Plan your class structure early. Well-defined and consistently named classes lead to more accurate and maintainable detectors. |
5. Annotations
Annotations define the precise regions on images that your detector should learn from. Each annotation is linked to a Class and drawn within a defined Training, Testing, or Accuracy area.
High-quality annotations are the key to building a detector that performs well across different environments and real-world use cases.
What is an Annotation?
An Annotation is a marked region on an image used to train your AI detector. It tells the model:
“This is what a Tree looks like.”
“This shape belongs to the class ‘Weed’.”
Each annotation:
- Is linked to a class (e.g., Pole, Crack, Soil)
- Represents a region within a Training, Testing, or Accuracy area
- Provides examples the AI model uses to learn and validate its understanding
Example
In a land-use classification project:
- A rectangle around a tree → Class: Tree
- A polyline along a crack in pavement → Class: Crack
💡 More precise and varied annotations = better detector performance.
Unlocking Annotation Tools
When you first open the Train Detector screen, annotation tools are disabled.
To activate them:
- Add at least one image
- Define your Training/Accuracy/Testing areas
- Create or select a Class
Once a class is active, the following annotation tools become available in the left-hand toolbar.
Annotation Tools and How to Use Them
| Tool | Best For |
| --- | --- |
| Rectangle | Box-shaped features (e.g., vehicles, panels) |
| Circle | Round features (e.g., tree tops, manholes) |
| Polygon | Irregular shapes (e.g., patches, fields) |
| Polyline | Long linear features (e.g., cracks, cables) |
Create Annotation
Rectangle Tool
Steps:
- Select the Rectangle Tool.
- Click on the image to set the starting corner.
- Drag to define width and height.
- Release to complete.
- Click ✔ Done to finalise.
💡 Great for: poles, signs, panels.
Circle Tool
Steps:
- Click the Circle Tool.
- Click once to place the centre.
- Drag to set radius.
- Release and click ✔ Done.
💡 Great for: trees, round tanks, manholes.
Polygon Tool
Steps:
- Select the Polygon Tool.
- Click around the object to form edges.
- Close the shape by clicking the first point or double-click the last.
- Click ✔ Done.
💡 Best for irregular areas like vegetation or cracks.
Polyline Tool
Steps:
- Click the Polyline Tool.
- Click multiple points to trace the feature.
- Double-click to complete the line.
- Click ✔ Done.
💡 Use for roads, cables, pipe lines.
Manage Annotations
Clicking any existing annotation opens a contextual menu with the following options:
| Action | Description |
| --- | --- |
| Copy | Duplicate the selected annotation |
| Paste | Paste copied annotation onto current or next image |
| Edit | Modify size, shape, or position |
| Delete | Remove the annotation |
| Done | Finalise the annotation and save changes |
💡 Tip: Use Copy–Paste for fast and consistent annotation of repeating objects across images.
Annotation Rules & Validation
To proceed with training, your annotations must meet the following criteria:
| Requirement | Minimum Threshold |
| --- | --- |
| Per Area (Training/Accuracy) | At least 3 annotations per area |
| Across All Areas | At least 20 annotations in total |
| Linked to a Class | Every annotation must be tied to a class |
Clarification
These annotation rules are enforced strictly before starting the training process.
- If any Training or Accuracy Area contains fewer than 3 annotations, or
- If the total annotations are fewer than 20,
… the system will prevent you from proceeding to train the detector.
Example: If you create only 2 annotations in the Training Area and 2 in Accuracy, training will remain disabled—even if other steps are completed.
Pro Tip: Use the annotation counter or summary panel to keep track of annotation totals across all images and areas.
| 💡 Note: Training cannot be started unless these conditions are met. |
Best Practices for Annotation
- Use the appropriate shape for each object
- Keep annotations precise and tightly fitted
- Avoid overlapping annotations unless necessary
- Use consistent class labelling across images
- Don’t forget to finalise each annotation with ✔️
| 💡Pro Tip: Annotate diverse examples of the same class—different angles, sizes, lighting—for better model generalisation. |
6. Train the Detector
After completing all the required setup steps—adding images, defining areas, creating classes, and adding annotations—you are now ready to initiate the training of your detector.
This section guides you through starting the training process and understanding the system’s confirmation messages.
Pre-Training Checklist – All Conditions Must Be Met
Before clicking the Train Detector button, make sure all of the following requirements are satisfied:
- ✔ At least one Training Area and one Accuracy Area are drawn
- ✔ Each area contains a minimum of 3 annotations
- ✔ At least 20 annotations are present across all areas combined
- ✔ Every annotation is linked to a valid Class
- ✔ The Detector Type is selected and correctly set
| ⚠️ Note: If any of these conditions are not met, the Train Detector button will be disabled, and a validation alert will notify you of what’s missing. |
How to Start Training
1. Navigate to the Train Detector screen.
2. Click the Train Detector button located in the top-middle of the image view screen.
3. A confirmation dialog will appear asking:
“Are you sure you want to start training this detector?”
- Click Yes to proceed.
- Click No to cancel and return to editing if needed.
Training Confirmation
If the setup is valid and training begins successfully:
- A message will appear:
“Detector training started successfully.”
- Click OK to close the dialog.
- The detector’s status will now change to Training in Progress. You can monitor the progress directly from the Detectors screen.
| 💡 Tip: Training duration depends on image volume, annotation complexity, and number of classes. If it takes longer than expected, consider simplifying your dataset or reviewing annotation density. |
Pro Tips for Better Performance
Training a detector isn’t just about drawing shapes and clicking “Train.” The quality of your input data, annotation discipline, and model maintenance practices directly affect how accurate and reliable your detector becomes in real-world scenarios.
Use the following best practices to maximise your detector’s learning efficiency and output accuracy.
- Use Diverse Training Data
Why it matters: A model trained on similar-looking images may perform poorly when introduced to new conditions.
- Include images from different locations, angles, lighting conditions, and resolutions.
- If applicable, vary weather, seasons, or environmental backgrounds.
| 💡 Tip: For detectors deployed in the field, training on real-world variation builds robustness and reduces false positives. |
- Retrain Regularly with New Data
Why it matters: Over time, environments and object appearances may change.
- Periodically retrain your detector with fresh data to maintain high accuracy.
- Retrain especially when:
- New object types are introduced
- The environment evolves (e.g., seasonal changes, new infrastructure)
- Detector performance starts declining
| 💡 Tip: Save versions of your trained detectors so you can benchmark new training runs against previous ones. |
- Eliminate Irrelevant or Noisy Data
Why it matters: Poor-quality or unrelated images dilute your model’s learning process.
- Remove blurry, overexposed, or irrelevant images before training.
- Avoid including objects or areas unrelated to the class definitions.
| 💡 Tip: Keep your training dataset focused and consistent with your detection objective. |
- Prioritise Annotation Quality Over Quantity
Why it matters: A few well-labelled examples are more valuable than many inaccurate ones.
- Ensure each annotation is precisely aligned with the object’s shape.
- Avoid sloppy or overlapping annotations unless required.
- Maintain class consistency across images.
| 💡 Tip: Use tools like Polygon or Polyline when objects don’t fit cleanly into rectangular shapes. |
- Validate Before Training
Why it matters: Missed requirements delay training or reduce model effectiveness.
- Confirm:
- At least 1 Training and 1 Accuracy Area are defined
- Each area has 3+ annotations
- The total annotation count is at least 20
- Every annotation is tied to a valid class
| 💡 Tip: Use the annotation validation warnings in the interface to quickly spot gaps. |
- Test Before Deployment
Why it matters: Even a well-trained model may perform differently in live environments.
- Use the Testing Area to evaluate detector accuracy before full-scale deployment.
- Examine false positives/negatives and retrain if needed.
| 💡 Tip: Always test with images that weren’t used during training. |
Summary: The Formula for High-Performance Detectors
| Strategy | Impact |
| --- | --- |
| Diverse Images | Better generalisation across environments |
| Quality Annotations | Improved model precision and fewer errors |
| Regular Retraining | Sustained accuracy over time |
| Clean Data | Faster training, less confusion |
| Validation Checkpoints | Reduced errors before model execution |
| Field Testing | Ensures readiness for real-world use |