MSUS Solution Accelerator

Manufacturing Vision Accelerator for AMD64 architectures

Welcome to the Manufacturing Vision Accelerator repo - we hope what you find here is helpful in your pursuit of vision analytics on the Edge! The main portion of this code and process flow was created by me and Nick Kwiecien, Ph.D., with a great number of technical collaborators on this AIoT journey as well, including Rick Durham (Azure AI/ML) and Keith Hill (Azure PM).

This is a sample repository of working code - nothing more, nothing less. There are no warranties or guarantees, stated or otherwise implied.

Architecture Walkthrough Video

Demo Walkthrough Video

Accelerator Overview Video

Azure Resource Setup Video

Click here to access the Hands-on-Lab

Module Deep Dives

Part 1 - CIS (capture|inference|store) module

Part 2 - CIS (capture|inference|store) module

Module Twin Configuration Tool module

Model Repo module

Custom Dashboard module

Image Upload module

File Cleanup module

File Upload/Download Utility modules

As you get started on your journey with the Accelerator, here are a few application notes to keep in mind:

1) Once you've deployed IoT Edge, you will need to edit /etc/docker/daemon.json with 'vi' or 'nano' - if this file doesn't exist, go ahead and create it. Here, I recommend adding some default values that apply to all containers.

For CPU-only deployments, I would use something similar to the following:

{
    "dns": ["8.8.8.8"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "10m",
        "max-file": "3"
    }
}

For Nvidia GPU-based deployments, I would paste the following:

{
    "dns": ["8.8.8.8"],
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "10m",
        "max-file": "3"
    }
}
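
A quick note on these examples: the DNS entry assumes a public resolver (8.8.8.8) is reachable from your Edge device, and the GPU variant assumes the NVIDIA Container Toolkit (which provides nvidia-container-runtime) is already installed. After saving daemon.json, restart the Docker daemon so the new defaults take effect - on a systemd-based distro:

sudo systemctl restart docker

Keep in mind that daemon-level defaults only apply to containers created after the restart; existing containers keep their original settings until they are recreated.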

2) Once you've built and deployed the accelerator containers, a new directory at /home/edge_assets will be created on the Edge device. If you want to use the sample models and images contained in this repository for testing, you will need to elevate the permissions of this directory by running:

sudo chmod -R 777 /home/edge_assets

About our 'vision'

This project was built out of necessity - the need to provide that 'last mile' connectivity for remote vision workloads while also solving the unique challenges of these use cases. Taking a 'code-first' approach across a number of deep engagements with multiple manufacturing customers, we began to see universal patterns emerge:

  1. Build for flexibility:

Taking a 'code-first' approach meant being deliberate about choices of programming language and packages. For this project, we coded everything in Python and used only open-source elements, such as OpenCV for image capture/manipulation and the Open Neural Network Exchange (ONNX) format for vision models. The goal is to provide a code base that is usable as is, yet malleable enough that customers and/or our partners can easily customize it (a minimal capture-and-inference sketch follows this list).

  2. Address the 80%:

Within our vision engagements, we found that 80%+ of the use cases revolved around anomaly detection, so the focus of this repository is on object detection. This could mean detecting foreign objects on a production line, errors in assembly, or safety infractions such as missing PPE or a spill/leakage of product. While image classification and segmentation are also important areas this repository will eventually expand into, the initial focus is on the majority use case. In time, we will also provide patterns for integrating Computer Vision services such as OCR to augment capabilities.

  3. Plan for model retraining:

What many of the out-of-the-box solutions fail to plan for is the eventual need for model retraining. A vision solution has to be able to augment its datasets as model drift or 'data' drift happens - it's not a question of if, but of when. On the model side, perhaps a bias in your classes is revealed during production inferencing, requiring a rebalancing of the class representation. On the 'data' side, a camera may get knocked out of alignment, the lighting in a plant may change, or seasonal changes may shift the available natural light.

  4. Always think about the Enterprise perspective:

The other challenge we've seen consistently in pre-built solutions is the propensity to create yet another silo. Although the lower layers of the ISA-95 network have traditionally been partially or totally 'air-gapped' in Manufacturing, the current trend towards Industry 4.0 has created a paradigm shift to connect OT networks to IT systems. Vision systems have also traditionally operated in that 'air-gapped' mode, but few have made the transition to Enterprise transparency. From what we've observed, vision analytics systems provide intrinsic value to front-line plant engineers and workers, but that unique visibility into operations has value all the way to the C-suite. From QA to ERP to MES, vision analytics change the way organizations do business for the better.
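
To make the 'code-first' point above concrete, here is a minimal capture-and-inference sketch using only the open-source pieces named in item 1 - OpenCV for frame capture and ONNX Runtime for model execution. The model path, camera URL, input size, and preprocessing steps are illustrative assumptions, not the accelerator's actual CIS module code (see the CIS deep dives above for that):

import cv2
import numpy as np
import onnxruntime as ort

MODEL_PATH = "model.onnx"          # assumption: your exported ONNX model
CAMERA_URL = "rtsp://camera/feed"  # assumption: your camera's stream URL
INPUT_SIZE = (416, 416)            # assumption: whatever your model expects

# Load the model; swap in "CUDAExecutionProvider" on a GPU-enabled device.
session = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Grab a single frame from the camera.
capture = cv2.VideoCapture(CAMERA_URL)
ok, frame = capture.read()
capture.release()

if ok:
    # Resize, convert BGR -> RGB, scale to [0, 1], and reorder to NCHW.
    blob = cv2.resize(frame, INPUT_SIZE)
    blob = cv2.cvtColor(blob, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    blob = np.transpose(blob, (2, 0, 1))[np.newaxis, ...]
    # Run inference; post-processing of the detections is model-specific.
    outputs = session.run(None, {input_name: blob})
    print(outputs[0].shape)

From here, the 'store' step is simply a matter of persisting the raw frame and the post-processed detections wherever your pipeline needs them.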

License

Copyright (c) Microsoft Corporation

All rights reserved.

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
