Create an end-to-end machine learning architecture which includes model training,
testing and operationalization, and infrastructure and endpoint monitoring.
1. Python
2. Shell scripting
3. AWS as the cloud provider
4. Prometheus and Grafana
5. FastAPI for the model endpoint
6. S3 bucket as the feature store and model registry
7. Jenkins as the CI/CD tool
```bash
conda create --prefix ./env python=3.9
conda activate ./env
pip install -r requirements.txt
```
1. Feature Store: an S3 bucket with a Lambda call on the put event
2. Model Registry (a promotion sketch follows this list):
   - Testing
   - Production
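As an illustration of how the registry's two areas could be used, here is a minimal boto3 sketch; the bucket name, key names, and the promotion flow are assumptions for illustration, not the repo's actual values.

```python
# Sketch of using one S3 bucket as a model registry with Testing and
# Production areas. Bucket and key names are placeholders for illustration.
import boto3

s3 = boto3.client("s3")
REGISTRY_BUCKET = "my-model-registry"  # placeholder bucket name

# Register a freshly trained model artifact into the Testing area.
s3.upload_file("model.pkl", REGISTRY_BUCKET, "Testing/model.pkl")

# After the Test job passes, promote the same artifact to Production.
s3.copy_object(
    Bucket=REGISTRY_BUCKET,
    CopySource={"Bucket": REGISTRY_BUCKET, "Key": "Testing/model.pkl"},
    Key="Production/model.pkl",
)
```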
Install Jenkins on an EC2 instance and create a webhook from the GitHub repository so Jenkins is notified whenever a push lands.
Create three jobs: Train, Test and Deploy. While creating the separate jobs, remember to point each one at its jenkins-jobs-script; I have written three separate scripts there.
Then create a master pipeline that runs the Train, Test and Deploy jobs in sequence.
Create a Lambda trigger on the S3 feature store's put event; use the python3.7 runtime in Lambda as it has the requests library pre-installed.
The Lambda function remotely triggers the master pipeline to run all the stages, as sketched below.
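Here is a minimal sketch of such a handler. The Jenkins host, job name, user, and tokens are placeholders, and it assumes "Trigger builds remotely" is enabled on the master pipeline so the `/build?token=...` endpoint works.

```python
# Sketch of a Lambda handler that fires on the S3 put event and remotely
# triggers the Jenkins master pipeline. All host, job, and token values
# below are placeholders.
import requests  # pre-installed in the python3.7 Lambda runtime, per above

JENKINS_URL = "http://<jenkins-ec2-host>:8080"  # placeholder
JOB_NAME = "master-pipeline"                    # placeholder
TRIGGER_TOKEN = "<remote-trigger-token>"        # placeholder
AUTH = ("<jenkins-user>", "<api-token>")        # placeholder

def lambda_handler(event, context):
    # Log which feature-store object triggered this run.
    record = event["Records"][0]["s3"]
    print(f"New object: s3://{record['bucket']['name']}/{record['object']['key']}")

    # Hit Jenkins's remote build endpoint to start the master pipeline.
    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB_NAME}/build",
        params={"token": TRIGGER_TOKEN},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    return {"statusCode": resp.status_code}
```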
Maintain a configuration file. Changes are required in:
- Feature-Store
- Model Registry
- Email Params
  - Put a Gmail application key here, otherwise you will get an error
- Ml_Model_params
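As a sketch, assuming the configuration file is a YAML file named config.yaml, loading it might look like this; the file name, format, and every key below are hypothetical placeholders, so match them to the actual config in the repo.

```python
# Minimal sketch of loading the configuration file. The file name, format,
# and every key below are assumptions for illustration only.
import yaml  # pip install pyyaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

feature_store_bucket = cfg["Feature-Store"]["bucket"]    # hypothetical key
model_registry_bucket = cfg["Model Registry"]["bucket"]  # hypothetical key
gmail_app_key = cfg["Email Params"]["gmail_app_key"]     # hypothetical key
model_params = cfg["Ml_Model_params"]                    # hypothetical key
```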
Install Prometheus on the EC2 machine. In its configuration file, add a scrape job for each endpoint:
```yaml
scrape_configs:
  - job_name: "python_endpoint"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:5000"]

  - job_name: "wmi_exporter"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9182"]
```
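For the python_endpoint job to have anything to scrape, the FastAPI service on port 5000 must expose /metrics. Below is a minimal sketch of one way to do this using the prometheus_client package; how this repo actually instruments its endpoint may differ.

```python
# Sketch of exposing Prometheus metrics from the FastAPI endpoint so the
# "python_endpoint" scrape job above has something to collect. Using
# prometheus_client here is an assumption, not necessarily this repo's choice.
from fastapi import FastAPI
from prometheus_client import Counter, make_asgi_app

app = FastAPI()
PREDICTIONS = Counter("predictions_total", "Number of prediction requests served")

@app.get("/predict")
def predict():
    PREDICTIONS.inc()          # count each request for the dashboard
    return {"prediction": 0}   # placeholder response

# Mount the metrics ASGI app at /metrics (the Prometheus default path).
app.mount("/metrics", make_asgi_app())

# Run on port 5000 to match the scrape target:
#   uvicorn main:app --host 0.0.0.0 --port 5000
```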
Install Grafana; it runs on port 3000 by default.
Add Prometheus as a data source in it and create a monitoring dashboard.
Feel free to improve this project and fix any issues you find, as nothing is perfect.