ImageInquiry Project Overview

Abstract

ImageInquiry is a web application that allows users to upload and search images based on detected objects or tags. Users can input search queries such as "show me images of beaches and pineapple," and the website will return relevant images from their collection.

Motivation

The motivation behind ImageInquiry was to develop a solution that helps users easily find their photos based on context or content. For example, a user wanting to reminisce about a beach visit where they ate pineapple two years ago can easily locate these specific images.

Features

1. Upload

  • Users upload images through an Amazon API Gateway endpoint.
  • API Gateway forwards the uploaded image to an S3 bucket.
  • An S3 event trigger invokes the LF1 Lambda function on each upload.
  • LF1 uses Amazon Rekognition to analyze the uploaded image and retrieve labels.
  • These labels, along with the bucket and object key, are indexed in Elasticsearch (see the sketch after this list).
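A minimal sketch of what LF1 might look like. The Elasticsearch endpoint, index name ("photos"), and label thresholds are illustrative assumptions, and the index call is shown as a plain unsigned HTTP request for brevity; a real deployment would sign requests to the domain (e.g., with requests-aws4auth) and bundle the `requests` library with the function.

```python
import json

import boto3
import requests  # assumption: packaged with the Lambda deployment

rekognition = boto3.client("rekognition")

ES_ENDPOINT = "https://search-imageinquiry-xxxx.us-east-1.es.amazonaws.com"  # placeholder
ES_INDEX = "photos"  # placeholder index name


def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Ask Rekognition for labels directly from the S3 object.
        response = rekognition.detect_labels(
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MaxLabels=10,
            MinConfidence=80,
        )
        labels = [label["Name"].lower() for label in response["Labels"]]

        # Index the object reference and its labels in Elasticsearch.
        doc = {"bucket": bucket, "objectKey": key, "labels": labels}
        requests.post(
            f"{ES_ENDPOINT}/{ES_INDEX}/_doc",
            data=json.dumps(doc),
            headers={"Content-Type": "application/json"},
        )
    return {"statusCode": 200}
```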

2. Search

  • Users submit queries (e.g., "Show me images of a dog") via API Gateway.
  • API Gateway routes these requests to the LF2 Lambda function.
  • LF2 calls Amazon Lex, which fills its slots with the nouns or objects extracted from the query.
  • LF2 then queries Elasticsearch to retrieve the matching image locations in the S3 bucket (a sketch of LF2 follows this list).
  • The front end loads these images from a public S3 bucket.
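A minimal sketch of LF2 under a few assumptions: the query arrives as a `q` query-string parameter via an API Gateway proxy integration, the Lex bot name/alias and slot layout are placeholders, and the Elasticsearch request is again shown unsigned for brevity.

```python
import json

import boto3
import requests  # assumption: packaged with the Lambda deployment

lex = boto3.client("lex-runtime")

ES_ENDPOINT = "https://search-imageinquiry-xxxx.us-east-1.es.amazonaws.com"  # placeholder
ES_INDEX = "photos"  # placeholder index name


def lambda_handler(event, context):
    query = event["queryStringParameters"]["q"]  # e.g. "show me images of a dog"

    # Let Lex fill its slots with the nouns/objects mentioned in the query.
    lex_response = lex.post_text(
        botName="ImageInquiryBot",   # placeholder bot name
        botAlias="prod",             # placeholder alias
        userId="imageinquiry-search",
        inputText=query,
    )
    keywords = [v for v in (lex_response.get("slots") or {}).values() if v]

    # Search Elasticsearch for photos whose labels match any extracted keyword.
    es_query = {"query": {"terms": {"labels": [k.lower() for k in keywords]}}}
    result = requests.get(
        f"{ES_ENDPOINT}/{ES_INDEX}/_search",
        data=json.dumps(es_query),
        headers={"Content-Type": "application/json"},
    ).json()

    # Return S3 URLs for the front end to display.
    image_urls = [
        f"https://{hit['_source']['bucket']}.s3.amazonaws.com/{hit['_source']['objectKey']}"
        for hit in result["hits"]["hits"]
    ]
    return {"statusCode": 200, "body": json.dumps({"results": image_urls})}
```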

Current Features

  • Ability to associate custom labels with images.

Future Work

  • Enhance security by making the S3 bucket private and serving images through presigned URLs (see the sketch below).
  • Fine-tune object recognition to distinguish specific faces rather than labeling every face simply as "person."
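For the presigned-URL idea, a minimal boto3 sketch; the bucket and key names are placeholders. The front end would fetch these short-lived links from the backend instead of reading from a public bucket.

```python
import boto3

s3 = boto3.client("s3")

# Generate a time-limited link to a private object (placeholder bucket/key).
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "imageinquiry-photos", "Key": "beach-pineapple.jpg"},
    ExpiresIn=3600,  # link expires after one hour
)
print(url)
```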