A React + AWS Serverless full-stack implementation of the example applications found in the official OpenAI API documentation. See this system architecture diagram for details. This is an instructional tool for the YouTube channel "Full Stack With Lawrence" and for the University of British Columbia course, "Artificial Intelligence Cloud Technology Implementation".
Works with Linux, Windows and macOS environments.
- Verify project requirements: AWS account and CLI access, Terraform, Python 3.11, NPM and Docker Compose.
- Review and edit the master Terraform configuration file.
- Run `make` and add your credentials to the newly created `.env` file in the root of the repo.
- Initialize, build and run the application.
git clone https://github.com/FullStackWithLawrence/aws-openai.git
make # scaffold a .env file in the root of the repo
make init # initialize Terraform, Python virtual environment and NPM
make build # deploy AWS cloud infrastructure, build ReactJS web app
make run # run the web app locally in your dev environment
- Complete OpenAI API: Deploys a production-ready API for integrating with OpenAI's complete suite of services, including ChatGPT, DALL·E, Whisper, and TTS.
- LangChain Integration: A simple API endpoint for building context-aware, reasoning applications with LangChain's flexible abstractions and AI-first toolkit. Use this endpoint to develop a wide range of applications, from chatbots to question-answering systems.
- Dynamic ChatGPT Prompting: Simple Terraform templates to create highly personalized chatbots. Program and skin your own custom chat apps in minutes.
- Function Calling: OpenAI's most advanced integration feature to date. OpenAI API Function Calling enables developers to integrate their own custom Python functions into the processing of chat responses. For example, while a chatbot powered by an OpenAI GPT model is generating a response, it can call these custom Python functions to perform specific tasks or computations, and then include the results in its response. This powerful feature can be used to create more dynamic and interactive chatbots that fetch real-time data, perform calculations, or interact with other APIs and services. See the Python source code for additional documentation and examples, including "get_current_weather()" from the official OpenAI API documentation.
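The flow described above can be sketched roughly as follows. This is an illustrative sketch, not this project's actual implementation: the tool schema follows the shape the OpenAI chat completions API expects, but the dispatch table and the hard-coded weather result are assumptions for demonstration.

```python
import json

# Hypothetical custom function the model is allowed to call. A real
# implementation would query a weather API; this one is hard-coded.
def get_current_weather(location: str, unit: str = "celsius") -> str:
    return json.dumps({"location": location, "temperature": 22, "unit": unit})

# JSON schema advertising the function to the model, in the tool-definition
# shape used by the OpenAI chat completions API.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}]

# Maps tool names the model emits to the local Python functions.
DISPATCH = {"get_current_weather": get_current_weather}

def handle_tool_call(tool_call: dict) -> str:
    """Route a model-issued tool call to the matching Python function."""
    fn = DISPATCH[tool_call["function"]["name"]]
    kwargs = json.loads(tool_call["function"]["arguments"])
    return fn(**kwargs)

# Simulated tool call, shaped like one found in a chat completion response.
result = handle_tool_call({
    "function": {"name": "get_current_weather",
                 "arguments": '{"location": "Vancouver, BC"}'}
})
```

In a full round trip, `result` would be sent back to the model in a follow-up message so it can weave the function output into its final answer.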
- Function Calling Plugins: We created our own YAML-based "plugin" model. See this example plugin and this documentation for details, or try it out on this live site. YAML templates can be stored locally or served from a secure AWS S3 bucket. You'll find a set of fun example plugins here.
Complete source code and documentation are located here.
A React app that leverages Vite.js, @chatscope/chat-ui-kit-react, and react-pro-sidebar.
- Robust, highly customizable chat features
- A component model for implementing your own highly personalized OpenAI apps
- Skinnable UI for each app
- Includes default assets for each app
- Small compact code base
- Robust error handling for non-200 response codes from the custom REST API
- Handles direct text input as well as file attachments
- Info link to the OpenAI API official code sample
- Build-deploy managed with Vite
Complete documentation is located here. Python code is located here.
A REST API implementing each of the 30 example applications from the official OpenAI API documentation using a modularized Terraform approach. It leverages OpenAI's suite of AI models, including GPT-3.5, GPT-4, DALL·E, Whisper, Embeddings, and Moderation.
- OpenAI API library for Python, with LangChain-enabled API endpoints where designated.
- Pydantic-based, CI/CD-friendly `Settings` configuration class that consistently and automatically manages Python Lambda initializations from multiple sources, including bash environment variables and `.env` and `terraform.tfvars` files.
- CloudWatch logging
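The project's actual configuration class is built on Pydantic; the stdlib-only sketch below merely illustrates the precedence idea behind multi-source settings (shell environment variables override `.env`, which overrides `terraform.tfvars`). The setting names and the naive parser here are simplifications, not the project's real code.

```python
import os

def parse_dotenv(text: str) -> dict:
    """Naive KEY=value parser for .env-style content (illustration only)."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip().strip('"')
    return result

def resolve_settings(tfvars: dict, dotenv: dict, environ=os.environ) -> dict:
    """Merge sources lowest-precedence first: tfvars < .env < shell env.

    Only keys already known from the file-based sources are picked up from
    the environment, so unrelated shell variables don't leak into settings.
    """
    merged = dict(tfvars)
    merged.update(dotenv)
    merged.update({k: v for k, v in environ.items() if k in merged})
    return merged

settings = resolve_settings(
    tfvars={"openai_endpoint_image_n": "4", "debug_mode": "false"},
    dotenv=parse_dotenv('OPENAI_API_KEY="sk-..."\ndebug_mode=true\n'),
    environ={},  # pass the real os.environ in practice
)
```

Here `debug_mode` resolves to the `.env` value because `.env` outranks `terraform.tfvars` in the merge order.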
- Terraform fully automated and parameterized build. Usually builds your infrastructure in less than a minute.
- Secure: uses AWS role-based security and custom IAM policies. Best-practice handling of secrets and sensitive data in all environments (dev, test, CI/CD, prod). Proxy-based API that hides your OpenAI API calls and credentials. Runs over HTTPS with an AWS-managed SSL/TLS certificate.
- Excellent documentation
- AWS serverless implementation. Free or nearly free in most cases.
- Deploy to a custom domain name
- git. Pre-installed on Linux and macOS.
- make. Pre-installed on Linux and macOS.
- AWS account
- AWS Command Line Interface
- Terraform. If you're new to Terraform then see Getting Started With AWS and Terraform
- OpenAI platform API key. If you're new to OpenAI API then see How to Get an OpenAI API Key
- Python 3.11: used to create the virtual environment for building the AWS Lambda Layer, and locally by pre-commit linters and code formatters.
- NodeJS: used with NPM for local ReactJS developer environment, and for configuring/testing Semantic Release.
- Docker Compose: used by an automated Terraform process to create the AWS Lambda Layer for OpenAI and LangChain.
Optional requirements:
- Google Maps API key. This is used by the OpenAI API Function Calling coding example, "get_current_weather()".
- Pinecone API key. This is used for OpenAI API Embedding examples.
Detailed documentation for each endpoint is available here: Documentation
To get community support, go to the official Issues Page for this project.
This project demonstrates a wide variety of coding best practices for managing mission-critical cloud-based microservices in a team environment, notably its adherence to the 12-Factor Methodology. Please see Code Management Best Practices for additional details.
We want to make this project more accessible to students and learners as an instructional tool while not adding undue code review workloads to anyone with merge authority for the project. To this end we've also added several pre-commit code linting and code style enforcement tools, as well as automated procedures for version maintenance of package dependencies, pull request evaluations, and semantic releases.
We welcome contributions! There are a variety of ways for you to get involved, regardless of your background. In addition to Pull requests, this project would benefit from contributors focused on documentation and how-to video content creation, testing, community engagement, and stewards to help us to ensure that we comply with evolving standards for the ethical use of AI.
For developers, please see:
- the Developer Setup Guide
- and these commit comment guidelines 😬😬😬 for managing CI rules for automated semantic releases.
You can also contact Lawrence McDaniel directly. Code composition as of Feb-2024:
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
Python 29 732 722 2663
HCL 30 352 714 2353
Markdown 52 779 6 2344
YAML 23 112 149 1437
JavaScript 39 114 127 1088
JSX 6 45 47 858
CSS 5 32 14 180
make 1 27 30 120
Text 6 13 0 117
INI 2 15 0 70
HTML 2 1 0 65
Jupyter Notebook 1 0 186 48
Bourne Shell 5 17 55 47
TOML 1 1 0 23
Dockerfile 1 4 4 5
-------------------------------------------------------------------------------
SUM: 203 2,244 2,054 11,418
-------------------------------------------------------------------------------