Gomoku - Backend Documentation 🉐

This is the backend documentation for the Gomoku Royale game.

Table of Contents

  • Introduction
  • Modeling the Database
  • Spring MVC Architecture
  • Application Architecture
  • Data Representation
  • Validation
  • Concurrency
  • Error Handling
  • Docker Compose Solution
  • Implementation Challenges
  • Further Improvements

Introduction

The backend server is a RESTful API that provides the functionality for the Gomoku Royale board game. It is written mainly in Kotlin, in a JVM Gradle project.

The JVM application is a Spring Boot application, built with Spring Initializr.

Some dependencies used in this project are:

  • Spring Web MVC
  • Spring Validation
  • Jackson
  • JDBI
  • PostgreSQL (JDBC driver)


Modeling the Database

Conceptual Model

The following diagram holds the Extended Entity-Relationship (EER) model for the information managed by the system.

[Extended Entity-Relationship (EER) diagram]

We highlight the following aspects:

The conceptual model has the following restrictions:

  • User entity:

    • The username and email attributes should be unique;
    • The username attribute should be 5–30 characters long;
    • The email attribute needs to follow the following regex pattern: ^[a-zA-Z0-9._-]+@[a-zA-Z0-9.-]+$.
  • Token entity:

    • The created_at and last_used_at attributes represent the seconds since the Unix epoch, and should be greater than 0.
    • The last_used_at attribute should be greater than or equal to the created_at attribute;
  • Statistics entity:

    • The games_played, games_won, games_drawn and points attributes should be greater than 0;
    • The games_won and games_drawn attributes should be less than or equal to the games_played attribute;
  • GameVariants entity:

    • The name attribute should be unique;
    • The boardSize attribute should be greater than 0;
  • Game entity:

    • The state attribute only accepts the following values: IN_PROGRESS, FINISHED;
    • The board attribute is of type jsonb and should be a valid JSON object.
    • The updated_at and created_at attributes represent the seconds since the Unix epoch, and should be greater than 0.
    • The updated_at attribute should be greater than or equal to the created_at attribute;
    • The host_id and guest_id attributes cannot reference the same user;
    • The lobby_id attribute has to be unique;
  • Lobby entity:

    • The created_at attribute represents the seconds since the Unix epoch, and should be greater than 0.

Physical Model

The physical model of the database is available in create-schema.sql.

To implement and manage the database, PostgreSQL was used.

The code/jvm/src/sql folder contains all SQL scripts developed:

We highlight the following aspects of this model:

  • Uniqueness of identifying attributes: In this database model, attributes that are not primary keys but uniquely identify an entity have been marked as unique. This ensures that these attributes maintain their uniqueness throughout the data, contributing to data integrity.

  • Selection of the Jsonb data type for the Board attribute: The choice to utilize the jsonb data type for the board attribute within the Game entity was a deliberate decision shaped by several key considerations:

    • Efficient storage and retrieval: Given that the board attribute is an abstract entity that can be represented in different ways by other entities (a specialization relation in this case), the jsonb data type was chosen to allow flexibility in the representation of the board subtypes while keeping storage and retrieval efficient.

    • Ease of representation: Since the board attribute could be represented in different ways, according to the game state, the jsonb data type was chosen to allow for flexibility in the representation of the board.

  • Lobby entity and game configuration: The decision not to make the Game entity a weak entity of the Lobby entity, and instead repeat the variant_id attribute which points to the game configuration, was made for efficiency and practicality. The Lobby entity represents a user's intention to start a game with a specific game configuration. When another user attempts to create a game with a game configuration (represented by the variant id) that matches an existing entry in the Lobby table, a new game is created with both players instead. The host's Lobby row is then deleted, allowing another user to create a game with the same configuration. This approach simplifies the matchmaking algorithm, as it only needs to search the Lobby table, which is smaller and only contains one row per game configuration combination, instead of the ever-growing Game table (a sketch of this flow is shown after this list).

  • Not always using check constraints for data integrity: Not all restrictions described in the conceptual model have been directly implemented using check constraints in the physical model. In cases where certain restrictions might evolve or expand in the future, such as the game variants, an entity was created to store the current supported values. Additionally, a foreign key was added to the Game entity to ensure data consistency and referential integrity while allowing for flexibility in adding new supported values.

  • Using epoch seconds for timestamps: The decision to use epoch seconds for the created_at and updated_at attributes was made for efficiency and simplicity. Epoch seconds are stored as a single integer, which makes them easy to compare and avoids time-zone and formatting concerns.
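
To illustrate the Lobby-based matchmaking described above, the following is a minimal sketch of the flow. The types and repository operations used here (LobbiesRepository, GamesRepository, MatchmakingResult) are illustrative assumptions and not the project's actual API.

// Minimal sketch of the Lobby-based matchmaking flow (illustrative names only).
data class Lobby(val id: Int, val hostId: Int, val variantId: Int)

sealed class MatchmakingResult {
    data class GameCreated(val gameId: Int) : MatchmakingResult()
    data class WaitingInLobby(val lobbyId: Int) : MatchmakingResult()
}

interface LobbiesRepository {
    fun findByVariant(variantId: Int): Lobby?
    fun create(hostId: Int, variantId: Int): Int
    fun delete(lobbyId: Int)
}

interface GamesRepository {
    fun create(hostId: Int, guestId: Int, variantId: Int): Int
}

fun matchmake(
    userId: Int,
    variantId: Int,
    lobbies: LobbiesRepository,
    games: GamesRepository
): MatchmakingResult {
    val lobby = lobbies.findByVariant(variantId)
    return if (lobby != null && lobby.hostId != userId) {
        // A host is already waiting with this configuration: create the game
        // with both players and free the Lobby row for future hosts.
        val gameId = games.create(hostId = lobby.hostId, guestId = userId, variantId = variantId)
        lobbies.delete(lobby.id)
        MatchmakingResult.GameCreated(gameId)
    } else {
        // No matching Lobby entry: register this user's intention to play.
        MatchmakingResult.WaitingInLobby(lobbies.create(hostId = userId, variantId = variantId))
    }
}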

Spring MVC Architecture

The Spring MVC framework was used to implement the REST API.

[Spring MVC architecture diagram]

In the above example, the client makes a request to the server that requires authentication. The request follows the pipeline and is handled by the AuthenticationInterceptor, which checks whether the user is in fact authenticated. To do that, the interceptor uses the RequestTokenProcessor to parse the token and validate it against the token validation value stored in the database. If the token is valid, the AuthenticatedUser information is placed on the request so that the AuthenticatedUserArgumentResolver can retrieve it and bind it to the controller method parameter when the handler is called. If the token is invalid, the AuthenticationInterceptor short-circuits the pipeline and returns an error response to the client.
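
As an illustration of this flow, below is a hedged sketch of what such an interceptor might look like. The RequestTokenProcessor signature, the request attribute key and the servlet package (jakarta vs. javax, depending on the Spring Boot version) are assumptions rather than the project's exact code.

// Sketch of an authentication interceptor (illustrative; the real
// AuthenticationInterceptor in /http/pipeline may differ).
import jakarta.servlet.http.HttpServletRequest   // javax.servlet.http on older Spring Boot versions
import jakarta.servlet.http.HttpServletResponse
import org.springframework.stereotype.Component
import org.springframework.web.servlet.HandlerInterceptor

data class AuthenticatedUser(val userId: Int, val token: String)

// Assumed abstraction: parses the Authorization header and validates the token
// against the token validation value stored in the database.
interface RequestTokenProcessor {
    fun processAuthorizationHeader(header: String?): AuthenticatedUser?
}

@Component
class AuthenticationInterceptor(
    private val tokenProcessor: RequestTokenProcessor
) : HandlerInterceptor {

    override fun preHandle(request: HttpServletRequest, response: HttpServletResponse, handler: Any): Boolean {
        val user = tokenProcessor.processAuthorizationHeader(request.getHeader("Authorization"))
        return if (user != null) {
            // Expose the authenticated user so the argument resolver can bind it
            // to the controller method parameter.
            request.setAttribute("authenticated-user", user)
            true
        } else {
            // Short-circuit the pipeline with an error response.
            response.status = 401
            response.addHeader("WWW-Authenticate", "bearer")
            false
        }
    }
}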

The LoggingFilter and RequestIdFilter are responsible for logging and generating a unique id for each request, respectively.
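
For illustration, a request-id filter could be implemented roughly as follows; this is a sketch under assumed names (the actual RequestIdFilter and LoggingFilter live in /http/pipeline and may differ).

// Sketch of a filter that assigns a unique id to each request and exposes it
// to the logging context (illustrative).
import jakarta.servlet.FilterChain   // javax.servlet on older Spring Boot versions
import jakarta.servlet.http.HttpServletRequest
import jakarta.servlet.http.HttpServletResponse
import org.slf4j.MDC
import org.springframework.web.filter.OncePerRequestFilter
import java.util.UUID

class RequestIdFilter : OncePerRequestFilter() {
    override fun doFilterInternal(
        request: HttpServletRequest,
        response: HttpServletResponse,
        filterChain: FilterChain
    ) {
        val requestId = UUID.randomUUID().toString()
        MDC.put("request-id", requestId)          // picked up by the logging layout
        try {
            response.addHeader("X-Request-Id", requestId)
            filterChain.doFilter(request, response)
        } finally {
            MDC.remove("request-id")
        }
    }
}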

For implementation details, please refer to the /http/pipeline folder.

Application Architecture

[Application architecture diagram]

The JVM application is organized as follows:

  • /domain - contains the domain classes of the application, which ensure data integrity throughout the application;
  • /http - contains the HTTP layer of the application. This layer is responsible for handling the HTTP requests and generating the responses, orchestrating the service layer;
  • /repository - contains the repository layer of the application, which provides implementations that can access the database;
  • /services - contains the services that manage the business logic of the application and orchestrate the repository layer;
  • /utils - contains utility classes used by the application in all layers, such as the Either class which serves as an abstract representation of an operation result (either a success or a failure);
  • GomokuApplication.kt - contains the Spring Boot application configuration and the entry point of the application.

The presentation layer is responsible for receiving requests from the client, translating them into the form the service layer expects, invoking the services, and returning responses to the client using the appropriate media type.

To represent the data in the requests, several models were created:

  • input models - used to represent the data in the requests.
  • output models - used to represent the data in the responses.

This layer is implemented using Spring Web MVC and Spring Validation for input models.

The presentation layer is organized as follows:

  • /controllers - contains the controllers that handle the HTTP requests and generate the responses;
  • /jackson - contains the Jackson configuration used by Spring, and several serializers and deserializers used by the application;
  • /media - contains the classes that represent the media types used in the application such as application/problem+json;
  • /pipeline - contains all filters, interceptors, argument resolvers and request processors used by the application before and after the request is handled by the controllers;
  • CustomExceptionHandler - contains exception handlers that generate the responses for the exceptions thrown by the application;
  • Uris - object that contains the URIs of the application used by the controllers;

The service layer is responsible for managing the business logic of the application, receiving the requests from the presentation layer, processing them, sending them to the data access layer and returning the responses to the presentation layer.

To represent the result of a service operation, the Either class was created. This class ensures both the success and failure cases are always represented, which then allows the presentation layer to generate the appropriate response based on the result of the service operation.
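
A minimal sketch of such a type is shown below; the project's actual definition (including the Success/Failure names used in the examples throughout this document) may differ in detail.

// Sketch of the Either result type (illustrative).
sealed class Either<out L, out R> {
    data class Left<out L>(val value: L) : Either<L, Nothing>()   // failure case
    data class Right<out R>(val value: R) : Either<Nothing, R>()  // success case
}

// Aliases matching the Success/Failure naming used in the examples below.
typealias Success<S> = Either.Right<S>
typealias Failure<F> = Either.Left<F>

Because the type is sealed, a when expression over an Either value forces both branches to be handled, which is what lets the presentation layer exhaustively map results to responses.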

The services do not have interfaces because multiple implementations of the same service are not expected. Each service receives a TransactionManager as a constructor dependency, which allows it to manage the transaction scope of each operation and the underlying data access.
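
The following sketch illustrates this pattern, assuming a simplified TransactionManager abstraction; the project's actual interfaces in /repository/transaction may differ.

// Sketch of a service using a TransactionManager (illustrative signatures).
interface UsersRepository {
    fun getUsernameById(id: Int): String?   // simplified return type for the sketch
}

interface Transaction {
    val usersRepository: UsersRepository
}

interface TransactionManager {
    // Runs the given block inside a transaction scope and returns its result.
    fun <R> run(block: (Transaction) -> R): R
}

class UsersService(private val transactionManager: TransactionManager) {
    fun getUsername(id: Int): String? =
        transactionManager.run { transaction ->
            // All data access inside this block shares the same transaction.
            transaction.usersRepository.getUsernameById(id)
        }
}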

The service layer is organized as follows:

Associated with each service package, there are one or more classes that represent the result of the service operation. Some are defined as typealiases to improve readability.

The data access layer is responsible for interacting with the database to persist and retrieve the data.

An interface was created for each entity of the application, each with an implementation that uses the JDBI fluent API to interact with the database. Only domain classes can be used in the operations of the data access layer as parameters or return types.
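
As an illustration, a repository implementation using the JDBI fluent API might look like the sketch below. The table and column names, the User domain class, and the assumption that the JDBI Kotlin plugin (or equivalent row mappers) is registered are all illustrative.

// Sketch of a JDBI-based repository implementation (illustrative).
import org.jdbi.v3.core.Handle
import org.jdbi.v3.core.kotlin.mapTo

data class User(val id: Int, val username: String, val email: String)

interface UsersRepository {
    fun getUserById(id: Int): User?
    fun storeUser(username: String, email: String, passwordValidation: String): Int
}

class JdbiUsersRepository(private val handle: Handle) : UsersRepository {

    override fun getUserById(id: Int): User? =
        handle.createQuery("select id, username, email from users where id = :id")
            .bind("id", id)
            .mapTo<User>()          // requires a registered mapper for User
            .singleOrNull()

    override fun storeUser(username: String, email: String, passwordValidation: String): Int =
        handle.createUpdate(
            "insert into users (username, email, password_validation) values (:username, :email, :pv)"
        )
            .bind("username", username)
            .bind("email", email)
            .bind("pv", passwordValidation)
            .executeAndReturnGeneratedKeys()
            .mapTo<Int>()
            .one()
}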

The data access layer is organized as follows:

  • /jdbi - contains the configuration, repository and transaction implementations, mappers and models that work with Jdbi directly;
  • /transaction - contains the transaction abstractions used by the service layer to manage the transaction scope of the service operation;
  • UsersRepository - exposes the operations related to the users;
  • GamesRepository - exposes the operations related to the games;

Data Representation

There are several types of data representation in the application:

  • Json Models - which are tied to the JSON representation of the data;
    • Input Models - used to represent the data in the HTTP requests;
    • Output Models - used to represent the data in the HTTP responses;
  • Jdbi Models - used to represent the data from the database using the Jdbi interface;
  • Domain Classes - used to represent the data in the application domain;

To ease the transformation between these models and the domain classes, a few interfaces were created (a sketch is shown below). We highlight:

  • JsonOutputModel - responsible for transforming the domain classes into the output models;
  • JdbiModel - responsible for transforming jdbi models into the domain classes;

The Json output models are tied to the Jackson library, while the Json input models use the Spring Validation library to validate the data in the requests.
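
A minimal sketch of these transformation interfaces and one example of each follows; the User-related classes shown here are illustrative, not the project's exact models.

// Sketch of the model-transformation interfaces (illustrative).
data class User(val id: Int, val username: String, val email: String)   // domain class

// A Jdbi model knows how to convert itself into a domain class.
interface JdbiModel<T> {
    fun toDomain(): T
}

data class JdbiUserModel(val id: Int, val username: String, val email: String) : JdbiModel<User> {
    override fun toDomain() = User(id, username, email)
}

// A Json output model is built from a domain class before Jackson serializes it.
interface JsonOutputModel<D, M> {
    fun fromDomain(domain: D): M
}

data class UserOutputModel(val id: Int, val username: String) {
    companion object : JsonOutputModel<User, UserOutputModel> {
        override fun fromDomain(domain: User) = UserOutputModel(domain.id, domain.username)
    }
}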

Validation

In the backend infrastructure, the validation of the data is done in three different layers:

  • Spring validation: The Spring validation is responsible for validating the data in the requests, such as the request body, path variables, query parameters, etc.

    Example:

    data class UserCreationRequest(
        @field:Size(min = 5, max = 30)
        val username: String,
        @field:Email
        val email: String,
        @field:Size(min = 8, max = 30)
        val password: String
    )
    
    // In the controller handler method, the request body is 
    // validated by Spring when @Valid is used.
    @PostMapping
    fun create(@Valid @RequestBody request: UserCreationRequest): ResponseEntity<*> {
        // (...)
    }
  • Domain Components: The domain components are responsible for validating the data in the domain classes, ensuring data integrity throughout the application.

    Example:

    // Component that represents the id of the user.
    class Id private constructor (val value: Int) : Component {
      companion object {
          // invoke operator allows access to the constructor
          // as if it was public.
          operator fun invoke(value: Int): Either<InvalidIdError, Id> = when {
              value <= 0 -> Failure(InvalidIdError.InvalidId)
              else -> Success(Id(value))
          }
      }
    }
    
    // In the http layer, before calling the service 
    // which is expecting an Id object, the object construction is validated.
    val id: Either<InvalidIdError, Id> = Id(2)
    when (id) {
        is Either.Right -> service.get(id = id.value)
        is Either.Left -> when(id.value) {
            is InvalidIdError.InvalidId -> // handle error
        }
    }
  • Database: The database is responsible for validating the data integrity of the data stored in the database. This is done by defining constraints on the database schema, such as primary keys, foreign keys, unique constraints, check constraints, etc.

Concurrency

To ensure the application's thread safety and data consistency, several measures have been implemented:

  • Database Transactions: Database transactions are employed to maintain data consistency and ensure that operations are atomic. This means that a series of database operations either all succeed or all fail, preventing data corruption or inconsistencies.

  • Immutable Objects: The domain classes are designed as immutable objects. Once these objects are created, they cannot be modified, which makes them safe to share between threads and helps keep operations consistent.

  • Isolation Levels: The database transactions are configured to use the SERIALIZABLE isolation level, which guarantees that the outcome is the same as if the transactions had executed one after the other, preventing concurrency problems such as dirty reads, non-repeatable reads and phantom reads.

These measures collectively contribute to the application's thread safety and data integrity, ensuring that operations are executed consistently and in a reliable way.
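
As an illustration, with JDBI the SERIALIZABLE isolation level can be requested per transaction; the TransactionManager-style wrapper below is a sketch, not the project's exact implementation.

// Sketch of running work in a SERIALIZABLE transaction with JDBI (illustrative).
import org.jdbi.v3.core.Handle
import org.jdbi.v3.core.Jdbi
import org.jdbi.v3.core.transaction.TransactionIsolationLevel

class JdbiTransactionManager(private val jdbi: Jdbi) {
    fun <R> run(block: (Handle) -> R): R =
        jdbi.inTransaction<R, Exception>(TransactionIsolationLevel.SERIALIZABLE) { handle ->
            // Everything executed through this handle commits or rolls back together.
            block(handle)
        }
}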

Error Handling

To handle errors/exceptions, we implemented the CustomExceptionHandler class, which is annotated with @ControllerAdvice and is responsible for intercepting the harder-to-detect exceptions that occur in the application and generating the appropriate response.

As mentioned before, the Either class is used to represent the result of a service operation, which allows the presentation layer to generate the appropriate response based on the result type of the service.

Example:

// service returns:
Either<UserCreationError, User>

// controller receives the result and evaluates it:
return when (result) {
    // Success
    is Either.Right -> ResponseEntity.status(HttpStatus.CREATED).body(result.value)
    // Failure
    is Either.Left -> when (result.value) {
        is UserCreationError.UsernameAlreadyExists -> Problem(
            type = "https://example.com/probs/user-already-exists",
            title = "User already exists",
            status = 400,
            detail = "The username provided already exists",
            instance = "https://example.com/users/"
        ).toResponse()
        // (...) other errors
    }
}

To represent an error in the presentation layer, the media type chosen was application/problem+json, which is a standard media type for representing errors in HTTP APIs. RFC 7807 defines the standard for this media type; a sketch of such a representation is shown after the field list below.

It consists of:

  • type - a URI reference that identifies the problem type;
  • title - a short, human-readable summary of the problem type;
  • status - the HTTP status code generated by the origin server for this occurrence of the problem;
  • detail - a human-readable explanation specific to this occurrence of the problem;
  • instance - a URI reference that identifies the specific occurrence of the problem;
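
A minimal sketch of how such a problem representation can be modeled (the project's actual Problem class and toResponse() helper may differ):

// Sketch of an RFC 7807 problem representation (illustrative).
import org.springframework.http.ResponseEntity

data class Problem(
    val type: String,
    val title: String,
    val status: Int,
    val detail: String? = null,
    val instance: String? = null
) {
    companion object {
        const val MEDIA_TYPE = "application/problem+json"
    }

    // Builds a response carrying the problem body with the problem+json media type.
    fun toResponse(): ResponseEntity<Problem> =
        ResponseEntity.status(status)
            .header("Content-Type", MEDIA_TYPE)
            .body(this)
}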

Docker Compose Solution

The solution is deployed using Docker Compose, which can be found in the docker-compose.yml file.

This solution was devised in response to the following considerations:

  • Increase in computational capacity (Horizontal Scaling)
  • Enhance service availability (High Availability)
  • Simplify service maintenance (Maintenance)
  • Retry failed requests when possible (Resilience)

Below is a visual representation:

[Docker Compose solution diagram]

All Dockerfiles were created with the following considerations:

  • reduce image size by using multi-stage builds and lightweight base images;
  • optimize image layers by grouping commands, while remaining mindful of layer caching and the order of the commands.

Db-tests Service

Note

This service provides a PostgreSQL database specifically tailored for testing purposes.

The database is pre-configured with user credentials and a default database schema.

Exposes a port for external access.

Dockerfile

Spring Service

Note

This service encapsulates a scalable JVM Spring application that provides the Gomoku Royale REST API.

Utilizes the database service for data storage.

Configured to run on a specific port for external access.

Dockerfile

Nginx Service

Note

Nginx service serving as a load balancer for the scalable spring-service JVM application.

Listens for incoming requests on an external port to which clients connect and forwards these requests using a defined strategy.

Dockerfile

Nginx's configuration establishes a web server listening on a designated port. It directs requests to a Spring service on a specified port for paths starting with /api/ and serves static files for other paths. Additionally, the configuration incorporates settings to handle upstream server failures and timeouts.

Configuration can be found in the nginx.conf file.

Ubuntu Service

Note

Ubuntu machine serving as a diagnostic and debugging tool within the Docker Compose environment.

Provides an interactive shell for direct interaction.

Includes 'dig' for DNS-related observations or testing.

Useful for monitoring and understanding the Docker Compose network and container interactions.

Dockerfile

Implementation Challenges

  • Database design: Finding the best way to represent the data in the database was a challenge. We had to consider the data integrity, the performance and the flexibility of the database, and it was no easy task to find the best balance between these aspects.
  • Abstracting code: We tried to abstract the code as much as possible, using interfaces, abstract classes and generics, to make the code more reusable and easier to maintain. But sometimes we lacked the knowledge to abstract the code in a better way.
  • The concurrency problem: Since the application will later run in a distributed environment, meaning multiple instances of the application will be running at the same time, we needed to ensure that the application is thread-safe. Finding the best way to do so was a challenge.

Further Improvements

  • Siren media type: The Siren media type is a hypermedia type that allows the client to navigate through the API and discover the available resources. We didn't have the time to implement this media type, but it would be a great improvement for the next phase of the project where we will implement the frontend.
  • More variants: We only implemented the standard variant of the game. Implementing more variants would make the application more interesting and give users more options to choose from.
  • Add more tests: We only implemented the basic tests for the application, but we could add more tests to improve the code coverage and ensure that the application is working as expected in all possible scenarios.
  • Support more operations: We plan to add more service operations to further enhance the application functionality. However, we will make sure that the new services are backward compatible with the existing ones, so we can add new features without breaking the existing ones.