
feat: [#280] Manage migrations table #664

Merged

hwbrzzl merged 7 commits into master from bowen/#280-5 on Oct 4, 2024

Conversation


@hwbrzzl hwbrzzl commented Oct 3, 2024

📑 Description

goravel/goravel#280

  1. Implement GetTables and HasTable;
  2. Implement the repository that manages the migrations table, which will be used when running migration files;

Summary by CodeRabbit

  • New Features

    • Introduced new methods for managing database tables, including CompileTables, GetTables, and HasTable.
    • Added error handling to existing methods like Create and DropIfExists.
    • New struct File added to enhance migration functionality.
    • Simplified the Blueprint and Schema structures for improved clarity.
  • Bug Fixes

    • Updated method signatures to improve error handling and return types across various interfaces.
  • Tests

    • Added a comprehensive test suite for repository functionality, ensuring robust testing of migration operations.
    • Restructured test suite for better handling of database queries.
  • Chores

    • Streamlined the initialization process for schema and migration, improving overall code clarity and maintainability.

✅ Checks

  • Added test cases for my code


coderabbitai bot commented Oct 3, 2024

Walkthrough

The pull request introduces several enhancements across multiple files in the migration package. Key changes include the addition of new methods and structs, updates to existing method signatures to include error handling, and the removal of certain fields to streamline functionality. Notably, a new CompileTables method is added to both the Grammar interface and the Postgres struct. Additionally, the Schema interface sees the introduction of methods for table management, while the Repository interface undergoes significant updates to its method signatures for better error handling.

Changes

File path and change summary:

  • contracts/database/migration/grammar.go: added method CompileTables(database string) string to the Grammar interface.
  • contracts/database/migration/repository.go: new struct File with fields ID uint, Migration string, Batch int; updated method signatures for CreateRepository, Delete, GetLast, etc., to include return types and error handling.
  • contracts/database/migration/schema.go: updated Create and DropIfExists to return an error; added GetTables() ([]Table, error) and HasTable(table string) bool; removed the Connection method from the Migration interface.
  • database/migration/blueprint.go: updated the NewBlueprint constructor to accept table instead of schema; removed the schema field from the Blueprint struct.
  • database/migration/grammars/postgres.go: added method CompileTables(database string) string to the Postgres struct.
  • database/migration/repository_test.go: introduced RepositoryTestSuite with setup methods and tests covering repository behavior.
  • database/migration/schema.go: removed the blueprint field from the Schema struct; updated the NewSchema constructor and added methods for table management.
  • database/migration/schema_test.go: restructured the test suite and updated imports; replaced TestDB with driverToTestQuery for database interactions.
  • database/service_provider.go: updated the Register method to call migration.NewSchema without the blueprint parameter.
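
For orientation, here is a minimal Go sketch of the new contract surfaces listed above. The identifiers are taken from the change summary; the Blueprint stub, the fields of Table, and the Create callback signature are assumptions for illustration, not the committed definitions.

package migration

// File represents one row of the migrations table.
type File struct {
	ID        uint
	Migration string
	Batch     int
}

// Table describes an entry returned by Schema.GetTables; only the Name
// field is referenced in this review, so other fields are omitted.
type Table struct {
	Name string
}

// Blueprint is elided here; see contracts/database/migration for the real interface.
type Blueprint interface{}

// Grammar gains a method for listing tables (other methods omitted).
type Grammar interface {
	CompileTables(database string) string
}

// Schema with the updated and newly added methods (other methods omitted).
type Schema interface {
	Create(table string, callback func(Blueprint)) error
	DropIfExists(table string) error
	GetTables() ([]Table, error)
	HasTable(table string) bool
}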

Possibly related PRs

Suggested reviewers

  • devhaozi
  • kkumar-gcc

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 1961a55 and bc30279.

📒 Files selected for processing (1)
  • database/service_provider.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • database/service_provider.go


@github-actions github-actions bot left a comment


⚠️ Performance Alert ⚠️

Possible performance regression was detected for a benchmark.
The benchmark result of this commit is worse than the previous result, exceeding the 1.50 threshold.

Benchmark suite | Current: f77315b | Previous: aeaf75d | Ratio
Benchmark_Panic | 0.001043 ns/op, 0 B/op, 0 allocs/op | 0.0006658 ns/op, 0 B/op, 0 allocs/op | 1.57
Benchmark_Panic - ns/op | 0.001043 ns/op | 0.0006658 ns/op | 1.57

This comment was automatically generated by workflow using github-action-benchmark.

@hwbrzzl hwbrzzl marked this pull request as ready for review October 3, 2024 09:52
@hwbrzzl hwbrzzl requested a review from a team as a code owner October 3, 2024 09:52
@hwbrzzl hwbrzzl closed this Oct 3, 2024
@hwbrzzl hwbrzzl reopened this Oct 3, 2024

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

🧹 Outside diff range and nitpick comments (4)
contracts/database/migration/grammar.go (1)

12-13: LGTM! Consider enhancing the comment for clarity.

The new CompileTables method is a great addition to the Grammar interface. It fits well with the existing methods and follows consistent naming conventions.

To improve clarity, consider expanding the comment slightly:

- // CompileTables Compile the query to determine the tables.
+ // CompileTables Compile the query to determine the tables in the specified database.
+ // Returns a string representing the compiled SQL query.

This enhancement provides more context about the method's behavior and the nature of the returned string.
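
For reference, a hedged sketch of what a Postgres implementation of CompileTables could look like; the actual SQL generated in database/migration/grammars/postgres.go may well differ.

// Sketch only: the real grammar may query a different catalog or columns.
func (r *Postgres) CompileTables(database string) string {
	// List user tables with their schema, excluding system catalogs.
	return "select c.relname as name, n.nspname as schema " +
		"from pg_class c join pg_namespace n on n.oid = c.relnamespace " +
		"where c.relkind = 'r' and n.nspname not in ('pg_catalog', 'information_schema') " +
		"order by c.relname"
}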

contracts/database/migration/repository.go (1)

10-29: Excellent updates to the Repository interface.

The changes to the Repository interface significantly improve its clarity and robustness:

  1. All methods now have explicit return types, including proper error handling.
  2. The use of the File struct in return types provides a clear structure for migration data.
  3. The Log method's batch parameter is now an int, which is more appropriate for representing batch numbers.

These updates align well with the PR objectives of improving migration management.

Consider adding godoc comments for the File struct and its fields to improve documentation.
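
For instance, the suggested godoc comments could look like this (a sketch, not the committed code):

// File represents a single row in the migrations table.
type File struct {
	// ID is the auto-incrementing primary key of the record.
	ID uint
	// Migration is the name of the migration file that was run.
	Migration string
	// Batch is the batch number the migration was run in.
	Batch int
}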

database/migration/blueprint.go (1)

Line range hint 91-94: LGTM: GetTableName method updated to use table field.

The GetTableName method has been correctly updated to use r.table instead of r.schema, which is consistent with the previous changes. This ensures that the table name is correctly generated.

There's a TODO comment about adding schema support for Postgres. Would you like me to create a GitHub issue to track this task for future implementation?
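
For context, the prefixed-name logic under discussion is roughly the following; this is a sketch, and the actual method body may differ.

// GetTableName returns the table name with the configured prefix applied.
// TODO: add schema support for Postgres, e.g. "schema.prefix_table".
func (r *Blueprint) GetTableName() string {
	return r.prefix + r.table
}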

database/migration/repository_test.go (1)

22-22: Improve the skip message for clarity

The message "Skipping tests of using docker" can be rephrased for better readability.

Consider updating the message as follows:

- t.Skip("Skipping tests of using docker")
+ t.Skip("Skipping tests that use Docker")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 804ff1f and b86a56c.

⛔ Files ignored due to path filters (5)
  • mocks/database/migration/Connection.go is excluded by !mocks/**
  • mocks/database/migration/Grammar.go is excluded by !mocks/**
  • mocks/database/migration/Migration.go is excluded by !mocks/**
  • mocks/database/migration/Repository.go is excluded by !mocks/**
  • mocks/database/migration/Schema.go is excluded by !mocks/**
📒 Files selected for processing (11)
  • contracts/database/migration/grammar.go (1 hunks)
  • contracts/database/migration/repository.go (1 hunks)
  • contracts/database/migration/schema.go (3 hunks)
  • database/migration/blueprint.go (1 hunks)
  • database/migration/grammars/postgres.go (1 hunks)
  • database/migration/repository.go (1 hunks)
  • database/migration/repository_test.go (1 hunks)
  • database/migration/respository.go (0 hunks)
  • database/migration/schema.go (2 hunks)
  • database/migration/schema_test.go (2 hunks)
  • database/service_provider.go (1 hunks)
💤 Files with no reviewable changes (1)
  • database/migration/respository.go
🔇 Additional comments (27)
contracts/database/migration/repository.go (2)

3-7: LGTM: Well-structured File struct for migration entries.

The File struct is well-designed with appropriate fields and types for representing migration entries. Its exported status allows for use in other packages, which is suitable for this context.


1-29: Well-structured and consistent file.

The overall file structure is clean and well-organized. The changes maintain consistency with the existing code style, and there are no apparent issues with imports or package-level declarations.

database/service_provider.go (2)

Line range hint 1-73: Consider documenting the migration system changes

The simplification of the schema initialization in this file appears to be part of a larger refactoring effort in the migration system. While this change looks good, it's important to ensure that it doesn't inadvertently break any existing functionality.

  1. Consider updating the package documentation or README to reflect these changes in the migration system, explaining the new approach to schema initialization.

  2. Ensure that all related tests have been updated to reflect this new initialization method.

  3. If this change is part of a breaking change or significant update, consider mentioning it in the CHANGELOG or upgrade guide for the framework.

Let's check for any related test files that might need updating:

#!/bin/bash
# Description: Find test files related to schema initialization

# Test: Search for test files that might be affected by this change
rg --type go -l 'TestNewSchema|TestSchemaInitialization' 

This will help ensure that all relevant tests are updated to match the new implementation.


39-39: Approve change with verification request

The simplification of the NewSchema function call looks good. It aligns with the PR objective of improving migration management. However, we should verify that this change is compatible with the migration package implementation.

  1. Please add a comment explaining the reason for this change, e.g.:
// Simplified schema initialization by passing config directly instead of creating a blueprint
return migration.NewSchema(config, connection, log, orm), nil
  2. Let's verify the NewSchema function signature in the migration package:

This will help ensure that the change is consistent with the migration package implementation.

✅ Verification successful

Change Verified Successfully

The updated NewSchema function call in database/service_provider.go aligns with the current implementation in the migration package. The removal of the blueprint parameter is consistent with the NewSchema function signature.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the NewSchema function signature in the migration package

# Test: Check the NewSchema function signature
ast-grep --lang go --pattern 'func NewSchema($config, $connection, $log, $orm) $_ {
  $$$
}'

Length of output: 216


Script:

#!/bin/bash
# Description: Verify the NewSchema function signature in the migration package

# Test: Check the NewSchema function signature
ast-grep --lang go --pattern 'func NewSchema($config, $connection, $log, $orm) $_ {
  $$$
}'

Length of output: 94


Script:

#!/bin/bash
# Description: List all NewSchema function definitions in the migration package

rg --type go 'func NewSchema' -A2

Length of output: 566

database/migration/blueprint.go (3)

27-31: LGTM: NewBlueprint function updated to use table parameter.

The NewBlueprint function has been correctly updated to use table instead of schema, which is consistent with the changes in the Blueprint struct. This modification ensures that the table field is properly initialized.

To ensure all calls to NewBlueprint have been updated, please run the following script:

#!/bin/bash
# Search for all calls to NewBlueprint in the codebase
rg --type go 'NewBlueprint\('

Review the results to confirm that all calls to NewBlueprint have been updated to provide the correct arguments (prefix and table).


Line range hint 1-265: Overall LGTM: Blueprint implementation updated to use table instead of schema.

The changes in this file are consistent and align well with the PR objectives. The Blueprint struct, NewBlueprint function, and GetTableName method have all been updated to use table instead of schema, which simplifies the table name handling process.

To ensure these changes don't introduce any regressions:

  1. Run the existing test suite, paying special attention to any tests related to migrations or table creation.
  2. Consider adding new test cases that specifically verify the behavior of the updated Blueprint implementation with various table names.
  3. Perform integration tests to ensure that the migration process works correctly with these changes.
#!/bin/bash
# Run the test suite
go test ./...

If all tests pass and the suggested verifications are successful, this change is ready for merge.


Line range hint 19-24: LGTM: Blueprint struct updated to use table instead of schema.

The change from schema to table in the Blueprint struct aligns with the PR objectives for managing the migrations table. This modification simplifies the table name handling process.

To ensure this change doesn't break existing functionality, please run the following script to check for any remaining references to the schema field in the codebase:

If the script returns any results, those occurrences should be reviewed and updated accordingly.

contracts/database/migration/schema.go (3)

9-9: ⚠️ Potential issue

Ensure implementations of DropIfExists are updated due to signature change

The DropIfExists method now returns an error, modifying the Schema interface and potentially breaking existing implementations. All implementations must be updated to include the error return value to satisfy the interface.

Run the following script to find implementations of DropIfExists that may not have been updated:

#!/bin/bash
# Description: Find implementations of the Schema interface with the old DropIfExists method signature.

# Search for structs that implement Schema interface
rg --type go 'type (\w+) struct' -o -r '$1' | while read -r struct_name; do
    # Find files where the DropIfExists method is defined for the struct
    rg --type go "func \(\*${struct_name}\) DropIfExists\(.*\)" -l | while read -r file; do
        # Check if the DropIfExists method returns an error
        if ! rg "func \(\*${struct_name}\) DropIfExists\(.*\) error" "$file" -q; then
            echo "DropIfExists method in $file may not return error"
        fi
    done
done

11-13: ⚠️ Potential issue

Adding methods to an interface can introduce breaking changes

The addition of GetTables() and HasTable(table string) bool methods to the Schema interface changes the interface's contract. Existing implementations will not satisfy the interface unless they implement these new methods, which can lead to compilation errors.

Consider whether these methods should be added to a new interface or ensure that all existing implementations are updated to include these methods.

Run the following script to identify implementations missing these methods:

#!/bin/bash
# Description: Find implementations of the Schema interface missing new methods.

# Search for structs that implement Schema interface
rg --type go 'type (\w+) struct' -o -r '$1' | while read -r struct_name; do
    file=$(rg --files-with-matches "type ${struct_name} struct")
    missing_methods=()
    # Check for GetTables method
    if ! rg "func \(\*${struct_name}\) GetTables\(\)" "$file" -q; then
        missing_methods+=("GetTables")
    fi
    # Check for HasTable method
    if ! rg "func \(\*${struct_name}\) HasTable\(table string\)" "$file" -q; then
        missing_methods+=("HasTable")
    fi
    if [ ${#missing_methods[@]} -ne 0 ]; then
        echo "Struct ${struct_name} in $file is missing methods: ${missing_methods[*]}"
    fi
done

5-5: ⚠️ Potential issue

Be cautious of breaking changes due to method signature alteration

Changing the Create method to return an error modifies the Schema interface and introduces a breaking change. Existing implementations of the Schema interface will no longer satisfy the interface unless they are updated to include the error return value. Please ensure that all implementations are updated accordingly, and consider the impact on any external users who may implement this interface.

Run the following script to identify implementations of the Schema interface that may not have updated the Create method signature:

database/migration/schema_test.go (7)

8-12: Imports are appropriate and necessary

The imported packages, including docker and env, are relevant for the new testing approach and support the test suite effectively.


18-18: Effective mapping with driverToTestQuery

Defining driverToTestQuery as a map of database.Driver to *gorm.TestQuery enhances flexibility and prepares the test suite for multiple database drivers.


38-40: Good structure for scalable testing

Iterating over driverToTestQuery with s.Run facilitates the addition of more database drivers in the future, making the test suite easily extensible.


45-49: Verify mock expectations for Query method

The expectation mockOrm.EXPECT().Query().Return(testQuery.Query()).Twice() assumes that the Query method will be called twice within schema.DropIfExists and schema.Create. Ensure that these methods indeed call Query twice to match the expectation; otherwise, the test may fail due to unmet expectations.


51-52: Appropriate assertion with HasTable

Using s.True(schema.HasTable(table)) correctly verifies that the table has been created successfully.


54-56: Consistency in setting mock expectations for Connection

You have set mockOrm.EXPECT().Connection(schema.connection).Return(mockOrm).Once(). Ensure that this matches the actual number of times Connection is called within schema.DropIfExists. If Connection is called more than once, adjust the expectation accordingly to prevent test failures.


64-68: Initialization of Schema with mock configurations

The initSchema function properly initializes the Schema using testQuery.MockConfig() and the driver string, ensuring that the schema is set up with the correct configurations for testing.

database/migration/repository.go (8)

8-12: Struct fields are appropriately defined.

The Repository struct fields are clearly defined and suitable for managing migrations.


14-20: Constructor function correctly initializes Repository.

The NewRepository function properly initializes the Repository struct with the provided parameters.


30-34: Proper error handling in Delete method.

The Delete method correctly handles errors by returning any encountered error.


40-47: Verify handling of zero batch numbers in GetLast method.

In the GetLast method, when there are no migrations, getLastBatchNumber() returns 0, causing the query to filter with batch = 0. Please verify if batch number 0 is valid in your context or adjust the logic to handle cases when no batches exist.

You can check if batches start from 1 and modify the method accordingly. For example:

 func (r *Repository) getLastBatchNumber() int {
 	var batch int
 	if err := r.query.Table(r.table).OrderByDesc("batch").Limit(1).Pluck("batch", &batch); err != nil {
-		return 0
+		return 0 // Or possibly return -1 to indicate no batches exist
 	}

 	return batch
 }

71-78: Consistent ordering in GetRan method.

The GetRan method orders migrations by batch and migration, which provides a consistent execution order.


80-85: Logging method is correctly implemented.

The Log method correctly inserts a new migration record into the migration table with the appropriate fields.


87-89: RepositoryExists method properly checks for table existence.

The RepositoryExists method accurately checks if the migration table exists using schema.HasTable.


49-56: ⚠️ Potential issue

Ensure GetMigrations retrieves complete batches as per steps.

The GetMigrations method uses Limit(steps), which may not retrieve all migrations in the last steps batches if multiple migrations exist per batch. Consider fetching all migrations from the last steps batches to ensure consistency.

You might adjust the query to:

 func (r *Repository) GetMigrations(steps int) ([]migration.File, error) {
 	var files []migration.File
+	lastBatchNumber := r.getLastBatchNumber()
+	targetBatchNumber := lastBatchNumber - steps + 1
 	if err := r.query.Table(r.table).
-		Where("batch >= 1").
+		Where("batch >= ?", targetBatchNumber).
 		OrderByDesc("batch").
 		OrderByDesc("migration").
-		Limit(steps).
 		Get(&files); err != nil {
 		return nil, err
 	}

 	return files, nil
 }

This adjustment ensures you retrieve all migrations from the last steps batches.

database/migration/schema.go (1)

99-101: Verify error propagation in the build method

The build method returns an error from blueprint.Build(...). Ensure that any errors returned by this method are appropriately handled by the calling functions to prevent unhandled exceptions during the migration process.
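
A hedged sketch of how a caller such as Create could propagate that error; createBlueprint and build follow the methods discussed elsewhere in this review, and the exact signatures are assumptions here.

// Sketch only: return the error from build instead of discarding it.
func (r *Schema) Create(table string, callback func(migration.Blueprint)) error {
	blueprint := r.createBlueprint(table)
	callback(blueprint)

	// Propagate any build failure so the migration runner can handle it.
	if err := r.build(blueprint); err != nil {
		return fmt.Errorf("failed to create table %s: %w", table, err)
	}

	return nil
}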

database/migration/repository_test.go (1)

71-76: Ensure repository existence check is reliable

The condition if !repository.RepositoryExists() might lead to flaky tests if the repository state is not properly cleaned up between tests. Ensure that the repository state is consistent at the start of each test.

Consider resetting the repository or ensuring isolation between tests. You can verify the repository state with the following script:

Resolved review threads:
  • database/migration/grammars/postgres.go (1)
  • contracts/database/migration/schema.go (3)
  • database/migration/schema_test.go (1)
  • database/migration/schema.go (2)
  • database/migration/repository_test.go (3, one outdated)
@hwbrzzl hwbrzzl closed this Oct 4, 2024
@hwbrzzl hwbrzzl reopened this Oct 4, 2024

codecov bot commented Oct 4, 2024

Codecov Report

Attention: Patch coverage is 73.19588% with 26 lines in your changes missing coverage. Please review.

Project coverage is 70.44%. Comparing base (a2c5e98) to head (bc30279).
Report is 1 commit behind head on master.

Files with missing lines | Patch % | Lines
database/migration/repository.go | 75.00% | 7 Missing and 7 partials ⚠️
database/migration/schema.go | 81.81% | 4 Missing and 2 partials ⚠️
database/migration/grammars/postgres.go | 0.00% | 5 Missing ⚠️
database/service_provider.go | 0.00% | 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #664      +/-   ##
==========================================
+ Coverage   70.27%   70.44%   +0.16%     
==========================================
  Files         187      187              
  Lines       11896    11944      +48     
==========================================
+ Hits         8360     8414      +54     
+ Misses       2963     2948      -15     
- Partials      573      582       +9     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (3)
database/migration/repository_test.go (3)

15-26: LGTM: Test suite structure is well-defined.

The RepositoryTestSuite and TestRepositoryTestSuite are correctly set up. Skipping tests on Windows due to Docker limitations is a good practice.

Consider adding a comment explaining why tests are skipped on Windows for better clarity:

if env.IsWindows() {
    t.Skip("Skipping tests that use Docker due to limitations on Windows")
}

64-129: LGTM: TestRecord method is comprehensive and well-implemented.

This test method thoroughly covers various repository operations, including logging migrations, retrieving batch numbers, and managing migration records. The structure allows for testing with multiple database drivers, and proper error checking is implemented throughout.

Consider adding a test case for the scenario where GetMigrations is called with a limit greater than the total number of migrations to ensure it handles this edge case correctly.
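
A rough sketch of that edge case; helper names such as initRepository and testQuery follow the suite described above, and the exact mock expectations required are assumptions.

// Sketch only: request more steps than there are recorded migrations.
repository, _ := s.initRepository(testQuery)

s.NoError(repository.CreateRepository())
s.NoError(repository.Log("20240101000001_create_users_table", 1))

files, err := repository.GetMigrations(10)
s.NoError(err)
s.Len(files, 1) // only the single recorded migration should be returned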


1-135: Overall, the test file is well-structured and provides good coverage.

The test suite effectively covers the main repository operations and uses appropriate mocking techniques to isolate the repository from the actual database. The structure allows for easy extension to multiple database drivers in the future.

To further improve the test file:

  1. Extend coverage to include more database drivers (MySQL, SQLite) as previously suggested.
  2. Implement error handling for the initSchema function in initRepository.
  3. Consider adding edge cases, such as testing GetMigrations with a limit exceeding the total number of migrations.
  4. Add comments explaining the purpose of each test method for better readability.

These improvements will enhance the robustness and maintainability of the test suite.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between b86a56c and 1961a55.

⛔ Files ignored due to path filters (5)
  • mocks/database/migration/Connection.go is excluded by !mocks/**
  • mocks/database/migration/Grammar.go is excluded by !mocks/**
  • mocks/database/migration/Migration.go is excluded by !mocks/**
  • mocks/database/migration/Repository.go is excluded by !mocks/**
  • mocks/database/migration/Schema.go is excluded by !mocks/**
📒 Files selected for processing (11)
  • contracts/database/migration/grammar.go (1 hunks)
  • contracts/database/migration/repository.go (1 hunks)
  • contracts/database/migration/schema.go (3 hunks)
  • database/migration/blueprint.go (1 hunks)
  • database/migration/grammars/postgres.go (1 hunks)
  • database/migration/repository.go (1 hunks)
  • database/migration/repository_test.go (1 hunks)
  • database/migration/respository.go (0 hunks)
  • database/migration/schema.go (2 hunks)
  • database/migration/schema_test.go (2 hunks)
  • database/service_provider.go (1 hunks)
💤 Files with no reviewable changes (1)
  • database/migration/respository.go
🚧 Files skipped from review as they are similar to previous changes (6)
  • contracts/database/migration/grammar.go
  • contracts/database/migration/repository.go
  • database/migration/blueprint.go
  • database/migration/grammars/postgres.go
  • database/migration/repository.go
  • database/service_provider.go
🔇 Additional comments (19)
contracts/database/migration/schema.go (5)

5-5: LGTM: Improved error handling in Create method

The addition of an error return to the Create method is a good improvement. It allows for better error handling and follows Go's idiomatic approach to dealing with potential failures in operations.


9-9: LGTM: Consistent error handling in DropIfExists method

The addition of an error return to the DropIfExists method is consistent with the Create method change and improves error handling capabilities. This change enhances the robustness of the schema operations.


10-11: LGTM: New GetTables method enhances schema functionality

The addition of the GetTables method is a valuable enhancement to the Schema interface. It provides a way to retrieve all tables in the database, returning a slice of Table structs along with an error for proper error handling. This method will be useful for migration management as mentioned in the PR objectives.


12-13: 🛠️ Refactor suggestion

Consider returning an error along with the boolean in HasTable

While the addition of the HasTable method is beneficial, it's worth reconsidering its signature. As mentioned in a previous review, database operations can fail due to various issues (e.g., connectivity problems). Returning an error along with the boolean can help handle such cases more gracefully.

Consider changing the method signature to HasTable(table string) (bool, error).


31-34: 🛠️ Refactor suggestion

Consider renaming Connection interface and Table struct to avoid potential naming conflicts

While the addition of the Connection interface and Table struct enhances the migration functionality, their names might lead to potential conflicts:

  1. The Connection interface might conflict with other packages or standard library types using the same name. Consider a more specific name like MigrationConnection or SchemaConnection.

  2. The Table struct might conflict with other types named Table in the codebase or imported packages. Consider renaming it to something more specific like SchemaTable or MigrationTable.

These renamings would improve clarity and reduce the risk of naming collisions.

Also applies to: 51-55

database/migration/schema.go (7)

23-23: LGTM: Schema struct modification

The removal of the blueprint field and addition of the prefix field appear to be part of a refactoring to improve the handling of table prefixes. This change aligns well with the overall modifications in the file.


26-37: LGTM: NewSchema function update

The changes to the NewSchema function are consistent with the Schema struct modifications. Retrieving the prefix from the configuration provides more flexibility and centralizes the prefix management.


42-42: LGTM: Connection method simplification

The Connection method has been simplified to align with the updated NewSchema function signature. This change maintains consistency across the codebase.


62-70: LGTM: Well-implemented GetTables method

The GetTables method is well-implemented with proper error handling. It uses the ORM to execute a raw SQL query generated by the grammar's CompileTables method, which allows for database-specific SQL generation. This approach is flexible and maintainable.


45-52: ⚠️ Potential issue

Implement error handling and rollback in the Create method

The Create method has been updated to return an error, which is a good practice. However, the TODO comment indicates that error handling and rollback functionality are not yet implemented. To ensure data integrity during migrations, it's crucial to implement proper error handling and transaction management.

Consider implementing the following:

  1. Wrap the operations in a transaction.
  2. If an error occurs during the build process, rollback the transaction.
  3. If successful, commit the transaction.

This will help prevent partial migrations in case of failures.

Would you like assistance in implementing the error handling and rollback logic?
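
One possible shape for this, assuming the ORM exposes a Transaction helper that rolls back when the callback returns an error; the Transaction signature and the buildWithQuery variant of build are assumptions, not the current goravel API.

// Sketch only: run the compiled statements inside a transaction so a
// failure rolls back any partially applied changes.
func (r *Schema) Create(table string, callback func(migration.Blueprint)) error {
	blueprint := r.createBlueprint(table)
	callback(blueprint)

	return r.orm.Transaction(func(tx orm.Transaction) error {
		// buildWithQuery is a hypothetical variant of build that runs
		// against the transaction instead of the default query.
		return r.buildWithQuery(tx, blueprint)
	})
}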


54-60: ⚠️ Potential issue

Implement error handling in the DropIfExists method

The DropIfExists method has been added with a structure similar to the Create method. However, the TODO comment indicates that error handling is not yet implemented. To maintain database consistency, it's important to implement proper error handling for this operation as well.

Consider implementing the following:

  1. Wrap the operation in a transaction.
  2. If an error occurs during the build process, rollback the transaction.
  3. If successful, commit the transaction.

This will help handle potential failures during the drop operation and maintain database consistency.

Would you like assistance in implementing the error handling logic?


71-87: 🛠️ Refactor suggestion

Consider propagating errors in the HasTable method

Currently, if GetTables() returns an error, the HasTable method logs the error and returns false. This approach may mask underlying issues and lead to incorrect assumptions about the existence of tables. Consider modifying the method to return an error along with the boolean result. This will allow calling functions to handle errors appropriately.

Apply this diff to adjust the method signature and error handling:

-func (r *Schema) HasTable(name string) bool {
+func (r *Schema) HasTable(name string) (bool, error) {
    blueprint := r.createBlueprint(name)
    tableName := blueprint.GetTableName()

    tables, err := r.GetTables()
    if err != nil {
-       r.log.Errorf("failed to get %s tables: %v", r.connection, err)
-       return false
+       return false, fmt.Errorf("failed to get %s tables: %w", r.connection, err)
    }

    for _, table := range tables {
        if table.Name == tableName {
-           return true
+           return true, nil
        }
    }

-   return false
+   return false, nil
}
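
If the signature change is adopted, callers would need to propagate the error as well; for example, a hedged sketch of how Repository.RepositoryExists might adapt (field names assumed from NewRepository):

// Sketch only: forward both the result and the error to the caller.
func (r *Repository) RepositoryExists() (bool, error) {
	return r.schema.HasTable(r.table)
}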
database/migration/repository_test.go (4)

1-13: LGTM: Package declaration and imports are appropriate.

The package name and imports are correctly set up for testing the migration repository. All imported packages are used in the test file.


28-34: LGTM: SetupTest method is correctly implemented for PostgreSQL.

The SetupTest method properly initializes a PostgreSQL Docker instance and test query.

As previously suggested, consider extending test coverage to include more database drivers like MySQL and SQLite to enhance the test suite's comprehensiveness.
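
A hedged sketch of what that extension could look like in SetupTest; the driver constants and the mysqlTestQuery/sqliteTestQuery variables are assumed names, with each TestQuery set up the same way as the Postgres one.

// Sketch only: register one TestQuery per supported driver so every
// test iterates over all of them.
s.driverToTestQuery = map[database.Driver]*gorm.TestQuery{
	database.DriverPostgres: postgresTestQuery,
	database.DriverMysql:    mysqlTestQuery,
	database.DriverSqlite:   sqliteTestQuery,
}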


36-62: LGTM: TestCreate_Delete_Exists method is comprehensive and well-structured.

This test method effectively covers the creation, existence check, and deletion operations of the repository. The use of mock expectations for the ORM ensures proper verification of interactions. The structure allows for easy extension to multiple database drivers in the future.


131-135: ⚠️ Potential issue

Handle potential errors from initSchema function.

The initRepository method correctly initializes a repository instance and its mock ORM. However, as previously noted, the error returned by initSchema is not being handled.

Modify the function to handle errors appropriately:

-func (s *RepositoryTestSuite) initRepository(testQuery *gorm.TestQuery) (*Repository, *mocksorm.Orm) {
-	schema, mockOrm := initSchema(s.T(), testQuery)
+func (s *RepositoryTestSuite) initRepository(testQuery *gorm.TestQuery) (*Repository, *mocksorm.Orm, error) {
+	schema, mockOrm, err := initSchema(s.T(), testQuery)
+	if err != nil {
+		return nil, nil, err
+	}

-	return NewRepository(testQuery.Query(), schema, "migrations"), mockOrm
+	return NewRepository(testQuery.Query(), schema, "migrations"), mockOrm, nil
}

Remember to update the calls to initRepository in the test methods to handle the returned error.
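
The corresponding call-site update in each test method would then look roughly like this:

// Sketch: check the error returned by the updated initRepository.
repository, mockOrm, err := s.initRepository(testQuery)
s.Require().NoError(err)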

database/migration/schema_test.go (3)

Line range hint 18-34: Well-structured refactoring of test initialization

The shift from driverToTestDB to driverToTestQuery in the SchemaSuite struct simplifies the test setup and enhances clarity. Initializing the test queries in SetupTest provides a more streamlined and maintainable approach.


38-59: Comprehensive testing of DropIfExists and HasTable methods

The updated TestDropIfExists method effectively tests the creation, existence, and deletion of tables using the schema object. Iterating over the driverToTestQuery map ensures that the tests are prepared to accommodate multiple database drivers.


64-68: Proper initialization of the Schema object

The initSchema function correctly initializes the Schema object with the appropriate configurations and mock ORM. Passing testQuery.MockConfig() and testQuery.Docker().Driver().String() ensures that the schema is set up with the correct settings for the tests.

@kkumar-gcc kkumar-gcc (Member) left a comment


LGTM

@hwbrzzl hwbrzzl merged commit 6b2ed2c into master Oct 4, 2024
12 checks passed
@hwbrzzl hwbrzzl deleted the bowen/#280-5 branch October 4, 2024 15:05