Job Summary
This is a senior-level Laravel Developer position at 9HDIGITAL, focusing on leading a legacy application rewrite project. The role requires strong technical expertise in Laravel, system architecture, and database design. The position involves independent work, client interaction, and full project ownership. The company emphasizes growth, flexibility, and technical innovation, with operations across Malta, Egypt, and Saudi Arabia.
How to Succeed
- Prepare case studies from your experience about:
- Legacy system modernization
- Leading technical projects independently
- Making architectural decisions
- Working with clients directly
- Be ready to discuss:
- Your approach to analyzing legacy systems
- How you make architectural decisions
- Your experience with Laravel best practices
- Real examples of database optimization
- How you handle project ownership
- Practice explaining technical concepts in simple terms, as client interaction is key.
Laravel Architecture and Modern Practices (7 Questions)
Critical for leading the legacy system rewrite, understanding Laravel's architecture and best practices ensures building scalable, maintainable applications
Laravel's Service Container is a powerful IoC (Inversion of Control) container that manages class dependencies and performs dependency injection. In the context of legacy application rewriting, it's particularly valuable because:
- Automatic Resolution:
public function __construct(
private UserRepository $userRepository,
private Logger $logger
) {
// Dependencies automatically resolved
}
- Service Binding:
// In ServiceProvider
public function register()
{
$this->app->bind(UserRepositoryInterface::class, function ($app) {
return new UserRepository($app->make(Database::class));
});
}
- Singleton Binding:
$this->app->singleton(CacheManager::class, function ($app) {
return new CacheManager($app);
});
This is especially important for the project as it allows for easy swapping of implementations when transitioning from legacy code to new components, maintaining loose coupling throughout the system.
For the legacy application rewrite, the Repository Pattern would be instrumental in separating data access logic from business logic:
- Interface Definition:
interface UserRepositoryInterface
{
public function find(int $id): ?User;
public function findByEmail(string $email): ?User;
public function save(User $user): void;
}
- Concrete Implementation:
class EloquentUserRepository implements UserRepositoryInterface
{
public function find(int $id): ?User
{
return User::find($id);
}
public function findByEmail(string $email): ?User
{
return User::where('email', $email)->first();
}
public function save(User $user): void
{
$user->save();
}
}
- Service Provider Registration:
$this->app->bind(UserRepositoryInterface::class, EloquentUserRepository::class);
Benefits particularly relevant to the project:
- Easier migration from legacy data sources
- Consistent data access layer
- Simplified unit testing
- Cleaner separation of concerns
Event broadcasting in Laravel is crucial for building real-time applications. In the context of a legacy system rewrite, it can be used to:
- Event Definition:
class UserRegistered implements ShouldBroadcast
{
    // Public so the property is serialized into the broadcast payload
    public function __construct(public User $user)
    {}

    public function broadcastOn(): array
    {
        return [new Channel('user-notifications')];
    }
}
- Broadcasting Configuration:
// Broadcasting with Redis for scalability
'redis' => [
'driver' => 'redis',
'connection' => 'default',
'queue' => 'broadcast',
],
Use cases relevant to the project:
- Real-time notifications for system updates
- Live data synchronization during migration
- User activity monitoring
- Async processing of legacy data imports
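The live-synchronization use case above could be handled by a queued event listener. A minimal sketch, assuming the broadcast event exposes a public `$user` property and a hypothetical `LegacyUserGateway` wraps writes to the old data store:

```php
use Illuminate\Contracts\Queue\ShouldQueue;

// Illustrative listener: mirrors new-system writes back to the legacy
// store while both systems run in parallel during the migration.
class SyncUserToLegacySystem implements ShouldQueue
{
    public function __construct(private LegacyUserGateway $legacy) {}

    public function handle(UserRegistered $event): void
    {
        // Write-through keeps the legacy system consistent until cutover
        $this->legacy->upsert($event->user->toArray());
    }
}
```

Because the listener implements `ShouldQueue`, the legacy write happens asynchronously and cannot slow down the new system's request path.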
For a legacy system rewrite, proper transaction handling is critical for data integrity:
- Basic Transaction:
DB::transaction(function () {
DB::table('users')->update(['active' => false]);
DB::table('logs')->insert(['message' => 'Users deactivated']);
});
- Manual Transaction Control:
try {
DB::beginTransaction();
// Complex legacy data migration logic
DB::commit();
} catch (\Exception $e) {
DB::rollBack();
Log::error('Migration failed: ' . $e->getMessage());
}
Best practices:
- Use automatic deadlock retry
- Implement proper error handling
- Keep transactions as short as possible
- Consider using database locks for critical operations
- Implement proper logging for transaction failures
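The deadlock-retry practice maps directly onto the second argument of `DB::transaction()`, which re-runs the closure when a deadlock is detected. A short sketch (table and column names are illustrative):

```php
use Illuminate\Support\Facades\DB;

// Retry the whole transaction up to 3 times on deadlock before failing.
DB::transaction(function () {
    // Pessimistic row lock for the duration of the transaction
    DB::table('accounts')->where('id', 1)->lockForUpdate()->first();
    DB::table('accounts')->where('id', 1)->decrement('balance', 100);
}, attempts: 3);
```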
Service providers are the backbone of Laravel's bootstrapping process. For a legacy system rewrite, they're essential for:
- Custom Service Provider:
class LegacyDataServiceProvider extends ServiceProvider
{
public function register()
{
$this->app->singleton(LegacyDataConnector::class);
}
public function boot()
{
$this->loadMigrationsFrom(__DIR__ . '/migrations');
$this->publishes([
__DIR__ . '/config' => config_path('legacy'),
]);
}
}
Key responsibilities:
- Database connection management
- Third-party service integration
- Configuration publishing
- Migration registration
- Event listener registration
For the legacy rewrite project, DDD would help organize complex business logic:
- Domain Layer Structure:
app/
Domain/
User/
Actions/
Events/
Models/
ValueObjects/
Repositories/
Billing/
Actions/
Services/
Aggregates/
- Example Domain Service:
class UserRegistrationService
{
public function register(UserDTO $userData): User
{
$user = $this->userFactory->create($userData);
$this->userRepository->save($user);
event(new UserRegistered($user));
return $user;
}
}
Key implementations:
- Bounded contexts for different business domains
- Rich domain models
- Value objects for immutable concepts
- Domain events for cross-boundary communication
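The "value objects for immutable concepts" point could be sketched like this (the `Email` class is a hypothetical example, using PHP 8.2+ readonly classes):

```php
final readonly class Email
{
    public function __construct(public string $value)
    {
        // Validate once at construction; the object can never become invalid
        if (!filter_var($value, FILTER_VALIDATE_EMAIL)) {
            throw new InvalidArgumentException("Invalid email: {$value}");
        }
    }

    public function equals(Email $other): bool
    {
        // Value objects compare by value, not identity
        return $this->value === $other->value;
    }
}
```

Passing an `Email` instead of a raw string guarantees validity everywhere in the domain layer, which is especially useful when legacy data of uneven quality flows into the new system.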
Middleware is crucial for request/response handling, especially important for legacy system integration:
- Custom Middleware Example:
class LegacyAuthenticationMiddleware
{
public function handle($request, Closure $next)
{
if ($this->isLegacyUser($request)) {
// Handle legacy authentication
return $this->handleLegacyAuth($request);
}
return $next($request);
}
}
Practical applications for the project:
- Authentication bridging between old and new systems
- Request logging for migration debugging
- API rate limiting
- Response transformation for legacy clients
- Cross-Origin Resource Sharing (CORS) handling
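The response-transformation case could look like the following sketch (middleware name and the `X-Legacy-Client` header are assumptions for illustration):

```php
use Closure;

// Hypothetical middleware that reshapes JSON responses into the envelope
// legacy API clients still expect, leaving modern clients untouched.
class LegacyResponseFormat
{
    public function handle($request, Closure $next)
    {
        $response = $next($request);

        if ($request->hasHeader('X-Legacy-Client')) {
            $response->setContent(json_encode([
                'status'  => $response->status(),
                'payload' => json_decode($response->getContent(), true),
            ]));
        }

        return $response;
    }
}
```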
Legacy System Migration Strategies (6 Questions)
Essential for the main project requirement of rewriting a legacy application while ensuring data integrity and business continuity
- System Documentation Review:
- Gather all existing documentation
- Review business rules and workflows
- Identify integration points
- Code Analysis:
- Use static code analysis tools
- Document system architecture
- Identify dependencies
- Stakeholder Workshops:
- Since the job requires running client workshops, I would organize sessions with:
- Business stakeholders to understand critical workflows
- End users to identify pain points
- Technical team to understand technical debt
- Database Analysis:
- Map data relationships
- Identify data quality issues
- Document database schema
- Create Migration Strategy Document:
- Risk assessment
- Timeline planning
- Resource allocation
- Success metrics
Given that the legacy system is Microsoft Access-based (mentioned in bonus skills), I would:
- Create ETL Pipeline:
- Build Laravel commands for extraction
- Use Laravel's Database migrations for schema creation
- Implement data transformers using Laravel's collections
- Implement Migration Process:
class DataMigrationService
{
public function migrate()
{
DB::beginTransaction();
try {
// Extract from legacy system
$legacyData = $this->extractFromLegacy();
// Transform data
$transformedData = $this->transformData($legacyData);
// Load into new system
$this->loadData($transformedData);
DB::commit();
} catch (\Exception $e) {
DB::rollBack();
Log::error('Migration failed: ' . $e->getMessage());
}
}
}
- Use Queue System:
- Break migration into chunks
- Use Laravel's queue system for processing
- Implement retry mechanisms
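The chunked, retryable queue approach above might be sketched as a job class (table and column names are illustrative, and a `legacy` database connection is assumed to be configured):

```php
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Support\Facades\DB;

// Illustrative job: each instance migrates one chunk of legacy rows,
// so a failure only re-runs that chunk, not the whole migration.
class MigrateLegacyChunk implements ShouldQueue
{
    use Queueable;

    public int $tries = 3; // automatic retry for failed chunks

    public function __construct(private int $offset, private int $size) {}

    public function handle(): void
    {
        DB::connection('legacy')
            ->table('customers')
            ->orderBy('id')
            ->skip($this->offset)
            ->take($this->size)
            ->get()
            ->each(fn ($row) => Customer::updateOrCreate(
                ['legacy_id' => $row->id], // idempotent: safe to re-run
                ['name' => $row->name, 'email' => $row->email]
            ));
    }
}
```

Keying on `legacy_id` with `updateOrCreate` makes each chunk idempotent, which is what makes the retry mechanism safe.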
- Parallel Operation Strategy:
- Keep legacy system running
- Implement synchronization mechanism
- Use event-driven architecture for real-time updates
- Staged Migration:
class StagedMigrationService
{
public function migrateInStages()
{
// Stage 1: Read-only migration
$this->migrateHistoricalData();
// Stage 2: Sync new operations
$this->enableSynchronization();
// Stage 3: Switchover
$this->performSwitchover();
}
}
- Feature Flags:
- Use Laravel's configuration system
- Implement gradual rollout
- Enable quick rollback capability
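A configuration-based feature flag could be as simple as the following sketch (`features.new_billing` and the service names are hypothetical):

```php
// config/features.php — flags driven by environment variables,
// so rollback is a config change rather than a deployment.
return [
    'new_billing' => env('FEATURE_NEW_BILLING', false),
];
```

At the call site, the flag routes between the legacy and rewritten code paths:

```php
$invoice = config('features.new_billing')
    ? $newBillingService->generate($order)     // rewritten path
    : $legacyBillingService->generate($order); // legacy fallback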
- Monitoring:
- Implement health checks
- Set up error tracking
- Monitor system performance
- Automated Testing:
class DataValidationTest extends TestCase
{
public function testDataIntegrity()
{
$legacyRecord = LegacyDB::first();
$migratedRecord = NewDB::where('legacy_id', $legacyRecord->id)->first();
$this->assertEquals(
$this->normalizeData($legacyRecord),
$this->normalizeData($migratedRecord)
);
}
}
- Validation Tools:
- Checksums for data integrity
- Database comparison tools
- Custom validation scripts
- Business Logic Validation:
- Unit tests for business rules
- Integration tests for workflows
- End-to-end testing scenarios
- Manual Verification:
- Sample data review
- Business stakeholder validation
- User acceptance testing
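The checksum idea above could be sketched as follows (pure illustration; `LegacyDB` follows the naming used in the test example earlier, and the excluded columns are assumptions):

```php
// Hypothetical integrity check: hash a normalized copy of a row on both
// sides of the migration and compare the digests.
function rowChecksum(array $row): string
{
    ksort($row);                          // field order must not affect the hash
    unset($row['id'], $row['legacy_id']); // exclude system-assigned keys
    return hash('sha256', json_encode($row));
}

$legacy   = (array) LegacyDB::table('orders')->find(42);
$migrated = (array) DB::table('orders')->where('legacy_id', 42)->first();

assert(rowChecksum($legacy) === rowChecksum($migrated));
```

Running this over a random sample of rows per table gives a cheap, automatable signal before the heavier business-logic validation.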
- Data Transformation Layer:
class DataTypeTransformer
{
public function transform($data, $sourceType, $targetType)
{
return match($sourceType) {
'access_datetime' => Carbon::createFromFormat('...', $data),
'access_currency' => $this->transformCurrency($data),
default => $this->defaultTransform($data)
};
}
}
- Type Casting Strategy:
- Define type mapping rules
- Implement validation rules
- Handle edge cases
- Data Cleaning:
- Normalize data formats
- Handle null values
- Clean invalid characters
- Error Handling:
- Log transformation errors
- Implement fallback values
- Report transformation issues
- Domain-Driven Design Approach:
- Identify bounded contexts
- Define domain models
- Create service boundaries
- Modular Architecture:
// Example of a modular service structure
namespace App\Modules\Billing;
class BillingService
{
public function __construct(
private PaymentGateway $gateway,
private InvoiceRepository $invoices
) {}
public function processPayment(Order $order)
{
// Implementation
}
}
- Implementation Strategy:
- Start with highest business value modules
- Use Laravel's service container for dependency management
- Implement SOLID principles
- Communication Pattern:
- Define clear interfaces
- Use events for loose coupling
- Implement API contracts
- Database Strategy:
- Consider separate databases per module
- Implement data consistency patterns
- Use Laravel's database transactions
Database Design and Optimization (7 Questions)
Crucial for implementing efficient database structures and ensuring optimal performance in the new system
For large datasets, I employ several optimization strategies:
- Proper indexing based on query patterns
- Using EXPLAIN to analyze query execution plans
- Implementing database partitioning for tables exceeding 1M rows
- Using compound indexes for frequently combined conditions
- Implementing query caching with Redis for read-heavy operations
- Breaking down complex queries into smaller, more manageable ones
- Using LIMIT with keyset (cursor) pagination instead of large OFFSET scans and SELECT *
- Considering denormalization where appropriate for read performance
Given the project's requirement to rewrite a legacy application, I would also analyze existing query patterns to identify optimization opportunities during the migration.
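In Laravel, keyset pagination over a large table can be expressed with `chunkById`, which pages on the primary key instead of an ever-growing OFFSET:

```php
// Keyset pagination: each chunk resumes from the last seen id, so cost
// stays constant regardless of how deep into the table we are.
User::where('active', true)
    ->chunkById(1000, function ($users) {
        foreach ($users as $user) {
            // process each row without loading the whole table into memory
        }
    });
```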
My approach to scalable database design includes:
- Proper normalization (typically to 3NF) while considering practical performance trade-offs
- Implementation of appropriate primary and foreign keys
- Using appropriate data types (e.g., UNSIGNED for positive numbers only)
- Implementing vertical partitioning for tables with many columns
- Using ENUM for fixed sets of values
- Creating intermediate tables for many-to-many relationships
- Using UUID for distributed systems
- Implementing appropriate indexes based on access patterns
Since this role involves rewriting a legacy application, I would analyze the existing schema for pain points and design the new schema to address current scalability issues while anticipating future growth.
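Several of these choices can be seen together in a single Laravel migration sketch (table and columns are illustrative):

```php
Schema::create('orders', function (Blueprint $table) {
    $table->id();                                 // UNSIGNED BIGINT primary key
    $table->foreignId('user_id')->constrained();  // indexed foreign key
    $table->unsignedInteger('total_cents');       // positive-only amount
    $table->enum('status', ['pending', 'paid', 'cancelled']);
    $table->timestamps();

    $table->index(['user_id', 'status']);         // matches a common WHERE pattern
});
```

Storing money as integer cents sidesteps floating-point rounding, and the composite index reflects the expected access pattern rather than being added speculatively.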
My indexing strategy includes:
- PRIMARY KEY indexing for unique record identification
- Creating composite indexes for frequently combined WHERE clauses
- Using covering indexes to optimize SELECT queries
- Implementing UNIQUE indexes for columns requiring uniqueness
- Adding indexes on foreign key columns
- Using partial indexes for filtered queries
- Regular index maintenance and cleanup
When migrating from a legacy system, I would:
- Analyze existing query patterns
- Review current index usage with tools like MySQL's sys schema
- Create new indexes based on actual query patterns
- Remove redundant or unused indexes
For production database migrations, I follow these steps:
- Use Laravel's migration system for version control of schema changes
- Implement zero-downtime migrations:
- Add new tables/columns without removing old ones
- Deploy code that can work with both old and new schemas
- Migrate data in small batches
- Remove old schema elements after confirming stability
- Always create backup points before migrations
- Test migrations in staging environment first
- Schedule migrations during low-traffic periods
- Use database transactions for atomic changes
- Implement rollback plans for each migration
Given the project's legacy system migration requirement, I would ensure minimal disruption to business operations during the transition.
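The expand/backfill/contract sequence described above might look like this sketch (column names are illustrative):

```php
// Step 1 (expand): add the new column as nullable so old code keeps working.
Schema::table('users', function (Blueprint $table) {
    $table->string('full_name')->nullable();
});

// Step 2 (backfill): migrate data in small batches to avoid long locks.
User::whereNull('full_name')->chunkById(500, function ($users) {
    foreach ($users as $user) {
        $user->update([
            'full_name' => trim("{$user->first_name} {$user->last_name}"),
        ]);
    }
});

// Step 3 (contract): drop first_name/last_name in a later migration,
// only after the code reading the new column has proven stable.
```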
My experience includes:
- Using EXPLAIN ANALYZE to identify query execution bottlenecks
- Implementing proper indexing strategies based on query patterns
- Optimizing JOIN operations:
- Using appropriate JOIN types (INNER, LEFT, etc.)
- Ordering JOINs for optimal execution
- Query rewriting:
- Converting subqueries to JOINs where beneficial
- Using EXISTS instead of IN for better performance
- Implementing caching strategies:
- Query cache
- Result cache using Redis
- Regular monitoring and optimization of slow queries
- Using tools like MySQL Workbench and Percona Toolkit for analysis
For the legacy application rewrite, I would apply these techniques to ensure the new system performs better than the original.
I implement data partitioning through:
- Range Partitioning:
- For date-based data
- For numerical ranges
- List Partitioning:
- For categorical data
- For geographic distribution
- Hash Partitioning:
- For even data distribution
- When natural partitioning keys aren't available
- Composite Partitioning:
- Combining different partitioning strategies
Implementation considerations:
- Analyzing data access patterns
- Determining partition size for optimal performance
- Regular maintenance of partitions
- Monitoring partition pruning effectiveness
This would be particularly relevant when migrating large datasets from the legacy system to ensure optimal performance in the new application.
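As a sketch, range partitioning by year could be applied with a raw statement (MySQL syntax; note MySQL requires the partition key to be part of every unique key, so this often means adjusting the primary key first):

```php
use Illuminate\Support\Facades\DB;

// Illustrative: partition an audit table by year of creation.
DB::statement(<<<'SQL'
    ALTER TABLE audit_logs
    PARTITION BY RANGE (YEAR(created_at)) (
        PARTITION p2022 VALUES LESS THAN (2023),
        PARTITION p2023 VALUES LESS THAN (2024),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    )
SQL);
```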
To maintain data integrity, I employ:
- ACID compliance through:
- Proper transaction management
- Using Laravel's DB::transaction()
- Implementing appropriate isolation levels
- Implementing database constraints:
- Foreign key constraints
- Unique constraints
- Check constraints
- Using database locks when necessary:
- Row-level locking
- Table-level locking
- Implementing retry mechanisms for deadlock situations
- Following the Unit of Work pattern
- Using optimistic locking for concurrent updates
- Regular data validation and consistency checks
For the legacy system migration, this would be crucial to ensure no data is lost or corrupted during the transition process.
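The optimistic-locking point above can be sketched with a version column (names are illustrative): the UPDATE succeeds only if no other process has bumped the version since the row was read.

```php
use Illuminate\Support\Facades\DB;

// Conditional update: zero affected rows means a concurrent writer won.
$affected = DB::table('invoices')
    ->where('id', $invoice->id)
    ->where('version', $invoice->version) // version we originally read
    ->update([
        'total'   => $newTotal,
        'version' => $invoice->version + 1,
    ]);

if ($affected === 0) {
    // Hypothetical exception type; caller can re-read and retry
    throw new ConcurrentUpdateException('Invoice was modified by another process.');
}
```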
Modern PHP Development and SOLID Principles (6 Questions)
Fundamental for writing maintainable, scalable code in the new system while following best practices
In Laravel applications, I implement SRP by ensuring each class has one specific purpose. For example, in a legacy system rewrite (as mentioned in the job description), I would:
- Create separate service classes for distinct business logic
- Use dedicated repository classes for database operations
- Implement form request classes for validation logic
- Utilize event classes for specific events
- Use job classes for queue processing
Example:
// Instead of putting everything in controller
class UserController {
public function store(StoreUserRequest $request, UserService $userService)
{
$user = $userService->createUser($request->validated());
event(new UserCreated($user));
return new UserResource($user);
}
}
This approach ensures better maintainability and easier testing of the legacy system rewrite.
Dependency injection is a technique where a class receives its dependencies through constructor injection, method injection, or property injection rather than creating them internally. In the context of Laravel's Service Container:
class LegacyDataMigrationService {
private UserRepository $userRepository;
private Logger $logger;
public function __construct(
UserRepository $userRepository,
Logger $logger
) {
$this->userRepository = $userRepository;
$this->logger = $logger;
}
}
This achieves loose coupling by:
- Making dependencies explicit
- Allowing easy dependency substitution
- Facilitating testing through mocking
- Enabling interface-based programming
- Making the code more maintainable and flexible
This is particularly important for the legacy application rewrite mentioned in the job description, as it allows for easier testing and future modifications.
ISP states that clients shouldn't be forced to depend on interfaces they don't use. In the context of a legacy system rewrite:
// Instead of one large interface
interface LegacyDataHandler {
public function read(): array;
public function write(array $data): void;
public function validate(): bool;
public function transform(): array;
}
// Break it into smaller, focused interfaces
interface DataReader {
public function read(): array;
}
interface DataWriter {
public function write(array $data): void;
}
interface DataValidator {
public function validate(): bool;
}
// Classes can implement only what they need
class LegacyFileReader implements DataReader {
public function read(): array {
// Implementation
}
}
This approach:
- Reduces coupling
- Increases cohesion
- Makes the system more maintainable
- Allows for better unit testing
- Facilitates gradual legacy system migration
In the context of a legacy system rewrite, I implemented OCP when designing a notification system:
abstract class NotificationChannel {
abstract public function send(string $message): void;
}
class EmailNotification extends NotificationChannel {
public function send(string $message): void {
// Email implementation
}
}
class SMSNotification extends NotificationChannel {
public function send(string $message): void {
// SMS implementation
}
}
// Adding new channel doesn't require modifying existing code
class SlackNotification extends NotificationChannel {
public function send(string $message): void {
// Slack implementation
}
}
This design:
- Allows adding new notification channels without modifying existing code
- Makes the system extensible
- Maintains backward compatibility
- Facilitates testing
- Supports gradual feature migration from legacy systems
PHP 8.3 offers several features that enhance code maintainability:
- json_validate() for Validating JSON Without Decoding:
if (json_validate($request->getContent())) {
    $payload = json_decode($request->getContent(), true);
}
- Readonly Anonymous Classes:
$logger = new readonly class('app.log') implements LoggerInterface {
    public function __construct(private string $logFile) {}
    // Implementation
};
- Typed Class Constants:
class Config {
public const string DEFAULT_LOCALE = 'en';
public const int CACHE_DURATION = 3600;
}
- Override Attribute:
class LegacyUserRepository extends BaseRepository {
#[Override]
public function find(int $id): ?User {
// Implementation
}
}
These features help create more robust and self-documenting code, which is crucial for maintaining a large-scale application like the one mentioned in the job description.
For the legacy system rewrite, I would implement the Repository pattern as follows:
interface UserRepositoryInterface {
public function find(int $id): ?User;
public function create(array $data): User;
public function update(int $id, array $data): bool;
public function delete(int $id): bool;
}
class EloquentUserRepository implements UserRepositoryInterface {
public function find(int $id): ?User {
return User::find($id);
}
// Other implementations
}
// For legacy system
class LegacyUserRepository implements UserRepositoryInterface {
public function find(int $id): ?User {
// Legacy system specific implementation
}
// Other implementations
}
// In service provider
class RepositoryServiceProvider extends ServiceProvider {
public function register(): void {
$this->app->bind(UserRepositoryInterface::class,
EloquentUserRepository::class);
}
}
This approach:
- Provides a consistent interface for data access
- Facilitates switching between different implementations
- Makes testing easier through dependency injection
- Enables gradual migration from legacy systems
- Maintains clean separation of concerns
DevOps and Deployment Strategies (6 Questions)
Important for ensuring smooth deployment and maintenance of the application in production
For this legacy application rewrite project, I would implement a robust CI/CD pipeline using GitHub Actions or GitLab CI, with the following stages:
- Build Stage:
- Composer install with optimization
- NPM install and asset compilation
- Generate application key
- Cache configuration
- Test Stage:
- Static code analysis (PHPStan)
- PHP CS Fixer for code style
- Unit tests
- Feature tests
- Integration tests
- Deploy Stage:
- Environment validation
- Database backup
- Artisan down
- Deploy new code
- Run migrations
- Cache clearing
- Artisan up
The pipeline would integrate with the specified cloud platforms (AWS/Azure/Google Cloud) as mentioned in the bonus skills, ensuring automated deployments while maintaining code quality and testing standards.
For Laravel applications, I create a Docker environment with multiple containers:
- Application container:
- PHP 8.3 with required extensions
- Composer
- Laravel-specific configurations
- Web Server container (Nginx)
- Database container (MySQL/MariaDB as per project requirements)
- Redis container for caching
- Queue worker container for background jobs
The setup includes:
- docker-compose.yml for local development
- Dockerfile optimized for production
- Volume mappings for persistent data
- Network configuration for container communication
This containerized approach ensures consistent environments across development, staging, and production, which is crucial for the legacy application rewrite project mentioned in the job description.
For handling database migrations in a CI/CD pipeline, especially for a legacy system rewrite, I implement the following strategy:
- Pre-deployment:
- Automated backup of the current database
- Validation of migration files
- Dry-run of migrations in a staging environment
- During deployment:
- Run migrations with --force flag
- Transaction wrapping for rollback capability
- Schema validation checks
- Post-deployment:
- Verification of migration success
- Data integrity checks
- Performance validation of new schema
Additional safety measures:
- Maintaining backward compatibility
- Using database transactions
- Implementing rollback procedures
- Version control for migrations
This approach ensures safe database schema updates while maintaining data integrity during the transition from the legacy system.
For zero-downtime deployments, especially crucial when rewriting a business-critical application, I implement:
- Blue-Green Deployment:
- Maintain two identical production environments
- Deploy to inactive environment
- Switch traffic after validation
- Keep previous version for quick rollback
- Implementation Steps:
- Load balancer configuration
- Database replication setup
- Session handling strategy
- Cache warming
- Health checks
- Deployment Process:
- Deploy new code to inactive environment
- Run migrations
- Verify application health
- Gradually shift traffic (canary deployment)
- Monitor for errors
- Complete cutover when stable
This approach ensures business continuity and minimizes risk during the legacy system transition.
For secure environment management across deployment stages:
- Secret Management:
- Use AWS Secrets Manager/Azure Key Vault
- Implement encryption at rest
- Rotate credentials regularly
- Use different keys per environment
- Environment Variables:
- .env files for local development
- CI/CD platform secret storage
- Environment-specific configurations
- Validation of required variables
- Access Control:
- Role-based access to secrets
- Audit logging
- Separate credentials per environment
- Automated secret rotation
This approach ensures secure handling of sensitive data while maintaining flexibility across development, staging, and production environments, which is essential for the legacy application rewrite project.
For production monitoring, I implement a comprehensive strategy:
- Application Monitoring:
- Laravel Telescope for development
- New Relic or Datadog for production
- Custom logging for business-critical operations
- Exception tracking with Sentry
- Performance Metrics:
- Response time monitoring
- Database query performance
- Cache hit rates
- Queue processing times
- Memory usage
- CPU utilization
- Alerting System:
- Define critical thresholds
- Alert routing based on severity
- Integration with team communication tools
- Automated incident response
- Business Metrics:
- User activity monitoring
- Transaction success rates
- API endpoint usage
- Custom metrics for business KPIs
This comprehensive monitoring approach ensures high availability and performance of the rewritten application, meeting the project's business-critical requirements.
Testing and Quality Assurance (6 Questions)
Essential for ensuring the reliability and maintainability of the new system
In Laravel applications, I follow these key principles for unit testing:
- Use PHPUnit with Laravel's TestCase class
- Focus on testing individual components in isolation
- Implement test doubles (mocks/stubs) for dependencies
- Follow the AAA (Arrange-Act-Assert) pattern
- Use data providers for testing multiple scenarios
- For a legacy application rewrite specifically, I create parallel tests to ensure both systems produce identical results during migration
Example:
public function test_user_service_creates_user()
{
// Arrange
$userData = ['name' => 'Test User', 'email' => 'test@example.com'];
$userRepositoryMock = $this->mock(UserRepository::class);
$userRepositoryMock->shouldReceive('create')
    ->once()
    ->with($userData)
    ->andReturn(new User($userData));
// Act
$userService = new UserService($userRepositoryMock);
$result = $userService->createUser($userData);
// Assert
$this->assertInstanceOf(User::class, $result);
}
For feature testing, especially in a legacy system rewrite context:
- Focus on end-to-end business workflows
- Test critical user journeys first
- Use Laravel's HTTP testing methods for API endpoints
- Implement database transactions for test isolation
- Create parallel tests comparing legacy and new system behaviors
Example:
public function test_user_registration_workflow()
{
// Simulate user registration
$response = $this->postJson('/api/register', [
    'name' => 'Test User',
    'email' => 'test@example.com',
    'password' => 'password'
]);
$response->assertStatus(201)
    ->assertJsonStructure(['id', 'name', 'email']);
// Verify database state
$this->assertDatabaseHas('users', [
    'email' => 'test@example.com'
]);
}
For mocking external services, I use:
- Laravel's built-in mocking facilities
- Facade mocks for Laravel services
- Mock HTTP responses using Laravel's HTTP facades
- Service container bindings for dependency injection
Example:
public function test_external_api_integration()
{
Http::fake([
'api.external-service.com/*' => Http::response([
'status' => 'success'
], 200)
]);
// Test your service
$response = $this->get('/api/external-data');
$response->assertStatus(200);
Http::assertSent(function ($request) {
    return str_contains($request->url(), 'api.external-service.com');
});
}
My TDD approach, particularly for legacy system rewrites:
- Write tests that mirror existing functionality
- Follow Red-Green-Refactor cycle
- Start with simple test cases
- Gradually add complexity
- Focus on business requirements
- Maintain test documentation
Process:
// 1. Write failing test
public function test_legacy_data_migration()
{
$legacyData = $this->getLegacyData();
$migrationService = new DataMigrationService();
$result = $migrationService->migrate($legacyData);
$this->assertEquals(
$this->getExpectedStructure(),
$result->getStructure()
);
}
// 2. Implement functionality
// 3. Refactor while keeping tests green
To maintain high test coverage:
- Use PHPUnit's code coverage reports
- Set minimum coverage thresholds in CI/CD pipeline
- Focus on critical business logic paths
- Implement both unit and feature tests
- Use mutation testing tools like Infection
- Regular code review with coverage analysis
Configuration example:
<phpunit>
<coverage>
<include>
<directory suffix=".php">app/</directory>
</include>
<report>
<html outputDirectory="tests/coverage"/>
<clover outputFile="tests/coverage.xml"/>
</report>
</coverage>
</phpunit>
For testing complex database interactions:
- Use Laravel's DatabaseTransactions trait
- Create specific test databases
- Implement database seeders for test data
- Use factories for complex data structures
- Test database triggers and stored procedures
- Verify data integrity after migrations
Example:
public function test_complex_transaction()
{
// Arrange
$this->seed(TestDatabaseSeeder::class);
// Act
$order = DB::transaction(function () {
    $user = User::factory()->create();
    $order = Order::factory()
        ->for($user)
        ->has(OrderItem::factory()->count(3))
        ->create();
    $this->orderService->process($order);
    return $order; // return so the order is visible outside the closure
});
// Assert
$this->assertDatabaseHas('orders', [
'id' => $order->id,
'status' => 'processed'
]);
$this->assertDatabaseCount('order_items', 3);
}