Workspace - Command-Line Tool Suite

Workspace is a tool suite for file operations, version management, and development workflow automation. It includes refac (string replacement), scrap (a local trash folder), unscrap (file restoration), and st8 (automatic versioning).

Testing & Quality Assurance

Workspace maintains reliability and safety through an extensive automated test suite and strict quality-assurance practices.

Test Suite Overview

Test Suite Breakdown

| Test Suite | Tests | Focus Area | Critical Scenarios |
|------------|------:|------------|--------------------|
| integration_tests.rs | 15 | End-to-end workflows | Tool integration, real-world usage |
| refac_concurrency_tests.rs | 9 | Multi-threading safety | Race conditions, parallel processing |
| refac_edge_cases_tests.rs | 14 | Complex scenarios | Deep nesting, special characters, Unicode |
| refac_empty_directory_tests.rs | 8 | Directory handling | Empty dirs, permission issues, cleanup |
| refac_encoding_tests.rs | 7 | Character encoding | UTF-8, BOM, invalid encodings |
| scrap_advanced_integration_tests.rs | 21 | Advanced workflows | Archive, search, metadata management |
| scrap_integration_tests.rs | 18 | Core functionality | Basic operations, git integration |
| st8_integration_tests.rs | 25 | Version management | Git hooks, multi-format support |

Safety Features

Pre-Operation Validation

Every operation undergoes validation before execution:

// Illustrative two-phase flow: validation runs first, and nothing is
// modified unless every planned operation passes its checks.
validate_all_operations()?;        // Dry-run check of every operation
execute_atomically(&operations);   // Only reached if validation succeeded
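As a minimal sketch of this two-phase pattern (the types and function names here are hypothetical, not Workspace's actual API): validation inspects the whole plan before anything runs, so a bad operation is rejected with the filesystem untouched.

```rust
#[derive(Debug, PartialEq)]
enum PlanError {
    // Two operations would produce the same target path.
    TargetCollision(String),
}

struct Rename {
    from: String,
    to: String,
}

// Phase 1: check every operation before anything is touched.
fn validate(ops: &[Rename]) -> Result<(), PlanError> {
    for (i, op) in ops.iter().enumerate() {
        if ops.iter().skip(i + 1).any(|other| other.to == op.to) {
            return Err(PlanError::TargetCollision(op.to.clone()));
        }
    }
    Ok(())
}

// Phase 2: only reached if validation passed; here it just describes the work.
fn execute(ops: &[Rename]) -> Vec<String> {
    ops.iter().map(|op| format!("{} -> {}", op.from, op.to)).collect()
}

fn run(ops: &[Rename]) -> Result<Vec<String>, PlanError> {
    validate(ops)?; // nothing has been modified yet at this point
    Ok(execute(ops))
}
```

A plan with two operations targeting the same path fails validation as a whole, and `execute` never runs.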

Validation Scope:

Race Condition Prevention

Proper operation ordering eliminates race conditions:

// Files are processed before directories, so renaming a directory cannot
// invalidate the path of a file that is still waiting to be processed.
use std::cmp::Ordering;

operations.sort_by(|a, b| {
    match (a.is_file(), b.is_file()) {
        (true, false) => Ordering::Less,    // Files first
        (false, true) => Ordering::Greater, // Then directories
        _ => a.depth().cmp(&b.depth()).reverse(), // Deepest first
    }
});
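The comparator can be exercised on its own. This sketch uses a simplified stand-in for the real operation type (the `PlannedOp` struct and its fields are illustrative, not Workspace's internal types):

```rust
use std::cmp::Ordering;

// Simplified stand-in for a planned rename operation.
struct PlannedOp {
    path: &'static str,
    is_file: bool,
    depth: usize,
}

// Files sort before directories; within a kind, deeper paths sort first.
fn order_for_safety(ops: &mut [PlannedOp]) {
    ops.sort_by(|a, b| match (a.is_file, b.is_file) {
        (true, false) => Ordering::Less,    // Files first
        (false, true) => Ordering::Greater, // Then directories
        _ => b.depth.cmp(&a.depth),         // Deepest first
    });
}
```

With this ordering, `a/b/c.txt` is handled before `a/b`, and `a/b` before `a`, so no rename invalidates a pending path.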

Race Condition Tests:

Encoding Safety

Character encoding validation:

use std::fs;
use std::path::{Path, PathBuf};

// Encoding validation prevents crashes during operations.
// Assumes EncodingError implements From<std::io::Error> for the `?` below.
fn validate_file_encoding(path: &Path) -> Result<(), EncodingError> {
    let content = fs::read(path)?;
    match std::str::from_utf8(&content) {
        Ok(_) => Ok(()),
        // Store an owned PathBuf so the error can outlive the borrowed path.
        Err(e) => Err(EncodingError::InvalidUtf8 { path: path.to_path_buf(), error: e }),
    }
}
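A quick way to see the UTF-8 check in action (the file names below are throwaway probes under the system temp directory, not anything Workspace creates):

```rust
use std::{env, fs};

fn main() -> std::io::Result<()> {
    let dir = env::temp_dir();
    let good = dir.join("workspace_enc_good.txt");
    let bad = dir.join("workspace_enc_bad.bin");

    fs::write(&good, "plain ASCII and UTF-8: Γ©")?;
    // 0xC3 opens a two-byte sequence, but 0x28 is not a continuation byte.
    fs::write(&bad, [0xC3, 0x28])?;

    assert!(std::str::from_utf8(&fs::read(&good)?).is_ok());
    assert!(std::str::from_utf8(&fs::read(&bad)?).is_err());

    fs::remove_file(good)?;
    fs::remove_file(bad)?;
    Ok(())
}
```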

Encoding Test Coverage:

Edge Case Testing

🌊 Deep Nesting Scenarios

Testing extreme directory structures:

# Test creates 1000+ level deep directories
test_maximum_directory_depth_limits()
test_very_long_file_and_directory_names()
test_complex_circular_directory_reference_patterns()
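The deep-nesting scenario can be reproduced with a few lines of standard library code; this probe (illustrative, not Workspace's test code) builds an N-level directory chain and returns the leaf:

```rust
use std::{env, fs, path::PathBuf};

// Build an N-level directory chain under the system temp directory and
// return the deepest path. Depth limits show up as an Err from create_dir_all.
fn make_deep_tree(levels: usize) -> std::io::Result<PathBuf> {
    let mut path = env::temp_dir().join("workspace_depth_probe");
    for i in 0..levels {
        path.push(format!("d{i}"));
    }
    fs::create_dir_all(&path)?;
    Ok(path)
}
```

Real depth-limit tests push `levels` until the platform refuses, then verify the tool still handles the tree it did create.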

πŸ”’ Permission and Security Testing

Comprehensive permission scenario coverage:

# Permission edge cases
test_readonly_files_and_directories()
test_directory_rename_with_permission_issues()
test_filesystem_stress_concurrent_operations()
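Read-only scenarios like these hinge on flipping and verifying the read-only bit; a helper along these lines (hypothetical, not Workspace's API) is enough to set up such a test:

```rust
use std::{fs, io, path::Path};

// Set or clear a path's read-only flag and report the resulting state.
// On Unix this toggles the owner/group/other write bits; on Windows it
// toggles the read-only file attribute.
fn set_readonly(path: &Path, readonly: bool) -> io::Result<bool> {
    let mut perms = fs::metadata(path)?.permissions();
    perms.set_readonly(readonly);
    fs::set_permissions(path, perms)?;
    Ok(fs::metadata(path)?.permissions().readonly())
}
```

A permission test then makes a file read-only, runs the operation, and asserts it either succeeds safely or fails with a clear error, restoring permissions afterwards.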

🌐 Cross-Platform Compatibility

Platform-specific behavior validation:

# Windows-specific tests
test_case_insensitive_filesystem_handling()
test_windows_path_length_limits()
test_reserved_filename_handling()

# Unix-specific tests
test_symlink_handling()
test_permission_bit_preservation()
test_hidden_file_processing()
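As one concrete example of what the reserved-filename test has to cover: on Windows, CON, PRN, AUX, NUL, and COM1-COM9/LPT1-LPT9 are reserved regardless of case or extension. A checker along these lines (a sketch, not Workspace's implementation) captures that rule:

```rust
// Sketch of a Windows reserved-filename check (hypothetical helper).
fn is_reserved_windows_name(name: &str) -> bool {
    // Only the part before the first dot matters: "CON.txt" is still reserved.
    let stem = name.split('.').next().unwrap_or(name).to_ascii_uppercase();
    match stem.as_str() {
        "CON" | "PRN" | "AUX" | "NUL" => true,
        _ => {
            // COM1..COM9 and LPT1..LPT9 (but not COM0 or COM10).
            stem.len() == 4
                && (stem.starts_with("COM") || stem.starts_with("LPT"))
                && stem[3..].chars().all(|c| ('1'..='9').contains(&c))
        }
    }
}
```

The cross-platform tests run this kind of check on every target name a rename would produce, so a Linux-built plan cannot create files a Windows checkout could never hold.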

🧡 Concurrency and Performance

Multi-threading safety validation:

# Concurrency stress tests
test_high_thread_count_processing()
test_concurrent_file_access_safety()
test_thread_pool_exhaustion_handling()
test_interrupt_safety_simulation()
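The core pattern behind these stress tests can be sketched with the standard library alone (names here are illustrative): many worker threads report results through a shared, mutex-guarded collector, and the test asserts that no result is lost or duplicated.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `threads` workers that each push `per_thread` results into a shared
// vector; return the final count, which must equal threads * per_thread.
fn concurrent_collect(threads: usize, per_thread: usize) -> usize {
    let results = Arc::new(Mutex::new(Vec::new()));
    let handles: Vec<_> = (0..threads)
        .map(|t| {
            let results = Arc::clone(&results);
            thread::spawn(move || {
                for i in 0..per_thread {
                    results.lock().unwrap().push((t, i));
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = results.lock().unwrap().len();
    n
}
```

Because `Mutex` serializes access and `join` waits for every worker, a lost update would show up immediately as a short count.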

Quality Standards

βœ… Zero-Warning Policy

All code compiles without warnings:

cargo build --release  # Must produce zero warnings
cargo clippy           # Lint checks must pass
cargo fmt --check      # Code formatting enforced

πŸ”’ Memory Safety

Rust’s ownership and borrowing model provides memory-safety guarantees: safe code cannot produce use-after-free, double frees, or data races.

⚑ Performance Validation

Performance testing ensures scalability:

#[test]
fn test_large_dataset_performance() {
    // Test with 1M+ files
    let large_dataset = create_test_files(1_000_000);
    let start = Instant::now();
    refac_operation(&large_dataset);
    assert!(start.elapsed() < Duration::from_secs(60));
}

Error Handling and Recovery

🚨 Comprehensive Error Scenarios

A broad range of failure conditions is tested:

// Error scenario testing
test_insufficient_disk_space()
test_network_filesystem_failures()
test_permission_changes_during_operation()
test_file_locks_and_concurrent_access()
test_system_resource_exhaustion()

πŸ”„ Recovery and Rollback

Atomic operation guarantees:

// Operations are atomic - either all succeed or all fail
match execute_operations(&validated_ops) {
    Ok(_) => println!("All operations completed successfully"),
    Err(e) => {
        rollback_partial_changes();
        return Err(e);
    }
}
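The same idea in a self-contained form (a toy model, not Workspace's code): completed steps are tracked against a checkpoint, so a failure undoes exactly what the batch did and nothing more.

```rust
// Toy rollback model: `state` stands in for the filesystem, each op appends
// to it, and a failing op restores the pre-batch checkpoint before returning.
fn apply_batch(state: &mut Vec<i32>, ops: &[i32]) -> Result<(), &'static str> {
    let checkpoint = state.len();
    for &op in ops {
        if op < 0 {
            state.truncate(checkpoint); // undo everything this batch added
            return Err("operation failed; batch rolled back");
        }
        state.push(op);
    }
    Ok(())
}
```

A batch that fails halfway leaves `state` exactly as it was before the batch began, which is the observable property the rollback tests assert.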

Test Execution and CI/CD

πŸƒ Running Tests Locally

# Run all tests
cargo test

# Run specific test suite
cargo test --test integration_tests
cargo test --test refac_concurrency_tests

# Run tests with verbose output
cargo test -- --nocapture

# Run performance tests
cargo test --release test_large_dataset

πŸ”„ Continuous Integration

Automated testing pipeline:

  1. Code Quality Checks
    • Compilation without warnings
    • Clippy lint validation
    • Code formatting verification
  2. Test Execution
    • All 231 tests must pass
    • Performance regression testing
    • Memory usage validation
  3. Platform Testing
    • Windows, macOS, Linux validation
    • Different Rust versions
    • Various filesystem types
  4. Security Validation
    • Dependency security scanning
    • Static analysis checks
    • Fuzz testing for edge cases

Test Development Guidelines

πŸ“ Test Writing Standards

#[test]
fn test_specific_scenario_with_clear_name() {
    // Arrange: set up an isolated test environment
    // (TempDir comes from the `tempfile` crate and is deleted on drop)
    let temp_dir = TempDir::new().unwrap();
    create_test_files(&temp_dir);

    // Act: execute the operation
    let result = refac_operation(&temp_dir, "old", "new");

    // Assert: verify expected outcomes
    assert!(result.is_ok());
    verify_expected_changes(&temp_dir);

    // Cleanup handled automatically by TempDir
}

🎯 Test Coverage Goals

Contributing to Tests

🀝 Test Contribution Guidelines

When adding new features or fixing bugs:

  1. Write Tests First: Test-driven development approach
  2. Cover Edge Cases: Think about what could go wrong
  3. Use Descriptive Names: Test names should explain the scenario
  4. Include Performance Tests: For operations on large datasets
  5. Document Complex Tests: Explain non-obvious test scenarios

πŸ” Test Review Process

All test additions undergo review for:

The comprehensive test suite ensures Workspace remains reliable, safe, and performant for mission-critical operations across all supported platforms and use cases.