Contributing Guide
Eaven Kimura edited this page 2025-11-12 16:12:19 +00:00

Contributing

We welcome contributions! Whether you're fixing bugs, adding features, improving documentation, or reporting issues, your help makes this project better.

Getting Started

  1. Fork the repository on Gitea
  2. Clone your fork:
    git clone https://git.serendipity.systems/YOUR_USERNAME/pterodactyl-discord-bot.git
    cd pterodactyl-discord-bot
    
  3. Set up development environment (see Development section)
  4. Create a feature branch:
    git checkout -b feature/your-feature-name
    

Contribution Types

🐛 Bug Reports

When reporting bugs, please include:

  • Description: Clear description of the issue
  • Steps to Reproduce: Detailed steps to recreate the bug
  • Expected Behavior: What should happen
  • Actual Behavior: What actually happens
  • Environment: Python version, OS, Discord.py version
  • Logs: Relevant log excerpts (sanitize sensitive data)
  • Screenshots: If applicable

Template:

**Bug Description**
Brief description of the issue

**To Reproduce**
1. Run command `/server_status`
2. Select server "Test Server"
3. Click "Start" button
4. Error occurs

**Expected Behavior**
Server should start and embed should update

**Actual Behavior**
Error message appears: "Failed to start server"

**Environment**
- OS: Ubuntu 22.04
- Python: 3.11.5
- Discord.py: 2.3.2
- Bot Version: v1.2.3

**Logs**
2024-10-15 10:30:45 - ERROR - Failed to send power action: Connection timeout

**Additional Context**
Only happens with servers that have multiple allocations

💡 Feature Requests

When requesting features:

  • Use Case: Explain why this feature is needed
  • Proposed Solution: Describe your ideal implementation
  • Alternatives: Consider alternative approaches
  • Additional Context: Screenshots, mockups, or examples

Template:

**Feature Request**
Add support for automatic server backups

**Use Case**
Server administrators want automated daily backups triggered from Discord

**Proposed Solution**
Add `/backup` command that:
1. Creates backup via Pterodactyl API
2. Sends confirmation embed
3. Allows scheduling recurring backups

**Alternatives Considered**
- Use Pterodactyl's built-in backup scheduling
- Third-party backup management bot

**Additional Context**
Many users request this in Discord support channel

🔧 Pull Requests

Before submitting:

  1. Tests pass locally (make test)
  2. Code is linted (make lint)
  3. Code is formatted (make format)
  4. Security scans pass (make security)
  5. Documentation updated (if needed)
  6. Commits follow conventional commits format

PR Template:

## Description
Brief description of changes

## Type of Change
- [ ] Bug fix (non-breaking change fixing an issue)
- [ ] New feature (non-breaking change adding functionality)
- [ ] Breaking change (fix or feature causing existing functionality to change)
- [ ] Documentation update
- [ ] Performance improvement
- [ ] Code refactoring

## Changes Made
- Detailed list of changes
- Each major change on its own line
- Include technical details

## Testing
- [ ] Unit tests added/updated
- [ ] Integration tests added/updated
- [ ] Manual testing performed
- [ ] All tests pass locally

## Checklist
- [ ] Code follows project style guidelines
- [ ] Self-review completed
- [ ] Comments added for complex code
- [ ] Documentation updated
- [ ] No new warnings introduced
- [ ] Tests added for new functionality
- [ ] Dependent changes merged

## Related Issues
Fixes #123
Relates to #456

## Screenshots (if applicable)
[Add screenshots here]

## Additional Notes
[Any additional information]

Development Guidelines

Code Style

Commit Messages:

Follow Conventional Commits:

type(scope): subject

body

footer

Types:

  • feat: New feature
  • fix: Bug fix
  • docs: Documentation changes
  • style: Code style changes (formatting, no logic change)
  • refactor: Code refactoring
  • perf: Performance improvements
  • test: Test additions or changes
  • chore: Build process, dependencies, or tooling changes
  • ci: CI/CD configuration changes
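
The format above can be checked mechanically. Here is a quick sketch (a hypothetical helper, not part of the project's tooling) that validates a commit subject line:

```python
import re

# Hypothetical validator: checks that a commit subject line follows the
# type(scope): subject format described above. Not shipped with the project.
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|chore|ci)"  # type
    r"(\([a-z0-9_-]+\))?"                                  # optional (scope)
    r"!?"                                                  # optional breaking-change marker
    r": .+"                                                # subject text
)

def is_conventional(subject: str) -> bool:
    """Return True if the subject line follows Conventional Commits."""
    return bool(COMMIT_RE.match(subject))

print(is_conventional("feat(api): add support for server backups"))  # True
print(is_conventional("added backups"))                              # False
```

A check like this could run in CI or a commit-msg hook; the project does not currently ship one.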

Examples:

feat(api): add support for server backups

- Add backup_server() method to PterodactylAPI
- Add /backup slash command
- Include backup status in embeds
- Add tests for backup functionality

Closes #123

fix(metrics): correct CPU scaling calculation for 16+ core servers

Previously failed to calculate correct scale for servers with >16 cores.
Now properly rounds up to nearest 100% increment.

Fixes #456

docs(readme): update installation instructions for Docker

- Add Docker Compose example
- Clarify volume mount requirements
- Add troubleshooting section

test(api): add integration tests for power actions

- Test start/stop/restart commands
- Mock Pterodactyl API responses
- Verify error handling

chore(deps): update discord.py to 2.3.2

Security update to address CVE-2024-XXXXX

Testing Requirements

For Bug Fixes:

  1. Write failing test that reproduces bug
  2. Fix the bug
  3. Verify test now passes
  4. Add regression test if needed

For New Features:

  1. Write tests first (TDD approach)
  2. Implement feature
  3. Achieve ≥80% coverage for new code
  4. Add integration test if appropriate

Test Structure:

class TestNewFeature:
    """Test suite for new feature."""
    
    def test_basic_functionality(self):
        """Test basic functionality works."""
        # Arrange
        expected = "result"
        
        # Act
        actual = new_function()
        
        # Assert
        assert actual == expected
    
    def test_edge_case(self):
        """Test edge case handling."""
        with pytest.raises(ValueError):
            new_function(invalid_input)
    
    @pytest.mark.asyncio
    async def test_async_behavior(self):
        """Test asynchronous operations."""
        result = await async_function()
        assert result is not None

Code Review Process

What Reviewers Look For:

  1. Functionality

    • Does it solve the problem?
    • Are edge cases handled?
    • Is error handling appropriate?
  2. Code Quality

    • Follows style guidelines
    • Well-structured and readable
    • Appropriate comments/documentation
    • No unnecessary complexity
  3. Testing

    • Adequate test coverage
    • Tests actually test the functionality
    • Tests are maintainable
  4. Performance

    • No obvious performance issues
    • Efficient algorithms used
    • Resources properly managed
  5. Security

    • No security vulnerabilities introduced
    • Input validation present
    • Secrets not exposed

Addressing Review Comments:

  • Respond to all comments within 1 week
  • Make requested changes promptly
  • Mark conversations as resolved after addressing
  • Request re-review when ready
  • Don't force-push after review (use regular commits)

Approval Requirements:

  • At least 1 approval from maintainer
  • All CI checks passing (tests, lint, security)
  • No unresolved conversations
  • Branch up to date with target branch
  • Conflicts resolved
  • Documentation updated (if applicable)

Review Iteration:

# After receiving review comments
git add .
git commit -m "refactor: address review comments"
git push origin feature-branch

# Maintainer will be notified automatically
# Once approved, maintainer will merge

Merging Strategy

We use the following merge strategies:

  1. Squash and Merge (default for features)

    • Combines all commits into one
    • Clean, linear history
    • Used for: Feature branches with many small commits
  2. Rebase and Merge (for clean commits)

    • Preserves individual commits
    • Linear history without merge commits
    • Used for: Well-crafted commit history worth preserving
  3. Regular Merge (rarely)

    • Creates merge commit
    • Preserves branch history
    • Used for: Long-running feature branches, releases

After Merge:

# Delete your feature branch
git checkout main
git pull origin main
git branch -d feature-branch
git push origin --delete feature-branch  # Delete remote branch

Documentation Standards

When to Update Documentation:

  • Always for new user-facing features
  • Always for changed command behavior
  • Always for new configuration options
  • Usually for significant refactoring
  • ⚠️ Sometimes for internal changes
  • Rarely for minor bug fixes

Documentation Locations:

Type            Location                When to Update
User Guide      README.md               New features, commands, configuration
API Reference   Docstrings              New functions, classes, methods
Architecture    Wiki Architecture Page  Major structural changes
Testing         TESTING.md              New test patterns, tools
Changelog       CHANGELOG.md            Every release
Code Comments   Inline                  Complex logic, non-obvious decisions

Documentation Examples:

Good Docstring:

async def get_server_resources(self, server_id: str) -> dict:
    """
    Get resource usage for a specific server.
    
    Uses client API key as this is a client endpoint. Returns current
    state and resource metrics including CPU, memory, disk, and network.
    
    Args:
        server_id: The Pterodactyl server identifier (e.g., 'abc123')
        
    Returns:
        Dictionary containing server resource usage and current state.
        Example structure:
        {
            'attributes': {
                'current_state': 'running',
                'resources': {
                    'cpu_absolute': 45.5,
                    'memory_bytes': 1073741824,
                    'disk_bytes': 5368709120
                }
            }
        }
        
    Raises:
        aiohttp.ClientError: For network-related issues
        
    Example:
        >>> resources = await api.get_server_resources('abc123')
        >>> print(resources['attributes']['current_state'])
        'running'
    """
    logger.debug(f"Fetching resource usage for server {server_id}")
    try:
        response = await self._request("GET", f"client/servers/{server_id}/resources")
        if response.get('status') == 'error':
            error_msg = response.get('message', 'Unknown error')
            logger.error(f"Failed to get resources for server {server_id}: {error_msg}")
            return {'attributes': {'current_state': 'offline'}}
        
        state = response.get('attributes', {}).get('current_state', 'unknown')
        logger.debug(f"Server {server_id} current state: {state}")
        return response
    except Exception as e:
        logger.error(f"Exception getting resources for server {server_id}: {str(e)}")
        return {'attributes': {'current_state': 'offline'}}

Good Inline Comment:

# Calculate dynamic CPU scale limit in 100% increments
# This handles multi-vCPU servers where usage can exceed 100%
# e.g., 4 vCPU server can use up to 400% CPU
cpu_scale_limit = math.ceil(max_cpu_value / 100) * 100
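
As a quick sanity check of that formula:

```python
import math

def cpu_scale_limit(max_cpu_value: float) -> int:
    """Round peak CPU usage up to the nearest 100% increment."""
    return math.ceil(max_cpu_value / 100) * 100

print(cpu_scale_limit(87.0))   # 100 - single-core usage stays on a 0-100% axis
print(cpu_scale_limit(350.5))  # 400 - a 4 vCPU server scales up to 400%
```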

Good README Section:

### `/server_status` Command

Display an interactive dashboard to select which server to monitor.

**Permissions Required:**
- Use Slash Commands

**User Requirements:**
- Must be in the configured guild (AllowedGuildID)

**Usage:**
1. Type `/server_status` in any channel
2. An ephemeral dropdown menu appears (only visible to you)
3. Select a server from the list
4. The bot posts a status embed in the current channel

**Behavior:**
- If an embed already exists for that server, it will be deleted first
- The new embed will auto-update every 10 seconds
- Button controls are available for users with "Game Server User" role

**Example:**
User: /server_status
Bot: [Ephemeral dropdown with servers]
User: [Selects "Minecraft Production"]
Bot: [Posts status embed in channel]

Dependency Management

Adding New Dependencies:

  1. Evaluate necessity:

    • Can we implement this ourselves?
    • Is it actively maintained?
    • What's the license?
    • What are the transitive dependencies?
  2. Add to requirements:

    # Add to requirements.txt with version pinning
    echo "new-package==1.2.3" >> requirements.txt
    
    # For test dependencies
    echo "test-package==2.3.4" >> requirements-test.txt
    
  3. Document usage:

    • Why this dependency is needed
    • What it replaces or adds
    • Any special configuration
  4. Update Docker:

    • Rebuild Docker image to include new dependency
    • Test in containerized environment
  5. Security check:

    safety check
    pip-audit
    

Updating Dependencies:

# Check for outdated packages
make outdated
pip list --outdated

# Update specific package
pip install --upgrade package-name==X.Y.Z

# Update all (carefully!)
pip install --upgrade -r requirements.txt

# Run security scans
make security

# Run full test suite
make test

# Update requirements file
pip freeze > requirements-frozen.txt

Dependency Version Pinning:

# ❌ Bad - No version specified
requests

# ⚠️ Okay - Minimum version
requests>=2.28.0

# ✅ Good - Pinned version
requests==2.31.0

# ✅ Best - Pinned with hash (in production)
requests==2.31.0 --hash=sha256:abc123...

Performance Considerations

When contributing, consider:

1. API Rate Limits

Problem: Discord and Pterodactyl APIs have rate limits

Solutions:

# ✅ Good - Batch operations
await asyncio.gather(
    update_embed(server1),
    update_embed(server2),
    update_embed(server3)
)

# ✅ Good - Add delays between requests
await asyncio.sleep(0.5)

# ❌ Bad - Sequential with no delays
for server in servers:
    await update_embed(server)  # May hit rate limits
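
The two good patterns can be combined: a semaphore caps concurrency while a short delay spaces requests out. A sketch only, where `update_embed` stands in for the bot's real embed-update coroutine:

```python
import asyncio

async def update_embed(server: str) -> str:
    """Placeholder for the bot's real embed-update coroutine."""
    await asyncio.sleep(0)  # stand-in for the actual API call
    return f"updated {server}"

async def update_all(servers, max_concurrent: int = 3, delay: float = 0.1):
    """Update all servers with bounded concurrency and spacing between calls."""
    sem = asyncio.Semaphore(max_concurrent)

    async def throttled(server: str) -> str:
        async with sem:                  # at most max_concurrent in flight
            result = await update_embed(server)
            await asyncio.sleep(delay)   # small gap to stay under rate limits
            return result

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(throttled(s) for s in servers))

print(asyncio.run(update_all(["mc-prod", "mc-dev", "valheim", "ark"])))
```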

2. Memory Usage

Problem: Bot runs 24/7, memory leaks accumulate

Solutions:

# ✅ Good - Use generators for large datasets
def process_servers():
    for server in get_servers():
        yield process(server)

# ✅ Good - Explicit cleanup
async def cleanup():
    await session.close()
    plt.close('all')  # Close matplotlib figures

# ❌ Bad - Loading everything into memory
all_data = [expensive_operation(x) for x in huge_list]

3. Blocking Operations

Problem: Blocking operations freeze the entire bot

Solutions:

# ✅ Good - Use async/await
async def fetch_data():
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.json()

# ✅ Good - Run blocking code in executor
import asyncio
loop = asyncio.get_running_loop()
result = await loop.run_in_executor(None, blocking_function)

# ❌ Bad - Synchronous blocking call
import requests
response = requests.get(url)  # Blocks entire event loop!

4. Database Queries (if implemented)

Solutions:

# ✅ Good - Use connection pooling
# ✅ Good - Index frequently queried columns
# ✅ Good - Batch inserts/updates
# ❌ Bad - N+1 query problem
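
These points can be illustrated with the stdlib sqlite3 module (a sketch only; the bot does not currently ship a database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE servers (id INTEGER PRIMARY KEY, name TEXT)")

# ✅ Good - one batched INSERT instead of a loop of single-row INSERTs
rows = [(1, "mc-prod"), (2, "mc-dev"), (3, "valheim")]
conn.executemany("INSERT INTO servers VALUES (?, ?)", rows)

# ✅ Good - one query for all rows instead of one query per id (the N+1 problem)
names = [name for (name,) in conn.execute("SELECT name FROM servers ORDER BY id")]
print(names)  # ['mc-prod', 'mc-dev', 'valheim']
conn.close()
```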

Security Considerations

Security Checklist for Contributors:

Authentication & Authorization

  • All Discord interactions check guild ID
  • Role requirements enforced for sensitive actions
  • API keys never logged or displayed
  • User input validated before use

Input Validation

# ✅ Good - Validate user input
def validate_server_id(server_id: str) -> bool:
    if not server_id or not isinstance(server_id, str):
        return False
    if len(server_id) > 50:  # Reasonable limit
        return False
    if not server_id.isalnum():  # Only alphanumeric
        return False
    return True

# ❌ Bad - Direct use of user input
server_id = interaction.data['server_id']
await api.get_server(server_id)  # No validation!

Secrets Management

# ✅ Good - From config file
token = config.get('Discord', 'Token')

# ✅ Good - From environment
token = os.getenv('DISCORD_TOKEN')

# ❌ Bad - Hardcoded
token = "MTIzNDU2Nzg5.ABCDEF.ghijklmnop"

Error Handling

# ✅ Good - Generic error messages
try:
    result = await api.dangerous_operation()
except Exception as e:
    logger.error(f"Operation failed: {str(e)}")
    await interaction.response.send_message(
        "An error occurred. Please try again.",
        ephemeral=True
    )

# ❌ Bad - Leaking sensitive info
except Exception as e:
    await interaction.response.send_message(
        f"Error: {str(e)}\nAPI Key: {api.key}\nServer: {server_details}",
        ephemeral=True
    )

Logging Security

# ✅ Good - Sanitize sensitive data
logger.info(f"User {user.id} executed command")
logger.debug(f"API request to {url}")

# ❌ Bad - Logging secrets
logger.debug(f"Using API key: {api_key}")
logger.info(f"Server details: {server_data}")  # May contain tokens

Troubleshooting Contributions

Common Issues and Solutions:

Import Errors

# Problem: ModuleNotFoundError
# Solution: Install dependencies
pip install -r requirements.txt -r requirements-test.txt

# Problem: Wrong module path
# Solution: Add to PYTHONPATH
export PYTHONPATH="${PYTHONPATH}:$(pwd)"

Test Failures

# Problem: Tests fail locally
# Solution 1: Check Python version
python --version  # Must be 3.9+

# Solution 2: Update dependencies
pip install --upgrade -r requirements-test.txt

# Solution 3: Clean cache
make clean-all
pytest --cache-clear

# Solution 4: Run specific test with verbose output
pytest test_pterodisbot.py::TestClass::test_method -vv --tb=long

Linting Failures

# Problem: Flake8 errors
# Solution: Auto-fix with black and isort
make format

# Problem: Pylint errors
# Solution: Address specific issues or disable specific checks
# Add to top of file: # pylint: disable=specific-check

# Problem: Formatting issues
# Solution: Auto-format
black --line-length=120 file.py
isort --profile black file.py

Git Issues

# Problem: Merge conflicts
# Solution: Rebase and resolve
git fetch origin
git rebase origin/main
# Fix conflicts in files
git add .
git rebase --continue

# Problem: Accidentally committed to wrong branch
# Solution: Cherry-pick to correct branch
git checkout correct-branch
git cherry-pick commit-hash

# Problem: Want to undo last commit
# Solution: Use reset (if not pushed)
git reset --soft HEAD~1  # Keep changes
git reset --hard HEAD~1  # Discard changes

Docker Issues

# Problem: Docker build fails
# Solution: Clear cache and rebuild
docker build --no-cache -t pterodisbot:latest .

# Problem: Container won't start
# Solution: Check logs
docker logs pterodisbot

# Problem: Config not loading
# Solution: Check volume mount
docker run -v $(pwd)/config.ini:/app/config.ini:ro pterodisbot

Testing Your Contribution

Before submitting PR, ensure:

1. Unit Tests Pass

# Run all unit tests
make test-unit

# Run your specific test
pytest test_pterodisbot.py::TestYourFeature -v

# Check coverage
make test-coverage
# Ensure your new code has ≥80% coverage

2. Integration Tests Pass

# Run integration tests
make test-integration

# If you added integration test
pytest test_pterodisbot.py::TestIntegration::test_your_workflow -v

3. Code Quality Passes

# Run all linters
make lint

# Auto-fix formatting
make format

# Individual checks
flake8 your_file.py
pylint your_file.py
black --check your_file.py

4. Security Scans Pass

# Run all security checks
make security

# Individual scans
bandit -r your_file.py
safety check
pip-audit

5. Manual Testing

# Create test config if needed
cp config.ini.example config.ini
# Edit with test credentials

# Run bot locally
python pterodisbot.py

# Test your changes in Discord:
# - Use test commands
# - Click test buttons
# - Verify expected behavior
# - Test error cases
# - Check logs for errors

6. Documentation Updated

# Check if documentation needs updates
# - README.md for user-facing changes
# - Docstrings for code changes
# - TESTING.md for test changes
# - Comments for complex logic

Benchmarking and Profiling

When optimizing performance:

CPU Profiling

# Profile the bot
python -m cProfile -o profile.stats pterodisbot.py

# Analyze results
python -m pstats profile.stats
# At the pstats prompt:
sort cumtime
stats 20  # Show top 20 functions

# Visualize with snakeviz
pip install snakeviz
snakeviz profile.stats

Memory Profiling

# Install memory profiler
pip install memory_profiler

# Decorate functions to profile
from memory_profiler import profile

@profile
def my_function():
    ...  # Function code to profile

# Run with profiler
python -m memory_profiler pterodisbot.py

Async Profiling

# Install yappi
pip install yappi

# Add to code:
import yappi
yappi.start()
# Code to profile
yappi.stop()
yappi.get_func_stats().print_all()

Performance Targets

Metric                   Target                            Measurement
Bot startup time         <5 seconds                        Time to "Bot ready" log
Embed update cycle       <30 seconds                       Time to update all embeds
Command response time    <1 second                         Time to ephemeral response
CPU usage (average)      <1% of single modern CPU thread   Docker stats
Memory usage (average)   <200 MB                           Docker stats
API calls per minute     <100                              Log analysis
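
To check a change against these targets locally, `time.perf_counter` gives a quick wall-clock measurement. A sketch; `fake_update_cycle` is a stand-in for real bot work:

```python
import time

def timed(fn, *args):
    """Return (result, elapsed_seconds) for a single call."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

def fake_update_cycle():
    # Stand-in for one full embed update cycle
    return "done"

result, elapsed = timed(fake_update_cycle)
print(result, elapsed < 30)  # a real cycle should finish under the 30 s target
```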

Advanced Contribution Topics

Adding New Bot Commands

Complete sample workflow:

  1. Design the command:

    # Command specification
    Name: /backup
    Description: Create a backup of the server
    Parameters:
      - server_id: str (required)
      - description: str (optional)
    Permissions: "Game Server User" role
    Response: Ephemeral confirmation
    
  2. Implement the command:

    @bot.tree.command(name="backup", description="Create a backup of the server")
    async def backup_server(
        interaction: discord.Interaction,
        server_id: str,
        description: str = "Manual backup"
    ):
        """
        Create a backup of a Pterodactyl server.
    
        Args:
            interaction: Discord interaction object
            server_id: The Pterodactyl server identifier
            description: Optional backup description
        """
        # Implementation
    
  3. Add API method:

    # In PterodactylAPI class
    async def create_backup(self, server_id: str, description: str) -> dict:
        """
        Create a backup for a server.
    
        Args:
            server_id: The server identifier
            description: Backup description
    
        Returns:
            API response dictionary
        """
        logger.info(f"Creating backup for server {server_id}")
        return await self._request(
            "POST",
            f"client/servers/{server_id}/backups",
            {"description": description}
        )
    
  4. Write tests:

    @pytest.mark.asyncio
    async def test_backup_command_success(mock_discord_interaction, mock_pterodactyl_api):
        """Test successful backup creation."""
        mock_pterodactyl_api.create_backup = AsyncMock(
            return_value={'status': 'success'}
        )
    
        await backup_server(
            mock_discord_interaction,
            "abc123",
            "Test backup"
        )
    
        mock_pterodactyl_api.create_backup.assert_called_once_with(
            "abc123",
            "Test backup"
        )
        mock_discord_interaction.followup.send.assert_called_once()
    
  5. Update documentation:

    ### `/backup` Command
    
    Create a backup of a Pterodactyl server.
    
    **Parameters:**
    - `server_id`: The server identifier (required)
    - `description`: Backup description (optional)
    
    **Permissions:** "Game Server User" role required
    
  6. Test manually and submit PR

Adding New Metrics

Complete sample workflow for adding disk I/O metrics:

  1. Update ServerMetricsGraphs:

    class ServerMetricsGraphs:
        def __init__(self, server_id: str, server_name: str):
            self.server_id = server_id
            self.server_name = server_name
            # Add new data structure
            self.data_points = deque(maxlen=6)  # (timestamp, cpu, memory, disk_io)
    
        def add_data_point(
            self,
            cpu_percent: float,
            memory_mb: float,
            disk_io_mb: float,  # New parameter
            timestamp: Optional[datetime] = None
        ):
            """Add data point with disk I/O."""
            if timestamp is None:
                timestamp = datetime.now()
            self.data_points.append((timestamp, cpu_percent, memory_mb, disk_io_mb))
    
        def generate_disk_io_graph(self) -> Optional[io.BytesIO]:
            """Generate disk I/O graph."""
            # Implementation similar to CPU/memory graphs
            pass
    
  2. Update data collection:

    # In update_status method
    disk_io = round(
        resource_attributes.get('resources', {}).get('disk_io_bytes', 0) / (1024 ** 2),
        2
    )
    self.metrics_manager.add_server_data(
        server_id,
        server_name,
        cpu_usage,
        memory_usage,
        disk_io  # New parameter
    )
    
  3. Write tests:

    def test_disk_io_tracking(self):
        """Test disk I/O data tracking."""
        graphs = ServerMetricsGraphs('abc123', 'Test Server')
        graphs.add_data_point(50.0, 1024.0, 15.5)  # cpu, memory, disk_io
    
        assert len(graphs.data_points) == 1
        assert graphs.data_points[0][3] == 15.5  # disk_io value
    
  4. Update documentation and submit PR

Extending the API Client

Sample adding support for file management:

  1. Add new methods to PterodactylAPI:

    async def list_files(self, server_id: str, directory: str = "/") -> dict:
        """List files in server directory."""
        return await self._request(
            "GET",
            f"client/servers/{server_id}/files/list",
            {"directory": directory}
        )
    
    async def get_file_contents(self, server_id: str, file_path: str) -> dict:
        """Get contents of a file."""
        return await self._request(
            "GET",
            f"client/servers/{server_id}/files/contents",
            {"file": file_path}
        )
    
  2. Add tests with mocked responses:

    @pytest.mark.asyncio
    async def test_list_files(mock_pterodactyl_api):
        """Test file listing."""
        mock_pterodactyl_api._request = AsyncMock(return_value={
            'data': [
                {'name': 'config.yml', 'size': 1024},
                {'name': 'server.jar', 'size': 50000000}
            ]
        })
    
        files = await mock_pterodactyl_api.list_files('abc123', '/')
        assert len(files['data']) == 2
    
  3. Document new API methods and submit PR

Release Management

For maintainers preparing releases:

Pre-Release Checklist

  • All tests passing on main branch
  • No open critical bugs
  • Documentation up to date
  • CHANGELOG.md updated with all changes
  • Version bumped in code
  • Release notes drafted

Release Process

  1. Update version:

    # In pterodisbot.py
    __version__ = "1.4.0"
    
  2. Update CHANGELOG.md:

    ## [1.4.0] - 2024-10-20
    
    ### Added
    - Server backup functionality via `/backup` command
    - Disk I/O metrics in status embeds
    - File management API methods
    
    ### Changed
    - Improved error messages for API failures
    - Enhanced logging for debugging
    
    ### Fixed
    - CPU scaling calculation for 32+ core servers
    - Memory leak in graph generation
    - Race condition in embed updates
    
    ### Security
    - Updated discord.py to 2.3.3 (CVE-2024-XXXXX)
    - Added input validation for all user commands
    
  3. Create release commit:

    git add .
    git commit -m "chore(release): bump version to v1.4.0"
    git push origin main
    
  4. Create and push tag:

    git tag -a v1.4.0 -m "Release v1.4.0
    
    Added:
    - Server backup functionality
    - Disk I/O metrics
    - File management API
    
    Changed:
    - Improved error messages
    - Enhanced logging
    
    Fixed:
    - CPU scaling for 32+ cores
    - Memory leak in graphs
    - Embed update race condition
    
    Security:
    - Updated discord.py (CVE fix)
    - Added input validation"
    
    git push origin v1.4.0
    
  5. CI/CD automatically:

    • Runs full test suite
    • Builds Docker images for amd64 and arm64
    • Tags images: v1.4.0, v1.4, v1, latest
    • Pushes to registry
  6. Create release on Gitea:

    • Go to repository → Releases → New Release
    • Select tag v1.4.0
    • Title: "Pterodisbot v1.4.0 Stable"
    • Description: Copy from CHANGELOG.md
    • Attach any binary artifacts if applicable
    • Publish release
  7. Announce release:

    • Update Discord server announcement
    • Post in relevant channels
    • Update external documentation if needed

Hotfix Process

For critical bugs in production:

  1. Create hotfix branch from latest release tag:

    git checkout v1.4.0
    git checkout -b hotfix/critical-bug
    
  2. Fix the bug with tests:

    # Fix bug
    # Add test
    # Verify fix
    make test
    
  3. Bump patch version:

    __version__ = "1.4.1"
    
  4. Update CHANGELOG.md:

    ## [1.4.1] - 2024-10-21
    
    ### Fixed
    - Critical bug causing bot crashes on server restart
    
  5. Commit and tag:

    git commit -am "fix: critical bug causing crashes on restart"
    git tag -a v1.4.1 -m "Hotfix v1.4.1"
    
  6. Merge to main and push:

    git checkout main
    git merge hotfix/critical-bug
    git push origin main --tags
    
  7. CI/CD handles deployment automatically

Version Numbering

We follow Semantic Versioning:

MAJOR.MINOR.PATCH

MAJOR: Breaking changes
MINOR: New features (backwards compatible)
PATCH: Bug fixes (backwards compatible)

Examples:

  • v1.2.3 → v1.2.4: Bug fix
  • v1.2.3 → v1.3.0: New feature
  • v1.2.3 → v2.0.0: Breaking change
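
Semantic versions compare naturally as integer tuples, which is why the bump rules above are unambiguous. A quick sketch (not project code):

```python
def parse_version(tag: str) -> tuple[int, int, int]:
    """Parse 'v1.2.3' or '1.2.3' into a comparable (major, minor, patch) tuple."""
    major, minor, patch = tag.lstrip("v").split(".")
    return int(major), int(minor), int(patch)

print(parse_version("v1.2.4") > parse_version("v1.2.3"))  # True - patch bump
print(parse_version("v2.0.0") > parse_version("v1.9.9"))  # True - major bump
```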


Community Guidelines

Code of Conduct

We are committed to providing a welcoming and inclusive environment. All contributors must:

  • Be respectful and inclusive
  • Accept constructive criticism gracefully
  • Focus on what's best for the community
  • Show empathy toward others

Contributors must not:

  • Use inappropriate language or imagery
  • Make personal attacks
  • Publish others' private information

Communication Channels

  • Issues: Bug reports and feature requests
  • Pull Requests: Code contributions and discussions
  • Discussions: General questions and ideas
  • Discord (if applicable): Real-time community chat

Response Times

Maintainers aim to:

  • Acknowledge issues within 48 hours
  • Review PRs within 5 business days
  • Respond to security issues within 24 hours

Contributors should:

  • Respond to review comments within 1 week
  • Update PRs to address feedback
  • Close PRs if no longer pursuing

Maintainer Responsibilities

Maintainers will:

  • Review PRs and provide feedback
  • Merge approved PRs
  • Create releases
  • Manage issues and discussions
  • Enforce community guidelines
  • Keep documentation updated
  • Ensure CI/CD pipeline works

Maintainers commit to:

  • Respectful and constructive feedback
  • Timely responses
  • Clear communication
  • Fair and unbiased reviews
  • Supporting contributors

Recognition

Contributors are credited in:

  • CONTRIBUTORS.md file
  • Release notes
  • Git commit history

Significant contributors may:

  • Receive collaborator status
  • Be invited to maintain specific areas
  • Help guide project direction

First-Time Contributors

Looking for your first contribution?

Issues labeled with good-first-issue are great starting points:

  • Well-defined scope
  • Clear acceptance criteria
  • Mentorship available

Example first contributions:

  • Fix typos in documentation
  • Improve error messages
  • Add test coverage
  • Enhance logging
  • Update dependencies

Advanced Contributions

For experienced contributors:

Issues labeled with help-wanted need expertise:

  • Performance optimization
  • Architecture improvements
  • Complex feature implementation
  • Security enhancements

License

By contributing, you agree that your contributions will be licensed under the GPL-3.0 License.

All contributions must:

  • Be your original work
  • Not infringe on others' intellectual property
  • Be compatible with GPL-3.0

Getting Help

Questions about contributing?

  1. Check existing documentation:

    • README.md, TESTING.md, and the wiki pages
  2. Search existing issues and PRs:

    • Someone may have already asked
    • Previous discussions contain valuable context
  3. Ask in Discussions:

    • General questions
    • Ideas for contributions
    • Clarification on project direction
  4. Open an issue:

    • Specific technical questions
    • Clarification on code behavior
    • Help with development setup

Quick Reference

Development Commands

# Setup
make install              # Install all dependencies
make setup-test           # Create test configuration

# Testing
make test                 # Run all tests with coverage
make test-quick           # Fast test without coverage
make test-unit            # Only unit tests
make test-integration     # Only integration tests
make watch                # Watch mode (continuous testing)

# Code Quality
make lint                 # Run all linters
make format               # Auto-format code
make format-check         # Check formatting without changes
make security             # Run security scans

# CI/CD
make ci                   # Run full CI pipeline locally

# Cleanup
make clean                # Remove test artifacts
make clean-all            # Deep clean (includes caches)

# Docker
make docker-build         # Build Docker image
make docker-run           # Run bot in container

# Utilities
make outdated             # Check for outdated packages
make update               # Update all dependencies
make freeze               # Generate requirements-frozen.txt

Git Workflow

# Start new feature
git checkout -b feature/feature-name

# Make changes and commit
git add .
git commit -m "feat(scope): description"

# Keep branch updated
git fetch origin
git rebase origin/main

# Push and create PR
git push origin feature/feature-name

# After PR approval and merge
git checkout main
git pull origin main
git branch -d feature/feature-name

Common Tasks

Add new slash command:

  1. Add command in pterodisbot.py with @bot.tree.command()
  2. Add guild check with check_allowed_guild()
  3. Add tests in test_pterodisbot.py
  4. Update README.md with command documentation
  5. Test locally, then submit PR

Add new API endpoint:

  1. Add method to PterodactylAPI class
  2. Add error handling and logging
  3. Add unit tests with mocked responses
  4. Update docstrings
  5. Test with real API, then submit PR

Fix a bug:

  1. Create branch: git checkout -b fix/bug-description
  2. Write failing test
  3. Fix the bug
  4. Verify test passes
  5. Run full test suite: make test
  6. Commit: git commit -m "fix(scope): description"
  7. Push and create PR